Prosecution Insights
Last updated: April 17, 2026
Application No. 18/435,855

SYSTEM AND METHOD FOR DATA HARVESTING FROM ROBOTIC OPERATIONS FOR CONTINUOUS LEARNING OF AUTONOMOUS ROBOTIC MODELS

Non-Final OA §103
Filed: Feb 07, 2024
Examiner: GILLIARD, DELOMIA L
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
To Grant: 2y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (976 granted / 1089 resolved; +27.6% vs TC avg, above average)
Interview Lift: +10.2% on resolved cases with interview (moderate lift)
Avg Prosecution (typical timeline): 2y 2m, with 12 applications currently pending
Total Applications (career history): 1101 across all art units

Statute-Specific Performance

Statute   Examiner rate   vs TC avg
§101      10.0%           -30.0%
§103      48.8%           +8.8%
§102      15.5%           -24.5%
§112      11.3%           -28.7%

Tech Center averages are estimates. Based on career data from 1089 resolved cases.
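Each delta is simply the examiner's rate minus the estimated Tech Center average, so the implied TC baselines can be read straight off the table; a quick check for the §103 row (values from the table above):

```python
examiner_103, delta_103 = 0.488, 0.088   # §103 row from the table above
tc_avg_103 = examiner_103 - delta_103    # implied Tech Center average
print(f"{tc_avg_103:.1%}")               # 40.0%
```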

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2022/0126864 A1 to Moustafa et al. (hereinafter "Moustafa") in view of US 2024/0051568 A1 to Capellier.

Claim 1. A method comprising:
- detecting a trigger event during operation of an autonomous ground vehicle traveling between two physical locations: Moustafa FIG. 12, event detection 1205; Moustafa [0155], FIG. 148 illustrates an example route that a vehicle may take to get from point A to point B (the examiner interprets "point A to point B" to be two physical locations); Moustafa [0199], a planning and decision stage 510 may additionally include making decisions relating to the path plan in reaction to the detection of obstacles and other events, to decide whether and what action to take to safely navigate the determined path in light of these events (the examiner interprets the determined path to be traveling between two physical locations);
- wherein the autonomous ground vehicle comprises primary sensors, secondary sensors, a location module, a navigational control system, a communication module, and movement systems: Moustafa FIG. 2, vehicle 105 and autonomous driving 210, perception engine 238, comprising primary sensors (2D camera 272, 3D camera 274, LIDAR 270), secondary sensors (thermal 280, ultrasound 272, bio sensor 284), a location module (localization engine 240), a navigational control system (GPS 268), a communication module (comm modules 212), and movement systems (IMU 278);
- generating event sequence data from primary sensor data, secondary sensor data, spatiotemporal data, and telemetry data through operation of a reporter: Moustafa [0233], FIGS. 12 and 13, a machine-learning based event or scenario detection engine (e.g., 1040); FIG. 34, sensors (data) 3402, event sequence 3408;
- communicating the event sequence data to cloud storage: Moustafa [0185], [0210], and [0227], communicating events, tasks, and scenarios to cloud-based services; Moustafa [0254], when the autonomous driving engine (e.g., 515) determines a pull-over event or the remote valet support logic (e.g., 1805) determines that a handoff request should be sent, a signal may be sent to the TCU (1810) to send vehicle location and pull-over location to various cloud-based entities;
- and communicating raw data to a streaming database: Moustafa [0383], in vehicle 3850, CCU 3840 may receive near-continuous data feeds from sensors 3855A-3855E (the examiner interprets near-continuous data feeds to be raw data and the CCU to be a streaming database, i.e., a continuous flow of data processed as it is generated); Moustafa [0383], the CCU can receive instructions from an autonomous ECU or driver, in addition to feedback from one or more of the steering, throttle, and brake sensors and/or actuators, sending commands to the appropriate ECUs; vehicle behavior learning to produce a vehicle behavior model often uses raw data generated as discussed above; Moustafa [0442], the determined context(s) is often expressed as metadata associated with the raw data; Moustafa [0443], the determined context is stored in metadata/context dataset 5110 with the associated timestamp, which can be used to map the context back to the raw data stream (e.g., the image data and/or the non-image sensor dataset); the examiner understands a streaming database to utilize metadata for query (specification [0046]);
- transforming the raw data into normalized data stored in a relational database through operation of a normalizer: Moustafa [0843], the raw sensor depth data is transformed into a normalized range; Moustafa [0443] (the examiner interprets "mapping the context back to the raw data stream" to be relational); Moustafa [0436], [0445], [0447], and [0452] teach a database in which images and related metadata/context are stored;
- operating a machine learning model within an active learning pipeline to generate a model update from aggregate training data generated from the training data by an aggregator: Moustafa FIG. 12 illustrates a representation of an example event detection machine learning model; Moustafa [0200], the in-vehicle processing system implementing an autonomous driving stack allows driving decisions to be made and controlled without the direct input of the passengers, with the vehicle's system instead relying on models, including machine learning models, which may take as inputs data collected automatically by sensors on the vehicle, data from other vehicles or nearby infrastructure (e.g., roadside sensors and cameras), and data such as map data; the models relied upon may also be developed through training on data sets (the examiner interprets [0200] to be operating machine learning models); Moustafa [0202], the autonomous driving pipeline (605 sensing, 610 planning, 615 acting), understood to be the active learning pipeline; Moustafa [0225], the connected device will only collect and transport the sensor data, which may be updated (e.g., dynamically) as the model continues to evolve and train; Moustafa [0382], feedback 3825 can be sent to cloud vehicle data system 3820 for aggregation and re-computation to update regression models in multiple vehicles to optimize behavior; in at least some examples, one or more edge devices 3830 may perform aggregation and possibly some training/update operations, with feedback 3835 received from regression models (e.g., 3844) to enable these aggregation, training, and/or update operations;
- and reconfiguring the navigational control system with the model update communicated from the active learning pipeline to the autonomous ground vehicle: Moustafa [0200]; [0225]; [0226], FIG. 10, element 1020, autonomously steer, accelerate, and brake the vehicle 105; [0382].

While Moustafa teaches operating a curation system ([0175], a data collection module 234, which the examiner understands to be a curation system, where data is collected from sources to be used as inputs for machine learning; [0216], data provided to an in-vehicle computing system of a vehicle (e.g., 805) may be timed, filtered, and curated based on the specific characteristics and capabilities of that vehicle), Moustafa fails to explicitly teach identifying true trigger events. Capellier, in the same field of an autonomous vehicle sensing and navigating an environment ([0024], a route with two physical locations with the use of various sensors and machine learning), teaches operating a curation system to identify true trigger events from the normalized data and extract training data by way of a discriminator: Capellier [0087] teaches triggered ODD scenarios (events); Capellier [0019], a generative adversarial network (GAN), which includes a generator network and a discriminator network, may be trained by training the generator network to generate one or more synthesized scenarios and the discriminator network to distinguish at least one true scenario originating from the perception system of an autonomous vehicle ([0094]). Both references address the same problem: Moustafa [0003], operating in an autonomous mode in which the vehicle navigates through an environment with little or no input from a driver, using one or more sensors configured to sense information about the environment; Capellier [0001], an autonomous vehicle capable of sensing and navigating through its surrounding environment with minimal to no human input, which may rely on a motion planning process to generate, update, and execute one or more trajectories through its immediate surroundings. Thus, before the effective filing date of the present application, it would have been obvious to one of ordinary skill in the art to combine the teachings of Moustafa with the teachings of Capellier ([0020]) for operating the vehicle in accordance with other desirable characteristics such as path length, ride quality or comfort, required travel time, observance of traffic rules, and adherence to driving practices.
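To make the dataflow recited in claim 1 concrete, here is a minimal sketch of one pass through the claimed loop: trigger detection, a reporter bundling event sequence data, raw data landing in a streaming store, a normalizer feeding a relational store, and an active-learning update pushed back to the navigational control system. Every name, threshold, and data structure below is a hypothetical illustration, not code from Moustafa, Capellier, or the application.

```python
"""Illustrative sketch only: all names and values are hypothetical."""
import time
from dataclasses import dataclass

@dataclass
class EventSequence:
    """Sensor, location, and telemetry data bundled around one trigger event."""
    timestamp: float
    primary: list          # e.g., camera / LIDAR frames
    secondary: list        # e.g., thermal / ultrasound readings
    spatiotemporal: tuple  # e.g., (lat, lon) from the location module
    telemetry: dict        # e.g., IMU acceleration, speed, heading

def detect_trigger(telemetry: dict, decel_threshold: float = -6.0) -> bool:
    """Hypothetical trigger event: hard braking reported by the IMU."""
    return telemetry.get("accel_mps2", 0.0) < decel_threshold

def report(primary, secondary, location, telemetry) -> EventSequence:
    """Reporter: generate event sequence data from the four claimed inputs."""
    return EventSequence(time.time(), primary, secondary, location, telemetry)

def normalize(raw: list, lo: float = 0.0, hi: float = 255.0) -> list:
    """Normalizer: min-max scale raw readings into a normalized range."""
    return [(x - lo) / (hi - lo) for x in raw]

def model_update(weights: list, training_rows: list, lr: float = 0.1) -> list:
    """Active-learning step: aggregate training rows, nudge weights toward them."""
    aggregate = [sum(col) / len(training_rows) for col in zip(*training_rows)]
    return [w + lr * (a - w) for w, a in zip(weights, aggregate)]

# One pass of the harvest-and-learn loop.
streaming_db, relational_db, cloud_storage = [], [], []   # stand-in stores
telemetry = {"accel_mps2": -7.2, "speed_mps": 11.0}
frame = [12, 48, 200, 255]                 # fake primary sensor readings

streaming_db.append(frame)                 # raw data -> streaming database
if detect_trigger(telemetry):
    cloud_storage.append(report([frame], [], (37.77, -122.42), telemetry))
    relational_db.append(normalize(frame))         # normalized -> relational DB
    new_weights = model_update([0.5] * 4, relational_db)
    print("reconfigure navigational control with:", new_weights)
```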
Claim 2. Moustafa further teaches:
- configuring an event handler with event triggers: Moustafa [0430], data handler 4930 may perform one or more actions with respect to the instance of data;
- operating the navigational control system comprising an image recognition model (Moustafa [0439], a GAN is a type of generative model that uses machine learning, more specifically deep learning, to generate images (e.g., still images or video clips); [0443], for model development, the image data and non-image sensor data is often collected in the cloud, and data scientists and machine learning experts are given access to generate models that can be used in different parts of the autonomous vehicle), a controller (Moustafa FIG. 2, drive controls 220; [0181], the vehicle's autonomous driving stack controls a control unit of the vehicle in order to change a driving maneuver; [0183]), and the event handler to receive: the primary sensor data from the primary sensors (Moustafa FIG. 2: 2D camera 272, 3D camera 274, LIDAR 270); the spatiotemporal data from the location module (FIG. 2, localization engine 240); the secondary sensor data from the secondary sensors (FIG. 2: thermal 280, ultrasound 272, bio sensor 284); and the telemetry data from the movement systems (FIG. 2, IMU 278); see Moustafa [0430], data handler 4930 receives data from the sensors of the autonomous vehicle;
- controlling the movement systems through operation of the controller and the image recognition model to transport the autonomous ground vehicle between two physical locations: Moustafa FIG. 2, drive controls 220; [0181] (also Capellier [0019]); Moustafa [0226], an autonomous driving stack 1015 using various artificial intelligence logic and machine learning models may receive or retrieve the sensor data to generate outputs to the actuation and control block 1020 to autonomously steer, accelerate, and brake the vehicle 105 (the examiner understands the machine learning models to be the image recognition model (see [0439]) and "autonomously steer, accelerate, and brake" to be transporting the autonomous ground vehicle);
- communicating the raw data comprising the primary sensor data, the secondary sensor data, the spatiotemporal data, and the telemetry data to the streaming database by way of the communication module: Moustafa [0173], communication modules 212; [0383], in vehicle 3850, CCU 3840 may receive near-continuous data feeds from sensors 3855A-3855E;
- and operating the event handler to monitor the primary sensor data, the secondary sensor data, the spatiotemporal data, and the telemetry data for the event triggers: Moustafa [0430], data handler 4930 receives data from the sensors of the autonomous vehicle.

Claim 3. Moustafa further teaches configuring the normalizer with the event sequence data to transform the raw data into the normalized data: Moustafa [0843], the raw sensor depth data is transformed into a normalized range.

Claim 4. Moustafa and Capellier further teach:
- communicating the raw data to the curation system from the streaming database: Moustafa [0529], the raw sensor data may be supplied to the training algorithm 6802; in addition, or as an alternative, classifications based on the raw sensor data may be supplied to the ML algorithm 6802 to train the driver state model 6808;
- operating the discriminator to identify at least one trigger event in the raw data: Capellier [0095], FIG. 6, the trained discriminator network may be applied to detect when a vehicle encounters an out-of-operational-design-domain (ODD) scenario (block 604);
- and triggering the reporter to generate the event sequence data: Moustafa [1141], to cause the machine to collect the sensor data from the one or more extraneous sensors at a computing system extraneous to the particular vehicle (the examiner understands "cause the machine to collect" to be triggering the reporter to generate the event sequence data, and the sensor data from the one or more extraneous sensors to be the event sequence data).

Claim 5. Moustafa further teaches wherein the training data comprises image data collected by the primary sensors during operation of the autonomous ground vehicle during the trigger event: Moustafa [0441], system 5100 accesses real data sources 5102 and stores them in image dataset 5104 and non-image sensor dataset 5106; the real data sources 5102 may represent data collected from live vehicles or simulated driving environments.

Claim 6. Capellier further teaches wherein the discriminator is configured by way of a user interface to identify the true trigger events from false positives: Capellier [0019], the discriminator network distinguishes at least one true scenario originating from the perception system of an autonomous vehicle.

Claim 7. Moustafa further teaches wherein the machine learning model and the image recognition model are semantic segmentation models: Moustafa [0807], although the above examples have been described with respect to object detection, the concepts may be applied to other autonomous driving operations, such as semantic segmentation and object tracking; [0867], one or more of these models comprises a recurrent neural network (RNN) (e.g., in a segmentation model learning how to categorize pixels in a scene by predicting the sequence of polygon coordinates that bound objects); [0868], in a segmentation model, a soft target may indicate, for each pixel, softmax probabilities of that pixel with respect to different semantic categories.
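Before the dependent-claim roll-up below, a note on the discriminator that claims 4 and 6 turn on: Capellier's discriminator separates true trigger events from false positives before training data is extracted. Here is a minimal sketch of that curation step, assuming PyTorch, a fixed-length feature vector per candidate event, and a 0.5 decision threshold; the feature encoding, sizes, and threshold are all assumptions of this sketch, not details from Capellier.

```python
import torch
from torch import nn

FEATURES = 16  # hypothetical: each candidate event encoded as a 16-dim vector

# Small MLP discriminator: outputs P(candidate is a true trigger event).
discriminator = nn.Sequential(
    nn.Linear(FEATURES, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),
)
opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def train_step(true_events: torch.Tensor, false_positives: torch.Tensor) -> float:
    """One update: push true events toward 1 and false positives toward 0."""
    x = torch.cat([true_events, false_positives])
    y = torch.cat([torch.ones(len(true_events), 1),
                   torch.zeros(len(false_positives), 1)])
    opt.zero_grad()
    loss = loss_fn(discriminator(x), y)
    loss.backward()
    opt.step()
    return loss.item()

def is_true_trigger(event: torch.Tensor, threshold: float = 0.5) -> bool:
    """Curation check: only events passing this gate yield training data."""
    with torch.no_grad():
        return discriminator(event.unsqueeze(0)).item() > threshold

# Toy separable clusters standing in for labeled scenario features.
for _ in range(200):
    train_step(torch.randn(8, FEATURES) + 1.0,   # "true" scenarios
               torch.randn(8, FEATURES) - 1.0)   # false positives
print(is_true_trigger(torch.randn(FEATURES) + 1.0))  # likely True once trained
```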
Claims 8 and 14: reviewed and analyzed in the same way as claim 1; see the analysis and rationale above.
Claims 9 and 15: same as claim 2.
Claims 10 and 16: same as claim 3.
Claim 17: same as claim 4.
Claims 11 and 18: same as claim 5.
Claims 12 and 19: same as claim 6.
Claims 13 and 20: same as claim 7.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DELOMIA L GILLIARD, whose telephone number is (571) 272-1681. The examiner can normally be reached 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DELOMIA L GILLIARD/
Primary Examiner, Art Unit 2661

Prosecution Timeline

Feb 07, 2024: Application Filed
Dec 24, 2025: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602805: DATA TRANSMISSION THROTTLING AND DATA QUALITY UPDATING FOR A SLAM DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602932: SYSTEMS AND METHODS FOR MONITORING USERS EXITING A VEHICLE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602796: SYSTEM, DEVICE, AND METHODS FOR DETECTING AND OBTAINING INFORMATION ON OBJECTS IN A VEHICLE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602952: IMAGE-BASED AUTOMATED ERGONOMIC RISK ROOT CAUSE AND SOLUTION IDENTIFICATION SYSTEM AND METHOD (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602895: MACHINE LEARNING-BASED DOCUMENT SPLITTING AND LABELING IN AN ELECTRONIC DOCUMENT SYSTEM (granted Apr 14, 2026; 2y 5m to grant)

Based on this examiner's 5 most recent grants; study what changed in these cases to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 99% (+10.2% lift)
Median Time to Grant: 2y 2m
PTA Risk: Low

Based on 1089 resolved cases by this examiner. Grant probability is derived from the career allow rate.
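The headline projections are reproducible from the career data above, assuming the interview lift is applied multiplicatively to the base allow rate; that compounding rule is an assumption of this sketch, since the page does not state how the 99% figure is computed.

```python
granted, resolved = 976, 1089
base = granted / resolved      # career allow rate
with_interview = base * 1.102  # +10.2% relative interview lift (assumed multiplicative)
print(f"{base:.1%} {with_interview:.1%}")  # 89.6% 98.8% -> displayed as 90% and 99%
```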
