Prosecution Insights
Last updated: April 19, 2026
Application No. 17/529,737

Method for Generating Training Data for a Recognition Model for Recognizing Objects in Sensor Data from a Surroundings Sensor System of a Vehicle, Method for Generating a Recognition Model of this kind, and Method for Controlling an Actuator System of a Vehicle

Status: Non-Final OA (§103)
Filed: Nov 18, 2021
Examiner: NGUYEN, TRI T
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 3 (Non-Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 10m
Grant Probability with Interview: 82%

Examiner Intelligence

Grants 68% — above average

Career Allow Rate: 68% (125 granted / 183 resolved; +13.3% vs TC avg)
Interview Lift: +13.2% (moderate lift; allow rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution: 3y 10m (typical timeline; 31 applications currently pending)
Career History: 214 total applications, across all art units
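As a sanity check, the headline numbers above reduce to simple arithmetic. A minimal sketch, assuming the dashboard divides grants by resolved cases and treats the interview lift as additive percentage points (the underlying with/without-interview counts are not published on this page):

```python
# Career allow rate from the counts shown above.
granted, resolved = 125, 183
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")   # 68.3% -> shown as 68%

# Interview lift, read as additive percentage points (assumption).
interview_lift_pp = 13.2
with_interview = career_allow_rate * 100 + interview_lift_pp
print(f"With interview: {with_interview:.1f}%")        # 81.5% -> shown as 82%
```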

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)

Tech Center average is an estimate (the black line in the original chart). Based on career data from 183 resolved cases.
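A quick cross-check of the figures above: subtracting each statute's "vs TC avg" delta from its displayed rate recovers the baseline the chart compares against, and every statute yields the same flat 40% estimate. A minimal sketch, using only the numbers shown above:

```python
# Recover the implied Tech Center baseline from each statute's displayed
# rate (percent) and its "vs TC avg" delta (percentage points).
stats = {
    "§101": (15.7, -24.3),
    "§103": (57.5, +17.5),
    "§102": (7.2, -32.8),
    "§112": (14.2, -25.8),
}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
# Each line prints 40.0%, i.e. a single flat Tech Center estimate.
```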

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/29/2025 has been entered.

Response to Amendment

The amendment filed 09/29/2025 has been entered. Claims 1-5 and 7-12 remain pending in the application.

Response to Arguments

Applicant's arguments, filed 09/29/2025, with respect to the rejections of claims 1, 10 and 12 under § 103 have been fully considered and are persuasive because of the amendments. Therefore, the rejections have been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Liang et al. (US Pub. 20190171223) in view of Schindler et al. (DE102017006155A1, "Method for operating a sensor system of a vehicle") in view of Omar et al. ("Multiple Sensor Fusion and Classification for Moving Object Detection and Tracking") and further in view of Hyland et al. ("Real-Valued (Medical) Time Series Generation with Recurrent Conditional GANs").

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-8 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Liang et al. (US Pub. 20190171223) in view of Schindler et al. (DE102017006155A1) in view of Omar et al. and further in view of Hyland et al.

As per claim 1, Liang teaches a method for training for a recognition model configured to recognize objects in sensor data from a surroundings sensor system of a vehicle [Fig. 4A; paragraph 0033, "the predictor 404 may detect lane markings"; Examiner's Note (EN): the predictor corresponds to the recognition model and the detected lane markings correspond to the recognized objects; paragraph 0010, the images from a camera attached to an autonomous device ("The autonomous device may be any device that operates autonomously or semi-autonomously, such as a car ... images may be obtained from a camera associated with the autonomous device") correspond to the sensor data from the surroundings sensor system of a vehicle], the method comprising:

the first sensor data including a plurality of chronologically successive real measurements of the surroundings sensor of the surroundings sensor system [paragraph 0010, "The real operating images may be obtained from a camera associated with the autonomous device"; and paragraph 0036, "Real source images 418 are images of real-world scenes related to the operation of other devices that are obtained from a dataset. These real source images correspond to real operating images"; (EN): a camera is encompassed by the BRI of a surroundings sensor system. Multiple real operating images from a camera associated with a specific device are encompassed by the BRI of a plurality of chronologically successive real measurements of a first surroundings sensor of the surroundings sensor system];

inputting first simulation data into the training data generation model [paragraph 0041, "The generator 402 is configured to map a real operating image 408 ... in the real domain to a fake virtual image 410"; (EN): page 14, lines 7-10 of the instant specification state "generating the first simulation data by means of a computation model, which described physical properties at least of the first surroundings sensor and of surroundings of the vehicle. The computation model may, for example, comprise a sensor model," and page 16, lines 7-9 of the instant specification state "In the case of passive sensor modalities such as a camera, the computation model may also be divided into the described components," but the specification does not appear to explicitly define first simulation data. As such, simulation data encompasses data which is simulated, such as Liang's fake virtual images, as well as data which is utilized when simulating, or generating, simulated data, such as Liang's real operating images. As a result, inputting the real operating images 418 from a camera sensor to the generator 402 in order to produce simulated data is encompassed by the BRI of inputting first simulation data into the training data generation model];

generating second simulation data based on the first simulation data using the training data generation model [paragraph 0039, "Thus, given datasets of real source images 418 and true virtual images 416, ... the generator 402 to transform a real operating image 408 to its corresponding canonical representation 410 in the virtual domain"; (EN): generating a set of fake virtual images (alternately referenced as corresponding canonical representation 410) for a set of real operating images 408 utilizing the generator 402 is encompassed by the BRI of generating second simulation data based on the first simulation data utilizing the generation model. As discussed above, the predictor is trained utilizing the fake virtual images, demonstrating that the fake virtual images are encompassed by the BRI of the training data];

inputting the second simulation data into a further learning algorithm [paragraphs 0032-0033, "The one or more fake virtual images 410 are input to the predictor 404 ... The predictor 404 is configured to process the fake virtual images 410"; (EN): as outlined above, the fake virtual images 410 correspond to the second simulation data. The predictor, which is distinct from the generator, corresponds to the further learning algorithm]; and

training the recognition model to recognize objects based on the second simulation data using the further learning algorithm [paragraph 0033, "The predictor 404 is configured to process the fake virtual images 410 ... The predictor 404 may be trained to map a fake virtual image to a particular command in a library of commands based on prediction information present in the fake virtual image. For example, the predictor 404 may detect lane markings included in the fake virtual image and match one or more characteristics of the lane markings, such as a degree of curvature, to a command related to steering angle"; (EN): as outlined above, training the predictor 404 to detect lane markings corresponds to training the recognition model to recognize objects, and the fake virtual images correspond to the second simulation data].

Liang does not teach the surroundings sensor system having a first surroundings sensor and a second surroundings sensor that captures a different type of sensor data than the first surroundings sensor; inputting first sensor data and second sensor data into a learning algorithm, the second sensor data including a plurality of chronologically successive real measurements of the second surroundings sensor of the surroundings sensor system, each real measurement in the plurality of chronologically successive real measurements of the second surroundings sensor being assigned to a temporally corresponding real measurement in the plurality of chronologically successive real measurements of the first surroundings sensor; training a training data generation model configured to generate measurements of the second surroundings sensor assigned to measurements of the first surroundings sensor based on the first sensor data and the second sensor data using the learning algorithm; the first simulation data including a plurality of chronologically successive simulated measurements of the first surroundings sensor; or the second simulation data including a plurality of chronologically successive simulated measurements of the second surroundings sensor.

Schindler teaches the surroundings sensor system having a first surroundings sensor and a second surroundings sensor that captures a different type of sensor data than the first surroundings sensor [paragraph 0001, "a method for operating a sensor system of a vehicle having at least two environmental sensors"; paragraph 0023, "The vehicle 11 is designed as a passenger car. The vehicle 11 comprises a sensor system 10, which has two environmental sensors 12 and 16. The first environment sensor 12 may be a radar sensor. First sensor data can be provided with the first environment sensor 12 that describe an environment 20 of the vehicle 11. The second environment sensor 16 may be a lidar sensor. Second sensor data can be provided with the second environment sensor 16, which also describe the environment 20"]; inputting first sensor data and second sensor data into a learning algorithm [paragraphs 0010-0011, "the first classifier classifies first sensor data of the first environment sensor ... the first classifier receives second sensor data of a second environment sensor different from the first environment sensor"]; and training a training data generation model configured to generate measurements of the second surroundings sensor assigned to measurements of the first surroundings sensor based on the first sensor data and the second sensor data using the learning algorithm [paragraph 0008, "a first of the classifiers is trained with first training data of the first environment sensor"; paragraphs 0024-0025, "A first classifier 14 is assigned to the first environment sensor 12, which classifies the sensor data acquired by means of the first environment sensor 12 in order to obtain environmental information from the environment 20 of the vehicle 11 ... the first classifier 14 is trained with first training data of the first environment sensor 12"; it can be seen that in order for the first classifier to accurately classify first sensor data of the first environment sensor 12, the first classifier must be trained on first training data that is closely related to or representative of the first sensor data; paragraph 0011, "the first classifier receives second sensor data of a second environment sensor different from the first environment sensor. In a further step, this second sensor data is classified on the basis of the first classification model, in particular by the first classifier"; the examiner interprets the classified second sensor data as the measurement of the second sensor data that is generated using the first classifier, which is trained using training data that is representative of the first sensor data].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recognition model configured to recognize objects in sensor data of Liang to include the surroundings sensor system having a first surroundings sensor and a second surroundings sensor that captures a different type of sensor data than the first surroundings sensor, inputting first sensor data and second sensor data into a learning algorithm, and generating sensor data of a second sensor based on corresponding sensor data of a different type from a first sensor, as in Schindler. Doing so would help operate a sensor system that comprises at least two sensors to detect objects surrounding the vehicle (Schindler, 0001-0002).

Liang and Schindler do not teach the second sensor data including a plurality of chronologically successive real measurements of the second surroundings sensor of the surroundings sensor system, each real measurement in the plurality of chronologically successive real measurements of the second surroundings sensor being assigned to a temporally corresponding real measurement in the plurality of chronologically successive real measurements of the first surroundings sensor; the first simulation data including a plurality of chronologically successive simulated measurements of the first surroundings sensor; or the second simulation data including a plurality of chronologically successive simulated measurements of the second surroundings sensor.

Omar teaches the second sensor data including a plurality of chronologically successive real measurements of the second surroundings sensor of the surroundings sensor system [page 530, section 8.a, paragraph 3, "Let us consider two sources of evidence S1 and S2. Each of these sources provides a list of detections A = {a1, a2, ..., ab} and B = {b1, b2, ..., bn}, respectively"; (EN): Omar teaches detections between "Sensors S1 and S2" (page 530, section 6.1, paragraph 1, Omar), which is described as "including information from different sensor views of the environment, e.g., impact points provided by lidar and image patches provided by camera" (page 525, section 1, paragraph 5, Omar). Given Omar states that "We consider the LIDAR ... scanner as the main sensor in our configuration" (page 527, section 6.1, paragraph 1, Omar), Omar's list of detections for the second source is encompassed by the BRI of a second surroundings sensor measurement. Lists of camera and LIDAR detections in Omar's driving scenarios are chronologically successive, as lists themselves may be understood to express chronologically successive elements]; each real measurement in the plurality of chronologically successive real measurements of the second surroundings sensor being assigned to a temporally corresponding real measurement in the plurality of chronologically successive real measurements of the first surroundings sensor [page 530, section 8.a, paragraph 5, "If two associated detections have complementary information, this is passed directly to the fused object representation; if the information is redundant, it is combined according to its type"; (EN): as discussed above, Omar's statements are situated in the context of object detection for driving with a pair of chronologically successive lists of detection information corresponding to the pair of sensors].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recognition model configured to recognize objects in sensor data of Liang to include a plurality of chronologically successive real measurements of a second surroundings sensor of the surroundings sensor system of Omar, and to improve upon the machine learning system of Liang with Omar's multiple sensor fusion, because "An advantage of our fusion approach at the detection level is that the description of the objects can be enhanced by adding knowledge from different sensor sources" (Omar, page 526, section 2, paragraph 7).

Liang, Schindler and Omar do not teach the first simulation data including a plurality of chronologically successive simulated measurements of the first surroundings sensor, or the second simulation data including a plurality of chronologically successive simulated measurements of the second surroundings sensor.

Hyland teaches the first simulation data including a plurality of chronologically successive simulated measurements of the first surroundings sensor [page 7, section 5, paragraph 2, "we focus on generating the four most frequently recorded, regularly sampled variables measured by bedside monitors: oxygen saturation measured by pulse oximeter (SpO2), heart rate (HR), respiratory rate (RR) and mean arterial pressure (MAP). In the eICU dataset, these variables are measured every five minutes ... we downsample [sic] to one measurement every fifteen minutes"; (EN): the generated, equivalently simulated, oxygen saturation variables correspond to the simulated measurements, which are chronological and successive, sampled every 5 minutes in the real dataset and every 15 minutes in the generated, or simulated, dataset. The pulse oximeter is encompassed by the BRI of a first surroundings sensor]; and the second simulation data including a plurality of chronologically successive simulated measurements of the second surroundings sensor [same passage; (EN): the generated heart rate variables correspond to the simulated measurements, which are chronological and successive, sampled every 5 minutes in the real dataset and every 15 minutes in the generated, or simulated, dataset. The bedside monitors which measure, or regularly sample, the heart rate are encompassed by the BRI of a second surroundings sensor].

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recognition model configured to recognize objects in sensor data of Liang (as modified) to include Hyland's GAN-based time series data generation because "Access to data is one of the bottlenecks in the development of machine learning solutions to domain-specific problems. The availability of standard datasets (with associated tasks) has helped to advance the capabilities of learning systems in multiple tasks. However, progress appears to lag in other fields" (Hyland, page 1, section 1, paragraph 1).

As per claim 2, Liang, Schindler, Omar and Hyland teach the method according to claim 1. Liang further teaches the learning algorithm includes an artificial neural network [paragraph 0030, "The generator 402 includes convolutional layers, followed by residual blocks, followed by deconvolutional layers. In one configuration, two of the convolutional layers are 3x3 kernel and stride size 2. Two deconvolutional layers with stride ½ then transform the feature to the same size as the input real operating image 408. Instance normalization is used for all the layers. In operation, the generator 402 receives an input corresponding to one or more real operating images 408 and transforms each real operating image 408 input to a fake virtual image 410"; (EN): as discussed above, the generator 402 corresponds to the claimed learning algorithm. Page 8, lines 1-3 of the instant specification state "the method may be based on the use of an artificial neural network ... Specifically, methods for style transfer by means of a generative adversarial network, GAN for short ... may be used," but do not appear to explicitly define an artificial neural network. As such, as written, the BRI of the learning algorithm including an artificial neural network includes a generator from a GAN which is comprised of convolution layers, residual blocks, and deconvolution layers, with normalization].

As per claim 3, Liang, Schindler, Omar and Hyland teach the method according to claim 1. Liang further teaches the learning algorithm includes a generator configured to generate the second simulation data and a discriminator configured to evaluate the second simulation data based on at least one of (i) the first sensor data and (ii) the second sensor data [paragraphs 0038-0039, "Generally, an objective of the generator 402 is to increase its ability to generate fake virtual images 410 to fool the discriminator 406, while an objective of the discriminator is to increase its ability to correctly discriminates [sic] fake virtual images from true virtual images 416. Thus, given datasets of real source images 418 and true virtual images 416 ..., the learning objective trains the generator"; (EN): as discussed above, the fake virtual images correspond to the second simulation data, and the real source images 418 and true virtual images 416 correspond to the first and second sensor data, respectively].

As per claim 4, Liang, Schindler, Omar and Hyland teach the method according to claim 1. Liang further teaches generating the first simulation data using a computation model that describes physical properties of the first surroundings sensor and of surroundings of the vehicle [paragraph 0080, "model 400 obtains one or more real operating images 408 associated with operation of the autonomous device. The real operating images 408 may be obtained from a camera associated with the autonomous device"; (EN): page 16, lines 7-9 of the instant specification state "In the case of passive sensor modalities such as a camera, the computation model may also be divided into the described components," but do not appear to explicitly define a computation model. As such, the BRI of a computation model includes the camera sensor model. As discussed above, the first simulation data corresponds to the real image data. An image is reasonably understood to describe physical properties of the surroundings in which it was generated. Thus, the computation model that describes physical properties of the first surroundings sensor and of surroundings of the vehicle corresponds to the camera sensor model].

As per claim 5, Liang, Schindler, Omar and Hyland teach the method according to claim 4. Liang further teaches the computation model is configured to assign a target value to be output by the recognition model to each of the measurements in the plurality of measurements of the first surroundings sensor [paragraph 0013, "The real source images may be annotated with corresponding ground-truth operating parameters obtained from a real operating experience"; (EN): as discussed above, Liang's real source images correspond to the measurements of the first surroundings sensor. The BRI of a target value for a measurement includes a ground-truth annotation for an image].

Hyland further teaches simulated measurements in the plurality of chronologically successive simulated measurements [page 7, section 5, paragraph 2, "we focus on generating the four most frequently recorded, regularly sampled variables measured by bedside monitors: oxygen saturation measured by pulse oximeter (SpO2), heart rate (HR), respiratory rate (RR) and mean arterial pressure (MAP). In the eICU dataset, these variables are measured every five minutes"; (EN): as discussed above, generated variables associated with time series data sets are encompassed by the BRI of simulated measurements. A sample rate of five minutes per sample is encompassed by the BRI of a plurality of chronologically successive simulated measurements]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recognition model configured to recognize objects in sensor data of Liang (as modified) to include Hyland's GAN-based time series data generation for the reasons given above (Hyland, page 1, section 1, paragraph 1).

As per claim 7, Liang, Schindler, Omar and Hyland teach the method according to claim 1. Liang further teaches inputting the first simulation data as training data into the further learning algorithm [paragraph 0084, "To this end, the database of true virtual images to which real operating images are mapped may also be used to train a predictor 404"; (EN): as discussed above, the first simulation data corresponds to the real operating images. Data which is utilized to train a model must first be input or loaded by the model, and thus a teaching to train a model with specific data encompasses a teaching to input the data to the model. As discussed above, the further learning algorithm corresponds to the predictor 404], the first simulation data having been generated using a computation model that describes physical properties of the first surroundings sensor and of surroundings of the vehicle [paragraph 0080, "model 400 obtains one or more real operating images 408 associated with operation of the autonomous device. The real operating images 408 may be obtained from a camera associated with the autonomous device"; (EN): as discussed above, the first simulation data corresponds to the real image data and the camera sensor model corresponds to the computation model. An image is reasonably understood to describe physical properties of the surroundings in which it was generated]; and at least one of: generating, based on the second simulation data using the further learning algorithm, as the recognition model a second classifier configured to assign object classes to measurements of the second surroundings sensor [paragraph 0033, "The predictor 404 is configured to process the fake virtual images 410 ... For example, the predictor 404 may detect lane markings included in the fake virtual image"; and Fig. 4A, elements 402, 410, and 404; (EN): as discussed above, the predictor 404 is trained on the generated fake virtual images 410, which correspond to the second simulation data. Similarly, as discussed above, the predictor corresponds to the recognition model].

Omar further teaches generating, based on the first simulation data using the further learning algorithm, as the recognition model a first classifier configured to assign object classes to measurements of the first surroundings sensor [page 528, section 6.b.2, paragraphs 1-2, "we used the regions of interest (ROI) provided by lidar detection to focus on specific regions of the image ... For each class of interest (pedestrian, bike, car, truck), a binary classifier was trained off-line to identify object (positive) and non-object (negative) patches"; (EN): a set of classifications assigned to patches which correspond to LIDAR detections, where LIDAR detections correspond to the first surroundings sensor measurements, is encompassed by the BRI of a first classifier configured to assign object classes to measurements of the first surroundings sensor]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recognition model configured to recognize objects in sensor data of Liang to include generating, based on the first simulation data using the further learning algorithm, as the recognition model a first classifier configured to assign object classes to measurements of the first surroundings sensor, as in Omar. Doing so would help classify the moving objects (Omar, abstract).

As per claim 8, Liang, Schindler, Omar and Hyland teach the method according to claim 7. Liang further teaches inputting, into the further learning algorithm, target values to be output by the recognition model [paragraph 0084, "the predictor 404 may be trained to minimize the mean square loss between predicted controlling commands and ground-truth controlling commands from human experts"; (EN): as discussed above, the BRI of a target value for a measurement includes a ground-truth annotation for an image. As discussed above, any training which utilizes the fake virtual images must first access or receive the fake virtual images, the predictor 404 corresponds to the further learning algorithm, and the trained predictor 404 corresponds to the recognition model. Minimizing an error between the output and the ground-truth annotation is encompassed by the BRI of outputting target values], the target values having been assigned by the computation model to each of the plurality of measurements of the first surroundings sensor [paragraph 0013, "The real source images may be annotated with corresponding ground-truth operating parameters obtained from a real operating experience"; (EN): as discussed above, Liang's real source images correspond to the measurements of the first surroundings sensor, and the BRI of a target value for a measurement includes a ground-truth annotation for an image]; and generating the recognition model further based on the target values using the further learning algorithm [paragraph 0084, "the predictor 404 may be trained to minimize the mean square loss between predicted controlling commands and ground-truth controlling commands from human experts"; (EN): as discussed above, the ground-truth controlling commands annotated for an image correspond to the target values, or labels].

Hyland further teaches simulated measurements in the plurality of chronologically successive simulated measurements, as set forth in the treatment of claim 5 above. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recognition model configured to recognize objects in sensor data of Liang (as modified) to include Hyland's GAN-based time series data generation for the reasons given above (Hyland, page 1, section 1, paragraph 1).

Claim 10 is substantially similar to claim 1 and thus is rejected under the same rationale as claim 1. Liang further teaches a data processing apparatus ... a processor configured to [paragraph 0088, "The apparatus 1000 may include one or more processors 1002 configured to access and execute computer executable instructions stored in at least one memory 1004"].

As per claim 11, Liang, Schindler, Omar and Hyland teach the method according to claim 1. Liang further teaches wherein the method is performed by a processor that executes instructions of a computer program [paragraph 0088, "The apparatus 1000 may include one or more processors 1002 configured to access and execute computer executable instructions stored in at least one memory 1004"].

Claim 12 is substantially similar to claim 1 and thus is rejected under the same rationale as claim 1. Liang further teaches a non-transitory computer-readable medium that stores a computer program for [paragraph 0088, "computer executable instructions stored in at least one memory 1004"]; the computer program including instructions that, when executed by a processor, cause the processor to [paragraph 0088, "Software or firmware implementations of the processor 1002 may include computer executable or machine-executable instructions written in any suitable programming language to perform the various functions described herein"].

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Liang et al. in view of Schindler et al. in view of Omar et al. in view of Hyland et al. in view of Amini et al. ("Variational Autoencoder for End-to-End Control of Autonomous Driving with Novelty Detection and Training De-biasing") and further in view of Djuric (US Pub. 20190049970).

As per claim 9, Liang, Schindler, Omar and Hyland teach the method according to claim 1. Liang further teaches receiving further sensor data generated by the surroundings sensor system [Fig. 9, element 902, "obtain one or more real operating images associated with operation of the autonomous device"; (EN): real operating images associated with operation of the autonomous device are encompassed by the BRI of receiving further sensor data from the surroundings sensor system, which corresponds to Liang's camera, as discussed above]; and inputting the further sensor data into the recognition model [Fig. 9, elements 902, 904 and 906; (EN): Liang in paragraph 0084 specifies that "This prediction may be performed by a predictor module 404," which corresponds to the recognition model, as discussed above].

Liang, Schindler, Omar and Hyland do not teach controlling an actuator system of the vehicle by controlling the actuator system based on outputs from the recognition model.

Amini teaches controlling an actuator system of the vehicle [page 569, section 1, paragraph 5, "We use end-to-end autonomous driving as the robotic control use case. Here a steering control command is predicted from only a single input image ... Control systems for autonomous vehicles"; (EN): page 11, lines 10-11 of the instant specification state "The actuator system may, for example, comprise a steering actuator," but do not appear to explicitly define an actuator system of the vehicle. As such, a system which controls autonomous vehicles via steering control is encompassed by the BRI of controlling an actuator system of the vehicle]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recognition model configured to recognize objects in sensor data of Liang to include Amini's vehicle actuator control because "As a safety-critical task, autonomous driving is particularly well suited for our approach. Control systems for autonomous vehicles, when deployed in the real world, face enormous amounts of uncertainty and possibly even environments that they have never encountered before. Additionally, autonomous driving is a safety critical application of robotics; such control systems must possess reliable ways of assessing their own confidence" (Amini, page 569, section 1, paragraph 5).

Liang, Schindler, Omar, Hyland and Amini do not teach controlling the actuator system based on outputs from the recognition model.

Djuric teaches controlling the actuator system based on outputs from the recognition model [Fig. 6; paragraph 0103, "At (614), the method 600 can include controlling a motion of the vehicle based at least in part on the output. For instance, the vehicle computing system 102 can control a motion of the vehicle 104 based at least in part on the output 406 from the model 136 (e.g., the machine learned model). The vehicle computing system 102 can generate a motion plan 134 for the vehicle 104 based at least in part on the output 406, as described herein. The vehicle computing system 102 can cause the vehicle 104 to travel in accordance with the motion plan 134"; (EN): causing the vehicle to travel in accordance with the plan is encompassed by the BRI of controlling the actuator system based on outputs from the recognition model]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified the recognition model configured to recognize objects in sensor data of Liang to include Djuric's autonomous vehicle control based on the output of machine learning models, as it "provide[s] a number of technical effects and benefits ... In particular, by ... using machine learning models, the systems and methods of the present disclosure can better predict one or more future locations of an object. The improved ability to predict future object location(s) can enable improved motion planning and other control of the autonomous vehicle based on such predicted future object locations, thereby further enhancing passenger safety and vehicle efficiency ... The present disclosure also provides additional technical effects and benefits, including, for example, enhancing passenger/vehicle safety and improving vehicle efficiency by reduction collisions" (Djuric, 0036).

Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mohta et al. (US Pub. 2024/0369977) describes a method for detecting and predicting the objects within the surrounding environment of a system such as an autonomous vehicle. Bruns et al. (US Pub. 2020/0384989) describes a method for improving the detection of objects by the driver assistance system.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRI T NGUYEN, whose telephone number is 571-272-0103. The examiner can normally be reached M-F, 8 AM-5 PM (CT).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, OMAR FERNANDEZ, can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TRI T NGUYEN/
Examiner, Art Unit 2128

/OMAR F FERNANDEZ RIVAS/
Supervisory Patent Examiner, Art Unit 2128
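For orientation, the claim 2 analysis above quotes Liang's generator layout: 3x3 stride-2 convolutional layers, residual blocks, stride-½ deconvolutional layers, and instance normalization on all layers, mapping a real operating image to a same-sized fake virtual image. A minimal PyTorch sketch of a generator in that style follows; the channel widths, residual-block count, and Tanh output are assumptions for illustration, not details taken from Liang.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """3x3 conv -> InstanceNorm -> ReLU -> 3x3 conv -> InstanceNorm, plus skip."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    """Real-domain image -> 'fake virtual' image, per the quoted layout:
    conv layers (3x3, stride 2) -> residual blocks -> deconv layers (stride 1/2),
    with instance normalization used for all the layers."""
    def __init__(self, in_ch=3, base=64, n_res=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1),
            nn.InstanceNorm2d(base),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.ReLU(inplace=True),
            *[ResidualBlock(base * 2) for _ in range(n_res)],
            nn.ConvTranspose2d(base * 2, base, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, in_ch, 3, stride=2, padding=1, output_padding=1),
            nn.Tanh(),  # output activation is an assumption
        )

    def forward(self, x):
        return self.net(x)

# A 256x256 RGB "real operating image" maps to a same-sized output.
out = Generator()(torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

The "stride ½" deconvolutions in the quoted passage are realized here as ConvTranspose2d with stride 2, which upsamples by the same factor that the stride-2 convolutions downsampled.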

Prosecution Timeline

Nov 18, 2021: Application Filed
Mar 05, 2025: Non-Final Rejection (§103)
May 16, 2025: Response Filed
Jun 30, 2025: Final Rejection (§103)
Sep 29, 2025: Request for Continued Examination
Oct 06, 2025: Response after Non-Final Action
Mar 07, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572820
METHODS AND SYSTEMS FOR GENERATING KNOWLEDGE GRAPHS FROM PROGRAM SOURCE CODE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12536418
PERTURBATIVE NEURAL NETWORK
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12524662
BLOCKCHAIN FOR ARTIFICIAL INTELLIGENCE TRAINING
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12493963
JOINT UNSUPERVISED OBJECT SEGMENTATION AND INPAINTING
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12468974
QUANTUM CONTROL DEVELOPMENT AND IMPLEMENTATION INTERFACE
Granted Nov 11, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 82% (+13.2%)
Median Time to Grant: 3y 10m
PTA Risk: High

Based on 183 resolved cases by this examiner. Grant probability derived from career allow rate.
