Prosecution Insights
Last updated: April 19, 2026
Application No. 18/342,499

WORLD MODEL GENERATION AND CORRECTION FOR AUTONOMOUS VEHICLES

Status: Non-Final OA (§103)
Filed: Jun 27, 2023
Examiner: PEKO, BRITTANY RENEE
Art Unit: 3665
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Torc Robotics, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 83% (above average): 130 granted / 157 resolved, +30.8% vs TC avg
Interview Lift: +14.2% (moderate), measured across resolved cases with interview
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 164 across all art units (7 currently pending)
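The headline allow rate follows directly from the career counts above. A minimal sketch, assuming the dashboard's "Career Allow Rate" is simply grants divided by resolved cases, rounded to the nearest percent:

```python
# Figures from the Examiner Intelligence card above.
granted = 130
resolved = 157

# Assumption: the displayed rate is the plain grant ratio, rounded.
allow_rate_pct = round(granted / resolved * 100)
print(allow_rate_pct)  # 83, matching the card
```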

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 21.3% (-18.7% vs TC avg)
§112: 9.5% (-30.5% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 157 resolved cases.
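The per-statute deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same implied Tech Center baseline for every statute. A sketch of that check, using only the figures shown above:

```python
# (examiner allow rate %, delta vs TC avg %) for each rejection statute.
stats = {
    "101": (11.0, -29.0),
    "103": (54.1, +14.1),
    "102": (21.3, -18.7),
    "112": (9.5, -30.5),
}

# Implied TC-average baseline per statute (rate minus delta).
baselines = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(baselines)  # every statute implies the same 40.0% baseline
```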

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 16 December 2025 has been entered.

Response to Arguments

Applicant's arguments filed 16 December 2025 have been fully considered. Applicant's recent amendments to independent claims 1, 9, and 17 have required further searching and consideration by the Examiner. Upon further searching and consideration, an updated rejection is provided below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-7, 9, 11-15, 17, 19-20, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Bande et al. ("Bande") (US 2023/0366699 A1) in view of Gansch et al. ("Gansch") (US 2022/0258765 A1) and Yang et al. ("Yang") (US 2021/0004613 A1).

Regarding claim 1, Bande teaches a system for correcting geometry in a world model (see at least the abstract), comprising:

at least one processor coupled to non-transitory memory (see at least FIG. 5), the at least one processor configured to:

retrieve, from a world model, expected geometric data for a road traveled by an autonomous vehicle: see at least [0039], where the system 200 can obtain a road model based on an environment map 205 (e.g., an HD map), the HD map containing road features (i.e., geometric data such as the expected locations of road lanes) [0022] & [0029]. Further see at least [0032], where the system 200 can be part of an autonomous driving system included in an autonomous vehicle;

receive sensor data from a plurality of sensors of the autonomous vehicle, the sensor data captured during operation of the autonomous vehicle: see at least [0023] & [0038], where a plurality of sensors 204 of a vehicle are configured to output sensor data to a sensor-based map generator 206;

generate a predicted geometry for a feature of the road based on the sensor data: see at least [0038], where the sensor data is used by the sensor-based map generator 206 to generate a sensor-based map 207. The sensor-based map 207 can include one or more elements defining elements or components of the environment surrounding the vehicle, such as a lane boundary, road boundary, etc.;

detect an error in the expected geometric data based on the predicted geometry of the feature: see at least [0027] & [0040]-[0043], where "the map comparison engine 208 can identify one or more parameters of the environment map-based road model and the sensor map-based road model and then compare the difference in the parameters between the environment map-based road model and the sensor map-based road model. The one or more parameters can be a lateral difference or offset between map/model elements, an angular difference or offset between map/model elements, and/or other parameters";

generate a correction to the world model based on the error: see at least [0026], [0029], and [0043], where a correction may be determined for the environment map-based road model based on the difference in the parameters between the environment map-based road model and the sensor map-based road model;

modify the world model based on the correction: see at least [0026], [0029], and [0053], where, for example, the correction may include shifting one or more road lane elements based on the difference; and

navigate the autonomous vehicle based at least in part on the modified world model: see at least [0052]-[0053], where the process 400 includes determining a correction for the first map based on the difference being less than a threshold difference and further comprises performing at least one navigation function using the first map (i.e., the HD map) when the difference is less than the threshold difference.
Bande teaches all of the elements of the current invention as stated above, except wherein the processor is configured to: detect temporal features based on the sensor data; and incorporate the temporal features into a modified world model by: including the temporal features in the modified world model as the autonomous vehicle is approaching the temporal features and the temporal features become proximate the autonomous vehicle; and removing the temporal features from the modified world model as the autonomous vehicle is departing from the temporal features and the temporal features become irrelevant to navigation of the autonomous vehicle.

However, Gansch discloses that it is known to provide the processor configured to detect temporal features based on the sensor data and incorporate the temporal features into a modified world model by including the temporal features in the modified world model as the autonomous vehicle is approaching the temporal features and the temporal features become proximate the autonomous vehicle: see at least the abstract, [0007], and [0009], where a method for modeling the surroundings of an automated vehicle is provided in which a world model is generated based on data acquired by vehicle sensors and external sensors. The world model may comprise environment conditions such as weather conditions, road conditions, light conditions, and/or coordinates of surrounding objects (i.e., temporal features), which may be detected by a plurality of sensors such as a camera and/or radar sensor and/or rain sensor and/or lidar sensor and/or pressure sensor and/or GPS receiver, etc. The world model may be adjusted dynamically based on the received environment information [0032].

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified Bande to incorporate the teachings of Gansch and provide the processor configured to detect temporal features based on the sensor data and incorporate the temporal features into a modified world model. In doing so, this "achieves the technical advantage of it being possible to calculate a safe and stable world model, even in case of dynamically changing availabilities of information sources, or the environment information provided by them, particularly during runtime" [0019], thus making the world model more reliable and safer [0017].

Yang discloses that it is known to provide the processor configured to incorporate the temporal features into a modified world model by removing the temporal features from the modified world model as the autonomous vehicle is departing from the temporal features and the temporal features become irrelevant to navigation of the autonomous vehicle: see at least FIG. 14, steps 1405 and 1410, and [0128]-[0132], where sensor data such as camera images and/or lidar point clouds are obtained and a learning model is applied to the images and/or LIDAR point clouds in order to classify each of the detected objects into classes, which may include dynamic objects (such as people, bicycles, cars, animals, etc.). The dynamic objects are considered temporal features since they are in motion and therefore are expected to exist in their respective detected locations only temporarily. Further, see at least FIG. 14, step 1435, and [0141]-[0142], where the dynamic objects may be removed from the map in order to facilitate localization when a vehicle utilizes the map. "For example, the dynamic objects may change location or simply not be present when an autonomous vehicle is in the same place at some future time and using the map."

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified Bande in view of Gansch to incorporate the teachings of Yang and provide the processor configured to incorporate the temporal features into a modified world model by removing the temporal features from the modified world model as the autonomous vehicle is departing from the temporal features and the temporal features become irrelevant to navigation of the autonomous vehicle. In doing so, the method is improved by helping facilitate localization of the vehicle by removing dynamic objects: "If the dynamic objects is left in the map, any localization may be compromised and/or cause errors because the objects to help facilitate localization is not present or has moved as compared to sensor data being collected by the autonomous vehicle" [0141].

Regarding claim 3, Bande in view of Gansch and Yang teaches the system of claim 1, wherein the feature of the road comprises one or more of a shoulder of the road, a lane of the road, or an intersection of the road: see at least Bande [0027] & [0031], where the feature may include a location of a road lane.
Regarding claim 4, Bande in view of Gansch and Yang teaches the system of claim 1, wherein the expected geometric data for the road comprises one or more of a location of a shoulder of the road, a location of one or more lane lines of the road, or a number of lanes of the road: see at least Bande [0031], where the vehicle 102 can determine a location of lanes on the road 106 (e.g., a location of a lane marker 108), and [0029], where a correction may include shifting one or more road lane elements by an amount determined by the difference between the one or more road lane elements of the HD map and the one or more corresponding road lane elements of the sensor-based map.

Regarding claim 5, Bande in view of Gansch and Yang teaches the system of claim 1, wherein the plurality of sensors comprises one or more of a light detection and ranging (LiDAR) sensor, a radar sensor, a camera, or an inertial measurement unit (IMU): see at least Bande [0038], where the one or more sensors 204 may include LIDAR sensors, radar sensors, inertial sensors, etc.

Regarding claim 6, Bande in view of Gansch and Yang teaches the system of claim 1, wherein the at least one processor is further configured to transmit the correction to at least one server to correct corresponding map information: see at least Bande [0053].

Regarding claim 7, Bande in view of Gansch and Yang teaches the system of claim 1, wherein the at least one processor is further configured to detect the error responsive to a difference between the predicted geometry for the feature and expected geometric of the feature indicated in the expected geometric data satisfying a threshold: see at least Bande [0029] and [0053], where a correction for the first map (i.e., the HD map) is determined if the difference between the one or more road lane elements of the HD map (i.e., the expected geometry of the feature) and the one or more corresponding road lane elements of the sensor-based map (i.e., the predicted geometry of the feature) is less than a threshold difference.

Claims 9 and 17 have substantially similar technical features as claim 1 and are therefore rejected under the same rationale. Claim 11 has substantially similar technical features as claim 3 and is therefore rejected under the same rationale. Claims 12 and 20 comprise substantially similar technical features as claim 4 and are therefore rejected under the same rationale. Claims 13 and 19 comprise substantially similar technical features as claim 5 and are therefore rejected under the same rationale. Claim 14 has substantially similar technical features as claim 6 and is therefore rejected under the same rationale. Claim 15 has substantially similar technical features as claim 7 and is therefore rejected under the same rationale.

Regarding claim 22, Bande in view of Gansch and Yang teaches the system of claim 1, wherein the at least one processor is further configured to detect the temporal features including weather conditions: see at least Gansch [0007] and [0014], where the world model may comprise environment conditions such as weather conditions (such as rain, fog, etc.) which may be detected by a plurality of sensors such as camera and/or rain sensors.

Claims 2, 10, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bande in view of Gansch and Yang as applied to claim 1 above, and further in view of Elluswamy et al. ("Elluswamy") (US 2024/0304003 A1).

Regarding claim 2, Bande in view of Gansch and Yang does not explicitly disclose the system of claim 1, wherein the at least one processor is further configured to execute an artificial intelligence model using at least a portion of the sensor data as input to generate the predicted geometry for the feature of the road. Rather, Bande discloses the process of inputting sensor data (from one or more sensors 204) into the sensor-based map generator 206 in order to generate a sensor-based map 207, the sensor-based map 207 including one or more elements defining a lane boundary, road boundary, etc. However, Elluswamy teaches that it is known to provide the system of claim 1, wherein the at least one processor is further configured to execute an artificial intelligence model using at least a portion of the sensor data as input to generate the predicted geometry for the feature of the road: see at least [0015] and [0031]-[0032], where a trained machine learning model is used to predict a three-dimensional representation of one or more features for autonomous driving, such as lane lines.

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to have modified Bande in view of Gansch and Yang to incorporate the teachings of Elluswamy and provide the system of claim 1, wherein the at least one processor is further configured to execute an artificial intelligence model using at least a portion of the sensor data as input to generate the predicted geometry for the feature of the road. In doing so, this greatly improves the accuracy of lane line detection and the detection of corresponding lanes and identified drivable paths [0015].

Claims 10 and 18 comprise substantially similar technical features as claim 2 and are therefore rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ostafew et al. (US 2021/0148726 A1) discloses an apparatus for safety-assured remote driving for autonomous vehicles. The invention includes incorporating dynamic and static objects into a world model and planning a trajectory for an autonomous vehicle according to the world model.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brittany Renee Peko, whose telephone number is (408) 918-7506. The examiner can normally be reached Monday - Thursday, 8:30-6:30 PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Erin Bishop, can be reached at 571-270-3713. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/B.R.P./ 02/06/2026
Examiner, Art Unit 3665

/Erin D Bishop/
Supervisory Patent Examiner, Art Unit 3665

Prosecution Timeline

Jun 27, 2023
Application Filed
Apr 22, 2025
Non-Final Rejection — §103
Jul 03, 2025
Interview Requested
Jul 14, 2025
Examiner Interview Summary
Jul 28, 2025
Response Filed
Sep 05, 2025
Final Rejection — §103
Nov 03, 2025
Interview Requested
Nov 12, 2025
Examiner Interview Summary
Dec 16, 2025
Request for Continued Examination
Jan 09, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103
Apr 07, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589747: VEHICLE CONTROL SYSTEMS FOR AUTOMATED VEHICLE PLATOON DRIVING
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12583436: HYBRID ELECTRIC VEHICLE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12580242: BATTERY TEMPERATURE CONTROL APPARATUS AND METHOD FOR ELECTRIC VEHICLES
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12576736: BATTERY ELECTRIC VEHICLE
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12576858: INTELLIGENT SETTINGS OF ONBOARD SENSORS ON A VEHICLE
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
Grant Probability With Interview: 97% (+14.2%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 157 resolved cases by this examiner. Grant probability derived from career allow rate.
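The with-interview figure is consistent with simple addition of the interview lift to the base grant probability. A sketch under that assumption (the dashboard's actual model is not disclosed):

```python
# Figures from the projection card above.
base_grant_prob = 83.0    # grant probability, %
interview_lift = 14.2     # observed lift from examiner interviews, %

# Assumption: the dashboard adds the lift directly, capped at 100%.
with_interview = min(base_grant_prob + interview_lift, 100.0)
print(round(with_interview))  # 97, matching the card
```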
