Prosecution Insights
Last updated: April 19, 2026
Application No. 18/217,467

USING RADAR DATA FOR AUTOMATIC GENERATION OF MACHINE LEARNING TRAINING DATA AND LOCALIZATION

Status: Non-Final OA (§103)
Filed: Jun 30, 2023
Examiner: ALQADERI, NADA MAHYOOB
Art Unit: 3664
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Torc Robotics, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74%, above average (67 granted / 90 resolved; +22.4% vs Tech Center average)
Interview Lift: +30.8% for resolved cases with an interview
Avg Prosecution: 2y 10m (32 applications currently pending)
Total Applications: 122 across all art units
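The figures above reduce to simple ratios. As a sanity check, here is a minimal sketch of the arithmetic (variable names are ours, not the tool's; the dashboard rounds to whole percentage points):

```python
# Career allow rate: granted / resolved, per the "67 granted / 90 resolved" figure.
granted, resolved = 67, 90
allow_rate = granted / resolved
print(f"allow rate: {allow_rate:.1%}")          # 74.4%, displayed as 74%

# "+22.4% vs TC avg" implies a Tech Center average allow rate of roughly:
tc_avg = allow_rate * 100 - 22.4
print(f"implied TC average: {tc_avg:.1f}%")     # ~52.0%

# Interview lift: 99% (with interview) minus the +30.8-point lift
# implies roughly 68.2% for cases resolved without an interview.
without_interview = 99.0 - 30.8
print(f"implied rate without interview: {without_interview:.1f}%")  # 68.2%
```

These back-of-envelope numbers line up with the displayed 74% rate and +30.8% lift, which suggests the tool computes them the same way.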

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 90 resolved cases.
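The per-statute deltas can be inverted to recover the Tech Center baseline each rate is compared against. A small sketch, using the rates and deltas shown above (the dictionary layout is ours):

```python
# Examiner's per-statute rates and their deltas vs the Tech Center average.
# rate - delta recovers the implied TC baseline for each statute.
stats = {
    "101": (20.1, -19.9),
    "103": (54.4, +14.4),
    "102": (7.3, -32.7),
    "112": (16.1, -23.9),
}
for statute, (rate, delta) in stats.items():
    baseline = round(rate - delta, 1)
    print(f"§{statute}: examiner {rate}% vs implied TC avg {baseline}%")
```

Notably, all four deltas are consistent with a single ~40.0% baseline, so the tool appears to use one Tech Center average estimate across statutes.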

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-18 are pending in the instant application.

Examiner's Note

Examiner has cited particular paragraphs/columns and line numbers or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims, and is not limited to a definition of Applicant's that is not specifically set forth in the claims.

Response to Arguments

Regarding the 101 rejection: Applicant's amendments overcome the 101 rejection raised in the previous action; the 101 rejection is therefore withdrawn.

Regarding the 112 rejection: Applicant's amendments overcome the previous 112 rejection; the 112 rejection is therefore withdrawn.

Regarding the 103 rejection: Applicant's arguments filed 12/19/2025 have been fully considered.
Applicant's amendments changed the scope of the claims, and Examiner agrees that the previously cited references do not teach the amended limitations. Examiner therefore brings forth new references, Harrison and Urtasun, in the new rejection set forth below.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Harrison (US 20220044359) in view of Urtasun (US 20210200212).
Regarding Claim 1, Harrison discloses a method comprising:

instructing, by a processor, a time signal from a grand master clock to be transmitted to a second processor associated with a sensor of an autonomous vehicle traveling on a roadway (Harrison, see at least [0021], wherein training data provided to the super-resolution network includes radar data acquired by radar 106 and time-synchronized lidar data acquired by lidar 104 or by other lidars in autonomous vehicles in the vicinity of ego vehicle 100, such as lidar 112 in autonomous vehicle 110 and lidar 116 in autonomous vehicle 114);

instructing, by the processor, the second processor associated with the sensor of the autonomous vehicle to sync an internal clock with the time signal (Harrison, see at least [0027], wherein lidar data is captured with associated timestamps by one or more lidar sensors; the captured radar data includes coarse-resolution radar data, and a training set is formed with lidar data that is time-synchronized with the coarse-resolution radar dataset);

retrieving, by the processor, a first set of image data of the roadway from the sensor, the image data including frames of at least one of images or videos captured by the sensor of the autonomous vehicle, and synching the first set of image data with the time signal (Harrison, see at least [0023-0029], wherein a training data set is used to train a super-resolution network; the training data set includes one or more image sets of radar data and lidar data and associated timestamp data, and the super-resolution network is trained to map the radar dataset having first timestamp data into a radar dataset that corresponds to the lidar dataset having second timestamp data that is nearest to the first timestamp data);

identifying, by the processor, a second set of image data of the roadway from a distinct autonomous vehicle traveling on the roadway, the distinct autonomous vehicle being distinct from the autonomous vehicle as a different vehicle or as the autonomous vehicle traveling on the roadway at a different time (Harrison, see at least Fig. 1, which shows multiple vehicles on the road with cameras, radar sensors, and lidar sensors collecting data, and [0064-0066], wherein the ego vehicle communicates with other nearby vehicles to collect data; also see at least [0023-0029], as summarized above); and

generating, by the processor, a map layer based on the synched first set of image data (Harrison, see at least [0022], wherein received data is mapped into respective datasets and is time-synchronized).
Harrison does not explicitly disclose generating, by the processor, a map layer based on the synched first set of image data; executing, by the processor, a matching protocol to match an object within the second set of image data with an object within the map layer; labeling, by the processor, the second set of image data to generate a training dataset by labeling matching objects between the map layer and the second set of image data as ground truth; and training, by the processor, at least one machine learning model using the training dataset, the at least one machine learning model configured to predict attributes used to navigate a third autonomous vehicle, wherein an autonomous system of the third autonomous vehicle is configured to control the third autonomous vehicle to travel along a route planned based on the attributes predicted by the at least one trained machine learning model.

However, Urtasun discloses:

executing, by the processor, a matching protocol to match an object within the second set of image data with an object within the map layer (Urtasun, see at least [0116], wherein, to train a machine-learned model, a training data set can include a large number of previously obtained representations of input data, as well as corresponding labels that describe corresponding outputs associated with the corresponding input data; the training data set can more particularly include a first portion of data corresponding to one or more representations of input data, which can, for example, be recorded or otherwise determined while a vehicle is in navigational operation; the training dataset can further include a second portion of data corresponding to labels identifying outputs, and those labels can be manually annotated, automatically annotated, or annotated using a combination of automatic and manual labeling; the data is sensor data, and the sensor can be a camera capturing images; also see [0132]);

labeling, by the processor, the second set of image data to generate a training dataset by labeling matching objects between the map layer and the second set of image data as ground truth (Urtasun, see at least [0027], [0054], [0058], wherein an autonomous vehicle can include an onboard vehicle computing system that can identify one or more objects around the vehicle based on sensor data and/or map data; the vehicle computing system can process the sensor data, map data, etc. to obtain perception data indicative of one or more current or past states of objects within the vehicle's environment, describing, for a given time or time period, current/past speed and velocity, and more; also see [0116-0118], wherein the training dataset can further include a second portion of data corresponding to labels identifying outputs, and more than one set of data and ground truth data is used to train the model); and

training, by the processor, at least one machine learning model using the training dataset, the at least one machine learning model configured to predict attributes used to navigate a third autonomous vehicle (Urtasun, see at least [0116-0118], wherein training data can be utilized to train a machine-learned model (behavioral planning stage and trajectory planning stage), which would then be used to navigate an autonomous vehicle), wherein an autonomous system of the third autonomous vehicle is configured to control the third autonomous vehicle to travel along a route planned based on the attributes predicted by the at least one trained machine learning model (Urtasun, see at least [0056-0057], wherein the autonomy computing system can include a perception system, a prediction system, a motion planning system, and other systems to determine a motion plan for controlling the motion of the vehicle accordingly; the machine learning model is used to determine a motion plan to generate target trajectories for the vehicle).

Accordingly, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified Harrison to include the capability of utilizing map data and using this data to generate a training data set to further classify objects, where this training data can be used to control an autonomous vehicle to travel along a planned route, as taught by Urtasun, with a reasonable expectation that this would allow the processors to communicate operations between different sensor components from different nearby vehicles and therefore improve vehicle reliability when the vehicle is controlled to avoid an object.

Regarding Claim 2, Harrison in view of Urtasun discloses the method of claim 1 (see rejection above). Harrison further discloses, in response to retrieving the first set of image data, filtering, by the processor, at least one object from the first set of image data received by the sensor of the autonomous vehicle, the at least one filtered object excluded from the generated map layer (Feit, see at least [0041], wherein the perception module may also include a multi-object tracker to track the identified targets over time with the use of a Kalman filter; the multi-object tracker 818 matches candidate targets identified by the target identification and decision module 814 with targets it has detected in previous time windows; by combining information from previous measurements, expected measurement uncertainties, and some physical knowledge, the multi-object tracker 818 generates robust, accurate estimates of target locations.
)

Regarding Claim 3, Harrison in view of Urtasun discloses the method of claim 1 (see rejection above). Harrison further discloses wherein the first set of image data and the second set of image data further comprise a corresponding time-stamp (Harrison, see at least [0023-0029], wherein a training data set is used to train a super-resolution network; the training data set includes one or more image sets of radar data and lidar data and associated timestamp data, and the super-resolution network is trained to map the radar dataset having first timestamp data into a radar dataset that corresponds to the lidar dataset having second timestamp data that is nearest to the first timestamp data).

Regarding Claim 4, Harrison in view of Urtasun discloses the method of claim 1 (see rejection above). Harrison further discloses syncing, by the processor, the internal clock of the sensor of the autonomous vehicle with a second internal clock of a second sensor of the autonomous vehicle (Harrison, see at least [0023-0029], as summarized above).

Regarding Claim 5, Harrison in view of Urtasun discloses the method of claim 1 (see rejection above). Harrison further discloses wherein the sensor is a LiDAR sensor (Harrison, see at least [0027], wherein lidar data is captured with associated timestamps by one or more lidar sensors).

Regarding Claim 6, Harrison in view of Urtasun discloses the method of claim 1. Harrison discloses wherein the generated map layer further includes the time signal
(Harrison, see at least [0022], wherein received data is mapped into respective datasets and is time-synchronized).

As per claims 7-12, the claims are directed towards a non-transitory machine-readable storage medium that recites limitations similar to those performed by the methods of claims 1-6. The cited portions of Harrison and Urtasun used in the rejection of claims 1-6 teach the same limitations of claims 7-12. Claim 7 further recites "A non-transitory machine-readable storage medium having computer-executable instructions stored thereon that, when executed by one or more processors, cause the one or more processors to perform operations comprising," which is disclosed in [0007] of Urtasun. Therefore, claims 7-12 are rejected under the same rationales used in the rejections of claims 1-6 as outlined above.

As per claims 13-18, the claims are directed towards a system that recites limitations similar to those performed by the methods of claims 1-6. The cited portions of Harrison and Urtasun used in the rejection of claims 1-6 teach the same limitations of claims 13-18. Therefore, claims 13-18 are rejected under the same rationales used in the rejections of claims 1-6 as outlined above.

Relevant Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

CN 119165760 – Discloses time-synchronized sensor data for robotics systems and applications. In various examples, a set of associated time stamps is sampled from a clock source and used to calculate conversion data, such as the offset and/or rate of change of the clock source. The offset and/or rate of change may be used to convert a time stamp to a reference time domain. The sampled clock source may be frequency locked, in which case the conversion may be performed without using the rate of change; for example, an operating average of the offsets may be used to perform the conversion.
The converted time stamp and corresponding sensor measurements may be provided to one or more applications for performing one or more operations of the machine, such as sensing and/or control operations.

US 20170168494 A1 – A sensor interface for an autonomous vehicle. The sensor interface generates a plurality of sensor pulses that are each offset in phase relative to a local clock signal by a respective amount. The sensor interface receives sensor data from a sensor apparatus and formats the sensor data, based at least in part on the plurality of sensor pulses, to enable the sensor data to be used for navigating the autonomous vehicle. For example, the sensor interface may add a timestamp to the sensor data indicating a timing of the sensor data in relation to the local clock signal.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NADA MAHYOOB ALQADERI, whose telephone number is (571) 272-2052. The examiner can normally be reached Monday through Friday, 8 AM to 5 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rachid Bendidi, can be reached at (571) 272-4896. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NADA MAHYOOB ALQADERI/
Examiner, Art Unit 3664

/RACHID BENDIDI/
Supervisory Patent Examiner, Art Unit 3664

Prosecution Timeline

Jun 30, 2023: Application Filed
Apr 26, 2025: Non-Final Rejection — §103
Jun 06, 2025: Interview Requested
Jun 18, 2025: Applicant Interview (Telephonic)
Jun 24, 2025: Examiner Interview Summary
Jul 08, 2025: Response Filed
Oct 18, 2025: Final Rejection — §103
Dec 05, 2025: Interview Requested
Dec 19, 2025: Response after Non-Final Action
Jan 22, 2026: Request for Continued Examination
Feb 19, 2026: Response after Non-Final Action
Mar 19, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12576839: METHOD AND SYSTEM OF ROAD DRIVING OPTIMIZATION WITH DECOUPLING OF VEHICLE STATUS AND TRAFFIC FACTORS (2y 5m to grant; granted Mar 17, 2026)
Patent 12570288: METHOD AND APPARATUS FOR MANAGING A VEHICLE PLATOON (2y 5m to grant; granted Mar 10, 2026)
Patent 12570313: VEHICLE CONTROL DEVICE AND METHOD FOR CONTROLLING VEHICLE (2y 5m to grant; granted Mar 10, 2026)
Patent 12565205: AUTOMATIC SPEED CONTROL FOR A VEHICLE (2y 5m to grant; granted Mar 03, 2026)
Patent 12552267: VEHICLE AND VEHICLE MANAGEMENT SYSTEM WITH A PREDICTIVE POWER SYSTEM (2y 5m to grant; granted Feb 17, 2026)
Based on this examiner's 5 most recent grants; studying what changed in these cases may indicate how to get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 99% (+30.8%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 90 resolved cases by this examiner. Grant probability is derived from the career allow rate.
