Prosecution Insights
Last updated: April 19, 2026
Application No. 18/133,752

METHODS FOR CREATING TRAINING DATA FOR DETERMINING VEHICLE FOLLOWING DISTANCE

Status: Non-Final OA (§102)
Filed: Apr 12, 2023
Examiner: GRANT II, JEROME
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Geotab Inc.
OA Round: 3 (Non-Final)
Grant Probability: 65% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 65% (160 granted of 247 resolved; +2.8% vs TC avg)
Interview Lift: +30.6% (strong; allow rate among resolved cases with an interview vs without)
Avg Prosecution: 3y 1m (typical timeline; 21 currently pending)
Total Applications: 268 (career, across all art units)
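These card values are simple ratios over the examiner's resolved cases: 160 granted of 247 resolved is 64.8%, shown as 65%, and the interview lift is the allow-rate gap between cases with and without an interview. Below is a minimal sketch of that computation; the `cases` structure and field names are hypothetical, since the dashboard's actual data model is not public.

```python
def examiner_metrics(cases):
    """cases: list of dicts like {"granted": bool, "had_interview": bool}."""
    resolved = len(cases)
    # 160 granted / 247 resolved = 0.648, displayed as "65%"
    allow_rate = sum(c["granted"] for c in cases) / resolved

    with_iv = [c for c in cases if c["had_interview"]]
    without_iv = [c for c in cases if not c["had_interview"]]
    # Allow-rate gap, reported above as a +30.6 point lift ("+31%" rounded)
    lift = (sum(c["granted"] for c in with_iv) / len(with_iv)
            - sum(c["granted"] for c in without_iv) / len(without_iv))

    return {"allow_rate": allow_rate, "interview_lift": lift}
```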

Statute-Specific Performance

§101: 2.8% (-37.2% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 31.9% (-8.1% vs TC avg)
§112: 17.6% (-22.4% vs TC avg)
Tech Center averages are estimates; the deltas above imply a TC average of roughly 40% for each statute (e.g., §103: 45.0% - 5.0%). Based on career data from 247 resolved cases.

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The following is a response to an RCE received 1-20-2026.

Claim Rejections - 35 USC § 102/103

Claims 1, 3-6 and 8-20 are rejected under 35 U.S.C. 102(a)(1) as anticipated by, or in the alternative under 35 U.S.C. 103 as obvious over, Kim (EP 3855409) (obviousness standard per KSR v. Teleflex, 550 U.S. 398).

With respect to claim 1, Kim teaches predicting a distance between a first and a second vehicle (para. 14) and respective parameter data (referred to as first feature points; see also paras. 20-26 and 34). Kim teaches a vehicle 1 with a camera for photographing a second vehicle located in front of the first vehicle (paras. 14 and 184); a device 10 for simulating, by a processor 2124, the first and second vehicles in their driving positions in accordance with frame data (paras. 20-26); and representing several frames for determining pixel information providing a perspective of the second vehicle (paras. 20-26 and 38). Kim teaches a calculation unit 15 for labeling the distance value between the first and second vehicles, and a computer program causing a computer-readable storage medium (storage unit 110, para. 188) to store multiple image frames and the calculated distance values (paras. 2, 14 and 38).

In the alternative, the Examiner does not concede that the claims are not anticipated; however, the instant specification and the Kim reference both suggest the virtual environment. Paragraphs 13-15 of the instant specification suggest that other parameters can be processed with this image, such as effects of snow, rain and fog, and different resolutions, perhaps to effect blurring (see also para. 90, lines 1-3 and 14-17, regarding actual images). The broad language of the specification states "representing a perspective from a … vehicle." Furthermore, adding fog, snow, rain and blurring, attributes that are not part of the original image, appears also to constitute a virtual environment, since these attributes are computer generated. The Examiner contends that Applicant's use of the term "virtual environment" appears to be defined as camera-generated data being added to, or processed with, certain attributes such as effects of rain, fog, snow and blur. Similarly, Kim teaches creating a virtual environment as in the present invention: Kim uses algorithms and programs for applying such factors as "shift, rotation, brightness and blur" to images acquired by an image acquiring unit 11. While the word "virtual" may not appear in Kim, it is suggested, since the same functions of the present invention are performed by the Kim reference. Moreover, the broad definition of a representation of a perspective, taken by a rear vehicle imaging a front vehicle under certain environmental conditions, appears to satisfy the virtual environment as defined by the present specification. The Examiner contends that it would have been obvious to one of ordinary skill in the art, at the time of the effective filing of the claimed invention, to operate in a virtual environment where images are created by a camera and other visual effects are superimposed, creating a virtual environment.
With respect to claim 3, Kim teaches calculation of distance data between the first and second vehicles (paras. 11, 14 and 20-26); the labeling of the distance is taught with respect to the actual distance value determined by the distance calculation device 15 (see also paras. 21-23). With respect to claim 4, Kim teaches at least one processor 2124 for determining the distance values between the first and second vehicles (paras. 11, 14 and 20-26). With respect to claim 5, Kim teaches a user input to make corrections to render appropriate output (para. 67). With respect to claim 6, Kim teaches autonomous computations via machine learning (para. 66).

With respect to claims 8 and 9, Kim teaches determining whether the first and second vehicles have a labeled distance that is less than a predetermined threshold (paras. 147-148); inherently, distance measurements are also taken when the vehicular distance is greater than the threshold. With respect to claim 10, Kim teaches a camera (para. 172) for processing the one or more images of the perspective taken by the camera (a dash cam); the camera inherently has a resolution. With respect to claim 11, Kim teaches wherein a parameter is a focal length and a predicted width of the vehicle (para. 34). With respect to claims 12 and 13, Kim teaches the claimed features (paras. 12 and 27, regarding blur and distortion as claimed). With respect to claims 14 and 15, Kim teaches the environmental effects as claimed (paras. 13 and 28). With respect to claim 16, Kim teaches taking one image to determine features of a frame (paras. 14 and 17). With respect to claim 17, Kim teaches taking a plurality of images to determine features of a second frame (paras. 14, 15, 16 and 18). With respect to claim 18, Kim teaches detecting the movement of the front vehicle with respect to the rear vehicle (paras. 20-26 and 34), as performed by at least one processor 2124 and the dash cam taught at para. 172. With respect to claims 19 and 20, the first and second vehicles are on a road with one vehicle behind the other; the Kim reference broadly reads on a vehicle that could be offset from the center of the rear vehicle, hence the establishment of the lateral position as claimed is inherent in Kim.

Allowed Claims

Claims 2 and 7 were previously indicated as containing allowable subject matter and have been amended to include the limitations of their base claims.

Examiner's Remarks

In at least paragraph 5 of Applicant's specification, the "virtual image" appears to be defined as a perspective taken by a camera of a rear vehicle imaging a frontal vehicle. Paragraphs 13-15 of the instant specification suggest that other parameters can be processed with this image, such as effects of snow, rain and fog, and different resolutions, perhaps to effect blurring (see also para. 90, lines 1-3 and 14-17, regarding actual images). Applicant argues that the present invention is directed to a virtual environment while Kim is directed to generating photographed images. The broad language of the specification states "representing a perspective from a … vehicle." Furthermore, adding fog, snow, rain and blurring, attributes that are not part of the original image, appears also to constitute a virtual environment, since these attributes are computer generated.
Applicant's use of the term "virtual environment" appears to cover camera-generated data combined with other image effects such as rain, fog, snow and blur. In like manner, at para. 68, Kim teaches creating a virtual environment as in the present invention: Kim uses algorithms and programs for applying such factors as "shift, rotation, brightness and blur" to images acquired by an image acquiring unit 11. Applicant has argued that Kim's photographs are acquired from a photographic device, and appears to argue that the photographed images taken by Kim are therefore not virtual as claimed. However, the broad definition of a representation of a perspective, taken by a rear vehicle imaging a front vehicle under certain environmental conditions, appears to satisfy the virtual environment as defined by the present specification. Furthermore, the broad reading of adding effects to the image, as performed by both the present invention and the Kim reference, appears to create a virtual image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEROME GRANT II, whose telephone number is (571) 272-7463. The examiner can normally be reached M-F, 9:00 a.m. - 5:00 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at 571-272-2976. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center; unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEROME GRANT II/
Primary Examiner, Art Unit 2664
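As background for the claim 11 discussion (a parameter being the camera's focal length and a predicted vehicle width), following distance is conventionally recovered with the pinhole-camera relation distance = focal_length × real_width / pixel_width. The sketch below illustrates that relation only; it is not drawn from Kim or from the application, and all numbers are made up.

```python
def following_distance_m(focal_px: float, real_width_m: float,
                         pixel_width: float) -> float:
    """Pinhole-camera estimate: distance = f * W / w.

    focal_px     -- camera focal length expressed in pixels
    real_width_m -- predicted (assumed) width of the lead vehicle, metres
    pixel_width  -- width of the lead vehicle's bounding box, pixels
    """
    return focal_px * real_width_m / pixel_width

# Illustrative numbers only: a 1.8 m-wide car imaged 90 px wide by a
# camera with a 1400 px focal length sits about 28 m ahead.
print(following_distance_m(1400, 1.8, 90))  # 28.0
```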
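The anticipation dispute turns on whether superimposing computer-generated effects (fog, rain, snow, blur) on captured frames creates a "virtual environment." For orientation only, here is a hedged sketch of that kind of augmentation step using Pillow; the function and its parameters are hypothetical stand-ins and do not reproduce either Kim's image-acquiring-unit processing or the application's method.

```python
from PIL import Image, ImageFilter, ImageEnhance

def augment_frame(frame: Image.Image, blur_radius: float = 2.0,
                  brightness: float = 0.7) -> Image.Image:
    """Superimpose computer-generated effects on a captured frame.

    Gaussian blur and dimming stand in for the fog/rain/snow/blur
    effects the office action discusses; real pipelines typically
    composite weather layers and vary resolution as well.
    """
    out = frame.filter(ImageFilter.GaussianBlur(blur_radius))
    out = ImageEnhance.Brightness(out).enhance(brightness)
    return out

# Each augmented frame keeps its measured following-distance label, so
# one labeled capture can yield several training examples, e.g.:
# frame = Image.open("dashcam_frame.jpg")
# sample = {"image": augment_frame(frame), "distance_m": 28.0}
```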

Prosecution Timeline

Apr 12, 2023
Application Filed
Jul 12, 2025
Non-Final Rejection — §102
Oct 01, 2025
Response Filed
Oct 17, 2025
Final Rejection — §102
Dec 12, 2025
Interview Requested
Dec 22, 2025
Response after Non-Final Action
Dec 29, 2025
Applicant Interview (Telephonic)
Dec 29, 2025
Examiner Interview Summary
Jan 20, 2026
Request for Continued Examination
Jan 27, 2026
Response after Non-Final Action
Feb 12, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572774
NEURAL NETWORK PROCESSOR AND METHOD OF NEURAL NETWORK PROCESSING
2y 5m to grant • Granted Mar 10, 2026
Patent 10269295
ORGANIC LIGHT EMITTING DISPLAY DEVICE AND DRIVING METHOD THEREOF
2y 5m to grant • Granted Apr 23, 2019
Patent 9245189
OBJECT APPEARANCE FREQUENCY ESTIMATING APPARATUS
2y 5m to grant • Granted Jan 26, 2016
Patent 8344909
METHOD AND SYSTEM FOR COLLECTING TRAFFIC DATA, MONITORING TRAFFIC, AND AUTOMATED ENFORCEMENT AT A CENTRALIZED STATION
2y 5m to grant • Granted Jan 01, 2013
Patent 8294567
METHOD AND SYSTEM FOR FIRE DETECTION
2y 5m to grant • Granted Oct 23, 2012
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 95% (+30.6%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 247 resolved cases by this examiner. Grant probability derived from career allow rate.
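The 95% figure is consistent with simple arithmetic on the numbers above: the 64.8% career allow rate plus the +30.6-point interview lift, capped at 100%. A minimal sketch of that assumed model follows; the tool's actual projection method is not stated.

```python
def grant_probability(base_allow_rate: float, interview_lift: float = 0.0) -> float:
    """Naive projection: career allow rate plus any interview lift, capped at 1.0."""
    return min(base_allow_rate + interview_lift, 1.0)

base = 160 / 247                                 # 0.648, displayed as 65%
print(round(grant_probability(base), 2))         # 0.65
print(round(grant_probability(base, 0.306), 2))  # 0.95 with interview
```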
