Prosecution Insights
Last updated: April 19, 2026
Application No. 18/684,613

DETECTED OBJECT PATH PREDICTION FOR VISION-BASED SYSTEMS

Non-Final OA: §102, §112
Filed: Feb 16, 2024
Examiner: CARTER, AARON W
Art Unit: 2661
Tech Center: 2600 — Communications
Assignee: Tesla Inc.
OA Round: 1 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 85% (866 granted / 1017 resolved), +23.2% vs TC avg (above average)
Interview Lift: +8.3% (moderate), allow rate among resolved cases with an interview vs. without
Typical Timeline: 3y 0m average prosecution; 17 applications currently pending
Career History: 1034 total applications across all art units
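The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of how such numbers could be derived, assuming a hypothetical list of per-case records with `resolved`, `granted`, and `had_interview` flags (the record format and field names are illustrative, not the tool's actual schema):

```python
# Sketch: deriving allow rate and interview lift from per-case records.
# The record structure is hypothetical; only the arithmetic is shown.

def allow_rate(cases):
    """Share of resolved cases that were granted, as a percentage."""
    resolved = [c for c in cases if c["resolved"]]
    granted = sum(1 for c in resolved if c["granted"])
    return 100.0 * granted / len(resolved)

def interview_lift(cases):
    """Allow rate of interviewed cases minus that of non-interviewed ones."""
    with_iv = [c for c in cases if c["resolved"] and c["had_interview"]]
    without_iv = [c for c in cases if c["resolved"] and not c["had_interview"]]
    return allow_rate(with_iv) - allow_rate(without_iv)

# The page reports 866 granted out of 1017 resolved:
print(round(100.0 * 866 / 1017, 1))  # 85.2, displayed as 85%
```

The +8.3% interview lift shown above is consistent with this kind of with-versus-without comparison, though the tool's exact methodology is not documented on the page.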

Statute-Specific Performance

§101: 10.1% (-29.9% vs TC avg)
§102: 30.2% (-9.8% vs TC avg)
§103: 28.1% (-11.9% vs TC avg)
§112: 19.4% (-20.6% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 1017 resolved cases
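A quick consistency check on the table: each examiner rate minus its stated delta recovers the Tech Center average estimate, and all four statutes imply the same value, suggesting the tool compares against a single TC-average figure rather than a per-statute one. The labels and numbers come straight from the table above:

```python
# Recover the implied Tech Center average from each statute's rate and delta,
# using the values shown in the Statute-Specific Performance table.
rates = {
    "§101": (10.1, -29.9),
    "§102": (30.2, -9.8),
    "§103": (28.1, -11.9),
    "§112": (19.4, -20.6),
}

implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% TC average
```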

Office Action

Rejections: §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement filed 12/20/2024 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been considered except where lined through. A copy of the document labeled "C4" under the "NON PATENT LITERATURE DOCUMENTS" section has not been provided.

Claim Objections

Claim 1 is objected to because of the following informalities: in line 16, "store the process plurality of predicted paths" appears to be a reference to the previous limitation, and it would appear that the term "process" here should be changed to "processed". Appropriate correction or further explanation is required.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 10-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 10 recites the limitation "storing the process plurality of predicted paths" in line 11. There is insufficient antecedent basis for this limitation in the claim. Specifically, it is unclear what is meant by the "process" portion of the limitation. Claims 11-17 are rejected by virtue of their dependency upon rejected claim 10.

Claim 18 recites the limitation "wherein the obtained first ground truth label data" in line 3. There is insufficient antecedent basis for this limitation in the claim. Specifically, the "first" label is not used previously in the claim. Claim 18 also recites the limitation "storing the process plurality of predicted paths" in line 8. There is insufficient antecedent basis for this limitation in the claim. Specifically, it is unclear what is meant by the "process" portion of the limitation. Claims 19-21 are rejected by virtue of their dependency upon rejected claim 18.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 2020/0174481 to Van Heukelom et al. ("Van Heukelom") (from the IDS filed 4/15/24, Citation No. "A438").

Regarding claim 1, Van Heukelom discloses a system for managing vision systems in vehicles, the system comprising: one or more computing systems including processing devices and memory, that execute computer-executable instructions (Fig. 7; paragraphs 75, 113 and 114), for implementing a vision system processing component (Fig. 8, paragraph 118) operative to: obtain first ground truth label data associated with collected vision data from one or more vision systems, wherein the obtained first ground truth label data corresponds to attributes of travel surfaces including at least one of road edges ground truth labels, lane line ground truth labels, or road markings (Fig. 8, element 806; paragraphs 12, 31-32 and 121, wherein lane lines, lane dividers (i.e. road edges) and other road markings (e.g. crosswalks) are detected/labeled in captured images (i.e. vision data)); obtain second ground truth label data associated with collected vision data from one or more vision systems, wherein the obtained second ground truth label data corresponds to attributes of one or more detected dynamic objects (Fig. 8, element 806; paragraphs 12, 30, 32 and 121, wherein attributes associated with vehicles and pedestrians (i.e. dynamic objects) are detected/labeled in captured images (i.e. vision data)); process the obtained first and second ground truth label data associated with the collected vision data to form a plurality of predicted paths of travel, wherein each individual predicted path of travel is associated with a confidence value (Fig. 1; Fig. 8, element 806; paragraphs 13-16, 31-35 and 121, wherein the labeled/detected data is used to predict probabilities (i.e. confidence values) associated with possible paths of travel); process the plurality of predicted paths of travel based on at least one additional ground truth label (Fig. 1; Fig. 8, element 808; paragraphs 38 and 122, wherein an overlapping region (130) and region probability are determined based on a future region of a vehicle (112*) (i.e. additional ground truth label data) and prediction probabilities (124) (i.e. predicted paths of travel), which corresponds to processing the paths based on at least one additional label); and store the process plurality of predicted paths of travel and associated confidence value (Fig. 8, elements 812-820; paragraphs 38, 39, 124-129, wherein the overlapping regions and probabilities, comprising predicted paths and confidence values, are stored and further processed to, for example, determine collision risk).

Regarding claim 2, Van Heukelom discloses the system as recited in Claim 1, wherein the vision system processing component processes the obtained first and second ground truth label data associated with the collected vision data to form a plurality of predicted paths of travel based on selecting potential paths of travel exceeding a minimal confidence value threshold (Fig. 8, elements 812-818; paragraphs 124-129, wherein paths associated with overlapping regions exceeding a minimal confidence/probability threshold value are selected for further processing).

Regarding claim 3, Van Heukelom discloses the system as recited in Claim 1, wherein the first and second ground truth label data corresponds to one or more objects detected within a horizon of the captured video data (Fig. 1, element 104; paragraph 27, wherein an image/video captured by the vehicle captures the first and second label/detected data within a horizon perspective).

Regarding claim 4, Van Heukelom discloses the system as recited in Claim 3, wherein the first and second ground truth label data corresponds to one or more objects detected beyond a current defined location of the vehicle (Fig. 1, element 104; paragraph 27, wherein an image/video captured by the vehicle captures the first and second label/detected data surrounding/beyond the vehicle).

Regarding claim 5, Van Heukelom discloses the system as recited in Claim 1, wherein the attributes of one or more detected dynamic objects corresponds to at least one of yaw, velocity or acceleration of the dynamic object (paragraphs 12, 13, 77 and 82, wherein yaw, velocity and acceleration of dynamic objects (e.g. pedestrian, animal, other vehicles, etc.) are detected).

Regarding claim 6, Van Heukelom discloses the system as recited in Claim 1, wherein the vision system processing component processes the plurality of predicted paths of travel based on at least one additional ground truth label by identifying at least one static object that may interfere with a predicted path of travel (Fig. 1; Fig. 8, element 808; paragraphs 12, 38 and 122, wherein static objects (e.g. lane lines, lane dividers, crosswalks) correspond to first or additional labels that are identified and may interfere with the predicted path associated with the overlapping regions).

Regarding claim 7, Van Heukelom discloses the system as recited in Claim 1, wherein a sum of confidence values associated with two or more of the plurality of predicted paths of travel exceeds 100% (paragraph 35, wherein the path prediction probability distribution adds up to 1, corresponding to 100%).

Regarding claim 8, Van Heukelom discloses the system as recited in Claim 1, wherein a sum of confidence values associated with the plurality of predicted paths of travel does not exceed 100% (paragraph 35, wherein the path prediction probability distribution adds up to 1, corresponding to 100%).

Regarding claim 9, Van Heukelom discloses the system as recited in Claim 1, wherein the vision system processing component processes the plurality of predicted paths of travel based on a modeled feasibility cone for a detected dynamic object (Fig. 3 and paragraphs 51-52, wherein the heat map of prediction probabilities corresponds to a "modeled feasibility cone" for the dynamic object/vehicle).

Regarding claim 10, Van Heukelom discloses a method for managing vision systems in vehicles, the system comprising: obtaining first ground truth label data associated with collected vision data from one or more vision systems, wherein the obtained first ground truth label data corresponds to attributes of travel surfaces (Fig. 8, element 806; paragraphs 12, 31-32 and 121, wherein attributes of the travel surface like lane lines, lane dividers (i.e. road edges) and other road markings (e.g. crosswalks) are detected/labeled in captured images (i.e. vision data)); obtaining second ground truth label data associated with collected vision data from one or more vision systems, wherein the obtained second ground truth label data corresponds to attributes of one or more detected dynamic objects (Fig. 8, element 806; paragraphs 12, 30, 32 and 121, wherein attributes associated with vehicles and pedestrians (i.e. dynamic objects) are detected/labeled in captured images (i.e. vision data)); processing the obtained first and second ground truth label data associated with the collected vision data to form a plurality of predicted paths of travel, wherein each individual predicted path of travel is associated with a confidence value (Fig. 1; Fig. 8, element 806; paragraphs 13-16, 31-35 and 121, wherein the labeled/detected data is used to predict probabilities (i.e. confidence values) associated with possible paths of travel); and storing the process plurality of predicted paths of travel and associated confidence value (Fig. 8, elements 812-820; paragraphs 38, 39, 124-129, wherein the overlapping regions and probabilities, comprising predicted paths and confidence values, are stored and further processed to, for example, determine collision risk).

Regarding claims 11, 12, 14, 16 and 17, please refer to the rejections of claims 2, 3, 5, 6 and 9, respectively, above.

Regarding claim 13, Van Heukelom discloses the method as recited in Claim 10, wherein the obtained first ground truth label data corresponds to attributes of travel surfaces including at least one of road edges ground truth labels, lane line ground truth labels, or road markings (Fig. 8, element 806; paragraphs 12, 31-32 and 121, wherein attributes of the travel surface like lane lines, lane dividers (i.e. road edges) and other road markings (e.g. crosswalks) are detected/labeled in captured images (i.e. vision data)).

Regarding claim 15, Van Heukelom discloses the method as recited in Claim 10, further comprising processing the plurality of predicted paths of travel based on at least one additional ground truth label (Fig. 1; Fig. 8, element 808; paragraphs 38 and 122, wherein an overlapping region (130) and region probability are determined based on a future region of a vehicle (112*) (i.e. additional ground truth label data) and prediction probabilities (124) (i.e. predicted paths of travel), which corresponds to processing the paths based on at least one additional label).

Regarding claim 18, Van Heukelom discloses a method for managing vision systems in vehicles, the system comprising: obtaining ground truth label data associated with collected vision data from one or more vision systems, wherein the obtained first ground truth label data corresponds to attributes of travel surfaces and one or more detected dynamic objects (Fig. 8, element 806; paragraphs 12, 30-32 and 121, wherein attributes of the travel surface like lane lines, lane dividers (i.e. road edges) and other road markings (e.g. crosswalks) as well as vehicles and pedestrians (i.e. dynamic objects) are detected/labeled in captured images (i.e. vision data)); generating a plurality of predicted paths of travel based on the obtained ground truth label data associated with the collected vision data, wherein each individual predicted path of travel is associated with a confidence value (Fig. 1; Fig. 8, element 806; paragraphs 13-16, 31-35 and 121, wherein the labeled/detected data is used to predict probabilities (i.e. confidence values) associated with possible paths of travel); and storing the process plurality of predicted paths of travel and associated confidence value (Fig. 8, elements 812-820; paragraphs 38, 39, 124-129, wherein the overlapping regions and probabilities, comprising predicted paths and confidence values, are stored and further processed to, for example, determine collision risk).

Regarding claims 19-21, please refer to the rejections of claims 2, 13 and 5, respectively, above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See attached PTO-892.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON W CARTER, whose telephone number is (571) 272-7445. The examiner can normally be reached 8am - 5pm (Mon - Fri).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached at (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON W CARTER/
Primary Examiner, Art Unit 2661
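For orientation, the data flow recited in claim 1 (and mapped onto Van Heukelom above) can be sketched as a few processing steps: obtain surface and dynamic-object labels, form predicted paths with confidence values, refine them against additional labels, and store the result. This is purely an illustrative reading of the claim language, not code from the application or the reference; all names, the dummy predictor, and the confidence threshold (echoing the claim-2 limitation) are invented for the sketch:

```python
# Illustrative sketch of the claim-1 data flow: label data in, stored
# predicted paths with confidence values out. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class PredictedPath:
    waypoints: list        # e.g. [(x, y), ...] positions along the path
    confidence: float      # probability-like confidence value

def form_predicted_paths(surface_labels, dynamic_object_labels):
    """Combine travel-surface and dynamic-object labels into candidate paths.
    Stub: a real system would run a learned predictor here."""
    # One dummy straight-ahead candidate per labeled dynamic object.
    return [PredictedPath(waypoints=[(0, 0), (0, 10)], confidence=0.6)
            for _ in dynamic_object_labels]

def process_paths(paths, additional_labels, min_confidence=0.5):
    """Refine paths against additional labels; drop low-confidence candidates
    (the threshold mirrors the minimal-confidence limitation of claim 2)."""
    return [p for p in paths if p.confidence >= min_confidence]

def run_pipeline(surface_labels, object_labels, additional_labels):
    paths = form_predicted_paths(surface_labels, object_labels)
    processed = process_paths(paths, additional_labels)
    stored = list(processed)  # stand-in for storing paths and confidences
    return stored
```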

Prosecution Timeline

Feb 16, 2024: Application Filed
Feb 06, 2026: Non-Final Rejection under §102 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597229
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586177
DAMAGE INFORMATION PROCESSING DEVICE, DAMAGE INFORMATION PROCESSING METHOD, AND PROGRAM
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586199
DIFFUSION-BASED OPEN-VOCABULARY SEGMENTATION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586278
AI-DRIVEN PET RECONSTRUCTION FROM HISTOIMAGE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579636
IMAGE PROCESSING DEVICE, PRINTING SYSTEM, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 94% (+8.3%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 1017 resolved cases by this examiner. Grant probability derived from career allow rate.
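The projection figures compose the same underlying statistics: the base grant probability is the career allow rate, and the with-interview figure layers the interview lift on top. A sketch of that composition, with the caveat that the additive model is an assumption; the page does not show the tool's exact formula:

```python
# Sketch: composing the projection figures from the examiner statistics above.
# Assumes with-interview probability = base + lift (an assumption, not the
# tool's documented formula).
granted, resolved = 866, 1017
interview_lift = 8.3  # percentage points, from the Examiner Intelligence panel

base = 100.0 * granted / resolved       # about 85.2, displayed as 85%
with_interview = base + interview_lift  # about 93.5 under the additive model

print(round(base, 1), round(with_interview, 1))
```

The additive estimate lands just under the displayed 94%, so the page presumably derives the with-interview figure directly from the interviewed-case subset (or rounds differently) rather than adding the lift to the base.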
