Prosecution Insights
Last updated: April 19, 2026

Application No. 18/321,550
VISION-BASED SYSTEM WITH THRESHOLDING FOR OBJECT DETECTION

Status: Non-Final OA (§103), Round 5
Filed: May 22, 2023
Examiner: DANG, HUNG Q
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Tesla Inc.

Outlook: Favorable
Grant Probability: 68% (87% with interview)
Expected OA Rounds: 5-6
Time to Grant: 3y 1m
Examiner Intelligence

Career Allow Rate: 68% — above average (1257 granted / 1841 resolved; +10.3% vs TC avg)
Interview Lift: +18.3% for resolved cases with an interview — a strong lift
Typical Timeline: 3y 1m average prosecution; 95 applications currently pending
Career History: 1936 total applications across all art units
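As a sanity check, the headline figures follow from the raw counts above. The Tech Center baseline below is inferred from the stated +10.3% delta, an assumption rather than a published figure:

```python
# Sanity-check of the headline examiner stats. Counts are from the panel
# above; the Tech Center baseline is *inferred* from the stated +10.3%
# delta, not a published figure.

granted, resolved = 1257, 1841

allow_rate = granted / resolved          # career allow rate
tc_avg = allow_rate - 0.103              # implied Tech Center average

print(f"{allow_rate:.1%}")               # → 68.3% (shown as 68%)
print(f"{tc_avg:.1%}")                   # → 58.0%
```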

Statute-Specific Performance

§101: 4.2% (-35.8% vs TC avg)
§103: 54.1% (+14.1% vs TC avg)
§102: 23.6% (-16.4% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)

Deltas are relative to the Tech Center average estimate • Based on career data from 1841 resolved cases
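If the "vs TC avg" deltas are simple differences (an assumption about how this page defines them), the Tech Center baseline can be back-computed from each row, and every statute implies the same figure:

```python
# Back-computing the implied Tech Center baseline from each statute's rate
# and its "vs TC avg" delta, assuming the deltas are simple differences.

rates  = {"§101": 4.2, "§103": 54.1, "§102": 23.6, "§112": 11.6}
deltas = {"§101": -35.8, "§103": 14.1, "§102": -16.4, "§112": -28.4}

implied_tc = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc)  # every statute implies the same 40.0% baseline
```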

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/06/2025 has been entered.

Response to Arguments

Applicant's arguments filed 06/20/2025 have been considered but they are moot in view of a new ground of rejection.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-11, 13-17, 19-20, and 22-25 are rejected under 35 U.S.C. 103 as being unpatentable over Watanabe et al. (US 2019/0005338 A1 – hereinafter Watanabe) and Bohme et al. (US 2018/0052225 A1 – hereinafter Bohme).

Regarding claim 1, Watanabe discloses a method for processing inputs in a vision-only system (Figs. 1-2 – a method for processing image inputs in a vision-only system shown in Fig. 2) for classification of a detected object ([0046] – for classifying objects such as vehicles, pedestrians, and various obstacles, etc.) comprising: obtaining a set of data corresponding to operation of a vehicle, wherein the set of data includes a set of images corresponding to a vision system ([0043]-[0046] – obtaining a set of images corresponding to operation of a vehicle, i.e. vehicle 100 as shown in Fig. 1, the set of data includes a set of images corresponding to a vision system comprising at least a camera such as a stereo camera); processing individual image data from the set of images to determine whether object detection is depicted in the individual image data ([0079]-[0082] – processing individual image data to detect an object); updating object information corresponding to a processing result based on the processing of the individual image data ([0079]-[0082] – updating information that identifies whether there is an object); determining that at least one object attribute of a detected object represented satisfies at least one threshold indicating that the object is a tracked object ([0083]; [0094]-[0096] – determining at least one object attribute of the detected object to determine whether the object is a tracked object); in response to determining that the at least one object attribute satisfies the at least one threshold, assigning or updating at least one object attribute ([0084]; [0096]-[0097] – assigning or updating a size of the object); and classifying the detected object based on the assigned or updated at least one object attribute ([0084]; [0096]-[0097] – classifying the detected object based on comparing the calculated size with preset sizes in a table).

However, in Watanabe the at least one object attribute in the determining step is "inclination or gradient of the object" and the at least one object attribute in the assigning or updating step is "size of the object".
Watanabe also does not disclose updating object information corresponding to a sequence of processing results based on the processing of the individual image data; and determining that at least one object attribute of a detected object, i.e. a size of the object, represented across the sequence of processing results satisfies at least one threshold indicating that the object is a tracked object, wherein the at least one threshold is an attribute-specific persistence threshold applied to the detected object and specified as a total number of consecutive recorded instances of at least one attribute required for the at least one attribute to be assigned or updated to the tracked detected object.

Bohme discloses updating object information corresponding to a sequence of processing results based on the processing of the individual image data ([0063] – updating size of the object corresponding to a sequence of processing results of a number of consecutive recorded instances of object); and determining that at least one object size of a detected object represented across the sequence of processing results satisfies at least one threshold indicating that the object is a tracked object, wherein the at least one threshold is an attribute-specific persistence threshold applied to the detected object and specified as a total number of consecutive recorded instances of at least one size required for the at least one attribute to be assigned or updated to the tracked detected object ([0063] – for the predefined number of consecutive scanning cycles, an attribute-specific persistence threshold, e.g. size consistency, i.e. a predefined minimum size, of an object over a number of consecutive recorded instances).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of using at least one object attribute as size of the object taught by Bohme into the determining step taught by Watanabe to improve the accuracy of detecting a real object to be tracked by taking into consideration consistency of the attribute across processing of a sequence of frames.

Regarding claim 2, see the teachings of Watanabe and Bohme as discussed in claim 1 above, in which Bohme also discloses the sequence of processing results comprises a set of sequential entries based on time, and each entry of the set of sequential entries includes at least an indication of an object detection ([0063] – a size is entered each time an object is detected to make sure that the final result is a true object or not after a number of consecutive object detections). The motivation for incorporating the teachings of Bohme has been discussed in claim 1 above.

Regarding claim 3, see the teachings of Watanabe and Bohme as discussed in claim 2 above, in which Bohme also discloses the at least one object attribute satisfies the at least one threshold comprises determining whether a total number of object detections in the set of sequential entries exceeds a threshold value ([0063] – at least exceeding a predefined number of consecutive recorded instances). The motivation for incorporating the teachings of Bohme has been discussed in claim 1 above.

Regarding claim 4, see the teachings of Watanabe and Bohme as discussed in claim 2 above, in which Bohme also discloses the threshold value is determined based on a level of confidence ([0063] – the predefined number of consecutive scanning cycles having an associated level of confidence; the larger the predefined number is, the higher the level of confidence gets). The motivation for incorporating the teachings of Bohme has been discussed in claim 1 above.
Regarding claim 6, see the teachings of Watanabe and Bohme as discussed in claim 2 above, in which Bohme also discloses determining whether the at least one object attribute satisfies the at least one threshold comprises determining whether a last entry in the set of sequential entries indicates an object detection ([0063] – the last entry among a number of consecutive entries satisfies a detection result that it is a true object). The motivation for incorporating the teachings of Bohme has been discussed in claim 1 above.

Regarding claim 7, Watanabe in view of Bohme also discloses the method of claim 1, wherein the individual image data includes one or more combined images from two or more camera images of the vision system (Fig. 2; [0049] – one or more combined images from a stereo camera image, which provides a combination of either one left image and one right image).

Claim 8 is rejected for the same reason as discussed in claim 1 above in view of Watanabe also disclosing a system (Fig. 2) comprising one or more processors (Fig. 2 – CPU 123) and non-transitory computer storage media (Fig. 2 – RAM or ROM 122) storing instructions that when executed by the one or more processors, cause the processors to perform the recited operations ([0053]-[0054] – the CPU executes computer executable code stored in the RAM), wherein the system is included in an autonomous or semi-autonomous vehicle (Fig. 1; [0047] - included in an at least semi-autonomous vehicle 100).

Claim 9 is rejected for the same reason as discussed in claim 2 above. Claim 10 is rejected for the same reason as discussed in claim 3 above. Claim 11 is rejected for the same reason as discussed in claim 4 above. Claim 13 is rejected for the same reason as discussed in claim 6 above. Claim 14 is rejected for the same reason as discussed in claim 7 above.
Claim 15 is rejected for the same reason as discussed in claim 1 above in view of Watanabe also disclosing non-transitory computer storage media storing instructions that when executed by a system of one or more processors ([0053]-[0054] – a memory storing computer executable code, when executed by a CPU) which are included in an autonomous or semi-autonomous vehicle ([0047]; Fig. 1 - included in a semi-autonomous vehicle 100), cause the system to perform the recited operations (see discussion of claim 1 above).

Claim 16 is rejected for the same reason as discussed in claim 2 above. Claim 17 is rejected for the same reason as discussed in claim 3 above. Claim 19 is rejected for the same reason as discussed in claim 6 above. Claim 20 is rejected for the same reason as discussed in claim 7 above.

Regarding claim 22, see the discussion of Watanabe and Bohme as discussed in claim 1 above. Bohme also discloses the at least one attribute to be assigned to the tracked object is at least one of a position, rotation, velocity, acceleration, or classification ([0063] – at least velocity). The motivation for incorporating the teachings of Bohme into the method has been discussed in claim 1 above.

Regarding claim 23, see the discussion of Watanabe and Bohme as discussed in claim 22 above, in which Bohme also discloses the at least one attribute is a classification ([0063] – real or critical object vs objects caused by disturbances), and wherein classifying the tracked object comprises classifying the tracked object according to the classification ([0084]; [0096]-[0097] – based on classification, e.g. normal object vs. abnormal object, classifying the object accordingly). The motivation for incorporating the teachings of Bohme into the method has been discussed in claim 1 above.

Claim 24 is rejected for the same reason as discussed in claim 23 above. Claim 25 is rejected for the same reason as discussed in claim 23 above.

Claims 5, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Watanabe and Bohme as applied to claims 1-4, 6-11, 13-17, 19-20, and 22-25 above, and further in view of One (US 2022/0185625 A1 – hereinafter One).

Regarding claim 5, see the teachings of Watanabe and Bohme as discussed in claim 3 above. However, Watanabe and Bohme do not disclose the threshold value is dynamically determined based on a fidelity of the set of images. One discloses a threshold value is dynamically determined based on a fidelity of a set of images ([0095] – a fidelity of images as the quality of the image of the object in the images, e.g. whether the object is occluded or clear in the image, a threshold value is determined based on these metrics to accommodate more or fewer misses).

One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teachings of One into the method taught by Watanabe and Bohme to control the accuracy of the detection based on different conditions of the environment.

Claim 12 is rejected for the same reason as discussed in claim 5 above. Claim 18 is rejected for the same reason as discussed in claim 5 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG Q DANG whose telephone number is (571)270-1116. The examiner can normally be reached IFT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thai Q Tran, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HUNG Q DANG/
Primary Examiner, Art Unit 2484
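For readers less familiar with the claimed technique: the "attribute-specific persistence threshold" at the center of the claim 1 rejection (mapped onto Bohme [0063]) amounts to promoting an attribute to a tracked object only after N consecutive detections. A hypothetical sketch, with all names and threshold values invented for illustration and taken from neither reference:

```python
from collections import defaultdict

# Illustrative per-attribute persistence thresholds (invented values):
# an attribute must appear in this many *consecutive* processing results
# before it is assigned to the tracked object.
PERSISTENCE = {"size": 3, "classification": 5}

class ObjectTrack:
    def __init__(self):
        self.consecutive = defaultdict(int)  # attribute -> current run length
        self.attributes = {}                 # attributes promoted to the track

    def update(self, detections):
        """detections: attributes observed in one frame's processing result."""
        for attr, value in detections.items():
            self.consecutive[attr] += 1
            # Promote only once the attribute persists for N consecutive frames
            if self.consecutive[attr] >= PERSISTENCE.get(attr, 1):
                self.attributes[attr] = value
        # A missed frame resets the run: "consecutive" means uninterrupted
        for attr in list(self.consecutive):
            if attr not in detections:
                self.consecutive[attr] = 0

track = ObjectTrack()
for frame in [{"size": 2.1}, {"size": 2.0}, {"size": 2.2}]:
    track.update(frame)
print(track.attributes)  # {'size': 2.2} (assigned only after 3 consecutive hits)
```

The reset-on-miss step is what distinguishes this consecutive-instance model from a simple cumulative detection count, mirroring the distinction the rejection draws between claims 1 and 3.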

Prosecution Timeline

May 22, 2023: Application Filed
Aug 20, 2024: Non-Final Rejection — §103
Nov 25, 2024: Response Filed
Dec 15, 2024: Final Rejection — §103
Jan 07, 2025: Interview Requested
Jan 15, 2025: Examiner Interview Summary
Jan 15, 2025: Applicant Interview (Telephonic)
Feb 17, 2025: Response after Non-Final Action
Mar 05, 2025: Request for Continued Examination
Mar 13, 2025: Response after Non-Final Action
Apr 04, 2025: Non-Final Rejection — §103
May 06, 2025: Interview Requested
May 19, 2025: Applicant Interview (Telephonic)
May 27, 2025: Examiner Interview Summary
Jun 20, 2025: Response Filed
Jul 24, 2025: Final Rejection — §103
Aug 26, 2025: Interview Requested
Sep 02, 2025: Examiner Interview Summary
Sep 02, 2025: Applicant Interview (Telephonic)
Sep 24, 2025: Response after Non-Final Action
Oct 06, 2025: Request for Continued Examination
Oct 10, 2025: Response after Non-Final Action
Mar 11, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594460
MANAGING BLOBS FOR TRACKING OF SPORTS PROJECTILES
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12588818
DETECTION OF A MOVABLE OBJECT WHEN 3D SCANNING A RIGID OBJECT
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12592258
METHOD AND APPARATUS FOR INTERACTIVE VIDEO EDITING PLATFORM TO CREATE OVERLAY VIDEOS TO ENHANCE ENTERTAINMENT VIDEO GAMES WITH EDUCATIONAL CONTENT
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12587693
ARTIFICIALLY INTELLIGENT AD-BREAK PREDICTION
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12574649
ENCODING AND DECODING METHOD, ELECTRONIC DEVICE, COMMUNICATION SYSTEM, AND STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 68%
With Interview: 87% (+18.3%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 1841 resolved cases by this examiner. Grant probability derived from career allow rate.
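A minimal sketch of how the projection figures appear to combine, assuming the interview lift is applied as an additive percentage-point adjustment to the base grant probability (the page does not document its model):

```python
# How the projection figures appear to combine. The additive
# percentage-point model is an assumption about this page's math,
# not a documented formula.

base_pct = 68.3          # career allow rate (1257 / 1841)
lift_pct = 18.3          # interview lift, in percentage points

with_interview = min(base_pct + lift_pct, 100.0)  # cap at 100%
print(round(base_pct), round(with_interview))     # 68 87
```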
