Prosecution Insights
Last updated: April 19, 2026
Application No. 18/815,118

AUTONOMOUS DRIVING SYSTEM

Final Rejection — §103, §112
Filed: Aug 26, 2024
Examiner: DANG, TRANG THANH
Art Unit: 3656
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 75%

Examiner Intelligence

Grants 44% of resolved cases.

Career Allow Rate: 44% (16 granted / 36 resolved; -7.6% vs TC avg)
Interview Lift: +30.7% (strong), measured over resolved cases with interview
Typical Timeline: 3y 3m average prosecution; 24 applications currently pending
Career History: 60 total applications across all art units
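As a quick check, the headline figures follow from the raw counts above: 16/36 ≈ 44.4%, shown as 44%. A minimal sketch reproducing them; note the 75% with-interview figure is taken from the projections below, and the without-interview rate is inferred from the stated +30.7 point lift, not reported directly:

```python
# Reproduce the examiner statistics shown above from the raw counts.
granted, resolved = 16, 36
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")          # 44.4% -> shown as 44%

# The interview lift is reported as +30.7 percentage points; combined with
# the 75% with-interview grant probability, the without-interview rate
# follows by subtraction (an inference, not a reported figure).
with_interview = 0.75
without_interview = with_interview - 0.307
print(f"Without interview: {without_interview:.1%}")   # ~44.3%
```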

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 39.8% (-0.2% vs TC avg)
§102: 21.0% (-19.0% vs TC avg)
§112: 28.7% (-11.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 36 resolved cases

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is a Final Office Action on the merits.

Response to Amendment/Arguments

The amendment filed 01/14/2026 has been entered. Claim 4 is cancelled; claims 5-12 are newly added; and claims 1-3 are amended. Therefore, claims 1-3 and 5-12 are currently pending in the instant application. Applicant's arguments filed 01/14/2026 have been fully considered, as discussed below.

Regarding the rejections of the claims under 35 USC 103: Applicant's arguments with respect to the claims (see pages 7-8 of Remarks) have been considered but are moot in view of the new grounds of rejection provided below, based on newly found prior art, which was necessitated by Applicant's amendments changing the scope of the claims.

Regarding the claim interpretation under 35 USC 112(f): Applicant's arguments with respect to the claims (see pages 6-7 of Remarks) have been considered and are persuasive in view of the amendments. Therefore, the claim interpretation has been withdrawn.

Regarding the rejections of the claims under 35 USC 112(b): Applicant's arguments with respect to the claims (see page 7 of Remarks) have been considered and are persuasive in view of the amendments. Therefore, the rejections of the claims under 35 USC 112(b) have been withdrawn. However, new issues under this section are discussed below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-3 and 5-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 1, the claim recites the limitation "the machine learning model". There is insufficient antecedent basis for this limitation in the claim, and it is unclear what machine learning model is being referred to. Therefore, this renders the claim indefinite. Appropriate correction and/or clarification is needed.

Claims 2-3 and 5-12 are rejected for depending from an indefinite claim.

Regarding claim 9, the claim recites the limitation "wherein the user interface is operated a risk candidate scene in which the occupant feels there is a risk". It is unclear what the metes and bounds of "the occupant feels there is a risk" are, and further how a risk candidate scene is to be identified based on the occupant's feelings. Therefore, this renders the claim indefinite. Appropriate correction and/or clarification is needed.

Claim 10 is rejected for depending from an indefinite claim.

Regarding claim 11, the claim recites the limitation "the scene". There is insufficient antecedent basis for this limitation in the claim, and it is unclear what scene is being referred to.
Therefore, this renders the claim indefinite. Appropriate correction and/or clarification is needed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, and 5-9 are rejected under 35 U.S.C. 103 as being unpatentable over Sato (US 11182986 B2), in view of Nagasaka et al. (US 20210398159 A1, hereinafter "Nagasaka"), and further in view of Ri (KR20230098976A).

Regarding claim 1, Sato discloses an autonomous driving system (Sato, see at least Figs. 3, 5, 6, 8, an autonomous vehicle 111/303) comprising:

an external sensor configured to detect an external environment of a vehicle (Sato, see at least Figs. 3, 4, col. 9, lines 16-40, "The sensing system, in various embodiments, may additionally or alternatively include one or more sensors 220 configured to collect information regarding the dynamic environment in which autonomous vehicle 200 is operated");

an internal sensor configured to detect a traveling state of the vehicle (Sato, see at least Figs. 3, 4, col. 8, lines 36-45, "one or more sensors 220 configured to collect information regarding operational aspects of autonomous vehicle 200, such as speed, vehicle speed, vehicle acceleration, braking force, braking deceleration, and the like");

an electronic control unit (Sato, see at least Fig. 6, col. 12, lines 18-37, processor(s) 133/computer 131) configured to: receive a detection result of the external sensor and a detection result of the internal sensor as an input value (Sato, see at least Fig. 6, col. 12, lines 18-37, "The one or more sensors 137 may include a visible light camera, an infrared camera, a LIDAR, RADAR, or sonar system, and/or peripheral sensors, which are configured to provide sensor input to the computer 131. A module of the firmware (or software) 127 executed in the processor(s) 133 applies the sensor input to an ANN defined by the model 119 to generate an output that identifies or classifies an event or object captured in the sensor input, such as an image or video clip. Data from this identification and/or classification can be included in data collected by a memory device (e.g., memory device 180) and sent from a vehicle to server 101 as discussed above"); output an instruction value of autonomous driving (Sato, see at least col. 12, lines 26-37, lines 66-67, col. 13, lines 1-2, "In one example, the outputs of the ANN model 119 can be used to control (e.g., 141, 143, 145) the acceleration of a vehicle (e.g., 111), the speed of the vehicle 111, and/or the direction of the vehicle 111, during autonomous driving"); and perform the autonomous driving of the vehicle based on the instruction value output by the machine learning model (Sato, see at least Figs. 3, 5, 6, 8, col. 12, lines 8-16, lines 66-67, col. 13, lines 1-2, "In one example, the outputs of the ANN model 119 can be used to control (e.g., 141, 143, 145) the acceleration of a vehicle (e.g., 111), the speed of the vehicle 111, and/or the direction of the vehicle 111, during autonomous driving");

a user interface configured to receive a user operation (Sato, see at least Fig. 6, col. 12, lines 11-12, an infotainment system 149; Fig. 8, display device(s) 308);

a communication device configured to transmit information to a device outside the vehicle (Sato, see at least Figs. 5, 6, 8, computer 131/307 connected to the infotainment 139/display device(s) 308 and communication interface 139/305); and

a second electronic control unit connected to the user interface and the communication device (Sato, see at least Fig. 6, col. 12, lines 8-17, processor(s) 133 coupled to the infotainment 149 and communication device 139), wherein the second electronic control unit extracts the input value and the instruction value of the electronic control unit (Sato, see at least Figs. 4, 5, col. 11, lines 13-20, lines 33-42, extract a data event 160 such as sensor data 103 and data output from the learning machine), and causes the communication device to transmit the extracted input value and the extracted instruction value of the electronic control unit to the device outside the vehicle (Sato, see at least Figs. 4, 5, 8, col. 3, lines 57-65, col. 11, lines 13-20, lines 33-42, cause the communication interface to transmit the event data 160 to the server 101/301).

Sato fails to explicitly teach: a user interface configured to receive a user operation related to information transmission from an occupant of the vehicle; wherein the second electronic control unit extracts the input value and the instruction value of the electronic control unit in response to receiving the user operation related to the information transmission via the user interface; and wherein the second electronic control unit is configured to extract data going back by a predetermined time from a timing at which the user operation related to the information transmission is received via the user interface, including a specific scene matching a predetermined condition from past data.

Nagasaka teaches an autonomous driving system comprising a user interface 5 and a control unit 4, wherein the user interface 5 is configured to receive a user operation related to information transmission from a user of the vehicle Ve, and wherein the control unit 4 is configured to extract data in response to receiving the user operation related to the information transmission (Nagasaka, see at least Figs. 2, 3, 5, par. [0037, 0041, 0055-0059, 0061]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the apparatus of Sato to include a user interface configured to receive a user operation related to information transmission from an occupant of the vehicle, wherein the control unit extracts data in response to receiving the user operation related to the information transmission via the user interface, as taught by Nagasaka. This modification would allow data to be collected from the vehicle upon reception of an approval of the user (Nagasaka, see at least par. [0006]).

[Figure: Ri, Fig. 4]

Ri teaches a processor 120 configured to extract data of a machine learning model 130 based on a user's input for a time range, e.g., a specific period before and after the time when the user's input is received, including a specific condition regarding the vehicle control status and/or a corrected vehicle control method stored together with sensor data, e.g., sudden braking caused by misidentification of obstacles, instability in speed control during rain, accurately recognizing obstacles to brake quickly, or reducing speed further due to rain (Ri, see at least Fig. 1, par. [0039, 0031-0032, 0044-0045]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Sato and Nagasaka such that the second electronic control unit is configured to extract data going back by a predetermined time from a timing at which the user operation related to the information transmission is received via the user interface, including a specific scene matching a predetermined condition from past data, as taught by Ri. This modification would allow data to be extracted from the vehicle for improving the machine learning model upon reception of an input of the user (Ri, see at least par. [0028]).
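To make the mechanism the combination is mapped against more concrete — a second control unit buffering (input value, instruction value) pairs and, on a user operation, extracting the window going back a predetermined time, optionally filtered to scenes matching a predetermined condition — here is a minimal sketch, assuming a simple in-memory ring buffer. All names (`Sample`, `DataRecorder`, `is_specific_scene`) are hypothetical illustrations, not taken from the claims or the cited references:

```python
from collections import deque
from dataclasses import dataclass
import time

@dataclass
class Sample:
    t: float            # capture timestamp (seconds)
    input_value: dict   # external/internal sensor detection results
    instruction: dict   # autonomous-driving instruction from the ML model

class DataRecorder:
    """Buffers recent samples; extracts a look-back window on user request."""

    def __init__(self, lookback_s: float = 30.0, maxlen: int = 10_000):
        self.lookback_s = lookback_s                   # "predetermined time"
        self.buffer: deque[Sample] = deque(maxlen=maxlen)

    def record(self, input_value: dict, instruction: dict) -> None:
        self.buffer.append(Sample(time.time(), input_value, instruction))

    def on_user_operation(self, is_specific_scene=None) -> list[Sample]:
        """Extract data going back lookback_s from the operation timing."""
        cutoff = time.time() - self.lookback_s
        window = [s for s in self.buffer if s.t >= cutoff]
        if is_specific_scene is not None:
            # keep only samples from scenes matching the predetermined condition
            window = [s for s in window if is_specific_scene(s)]
        return window
```

In this reading, the extracted window would then be handed to the communication device for transmission to the off-vehicle server, mirroring the transmission step mapped to Sato above.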
Regarding claim 3, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claim 1. The combination further teaches wherein the second electronic control unit acquires a voice of the occupant via the user interface (Nagasaka, see at least par. [0045-0046]), and determines a time range of data for extracting the input value and the instruction value of the electronic control unit based on a voice recognition result (Ri, see at least Fig. 1, par. [0039, 0031-0032, 0044-0045], extracting data of a machine learning model 130 based on a user's input for a time range, e.g., a specific period before and after the time when the user's input is received).

Regarding claim 5, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claim 1. The combination further teaches wherein the user interface includes a switch or a button (Nagasaka, see at least Fig. 2, par. [0045], "the HMI 5 may be further provided with [...] an operating switch and an operating button").

Regarding claim 6, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claim 1. The combination further teaches wherein the user interface includes an icon displayed on a display (Nagasaka, see at least Fig. 2, par. [0045-0046], "Whereas, the information transmitted from the control unit 4 is conveyed to the user via a text message or image indicated on the display 5a, or a voice or sound message"; Ri, see at least par. [0029], "The training data acquisition command can be implemented with simple operations. For example, it can be implemented by multi-tapping the touch screen or touchpad of the mobile terminal (200) or by touching a specific graphic button").

Regarding claim 7, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claims 1 and 5. The combination further teaches wherein the user interface includes an icon displayed on a display (Nagasaka, see at least Fig. 2, par. [0045-0046], "Whereas, the information transmitted from the control unit 4 is conveyed to the user via a text message or image indicated on the display 5a, or a voice or sound message"; Ri, see at least par. [0029], "The training data acquisition command can be implemented with simple operations. For example, it can be implemented by multi-tapping the touch screen or touchpad of the mobile terminal (200) or by touching a specific graphic button").

Regarding claim 8, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claim 1. The combination further teaches wherein the user interface includes a microphone (Nagasaka, see at least Fig. 2, par. [0045-0046], "the HMI 5 may be further provided with [...] a voice-entry system").

Regarding claim 9, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claim 1. The combination further teaches wherein the user interface is operated a risk candidate scene in which the occupant feels there is a risk (Ri, see at least par. [0044], "For example, it is possible to receive additional input from the user regarding the vehicle control status (such as sudden braking caused by misidentification of obstacles or instability in speed control during rain), and to store the input evaluation content together with sensor data").

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Sato (US 11182986 B2), in view of Nagasaka et al. (US 20210398159 A1, hereinafter "Nagasaka"), in view of Ri (KR20230098976A), as applied to claim 1 above, and further in view of Shin et al. (US 20210149397 A1, hereinafter "Shin").

Regarding claim 2, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claim 1. The combination fails to explicitly teach wherein the second electronic control unit manages authority of the occupant related to the information transmission, and extracts the input value and the instruction value of the electronic control unit according to the user operation of the occupant to which the authority is given.

Shin teaches an autonomous vehicle 30b/200/100 (Shin, see at least Figs. 1-3, par. [0053, 0058, 0070-0075]) comprising a processor 190 configured to recognize an occupant of the autonomous vehicle 30b/200/100 and determine authority of the occupant for all functions that the occupant is capable of requesting (Shin, see at least Fig. 10, par. [0053, 0066, 0154, 0156-0167, 0170, 0174, 0177-0178]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Sato, Nagasaka, and Ri such that the control unit manages authority of the occupant related to the information transmission, and extracts the input value and the instruction value of the machine learning model according to the user operation of the occupant to which the authority is given, as taught by Shin. This modification would allow vehicle control authority to be given only to an authenticated occupant, thereby preventing the control authority from being provided to an occupant who is not responsible for the control (Shin, see at least par. [0017-0018]).

Regarding claim 12, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claim 1. The combination fails to explicitly teach wherein the second electronic control unit is configured to determine an authority of the occupant specified by an in-vehicle camera and biological information with reference to a list of occupants having permission authority stored in a storage device.

Shin teaches an autonomous vehicle 30b/200/100 (Shin, see at least Figs. 1-3, par. [0053, 0058, 0070-0075]) comprising a processor 190 configured to recognize an occupant by monitoring the interior of the vehicle through the image of the occupant captured by the camera 210 mounted in the interior of the vehicle 200 (Shin, see at least Fig. 12, par. [0179-0180]), and to perform authentication as to whether the occupant is registered through recognition of data such as the fingerprint and face of the occupant (Shin, see at least Fig. 12, par. [0157, 0165, 0181]).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Sato, Nagasaka, and Ri such that the second electronic control unit is configured to determine an authority of the occupant specified by an in-vehicle camera and biological information with reference to a list of occupants having permission authority stored in a storage device, as taught by Shin. This modification would allow vehicle control authority to be given only to an authenticated occupant, thereby preventing the control authority from being provided to an occupant who is not responsible for the control (Shin, see at least par. [0017-0018]).

Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Sato (US 11182986 B2), in view of Nagasaka et al. (US 20210398159 A1, hereinafter "Nagasaka"), in view of Ri (KR20230098976A), as applied to claims 1 and 9 above, and further in view of Nix et al. (US 20240144748 A, hereinafter "Nix").

Regarding claim 10, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claims 1 and 9. The combination fails to explicitly teach wherein the risk candidate scene includes driver intervention, a steering angle equal to or greater than a first threshold value, or a distance between the vehicle and a surrounding lane marking equal to or less than a second threshold value.

Nix teaches determining that a vehicle failure event occurred based at least in part on data indicative of a human intervention event in which a human driver assumed control of the autonomous vehicle (Nix, see at least col. 3, lines 12-30, cols. 4-5).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Sato, Nagasaka, and Ri such that the risk candidate scene includes driver intervention, as taught by Nix. This modification would allow human feedback to be actively prompted whenever a human intervention event occurs, resulting in increased collection of human feedback descriptive of specific instances in which human intervention was required or otherwise performed (Nix, see at least cols. 4-5).

Regarding claim 11, the combination of Sato, Nagasaka, and Ri teaches all the limitations of claim 1. The combination fails to explicitly teach wherein the second electronic control unit is configured to set the scene in which a lane departure risk occurs and the scene in which a collision risk between the vehicle and another object occurs as the specific scene.

Nix teaches a controller 102 configured to provide a visualization of the autonomous vehicle during a vehicle failure event based on lane change failure events (Nix, see at least Figs. 1, 2, cols. 9-10, col. 3, lines 43-57, col. 8, lines 19-29) and/or on a collision risk between the vehicle and another object (Nix, see at least col. 3, lines 12-30, col. 8, lines 19-29).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Sato, Nagasaka, and Ri such that the second electronic control unit is configured to set the scene in which a lane departure risk occurs and the scene in which a collision risk between the vehicle and another object occurs as the specific scene, as taught by Nix. This modification would provide a visualization of the autonomous vehicle at the time of the detected vehicle failure event within the user interface, enabling the human to visually review the vehicle failure event while providing the feedback (Nix, see at least col. 8, lines 19-29).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRANG DANG, whose telephone number is (703) 756-1049. The examiner can normally be reached Monday-Friday, 8:00-5:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Khoi Tran, can be reached at (571) 272-6919. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TRANG DANG/
Examiner, Art Unit 3656

/KHOI H TRAN/
Supervisory Patent Examiner, Art Unit 3656
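For the risk candidate scene conditions recited in claims 10-11 (driver intervention, a steering angle at or above a first threshold, or a lane-marking distance at or below a second threshold), a minimal sketch of such a scene test follows. The threshold values are arbitrary placeholders, since the claims recite only unnamed "first" and "second" thresholds, and the names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    driver_intervened: bool       # human driver assumed control
    steering_angle_deg: float     # signed steering angle
    lane_marking_dist_m: float    # distance to nearest surrounding lane marking

# Illustrative values only; the claims do not fix numeric thresholds.
STEERING_THRESHOLD_DEG = 45.0   # "first threshold value"
LANE_DIST_THRESHOLD_M = 0.2     # "second threshold value"

def is_risk_candidate_scene(s: VehicleState) -> bool:
    """Claim-10 style test: any one condition marks a risk candidate scene."""
    return (
        s.driver_intervened
        or abs(s.steering_angle_deg) >= STEERING_THRESHOLD_DEG
        or s.lane_marking_dist_m <= LANE_DIST_THRESHOLD_M
    )
```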

Prosecution Timeline

Aug 26, 2024
Application Filed
Nov 01, 2025
Non-Final Rejection — §103, §112
Jan 08, 2026
Examiner Interview Summary
Jan 08, 2026
Applicant Interview (Telephonic)
Jan 14, 2026
Response Filed
Apr 03, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12576884
RIGHT-OF-WAY-BASED SEMANTIC COVERAGE AND AUTOMATIC LABELING FOR TRAJECTORY GENERATION IN AUTONOMOUS SYSTEMS
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12559074
AIRCRAFT SYSTEM
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12493302
LONGITUDINAL TRIM CONTROL MOVEMENT DURING TAKEOFF ROTATION
Granted Dec 09, 2025 (2y 5m to grant)

Patent 12461529
ROBOT PATH PLANNING APPARATUS AND METHOD THEREOF
Granted Nov 04, 2025 (2y 5m to grant)

Patent 12429878
Systems and Methods for Dynamic Object Removal from Three-Dimensional Data
Granted Sep 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 75% (+30.7%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
