Prosecution Insights
Last updated: April 19, 2026
Application No.: 18/680,371
CONTROL DEVICE FOR VEHICLE
Status: Final Rejection (§103)
Filed: May 31, 2024
Examiner: HUBER, MELANIE GRACE
Art Unit: 3668
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toyota Jidosha Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (above average; 33 granted / 46 resolved; +19.7% vs TC avg)
Interview Lift: +29.6% (strong; allow rate with vs. without interview, among resolved cases with interview)
Typical Timeline: 3y 1m average prosecution; 28 applications currently pending
Career History: 74 total applications across all art units
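As a quick sanity check, the headline allow rate above is a simple ratio of the raw counts shown (33 granted out of 46 resolved). The sketch below is hypothetical, not the dashboard's actual code, and assumes the "+19.7% vs TC avg" delta is expressed in percentage points:

```python
# Hypothetical sketch: reproduce the headline examiner stats from the raw
# counts shown above (33 granted / 46 resolved). Not the dashboard's code.
granted, resolved = 33, 46

career_allow_rate = granted / resolved            # 33/46 ≈ 0.717, shown rounded as 72%
print(f"Career allow rate: {career_allow_rate:.1%}")   # 71.7%

# Assuming "+19.7% vs TC avg" means percentage points, the implied
# Tech Center baseline would be roughly:
implied_tc_avg = career_allow_rate - 0.197
print(f"Implied TC average: {implied_tc_avg:.1%}")     # 52.0%
```

Note the rounding: 71.7% is displayed as 72% elsewhere on the page, which is why the headline figure and the raw ratio differ slightly.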

Statute-Specific Performance

§101: 10.1% (-29.9% vs TC avg)
§103: 55.6% (+15.6% vs TC avg)
§102: 22.5% (-17.5% vs TC avg)
§112: 10.3% (-29.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 46 resolved cases
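The statute-specific figures above are internally consistent: subtracting each statute's "vs TC avg" delta from its allowance rate recovers the same Tech Center baseline (the black-line estimate). A quick hypothetical cross-check, assuming the deltas are percentage points:

```python
# Cross-check the statute-specific bars: rate - delta should yield the same
# Tech Center average estimate for every statute. Hypothetical sketch only.
rates  = {"101": 10.1, "103": 55.6, "102": 22.5, "112": 10.3}    # examiner allowance %, per statute
deltas = {"101": -29.9, "103": 15.6, "102": -17.5, "112": -29.7} # vs TC avg, percentage points

implied_baseline = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_baseline)  # every statute implies the same 40.0% baseline
```

All four statutes imply a 40.0% baseline, which matches the chart's single black-line Tech Center average estimate.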

Office Action

§103
DETAILED ACTION

Status of Claims

Claims 1-5 are currently pending and have been examined in this application. This action is FINAL. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see Remarks pg. 5, filed 12/31/2025, with respect to the objection to the title have been fully considered and are persuasive. The objection to the title has been withdrawn. Applicant’s arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Fischer (US 20190049981 A1) in view of Nix et al.
(US 20180365740 A1) and in further view of Shiga et al. (US 20200094845 A1).

Regarding claim 1, Fischer teaches: A control device for a vehicle, the vehicle including an autonomous driving device and a navigation device, and the control device being configured to: (Fischer – Fig. 4, Processor 402) display on the navigation device an evaluation window for evaluating different driver assistance functions performed by the autonomous driving device, wherein (Fischer – [0018] “In some examples, user interface system may include touch-input devices (e.g., several buttons) 249 disposed on a steering wheel 237 and/or a touchscreen 246 (e.g., as part of the vehicle's infotainment system) to enable a user to input subjective feedback regarding operation of the autonomous vehicle system and/or its autonomous maneuvers. For example, the vehicle user may input subjective feedback by touching one of several descriptions of autonomous vehicle performance presented on touch screen 246, or by selecting the description via buttons 249. Alternatively or in addition, the user may input a subjective feedback to describe an autonomous maneuver by using buttons 249 and/or touch screen 246 to select keys of a virtual keyboard displayed on the touchscreen 246.”) adjust a parameter that is used when the autonomous driving device performs each driver assistance function, based on an input operation from a driver to the evaluation window. (Fischer – [0032] “At 316, the server determines whether to modify the vehicle that performed the maneuver of step 302. In many examples, the vehicle's operation may be modified based on the subjective data received at 306, and may be modified to more closely align the vehicle's operation with the vehicle user's preferences expressed in the subjective data. For example, the user's subjective data may indicate a preference for faster than average autonomous lane changes, and the control system may modify the duration associated with lane changes for the user's vehicle.”)

Fischer does not explicitly teach the following, however, Nix teaches: each of the plurality of contents includes a plurality of selections based on the driving assistance function being evaluated; and (Nix – [0132] “In some implementations, the interactive user interface 600 can be specific to and associated with the particular detected driving event. Stated differently, different secondary menu items 608a-e can be provided respectively for different events. For example, secondary menu items provided in response to a turning event can include textual phrases specific to turning events (e.g., “turned too wide,” “turned too narrow,” etc.), while secondary menu items provided in response to a stopping event can include textual phrases specific to stopping events (e.g., “stopped too late,” “stopped too early,” etc.). Thus, in some implementations, while the first plurality of icons that correspond to levels of satisfaction are consistent across different event types, the second plurality of icons 608a-e can be specific to the particular type of event that has been detected (e.g., turning event versus stopping event).”) Fischer and Nix are both considered to be analogous to the claimed invention because they are both in the same field of gathering an evaluation of an autonomous driving operation from a user. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Fischer with Nix to include a plurality of menu items associated with particular driving events in order to enable improved collection of information about autonomous vehicle ride quality relative to specific types of driving events (Nix, para. [0017]).

The combination of Fischer and Nix does not explicitly teach the following limitation, however, Shiga teaches: the evaluation window displays a plurality of contents corresponding to the different driving assistance functions, and (Shiga – [0047] “The vehicle-mounted device 20 displays the query received from the server 30 on, for example, the touch display of the vehicle-mounted device 20 (step S208). At this time, the communication control unit 304 may collectively send a plurality of queries to be displayed on the vehicle-mounted device 20 as a query list.” [0057] “FIG. 4 is a drawing showing an example of queries displayed on the touch display 200 of the vehicle-mounted device 20 according to the first embodiment. In FIG. 4, two queries are displayed on the touch display 200. Query 1 relates to whether or not a dispatch point of the vehicle 2 was appropriate, and Query 2 relates to whether or not acceleration at the time of starting the vehicle 2 was smooth.”) Shiga is considered to be analogous to the claimed invention because it is in the same field of receiving driver input about the autonomous operations of a vehicle. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify the combination of Fischer and Nix with Shiga in order to evaluate automatic driving control based on the ride comfort of the passengers to improve the ride comfort of the passengers (Shiga, para. [0004]).

Regarding claim 2, the combination of Fischer, Nix, and Shiga teaches the limitations of claim 1. Nix further teaches: wherein the control device is configured to display the evaluation window on the navigation device when the driver presses an evaluation button displayed on the navigation device. (Nix – [0114] “As one example user interface that can be provided in response to detection of a driving event, FIG. 4 depicts an example interactive user interface 400 according to example embodiments of the present disclosure. The user interface 400 includes an initial user-selectable icon 404 that enables the passenger to indicate a desire to provide passenger feedback. For example, the initial user-selectable icon 404 can be a single selectable space that depicts a cluster of buttons or icons, as illustrated in FIG. 4.”) Fischer and Nix are both considered to be analogous to the claimed invention because they are both in the same field of gathering an evaluation of an autonomous driving operation from a user. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Fischer with Nix to include an initial user-selectable icon to indicate a desire to provide feedback in order to enable improved collection of information about autonomous vehicle ride quality relative to specific types of driving events (Nix, para. [0017]).

Regarding claim 4, the combination of Fischer, Nix, and Shiga teaches the limitations of claim 1. Nix further teaches: wherein the control device is configured to display, in the evaluation window, the content according to a type of the driver assistance function performed immediately before the evaluation window is displayed. (Nix – [0116] “In addition, in some implementations, the user interface 400 can include a visualization 406 of the autonomous vehicle during performance of the detected driving event. As one example, the visualization 406 can visualize or otherwise be generated based at least in part on light detection and ranging data collected by the autonomous vehicle during performance of the driving event. Providing the visualization 406 of the autonomous vehicle during performance of the detected driving event within the user interface 400 can enable the passenger to visually review the driving event when deciding whether to provide passenger feedback.”) It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Fischer with Nix to include displaying visualization of the autonomous operation in order to enable improved collection of information about autonomous vehicle ride quality relative to specific types of driving events (Nix, para. [0017]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Fischer (US 20190049981 A1), in view of Nix et al. (US 20180365740 A1), in view of Shiga et al. (US 20200094845 A1), and in further view of Taniguchi et al. (US 20220066551 A1).

Regarding claim 3, the combination of Fischer, Nix, and Shiga teaches the limitations of claim 1. The combination of Fischer, Nix, and Shiga does not explicitly teach the following limitation, however, Taniguchi teaches: the vehicle further includes an in-vehicle camera configured to capture an image of the driver, and (Taniguchi – [0022] “The driver monitoring camera 2 is an example of a sensor for obtaining a face image showing the driver's face.”) the control device is configured to: when the autonomous driving device performs the driver assistance function, analyze an expression of the driver in the image captured by the in-vehicle camera, and (Taniguchi – [0039] “The facial information may be information indicating the driver's expression. The driver's expression is expressed by facial characteristic quantities of units of facial movement detected from a face image. The state obtaining unit 531 inputs a face image into a classifier that has been trained to detect the positions of units of facial movement, such as outer canthi and corners of the mouth, thereby identifying the positions of units of facial movement included in the face image. The state obtaining unit 531 then compares the identified positions of units of facial movement with predetermined reference positions of the units of facial movement to detect the facial characteristic quantities.”) when determination is made that the expression of the driver is an expression representing a predetermined emotion toward the driver assistance function, display the evaluation window on the navigation device. (Taniguchi – [0055] “First, the state obtaining unit 531 obtains facial information indicating the state of the face of the driver of the vehicle 1 from each of face images of the driver's face (step S1).” [0056] “Next, the input-screen displaying unit 532 determines whether input of information by the driver is requested (step S2). When it is determined that input of information by the driver is not requested (No in step S2), the processor 53 terminates the facial-information obtaining process.” [0057] “When it is determined that input of information by the driver is requested (Yes in step S2), the input-screen displaying unit 532 causes the touch screen display 3 to show a screen including an input section for receiving input of information from the driver (step S3).”) Taniguchi is considered to be analogous to the claimed invention because it is in the same field of determining the feedback from a driver. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify the combination of Fischer, Nix, and Shiga with Taniguchi in order to evaluate the facial expression of a vehicle driver in order to infer the driver’s feelings while avoiding annoying the driver (Taniguchi, para. [0005]).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Fischer (US 20190049981 A1), in view of Nix et al. (US 20180365740 A1), in view of Shiga et al. (US 20200094845 A1), and in further view of Zhu et al. (US 20200216094 A1).

Regarding claim 5, the combination of Fischer, Nix, and Shiga teaches the limitations of claim 1. The combination of Fischer, Nix, and Shiga does not explicitly teach the following limitation, however, Zhu teaches: wherein the control device is configured to adjust the parameter within a range between an upper limit value and a lower limit value, both inclusive. (Zhu – [0007] “The passenger feedback is used to continuously train/update the machine learning module to create a personal driving style decision-making model for the passenger that controls operation of the autonomous vehicle. During operation, the motion planner provides a range of safe operation commands according to the concurrent driving conditions. For example, the motion planner may adjust the acceleration range (0 to 60 in 4 seconds, 5 seconds, 6 seconds, etc.) based on the passenger's personal driving style preference profile to make an acceleration choice within the safe command range that is consistent with the passenger's personal driving style preference profile. In sample embodiments, the motion planner provides a driving command with a safe range and the driving style model selects values in the safe range to meet the passenger's preference.”) Zhu is considered to be analogous to the claimed invention because it is in the same field of adjusting autonomous operations in response to passenger feedback. It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify the combination of Fischer, Nix, and Shiga with Zhu to include adjusting the parameter within a safe range in order to prioritize the passenger's safety (Zhu, para. [0054]).
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure or directed to the state of the art; it is listed on the enclosed PTO-892. The following is a brief description of relevant prior art that was cited but not applied:

Mehta et al. (US 20240217550 A1) discloses a display that presents a prompt or question to a passenger after the autonomous vehicle demonstrates a driving behavior by performing a maneuver. For example, the autonomous vehicle has made an unprotected left turn. On the display, the passenger can be addressed by name and asked: Was this unprotected left turn comfortable? Response options YES and NO are provided on the display and the passenger selects one response.

Lee et al. (US 11987267 B2) discloses verification result information that includes discomfort-related information inputted by the user, wherein the discomfort-related information is information corresponding to details about one or more cases in which the autonomous vehicle failed for the certain operation event.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MELANIE HUBER whose telephone number is (703) 756-1765. The examiner can normally be reached M-F 7:30am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JAMES LEE, can be reached at (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.G.H./
Examiner, Art Unit 3668

/JAMES J LEE/
Supervisory Patent Examiner, Art Unit 3668

Prosecution Timeline

May 31, 2024
Application Filed
Sep 29, 2025
Non-Final Rejection — §103
Dec 11, 2025
Examiner Interview Summary
Dec 11, 2025
Applicant Interview (Telephonic)
Dec 31, 2025
Response Filed
Feb 19, 2026
Final Rejection — §103
Apr 14, 2026
Examiner Interview Summary
Apr 14, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594856: VEHICLE AND METHOD FOR CONTROLLING POWER SUPPLY TO EXTERNAL LOAD BASED ON SOC
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12576856: Inferring Operator Characteristics from Device Motion Data
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12570158: METHOD FOR SHIFT USING SHIFT ENTRY PREDICTION AND VEHICLE THEREFOR
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12558957: METHOD FOR OPERATING A DISPLAY UNIT OF A VEHICLE, AND DISPLAY UNIT
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12553741: SYSTEM AND METHOD TO ADDRESS LOCALIZATION ERRORS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 99% (+29.6%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate
Based on 46 resolved cases by this examiner. Grant probability derived from career allow rate.
