Prosecution Insights
Last updated: April 19, 2026
Application No. 18/369,953

ON-VEHICLE SYSTEM

Final Rejection — §103, §112
Filed
Sep 19, 2023
Examiner
OSTERHOUT, SHELLEY MARIE
Art Unit
3669
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Toyota Jidosha Kabushiki Kaisha
OA Round
2 (Final)
67%
Grant Probability
Favorable
3-4
OA Rounds
2y 11m
To Grant
99%
With Interview

Examiner Intelligence

Grants 67% — above average
67%
Career Allow Rate
40 granted / 60 resolved
+14.7% vs TC avg
Strong interview lift
+33.5%
Interview Lift
resolved cases with vs. without an interview
Typical timeline
2y 11m
Avg Prosecution
36 currently pending
Career history
96
Total Applications
across all art units

Statute-Specific Performance

§101
14.5%
-25.5% vs TC avg
§103
48.2%
+8.2% vs TC avg
§102
18.1%
-21.9% vs TC avg
§112
17.9%
-22.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 60 resolved cases
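The per-statute deltas above are simple differences between this examiner's overcome rate and the Tech Center average estimate, which the displayed figures imply is about 40% for each statute (e.g., 48.2% − 8.2% = 40%). A minimal sketch of that calculation, with illustrative names (not from any real API):

```python
# Overcome rates by statute for this examiner (from the panel above).
examiner_rates = {"101": 14.5, "103": 48.2, "102": 18.1, "112": 17.9}

# Tech Center average estimate implied by the displayed deltas:
# each examiner rate minus its delta works out to ~40%.
TC_AVG = 40.0

def delta_vs_tc(rate: float, tc_avg: float = TC_AVG) -> str:
    """Format a signed delta the way the dashboard displays it."""
    d = rate - tc_avg
    return f"{d:+.1f}% vs TC avg"

for statute, rate in examiner_rates.items():
    print(f"§{statute}: {rate}% ({delta_vs_tc(rate)})")
```

Running this reproduces the four deltas shown in the panel, e.g. `+8.2% vs TC avg` for §103.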

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

This Office Action is in response to Applicant's filing on 07/18/2025. Claims 1-5 were previously pending, of which claims 1, 3, and 4 have been amended; no claims have been cancelled or newly added. Accordingly, claims 1-5 are currently pending and are examined below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/14/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant's "Amendment and Remarks" (pages 4-5, filed 07/18/2025) have been fully considered. Applicant's remarks are addressed in the order presented.

With respect to the claim rejections under 35 U.S.C. § 112(b), the amendment renders this rejection moot; the amended claims are no longer rejected under 35 U.S.C. § 112(b).

With respect to the claim rejections under 35 U.S.C. § 103, Applicant's arguments have been fully considered but are not persuasive. Further consideration of the prior art of record determined that Jeong in view of Tanaka does appear to disclose the determination excluding the driver, taught by Tanaka, and the occupant of the rear seat, disclosed by Jeong, as amended in claim 1. Due to the nature of Applicant's amendments, the scope of the claimed invention has changed; the new application of the prior art addresses the amended language, as mapped below. Therefore, the amended claims remain rejected under 35 U.S.C. § 103, as updated in the final Office Action below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al. (KR 10-2200807), hereinafter Jeong, in view of Tanaka (US 2023/0404456), hereinafter Tanaka.

With respect to claim 1, Jeong discloses an on-vehicle system comprising:

a control device; ([0008] "a speed control device")

a monitoring device that monitors whether each of a plurality of occupants in a vehicle has grasped a behavior of the vehicle; ([0028] "The occupant status detection unit (10) can detect occupants and objects (e.g., books, drinks, mobile phones, earphones, wired/wireless headsets, cosmetics, etc.) inside a self-driving vehicle through a camera-based vehicle interior monitoring system and sensors to recognize the current status of the occupants." [0010] "action type recognized by the action recognition unit is at least one of sleeping, reading, drinking…")

and a storage device that stores a learned model for feeling estimation, ([0034] "The emotion recognition unit (12) can detect the facial features of a passenger through a camera sensor inside the vehicle equipped with artificial intelligence and machine learning technology and dynamically analyze (or recognize) the emotions of the passenger.")

wherein the control device:

determines whether there is an occupant who has not grasped the behavior of the vehicle among the plurality of occupants based on a monitoring result of the monitoring device; ([0028] "The occupant status detection unit (10) can detect occupants and objects… through a camera-based vehicle interior monitoring system and sensors to recognize the current status of the occupants." [0030] "The action recognition unit (11) can recognize various actions of the passenger in real time, such as the passenger's head position, hand position, arm position, gaze (gaze perception), and eye closure level.")

specifies a target occupant whose feeling is to be estimated from the plurality of occupants based on the determination result; ([0013] "The above speed control judgment unit sends an emotion recognition request to the emotion recognition unit if the type of behavior of the passenger recognized by the behavior recognition unit" Note: as the device takes in the current status of all the occupants, the focus on a singular occupant due to behavior would qualify as specifying that occupant.)

estimates feeling of the target occupant who has been specified by using the learned model; ([0070] "vehicle interior monitoring system and sensors to detect the status (e.g., actions) of the corresponding passengers (S10)." [0072] "the speed control judgment unit… sends an emotion recognition request to the passenger status detection unit (10)." [0034] "The emotion recognition unit (12) can detect the facial features of a passenger through…machine learning technology.")

and executes vehicle control in accordance with a result of estimating the feeling of the target occupant, ([0014] "If the above speed control judgment unit determines that it is necessary to control the speed of the autonomous vehicle, it may send a request for autonomous driving speed control along with the behavior type and emotion of the corresponding passenger to the speed control unit.")

wherein: the control device, when determining that there is no occupant who has not grasped the behavior of the vehicle, targets an occupant seated in a rear seat of the vehicle for feeling estimation. (Fig. 2 shows emotion and attentive estimation occurring in both the front and rear seats of the vehicle.)

Jeong discloses using machine learning for estimating occupant feelings, but does not explicitly disclose the model used for feeling estimation, or that the estimation excludes the driver.

However, Tanaka teaches specific models for estimating the feelings of the occupants ([0209] "The determination unit 12b acquires the machine learning model from the model storage unit 16, and performs 'determination processing'" [0236-0237] "When it is determined in step ST222 that not all of the estimation results are estimation results indicating an abnormal state… the determination unit 12b determines that it is necessary to check the state of the occupant." [0045] "a condition for estimating an abnormal state in a case where the feeling of anger of the occupant is recognized may be set as the state estimation condition.") and determines whether there is an occupant who has not grasped the behavior of the vehicle among the plurality of occupants, excluding an occupant who is a driver of the vehicle, based on a monitoring result of the monitoring device (see at least [0147] "the adjustment device 1 includes: the estimated information collecting unit 11 to collect estimation results of the state of the occupant estimated by the plurality of occupant state estimating devices 2;" [0145] "The occupant may be, for example, an occupant other than the driver of the vehicle.").

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the machine learning of Jeong to include the model disclosed in Tanaka, with a reasonable expectation of success. The motivation for doing so would have been to provide an underlying method for the machine learning used to determine the occupant that needs to be checked on, which may not include the driver; see Tanaka [0201, 0145].

With respect to claim 2, Jeong discloses multiple occupants being dynamically analyzed and control being based on the corresponding occupant's emotion, but does not explicitly disclose how that occupant is selected.

However, Tanaka teaches that specifying the target occupant based on the determination result includes specifying, when there is one or more occupants who have not grasped the behavior of the vehicle in the plurality of occupants, the target occupant from the one or more occupants. (Fig. 3, [0233-0239] "When it is determined in step ST222 that not all the estimation results are estimation results indicating an abnormal state… continuously for the determination period among the estimation results indicating an abnormal state among the plurality of estimation results… When obtaining the checking necessity information indicating that the checking is necessary ('YES' in step ST223), the determination unit 12b determines that it is necessary to check the state of the occupant.")

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the occupant analysis of Jeong to include the method disclosed in Tanaka, with a reasonable expectation of success. The motivation for doing so would have been to provide an underlying method used to determine which occupant needs to be checked on; see Tanaka [0266].

With respect to claim 3, Jeong discloses an imaging device disposed at a position where faces of the plurality of occupants are allowed to be imaged in the vehicle, the imaging device configured to obtain image data having expression of the target occupant. ([0028] "The occupant status detection unit (10) can detect occupants and objects… inside a self-driving vehicle through a camera-based vehicle interior monitoring system and sensors to recognize the current status of the occupants." [0035] "The emotion recognition unit (12) can detect the facial features of a passenger through a camera sensor inside the vehicle")

Jeong discloses using machine learning to recognize the passenger's emotions, but does not explicitly disclose the type of model that is used to do the recognition.

However, Tanaka teaches that the learned model is generated by machine learning so as to derive a result of estimating feeling of a person from the image data having expression of the person, and that estimating feeling of the target occupant by using the learned model includes: giving image data having expression of the target occupant obtained by the imaging device to the learned model; ([0207] "The machine learning model is a machine learning model that uses the estimation result of the state of the occupant as an input and outputs information" [0045] "a condition for estimating an abnormal state in a case where the feeling of anger of the occupant is recognized may be set as the state estimation condition.") and obtaining a result of estimating the feeling of the target occupant from the learned model by executing arithmetic processing of the learned model. ([0220] "As a learning algorithm used by the model generating unit 62, a known algorithm of supervised learning can be used.")

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the machine learning of Jeong to include the model disclosed in Tanaka, with a reasonable expectation of success. The motivation for doing so would have been to provide an underlying method for the machine learning used to determine the occupant that needs to be checked on; see Tanaka [0201].

With respect to claim 4, Jeong discloses wherein the monitoring device includes the imaging device, ([0034] "a camera sensor inside the vehicle") and determining whether there is the occupant who has not grasped the behavior of the vehicle includes determining whether there is the occupant who has not grasped the behavior of the vehicle among the plurality of occupants based on the image data obtained by the imaging device. ([0028] "The occupant status detection unit (10) can detect occupants and objects… through a camera-based vehicle interior monitoring system and sensors to recognize the current status of the occupants." [0051] "the passenger status information (in this case, including the corresponding passenger behavior type and emotions).")

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Jeong in view of Tanaka as applied to claim 1 above, and further in view of Bader (US 2020/0307647), hereinafter Bader.

With respect to claim 5, Jeong discloses controlling the speed of the vehicle with respect to the occupant status, but does not explicitly disclose limiting the acceleration based on the occupant status.

However, Bader teaches that executing the vehicle control includes executing vehicle control of limiting a range of acceleration of the vehicle in accordance with the result of estimating the feeling of the target occupant when it is determined that there is the occupant who has not grasped the behavior of the vehicle. ([0032] "there is provision for the driving strategy to be adapted by decreasing accelerations if a sleep phase with a shallow depth of sleep has been determined" [0037] "Further, limit values for maximum accelerations that can occur on a journey can also be prescribed in the sleep profile.")

As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the vehicle control of Jeong to include the acceleration limiting disclosed in Bader, with a reasonable expectation of success. The motivation for doing so would have been to reduce the mechanical environmental influences on the occupant; see Bader [0032].

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHELLEY MARIE OSTERHOUT, whose telephone number is (703) 756-1595. The examiner can normally be reached Mon to Fri, 8:30 AM - 5:30 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Navid Mehdizadeh, can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.M.O./
Examiner, Art Unit 3669

/Erin M Piateski/
Supervisory Patent Examiner, Art Unit 3669

Prosecution Timeline

Sep 19, 2023
Application Filed
Apr 15, 2025
Non-Final Rejection — §103, §112
Jul 18, 2025
Response Filed
Oct 02, 2025
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583324
Working Vehicle
2y 5m to grant · Granted Mar 24, 2026
Patent 12552524
METHOD AND DEVICE FOR CONTROLLING A THERMAL AND ELECTRICAL POWER PLANT FOR A ROTORCRAFT
2y 5m to grant · Granted Feb 17, 2026
Patent 12541210
UNMANNED VEHICLE AND DELIVERY SYSTEM
2y 5m to grant · Granted Feb 03, 2026
Patent 12530980
METHOD FOR IDENTIFYING A LANDING ZONE, COMPUTER PROGRAM AND ELECTRONIC DEVICE THEREFOR
2y 5m to grant · Granted Jan 20, 2026
Patent 12515141
TRANSBRAKING SYSTEM FOR A MODEL VEHICLE
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
67%
Grant Probability
99%
With Interview (+33.5%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 60 resolved cases by this examiner. Grant probability derived from career allow rate.
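The headline numbers here are simple ratios over the examiner's resolved cases: 40 grants out of 60 resolved cases yields the 67% career allow rate, and the with-interview figure appears to be that base rate plus the +33.5% interview lift, capped just below 100%. A rough sketch of the arithmetic; the capping rule is an assumption, since the page never states how it combines the two numbers:

```python
GRANTED, RESOLVED = 40, 60      # from the examiner's career history
INTERVIEW_LIFT = 33.5           # percentage-point lift shown above

allow_rate = 100 * GRANTED / RESOLVED          # 66.7% -> displayed as 67%

# Assumed rule: add the lift, then cap at 99% (the page never shows 100%).
with_interview = min(allow_rate + INTERVIEW_LIFT, 99.0)

print(f"Grant probability: {allow_rate:.0f}%")       # 67%
print(f"With interview:    {with_interview:.0f}%")   # 99%
```

Under that assumed cap, the sketch reproduces both displayed figures; without the cap, the sum would slightly exceed 100%, which is presumably why the dashboard tops out at 99%.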
