DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
This Office Action is in response to the Applicant's filing on 07/18/2025. Claims 1-5 were previously pending, of which claims 1, 3, and 4 have been amended; no claims have been cancelled or newly added. Accordingly, claims 1-5 are currently pending and are examined below.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/14/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant's remarks, see pages 4-5 of the “Amendment and Remarks” filed 07/18/2025, have been fully considered. The remarks are addressed below in the order in which they were presented.
With respect to the claim rejections under 35 U.S.C. § 112(b), the amendment renders the rejection moot; the amended claims are no longer rejected under 35 U.S.C. § 112(b), and that rejection is withdrawn.
With respect to the claim rejections under 35 U.S.C. § 103, Applicant's “Amendment and Remarks” have been fully considered but are not persuasive. Upon further consideration of the prior art of record, Jeong in view of Tanaka is found to teach the determination excluding the driver (taught by Tanaka) and the targeting of the rear-seat occupant (disclosed by Jeong), as now recited in amended claim 1. Because the amendments have changed the scope of the claimed invention, the prior art has been newly applied to address the amended language, as mapped below. Therefore, the amended claims remain rejected under 35 U.S.C. § 103, and the rejections are updated in this final Office action below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al. (KR 10-2200807), hereinafter Jeong, in view of Tanaka (US 2023/0404456), hereinafter Tanaka.
With respect to claim 1, Jeong discloses an on-vehicle system comprising: a control device; ([0008] “a speed control device”)
a monitoring device that monitors whether each of a plurality of occupants in a vehicle has grasped a behavior of the vehicle; ([0028] “The occupant status detection unit (10) can detect occupants and objects (e.g., books, drinks, mobile phones, earphones, wired/wireless headsets, cosmetics, etc.) inside a self-driving vehicle through a camera-based vehicle interior monitoring system and sensors to recognize the current status of the occupants.” [0010] “action type recognized by the action recognition unit is at least one of sleeping, reading, drinking…”)
and a storage device that stores a learned model for feeling estimation, ([0034] “The emotion recognition unit (12) can detect the facial features of a passenger through a camera sensor inside the vehicle equipped with artificial intelligence and machine learning technology and dynamically analyze (or recognize) the emotions of the passenger.”)
wherein the control device: determines whether there is an occupant who has not grasped the behavior of the vehicle among the plurality of occupants based on a monitoring result of the monitoring device; ([0028] “The occupant status detection unit (10) can detect occupants and objects… through a camera-based vehicle interior monitoring system and sensors to recognize the current status of the occupants.” [0030] “The action recognition unit (11) can recognize various actions of the passenger in real time, such as the passenger's head position, hand position, arm position, gaze (gaze perception), and eye closure level.”)
specifies a target occupant whose feeling is to be estimated from the plurality of occupants based on the determination result; ([0013] “The above speed control judgment unit sends an emotion recognition request to the emotion recognition unit if the type of behavior of the passenger recognized by the behavior recognition unit” Note: because the device takes in the current status of all occupants, focusing on a single occupant on the basis of that occupant's behavior qualifies as specifying the target occupant.)
estimates feeling of the target occupant who has been specified by using the learned model; ([0070] “vehicle interior monitoring system and sensors to detect the status (e.g., actions) of the corresponding passengers (S10).” [0072] “the speed control judgment unit… sends an emotion recognition request to the passenger status detection unit (10).” [0034] “The emotion recognition unit (12) can detect the facial features of a passenger through…machine learning technology.”)
and executes vehicle control in accordance with a result of estimating the feeling of the target occupant, ([0014] “If the above speed control judgment unit determines that it is necessary to control the speed of the autonomous vehicle, it may send a request for autonomous driving speed control along with the behavior type and emotion of the corresponding passenger to the speed control unit.”)
wherein: the control device, when determining that there is no occupant who has not grasped the behavior of the vehicle, targets an occupant seated in a rear seat of the vehicle for feeling estimation. (Fig. 2 shows emotion and attentiveness estimation occurring for both the front and rear seats of the vehicle.)
Jeong discloses using machine learning for estimating occupant feelings, but does not explicitly disclose the model used for the feeling estimation, or that the estimation excludes the driver.
However, Tanaka teaches specific models for estimating the feelings of the occupants ([0209] “The determination unit 12 b acquires the machine learning model from the model storage unit 16, and performs “determination processing” [0236-0237] “When it is determined in step ST222 that not all of the estimation results are estimation results indicating an abnormal state… the determination unit 12 b determines that it is necessary to check the state of the occupant.” [0045] “a condition for estimating an abnormal state in a case where the feeling of anger of the occupant is recognized may be set as the state estimation condition.”)
and determines whether there is an occupant who has not grasped the behavior of the vehicle among the plurality of occupants, excluding an occupant who is a driver of the vehicle, based on a monitoring result of the monitoring device; (see at least [0147] “the adjustment device 1 includes: the estimated information collecting unit 11 to collect estimation results of the state of the occupant estimated by the plurality of occupant state estimating devices 2;” [0145] “The occupant may be, for example, an occupant other than the driver of the vehicle.”)
As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the machine learning of Jeong to include the model disclosed in Tanaka, with a reasonable expectation of success. The motivation for doing so would have been to provide an underlying model for the machine learning used to determine which occupant needs to be checked on, which may exclude the driver; see Tanaka [0201], [0145].
With respect to claim 2, Jeong discloses multiple occupants being dynamically analyzed and control being based on the corresponding occupant's emotion, but does not explicitly disclose how that occupant is selected.
However, Tanaka teaches specifying the target occupant based on the determination result includes specifying, when there is one or more occupants who have not grasped the behavior of the vehicle in the plurality of occupants, the target occupant from the one or more occupants. (Fig. 3, [0233-0239] “When it is determined in step ST222 that not all the estimation results are estimation results indicating an abnormal state… continuously for the determination period among the estimation results indicating an abnormal state among the plurality of estimation results… When obtaining the checking necessity information indicating that the checking is necessary (“YES” in step ST223), the determination unit 12 b determines that it is necessary to check the state of the occupant.”)
As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the occupant analysis of Jeong to include the method disclosed in Tanaka, with a reasonable expectation of success. The motivation for doing so would have been to provide an underlying method used to determine which occupant needs to be checked on; see Tanaka [0266].
With respect to claim 3, Jeong discloses an imaging device disposed at a position where faces of the plurality of occupants are allowed to be imaged in the vehicle, the imaging device configured to obtain image data having expression of the target occupant, ([0028] “The occupant status detection unit (10) can detect occupants and objects… inside a self-driving vehicle through a camera-based vehicle interior monitoring system and sensors to recognize the current status of the occupants.” [0035] “The emotion recognition unit (12) can detect the facial features of a passenger through a camera sensor inside the vehicle”)
Jeong discloses using machine learning to recognize the passenger’s emotions, but does not explicitly disclose the type of model that is used to do the recognition.
However, Tanaka teaches the learned model is generated by machine learning so as to derive a result of estimating feeling of a person from the image data having expression of the person, and estimating feeling of the target occupant by using the learned model includes: giving image data having expression of the target occupant obtained by the imaging device to the learned model; ([0207] “The machine learning model is a machine learning model that uses the estimation result of the state of the occupant as an input and outputs information” [0045] “a condition for estimating an abnormal state in a case where the feeling of anger of the occupant is recognized may be set as the state estimation condition.”)
and obtaining a result of estimating the feeling of the target occupant from the learned model by executing arithmetic processing of the learned model. ([0220] “As a learning algorithm used by the model generating unit 62, a known algorithm of supervised learning can be used.”)
As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the machine learning of Jeong to include the model disclosed in Tanaka, with a reasonable expectation of success. The motivation for doing so would have been to provide an underlying method for the machine learning used to determine the occupant that needs to be checked on; see Tanaka [0201].
With respect to claim 4, Jeong discloses wherein the monitoring device includes the imaging device, ([0034] “a camera sensor inside the vehicle”)
and determining whether there is the occupant who has not grasped the behavior of the vehicle includes determining whether there is the occupant who has not grasped the behavior of the vehicle among the plurality of occupants based on the image data obtained by the imaging device. ([0028] “The occupant status detection unit (10) can detect occupants and objects… through a camera-based vehicle interior monitoring system and sensors to recognize the current status of the occupants.” [0051] “the passenger status information (in this case, including the corresponding passenger behavior type and emotions).”)
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Jeong in view of Tanaka as applied to claim 1 above, and further in view of Bader (US 2020/0307647), hereinafter Bader.
With respect to claim 5, Jeong discloses controlling the speed of the vehicle with respect to the occupant status, but does not explicitly disclose limiting the acceleration based on the occupant status.
However, Bader teaches executing the vehicle control includes executing vehicle control of limiting a range of acceleration of the vehicle in accordance with the result of estimating the feeling of the target occupant when it is determined that there is the occupant who has not grasped the behavior of the vehicle. ([0032] “there is provision for the driving strategy to be adapted by decreasing accelerations if a sleep phase with a shallow depth of sleep has been determined” [0037] “Further, limit values for maximum accelerations that can occur on a journey can also be prescribed in the sleep profile.”)
As both are in the same field of endeavor, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the vehicle control of Jeong to include the acceleration limiting disclosed in Bader, with a reasonable expectation of success. The motivation for doing so would have been to reduce the mechanical environmental influences on the occupant; see Bader [0032].
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHELLEY MARIE OSTERHOUT whose telephone number is (703) 756-1595. The examiner can normally be reached Monday through Friday, 8:30 AM - 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Navid Mehdizadeh can be reached on (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.M.O./Examiner, Art Unit 3669
/Erin M Piateski/Supervisory Patent Examiner, Art Unit 3669