Prosecution Insights
Last updated: April 19, 2026
Application No. 18/389,773

DEVICE FOR MOBILE OBJECT AND CONTROL METHOD FOR MOBILE OBJECT

Final Rejection — §101, §103
Filed
Dec 19, 2023
Examiner
KUJUNDZIC, DINO
Art Unit
3667
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
DENSO CORPORATION
OA Round
2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Grants 73% — above average
Career Allow Rate: 73% (390 granted / 533 resolved; +21.2% vs TC avg)
Interview Lift: +28.3% among resolved cases with interview (strong)
Typical timeline: 3y 3m avg prosecution; 26 currently pending
Career history: 559 total applications across all art units
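The career allow rate shown above follows directly from the resolved-case counts. A minimal sketch of the arithmetic, assuming the displayed 73% is simply granted divided by resolved, rounded for display:

```python
# Career allow rate from the examiner's resolved-case counts.
granted = 390
resolved = 533
allow_rate = granted / resolved
print(f"career allow rate ≈ {allow_rate:.1%}")  # ≈ 73.2%, shown as 73%
```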

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 533 resolved cases
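The implied Tech Center baseline can be backed out of each statute's delta. A small sketch, assuming each "vs TC avg" figure is the examiner's rate minus the Tech Center average (variable names are illustrative, not from the tool):

```python
# Back out the implied TC average from each statute's rate and delta,
# assuming: delta = examiner_rate - tc_avg  (both in percent).
stats = {
    "§101": (12.1, -27.9),
    "§103": (54.7, +14.7),
    "§102": (11.5, -28.5),
    "§112": (14.4, -25.6),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: implied TC average ≈ {tc_avg:.1f}%")
```

Under that assumption, every statute implies the same ~40.0% baseline, which suggests the deltas were computed against a single Tech Center-wide estimate.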

Office Action

§101 §103
DETAILED ACTION

1. This action is responsive to the following communication: Amended Claims, Specification Amendment, Replacement Drawings, and Remarks, filed on September 15, 2025. This action is made final.

2. Claims 1 and 3-8 are pending in the case; Claims 1, 5, and 6 are independent claims; Claim 2 is canceled; Claims 7 and 8 are new claims.

Response to Arguments

3. In the Non-Final Rejection mailed on June 13, 2025, the Specification was objected to because the title of the invention was non-descriptive, but the Specification Amendment filed on September 15, 2025 has rendered this objection moot.

4. It is noted that Figure 3 (element S3) was amended by the Replacement Drawings filed on September 15, 2025 – the Replacement Drawings have been entered.

5. In the Non-Final Rejection mailed on June 13, 2025, it was noted that Claims 1-4 were interpreted to invoke 35 U.S.C. § 112(f) for the reasons stated therein (see pgs. 2-5). In the Remarks filed on September 15, 2025 (see pgs. 12-13), Applicant states that the intent is not to invoke 35 U.S.C. § 112(f), but such a statement is not sufficient to overcome this interpretation. As noted in the Non-Final Rejection, “If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f),” but it does not appear that this was established by Applicant.

6. In the Non-Final Rejection mailed on June 13, 2025, Claims 1-6 were rejected under 35 U.S.C. § 101 as being directed to an abstract idea without significantly more, but the Amended Claims filed on September 15, 2025 have rendered this rejection moot (see FN1 of the Non-Final Rejection mailed on June 13, 2025 (pg. 10), suggesting a clarification to the claims to overcome the § 101 rejection, which is reflected in the Amended Claims filed on September 15, 2025).

7. With respect to the rejection of Claims 1-6 under 35 U.S.C. § 103, Applicant’s arguments, see Remarks filed on September 15, 2025 (see pgs. 14-19), in view of the Claim Amendments filed therewith, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Sobhany (US 2020/0239004 A1), as discussed below (see also Examiner Interview Summary mailed on September 9, 2025).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

8. Claims 1 and 3-6 are rejected under 35 U.S.C. 103 as being unpatentable over Ichihara et al. (hereinafter Ichihara), JP2018133696A, published on August 23, 2018 (submitted with the IDS on May 8, 2024 (cite no. BA); the examiner is relying on the machine translation that was submitted with the IDS), in view of Sobhany (US 2020/0239004 A1), published on July 30, 2020.

With respect to independent Claim 1, Ichihara teaches a device for mobile object, the device being usable in a mobile object, the device comprising: an occupant information specifying unit specifying an occupant information record for each of multiple occupants existing in the mobile object by distinguishing the multiple occupants from one another, the multiple occupants being detected by a sensor of the mobile object (see ¶¶ 0034-35, 0043-44, showing that each occupant is identified based on authentication information, where such stored information can be related to the occupant based on sensed information (i.e., voiceprint, captured image, etc.); see also ¶ 0086).
a request estimation unit estimating a request of one of the multiple occupants corresponding to a combination of the occupant information records of the multiple occupants based on the occupant information records of the multiple occupants specified by the occupant information specifying unit (see ¶¶ 0015-17, 0053, 0057, 0063, 0077, 0107, showing that an occupant’s request is considered in view of other available/determined information, such as the content of the communication, relationship between the occupants, current scene/situation, etc.). …

wherein the occupant information specifying unit specifies, as the occupant information record, a voice content of each of the multiple occupants by distinguishing the multiple occupants from one another (see ¶ 0044, showing that a voiceprint can be used to identify/distinguish each occupant).

the voice content of each of the multiple occupants is detected by the sensor of the mobile object, which detects a sound in a compartment of the mobile object (see ¶¶ 0029, 0044).

the request estimation unit estimates the request of one of the multiple occupants corresponding to a combination of the voice contents of the multiple occupants based on the voice contents of the multiple occupants specified by the occupant information specifying unit (see ¶¶ 0063-64, 0077; see also ¶ 0107).

the request estimation unit estimates a background of a conversation content based on the conversation content that is a flow of the voice contents of the multiple occupants specified by the occupant information specifying unit, and estimates the request of one of the multiple occupants to match the estimated background (see ¶¶ 0063-64, 0077; see also ¶ 0107).
Ichihara recognizes that various information can be considered when determining the context/situation inside the vehicle beyond the immediate/current information, and that such information can be used to ensure that an occupant’s request/utterance is processed in a desired/suitable manner, resulting in properly distinguished situations and corresponding responses (see ¶¶ 0004, 0006, 0012, 0014, 0016-17, 0019). A skilled artisan would understand that occupants can be recognized and distinguished in various ways (see ¶¶ 0034-35, 0043-44), and that different information, whether sensed or retrieved, can be used to improve the accuracy of the relationship or context estimation (see ¶¶ 0045-55, 0112-13).

While Ichihara at least suggests an internal environment specifying unit specifying an internal environment of the mobile object, which is estimated to satisfy the request of one of the multiple occupants estimated by the request estimating unit; and a provision processing unit providing the internal environment specified by the internal environment specifying unit (see ¶¶ 0006, 0015-17, 0029, 0031-32, 0107, showing that a content is provided/outputted according to the occupant’s request and the determined context/relationship information), Ichihara does not appear to disclose when the provision processing unit provides the internal environment of the mobile object, which is estimated to satisfy the request of one of the multiple occupants, the provision processing unit (i) controls an air conditioning control ECU equipped to the mobile object to adjust a room temperature and an air volume corresponding to the specified internal environment, or (ii) controls a lighting device equipped to the mobile object to adjust a brightness or an emission color of the lighting device corresponding to the specified internal environment.
However, a skilled artisan would understand that the “provision unit” of Ichihara could be implemented in different ways to include more than an audio or visual output, as suggested by the teachings of Sobhany.

Sobhany is directed towards selection and execution of an output action in a vehicle based on a user’s (i.e., occupant’s) state and profile (see Sobhany, Abstract). Sobhany makes it clear that different outputs, related to entertainment, safety, comfort, etc., can be produced in response to the user’s state or context (see Sobhany, ¶¶ 0027, 0041, 0096-97, 0102, 0153, 0170; see also ¶ 0046, further describing the vehicle’s comfort system). Sobhany teaches that various outputs can be produced, such as controlling the air conditioning or heating system, or controlling the output of various lights, in response to the detected user’s state and/or profile (see Sobhany, ¶¶ 0102, 0154, 0181, 0219-20).

Accordingly, it would have been obvious to a skilled artisan, at the time the instant application was filed, to modify the “provision unit” of Ichihara to include various systems and corresponding outputs as suggested by Sobhany, in order to allow for the adjustment of the vehicle’s environment according to an estimated, or sensed, user’s state and context (see Sobhany, Abstract, ¶¶ 0005-08).
With respect to dependent Claim 3, Ichihara in view of Sobhany teaches the device for mobile object according to claim 1, as discussed above, and further teaches wherein the occupant information specifying unit specifies, as the occupant information record, an occupant state of each of the multiple occupants by distinguishing the multiple occupants from one another, the occupant state is at least one of an action or a posture of each of the multiple occupants, and the action or the posture of each of the multiple occupants is detected by the sensor of the mobile object, which photographs inside of a compartment of the mobile object, and the request estimation unit estimates the request of one of the multiple occupants corresponding to a combination of the occupant states of the multiple occupants based on the occupant states of the multiple occupants specified by the occupant information specifying unit (see Sobhany, ¶¶ 0044-45, 0048-53, showing that each occupant can be identified via a camera image and that further analysis, including occupant’s gestures, emotions, facial expressions, etc., can be conducted on the obtained data in order to more accurately determine the current state of the vehicle’s interior). 
With respect to dependent Claim 4, Ichihara in view of Sobhany teaches the device for mobile object according to claim 1, as discussed above, and further teaches a supplementary information acquiring unit acquiring supplementary information, which is at least one of information on preference of one of the multiple occupants or information on a past action history of one of the multiple occupants, wherein the request estimation unit estimates the request of one of the multiple occupants based on the supplementary information acquired by the supplementary information acquiring unit in addition to the occupant information records of the multiple occupants specified by the occupant information specifying unit (see Sobhany, ¶¶ 0034-35, 0086-87, 0091, 0098, showing additional information about the occupants that can be used in determining the occupants’ relationship, history, or preferences).

With respect to Claims 5 and 6, these claims are directed to a control method and a device for mobile object comprising steps and/or features similar to those recited in Claim 1, and are thus rejected under a similar rationale to Claim 1, as discussed above.

9. Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Ichihara in view of Sobhany, and further in view of Dadam et al. (hereinafter Dadam), US 2021/0114603 A1, published on April 22, 2021.
With respect to dependent Claim 7, Ichihara in view of Sobhany teaches the device for mobile object according to claim 1, as discussed above, but does not appear to discuss wherein when one of the multiple occupants other than a driver is in a drowsiness state, the provision processing unit (i) controls a display device equipped to the mobile object to display an image of the occupant who is in the drowsiness state or (ii) controls an audio output device to output an audio signal notifying the drowsiness state of the occupant. However, a skilled artisan would understand that other occupants could be alerted to a passenger’s drowsiness as suggested by the teachings of Dadam, in order to notify them about corresponding changes/modifications (such as lowering of lights and/or audio volume, change in temperature, etc.).

Dadam is directed towards adjusting an environment of a vehicle cabin according to a predicted state of an occupant (see Dadam, ¶ 0001). Dadam recognizes the importance of providing comfort to the vehicle’s passengers and provides for various ways of achieving such comfort (see Dadam, ¶¶ 0002-05, 0052). Dadam teaches that when a determination that a passenger is drowsy is made, a notification (via a speaker or visually) may be provided to other occupants (see Dadam, ¶ 0073). Accordingly, it would have been obvious to a skilled artisan, at the time the instant application was filed, to implement the notification suggested by Dadam in order to provide the desired information to other occupants (see Dadam, ¶¶ 0005, 0073).

10. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Ichihara in view of Sobhany, and further in view of Neubecker et al. (hereinafter Neubecker), US 2022/0324458 A1, published on October 13, 2022 (filed on April 13, 2021).
With respect to dependent Claim 8, Ichihara in view of Sobhany teaches the device for mobile object according to claim 1, as discussed above, and while Ichihara in view of Sobhany appears to suggest making various modifications in response to a detected state or context, there does not appear to be an explicit suggestion of wherein when one of the multiple occupants other than a driver is in a drowsiness state, the provision processing unit (i) controls the air conditioning control ECU equipped to the mobile object to adjust the room temperature and the air volume such that the occupant who is in the drowsiness state can relax and (ii) reduces a volume of an audio output device equipped to the mobile object. However, the teachings of Neubecker can be relied upon for an explicit suggestion of this limitation.

Neubecker is directed towards enhancing a passenger’s sleeping experience (see Neubecker, Abstract, ¶ 0001). Neubecker teaches determining when a passenger is asleep and in turn lowering the audio volume and adjusting the temperature of the climate control system, where such adjustments can be based upon individual user’s preferences (see Neubecker, ¶¶ 0021, 0024-26, further stating that an audio or visual indicator may be provided to the driver to alert them that a passenger is asleep). Accordingly, it would have been obvious to a skilled artisan, at the time the instant application was filed, to implement the particular modifications suggested by Neubecker in order to enhance the passenger’s sleeping experience in the vehicle (see Neubecker, ¶¶ 0001, 0007).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DINO KUJUNDZIC whose telephone number is (571)270-5188. The examiner can normally be reached M-F 8am - 5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vivek Koppikar, can be reached on 571-272-5109. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DINO KUJUNDZIC/Primary Examiner, Art Unit 3667

Prosecution Timeline

Dec 19, 2023 — Application Filed
Jun 11, 2025 — Non-Final Rejection (§101, §103)
Sep 05, 2025 — Examiner Interview Summary
Sep 05, 2025 — Applicant Interview (Telephonic)
Sep 15, 2025 — Response Filed
Dec 24, 2025 — Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603007
DRIVER ASSISTANCE APPARATUS, DRIVER ASSISTANCE METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12590812
IMAGE DISPLAY BASED ON PRESENT POSITION AND ACCESSIBILITY PRIORITY
2y 5m to grant • Granted Mar 31, 2026
Patent 12579687
ESTIMATION DEVICE, ESTIMATION METHOD, AND NON-TRANSITORY STORAGE MEDIUM
2y 5m to grant • Granted Mar 17, 2026
Patent 12574712
SERVICE PROVIDING SERVER AND METHOD
2y 5m to grant • Granted Mar 10, 2026
Patent 12567040
ONBOARDING SYSTEM WITH GRAPHICAL USER INTERFACE
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 99% (+28.3%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 533 resolved cases by this examiner. Grant probability derived from career allow rate.
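The interview figures are consistent with one subtraction. A sketch, assuming the +28.3% lift is the percentage-point gap between resolved cases with and without an interview (an assumption about the tool's methodology, not stated in it):

```python
# Implied allow rate without an interview, assuming:
#   lift = allow_rate_with_interview - allow_rate_without_interview  (pct. points)
with_interview = 99.0   # % of resolved cases with interview that were allowed
lift = 28.3             # percentage points
without_interview = with_interview - lift
print(f"implied allow rate without interview ≈ {without_interview:.1f}%")
```

Under that reading, cases without an interview allow at roughly 70.7%, close to the examiner's 73% career rate, which is why the interview is projected to add so much.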
