Prosecution Insights
Last updated: April 19, 2026
Application No. 18/237,007

VEHICLE AND METHOD FOR ADJUSTING USER DEVICES FOR USER OF VEHICLE

Status: Non-Final OA (§103)
Filed: Aug 23, 2023
Examiner: SLOWIK, ELIZABETH J
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kia Corporation
OA Round: 3 (Non-Final)

Grant probability: 46% (Moderate)
Predicted OA rounds: 3-4
Predicted time to grant: 3y 2m
Grant probability with interview: 64%

Examiner Intelligence

Career allow rate: 46% (30 granted / 65 resolved cases; -5.8% vs Tech Center average)
Interview lift: +18.3% (strong) among resolved cases with an interview
Typical timeline: 3y 2m average prosecution
Career history: 108 total applications across all art units; 43 currently pending

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 58.9% (+18.9% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)

Tech Center averages are estimates; based on career data from 65 resolved cases.
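As a quick sanity check, the headline figures above can be reproduced from the raw counts on the examiner card. This is a minimal sketch; it assumes the with-interview estimate is simply the base allow rate plus the interview lift, which matches the displayed numbers but is not stated by the tool:

```python
# Reproduce the dashboard's headline numbers from its raw counts.
# Assumption (not stated by the tool): with-interview probability
# = career allow rate + interview lift in percentage points.
granted, resolved = 30, 65

allow_rate = granted / resolved        # 30/65 ≈ 0.4615
interview_lift = 0.183                 # +18.3 points, from the card
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.0%}")      # 46%
print(f"With interview:    {with_interview:.0%}")  # 64%
```

The two printed values round to the 46% and 64% shown on the dashboard, so the card's numbers are internally consistent under that assumption.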

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the request for continued examination filed on 01/07/2026, in which claims 1-20 are pending and addressed below.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/07/2026 has been entered.

Response to Arguments

Applicant’s arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5-8, 10-11, 13, 15-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kusanagi et al., U.S. Patent Application Publication No. 2019/0193712 A1 (hereinafter Kusanagi), in view of Zhou et al., “Vetaverse: Technologies, Applications, and Visions toward the Intersection of Metaverse, Vehicles, and Transportation Systems” (hereinafter Zhou), and further in view of Adi, “Using VR and Gamification to Develop Personalized Therapy Plans.”

Regarding claim 1, Kusanagi teaches a vehicle comprising: a first controller configured to (Kusanagi Fig. 1): receive, via a vehicle management server…preference information associated with at least one user of a plurality of users (see at least Kusanagi [0050]: “As shown in FIG. 3, in the environment table 26C, the characteristics of a specific user are classified as four types “pleasure” expressing an emotion of pleasure, “anger” expressing an emotion of anger, “sadness” expressing an emotion of sadness, and “calmness” expressing an emotion of calmness. Then, in the environment table 26C, setting numbers of music, scent, and lighting are defined for each type.”; [0043]: “Specifically, the center server 20 acquires posted information of a specific user from the SNS server 40, and analyzes the characteristic of the specific user based on the acquired posted information.”), and based on bio-information, collected by at least a sensor of the first controller, associated with the at least one user, analyze a state of the at least one user (see at least Kusanagi [0121]: “For example, when emotion analysis processing is performed based on posting of a photograph, the emotion type can be classified based on the content and color tone of the image included in the photograph.
In addition, for example, when emotion analysis processing is performed based on posting of a voice memo, the emotion type can be classified based on the voice tone.”; bio-information includes at least a voice, as evidenced by instant application [0029]); and a second controller configured to, based on the analyzed state of the at least one user and the preference information, control at least one of a plurality of user devices for providing a service, implementing the user-specific coping method, to the at least one user (see at least Kusanagi [0050]: “In the environment table 26C, a music setting number, a scent setting number, and a lighting setting number are defined for each type of emotion that is a characteristic of a specific user. FIG. 3 shows an example of the environment table 26C…Then, in the environment table 26C, setting numbers of music, scent, and lighting are defined for each type. Here, the symbols M0 to M3 denote music setting numbers, the symbols S0 to S3 mean scent setting numbers, and the symbols I0 to I3 denote lighting setting numbers.”; a user device providing a service includes at least a device providing music, scent, or lighting, as evidenced by instant application [0079]).

Kusanagi fails to expressly disclose receiving preference information from a metaverse server and providing emotion care service in a metaverse. However, Zhou teaches receive…from a metaverse server, preference information associated with at least one user of a plurality of users (see at least Zhou page 7: “For instance, the objects and contents in the Vetaverse can dynamically vary, and the avatar types can also change according to the driver’s preferences.”) wherein the service comprises an emotion care service linked to activity-based preferences, of the at least one user, in the metaverse implemented by the metaverse server (see at least Zhou page 6: “In Vetaverse, it is also possible to apply emotion recognition to healthcare. Emotion recognition-based healthcare can be implemented by capturing users’ facial images, speech, or electrocardiograms via cameras and wearable devices. For example,…[182] proposed a digital twin model, which is capable of conducting real-time emotion recognition for personalized healthcare using an end-to-end framework. In this way, the health conditions of users can be monitored.”; page 5: “The multi-parameters of the driver’s physical condition are important indicators to measure the healthy driving state. Fatigue driving, drunken driving, and sudden illness during driving will cause changes in these parameters, and they are also one of the main causes of traffic accidents. Therefore, it is necessary for Vetaverse to monitor the values of these parameters and opportunely adjust the vehicle’s driving state to make prevention…When the detected parameters are abnormal, Vetaverse is supposed to inform relatives, friends, or traffic agencies of the driver’s dangerous driving circumstance and propose treatment or relief programs. For instance, VR has been proven to relieve anxiety and pain, and reduce heart rate and blood pressure [204].”; page 6: “Emotion recognition can be used in driver digital twins to monitor drivers’ conditions, which can help supervise drivers’ emotional conditions and prevent them from dangerous driving behaviour. For instance, [224] designed a driver state monitoring (DSM) system to analyze drivers’ conditions and sound the alarm by emotion recognition based on monitoring drivers’ faces. Biophysiological signals captured by wearable devices can also help to estimate if the driver is under abnormal emotional conditions such as stress and anger [83], [224]”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to modify the vehicle taught by Kusanagi with the metaverse server taught by Zhou with reasonable expectation of success.
Zhou is directed towards the related field of vehicle metaverse technologies and applications. Therefore, one of ordinary skill in the art would be motivated to combine Kusanagi with Zhou to improve the commute experience for drivers and passengers (see at least Zhou page 16: “The recommendation system (RecSys) is a vital service in the IV-Metaverse, primarily used to improve the driver’s and passengers’ commute experience.”; page 5: “For instance, VR has been proven to relieve anxiety and pain, and reduce heart rate and blood pressure [204].”).

Kusanagi in view of Zhou fail to expressly disclose wherein the preference information is generated in a metaverse based on activities performed through a virtual character and specifies a user-specific coping method. However, Adi teaches wherein the preference information is generated in a metaverse implemented by the metaverse server based on activities performed by the at least one user through a virtual character, and the preference information specifies, for a same emotional state, a user-specific coping method among a plurality of different coping methods (see at least Adi page 1: “The ultimate aim of the project is to train computers to be able to develop personalized therapy regimens to help ensure each patient gets a customized program for them.”; page 1: “To begin with, the team explored how they could make the kind of tasks undertaken in physical therapy into a gamified application. They started to deploy traditional game features such as scoring, challenges and avatars…The process, which was all undertaken within a virtual reality environment, saw participants tasked with avoiding obstacles as they moved through the virtual world. The system would then model their actions before mirroring them via a virtual avatar.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to modify the vehicle taught by Kusanagi and Zhou with Adi with reasonable expectation of success. Adi is directed towards the related field of using virtual reality and virtual avatars to develop personalized therapy. Zhou teaches a metaverse can use a recommendation system for drivers and passengers, and an emotional avatar can be used in a metaverse to improve the user’s emotional experience and driving safety. Therefore, one of ordinary skill in the art would be motivated to combine Kusanagi in view of Zhou with Adi to improve efficiency and scalability (see at least Adi page 1: “The ultimate aim of the project is to train computers to be able to develop personalized therapy regimens to help ensure each patient gets a customized program for them. “Using people to individually evaluate others is not efficient or sustainable in time or human resources and does not scale up well to large numbers of people,” the team say. “We need to train computers to read individual people. Gamification explores the idea that different people are motivated by different things.””).

Regarding claim 2, Kusanagi in view of Zhou and Adi teach all elements of the vehicle according to claim 1 as explained above. Kusanagi further teaches wherein the first controller comprises: the at least the sensor configured to detect bio-information including at least one of: a facial expression; a respiration rate; a blood pressure; a voice; a body temperature; or a pulse of the user (see at least Kusanagi [0121]: “For example, when emotion analysis processing is performed based on posting of a photograph, the emotion type can be classified based on the content and color tone of the image included in the photograph.
In addition, for example, when emotion analysis processing is performed based on posting of a voice memo, the emotion type can be classified based on the voice tone.”; Kusanagi teaches bio-information including at least a voice); and a converter configured to: based on the detected bio-information, determine the state of the at least one user; and convert the determined state of the at least one user to a numerical value (see at least Kusanagi [0120]: “In each of the above embodiments, only emotion classification of a specific user is performed in emotion analysis processing, but this is not limited, and a level indicating the degree of emotion may be determined based on the score imparted in step S101 in FIG. 8. Thus, the level can be reflected in the vehicle interior environment in addition to the classified emotion type. For example, when the emotion type is classified as “anger”, as the level of anger is increased, the volume of music can be turned up, the scent can be increased, and the illumination of the lighting can be increased.”).

Regarding claim 3, Kusanagi in view of Zhou and Adi teach all elements of the vehicle according to claim 2 as explained above. Kusanagi further teaches wherein the plurality of user devices comprise at least one of: a mood lamp; a fragrance dispenser; an air conditioning device; an audio device, a video device; a seat massager; or an electric curtain (see at least Kusanagi [0042]: “The adjustment device 38 includes an audio device 38A, an air conditioner 38B, and a lighting device 38C (see FIG. 5A).”); and wherein the second controller is further configured to: based on the preference information associated with the at least one user, classify a preference of the at least one user as one of preset types (see at least Kusanagi [0050]: “In the environment table 26C, a music setting number, a scent setting number, and a lighting setting number are defined for each type of emotion that is a characteristic of a specific user.”); and based on the one of the preset types and the numerical value converted by the converter, control the at least one of the plurality of user devices (see at least Kusanagi [0120]: “In each of the above embodiments, only emotion classification of a specific user is performed in emotion analysis processing, but this is not limited, and a level indicating the degree of emotion may be determined based on the score imparted in step S101 in FIG. 8. Thus, the level can be reflected in the vehicle interior environment in addition to the classified emotion type. For example, when the emotion type is classified as “anger”, as the level of anger is increased, the volume of music can be turned up, the scent can be increased, and the illumination of the lighting can be increased.”).

Regarding claim 5, this claim recites a method performed by the vehicle of claim 1. The combination of Kusanagi in view of Zhou and Adi also teaches a method performed by the vehicle of claim 1 as outlined in the rejection to claim 1 above. Therefore, claim 5 is rejected for the same rationale as claim 1.

Regarding claim 6, Kusanagi in view of Zhou and Adi teach all elements of the method according to claim 5 as explained above.
Zhou further teaches wherein the receiving the preference information comprises: causing the metaverse server to generate the character played by the at least one user who is connected thereto through a user terminal (see at least Zhou page 6: “Avatar emotion reconstruction is more appealing than the traditional communication methods such as texting, phone calls, and video meetings because users can use dynamic avatars to represent different emotional states and interact with the Vetaverse participants [30]…Various sensors can capture different signals for ML-based emotion recognition and avatar emotion reconstruction.”), and causing the metaverse server to, based on a history of activities performed by the at least one user through the character, analyze a preference of the at least one user (see at least Zhou pages 16-17: “XR can greatly improve onboard RecSys with richer and personalized vehicle-human interaction. A key player in this direction is multimodal information fusion. UMPR [217] is a deep multimodal preferences-based recommendation method which captures the textual and visual matching of users and items for recommendation. In IV-Metaverse, the multimodal information gathered via V2V communications can provide the system with richer background and better performance.”).

Regarding claim 7, Kusanagi in view of Zhou and Adi teach all elements of the method according to claim 6 as explained above. Kusanagi further teaches wherein the receiving the preference information further comprises: classifying different methods of coping with different states of the at least one user (see at least Kusanagi [0050]: “As shown in FIG. 3, in the environment table 26C, the characteristics of a specific user are classified as four types “pleasure” expressing an emotion of pleasure, “anger” expressing an emotion of anger, “sadness” expressing an emotion of sadness, and “calmness” expressing an emotion of calmness. Then, in the environment table 26C, setting numbers of music, scent, and lighting are defined for each type.”).

Regarding claim 8, Kusanagi in view of Zhou and Adi teach all elements of the method according to claim 5 as explained above. Kusanagi further teaches wherein the receiving the preference information further comprises: receiving the preference information stored in a database, associated with account information of the at least one user, of the vehicle management server (see at least Kusanagi [0063]-[0064]: “In addition, as the user registration information, it is possible to register preference information regarding the vehicle interior environment such as the user's favorite tune, scent, and color…In step S2, the center server 20 executes user registration processing based on the user registration information acquired from the mobile terminal 50. Specifically, the setting unit 24 generates a unique user ID and writes the generated user ID and the acquired user registration information in a new area of the ID information table 26B.”).

Regarding claim 10, this claim recites a method performed by the vehicle of claim 2 as explained above. Therefore, claim 10 is rejected for the same rationale as claim 2.

Regarding claim 11, this claim recites a method performed by the vehicle of claim 3 as explained above. Therefore, claim 11 is rejected for the same rationale as claim 3.

Regarding claim 13, Kusanagi in view of Zhou and Adi teach all elements of the method according to claim 11 as explained above.
Kusanagi further teaches wherein the controlling further comprises: identifying the one of the preset types that corresponds to the numerical value (see at least Kusanagi [0050]: “In the environment table 26C, a music setting number, a scent setting number, and a lighting setting number are defined for each type of emotion that is a characteristic of a specific user.”); and identifying a control operation, of the at least one of the plurality of user devices, corresponding to the one of the preset types (see at least Kusanagi [0120]: “In each of the above embodiments, only emotion classification of a specific user is performed in emotion analysis processing, but this is not limited, and a level indicating the degree of emotion may be determined based on the score imparted in step S101 in FIG. 8. Thus, the level can be reflected in the vehicle interior environment in addition to the classified emotion type. For example, when the emotion type is classified as “anger”, as the level of anger is increased, the volume of music can be turned up, the scent can be increased, and the illumination of the lighting can be increased.”).

Regarding claim 15, Kusanagi in view of Zhou and Adi teach all elements of the method according to claim 5 as explained above. Zhou further teaches wherein the preference information is based on a manipulation of a virtual character associated with the at least one user in the metaverse implemented by the metaverse server (see at least Zhou page 6: “Avatar emotion reconstruction is more appealing than the traditional communication methods such as texting, phone calls, and video meetings because users can use dynamic avatars to represent different emotional states and interact with the Vetaverse participants [30]”; page 17: “An emotional avatar is essentially a digital representation through VR methods, which can adaptively adjust itself to the appropriate appearance according to the user’s emotional situation.”).
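For orientation, the Kusanagi mechanism relied on throughout these rejections, an environment table keyed by emotion type ([0050]) whose settings are scaled by an emotion level ([0120]), can be sketched as follows. The four emotion types and the setting-number labels M0-M3, S0-S3, and I0-I3 come from the quoted passages; which setting number pairs with which emotion, and the function shape, are hypothetical:

```python
# Hypothetical sketch of Kusanagi's environment table: each classified
# emotion type selects music/scent/lighting setting numbers ([0050]), and
# the emotion level scales the intensity of those settings ([0120]).
ENV_TABLE = {
    # emotion type -> (music, scent, lighting) setting numbers;
    # these particular pairings are illustrative, not from the reference.
    "pleasure": ("M1", "S1", "I1"),
    "anger":    ("M2", "S2", "I2"),
    "sadness":  ("M3", "S3", "I3"),
    "calmness": ("M0", "S0", "I0"),
}

def adjust_interior(emotion: str, level: int) -> dict:
    """Return device settings for an emotion type, scaled by its level."""
    music, scent, lighting = ENV_TABLE[emotion]
    # e.g. as the level of "anger" rises, volume/scent/illumination rise
    return {"music": music, "scent": scent,
            "lighting": lighting, "intensity": level}

print(adjust_interior("anger", 3))
```

The point of the sketch is the two-step structure the rejection maps onto the claims: a per-type lookup (the "preset types") followed by level-based scaling (the "numerical value").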
Regarding claim 16, this claim recites a vehicle performing the method of claim 13 as explained above. Therefore, claim 16 is rejected for the same rationale as claim 13.

Regarding claim 18, this claim recites a vehicle performing the method of claim 15 as explained above. Therefore, claim 18 is rejected for the same rationale as claim 15.

Regarding claim 19, this claim recites a method performed by the vehicle of claim 1. The combination of Kusanagi in view of Zhou and Adi also teaches a method performed by the vehicle of claim 1 as outlined in the rejection to claim 1 above. Therefore, claim 19 is rejected for the same rationale as claim 1.

Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kusanagi in view of Zhou and Adi, and further in view of Liu et al., U.S. Patent Application Publication No. 2024/0270274 A1 (hereinafter Liu).

Regarding claim 4, Kusanagi in view of Zhou and Adi teach all elements of the vehicle according to claim 3 as explained above. Kusanagi further teaches wherein: the at least one of the plurality of user devices, in the individualized service mode, is controlled based on a current state of each of the plurality of users; and the at least one of the plurality of user devices, in the batch service mode, is controlled uniformly for the plurality of users (see at least Kusanagi [0115]: “In addition, for example, an “attribute” may be classified as the characteristic of a specific user. Specifically, in the environment table 26C, the attribute of a specific user is classified into a plurality of types such as “group” representing a group, “couple” representing lovers or a couple, “child” representing “accompanied by a child”, and “single” indicating one-person ride. Then, the determination unit 23 classifies the attributes from the posted contents, posting time, and the like of the SNS, whereby it is possible to play animation songs when moving with children and to wrap the vehicle interior with soft light when lovers move together. That is, the adjustment device 38 can adjust the vehicle interior environment based on the attribute of the specific user.”).

Kusanagi in view of Zhou and Adi fail to expressly disclose switching a mode of controlling from an individualized mode to a batch mode based on a target value. However, Liu teaches wherein the second controller is further configured to: determine that the numerical value reaches a target value by controlling the at least one of the plurality of user devices (see at least Liu [0093]: “In step S32, the correlation between the operating preference data and the health level information is determined based on the cumulative impact score. As an example, a level of correlation can be directly shown in the form of cumulative score: the larger the score, the greater the correlation. As another example, different grading thresholds may be additionally set and the obtained cumulative impact score may be compared with each grading threshold so as to qualitatively determine the level of correlation between the operating preference data and the health level information.”); and based on the determination that the numerical value reaches the target value, switch a mode of controlling the at least one of the plurality of user devices from an individualized service mode to a batch service mode (see at least Liu [0073]: “However, in the case where the device 1 is directly arranged in the vehicle locally, it is also possible to output the analyzed intelligent health report of the cabin via the smart antenna module 240 to the mobile terminal 260 of the user, so that the user can understand the relationship between his usage habits and the cabin health, and this also facilitates remote turning on or off of the corresponding in-vehicle functions by the user.”; [0078]: “As an example, correlations may be analyzed individually for different areas (e.g., driver cabin, front passenger cabin, rear cabin) within the cabin. As another example, it is also possible to analyze correlations as a whole for the entire cabin area.”; [0096]: “The result of such output, in one aspect, takes into account an expected operating behavior (such as adjusting the air conditioner to a preset temperature or opening the vehicle window) of the user group for this environmental factor or physical state, and meanwhile further takes into account the influence of the operating behaviors of the user on the health level in the cabin, so the model is allowed to seek a balance between user comfort and the health level in the cabin through a continuous iterative training process, and a final adjustment strategy can be obtained while taking into account both as much as possible.”; under broadest reasonable interpretation switching from an individualized service mode to a batch service mode includes switching control based solely on user preference to control based on health of the vehicle cabin).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to modify the vehicle taught by Kusanagi in view of Zhou and Adi with the batch service mode based on a target value determination taught by Liu with reasonable expectation of success. Liu is directed towards the related field of intelligent health management of a vehicle cabin. Therefore, one of ordinary skill in the art would be motivated to combine Kusanagi in view of Zhou and Adi with Liu to satisfy different user preferences while taking into account influence on the vehicle environment (see at least Liu [0004]: “However, the aforementioned current solutions still have many shortcomings, in particular in that these known monitoring solutions can only conduct simple collection and report of indoor environmental parameters, but cannot let users know which operating habits they have in the vehicle will produce which kind of influence on the environmental health in the vehicle. In addition, when facing highly individualized user groups, uniformly formed adjustment measures are incapable of accurately meeting the preferences of different users.”).

Regarding claim 12, this claim recites a method performed by the vehicle of claim 4 as explained above. Therefore, claim 12 is rejected for the same rationale as claim 4.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Kusanagi in view of Zhou and Adi, and further in view of Fields et al., U.S. Patent Application Publication No. 2023/0070573 A1 (hereinafter Fields).

Regarding claim 9, Kusanagi in view of Zhou and Adi teach all elements of the method according to claim 5 as explained above. Kusanagi in view of Zhou and Adi fail to expressly disclose checking a communication status between the vehicle management server and the vehicle. However, Fields teaches wherein the receiving the preference information comprises: checking a communication connection status between the vehicle management server and the vehicle that is associated with account information of the at least one user (see at least Fields [0077]: “Upon receiving an indication of vehicle operation at block 402, the on-board computer 114 may determine the configuration and operating status of the autonomous operation features (including the sensors 120 and the communication component 122) at block 404.”; [0046]: “The front-end components 102 may further include a communication component 122 to transmit information to and receive information from external sources, including other vehicles, infrastructure, or the back-end components 104. In some embodiments, the mobile device 110 may supplement the functions performed by the on-board computer 114 described herein by, for example, sending or receiving information to and from the mobile server 140 via the network 130.”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to modify the vehicle taught by Kusanagi in view of Zhou and Adi with the communication status taught by Fields with reasonable expectation of success. Fields is directed towards the related field of autonomous vehicle operating status assessment. Therefore, one of ordinary skill in the art would be motivated to combine Kusanagi in view of Zhou and Adi with Fields to determine a risk level associated with an operating status (see at least Fields [0078]: “When no changes have been made to the settings, the method 400 may further check for changes to the environmental conditions and/or operating status of the autonomous operation features at block 424. If changes are determined to have occurred at block 424, the one or more risk levels may be determined based upon the new settings at block 412, as at block 422.”). Claims 14 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kusanagi in view of Zhou and Adi, and further in view of Aoki et al., U.S. Patent Application Publication No. 2002/0125332 A1 (hereinafter Aoki). Regarding claim 14, Kusanagi in view of Zhou and Adi teach all elements of the method according to claim 5 as explained above. Kusanagi in view of Zhou and Adi fail to expressly disclose switching a mode of controlling from an individualized mode to a batch mode based on a plurality of numerical values reaching target values. 
However, Aoki teaches wherein the controlling further comprises:

determining that a first numerical value associated with a first user of the plurality of users reaches a first target value by controlling, based on a first individualized service mode associated with the first user, at least one first user device of the plurality of user devices (see at least Aoki [0098]: “Further, in the embodiment, on the basis of the solar radiation direction (θ, φ) determined at step 350, air flow distribution control means for distributing the air flow amount for each passenger seat is controlled. For the passenger seat in which the passenger receives heat by solar radiation, control for increasing the air flow distribution as compared with the other seats is performed.”; [0100]: “On a passenger seat into which solar radiation shines, the air-flow distribution control is performed so that the air flow distribution for this passenger seat is increased as compared with the other seats. Thus, in accordance with the solar radiation direction (θ, φ), the air-blowing amount is controlled by the control amount responsive to an increment of heat load received by the passengers and the vehicle.”; under broadest reasonable interpretation a user device includes an air outlet for a user’s seat);

determining that a second numerical value associated with a second user of the plurality of users reaches a second target value by controlling, based on a second individualized service mode associated with the second user, at least one second user device of the plurality of user devices (see at least Aoki [0098]: “Further, in the embodiment, on the basis of the solar radiation direction (θ, φ) determined at step 350, air flow distribution control means for distributing the air flow amount for each passenger seat is controlled. For the passenger seat in which the passenger receives heat by solar radiation, control for increasing the air flow distribution as compared with the other seats is performed.”; [0100]: “On a passenger seat into which solar radiation shines, the air-flow distribution control is performed so that the air flow distribution for this passenger seat is increased as compared with the other seats. Thus, in accordance with the solar radiation direction (θ, φ), the air-blowing amount is controlled by the control amount responsive to an increment of heat load received by the passengers and the vehicle.”; under broadest reasonable interpretation a user device includes an air outlet for a user’s seat; Aoki [0042] discloses four seating areas each capable of independent temperature control); and

based on the first numerical value reaching the first target value and the second numerical value reaching the second target value, switching from the first and second individualized service modes to a batch service mode, wherein the at least one of the plurality of user devices, in the batch service mode, is controlled uniformly for the plurality of users (see at least Aoki [0100]-[0101]: “On a passenger seat into which solar radiation shines, the air-flow distribution control is performed so that the air flow distribution for this passenger seat is increased as compared with the other seats. Thus, in accordance with the solar radiation direction (θ, φ), the air-blowing amount is controlled by the control amount responsive to an increment of heat load received by the passengers and the vehicle…That is, when the solar radiation does not shine into the vehicle compartment, the solar radiation amount Ts is not corrected. In this case, through the use of the intensity of solar radiation detected by the solar radiation sensor 50, the target air temperature TAO is determined to compute the control amount. Accordingly, the air flow distribution control is not performed, but air is uniformly blown to each seat.”; Aoki discloses a batch service mode when the first and second numerical values reach target values because air flow is uniformly distributed to each seat when the solar radiation is not impacting the temperature of any vehicle seat).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to modify the vehicle taught by Kusanagi in view of Zhou and Adi with the control taught by Aoki with reasonable expectation of success. Aoki is directed towards the related field of controlling a vehicle air conditioner based on detected solar radiation. Therefore, one of ordinary skill in the art would be motivated to combine Kusanagi in view of Zhou and Adi with Aoki to improve passenger comfort (see at least Aoki [0006]: “It is another object of the present invention to provide a vehicle air conditioner using a solar radiation detection unit, capable of performing more comfortable air-conditioning operation for passengers of the vehicle.”).

Regarding claim 17, this claim recites a vehicle performing the method of claim 14 as explained above. Therefore, claim 17 is rejected for the same rationale as claim 14.

Allowable Subject Matter

Claim 20 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: The prior art teaches controlling a vehicle based on user preference information (Kusanagi [0050]) and an AR recommendation system (Zhou pages 16-17). The prior art teaches using metaverse avatars for emotion recognition (Zhou pages 5-6) and using avatars to improve a user’s emotional experience and driving safety (Zhou pages 17-18).
The prior art also teaches personalizing mental health treatment using the metaverse (see Cerasa et al., “The promise of the metaverse in mental health: the new era of MEDverse”). However, the known prior art fails to disclose or suggest each and every limitation together as claimed in claim 20. Additionally, the examiner cannot determine a reasonable motivation, either in the known prior art or the existing case law, to combine the known elements to render the claimed invention. Therefore, there is a lack of motivation to combine the prior art to achieve the claimed invention as recited in claim 20.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ELIZABETH J SLOWIK, whose telephone number is (571) 270-5608. The examiner can normally be reached MON - FRI: 0900-1700.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANISS CHAD, can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ELIZABETH J SLOWIK/
Examiner, Art Unit 3662

/ANISS CHAD/
Supervisory Patent Examiner, Art Unit 3662

Prosecution Timeline

Aug 23, 2023
Application Filed
May 08, 2025
Non-Final Rejection — §103
Aug 15, 2025
Response Filed
Oct 01, 2025
Final Rejection — §103
Jan 07, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583434
METHOD OF CONTROLLING HYBRID ELECTRIC VEHICLE
2y 5m to grant Granted Mar 24, 2026
Patent 12559088
Driver Assistance Based on Pose Detection
2y 5m to grant Granted Feb 24, 2026
Patent 12545297
METHODS AND SYSTEMS FOR GENERATING A LONGITUDINAL PLAN FOR AN AUTONOMOUS VEHICLE BASED ON BEHAVIOR OF UNCERTAIN ROAD USERS
2y 5m to grant Granted Feb 10, 2026
Patent 12535318
DETERMINING SCANNER ERROR
2y 5m to grant Granted Jan 27, 2026
Patent 12499763
Reporting Road Event Data and Sharing with Other Vehicles
2y 5m to grant Granted Dec 16, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
46%
Grant Probability
64%
With Interview (+18.3%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 65 resolved cases by this examiner. Grant probability derived from career allow rate.
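The headline figures above are consistent with simple arithmetic on the examiner's career data. A minimal sketch, assuming the dashboard's formulas are a plain allow-rate ratio plus the interview lift in percentage points (the page does not publish its exact methodology, so these functions are illustrative):

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that were granted."""
    return granted / resolved

def with_interview(base_rate: float, lift: float) -> float:
    """Add the interview lift (expressed as a fraction, e.g. 0.183 = +18.3pp)."""
    return base_rate + lift

base = grant_probability(30, 65)              # 30 granted / 65 resolved
print(f"{base:.0%}")                          # 46% — matches "Grant Probability"
print(f"{with_interview(base, 0.183):.0%}")   # 64% — matches "With Interview (+18.3%)"
```

This reproduces the 46% and 64% figures shown in the projections, on the assumption that the interview lift is added directly to the base rate rather than modeled conditionally.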
