Prosecution Insights
Last updated: April 19, 2026
Application No. 17/941,351

TECHNIQUE FOR PROVIDING A USER-ADAPTED SERVICE TO A USER

Non-Final OA (§103, §112)
Filed: Sep 09, 2022
Examiner: MALKOWSKI, KENNETH J
Art Unit: 3667
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: 2Hfutura SA
OA Round: 3 (Non-Final)
Grant Probability: 75% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 75% (480 granted / 642 resolved; +22.8% vs TC avg, above average)
Interview Lift: +19.1% (strong; allowance rate among resolved cases with vs. without interview)
Typical Timeline: 2y 7m avg prosecution; 22 applications currently pending
Career History: 664 total applications across all art units

Statute-Specific Performance

§101: 8.3% (-31.7% vs TC avg)
§103: 40.7% (+0.7% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 27.7% (-12.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 642 resolved cases.
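As a sanity check on the panels above, the headline percentages follow directly from the raw counts shown there. A short illustrative computation (the rounding conventions and the treatment of the TC delta as percentage points are my assumptions, not taken from the analytics tool):

```python
# Illustrative arithmetic only: reproduce the headline figures in the panels
# above from the raw counts shown there. Rounding is an assumption.

granted, resolved = 480, 642                 # "480 granted / 642 resolved"
career_allow_rate = round(granted / resolved * 100, 1)   # ~74.8, shown as "75%"

tc_delta = 22.8                              # "+22.8% vs TC avg"
implied_tc_avg = round(career_allow_rate - tc_delta, 1)  # implied TC average, ~52.0%

base, with_interview = 75, 94                # grant probability without / with interview
interview_gain = with_interview - base       # 19 points, consistent with "+19.1% lift"

print(career_allow_rate, implied_tc_avg, interview_gain)
```

Note the 94% "With Interview" figure is a per-application prediction, while the +19.1% "Interview Lift" is a career-wide statistic; the two are consistent but not the same quantity.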

Office Action

§103 §112
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/24/25 has been entered.

Response to Amendment

The amendment filed 11/24/25 has been accepted and entered. Accordingly, claims 15-16, 19 and 25-26 are amended. Claims 1-14 and 23-24 were previously canceled. Accordingly, claims 15-22 and 25-27 are examined herein.

Response to Arguments

Applicant's arguments with respect to the pending claims have been considered but are moot in view of the new grounds of rejection necessitated by Applicant's amendment. However, at least one argument remains relevant to the current rejection. With respect to claim 15, Applicant asserts (Amend. 7-8) that a user being asked to provide input as follows: "Your preferred driving style in highway is: A) quick and fast; B) steady and smooth" from Haui does not read on "the claimed at least one answer is a use case specific answer which specifically relates to a particular vehicle ride to be provided to the user". Applicant reasons this is necessarily so because the user's answer as to preference can be applied "generically . . . across vehicle rides". However, under broadest reasonable interpretation of the claim language (i.e., "use case" and "relates to") and Applicant's specification, use case specific answers relating to a particular vehicle ride can be answers that are further used across vehicle rides.
For example, the specification fails to provide a limiting definition of a use case goal or preference, but appears to indicate a "use case" related goal or preference can be any service provided to the user in the future that is adapted for that user (Spec. ¶ 69 "The use case related goals and preferences of a user may specifically relate to the user-adapted service provided to the user. Such goals and preferences may herein also be denoted as "actual personality information" of the user as they specifically relate to the "actual" user adapted service currently being (or about to be) provided to the user."). The specification further indicates "exemplary questions regarding use case related goals, preferences and/or moods are shown in the tables presented herein below", wherein Table 4 provides an exemplary listing of "questions specifically relating to a vehicle ride use case" and that these are "sets of use case related questions . . . and that various other types of questions for these and other use cases are generally conceivable, as long as the questions are directed to use case related goals, preferences and/or moods in the above sense. From the exemplary sets of questions presented in Tables 4 to 7, it may easily be seen how these types of questions (specifically directed to the "actual" use case) distinguish from the questions on "general" goals and preferences of the user shown in Tables 2 and 3, which are use case independent" (Spec. ¶ 70). For example, with respect to Tables 2 and 3, a general question, as opposed to a use case specific question, would be "how old are you", "what is your favorite color", etc. This is what Applicant uses to describe the metes and bounds of something outside of a use case specific question/answer.
In contrast, Applicant explicitly states the answer to the question "Do you like a comfortable ride experience or a sporty one?" is a use case specific answer relating to a particular ride since, as stated in Table 4, it is a "vehicle ride use case". This is literally the only example provided by Applicant of a use case specific answer for a user preference specifically relating to a particular vehicle ride. It is unclear how this differs at all from the example cited in the Amendment from Haui (Amend. p. 7). Both answers, i.e., I prefer driving quick and fast on a highway drive (Haui) and I prefer a sporty ride over a comfortable ride (Spec., Table 4), can be applied "across vehicle rides". Furthermore, it is unclear how any user preference could not be applied across more than one vehicle ride given the term description and examples provided in the specification. The specification does not appear to describe any user preference that could not be applied across more than one vehicle ride. Accordingly, under a BRI, the examiner interprets answers that can be applied across vehicle rides as included within the definition of a use case specific answer relating to a particular ride, such that answers indicating a user prefers a sporty ride over a comfortable ride and a quick/fast ride over a steady and smooth ride are considered to be a "use case specific answer which specifically relates to a particular vehicle ride". In addition, Soma also discloses an answer being use case specific since it specifically relates to a particular vehicle ride provided to the user, as highlighted in the reformulated rejection below (i.e., flowchart for a particular vehicle ride to the user, FIG. 5 including steps 104, 116A where questions can be asked and answers are used to tailor the vehicle ride according to the user answer at steps 112A-B, 114B, 122A, i.e., ¶ 48 answer to questionnaire, how are you feeling? I am very calm; 54 use answer to determine the state of the user; 63 step 112A, ask a question about a preference of the user, provide information on the region that the user is traveling in the particular ride provided to the user; 81 based on answer provide customized output like music, information content, etc. tailored to user; 84 information output in accordance with user answer; 86 output according to preference of the user; 101 questions are used to estimate preferences of the user; claim 3).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 15-22 and 25-27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
With respect to claim 15, and similarly independent claims 25-26, the metes and bounds of what is and is not included in the limitation "input regarding the user includes actual personality information of the user obtained from at least one answer to at least one question posed to the user, the at least one answer being a use case specific answer which specifically relates to a particular vehicle ride to be provided to the user, the at least one answer including at least one of: (a) one or more answers directed to one or more preferences of the user specifically relating to the particular vehicle ride to be provided to the user, and (b) one or more answers directed to one or more goals of the user specifically relating to the particular vehicle ride to be provided to the user" are unclear under a broadest reasonable interpretation in view of the specification and remaining claim language. Specifically, the limitation "use case specific answer which specifically relates to a particular vehicle ride" is unclear in view of Applicant's arguments, the specification and the remaining claim language. For example, under a broadest reasonable interpretation of the claim language (i.e., "use case" and "relates to") and Applicant's specification, use case specific answers relating to a particular vehicle ride can be answers that are further used across vehicle rides. For example, the specification fails to provide a limiting definition of a use case goal or preference, but indicates "exemplary questions regarding use case related goals, preferences and/or moods are shown in the tables presented herein below", wherein Table 4 provides an exemplary listing of "questions specifically relating to a vehicle ride use case" and that these are "sets of use case related questions . . . and that various other types of questions for these and other use cases are generally conceivable, as long as the questions are directed to use case related goals, preferences and/or moods in the above sense.
From the exemplary sets of questions presented in Tables 4 to 7, it may easily be seen how these types of questions (specifically directed to the "actual" use case) distinguish from the questions on "general" goals and preferences of the user shown in Tables 2 and 3, which are use case independent" (Spec. ¶ 70). For example, with respect to Tables 2 and 3, a general question, as opposed to a use case specific question, would be "how old are you", "what is your favorite color", etc. This is what Applicant uses to describe the metes and bounds of something outside of a use case specific question/answer. In contrast, Applicant explicitly states the answer to the question "Do you like a comfortable ride experience or a sporty one?" is a use case specific answer relating to a particular ride since, as stated in Table 4, it is a "vehicle ride use case". This is literally the only example provided by Applicant of a use case specific answer for a user preference specifically relating to a particular vehicle ride. It is unclear how this differs at all from the example cited in the Amendment from Haui (Amend. p. 7). Both answers, i.e., I prefer driving quick and fast on a highway drive (Haui) and I prefer a sporty ride over a comfortable ride (Spec., Table 4), can be applied "across vehicle rides". Furthermore, it is unclear how any user preference could not be applied across more than one vehicle ride given the term description and examples provided in the specification. The specification does not appear to describe any user preference that could not be applied across more than one vehicle ride.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 15-18 and 25-27 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 20140309790 to Ricci et al. (Ricci) in view of U.S. 20180357473 to Soma et al. (Soma) and further in view of US 20170274908 to Haui et al. (Haui).

With respect to claims 15 and 25-26, Ricci discloses a method for providing a user-adapted service to a user, the method comprising: obtaining a digital representation of personality data of a user, the personality data of the user being computed based on input regarding the user, wherein the input regarding the user includes actual personality information of the user (i.e., profile data 252; personality module 2004; 252, 2008, 2004, 2028 FIG. 20 and corresponding description; FIG. 21 and corresponding description; ¶¶ 367-370 virtual personality module 2004, personality data memory . . . Input corresponding to personality information may be stored in the personality data memory 2028 . . . matching a virtual personality with a personality of a user 216 . . . interpret a user's behavior observed via the vehicle and/or non-vehicle sensors 242, 236 . . . detecting a context may correspond to an emotional state of the user . . . preferences 2024 can be associated with a user . . . personality matching module 2008 may communicate with a user profile . . . predictive behavior based on a defined personality . . . 
monitoring input made by a user (e.g., voice commands, content, context, etc.) . . . virtual personality that is identical to the personality of the user 216 . . . virtual personality meet the preferences of the user; 373 information received by one or more sensors can include inquiries, input by a user, voice commands, intelligent personal assistant history . . . associated with user or vehicle 104 . . . profile data; 380 virtual personality information stored . . . includes virtual personality, a personality type of the user, personality preferences) (¶¶ 381-382 presenting the virtual personality to a user 216 may include altering one or more features of the vehicle 104. For instance, one or more features of the vehicle 104 may be altered to change a mood associated with the virtual personality. Continuing this example, the personality module 2004 may communicate with the vehicle control system 204 to change an internal lighting, an infotainment setting, a temperature, an oxygen level, an air composition, a comfort setting, a seat position, a transmission setting (e.g., automatic to manual, paddle shifting, and more), a navigation output, etc. . . . asking the user questions . . . user may respond by providing audible input responses . . . personality matching module 2008 may receive user input via a user profile associated with the user 216, voice input, visual input, tactile input, manual input, etc., and/or combinations thereof. Continuing this example, the matching engine 2012 may determine a virtual personality that suits the user input received . . . 
virtual personality should be generated to match the user's 216 happiness) processing the digital representation of the personality data to provide a user-adapted service to the user, wherein providing the user-adapted service to the user comprises at least one of: adapting a driving configuration of the vehicle to the user, adapting an environmental condition in a passenger cabin of the vehicle to the user, and adapting a user-specific setting regarding a passenger cabin of the vehicle to the user. (¶ 381 presenting the virtual personality to a user 216 may include altering one or more features of the vehicle 104. For instance, one or more features of the vehicle 104 may be altered to change a mood associated with the virtual personality. Continuing this example, the personality module 2004 may communicate with the vehicle control system 204 to change an internal lighting, an infotainment setting, a temperature, an oxygen level, an air composition, a comfort setting, a seat position, a transmission setting (e.g., automatic to manual, paddle shifting, and more), a navigation output, etc.). Although Ricci discloses determining a personality of a user by asking questions to a driver that result in various adaptations as cited above, i.e., ¶ 381, etc., Ricci may not explicitly disclose that the personality obtained is a use case specific answer that specifically relates to a particular vehicle ride to be provided to the user, the at least one answer directed to a preference of the user specifically relating to the particular vehicle ride provided to the user at a time of providing the driving service to the user. Soma, from the same field of endeavor, discloses determining a driver state by deriving a use case specific answer that specifically relates to a particular vehicle ride to be provided to the user (i.e., FIG. 5 process is for a particular vehicle ride provided to the user wherein outputs based on user answers occur during the ride provided to the user; cf. 
Spec. ¶¶ 69-70 “use case related goals and preferences may likewise be obtained from answers to questions posed to the user . . . exemplary questions regarding use case related goals, preferences and/or moods are shown . . . Table 4 provides an exemplary listing of questions specifically relating to a vehicle ride use case”; Table 4 “questions regarding vehicle ride use case . . . mood – are you relaxed . . . preference – do you like a comfortable ride experience or a sporty one?”) wherein the at least one answer directed to a preference of the user specifically relating to the particular vehicle ride provided to the user at a time of providing the driving service to the user. (i.e., flowchart for a particular vehicle ride to the user, FIG. 5 including steps 104, 116A where questions can be asked and answers are used to tailor the vehicle ride according to the user answer at step 112A-B, 114B, 122A, i.e., ¶ 48 answer to questionnaire, how are you feeling? I am very calm; 54 use answer to determine the state of the user 63 step 112a, ask a question about a preference of the user, provide information on the region that the user is traveling in the particular ride provided to the user; 81 based on answer provide customized output like music, information content, etc. tailored to user; 84 information output in accordance with user answer; 86 output according to preference of the user; 101 questions are used to estimate preferences of the user; claim 3) (FIG. 2, for vehicle X shown in FIG. 1, interface 1, microphone 192, imaging unit, in vehicle camera 191, voice output unit 17; FIG. 
5 steps 104 estimate emotion of user; 112A, 114A, 118A, 120A estimate preference on basis of reaction of user; 122A, 114B, and corresponding descriptions; ¶¶ 39 control unit 100 recognizes an answer to a questionnaire such as "How are you feeling now?" as the state of the target user on the basis of the operation detected by the operation input unit 16.; 48 control unit 100 may estimate the emotion of the target user, for example, on the basis of the answer to the questionnaire. For example, when the answer to the questionnaire is "I am very calm", the control unit 100 may estimate the type of emotion of the target user to be the positive emotion "calmness", and the value of the strength of the emotion of the target user to be a large value (for example, 3). When the answer to the questionnaire is "I am slightly irritated", the control unit 100 may estimate the type of emotion of the target user to be the negative emotion "disfavor", and the value of the strength of the emotion of the target user to be a small value (for example, 1).; 63 content of the information to be output may be various pieces of content, such as a question about a preference of the user, other questions . . . it is preferable for the question about the preference of the user to be included.; 88, 101-102, claim 3) (¶ 54 estimate the emotion of the target user . . . 
state of the target user generated by machine learning; 87 accurately estimate preference of the user on the basis of a highly reliable reaction) Accordingly, it would have been obvious to one of ordinary skill in the art at the time of effective filing date for the user personality information derived in Ricci to include deriving answers to use case specific questions which specifically relate to a particular vehicle ride being provided to the user wherein the answers are derived to be directed to the preferences of the user related to the particular vehicle ride being provided to the user, as taught by Soma above, in order to adapt an environmental condition in the passenger cabin of the particular vehicle in the particular vehicle ride (i.e., music, volume, information content) according to user preference thereby alleviating or eliminating a sense of discomfort of a user of a vehicle (Soma, ¶¶ 7, 29, 45-49, 51, 63-64, 83-84, 87, 95). Although Ricci in view of Soma disclose deriving use case specific answers which specifically relate to a particular vehicle ride being provided to the user as cited above, Ricci in view of Soma do not specifically articulate that the answer itself provided by the user includes information about a vehicle ride. Haui, from the same field of endeavor, teaches an invention to “personalize self-driving cars” (title) wherein the answers provided by the user contains goals or preferences specifically related to a vehicle ride, i.e., ride speed, ride quality as opposed to being derived to produce a particular ride quality. 
(¶¶ 2-3, 9 Corresponding to each interval, there are pre-acquired personalized sets of data for each user reflecting the user's choices of behaviors in different scenarios, preferred driving styles, and/or moral or ethics traits, which will be used by the robot in its control of the operations, a process hereby referred to as a personalized self-driving; 10 acquire user preferences data set of preferred behaviors of a self-driving car on a collection of roadway and traffic scenarios . . . invite the user inputting his or her opinions on a preferred handling behavior by selecting an answer among multiple choices or answering to a yes or no question . . . acquired preference data will then be stored in a data structure named the user preference data set, which has an entry for each user of the self-driving car; 15-30, i.e., 27-30 your preferred driving style on a highway is quick and fast or steady and smooth; FIG. 5 including 550, apply user preference data to control the operation of the car; FIG. 4 and corresponding description, i.e., apply user preference/driving styles) (¶¶ 27 "your preferred driving style in highway is: A Quick and fast, B. Steady and Smooth"; 11 "user preference . . . user profile . . . acquisition . . . could take place between the robot and the user . . . at the time of . . . requesting a service of a self-driving car"; 380, FIG. 3 "updating . . . training of the robot during the driving through interactions between the robot and the user on various roadway and traffic scenarios"; 3 "a continuing learning by the robot during the driving"; 2 "robot conducts real-time scene analysis of roadway and traffic events"; 8 "FIG. 5 Illustration of how to apply user data on roadway and traffic scenarios"; 9 "self-driving car keeps monitoring and detecting roadway and traffic conditions by its sensing sub-system and any events prompting for a responding adjustment will be analyzed"; Claim 2 "acquire the user preference data set . . . 
entry of the user preference data . . . through an interactive initialization process between the robot and the user at the time of . . . requesting the service of the car . . . or updating the acquired data . . . at the time of self-driving being used in a public roadway"; Claim 10 "the personalizing continues during the driving, comprising the robot executing guidance from a user in operation of the car and updating the user preference data set by the roadway and traffic scenario/guidance data pairs, through interactions between the robot; and the user over roadway and traffic scenarios"). Accordingly, it would have been obvious to one of ordinary skill in the art at the time of effective filing date for the answers provided by Ricci in view of Soma to contain goals or preferences specifically about a vehicle ride, i.e., vehicle ride speed, as taught by Haui above, in order to improve personalization of the driving service and to tailor the service to the user's needs (Haui, ¶¶ 2-3). In addition, the combination is further obvious since Ricci in view of Soma already disclose speech recognition capabilities that could detect words of a user indicating vehicle ride terms, wherein direct detection of keywords for ride preference, rather than deriving preference based on emotion, for example, may provide a faster response and take up fewer processing resources in some cases.

With respect to claim 16, Ricci in view of Soma and further in view of Haui disclose the at least one answer includes the one or more answers directed to the one or more preferences of the user specifically relating to the particular vehicle ride provided to the user and wherein the one or more preferences of the user are further obtained from body scan data indicative of characteristics of the user derivable by scanning at least a portion of the body of the user. 
(Haui, ¶¶ 2-3, 9 Corresponding to each interval, there are pre-acquired personalized sets of data for each user reflecting the user's choices of behaviors in different scenarios, preferred driving styles, and/or moral or ethics traits, which will be used by the robot in its control of the operations, a process hereby referred to as a personalized self-driving; 10 acquire user preferences data set of preferred behaviors of a self-driving car on a collection of roadway and traffic scenarios . . . invite the user inputting his or her opinions on a preferred handling behavior by selecting an answer among multiple choices or answering to a yes or no question . . . acquired preference data will then be stored in a data structure named the user preference data set, which has an entry for each user of the self-driving car; 15-30, i.e., 27-30 your preferred driving style on a highway is quick and fast or steady and smooth; FIG. 5 including 550, apply user preference data to control the operation of the car; FIG. 4 and corresponding description, i.e., apply user preference/driving styles) (Ricci, ¶¶ 7, 192-193, 205, 245-246, 302, 314, 346, 356, 376-379, 391, 445) (Soma, 191-192 FIG. 2; ¶¶ 37, 40, 49-53)

With respect to claim 17, Ricci in view of Soma and further in view of Haui disclose at least two different types of body scan data obtained from the user are combined to determine the at least one of the current mood of the user and the one or more preferences of the user. (Ricci, ¶¶ 7, 192-193, 205, 245-246, 302, 314, 346, 356, 376-379, 391, 445) (Soma, 191-192 FIG. 2; ¶¶ 37, 40, 49-53)

With respect to claim 18, Ricci in view of Soma and further in view of Haui disclose at least one of the one or more preferences of the user is obtained by eye-tracking or mouse-tracking the user. (Ricci, ¶¶ 194, 377, 435 "recording data about a user 216 (step 2808). 
The data may include, but is not limited to, one or more health data, such as heart rate, oxygen levels, glucose levels, blood composition, weight, movement, eye dilation, eye movement, gaze direction").

With respect to claim 27, Ricci in view of Soma and further in view of Haui disclose the driving configuration of the vehicle corresponds to a vehicle configuration that influences a driving behavior of the vehicle. (Haui, ¶¶ 2-3, 9 Corresponding to each interval, there are pre-acquired personalized sets of data for each user reflecting the user's choices of behaviors in different scenarios, preferred driving styles, and/or moral or ethics traits, which will be used by the robot in its control of the operations, a process hereby referred to as a personalized self-driving; 10 acquire user preferences data set of preferred behaviors of a self-driving car on a collection of roadway and traffic scenarios . . . invite the user inputting his or her opinions on a preferred handling behavior by selecting an answer among multiple choices or answering to a yes or no question . . . acquired preference data will then be stored in a data structure named the user preference data set, which has an entry for each user of the self-driving car; 15-30, i.e., 27-30 your preferred driving style on a highway is quick and fast or steady and smooth; FIG. 5 including 550, apply user preference data to control the operation of the car; FIG. 4 and corresponding description, i.e., apply user preference/driving styles)

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. 20140309790 to Ricci et al. (Ricci) in view of U.S. 20180357473 to Soma et al. (Soma) and further in view of Haui and further in view of U.S. 11,537,917 to Sanchez (Sanchez).

With respect to claim 19, Ricci in view of Soma and further in view of Haui disclose collecting body scan data for at least one user, wherein the particular vehicle ride is provided based on the body scan data. 
(Ricci, ¶¶ 7, 192-193, 205, 245-246, 302, 314, 346, 356, 376-379, 391, 445) (Soma, 191-192 FIG. 2; ¶¶ 37, 40, 49-53) (Ricci, ¶¶ 194, 377, 435 "recording data about a user 216 (step 2808). The data may include, but is not limited to, one or more health data, such as heart rate, oxygen levels, glucose levels, blood composition, weight, movement, eye dilation, eye movement, gaze direction"). (¶ 381 presenting the virtual personality to a user 216 may include altering one or more features of the vehicle 104. For instance, one or more features of the vehicle 104 may be altered to change a mood associated with the virtual personality. Continuing this example, the personality module 2004 may communicate with the vehicle control system 204 to change an internal lighting, an infotainment setting, a temperature, an oxygen level, an air composition, a comfort setting, a seat position, a transmission setting (e.g., automatic to manual, paddle shifting, and more), a navigation output, etc.). However, Ricci in view of Soma and further in view of Haui fail to explicitly disclose the collection is with respect to a plurality of users who collectively use a service or determining collective collected data. However, basing determinations of characteristics of a user on a plurality of users to improve determinations was known in the art at the time of effective filing. For example, Sanchez, from the same field of endeavor, also discloses monitoring a plurality of users including body scan data such that body scan data is obtained for all individual users of the plurality of users and combined to determine collective body scan data wherein a driving service is provided based on the collective body scan data. (i.e., 446, 450, 438, 444, 424, 422, 405, FIG. 4 and corresponding descriptions; 624 collect training impairment data, 626 collect training driving data, 628 collect test/validation data, create model with combined collective body scan data 646, FIG. 6; 606, Fig. 
6; FIG. 7 and corresponding description – collective body scan data used to determine if driving risk occurs, vehicle provides users with driving service, i.e., 716, 710 determine remediating action to reduce, eliminate risk, 718 communicate determined action to user, 720 perform system action; Section VI, col. 12 Machine Learning (ML) model for predicting the level of driving risk exposure based at least in part upon acquired sensor data indicative of one or more impairment patterns, col. 13 one or more sets of the first training data may be collected from any suitable impairment monitoring device . . . smart phone . . . training data sets may include data indicative of impairment patterns for users other than the user associated with the smart ring, in addition to or instead of data indicative of impairment patterns for the user associated with the smart ring; col. 14, ll. 4-8 "driving patterns for users other than the user associated with the smart ring in addition to or instead of data indicative of driving patterns for the user"; col. 16, ll. 1-3 data may be collected from one or more smart ring sensors 105; claim 1 receiving one or more sets of first data indicative of one or more impairment patterns collected via one or more impairment monitoring devices in a first set of smart rings . . . receiving one or more sets of second data indicative of one or more driving patterns collected via one or more driving monitor devices, the one or more driving monitor devices including a second set of smart rings . . . training data for a machine learning (ML) model to train the ML model to discover one or more relationships between the one or more impairment patterns and the one or more driving patterns . . . 
level of risk exposure for the user during driving; and generating a notification to alert the user of the predicted level of risk exposure; claim 12 the one or more sets of first data includes impairment pattern data for users other than the user associated with the smart ring). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date to collect body scan data from a plurality of individual users and combine it to determine collective body scan data, as taught by Sanchez, and implement it into the system of Ricci in view of Soma and further in view of Haui such that the particular vehicle ride is provided based on the collective body scan data, since crowdsourced data from a plurality of users provides greater training data for machine learning, which further improves models and predictions based on the collective body scan data, i.e., 712-720 Fig. 7 in Sanchez. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. 20140309790 to Ricci et al. (Ricci) in view of U.S. 20180357473 to Soma et al. (Soma) and further in view of Haui and further in view of U.S. 20060149428 to Kim et al. (Kim). With respect to claim 20, Ricci in view of Soma and further in view of Haui at least suggest the personality data of the user is computed based on the input regarding the user using a neural network trained to compute personality data for a user based on input regarding the user. (Soma, ¶ 54 the control unit 100 may estimate the emotion of the target user on the basis of the traveling state of the target vehicle X and the state of the target user by using the emotion engine that outputs the emotion of the target user from the traveling state of the target vehicle X and the state of the target user generated by machine learning). Although a neural network is a type of machine learning, Ricci in view of Soma fail to explicitly disclose the phrase “neural network”. 
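As a purely illustrative aside (not part of the record, and not drawn from Ricci, Soma, Kim, or any other cited reference), the claimed arrangement of claims 20-22 — user inputs feeding the input nodes of a neural network that outputs personality data — can be sketched minimally as follows; the weights, answers, and function names are hypothetical placeholders:

```python
import math

def sigmoid(x: float) -> float:
    """Logistic activation squashing any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def personality_score(answers, weights, bias):
    """Single output node: each answer occupies its own input node;
    the node takes a weighted sum plus bias, then applies an activation."""
    activation = sum(w * a for w, a in zip(weights, answers)) + bias
    return sigmoid(activation)

# Three hypothetical Likert-scale answers (digital scores, 1-5),
# each feeding a separate input node of the network.
answers = [4, 2, 5]
weights = [0.3, -0.2, 0.5]  # placeholder "trained" weights
bias = -1.0
print(round(personality_score(answers, weights, bias), 3))  # → 0.909
```

A trained network would of course learn its weights from data and would typically stack hidden layers; the sketch shows only the one-digital-score-per-input-node structure at issue in the claims.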
Kim, from the same field of endeavor, also discloses determining driver emotion using machine learning (FIG. 1, driver emotion extractor, behavior selector; ¶ 44) with similar inputs (FIG. 1 driver's state, state analyzer, sensor extractor, driver behavior; ¶ 26 monitored emotional data; ¶ 37 emotion-determining unit, receives the input signal from the sensor system and performs analysis of a driver's facial expressions, physiological signals like voice, etc.) wherein Kim discloses the machine learning is a neural network (¶ 33 “A driver emotion extractor refers to a section that presumes the driver's emotions based on a signal input to a neural network”; FIG. 3 learned neural network). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement a neural network, as taught by Kim, as the machine learning in Ricci in view of Soma such that personality data of the user is computed based on the input regarding the user using a neural network trained to compute personality data for a user based on input regarding the user, in order to provide more accurate determinations and reduced manual adjustments over generic machine learning, since neural networks automate feature extraction and can more seamlessly integrate a larger array of data inputs at input nodes, allowing for more complex feature analysis and correlation discovery. Claims 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. 20140309790 to Ricci et al. (Ricci) in view of U.S. 20180357473 to Soma et al. (Soma) and further in view of Haui and further in view of U.S. 20060149428 to Kim et al. (Kim) and further in view of US 20200330018 to Lee et al. (Lee). With respect to claim 21, Ricci in view of Soma and further in view of Haui and further in view of Kim disclose user answers are inputs to a neural network, commonly considered a node. (Soma, ¶ 54 estimate the emotion of the target user . . . 
state of the target user generated by machine learning; 87 accurately estimate preference of the user on the basis of a highly reliable reaction) (Soma, FIG. 2, for vehicle X shown in FIG. 1, interface 1, microphone 192, imaging unit, in vehicle camera 191, voice output unit 17; FIG. 5 steps 104 estimate emotion of user; 112A, 114A, 118A, 120A estimate preference on basis of reaction of user; 122A, 114B, and corresponding descriptions; ¶¶ 39 control unit 100 recognizes an answer to a questionnaire such as “How are you feeling now?” as the state of the target user on the basis of the operation detected by the operation input unit 16.; 48 control unit 100 may estimate the emotion of the target user, for example, on the basis of the answer to the questionnaire. For example, when the answer to the questionnaire is “I am very calm”, the control unit 100 may estimate the type of emotion of the target user to be the positive emotion “calmness”, and the value of the strength of the emotion of the target user to be a large value (for example, 3). When the answer to the questionnaire is “I am slightly irritated”, the control unit 100 may estimate the type of emotion of the target user to be the negative emotion “disfavor”, and the value of the strength of the emotion of the target user to be a small value (for example, 1).; 63 content of the information to be output may be various pieces of content, such as a question about a preference of the user, other questions . . . it is preferable for the question about the preference of the user to be included.; 88, 101-102, claim 3) (Ricci, i.e., profile data 252; personality module 2004; 252, 2008, 2004, 2028 FIG. 20 and corresponding description; FIG. 21 and corresponding description; ¶¶ 367-370 virtual personality module 2004, personality data memory . . . Input corresponding to personality information may be stored in the personality data memory 2028 . . . matching a virtual personality with a personality of a user 216 . . . 
interpret a user's behavior observed via the vehicle and/or non-vehicle sensors 242, 236 . . . detecting a context may correspond to an emotional state of the user . . . preferences 2024 can be associated with a user . . . personality matching module 2008 may communicate with a user profile . . . predictive behavior based on a defined personality . . . monitoring input made by a user (e.g., voice commands, content, context, etc.) . . . virtual personality that is identical to the personality of the user 216 . . . virtual personality meet the preferences of the user; 373 information received by one or more sensors can include inquiries, input by a user, voice commands, intelligent personal assistant history . . . associated with user or vehicle 104 . . . profile data; 380 virtual personality information stored . . . includes virtual personality, a personality type of the user, personality preferences) (Kim, ¶ 33 “A driver emotion extractor refers to a section that presumes the driver's emotions based on a signal input to a neural network”; FIG. 3 learned neural network). However, Ricci in view of Soma and further in view of Haui and further in view of Kim fail to explicitly disclose the user input corresponds to digital scores. Lee, from the same field of endeavor, discloses a machine learning model including a neural network using user input answers in order to classify a user's physiological state (¶ 4 Based on the gathered information, a user's psychological state can be classified in real time through machine learning; 41 neural network (CNN); 79-81; 82 label as a numerical value; 84) wherein input includes user self-evaluation including a digital score of their current mental/emotional state (¶¶ 5-6; 84-84 If a user has been asked about a current stress level as a Likert-type scale between a point 1 and a point 5, which label will be questioned may be determined based on a distribution of labels for each corresponding point. 
For example, a stress level having the smallest label may be first questioned.; 85-86 trained using other persons' self-report information or may be a model trained using self-report information. An improvement factor may be represented using a scale that directly or indirectly indicates the uncertainty of a trained model; 88 the user may give an answer using the 5-point Likert scale). Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date for the input node neural network data disclosed in Ricci in view of Soma and further in view of Haui and further in view of Kim to be associated with a digital score, as taught by Lee above, in order to: improve accuracy of emotional responses, since numerical values received directly from a user are more quantifiable and easier to process than phrases; provide a more accurate assessment of the user's mentality than mere biological data (¶ 7); create a lower burden for user self-reporting (¶¶ 8-9); and reduce uncertainty of prediction (¶¶ 85-86, 97). With respect to claim 22, Ricci in view of Soma and further in view of Haui and further in view of Kim and further in view of Lee disclose the input regarding the user further corresponds to digital scores reflecting answers to questions regarding at least one of personality, goals and motivations of the user and wherein each digital score is used as input to a separate input node of the neural network when computing the personality data of the user using the neural network. (personality, Lee, ¶ 88 The user may self-report its own stress state through a user interface according to a pre-defined self-report format. As described above, the user may give an answer using the 5-point Likert scale) (personality/goals (preferences), Ricci, i.e., profile data 252; personality module 2004; 252, 2008, 2004, 2028 FIG. 20 and corresponding description; FIG. 
21 and corresponding description; ¶¶ 367-370 virtual personality module 2004, personality data memory . . . Input corresponding to personality information may be stored in the personality data memory 2028 . . . matching a virtual personality with a personality of a user 216 . . . interpret a user's behavior observed via the vehicle and/or non-vehicle sensors 242, 236 . . . detecting a context may correspond to an emotional state of the user . . . preferences 2024 can be associated with a user . . . personality matching module 2008 may communicate with a user profile . . . predictive behavior based on a defined personality . . . monitoring input made by a user (e.g., voice commands, content, context, etc.) . . . virtual personality that is identical to the personality of the user 216 . . . virtual personality meet the preferences of the user; 373 information received by one or more sensors can include inquiries, input by a user, voice commands, intelligent personal assistant history . . . associated with user or vehicle 104 . . . profile data; 380 virtual personality information stored . . . includes virtual personality, a personality type of the user, personality preferences) (Personality, Soma FIG. 2, for vehicle X shown in FIG. 1, interface 1, microphone 192, imaging unit, in vehicle camera 191, voice output unit 17; FIG. 5 steps 104 estimate emotion of user; 112A, 114A, 118A, 120A estimate preference on basis of reaction of user; 122A, 114B, and corresponding descriptions; ¶¶ 39 control unit 100 recognizes an answer to a questionnaire such as “How are you feeling now?” as the state of the target user on the basis of the operation detected by the operation input unit 16.; 48 control unit 100 may estimate the emotion of the target user, for example, on the basis of the answer to the questionnaire. 
For example, when the answer to the questionnaire is “I am very calm”, the control unit 100 may estimate the type of emotion of the target user to be the positive emotion “calmness”, and the value of the strength of the emotion of the target user to be a large value (for example, 3). When the answer to the questionnaire is “I am slightly irritated”, the control unit 100 may estimate the type of emotion of the target user to be the negative emotion “disfavor”, and the value of the strength of the emotion of the target user to be a small value (for example, 1).; 63 content of the information to be output may be various pieces of content, such as a question about a preference of the user, other questions . . . it is preferable for the question about the preference of the user to be included.; 88, 101-102, claim 3) (Lee, ¶ 4 Based on the gathered information, a user's psychological state can be classified in real time through machine learning; 41 neural network (CNN); 79-81; 82 label as a numerical value; 84, ¶¶ 5-6; 84-84 If a user has been asked about a current stress level as a Likert-type scale between a point 1 and a point 5, which label will be questioned may be determined based on a distribution of labels for each corresponding point. For example, a stress level having the smallest label may be first questioned.; 85-86 trained using other persons' self-report information or may be a model trained using self-report information. An improvement factor may be represented using a scale that directly or indirectly indicates the uncertainty of a trained model; 88 the user may give an answer using the 5-point Likert scale) (122, FIG. 2; FIG. 3; FIG. 4; 35, 41-51; 53-55; 62-71; 76-79; 83-91; claims 1-2). Prior Art: The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20240394604 to Arditi et al. 
(Arditi) is another example provided to disclose use case specific answers which specifically relate to a particular vehicle ride to be provided to the user including at least one preference or goal relating to the particular vehicle ride ¶ 50 – an answer that states the user would or would not want to be talked to in the use case of a human driver ride provided to the user. ¶ 2 “through a transportation application installed on a mobile device, a ride requestor may request for a ride from a starting location to a destination at a particular time” ¶ 38 ride-service device 160 may include an input/output system 326 configured to receive inputs from users . . . a microphone configured to detect and record speech or dialog uttered . . . receive audio inputs, such as audio commands, which may be interpreted by a voice recognition system or any other command interface ¶ 43 In particular embodiments, once the ride requestor 110 has been picked up, real-time sensor data about the ride requestor 110 may be gathered and used to further personalize the ride experience for the ride requestor 110. Real-time sensor data may be captured by any suitable sensor types within the vehicle, such as those described with reference to FIG. 4. For example, cameras may capture images or videos of the ride requestor 110, microphones may detect speech” ¶ 50 The training data may, for example, be labeled with known personalization preferences. For example, the label of one training data point may indicate that the requestor prefers classical music, warm temperature, and no conversation with the driver. The label of another training data point may indicate that another requestor (or the same requestor) prefers pop music and welcomes conversation. 52 training data sample 710 may also include current contextual information 715 relating to the ride request. This may include, for example, the current weather condition or temperature, the time of day, traffic condition in the region, etc. 
The system may also garner contextual information 715 from the requestor's device . . . how the application was used when the ride request was made . . . any other application usage data may correlate to the requestor's state of mind at the time the request was made . . . signals that conceptually may indicate whether the requestor is in a rush may assist the machine-learning model with predicting whether the requestor would appreciate soothing or exciting music, or no music at all . . . may also query the other applications for data relating to the requestor . . . further obtain data such as, e.g., the requestor's location before or at the scheduled pickup time, the origination and destination locations, and any other ride information. Such information may indicate whether the requestor is going home from work, going to work from home, picking children up, running an errand, etc., which may also serve as contextual information 715 . . .” 55 determining when personalization should take place for a particular ride, desirable for certain personalization to commence prior to picking up the ride requestor, such as informing the ride provider of certain preferences of the requestor and adjusting the temperature, seat configuration, and any other configuration that may take time to change. 11. The method of claim 1, wherein the at least one ride preference comprises one of an audio preference, a video preference, a volume preference, a temperature preference, a conversation preference, and a language preference. 14. The method of claim 13, wherein the contextual information relating to the ride request comprises one or more of current weather condition or temperature, time of day, traffic condition, or actions of the ride requestor with respect to making the ride request. 38 ride-service device 160 may include an input/output system 326 configured to receive inputs from users . . . 
include a sensor such as an image-capturing device configured to recognize motion or gesture-based inputs from passengers, a microphone conf

Prosecution Timeline

Sep 09, 2022
Application Filed
Sep 17, 2024
Non-Final Rejection — §103, §112
Mar 17, 2025
Response Filed
May 09, 2025
Final Rejection — §103, §112
Nov 14, 2025
Notice of Allowance
Nov 24, 2025
Request for Continued Examination
Dec 01, 2025
Response after Non-Final Action
Dec 04, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589745
VISUAL GUIDANCE METHOD FOR IMPROVING AUTONOMOUS NAVIGATION WITH ROW FOLLOWING CORRECTIONS IN STEREO CAMERA SYSTEMS
2y 5m to grant Granted Mar 31, 2026
Patent 12583443
MOVING BODY CONTROL DEVICE, MOVING BODY CONTROL METHOD, AND MOVING BODY CONTROL PROGRAM
2y 5m to grant Granted Mar 24, 2026
Patent 12571636
METHOD AND DEVICE WITH LANE DETECTION
2y 5m to grant Granted Mar 10, 2026
Patent 12553733
COMPUTER-IMPLEMENTED METHOD FOR BEHAVIOR PLANNING OF AN AT LEAST PARTIALLY AUTOMATED EGO VEHICLE WITH A SPECIFIED NAVIGATION DESTINATION
2y 5m to grant Granted Feb 17, 2026
Patent 12546621
TRAVELING TRACK GENERATION DEVICE AND TRAVELING TRACK GENERATION METHOD
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
94%
With Interview (+19.1%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 642 resolved cases by this examiner. Grant probability derived from career allow rate.
