DETAILED ACTION
The present application, filed on 02/13/2023, is being examined under the first inventor to file provisions of the AIA.
The following is a Non-Final Office Action on the merits in response to applicant's filing of 02/13/2023.
Claims 1-20 are pending and have been considered below.
Priority
The application claims priority to Provisional Application No. 62/661,982, filed on 04/24/2018, and is a continuation-in-part (CIP) of Application No. 16/390,931, filed on 04/22/2019, and a CIP of Application No. 16/516,061, filed on 07/18/2019. The priority claims are acknowledged.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on 02/13/2023, 06/29/2024, and 01/22/2026 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Claim Objections
Claim 14, line 2, is objected to because of the following informalities: “the cabin” should read “a cabin”. Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 8-9, 11-15, and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Valeri (US 2018/0364966).
Regarding claim 1, Valeri discloses a vehicle method {“method” [0005]}, comprising: providing one or more computer processors {34 (44): “The controller 34 includes at least one processor 44 and a computer-readable storage device or media 46. The processor 44 may be any custom-made or commercially available processor, a central processing unit (CPU), a graphics processing unit (GPU), an auxiliary processor among several processors associated with the controller 34, a semiconductor-based microprocessor (in the form of a microchip or chip set), any combination thereof, or generally any device for executing instructions” [0027]} communicatively coupled with a vehicle {10}; using the one or more computer processors {34 (44)}, determining a mental state {“determining a state-of-mind of an occupant within the interior of the vehicle based on the biometric parameters” [0005]} of a driver {“an occupant positioned within the driver's seat” [0051]} based at least in part on data gathered from biometric sensors {410: “Biometric parameters 410 generally include parameters (and/or parameter values) that might be used to categorize the state of mind of an occupant. In some embodiments, for example biometric parameters 410 include facial expressions (as determined, for example, via convolutional neural network techniques) 401, voice tone 402 (e.g., loud, soft, etc.), spoken utterance content 403 (e.g., profanity, key words related to distress or exasperation, etc.), body temperature 404, gestures 405 (e.g., angry hand motions, etc.), and eye characteristics 406 (e.g., dilated pupils, etc.). 
The determination of such parameters based the observation of a human being with a range of sensors is well known in the art, and need not be described herein” [0047]}; using the one or more computer processors {34 (44)} and based at least in part on one or more details of a trip {420: “Vehicular context parameters 420 generally include parameters (and/or parameter values) that might be used to characterize non-biometric, vehicle-related factors experienced by an occupant of the vehicle. Such vehicular context parameters 420 might include, for example, a navigation state 407—i.e., whether AV 10 is falling behind with respect to a desired time to reach a destination established by the occupant. Parameters 420 might also include traffic conditions 408 (e.g., congested, stop-and-go, freely moving, etc.), and weather conditions 409 (e.g., snow, rain, cloudy, clear, sunny, etc.)” [0048]}, determining one of a plurality of predetermined driving states corresponding with at least a portion of the trip {420 (407, 408, 409): determining if AV is behind on time or not behind (at least 2 predetermined states for 407); if traffic is congested, stop-and-go, or freely moving (at least 3 predetermined states for 408); if weather is snowy, rainy, cloudy, clear, or sunny (5 predetermined states for 409)}; and using the one or more computer processors {34 (44): “Although only one controller 34 is shown in FIG. 1, embodiments of the autonomous vehicle 10 may include any number of controllers 34 that communicate over any suitable communication medium or a combination of communication mediums and that cooperate to process the sensor signals, perform logic, calculations, methods, and/or algorithms, and generate control signals to automatically control features of the autonomous vehicle 10.
In one embodiment, as discussed in detail below, controller 34 is configured to allow an occupant to select a driving mode based on occupant preferences, vehicle state, and occupant state” [0028]}, and based at least in part on the determined mental state {410} and the determined driving state {420 (of 407, 408, and 409)}, automatically initiating one or more interventions {440: “soundscape determination module 440 is implemented as a look-up table or decision tree that selects a predetermined soundscape and related vehicle parameters based on a “best fit” to the current state of mind” [0051]} configured to alter the mental state of the driver {“states of mind that might be amenable to alteration or augmentation via an appropriate soundscape” [0059]}.
Regarding claim 8, Valeri discloses a vehicle machine learning method {420+430+440: “Modules 420 and 440 may be implemented in a variety of ways, ranging from relatively simple decision trees to machine learning models that are trained via supervised, unsupervised, or reinforcement learning. A variety of machine learning techniques may be employed for this purpose, including, for example, artificial neural networks (ANN), random forest classifiers, Bayes classifiers (e.g., naive Bayes), principal component analysis (PCA), support vector machines, linear discriminant analysis, and the like. In one embodiment, for example, module 430 is implemented as an artificial neural network that is trained via supervised learning” [0049-0050]}, comprising: providing one or more computer processors {34 (44): “The controller 34 includes at least one processor 44 and a computer-readable storage device or media 46. The processor 44 may be any custom-made or commercially available processor, a central processing unit (CPU), a graphics processing unit (GPU), an auxiliary processor among several processors associated with the controller 34, a semiconductor-based microprocessor (in the form of a microchip or chip set), any combination thereof, or generally any device for executing instructions” [0027]} communicatively coupled with a vehicle {10}; using data gathered from biometric sensors {410: “Biometric parameters 410 generally include parameters (and/or parameter values) that might be used to categorize the state of mind of an occupant. In some embodiments, for example biometric parameters 410 include facial expressions (as determined, for example, via convolutional neural network techniques) 401, voice tone 402 (e.g., loud, soft, etc.), spoken utterance content 403 (e.g., profanity, key words related to distress or exasperation, etc.), body temperature 404, gestures 405 (e.g., angry hand motions, etc.), and eye characteristics 406 (e.g., dilated pupils, etc.). 
The determination of such parameters based the observation of a human being with a range of sensors is well known in the art, and need not be described herein” [0047]}, training a machine learning model to determine a mental state of a driver {“Sensors 511, 512 may include any type of sensor now known or later developed. In general, sensors 511, 512 are selected based on the ability to produce biometric parameters 410 of FIG. 4. In that regard, sensors 511, 512 may be infrared sensors, optical sensors, audio microphones, or any other class of sensor that is capable of producing an image or quantitative measure indicative of the state of mind of occupant 501. The data and/or signals produced by sensors 511, 512 may be processed by intermediary sub-modules (not illustrated) to produce the desired biometric parameters 410. For example, facial expression parameter 401 and gesture parameter 405 may be produced by a previously trained convolutional neural network model, as is known in art, that takes as its input an image or sequence of images and produces an output such as an integer value corresponding to an enumerated state of mind (e.g., 1=“calm,” 2=“aggravated,” 3=“extremely aggravated,” etc.)” [0054]]}; determining the mental state of the driver using the trained machine learning model {“facial expression parameter 401 and gesture parameter 405 may be produced by a previously trained convolutional neural network model, as is known in art, that takes as its input an image or sequence of images and produces an output such as an integer value corresponding to an enumerated state of mind (e.g., 1=“calm,” 2=“aggravated,” 3=“extremely aggravated,” etc.)” [0054]; “Based on these parameters 410 (and, in some embodiments, vehicular context parameters 420), module 430 determines (at 603) a state of mind 431 of occupant 501 (e.g., “angry”, “calm”, “bored”, etc.). 
As described herein, state of mind 431 may be represented in any number of ways and may be selected from any convenient set of relevant states” [0059]}; using the one or more computer processors {34 (44): “Although only one controller 34 is shown in FIG. 1, embodiments of the autonomous vehicle 10 may include any number of controllers 34 that communicate over any suitable communication medium or a combination of communication mediums and that cooperate to process the sensor signals, perform logic, calculations, methods, and/or algorithms, and generate control signals to automatically control features of the autonomous vehicle 10. In one embodiment, as discussed in detail below, controller 34 is configured to allow an occupant to select a driving mode based on occupant preferences, vehicle state, and occupant state” [0028]} and based at least in part on one or more details of a trip {420: “Vehicular context parameters 420 generally include parameters (and/or parameter values) that might be used to characterize non-biometric, vehicle-related factors experienced by an occupant of the vehicle. Such vehicular context parameters 420 might include, for example, a navigation state 407—i.e., whether AV 10 is falling behind with respect to a desired time to reach a destination established by the occupant. 
Parameters 420 might also include traffic conditions 408 (e.g., congested, stop-and-go, freely moving, etc.), and weather conditions 409 (e.g., snow, rain, cloudy, clear, sunny, etc.)” [0048]}, determining one of a plurality of predetermined driving states corresponding with at least a portion of the trip {420 (407, 408, 409 [0048]): determining if AV is behind on time or not behind (at least 2 predetermined states for 407); if traffic is congested, stop-and-go, or freely moving (at least 3 predetermined states for 408); if weather is snowy, rainy, cloudy, clear, or sunny (5 predetermined states for 409)}; and using the one or more computer processors {34 (44)}, and based at least in part on the determined mental state {410} and the determined driving state {420 (of 407, 408, 409)}, automatically initiating one or more interventions {440: “soundscape determination module 440 is implemented as a look-up table or decision tree that selects a predetermined soundscape and related vehicle parameters based on a “best fit” to the current state of mind” [0051]} configured to alter the mental state of the driver {“states of mind that might be amenable to alteration or augmentation via an appropriate soundscape” [0059]}.
Regarding claim 9, Valeri discloses the one or more computer processors {34 (44)} determines the driving state based at least in part on a location of the vehicle {420: “vehicular context parameters 420 might include, for example, a navigation state 407—i.e., whether AV 10 is falling behind with respect to a desired time to reach a destination established by the occupant. Parameters 420 might also include traffic conditions 408 (e.g., congested, stop-and-go, freely moving, etc.), and weather conditions 409 (e.g., snow, rain, cloudy, clear, sunny, etc.)” [0048]}.
Regarding claim 11, Valeri discloses the one or more interventions {440} includes changing an environment within a cabin of the vehicle {“soundscape determination module 440 is implemented as a look-up table or decision tree that selects a predetermined soundscape and related vehicle parameters based on a “best fit” to the current state of mind” [0051]; “states of mind that might be amenable to alteration or augmentation via an appropriate soundscape” [0059]}.
Regarding claim 12, Valeri discloses the one or more interventions {440} includes altering an audio condition within the cabin {“soundscape determination module 440 is implemented as a look-up table or decision tree that selects a predetermined soundscape and related vehicle parameters based on a “best fit” to the current state of mind” [0051]; “states of mind that might be amenable to alteration or augmentation via an appropriate soundscape” [0059]}.
Regarding claim 13, Valeri discloses the one or more interventions {440} includes one of preparing a music playlist and altering the music playlist {“soundscape determination module 440 is implemented as a look-up table or decision tree that selects a predetermined soundscape and related vehicle parameters based on a “best fit” to the current state of mind” [0051]}, and wherein the one or more interventions further includes initiating the music playlist {“states of mind that might be amenable to alteration or augmentation via an appropriate soundscape” [0059]}.
Regarding claim 14, Valeri discloses the one or more interventions {440} includes selecting music for playback within the cabin {“soundscape determination module 440 is implemented as a look-up table or decision tree that selects a predetermined soundscape and related vehicle parameters based on a “best fit” to the current state of mind” [0051]; “states of mind that might be amenable to alteration or augmentation via an appropriate soundscape” [0059]}.
Regarding claim 15, Valeri discloses the one or more computer processors {34 (44)} select the music based at least in part on an approachability of the music, an engagement of the music, a sentiment of the music, and one of an energy of the music and a tempo of the music {“soundscape determination module 440 is implemented as a look-up table or decision tree that selects a predetermined soundscape and related vehicle parameters based on a “best fit” to the current state of mind” [0051]; “states of mind that might be amenable to alteration or augmentation via an appropriate soundscape” [0059]; “the selected soundscape and corresponding vehicle parameters 441 may be selected to counteract the determined state of mind 431 (e.g., by producing a quiet soundscape to calm down an angry occupant), or may be selected to augment or amplify the determined state of mind 431 (e.g., by choosing loud, aggressive music to accompany an occupant apparent eagerness to reach his or her destination more quickly)” [0060]}.
Regarding claim 17, Valeri discloses training the machine learning model {420+430+440} to determine the mental state {“a state of mind 431 of occupant 501 (e.g., “angry”, “calm”, “bored”, etc.)” [0059]} of the driver {501} includes training the machine learning model {420+430+440} to determine an arousal level {angry or calm [0059]} and/or an alertness level of the driver {bored or not bored [0059]}.
Regarding claim 18, Valeri discloses initiating the one or more interventions {440} to alter the mental state of the driver {“state of mind 431 of occupant 501” [0059]} comprises initiating one or more interventions {440 [0049, 0051]} to alter an arousal level and/or an alertness level of the driver {“the selected soundscape and corresponding vehicle parameters 441 may be selected to counteract the determined state of mind 431 (e.g., by producing a quiet soundscape to calm down an angry occupant), or may be selected to augment or amplify the determined state of mind 431 (e.g., by choosing loud, aggressive music to accompany an occupant apparent eagerness to reach his or her destination more quickly)” [0060]}.
Regarding claim 19, Valeri discloses a vehicle machine learning system {420+430+440: “Modules 420 and 440 may be implemented in a variety of ways, ranging from relatively simple decision trees to machine learning models that are trained via supervised, unsupervised, or reinforcement learning. A variety of machine learning techniques may be employed for this purpose, including, for example, artificial neural networks (ANN), random forest classifiers, Bayes classifiers (e.g., naive Bayes), principal component analysis (PCA), support vector machines, linear discriminant analysis, and the like. In one embodiment, for example, module 430 is implemented as an artificial neural network that is trained via supervised learning” [0049-0050]}, comprising: one or more computer processors {34 (44): “The controller 34 includes at least one processor 44 and a computer-readable storage device or media 46. The processor 44 may be any custom-made or commercially available processor, a central processing unit (CPU), a graphics processing unit (GPU), an auxiliary processor among several processors associated with the controller 34, a semiconductor-based microprocessor (in the form of a microchip or chip set), any combination thereof, or generally any device for executing instructions” [0027]}; and one or more media storing instructions executable by the one or more processors {“executing instructions” [0027]}, wherein the instructions, when executed, cause the vehicle machine learning system {420+430+440} to perform operations comprising: training a machine learning model [0049-0050] to determine one of a plurality of predetermined driving states corresponding with at least a portion of a trip {420 (407, 408, 409 [0048]): determining if AV is behind on time or not behind (at least 2 predetermined states for 407); if traffic is congested, stop-and-go, or freely moving (at least 3 predetermined states for 408); if weather is snowy, rainy, cloudy, clear, or sunny (5 predetermined states for 409)}; determining one of the predetermined driving states {420 (of 407, 408, 409)} corresponding with at least a portion of the trip using the trained machine learning model {“machine learning models that are trained” [0049]}; based at least in part on data gathered from biometric sensors {410: “Biometric parameters 410 generally include parameters (and/or parameter values) that might be used to categorize the state of mind of an occupant. In some embodiments, for example biometric parameters 410 include facial expressions (as determined, for example, via convolutional neural network techniques) 401, voice tone 402 (e.g., loud, soft, etc.), spoken utterance content 403 (e.g., profanity, key words related to distress or exasperation, etc.), body temperature 404, gestures 405 (e.g., angry hand motions, etc.), and eye characteristics 406 (e.g., dilated pupils, etc.). The determination of such parameters based the observation of a human being with a range of sensors is well known in the art, and need not be described herein” [0047]}, determining a mental state of a driver {“state of mind (e.g., 1=“calm,” 2=“aggravated,” 3=“extremely aggravated,” etc.)” [0054]; “state of mind 431 of occupant 501 (e.g., “angry”, “calm”, “bored”, etc.).
As described herein, state of mind 431 may be represented in any number of ways and may be selected from any convenient set of relevant states” [0059]}; and based at least in part on the determined mental state and the determined driving state {420 (of 407, 408, 409): “Based on these parameters 410 (and, in some embodiments, vehicular context parameters 420)…” [0059]}, automatically selecting and initiating one or more interventions {440: “soundscape determination module 440 is implemented as a look-up table or decision tree that selects a predetermined soundscape and related vehicle parameters based on a “best fit” to the current state of mind” [0051]} configured to alter the mental state of the driver {“states of mind that might be amenable to alteration or augmentation via an appropriate soundscape” [0059]}.
Regarding claim 20, Valeri discloses the one or more interventions {440} is selected based at least in part on a target brainwave frequency {“the selected soundscape and corresponding vehicle parameters 441 may be selected to counteract the determined state of mind 431 (e.g., by producing a quiet soundscape to calm down an angry occupant), or may be selected to augment or amplify the determined state of mind 431 (e.g., by choosing loud, aggressive music to accompany an occupant apparent eagerness to reach his or her destination more quickly)” [0060]; calm minds and angry minds are each associated with specific brainwave frequencies, as is well known in the art; therefore, the soundscape is chosen at least in part to help achieve a target brainwave frequency (e.g., alpha waves, which are associated with a calm mental state). Examiner further notes that Valeri never explicitly discloses measuring the target brainwave frequencies, but such measurement is not claimed}.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Valeri in view of Williams (US 2018/0176727).
Regarding claim 16, Valeri discloses the one or more interventions {440} includes one of initiating, altering, and withholding interaction between the driver and a suitable user interface {“AV 10 provides a suitable user interface allowing the occupant to configure or otherwise customize module 440. For example, the occupant may prefer that module 440 never select a soundscape that includes loud music or which never deactivates noise cancellation. In some embodiments, a suitable user interface is presented to the user prior to engaging vehicle parameters 441 in order to confirm that the occupant wishes to change the vehicle parameters” [0051]}.
However, Valeri does not explicitly disclose that the suitable user interface is a conversational agent.
Williams teaches “a wide variety of interfaces may be provided to interact with… Such interfaces include but are not limited to… Graphical user interfaces… Conversational interfaces, Conversational interface agents” [0132].
In light of these teachings, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to have modified the method, as disclosed by Valeri, such that the suitable user interface is a conversational agent, as taught by Williams, in order to “predict and evaluate a risk of a pre-identified behavior by the person in relation to the location and/or the context; and facilitate one or more actions and/or activities to mitigate the risk of the pre-identified behavior” {Abstract}.
Allowable Subject Matter
Claims 2-7 and 10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 2, none of the prior art of record, either alone or in obvious combination, discloses the method of claim 1, wherein the plurality of predetermined driving states comprises observant driving, routine driving, effortless driving, and transitional driving.
Accordingly, claims 3-7 are allowable by virtue of dependence from claim 2.
Regarding claim 10, none of the prior art of record, either alone or in obvious combination, discloses the method of claim 8, wherein the plurality of predetermined driving states includes observant driving, routine driving, effortless driving, and transitional driving.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniel M Keck whose telephone number is (571)272-5947. The examiner can normally be reached Mon - Fri 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Shanske can be reached on (571)270-5985. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Daniel M. Keck/Patent Examiner, Art Unit 3614