DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-20 are currently pending and have been examined.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the vehicle" in line 3. There is insufficient antecedent basis for this limitation in the claim.
Claims 2-11 are also rejected as indefinite because they depend from, and therefore incorporate the indefiniteness of, base claim 1 without curing it.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 10-15, and 19-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gallagher et al. (US 2019/0332902; hereinafter "Gallagher").
For claim 1, Gallagher discloses a system for assisting a distracted vehicle occupant [E.g. 0018: A vehicle system can include a first occupant sensor to sense central nervous system characteristics of a vehicle occupant; a second occupant sensor to sense non-central nervous system characteristics of the vehicle occupant; a neural network to receive the sensed central nervous system characteristics and the non-central nervous system characteristics to compute an emotional valence and arousal level of the occupant and output a stress level based on the emotional valence and arousal level; and a navigation system configured to plan a travel route for a vehicle based on a historical stress level of the occupant for segments of the travel route, 0053: At least one sensor 150 is positioned to be at the posterior of the head near or at the occipital-visual cortical region. This may assist in accurately measuring brain waves, e.g., through EDP. As driving is a visually dominant cognitive task the ability to detect processing in that anatomical area of the brain (e.g., the visual cortex) as well as other processing and cognitive networks of mental processing offers the ability to monitor visual attention level specifically. For example, visual habituation is the brain's ability to decrease its response to repeated stimuli once the information has been processed and is no longer perceived as a relevant processing demand. In addition to generally low visual attention, the occupant should not experience significant habituation patterns as the visual scenery though mundane at times is in continuous variation and the conditions demand attention in such areas. Lack of activity related to visual processing or habituation of visual stimuli can serve as a subset classification of potential distraction in addition to other brain wave responses and secondary monitoring systems, Abstract], the system comprising:
a first occupant sensor configured to capture data indicative of an emotional state of an occupant of the vehicle [E.g. 0056: FIG. 3A shows a schematic view of a system 300 that can be implemented to classify emotional state of an occupant, e.g., a vehicle occupant who is in a vehicle seat. A sensor array 301 can monitor a driver or an occupant of the vehicle seat and is positioned in a vehicle and can include any sensors described herein. The sensor array 301 can monitor the occupant using central nervous system (CNS) sensors 303, sympathetic nervous system (SNS) sensors 304, autonomic nervous system (ANS) sensors 305, parasympathetic nervous system (PSNS) sensors 306, and peripheral nervous system (PNS) sensors 307. These sensors can be placed in the vehicle cabin, e.g., in the seat, the steering wheel, door, A pillar, B pillar, or other locations in the vehicle that can interact with a vehicle occupant. The CNS sensor 303 is configured to sense signals related to the brain and the spinal column of the occupant. The ANS sensor 305 is configured to sense the occupant's physical state that relate to unconscious bodily functions, e.g., heart rate, digestion, respiratory rate, pupillary response, salivary gland operation, urination urgency, and sexual arousal. The SNS sensor 304 can sense an occupant's fight-or-flight response, which can be measured as a change in the occupant's heart rate, constriction of blood vessels, and change in blood pressure. An increase in the heart rate and an increase in blood pressure may indicate a possible agitation of the occupant. The PSNS sensor 306 can sense the occupant's state with regard to the parasympathetic system which is responsible for stimulation of digest, rest, feed or other activities that occur when the body is at rest, especially after eating, including sexual arousal, salivation, lacrimation (e.g., tears), urination, digestion and defecation. These sensors can be placed at locations where the occupant's nerves output signals that relate to at least two of the ANS, SNS, CNS, and/or PSNS electrical signals in the occupant's body. The PSNS sensor 306 can sense the occupant's craniosacral outflow. The SNS sensor 304 can sense the occupant's thoracolumbar outflow. The PNS sensor 307 sense electrical signals in the occupant's body outside the central nervous system and can sense signals to move muscles, e.g., twitching, nervous mannerisms and the like, 0057: A neural network 310 receives the sensed data related to the occupant(s) from the occupant sensors 301. The neural network 310 can include various algorithms in hardware processors. The neural network 310 operates as a computing system that is self-learning to progressively improve performance of classifying an occupant's emotional state, by executing the algorithm to consider examples of the input in view of the occupant's state. The neural network 310 can learn to identify emotional states of the occupant and the sensed data relating to the occupant; 0051, 0054];
a distraction determination module comprising instructions stored in at least one memory and executable by one or more processors [E.g. 0054: The various sensors can provide an N size array of biometric sensors that measure signals for at least the CNS function of the vehicle occupant and in some aspects measure signals for the other biometric signals of vehicle occupant. The other biometric signals can be at least one of PNS, ANS, SNS, PNPS and/or biochemistry for increased accuracy in detecting the emotional stage of the occupant. As described in more detail below, the signals are fed into signal processing units that are part of the neural network with appropriate artifact correction being run for each type of sensed signal. The neural network processes the signals in a first layer individually for a variety of bio-functional markers of valence and arousal. Each layer has individualized machine learning logic trees to remove single-metric and subjectivity uncertainty and improve accuracy. The outputs of these initial layers are fed into a second layer and, if required, subsequent layers of the neural network where they are assessed in sub-combinations or total combination. Each combined layer has a deeper machine learning logic tree that further removes single metric and subjectivity uncertainty improving accuracy. The neural network can use weighting logic based on the fidelity of the signal and processing technique to estimate a confidence coefficient of the assessment level for each network to improve reliability. The final valence/arousal levels are computed and can be used to inform the occupant or alter performance parameters in the vehicle, 0080: FIG. 6 shows an in-vehicle method 600 for using the emotional state determinations to calm the occupant. The use of the present systems 100, 300, 500 to determine the emotional state of a vehicle occupant may not be wanted by all occupants. Moreover, certain actions may increase the stress to an occupant. At 601, the vehicle requests input from the occupant, e.g., the driver, if the occupant would like the vehicle to perform the occupant sensing and calming initiatives as described herein. If the occupant does not opt-in or turn on the sensing and assessment systems, then the process stops at 602. If the occupant opts in, then a plurality of present categories can be provided to the occupant at 603. The occupant can select from any of the multiple sensors to be active and any of the machine-to-human interfaces to provide feedback to the occupant. The machine-to-human interfaces can include video displays, audio signals, and neurological stimulation; Figs. 3A-3C and Fig. 5] to cause the distraction determination module to:
recognize that the occupant is in a high emotional state based on the captured data indicative of the emotional state of the occupant [E.g. 0081: At 604, the vehicle monitors and senses characteristics of the occupant. The vehicle can also monitor other factors that may indicate a stress or anger inducing situation. The biometrics and physical characteristics of the occupant can be measured as described herein. The sensors in the vehicle can sense the occupant's speech, action, and physiological functions as described herein (e.g., facial expression, gestures, heart rate, temperature, etc.) by using camera or other biometric sensing capabilities (e.g., seat sensors) or by recording and analyzing the tone and phrases uttered by the occupant. The occupant's facial expression and changes thereto can also be analyzed to determine the state of the occupant, 0082: At 605, the sensed factors and biometric data relating to the occupant are compared to known standards or past calm behavior. At 606, it is determined if the compared data is outside of a threshold value. The steps 605 and 606 can be performed in the neural network as described herein. The comparison 605 and difference determination 606 can also be performed in a signal processor or in a controller by comparing stored data to the sensed data. At 607, if the threshold is not exceeded, then the process 600 continues to monitor the occupant. Monitoring 604 can be continuous with the comparison and determination steps, 0084: The steps 605, 606, 608 can operate to recognize the occupant's reaction as being anger-based or stress-based, relying on the amalgamation of data from the sensors and the outside-vehicle based data, 0085: At 609, if there are occurrences that indicate that the driver is not acting in a calm manner, then the machine-to-human interface is triggered];
determine, in response to the recognized high emotional state of the occupant, at least one corrective action configured to calm the occupant from the high emotional state to a normal emotional state [E.g. 0086: At 610, the human-machine interface can provide an indicator output to the occupant aimed at calming the occupant. The indicator output can be an audio message, a visual, or stimulation as described herein. The audio message can be stored in the vehicle infotainment system and played when the present systems and methods determine that the occupant is stressed or not in a calm state. The visual can be a picture of a loved one (e.g. family, significant other, pet) on a vehicle display. The audio can be playing a favorite song, telling a joke, or playing a witty remark (pre-recorded or collected from public domain), 0087: At 611, the occupant is monitored for a change in emotional state. The monitoring can be the same as that in step 604. At 612, it is determined if the occupant has changed to a calm state. If yes, then the process stops at 613. Stopping indicates that the output ends, however, the monitoring at step 604 can continue as long as the occupant is in the vehicle. If the occupant has not calmed, then the process returns to the output 610 and can repeat the prior output or try a new output; 0080: FIG. 6 shows an in-vehicle method 600 for using the emotional state determinations to calm the occupant. The use of the present systems 100, 300, 500 to determine the emotional state of a vehicle occupant may not be wanted by all occupants. Moreover, certain actions may increase the stress to an occupant. At 601, the vehicle requests input from the occupant, e.g., the driver, if the occupant would like the vehicle to perform the occupant sensing and calming initiatives as described herein. If the occupant does not opt-in or turn on the sensing and assessment systems, then the process stops at 602. If the occupant opts in, then a plurality of present categories can be provided to the occupant at 603. The occupant can select from any of the multiple sensors to be active and any of the machine-to-human interfaces to provide feedback to the occupant. The machine-to-human interfaces can include video displays, audio signals, and neurological stimulation]; and
implement or cause to be implemented the at least one corrective action [E.g. 0085-0087; see also Figs. 5-6].
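For clarity of the mapping, the Gallagher flow relied upon above (sense the occupant, classify the emotional state, compare to a threshold, and trigger a calming output; FIG. 6, paras. 0080-0087) can be summarized in the following illustrative pseudocode-style sketch. This is the examiner's paraphrase only, assuming hypothetical names throughout; it is not code disclosed by Gallagher, and the weighted sum is merely a placeholder for the layered neural network of paragraphs 0054 and 0057.

# Illustrative sketch only. None of these identifiers appear in Gallagher;
# they paraphrase the cited monitor/compare/calm flow of FIG. 6 (0080-0087).

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class OccupantReading:
    """Hypothetical container for the multi-sensor data of para. 0056."""
    cns: float  # central nervous system signal (e.g., brain-wave sensor 303)
    sns: float  # sympathetic signal (e.g., heart rate / blood pressure, 304)
    pns: float  # peripheral signal (e.g., muscle twitching, sensor 307)

def classify_stress(reading: OccupantReading) -> float:
    """Stand-in for the neural network of paras. 0054 and 0057, which fuses
    per-sensor first-layer outputs into combined valence/arousal levels.
    The weighted sum below is a placeholder, not Gallagher's network."""
    return 0.5 * reading.cns + 0.3 * reading.sns + 0.2 * reading.pns

# Hypothetical fixed threshold; Gallagher instead compares sensed data to
# "known standards or past calm behavior" (steps 605-606).
STRESS_THRESHOLD = 0.7

def monitor_and_calm(sense: Callable[[], OccupantReading],
                     calming_outputs: Iterable[Callable[[], None]]) -> None:
    """One pass of the monitor -> compare -> intervene loop of FIG. 6."""
    reading = sense()                      # step 604: monitor the occupant
    if classify_stress(reading) <= STRESS_THRESHOLD:
        return                             # step 607: threshold not exceeded
    for action in calming_outputs:         # step 610: audio, visual, or
        action()                           # stimulation aimed at calming
        reading = sense()                  # step 611: updated monitoring
        if classify_stress(reading) <= STRESS_THRESHOLD:
            return                         # steps 612-613: calm, so stop
        # otherwise repeat the prior output or try a new one (para. 0087)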
For claim 2, Gallagher discloses wherein the occupant comprises a driver or operator of the vehicle [E.g. 0056: A sensor array 301 can monitor a driver or an occupant of the vehicle seat and is positioned in a vehicle and can include any sensors described herein].
For claim 3, Gallagher discloses wherein the high emotional state is associated with at least one of occupant speech over a predetermined db level, an argument involving the occupant, the use of at least one trigger word by the occupant, an emotional facial expression of the occupant, or an erratic movement of the occupant [E.g. 0056: The PNS sensor 307 sense electrical signals in the occupant's body outside the central nervous system and can sense signals to move muscles, e.g., twitching, nervous mannerisms and the like].
For claim 4, Gallagher discloses wherein the high emotional state is recognized utilizing an artificial intelligence algorithm [E.g. 0057].
For claim 5, Gallagher discloses wherein the at least one corrective action comprises at least one of playing soothing ambient sounds or music [E.g. 0116, 0062], turning off a sound system of the vehicle, activating a driver assistance system of the vehicle, activating an autonomous driving system of the vehicle, alerting the occupant of the vehicle of the high emotional state, reducing a sensitivity of at least one vehicle control, preventing use of a mobile device of the occupant, reducing a travel speed or a maximum speed of the vehicle, or establishing a communication link between the occupant and assistance. Because the claim recites these features in the alternative ("at least one of"), disclosure of any one alternative is sufficient to meet the limitation.
For claim 6, Gallagher discloses wherein causing the at least one corrective action to be implemented comprises communicating a signal to at least one of a subsystem of the system [0070, 0080, 0086], another module of the system, or another system of the vehicle suitable to implement the at least one corrective action.
For claim 10, Gallagher discloses wherein the distraction determination module further comprises instructions stored in the at least one memory and executable by the one or more processors to cause the distraction determination module to: receive updated captured data indicative of an updated emotional state of the occupant of the vehicle subsequent to implementation of the at least one corrective action [E.g. 0087: At 611, the occupant is monitored for a change in emotional state. The monitoring can be the same as that in step 604. At 612, it is determined if the occupant has changed to a calm state. If yes, then the process stops at 613. Stopping indicates that the output ends, however, the monitoring at step 604 can continue as long as the occupant is in the vehicle. If the occupant has not calmed, then the process returns to the output 610 and can repeat the prior output or try a new output]; recognize that the occupant is in a normal emotional state based on the updated captured data indicative of the updated emotional state of the occupant [E.g. 0087: At 612, it is determined if the occupant has changed to a calm state]; and terminate or cause to be terminated the at least one corrective action [E.g. 0087: At 612, it is determined if the occupant has changed to a calm state. If yes, then the process stops at 613. Stopping indicates that the output ends].
For claim 11, Gallagher discloses wherein the distraction determination module further comprises instructions stored in the at least one memory and executable by the one or more processors to cause the distraction determination module to: receive updated captured data indicative of an updated emotional state of the occupant of the vehicle subsequent to implementation of the at least one corrective action [E.g. 0087: At 611, the occupant is monitored for a change in emotional state. The monitoring can be the same as that in step 604. At 612, it is determined if the occupant has changed to a calm state. If yes, then the process stops at 613. Stopping indicates that the output ends, however, the monitoring at step 604 can continue as long as the occupant is in the vehicle. If the occupant has not calmed, then the process returns to the output 610 and can repeat the prior output or try a new output]; recognize that the occupant is still in the high emotional state based on the updated captured data indicative of the updated emotional state of the occupant [E.g. 0087: At 612, it is determined if the occupant has changed to a calm state. If yes, then the process stops at 613. Stopping indicates that the output ends, however, the monitoring at step 604 can continue as long as the occupant is in the vehicle. If the occupant has not calmed, then the process returns to the output 610 and can repeat the prior output or try a new output]; determine, in response to recognizing that the occupant is still in the high emotional state, at least one additional corrective action configured to calm the occupant from the high emotional state to the normal emotional state; and implement or cause to be implemented the at least one additional corrective action [E.g. 0087].
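The distinction between claims 10 and 11 as read on paragraph 0087 can likewise be summarized in a short illustrative sketch, again using only hypothetical names: both branches arise from the same FIG. 6 re-monitoring step, with a calm result terminating the output (claim 10) and a persisting high emotional state triggering an additional corrective action (claim 11).

# Illustrative sketch only; all identifiers are hypothetical paraphrases of
# para. 0087 as applied above, not code disclosed by Gallagher.

from typing import Callable, Iterator, Optional

def follow_up(sense_state: Callable[[], str],
              stop_action: Callable[[], None],
              further_actions: Iterator[Callable[[], None]]
              ) -> Optional[Callable[[], None]]:
    """Re-check the occupant after a corrective action (steps 611-613)."""
    if sense_state() == "calm":   # step 612: changed to a calm state?
        stop_action()             # step 613 -> claim 10: terminate the action
        return None
    # still in the high emotional state -> claim 11: determine and implement
    # an additional action (repeat the prior output or try a new one)
    next_action = next(further_actions, None)
    if next_action is not None:
        next_action()
    return next_action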
Claim 12 is interpreted and rejected as discussed above with respect to claim 1.
Claim 13 is interpreted and rejected as discussed above with respect to claim 2.
Claim 14 is interpreted and rejected as discussed above with respect to claim 3.
Claim 15 is interpreted and rejected as discussed above with respect to claim 5.
Claim 19 is interpreted and rejected as discussed above with respect to claim 10.
Claim 20 is interpreted and rejected as discussed above with respect to claim 11.
Allowable Subject Matter
Claims 7-9 and 16-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure: see the attached PTO-892 Notice of References Cited.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED BARAKAT whose telephone number is (571) 270-3696. The examiner can normally be reached between 9:00 AM and 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Davetta Goins, can be reached at (571) 272-2957. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMED BARAKAT/
Primary Examiner, Art Unit 2689