DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-20 are pending for examination.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/27/2025 has been entered.
Response to Arguments
Applicant's arguments filed 10/27/2025 have been fully considered.
In response to the argument under “The 35 U.S.C. § 102(a)(1) Rejection” that Lisy fails to teach the newly introduced limitation “determining an alertness state of the user based on at least the sound data of the sensor data” recited in claims 1 and 17, the Examiner respectfully submits that Lisy teaches this limitation in the following disclosures.
Col. 52, lines 50-60, discloses:
“As already described in this disclosure, in various embodiments, preferably, the device or system of the present invention derives metrics indicative of one or more of heart rate, heart rate variability, respiration rate, stress, sleep, cycle, alertness, concentration, focus, preparedness, calorimetry, or metabolism using one or more of dry electrodes, body temperature sensors, ambient temperature sensors, galvanic skin response sensors, pulse oximetry sensors, near infrared sensors, GPS sensors, accelerometers, gyroscopes, altimeters, pressure sensors, proximity sensors, audio sensors,” and
Col. 48, line 59 – Col. 49, line 38, discloses:
“FIG. 4 shows another embodiment of the present invention consisting of sensors implemented in wireless earbuds 42 useful for detecting whether the subject is drowsy or alert, focused or unfocused during an occupational task such as driving a vehicle or operating dangerous machinery such as heavy construction equipment or factory equipment. Such sensors preferably comprise ECG, EEG, motion, and/or temperature sensors, as well as ambient and/or ear bone microphone(s).”
The cited passages clearly disclose that the earbuds determine an alertness state of the user (e.g., drowsy or not alert) based on sound data, including ambient sound and user speech, captured by the audio sensors of the earbuds.
Although Lisy does not provide a specific example detailing the steps used to perform the determination, an anticipatory reference is not required to disclose the details of how a claimed function is implemented, so long as the reference clearly teaches the claimed function. In this case, Lisy expressly discloses determining the alertness state of the user based on sound data obtained from the earbuds. This reasonably conveys to a person having ordinary skill in the art that the disputed limitation is anticipated by Lisy.
In addition, a supporting reference, Ardelean (Pub. No.: US 2020/0218914 A1), is cited to demonstrate how a sound recognition system determines drowsiness or sleepiness of a user based on the user’s speech. For example, Ardelean discloses determining that a user is sleepy or tired when the user speaks slowly and determining that the user is impaired when the user slurs speech. Ardelean confirms that a person having ordinary skill in the art would have understood how Lisy’s system uses sound data to determine the alertness of the user, as further illustrated by the example sketch below.
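For purposes of illustration only, and not as a characterization of any specific implementation disclosed in Lisy or Ardelean, the following minimal Python sketch shows one way speech features of the kind Ardelean describes (speaking rate and slurring) could be mapped to an alertness state; the function name, thresholds, and numeric values are hypothetical assumptions introduced solely for this example.

# Illustrative sketch only; thresholds and values are hypothetical,
# not taken from Lisy or Ardelean.
def classify_alertness(speech_rate_wps, slur_score):
    """Map simple speech features to an alertness state.

    speech_rate_wps: measured speaking rate, in words per second.
    slur_score: 0.0 (clear articulation) to 1.0 (heavily slurred).
    """
    NORMAL_RATE = 2.5      # hypothetical baseline speaking rate (words/second)
    SLOW_FACTOR = 0.6      # hypothetical cutoff for "speaking more slowly"
    SLUR_THRESHOLD = 0.5   # hypothetical cutoff for slurred speech

    if slur_score >= SLUR_THRESHOLD:
        return "impaired"          # cf. Ardelean para [0038]
    if speech_rate_wps <= SLOW_FACTOR * NORMAL_RATE:
        return "sleepy/tired"      # cf. Ardelean para [0034]
    return "alert"

# Example: slow, clearly articulated speech is classified as sleepy/tired.
state = classify_alertness(speech_rate_wps=1.2, slur_score=0.1)

Any comparable rule-based or learned mapping from speech features to an alertness state would serve equally well for this illustration.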
Therefore, the rejection is maintained for the reasons provided above.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-9, 12-14, 16-18 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lisy (Pat. No.: US 9,579,060 B1), as evidenced by Ardelean (Pub. No.: US 2020/0218914 A1).
Regarding claim 1, Lisy teaches a method (Abstract; Fig. 1 – Fig. 7, head-mounted monitoring device including earbuds or eyeglasses with earbuds 11, 42, 52, 61, 71), comprising:
receiving sensor data from an ear-worn electronic device disposed at least partially in the ear of a user, the sensor data comprising sound data from at least one microphone of the ear-worn electronic device (Col. 4 line 37 – 41, “Also preferably, the earbud further comprises at least one other sensor for pressure, temperature, g-force, altitude, galvanic skin response, blood oxygenation, movement, ambient sound, or the speech sound of the user.” and Col. 34 line 38-42, “In addition to or instead of an ambient audio sensor, some embodiments of the present invention may incorporate an ear bone microphone capable of picking up the speech of the subject through an earbud, advantageously allowing for hands-free two-way communication.”. The earbud includes a microphone configured to capture ambient noises and user speech.) indicating one or both of non-alert types of sounds and a reaction speed below a threshold (Col. 4 line 64 – Col. 5 line 7, “Further preferably, the at least one earbud further comprises an ambient microphone adapted to measure ambient sound signals, a computer processor adapted to process the ambient sound signals so as to generate corresponding noise cancelling sound signals provided through the at least one soundspeaker, and a user interface adapted to permit the subject to be provided with a non-ambient sound substantially free of ambient sound, a transparent ambient sound free of non-ambient sound, or any continuous mix between the two, through the at least one soundspeaker.”. Ambient noise and user speech are recognized by the earbud as non-alert types of sounds.);
determining an alertness state of the user based on at least the sound data of the sensor data (Col. 52 line 50-60, “As already described in this disclosure, in various embodiments, preferably, the device or system of the present invention derives metrics indicative of one or more of heart rate, heart rate variability, respiration rate, stress, sleep, cycle, alertness, concentration, focus, preparedness, calorimetry, or metabolism using one or more of dry electrodes, body temperature sensors, ambient temperature sensors, galvanic skin response sensors, pulse oximetry sensors, near infrared sensors, GPS sensors, accelerometers, gyroscopes, altimeters, pressure sensors, proximity sensors, audio sensors,”. The earbud determines the user’s alertness by using audio data captured by the microphone / audio sensors.); and
providing alert data in response to the alertness state comprising a non-alert state to a user stimulation interface of the ear-worn electronic device or to an external device (Fig. 4, Col. 48 line 59 – Col. 49 line 38, “FIG. 4 shows another embodiment of the present invention consisting of sensors implemented in wireless earbuds 42 useful for detecting whether the subject is drowsy or alert, focused or unfocused during an occupational task such as driving a vehicle or operating dangerous machinery such as heavy construction equipment or factory equipment. Such sensors preferably comprise ECG, EEG, motion, and/or temperature sensors, as well as ambient and/or ear bone microphone(s). … Upon detection of a condition such as a reduced heart rate, a change in brainwave pattern (particularly a transition from beta to alpha, theta, or delta waves) or a combination of conditions such as lack of substantial motion over a predefined threshold period of time in combination with temperature above or below a threshold (indicating possibly sleepiness), a notification or warning 41 may be delivered to the subject, either through a visual display (in a moving vehicle, preferably this is a heads-up display incorporated into the dashboard and/or windshield) that is in communication with the earbuds through a wireless connection, and/or through an alarm or speech message delivered through the earbuds or other audio device.”. The earbud provides an alert to the user in response to the determination that the user is drowsy or the alertness is low (analogous to “non-alert state”) based on the audio data captured from the ambient microphones / earbuds. A supporting reference, Ardelean, further demonstrates how a system uses speech of the user to determine drowsiness or impairment. See para [0034], “If the occupant (e.g., user 16 or passenger 102) is speaking more slowly, this may be indicative of the occupant (e.g., user 16 or passenger 102) being sleepy/tired.” and para [0038], “If the occupant (e.g., user 16 or passenger 102) is slurring their speech, this may be indicative of the occupant (e.g., user 16 or passenger 102) being impaired.”).
Regarding claim 2, Lisy teaches the method according to claim 1, wherein the sensor data comprises one or both of:
motion data indicating a head position of the user; and
motion data indicating head movements of the user (Col. 28 line 43-45, “Accelerometers are often included to detect high g-force conditions, angular movements and accelerations, and the like.”. The monitoring device (e.g., earbud) is worn at the user’s head. The movements detected by the accelerometers of the earbud are indicative of the movements of the user’s head).
Regarding claim 3, Lisy teaches the method according to claim 1, wherein the sensor data comprises physiological data including one or more of:
data indicating a respiration rate of the user (Col. 5 line 19 – 23, “Further preferably, the ECG signals are further processed to determine one or more of heart rate variability, respiration rate as derived from the ECG modulation and/or respiratory sinus arrhythmia or a combination of the two.”);
data indicating surprise of the user;
data indicating a temperature of the user (Col. 24 line 21-45, body temperature sensors);
data indicating physiological electrical signals of the user indicative of eye gaze; and
data indicating head motion or position of the user.
Regarding claim 4, Lisy teaches the method according to claim 1, wherein the sensor data comprises one or both of:
data indicating a reaction speed of the user; and
emotional state data (Col. 25 line 44-49, “Galvanic skin response sensors measure the recorded electrical resistance between two electrodes when a very weak current is steadily passed between them. The sensors are normally placed a short distance apart, and the resistance recorded varies in accordance with the emotional state of the subject.”).
Regarding claim 5, Lisy teaches the method according to claim 1, further comprising determining the alertness state of the user further based on external sensor data (Col. 33 line 19-41, “Interior ambient temperatures may be measured in cabin, cockpit, or other such vehicle-employed systems, as well as exterior ambient temperatures, or those outside of the cabin, cockpit, or the like. For diving applications, temperature sensors may be included to measure ambient water temperature. In other words, temperature sensors may be included to measure the temperature of all gases inhaled or exhaled by the subject, as well as any environmental or ambient temperatures surrounding the subject, such that the conditions surrounding the subject may be known and used to help monitor the subject's and system's statuses, as well as to detect or predict and mitigate or treat dangerous breathing conditions, and to help alert the subject or third party.” and Col. 34 line 58 – Col. 35 line 8, “Eye tracking sensors may be mounted on headgear (e.g., eyewear, the brims of hats, or in helmets) or may be placed on a separate device such as a display, smartphone or other portable electronic device, or on a dashboard or other vehicle or equipment control console. A basic eye tracking sensor may consist of only a single video camera which is able to determine, to within an acceptable degree of accuracy, where the eye(s) of a subject is/are looking, by, for example using an image matching algorithm or neural net algorithm to determine if eyes are pointed straight toward the camera or at some angle away from the camera. In certain applications it may be acceptable for such sensor to be only sensitive enough to make a binary determination (e.g., to determine only whether or not the subject has his or her eyes on the road, and thus to keep statistics of the amount of time the eyes are on the road and from such statistics make a determination of distracted or drowsy driving).” and Col. 52 line 50-67. External ambient temperature and/or eye tracking sensors are used to determine the alertness of the user).
Regarding claim 6, Lisy teaches the method according to claim 5, wherein the external sensor data comprises motion data from at least one motion sensor indicating arm movements of the user (Col. 4 line 2 -9, “The device or system may additionally pair with other sensor systems attached to the subject or worn by the subject, for example an arm or torso sensor or an electronic device such as a digital watch or smart watch, or cellular phone or smart phone, and where the invention fuses data from the sensors on the various devices to provide a more complete and robust set of data as well as recommendations and/or warnings.”. The system includes an arm sensor.).
Regarding claim 7, Lisy teaches the method according to claim 5, wherein the external sensor data comprises visual data from at least one optical sensor comprising eye gaze data of the user (Col. 34 line 58 – Col. 35 line 8, eye tracking sensor, such as a smartphone, determines whether the user/driver’s eyes are looking at the road).
Regarding claim 8, Lisy teaches the method according to claim 7, wherein the external sensor data comprises driving data, the method further comprising determining a user operation adherence level based on the driving data (Col. 34 line 58 – Col. 35 line 8, “Eye tracking sensors may be mounted on headgear (e.g., eyewear, the brims of hats, or in helmets) or may be placed on a separate device such as a display, smartphone or other portable electronic device, or on a dashboard or other vehicle or equipment control console. A basic eye tracking sensor may consist of only a single video camera which is able to determine, to within an acceptable degree of accuracy, where the eye(s) of a subject is/are looking, by, for example using an image matching algorithm or neural net algorithm to determine if eyes are pointed straight toward the camera or at some angle away from the camera. In certain applications it may be acceptable for such sensor to be only sensitive enough to make a binary determination (e.g., to determine only whether or not the subject has his or her eyes on the road, and thus to keep statistics of the amount of time the eyes are on the road and from such statistics make a determination of distracted or drowsy driving).”. The system keeps statistics of the amount of time the driver’s eyes are on the road while driving and uses those statistics to determine distracted or drowsy driving, which corresponds to determining a user operation adherence level based on the driving data.).
Regarding claim 9, Lisy teaches the method according to claim 5, wherein the external sensor data comprises chemical sensor data indicating an impairment level (Col. 25 line 25-43, “Galvanic skin response sensors measure a subject's level of excitement, stress, or other such indicators of psychological or physiological stimulation or arousal, as a function of the increased skin conductance caused by the increase in sweat. Skin conductance is a measure of the electrical conductance of the skin, and is commonly known in the art as one of several names, including galvanic skin response (GSR), electrodermal response (EDR), psychogalvanic reflex (PGR), skin conductance response (SCR) or skin conductance level (SCL). Galvanic skin response typically varies based on the moisture level of the subject's skin, such as is caused by sweating. Galvanic skin response may measure stimulation or arousal due to the fact that sweat is controlled by the sympathetic nervous system, which is the part of the autonomic nervous system that initiates or activates the fight-or-flight response to some stimulus applied to the sympathetic neurons.”. The galvanic skin response sensor is considered a chemical sensor because it reacts to the chemicals in sweat to determine the stimulation or arousal level of the user, which is indicative of an impairment level.).
Regarding claim 12, Lisy teaches the method according to claim 1, further comprising updating a model to determine the alertness state of the user based on historical user data to reduce false positive indications of non-alert states (Col. 9 line 21-25, “Preferably this method, further comprises the steps of determining, with a computer processor, when the at least one derived metric has exceeded a threshold or has exhibited a predefined or learned pattern; and automatically providing a stimulus to the subject”. The system learns the user’s patterns over time and uses the learned patterns to determine the alertness of the user.).
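For purposes of illustration only, the following minimal sketch shows one way a per-user alert threshold could be updated from historical user data, consistent with the “predefined or learned pattern” quoted above; the update rule, function name, and numeric values are hypothetical assumptions and are not disclosed by Lisy.

# Illustrative sketch only; the update rule and values are hypothetical.
from statistics import mean, pstdev

def update_threshold(historical_metric_values, k=2.0):
    """Derive a personalized alert threshold from a user's historical metrics."""
    baseline = mean(historical_metric_values)
    spread = pstdev(historical_metric_values)
    return baseline - k * spread   # flag a possible non-alert state below this value

# Example: heart-rate history (beats per minute) for one user.
history = [72, 70, 74, 71, 69, 73, 75, 70]
threshold = update_threshold(history)

Because the threshold tracks the individual user’s own baseline and variability, readings that are merely normal for that user are less likely to be flagged, reducing false-positive indications of a non-alert state.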
Regarding claim 13, Lisy teaches the method according to claim 1, further comprising providing the alert data to an external device to activate an autonomous or semi-autonomous operational mode of a machine being operated by the user (Col. 49 line 18- 22, “The present invention may further trigger the driven vehicle to initiate automatic braking, automatic steering, or other autopilot features, or may trigger any other equipment to shut down and/or put itself in a less dangerous state,”.).
Regarding claim 14, Lisy teaches the method according to claim 1, wherein a user-perceptible alert is provided in response to the alert data comprising one or more of the following:
an audible alert,
a haptic alert,
an electrical stimulation alert, and
a visual alert (Fig. 4, visual alert 41).
Regarding claim 16, Lisy teaches the method according to claim 1, wherein the sensor data comprises data indicative of the user’s circadian rhythm, and the alertness state of the user is determined based on the circadian rhythm data (Col. 24 line 21-45, “Many embodiments of the present invention further include sensors for measuring the subject's body temperature. Because body temperature affects the rate of chemical reactions critical to normal body operation and healthy survival, the body's thermoregulation mechanisms attempt to keep the subject at optimum operating temperature, 37° C. (98.6° F.) on average in humans, with variation among individuals and in accordance with seasonal, hormonal and menstrual cycles and circadian rhythms—with about 0.5° C. (0.9° F.) variance between daily high and low points. Increased body temperature is indicative of strenuous physical activity, and body temperature change—whether severely increased or severely decreased—is also symptomatic of illness or other dangerous conditions such as hypothermia. Eating, drinking, and smoking can all influence body temperature, as can sleep disturbances (with temperature dropping during rest). Thus, monitoring of body temperature can provide useful exercise and health information, and can alert the subject to take a break from exercise, to oncoming sickness, to bedtime or waketime, to mealtime or caloric restriction, to periods of fertility, etc. It is also believed that an increase in daily body temperature variation can provide an indicator of increased overall physical fitness.”).
Regarding claim 17, the claim recites a method that is similar to the combination of claims 1 and 2. Therefore, it is rejected for the same reasons.
Regarding claim 18, the claim recites a method that is similar to claim 2. Therefore, it is rejected for the same reasons.
Regarding claim 20, the claim recites a method that is similar to claim 16. Therefore, it is rejected for the same reasons.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Lisy (Pat. No.: US 9,579,060 B1), as evidenced by Ardelean (Pub. No.: US 2020/0218914 A1), and further in view of Kates (Pub. No.: US 2006/001114 A1).
Regarding claim 10, Lisy teaches the method according to claim 1, and teaches that the earbud receives ambient sounds around the user, but fails to teach the method further comprising receiving support animal data indicating a reaction of a support animal of the user.
However, in the same field of sound detection, Kates teaches a sound recognition system configured to recognize sounds made by an animal of the user and to provide an alert to the user based on the recognized sound. See para [0171], “This dog "speech recognition" system can base its discrimination on acoustic features, such as, for example, formant structure, pitch, loudness, spectral analysis, etc. When the computer recognizes the message behind the sounds made by the dog, then the system 130 can respond accordingly, either by providing a message to the owner/trainer or by taking action in the dog's environment. Thus, for example, if the dog emits a cry of pain, a choking sound, or the like, the system 130 will raise an alarm and attempt to contact the owner or trainer.”.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Lisy’s earbuds to recognize distress sounds of an animal and to generate an alert to the user to enhance safety.
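For purposes of illustration only, the following minimal sketch shows one way an acoustic-feature-based discrimination of the kind described by Kates could be carried out; the feature names, thresholds, and labels are hypothetical assumptions and do not represent Kates’s disclosed implementation.

# Illustrative sketch only; features, thresholds, and labels are hypothetical.
def classify_animal_sound(pitch_hz, loudness_db, duration_s):
    """Crude rule-based discrimination of a support animal's vocalization."""
    if pitch_hz > 800 and loudness_db > 80:
        return "cry of pain"
    if duration_s < 0.3 and loudness_db > 75:
        return "choking or coughing sound"
    return "normal vocalization"

def support_animal_alert(pitch_hz, loudness_db, duration_s):
    """Return alert data when a distress sound is recognized."""
    label = classify_animal_sound(pitch_hz, loudness_db, duration_s)
    return {"alert": label != "normal vocalization", "reason": label}

# Example: a loud, high-pitched yelp produces alert data for the user.
result = support_animal_alert(pitch_hz=950.0, loudness_db=85.0, duration_s=0.5)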
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Lisy (Pat. No.: US 9,579,060 B1), as evidenced by Ardelean (Pub. No.: US 2020/0218914 A1), and further in view of Yoon (Pub. No.: US 2019/0366844 A1).
Regarding claim 11, Lisy teaches the method according to claim 1, but fails to teach further comprising:
determining a driver error score based on a current driver score to a predetermined baseline driver score; and
determining an alertness state of the user further based on the driver error score.
However, in the same field of alertness detection, Yoon teaches a system configured to determine alertness of the driver based on determining a driver error score (Fig. 3 step S330, alertness score) based on a current driver score to a predetermined baseline driver score (para [0134], “The alertness level determining engine 247 weights result values numerically represented as a result of the above-described factors with different weights and determines the alertness level of the driver from a computation (for example, sum) of the weighted result value. For example, an alertness value based on the image sensor 80 may be weighted with the highest weight and an alertness value based on the displacement profile of the brake pedal or the gas pedal 90 may be weighted with the lowest weight.”. The alertness level is thus determined as a weighted combination of individual factor scores, e.g., a score derived from the image sensor (direction of the driver’s face) weighted most heavily and a score derived from the brake-pedal or gas-pedal displacement profile weighted least heavily); and
determining an alertness state of the user further based on the driver error score (Fig. 3, step 340, “In step S340, the control device 240 compares the alertness level of the driver with a threshold level. The threshold level is a predetermined value to determine a possibility of drowsy driving by the driver. For example, when the alertness level is determined to be a value between 1 and 5, the threshold level may be predetermined to be 3.5”).
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Lisy’s earbuds to determine an alertness score based on a weighted sum of a plurality of sensor-derived scores, in order to improve the accuracy of the alertness determination.
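For purposes of illustration only, the following minimal sketch shows a weighted combination of per-factor alertness scores of the kind described in Yoon’s paragraph [0134], compared against a threshold as in step S340; the particular factors, weights, score values, and the 3.5 threshold are used purely as hypothetical example inputs.

# Illustrative sketch only; factors, weights, and scores are hypothetical.
def alertness_level(scores, weights):
    """Weighted combination of per-factor alertness scores (cf. Yoon para [0134])."""
    assert scores.keys() == weights.keys()
    return sum(scores[k] * weights[k] for k in scores) / sum(weights.values())

scores = {"image_sensor": 2.0, "brake_pedal": 4.0, "steering": 3.0}   # 1 (drowsy) .. 5 (alert)
weights = {"image_sensor": 0.5, "brake_pedal": 0.1, "steering": 0.4}  # image sensor weighted highest

level = alertness_level(scores, weights)     # 2.6 on the 1-5 scale
possible_drowsy_driving = level < 3.5        # threshold as in the step S340 example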
Claims 15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lisy (Pat. No.: US 9,579,060 B1), as evidenced by Ardelean (Pub. No.: US 2020/0218914 A1), and further in view of Dukish (Pat. No.: US 9,142,130 B1).
Regarding claim 15, Lisy teaches the method according to claim 1, wherein the earbud detects ambient noises and speech, but fails to teach that the sound data comprises sound produced by a vehicle operated by the user running over a rumble strip.
However, in the same field of sound detection, Dukish teaches a sound detection system configured to detect the sound generated by the vehicle’s tires as the vehicle travels over the rumble strips. Figs. 1 – 7 and Col. 3 line 56 - Col. 4 line 3, “Further yet, as best seen in FIGS. 4A, 4B, and 5, the control system contains a microphone 52 to detect the sound vibration 18 generated as a vehicle's tire 12 travels over the rumble strips 14. The microphone 52 may be any type of transducer able to respond to sound energy. Referring to FIG. 4A, the microphone 52 is part of a control unit 50 where the output of the microphone is preferably passed to a filter 54 for conditioning and amplification before being passed to a tone detector 56. The tone detector can be a simple analog circuit with resistive and capacitive components chosen to respond to voltage levels of the incoming audio wave of interest, which when working in conjunction with multivibrators will produce an output when the incoming audio wave 18 substantially matches the pre-programmed characteristics of the sound of a tire 12 traveling over the rumble strip 14.”.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Lisy’s earbuds to recognize the sound of a vehicle running over a rumble strip, as taught by Dukish, in order to provide an additional indication of drowsy or inattentive driving and thereby improve safety.
Regarding claim 19, the claim recites a method that is similar to claim 15. Therefore, it is rejected for the same reasons.
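For purposes of illustration only, the following minimal sketch shows one way the sound of a tire traveling over a rumble strip could be detected in a microphone signal by checking the energy within a pre-programmed frequency band, functionally analogous to the filter and tone detector described by Dukish; the band edges, sample rate, and threshold are hypothetical assumptions rather than values taken from Dukish’s analog circuit.

# Illustrative sketch only; band edges, sample rate, and threshold are hypothetical.
import numpy as np

def rumble_strip_detected(samples, sample_rate=8000,
                          band=(90.0, 130.0), energy_ratio_threshold=0.4):
    """Return True when a large share of the signal energy lies in the pre-programmed band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    if total == 0:
        return False
    return spectrum[in_band].sum() / total >= energy_ratio_threshold

# Example: a synthetic 110 Hz tone (within the assumed band) is detected.
t = np.arange(8000) / 8000.0
detected = rumble_strip_detected(np.sin(2 * np.pi * 110.0 * t))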
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHEN Y WU whose telephone number is (571)272-5711. The examiner can normally be reached Monday-Friday, 10AM-6PM, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Quan-Zhen Wang can be reached at 571-272-3114. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ZHEN Y WU/Primary Examiner, Art Unit 2685