Prosecution Insights
Last updated: April 19, 2026
Application No. 18/799,319

Methods and Systems for Processing, Storing, and Publishing Data Collected by an In-Ear Device

Status: Non-Final OA (§103)
Filed: Aug 09, 2024
Examiner: PASHA, ATHAR N
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: The Diablo Canyon Collective LLC
OA Round: 1 (Non-Final)

Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (above average; 138 granted / 154 resolved; +27.6% vs TC avg)
Interview Lift: +17.0% (strong; measured over resolved cases with interview)
Avg Prosecution: 2y 8m typical timeline (18 currently pending)
Total Applications: 172 career history, across all art units
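The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the only inputs taken from this page are the 138/154 counts; the with/without-interview split used to illustrate the lift is hypothetical, since the page reports only the +17-point result):

```python
# Career allow rate: granted applications over resolved applications
# (counts shown in the panel above).
granted, resolved = 138, 154
allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.0%}")  # rounds to 90%

# Interview lift: allow rate among resolved cases that had an examiner
# interview minus the rate among those that did not. The split below is
# purely illustrative, not from the page.
def interview_lift(granted_w, total_w, granted_wo, total_wo):
    return granted_w / total_w - granted_wo / total_wo

print(f"lift (illustrative split): {interview_lift(48, 50, 79, 100):+.1%}")
```

The lift is a raw difference in allow rates, so it is only meaningful when the examiner has enough resolved interview cases for the numerator to be stable.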

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§102: 16.9% (-23.1% vs TC avg)
§112: 5.2% (-34.8% vs TC avg)
Deltas are against a Tech Center average estimate • Based on career data from 154 resolved cases
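Each statute figure pairs the examiner's rate with a delta against the Tech Center average, so the implied baseline is simply rate minus delta. A small sketch using the figures above (which, incidentally, all work out to the same ~40.0% implied baseline in this data):

```python
# Examiner statute-specific rates (%) and deltas vs. the Tech Center
# average, as reported above.
stats = {
    "§101": (21.9, -18.1),
    "§103": (49.4, +9.4),
    "§102": (16.9, -23.1),
    "§112": (5.2, -34.8),
}

for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta  # baseline the delta was measured against
    print(f"{statute}: examiner {rate:.1f}% vs TC avg ~{implied_tc_avg:.1f}%")
```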

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/9/24 is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3, 7, 12, 14 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Grizzel in further view of Vaughn and Winn.

Claim 1. An earphone of a user comprising: / Claim 12. A method comprising:

a plurality of sensors (Grizzel, Col. 5 para. 2: "The audio may be detected by a microphone and the movement input may be detected by a motion sensor"); and

a circuit that executes instructions stored on a memory to perform operations, the operations comprising:

receiving an audio signal (Grizzel, Col. 5 para. 2: "The audio may be detected by a microphone and the movement input may be detected by a motion sensor"; Col. 25 para. 1: "As shown in FIG. 7A, an example wearable device 110 may be an earbud wearable device 110a with two sides where each side includes an inner-lobe insert 750 that includes a speaker 101");

Grizzel does not explicitly disclose, however Vaughn teaches, predicting a user response to the audio signal (Vaughn, Claim 1:
to identify a context based on the sensed information; an action identifier communicatively coupled to the sound identifier and the context identifier to identify an action [predict user response] based on the identified sound and the identified context; and a context developer communicatively coupled to the context identifier to develop contextual information for the context identifier, wherein the contextual information is to be one or more of an emotional state of a user, biometric information of the user, gesture information of the user, or facial information of the user, and wherein the context developer comprises a machine learner to identify a sound, categorize a sound, and identify a new action [response] based on one or more of the context or a monitored response),

wherein predicting the user response includes comparing sound records stored on a device memory to the audio signal and biometric features of the user detected by the plurality of sensors (Vaughn, Claim 1: "wherein the contextual information is to be one or more of an emotional state of a user, biometric information of the user, gesture information of the user, or facial information of the user, and wherein the context developer comprises a machine learner to identify a sound, categorize a sound, and identify a new action [predict user response] based on one or more of the context or a monitored response"); and

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the earphone of Grizzel with the sound analysis of Vaughn in order to improve results for sound/context/action identification (Vaughn, [0057]).

Neither Grizzel nor Vaughn explicitly discloses, however Winn teaches, providing a notification to the user based on the predicted response (Winn, [0077]: "A user notification 310 is stored in the user notification database 920 corresponding to the predicted response").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the earphone of Grizzel, in view of the sound analysis of Vaughn, to include the notification of Winn in order to increase the accuracy of the predicted response (Winn, [0033]).

With respect to claims 3 and 14, Grizzel further teaches wherein the plurality of sensors includes one or more physiological sensors to detect the biometric features of the user, the biometric features including motion data that indicates whether the user reacts or does not react to the audio signal (Grizzel, Col. 23 ll. 6-17: "For example, if device 110 captures audio data corresponding to 'play my favorite playlist' while detecting motion data corresponding to a user performing a head nod" [a head nod is biometric data sensed by on-board sensors]).

With respect to claim 4, Grizzel teaches wherein the motion data includes a head movement of the user (Grizzel, Col. 23 ll. 6-17: "For example, if device 110 captures audio data corresponding to 'play my favorite playlist' while detecting motion data corresponding to a user performing a head nod").

With respect to claims 6 and 16, Grizzel teaches storing the sound records and the biometric features according to a storage plan, wherein the storage plan includes storing at least part of the biometric features on the device memory (Grizzel, Col. 21 ll. 35-39: "FIG. 6 illustrates...audio data 111, sensor data 302, gesture data 304 and/or time data 306 to the server(s) 120 via a network(s) 199"; Col. 7 ll. 10-20: "Once speech is detected in the input audio 11, the voice input device 110 [headset] may use the wake command detection component 220 to perform wakeword detection … or other data to determine if the incoming audio 'matches' stored audio data [on device memory] corresponding to a keyword").
With respect to claims 7 and 17, Grizzel teaches wherein storing the sound records and the biometric features includes generating an in-ear data object, the in-ear data object including portions of the audio signal and the biometric features (Grizzel, Col. 21 ll. 35-39 and Col. 7 ll. 10-20, cited above).

With respect to claims 8 and 18, Grizzel teaches wherein the device memory includes the memory of the earphone, a memory of an associated user device, or a memory of an external system (Grizzel, Col. 21 ll. 35-39 and Col. 7 ll. 10-20, cited above).

With respect to claims 11 and 20, Vaughn further teaches wherein predicting the user response to the audio signal includes utilizing a machine learning model to compare the sound records stored on the device memory to the audio signal and the biometric features of the user detected by the plurality of sensors, the machine learning model being stored on the device memory (Vaughn, Claim 1: "wherein the contextual information is to be one or more of an emotional state of a user, biometric information of the user, gesture information of the user, or facial information of the user, and wherein the context developer comprises a machine learner to identify a sound, categorize a sound, and identify a new action [predict user response] based on one or more of the context or a monitored response").

Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Grizzel, Vaughn and Winn in further view of Goldstein_458 (US 20080240458 A1).

With respect to claims 2 and 13, none of Grizzel, Vaughn and Winn explicitly discloses, however Goldstein_458 teaches, wherein the plurality of sensors includes one or more microphones to detect the audio signal, the audio signal including a safety-related sound (Goldstein_458, [0028]: "Referring to FIG. 2, a block diagram of the earpiece 100 in accordance with an exemplary embodiment is shown. As illustrated, the earpiece 100 can include a processor 206 operatively coupled to the ASM 110, ECR 120, and ECM 130 via one or more Analog to Digital Converters (ADC) 202 and Digital to Analog Converters (DAC) 203. The processor 206 can monitor the ambient sound captured by the ASM 110 for target sounds in the environment, such as an alarm (e.g., bell, emergency vehicle, security system, etc.), siren (e.g., police car, ambulance, etc.), voice (e.g., 'help', 'stop', 'police', etc.)").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the earphone of Grizzel, in view of the sound analysis of Vaughn and the notification of Winn, to include the sounds of Goldstein_458 in order to allow users to react in time on hearing such sounds (Goldstein_458, [0003]).

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Grizzel, Vaughn and Winn in further view of Jarrell, Angell and Boscacci.
With respect to claims 5 and 15, Grizzel, Vaughn and Winn do not explicitly disclose, however Jarrell teaches, wherein the sound records include sounds classified as safety-related sounds, the classified safety-related sounds including emergency broadcast signals, earthquake sirens, tornado signals, police sirens, ambulance sirens, firetruck sirens, car horns, fire alarms, smoke alarms, and train signals (Jarrell, [0174]: "For example, a different blinking light pattern or siren signal may be used to indicate an earthquake, a storm, a tornado, a hurricane, a typhoon, a tsunami or other weather event versus smoke, fire, suspicious activity, a disturbance, an intrusion, or some other threat or emergency"; [0214]: "The communications repeater module may include one or more first antennas configured to receive an initial communications signal, one or more amplifiers to amplify or boost the received initial communications signal, and/or one or more second antennas to transmit or broadcast the amplified or boosted signal").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the earphone of Grizzel, in view of the sound analysis of Vaughn and the notification of Winn, to include the sounds of Jarrell in order to further classify specific sounds so that they can be detected, and to have a more robust reference for detecting different types of alarms, sirens, etc., so as to decipher their source.

Grizzel, Vaughn, Winn and Jarrell do not explicitly disclose, but Angell teaches, the sounds of car alarms, tornado sirens, smoke alarms, fire alarms, police sirens, fire truck sirens, and ambulance sirens (Angell, [0057]: "Digital audio data 312 may also be used to identify humans and animals with irregular breathing, wheezing, congestion, coughing, or sneezing to identify, for example and without limitation, patients and/or animals that are suffering from infections, bronchitis, asthma, the flu, a cold or other health problems, the sound of car alarms, gas leaks, tornado sirens, smoke alarms, fire alarms, burglar alarms, police sirens, fire truck sirens, ambulance sirens, and other emergency or warning alarms that may indicate the presence of potentially hazardous conditions, emergency situations, and/or dangerous substances").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the earphone of Grizzel, in view of the sound analysis of Vaughn, the notification of Winn, and the sounds of Jarrell, to include the sounds of Angell in order to further classify specific sounds so that they can be detected, and to have a more robust reference for detecting different types of alarms, sirens, etc., so as to decipher their source.

Grizzel, Vaughn, Winn, Jarrell and Angell do not explicitly disclose, but Boscacci teaches, a train signal (Boscacci, [0036]: "If the emitter is mounted next to a railway level crossing, it may emit a signal only when a train is in the vicinity and the signal may be 'Warning Train is Crossing the Road' or may comprise the normal horn sound of the train.").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the earphone of Grizzel, in view of the sound analysis of Vaughn, the notification of Winn, the sounds of Jarrell, and the sounds of Angell, to include the sounds of Boscacci in order to further classify specific sounds so that they can be detected, and to have a more robust reference for detecting different types of alarms, sirens, etc., so as to decipher their source.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Grizzel, Vaughn and Winn in further view of Goldstein_671 (US 20170112671 A1).

With respect to claim 9, Grizzel, Vaughn and Winn do not explicitly disclose, however Goldstein_671 teaches, wherein the notification includes an audible message notifying the user of the audio signal (Goldstein_671, [0131]: "As a non-limiting example, if a user is in a polluted environment, such as air filled with VOCs, the communication module may notify the user to move to a new environment…The communication module may utilize audible or visible alerts if the user is meeting their physiological targets or exceeding safe physiological limits.").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the earphone of Grizzel, in view of the sound analysis of Vaughn and the notification of Winn, to include the alerts of Goldstein_671 in order to allow users to enhance their awareness.
With respect to claims 10 and 19, Grizzel, Vaughn and Winn do not explicitly disclose, however Goldstein_671 teaches, wherein the notification includes an audible message instructing the user to pay attention to their surroundings (Goldstein_671, [0131]: "As a non-limiting example, if a user is in a polluted environment, such as air filled with VOCs, the communication module may notify the user to move to a new environment…The communication module may utilize audible or visible alerts if the user is meeting their physiological targets or exceeding safe physiological limits.").

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the earphone of Grizzel, in view of the sound analysis of Vaughn and the notification of Winn, to include the alerts of Goldstein_671 in order to allow users to enhance their awareness.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ATHAR N PASHA, whose telephone number is (408)918-7675. The examiner can normally be reached Monday-Thursday and alternate Fridays, 7:30-4:30 PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ATHAR N PASHA/
Primary Examiner, Art Unit 2657

Prosecution Timeline

Aug 09, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596882: COMPLIANCE DETECTION USING NATURAL LANGUAGE PROCESSING (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586563: Method, System and Apparatus for Understanding and Generating Human Conversational Cues (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579173: SYSTEMS AND METHODS FOR DYNAMICALLY PROVIDING INTELLIGENT RESPONSES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566921: GAZETTEER INTEGRATION FOR NEURAL NAMED ENTITY RECOGNITION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547844: INTELLIGENT MODEL SELECTION SYSTEM FOR STYLE-SPECIFIC DIGITAL CONTENT GENERATION (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+17.0%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 154 resolved cases by this examiner. Grant probability derived from career allow rate.
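The page does not state how the 90% base probability and the +17-point interview lift combine into the 99% "with interview" figure; a capped-addition rule is one plausible model, sketched here purely as an assumption:

```python
# HYPOTHETICAL combination rule: add the interview lift to the base grant
# probability and cap the result (the page never reports above 99%).
def grant_probability_with_interview(base: float, lift: float,
                                     cap: float = 0.99) -> float:
    return min(base + lift, cap)

# 0.90 + 0.17 exceeds the cap, consistent with the 99% shown above.
print(grant_probability_with_interview(0.90, 0.17))  # 0.99
```

A multiplicative or logistic adjustment would also reproduce the displayed numbers; without the tool's methodology, the capped sum is just the simplest fit.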
