Prosecution Insights
Last updated: April 19, 2026
Application No. 18/387,849

RECORDING SYSTEM

Status: Non-Final OA (§102, §103)
Filed: Nov 07, 2023
Examiner: SELLERS, DANIEL R
Art Unit: 2694
Tech Center: 2600 — Communications
Assignee: St Famtech LLC
OA Round: 2 (Non-Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 2-3
Estimated Time to Grant: 3y 6m
Grant Probability with Interview: 84%

Examiner Intelligence

Career Allow Rate: 67% (above average; +5.4% vs TC avg), based on 401 granted / 595 resolved
Interview Lift: +16.9% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 6m average prosecution; 28 applications currently pending
Career History: 623 total applications across all art units
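The headline figures above can be cross-checked against the raw counts shown in the report. The check below uses only numbers from the report; the assumption that the 84% "with interview" estimate is roughly the base rate plus the quoted lift is mine, not the report's:

```python
# Sanity-check the dashboard's headline examiner statistics.
# All inputs are values shown in the report; nothing here is new data.
granted, resolved = 401, 595

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 67.4%, displayed as 67%

# If the 84% "with interview" figure were base rate + lift (an assumption),
# the implied lift would be close to the +16.9% the report shows:
base_rate, with_interview = 0.67, 0.84
print(f"Implied interview lift: {with_interview - base_rate:+.1%}")
```

The small gap between the implied +17.0% and the reported +16.9% is consistent with the displayed percentages being rounded.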

Statute-Specific Performance

§101: 5.9% (-34.1% vs TC avg)
§103: 63.6% (+23.6% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 595 resolved cases
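Each statute's implied Tech Center baseline can be recovered from the displayed rate and its delta (implied average = rate minus delta). Notably, all four statutes back out to the same ~40% figure, which suggests the black-line baseline is a single TC-wide estimate rather than a per-statute average; the dictionary layout below is mine:

```python
# Recover the implied Tech Center average behind each statute's "vs TC avg"
# delta. Inputs are the percentages shown in the report.
stats = {  # statute: (examiner rate %, delta vs TC avg %)
    "§101": (5.9, -34.1),
    "§103": (63.6, +23.6),
    "§102": (18.6, -21.4),
    "§112": (6.8, -33.2),
}
for statute, (rate, delta) in stats.items():
    implied_avg = rate - delta
    print(f"{statute}: implied TC avg = {implied_avg:.1f}%")
# Every row backs out to 40.0%, i.e. one shared baseline.
```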

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application is being examined under the pre-AIA first to invent provisions.

Response to Arguments

Applicant's arguments with respect to claim(s) 1 and 21-26 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of pre-AIA 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (e) the invention was described in (1) an application for patent, published under section 122(b), by another filed in the United States before the invention by the applicant for patent or (2) a patent granted on an application for patent by another filed in the United States before the invention by the applicant for patent, except that an international application filed under the treaty defined in section 351(a) shall have the effects for purposes of this subsection of an application filed in the United States only if the international application designated the United States and was published under Article 21(2) of such treaty in the English language.

Claim(s) 1-10, 12-14, 16, and 20-26 is/are rejected under pre-AIA 35 U.S.C. 102(e) as being anticipated by Goldstein et al., US 2008/0187163 A1 (hereafter Goldstein).

The applied reference has a common joint inventor with the instant application. Based upon the pre-AIA 35 U.S.C. 102(e) date of the reference, it constitutes prior art. This rejection under pre-AIA 35 U.S.C. 102(e) might be overcome either by a showing under 37 CFR 1.132 that any invention disclosed but not claimed in the reference was derived from the inventor or joint inventors (i.e., the inventive entity) of this application and is thus not the invention "by another," or if the same invention is not being claimed, by an appropriate showing under 37 CFR 1.131(a).

Regarding claim 1, Goldstein anticipates: "A Recording System (ARS) comprising: a monitoring assembly, the monitoring assembly including a first sound microphone (FSM) to monitor an ambient acoustic field proximate to a wearable, the FSM producing a first signal responsive to the ambient acoustic field" (see Goldstein, ¶ 0032 and 0042, and figure 2, unit 111), "a second sound microphone (SSM) configured to primarily pickup a user's voice when a user is speaking into the wearable, wherein the SSM produces a second signal" (see Goldstein, ¶ 0043 and figure 2, unit 125), and "a processor" (see Goldstein, ¶ 0034 and figure 2, unit 121); "at least one circular buffer for continually storing the first signal and the second signal" (see Goldstein, ¶ 0035 and 0045-0046, pending claim 5, figure 2, unit 209, figure 3, step 260, and figures 5 and 7, unit 443); "an analysis system that analyzes the first signal and the second signal to detect a trigger, wherein the analysis system is operated by the processor" by detecting an event in the audio, such as a detected sound signature (see Goldstein, ¶ 0047-0048, figure 3, step 262); "a data storage device" (see Goldstein, ¶ 0035 and figure 2, unit 208); and "a record activation system that is activated when the trigger is detected by the analysis system, wherein the record activation system stores a third signal in the data storage device, wherein the third signal is a portion of the first signal" (see Goldstein, ¶ 0046-0048 and 0057-0060 and figures 6-7), and "wherein the third signal includes embedded time information" (see Goldstein, ¶ 0063-0064 and figure 9, steps 520,
536, and 550).

Regarding claim 2, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, wherein the trigger is the user's speech" (see Goldstein, ¶ 0008 and 0047).

Regarding claim 3, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, wherein the trigger is the other than the user's speech" by teaching the trigger is a detected abrupt movement (see Goldstein, ¶ 0048).

Regarding claim 4, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, wherein the third signal is a portion of the first signal" by teaching that some of the buffered audio signals, including the first signal from the FSM, are stored when an event occurs (see Goldstein, ¶ 0046-0048 and 0057-0060 and figures 6-7).

Regarding claim 5, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, wherein the third signal is a portion of the second signal" by teaching that some of the buffered audio signals, including the second signal from the SSM, are stored when an event occurs (see Goldstein, ¶ 0046-0048 and 0057-0060 and figures 6-7).

Regarding claim 6, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, where the system is further configured to receive a fourth signal" by teaching a received signal, such as audio content (see Goldstein, ¶ 0036 and 0044).

Regarding claim 7, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, where the wearable is at least one of an earphone, a phone, or a computing device" by teaching an earpiece (see Goldstein, ¶ 0031 and 0034, and figures 1-2).

Regarding claim 8, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, the system further comprising: a signal router which selects which portion of the first signal or which portion of the second signal is in the third signal" (see Goldstein, ¶ 0053-0054 and figure 4, units 303, 305, 311, 313, 315, 317, and 323).

Regarding claim 9, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, wherein the time information is a time-coded index" (see Goldstein, ¶ 0050 and 0064).

Regarding claim 10, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, wherein the trigger is an accident" (see Goldstein, ¶ 0048).

Regarding claim 12, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, wherein the third signal is converted to text" (see Goldstein, ¶ 0064 and figure 9, step 533).

Regarding claim 13, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system according to claim 1, the system further comprising: a remote audio forensics analysis system configured to analyze the third signal" (see Goldstein, ¶ 0064 and figure 9, steps 537 and 539).

Regarding claim 14, see the preceding rejection with respect to claim 13 above. Goldstein anticipates the "system according to claim 13, where the remote audio forensics analysis system includes a communication system configured to transmit the third signal to a remote server for analysis of the third signal" (see Goldstein, ¶ 0064 and figure 9, steps 521, 523, 531, 537 and 539).

Regarding claim 16, see the preceding rejection with respect to claim 6 above. Goldstein anticipates the "system according to claim 6, wherein the fourth signal is audio content sent to the speaker of the wearable" by teaching that the received signal, such as audio content, is output by the ECR in the earpiece (see Goldstein, ¶ 0036 and 0043).
Regarding claim 20, see the preceding rejection with respect to claim 1 above. Goldstein anticipates the "system of claim 1, wherein the trigger is a sudden acceleration indicative of an accident or a fall" (see Goldstein, ¶ 0062 and figure 8, steps 503, 511, and 599).

Regarding claim 21, Goldstein anticipates: "A Recording System (ARS) comprising: a monitoring assembly, the monitoring assembly including a first sound microphone (FSM) to monitor an ambient acoustic field proximate to a wearable, the FSM producing a first signal responsive to the ambient acoustic field" (see Goldstein, ¶ 0032 and 0042, and figure 2, unit 111), "a second sound microphone (SSM) configured to primarily pickup a user's voice when a user is speaking into the wearable, wherein the SSM produces a second signal" (see Goldstein, ¶ 0043 and figure 2, unit 125), and "a processor" (see Goldstein, ¶ 0034 and figure 2, unit 121); "at least one circular buffer for continually storing the first signal and the second signal" (see Goldstein, ¶ 0035 and 0045-0046, pending claim 5, figure 2, unit 209, figure 3, step 260, and figures 5 and 7, unit 443); "an analysis system that analyzes the first signal and the second signal to detect a trigger, wherein the analysis system is operated by the processor" by detecting an event in the audio, such as a detected sound signature (see Goldstein, ¶ 0047-0048, figure 3, step 262); "a data storage device" (see Goldstein, ¶ 0035 and figure 2, unit 208); and "a record activation system that is activated when the trigger is detected by the analysis system, wherein the record activation system stores a third signal in the data storage device, wherein the third signal is a portion of the second signal" (see Goldstein, ¶ 0046-0048 and 0057-0060 and figures 6-7), and "wherein the third signal includes embedded time information" (see Goldstein, ¶ 0063-0064 and figure 9, steps 520, 536, and 550).
Regarding claim 22, see the preceding rejection with respect to claim 21 above. Goldstein anticipates the "system according to claim 21, wherein the trigger is the user's speech" (see Goldstein, ¶ 0008 and 0047).

Regarding claim 23, see the preceding rejection with respect to claim 21 above. Goldstein anticipates the "system according to claim 21, where the wearable is at least one of an earphone, a phone, or a computing device" by teaching an earpiece (see Goldstein, ¶ 0031 and 0034, and figures 1-2).

Regarding claim 24, Goldstein anticipates: "A Recording System (ARS) comprising: a monitoring assembly, the monitoring assembly including a first sound microphone (FSM) to monitor an ambient acoustic field proximate to a wearable, the FSM producing a first signal responsive to the ambient acoustic field" (see Goldstein, ¶ 0032 and 0042, and figure 2, unit 111), "a second sound microphone (SSM) configured to primarily pickup a user's voice when a user is speaking into the wearable, wherein the SSM produces a second signal" (see Goldstein, ¶ 0043 and figure 2, unit 125), and "a processor" (see Goldstein, ¶ 0034 and figure 2, unit 121); "at least one circular buffer for continually storing the first signal and the second signal" (see Goldstein, ¶ 0035 and 0045-0046, pending claim 5, figure 2, unit 209, figure 3, step 260, and figures 5 and 7, unit 443); "an analysis system that analyzes the first signal and the second signal to detect a trigger, wherein the analysis system is operated by the processor" by detecting an event in the audio, such as a detected sound signature (see Goldstein, ¶ 0047-0048, figure 3, step 262); "a data storage device" (see Goldstein, ¶ 0035 and figure 2, unit 208); and "a record activation system that is activated when the trigger is detected by the analysis system, wherein the record activation system stores a third signal in the data storage device, wherein the third signal is a portion of both the first signal and the second signal" (see Goldstein, ¶ 0046-0048 and 0057-0060 and figures 6-7), and "wherein the third signal includes embedded time information" (see Goldstein, ¶ 0063-0064 and figure 9, steps 520, 536, and 550).

Regarding claim 25, see the preceding rejection with respect to claim 24 above. Goldstein anticipates the "system according to claim 24, wherein the trigger is the user's speech" (see Goldstein, ¶ 0008 and 0047).

Regarding claim 26, see the preceding rejection with respect to claim 25 above. Goldstein anticipates the "system according to claim 25, where the wearable is at least one of an earphone, a phone, or a computing device" by teaching an earpiece (see Goldstein, ¶ 0031 and 0034, and figures 1-2).

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C.
102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claims 1-2, 4-9, 13-16, 19, and 21-26 is/are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Le et al., US 2003/0161097 A1 (previously cited and hereafter Le) in view of Mumford et al., WO 01/88825 A2 (hereafter Mumford).

Regarding claim 1, Le teaches "A Recording System (ARS) comprising: a monitoring assembly, the monitoring assembly including a first sound microphone (FSM) to monitor an ambient acoustic field proximate to a wearable, the FSM producing a first signal responsive to the ambient acoustic field" because Le teaches a computer system (i.e., the monitoring assembly), that includes an environmental microphone (i.e., the FSM), to monitor the environmental audio, or ambient acoustic field, around the user (see Le, ¶ 0021 and figures 1A, 1B, and 2, units 10 and 38), "a second sound microphone (SSM) configured to primarily pickup a user's voice when a user is speaking into the wearable, wherein the SSM produces a second signal" because Le teaches a personal microphone (i.e., the SSM) to receive the voice of the user, such as when the user speaks a voice command to record and store an audio clip (see Le, ¶ 0021 and figures 1C and 2, unit 36), "and a processor" because the computer system includes a processor (see Le, ¶ 0027 and figure 2, unit 16); "at least one circular buffer for continually storing the first signal and the second signal" because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the FSM of Le) (see Le, ¶ 0010 and 0028 and figure 2, units 18, 36, and 38); "an analysis system that analyzes the first signal and the second signal to detect a trigger, wherein the analysis system is operated by the processor" because Le teaches a voice recognition engine (i.e., the analysis system) to detect a trigger, such as a user's voice command (see Le, ¶ 0024 and 0027-0028); "a data storage device" because Le teaches a memory for buffering and storing the audio data (see Le, ¶ 0027-0028 and 0037, and figure 2, unit 18); and "a record activation system that is activated when the trigger is detected by the analysis system" because Le teaches the recording of the audio from the circular buffer to the data storage when the user's voice command (i.e., the trigger) is detected (see Le, ¶ 0010 and 0028), "wherein the record activation system stores a third signal in the data storage device, wherein the third signal is a portion of the first signal, …" because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored third signal is the other person's voice from the FSM, such that the environmental microphone records the other person speaking their name (see Le, ¶ 0010, 0021, 0028, and 0035).

Le also teaches that the third signal (e.g., audio) is stored with the time and date of the conversation (see Le, ¶ 0035 and 0039). However, Le does not appear to teach the features "wherein the third signal includes embedded time information" because Le does not specifically teach or reasonably suggest that the time and date information is embedded, or stored within, the recorded audio.

Mumford teaches a system for communicating audio, video, and medical data between monitoring sites and/or network servers (see Mumford, ¶ 0001). Herein, Mumford teaches capturing the audio, video, and medical data of a patient for monitoring, bilateral communication and/or review and analysis (see Mumford, ¶ 0005 and 0009-0010). Specifically, Mumford teaches that the captured audio, video, and medical data has time coding embedded within its data stream to synchronize the streams to a common time signal (see Mumford, ¶ 0020-0021).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Le with the teachings of Mumford for the purpose of providing a wearable computer with audio and video capabilities with embedded time information, such that time synchronized audio and video allows for better communication and/or review and analysis by synchronization of the various data streams (see Le, ¶ 0027, 0035, and 0039, in view of Mumford, ¶ 0005, 0009-0010, and 0020).

Therefore, the combination of Le and Mumford makes obvious the features: "wherein the record activation system stores a third signal in the data storage device, wherein the third signal is a portion of the first signal, and wherein the third signal includes embedded time information" because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored third signal is the other person's voice from the FSM, such that the environmental microphone records the other person (see Le, ¶ 0010, 0021, 0028, and 0035), and by making it obvious to embed time coding or information in the third signal for later recall and analysis (see Le, ¶ 0035 and 0039, in view of Mumford, ¶ 0010 and 0020-0021).

Regarding claim 2, see the preceding rejection with respect to claim 1 above. The combination makes obvious the "system according to claim 1, wherein the trigger is the user's speech" because Le teaches a voice recognition engine (i.e., the analysis system) to detect a trigger, such as a user's voice command (see Le, ¶ 0024 and 0027-0028).

Regarding claim 4, see the preceding rejection with respect to claim 1 above.
The combination makes obvious the "system according to claim 1, wherein the third signal is a portion of the first signal" because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored third signal includes the other person's voice from the FSM (see Le, ¶ 0010, 0021, 0028, and 0035).

Regarding claim 5, see the preceding rejection with respect to claim 1 above. The combination makes obvious the "system according to claim 1, wherein the third signal is a portion of the second signal" because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored third signal includes the user's voice from the SSM (see Le, ¶ 0010, 0021, 0028, and 0035).

Regarding claim 6, see the preceding rejection with respect to claim 1 above. The combination makes obvious the "system according to claim 1, where the system is further configured to receive a fourth signal" because Le teaches that the system receives multiple types of data (e.g., input data from other sources, such as GPS sensors and IR receivers, data from a data port, etc.), such that the system receives fourth signals, such as other audio from received communications (e.g., messages from a co-worker that is then converted to speech, or audio messages from a co-worker) (see Le, ¶ 0010, 0013, 0021, 0029-0030, 0035, and 0041).

Regarding claim 7, see the preceding rejection with respect to claim 1 above. The combination makes obvious the "system according to claim 1, where the wearable is at least one of an earphone, a phone, or a computing device" because Le teaches a wearable computer system including an earpiece (i.e., the earphone) (see Le, ¶ 0021 and 0027, figures 1B and 2, units 10 and 30), or the wearable computer system includes a computer unit (see Le, ¶ 0027, figures 1B and 2, unit 15).

Regarding claim 8, see the preceding rejection with respect to claim 1 above. The combination makes obvious the "system according to claim 1, the system further comprising: a signal router which selects which portion of the first signal or which portion of the second signal is in the third signal" because Le teaches that the system allows the user, based on a set-up procedure and/or an explicit voice command, to determine which portions of the first and second signal are stored as the third signal (see Le, ¶ 0037-0038).

Regarding claim 9, see the preceding rejection with respect to claim 1 above. The combination makes obvious the "system according to claim 1, wherein the time information is a time-coded index" because Le teaches that the system stores and recalls time and date information, and Mumford makes obvious time stamps, such as time information or time codes (see Mumford, ¶ 0005 and 0020).

Regarding claim 13, see the preceding rejection with respect to claim 1 above. The combination of Le and Mumford makes obvious the "system according to claim 1, the system further comprising: a remote audio forensics analysis system configured to analyze the third signal" because Mumford makes obvious to send the captured data to an intermediate server for review and analysis (see Mumford, ¶ 0002, 0006, and 0010).

Regarding claim 14, see the preceding rejection with respect to claim 13 above. The combination makes obvious the "system according to claim 13, where the remote audio forensics analysis system includes a communication system configured to transmit the third signal to a remote server for analysis of the third signal" (see Le, ¶ 0030 and Mumford, ¶ 0002, 0006, and 0010).

Regarding claim 15, see the preceding rejection with respect to claim 14 above.
The combination makes obvious the "system according to claim 14, where the remote audio forensics analysis system is configured to receive results of the analysis from the remote server" because the combination makes obvious that the networked device processes, or analyzes, the audio and the results are directed towards the monitoring system(s) and/or administrative system for appropriate action (see Le, ¶ 0030 and 0037, in view of Mumford, ¶ 0002, 0006, 0010, and 0028).

Regarding claim 16, see the preceding rejection with respect to claim 6 above. The combination makes obvious the "system according to claim 6, wherein the fourth signal is audio content sent to the speaker of the wearable" because Le teaches information, such as text-to-speech (TTS) and/or a remote co-worker's audio message, that is received by the wearable computer system (see Le, ¶ 0013 and 0041).

Regarding claim 19, see the preceding rejection with respect to claim 1 above. The combination makes obvious the "system of claim 1, wherein the trigger is an improper functioning of the wearable indicative of an electronic failure" because Mumford teaches monitoring medical devices, making it obvious to record the data when one or more monitored devices indicate failures (see Mumford, ¶ 0010 and 0028).

Regarding claim 21, see the preceding rejection with respect to claim 1 above. Le teaches features of a recording system as shown above with respect to claim 1, but does not appear to teach or reasonably suggest the features "wherein the third signal includes embedded time information". Mumford makes obvious these additional features.

For the same reasons as stated above with claim 1, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Le with the teachings of Mumford for the purpose of providing a wearable computer with audio and video capabilities with embedded time information, such that time synchronized audio and video allows for better communication and/or review and analysis by synchronization of the various data streams (see Le, ¶ 0027, 0035, and 0039, in view of Mumford, ¶ 0005, 0009-0010, and 0020).

The combination of Le and Mumford makes obvious "A Recording System (ARS) comprising: a monitoring assembly, the monitoring assembly including a first sound microphone (FSM) to monitor an ambient acoustic field proximate to a wearable, the FSM producing a first signal responsive to the ambient acoustic field" because Le teaches a computer system (i.e., the monitoring assembly), that includes an environmental microphone (i.e., the FSM), to monitor the environmental audio, or ambient acoustic field, around the user (see Le, ¶ 0021 and figures 1A, 1B, and 2, units 10 and 38), "a second sound microphone (SSM) configured to primarily pickup a user's voice when a user is speaking into the wearable, wherein the SSM produces a second signal" because Le teaches a personal microphone (i.e., the SSM) to receive the voice of the user, such as when the user speaks a voice command to record and store an audio clip (see Le, ¶ 0021 and figures 1C and 2, unit 36), "and a processor" because the computer system includes a processor (see Le, ¶ 0027 and figure 2, unit 16); "at least one circular buffer for continually storing the first signal and the second signal" because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the FSM of Le) (see Le, ¶ 0010 and 0028 and figure 2, units 18, 36, and 38); "an analysis system that analyzes the first signal and the second signal to detect a trigger, wherein the
analysis system is operated by the processor" because Le teaches a voice recognition engine (i.e., the analysis system) to detect a trigger, such as a user's voice command (see Le, ¶ 0024 and 0027-0028); "a data storage device" because Le teaches a memory for buffering and storing the audio data (see Le, ¶ 0027-0028 and 0037, and figure 2, unit 18); and "a record activation system that is activated when the trigger is detected by the analysis system" because Le teaches the recording of the audio from the circular buffer to the data storage when the user's voice command (i.e., the trigger) is detected (see Le, ¶ 0010 and 0028), "wherein the record activation system stores a third signal in the data storage device, wherein the third signal is a portion of the second signal, and wherein the third signal includes embedded time information" because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored third signal is the user's voice from the SSM, such that the personal microphone records the user's voice (see Le, ¶ 0010, 0021, 0028, and 0035), and by making it obvious to embed time coding or information in the third signal for later recall and analysis (see Le, ¶ 0035 and 0039, in view of Mumford, ¶ 0010 and 0020-0021).

Regarding claim 22, see the preceding rejection with respect to claim 21 above. The combination makes obvious the "system according to claim 21, wherein the trigger is the user's speech" because Le teaches a voice recognition engine (i.e., the analysis system) to detect a trigger, such as a user's voice command (see Le, ¶ 0024 and 0027-0028).

Regarding claim 23, see the preceding rejection with respect to claim 21 above. The combination makes obvious the "system according to claim 21, where the wearable is at least one of an earphone, a phone, or a computing device" because Le teaches a wearable computer system including an earpiece (i.e., the earphone) (see Le, ¶ 0021 and 0027, figures 1B and 2, units 10 and 30), or the wearable computer system includes a computer unit (see Le, ¶ 0027, figures 1B and 2, unit 15).

Regarding claim 24, see the preceding rejection with respect to claim 1 above. Le teaches features of a recording system as shown above with respect to claim 1, but does not appear to teach or reasonably suggest the features "wherein the third signal includes embedded time information". Mumford makes obvious these additional features.

For the same reasons as stated above with claim 1, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Le with the teachings of Mumford for the purpose of providing a wearable computer with audio and video capabilities with embedded time information, such that time synchronized audio and video allows for better communication and/or review and analysis by synchronization of the various data streams (see Le, ¶ 0027, 0035, and 0039, in view of Mumford, ¶ 0005, 0009-0010, and 0020).
The combination of Le and Mumford makes obvious “A Recording System (ARS) comprising: a monitoring assembly, the monitoring assembly including a first sound microphone (FSM) to monitor an ambient acoustic field proximate to a wearable, the FSM producing a first signal responsive to the ambient acoustic field” because Le teaches a computer system (i.e., the monitoring assembly), that includes an environmental microphone (i.e., the FSM), to monitor the environmental audio, or ambient acoustic field, around the user (see Le, ¶ 0021 and figures 1A, 1B, and 2, units 10 and 38), “a second sound microphone (SSM) configured to primarily pickup a user's voice when a user is speaking into the wearable, wherein the SSM produces a second signal” because Le teaches a personal microphone (i.e., the SSM) to receive the voice of the user, such as when the user speaks a voice command to record and store an audio clip (see Le, ¶ 0021 and figures 1C and 2, unit 36), “and a processor” because the computer system includes a processor (see Le, ¶ 0027 and figure 2, unit 16); “at least one circular buffer for continually storing the first signal and the second signal” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the FSM of Le) (see Le, ¶ 0010 and 0028 and figure 2, units 18, 36, and 38); “an analysis system that analyzes the first signal and the second signal to detect a trigger, wherein the analysis system is operated by the processor” because Le teaches a voice recognition engine (i.e., the analysis system) to detect a trigger, such as a user’s voice command (see Le, ¶ 0024 and 0027-0028); “a data storage device” because Le teaches a memory for buffering and storing the audio data (see Le, ¶ 0027-0028 and 0037, and figure 2, unit 18); and “a record activation system that is activated when the trigger is detected by the analysis system” because Le teaches the recording of the audio from the circular buffer to the data storage when the user’s voice command (i.e., the trigger) is detected (see Le, ¶ 0010 and 0028), “wherein the record activation system stores a third signal in the data storage device, wherein the third signal is a portion of the second signal, and wherein the third signal includes embedded time information” because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored third signal is a combination of signals from both microphones, such as providing a signal that has the environmental noise filtered therefrom (see Le, ¶ 0010, 0021, 0028, and 0035), and by making it obvious to embed time coding or information in the third signal for later recall and analysis (see Le, ¶ 0035 and 0039, in view of Mumford, ¶ 0010 and 0020-0021).

Regarding claim 25, see the preceding rejection with respect to claim 24 above. The combination makes obvious the “system according to claim 24, wherein the trigger is the user's speech” because Le teaches a voice recognition engine (i.e., the analysis system) to detect a trigger, such as a user’s voice command (see Le, ¶ 0024 and 0027-0028).

Regarding claim 26, see the preceding rejection with respect to claim 25 above. The combination makes obvious the “system according to claim 25, where the wearable is at least one of an earphone, a phone, or a computing device” because Le teaches a wearable computer system including an earpiece (i.e., the earphone) (see Le, ¶ 0021 and 0027, figures 1B and 2, units 10 and 30), or the wearable computer system includes a computer unit (see Le, ¶ 0027, figures 1B and 2, unit 15).

Claims 3, 10, 12, and 20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over the combination of Le and Mumford as applied to claim 1 above, and further in view of Shalon et al., US 2006/0064037 A1 (previously cited and hereafter Shalon).

Regarding claim 3, see the preceding rejection with respect to claim 1 above.
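The pre-trigger capture scheme mapped above for claim 24 (a circular buffer that continually stores the microphone signals, with a record activation system that, on a detected trigger, flushes the buffered audio plus a short post-trigger window to storage, each frame tagged with time information) can be sketched as follows. This is an illustrative sketch only; the class name, frame counts, and timestamp tagging are assumptions for clarity, not anything taught verbatim by Le or Mumford.

```python
from collections import deque

class PreTriggerRecorder:
    """Sketch of trigger-activated recording from a circular buffer:
    audio frames are continually written to a fixed-size ring, and a
    detected trigger copies the buffered (pre-trigger) frames plus a
    few post-trigger frames to persistent storage, with timestamps
    standing in for the claimed embedded time information."""

    def __init__(self, buffer_frames=8, post_frames=2):
        self.ring = deque(maxlen=buffer_frames)  # the circular buffer
        self.post_frames = post_frames
        self.storage = []          # stands in for the data storage device
        self._post_remaining = 0

    def push(self, frame, timestamp, trigger=False):
        self.ring.append((timestamp, frame))  # continual storing
        if trigger:
            # record activation: flush buffered pre-trigger audio
            self.storage.extend(self.ring)
            self.ring.clear()
            self._post_remaining = self.post_frames
        elif self._post_remaining > 0:
            # keep recording a short window after the trigger
            self.storage.append((timestamp, frame))
            self._post_remaining -= 1
```

With a three-frame ring and one post-trigger frame, a trigger at time 3 persists frames 1-4: the oldest frame is overwritten by the ring before the trigger fires, which is the intended trade-off of a fixed-size circular buffer.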
The combination of Le and Mumford makes obvious the system of claim 1 where the trigger is the user's speech (see Le, ¶ 0035). However, the combination does not appear to teach or reasonably suggest that “the trigger is the other than the user's speech”.

Shalon teaches systems and methods for monitoring and modifying behavior (see Shalon, abstract, ¶ 0009, 0022, 0030, and 0093). More importantly, the system taught by Shalon is similar to the teachings of Le, wherein the system uses voice recognition capabilities for user interaction, performs functions as an interactive calendar including scheduling and recording verbal comments, allows a user to receive and send emails or voice messages, provides entertainment via CD or MP3 players, performs wireless communications, and provides a voice recording function for recording speech and/or conversations (see Shalon, ¶ 0157, 0314-0320, 0330, 0331, and 0343). Additionally, Shalon teaches audio interfaces for external systems, such as using voice commands or clicking sounds with an internet-enabled cell phone where the cell phone responds with requested information (see Shalon, ¶ 0357).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le and Mumford with the teachings of Shalon to provide a different user interface for differently abled people and/or for different use cases, where the user issues a command through non-verbal sounds, such as a clicking sound (see Le, ¶ 0037-0038, in view of Shalon, ¶ 0331, 0343, and 0357).

Therefore, the combination of Le, Mumford, and Shalon makes obvious the “system according to claim 1, wherein the trigger is the other than the user's speech” because the combination makes it obvious to trigger the recording of audio when detecting a non-verbal sound, such as a clicking sound from the user (see Le, ¶ 0037-0038, in view of Shalon, ¶ 0331, 0343, and 0357).
Regarding claim 10, see the preceding rejection with respect to claim 1 above. The combination of Le and Mumford makes obvious the system of claim 1, but does not appear to teach or reasonably suggest that “the trigger is an accident”.

Shalon teaches similar features compared to Le, wherein the system uses voice recognition capabilities for user interaction, performs functions as an interactive calendar including scheduling and recording verbal comments, allows a user to receive and send emails or voice messages, provides entertainment via CD or MP3 players, performs wireless communications, and provides a voice recording function for recording speech and/or conversations (see Shalon, ¶ 0157, 0314-0320, 0330, 0331, and 0343). Shalon also teaches other health-related aspects, where the system is used to monitor an elderly or disabled person living alone. It detects an event, like a fall, and subsequently helps the user call a preprogrammed number, send a message to that number, or otherwise allow communication with the user, such as sending the user's voice and ambient audio to the remote party (see Shalon, ¶ 0332).

One of ordinary skill in the art at the time of the invention would find it obvious that the system would record audio from at least the ambient microphone in the event of a fall, because the system would initiate a phone call to 911 or other monitoring service and it would be obvious that those services record phone calls for liability, legal, training, and/or other well-known reasons (see Shalon, ¶ 0317, 0331-0332, and 0343).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le and Mumford with the teachings of Shalon for the purpose of providing a personal safety device that automatically calls emergency services in the event of a detected emergency, or accident (see Mumford, ¶ 0001 and 0005, in view of Shalon, ¶ 0117, 0331-0332, and 0343).
Therefore, the combination of Le, Mumford, and Shalon makes obvious the “system according to claim 1, wherein the trigger is an accident” because the combination makes it obvious to monitor the ambient sound for sounds associated with a detected emergency, or accident (see Mumford, ¶ 0001 and 0005, in view of Shalon, ¶ 0117, 0331-0332, and 0343).

Regarding claim 12, see the preceding rejection with respect to claim 1 above. The combination of Le and Mumford makes obvious the system of claim 1, but does not appear to teach or reasonably suggest that “the third signal is converted to text”.

Shalon teaches similar features compared to Le, wherein the system uses voice recognition capabilities for user interaction, performs functions as an interactive calendar including scheduling and recording verbal comments, allows a user to receive and send emails or voice messages, provides entertainment via CD or MP3 players, performs wireless communications, and provides a voice recording function for recording speech and/or conversations (see Shalon, ¶ 0157, 0314-0320, 0330, 0331, and 0343). Shalon teaches speech-to-text processing via automatic transcription services (see Shalon, ¶ 0343).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le and Mumford with the teachings of Shalon for the purpose of providing a transcript of a conversation, reminder, or the like for a user to retrieve and review later (see Le, ¶ 0037, and Mumford, ¶ 0010, in view of Shalon, ¶ 0343).

Therefore, the combination of Le, Mumford, and Shalon makes obvious the “system according to claim 1, wherein the third signal is converted to text” because Le teaches the features for saving a conversation between the user and another person, and Shalon makes it obvious to automatically transcribe a conversation for later use (see Le, ¶ 0037, and Mumford, ¶ 0010, in view of Shalon, ¶ 0343).
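The accident and fall triggers discussed for claims 10 and 20 can be pictured, in the simplest case, as a threshold test on accelerometer magnitude. As a hedged illustration only (the threshold value and function name are invented here, and none of the cited references is reproduced):

```python
import math

GRAVITY = 9.81  # m/s^2

def is_fall_trigger(ax, ay, az, threshold=2.5 * GRAVITY):
    """Hypothetical accident/fall trigger: fire when the magnitude of
    the measured acceleration vector spikes well above normal motion.
    Production fall detectors look for a free-fall-then-impact
    pattern; this single threshold is only an illustrative sketch."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return magnitude >= threshold
```

A wearable at rest reads roughly 1 g and stays below the threshold; an impact spike several times gravity crosses it and would activate the recording system.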
Regarding claim 20, see the preceding rejection with respect to claims 1 and 10 above. The combination of Le and Mumford makes obvious the features of the system of claim 1, but does not teach or reasonably suggest the features “wherein the trigger is a sudden acceleration indicative of an accident or a fall”.

For the same reasons as stated above with respect to claim 10, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le and Mumford with the teachings of Shalon for the purpose of providing a personal safety device that automatically calls emergency services in the event of a detected emergency, or accident (see Mumford, ¶ 0001, 0005, and 0018, in view of Shalon, ¶ 0117, 0331-0332, and 0343).

Therefore, the combination of Le, Mumford, and Shalon makes obvious the “system of claim 1, wherein the trigger is a sudden acceleration indicative of an accident or a fall” because the combination makes it obvious to monitor the various medical devices for information associated with a detected emergency, or accident, such as detecting a fall (see Mumford, ¶ 0010 and 0018, in view of Shalon, ¶ 0332).

Claim 11 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over the combination of Le and Mumford as applied to claim 1 above, and further in view of Couper et al., US 2008/0162133 A1 (previously cited and hereafter Couper).

Regarding claim 11, see the preceding rejection with respect to claim 1 above. The combination of Le and Mumford makes obvious the system of claim 1, but does not appear to teach or reasonably suggest that “the trigger is a gunshot”. Couper teaches a method of identifying incidents using mobile devices, such as identifying detected sounds using sound signature matching features (see Couper, abstract).
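Sound-signature matching of the kind Couper describes can be pictured as comparing an incoming audio frame against a stored template, e.g., a gunshot signature. The normalized-correlation approach below is a generic stand-in rather than Couper's actual method, and the template, score function, and threshold are invented for illustration:

```python
import math

def signature_score(frame, template):
    """Toy normalized correlation between an audio frame and a stored
    sound signature (e.g., a gunshot template): 1.0 for a scaled copy
    of the template, near 0.0 for unrelated audio."""
    if len(frame) != len(template):
        raise ValueError("frame and template must be the same length")
    dot = sum(a * b for a, b in zip(frame, template))
    norm = (math.sqrt(sum(a * a for a in frame))
            * math.sqrt(sum(b * b for b in template)))
    return dot / norm if norm else 0.0

def matches_signature(frame, template, threshold=0.9):
    """Trigger test: does this frame match the stored signature?"""
    return signature_score(frame, template) >= threshold
```

On a match, the system described in the rejection would activate recording and alert the appropriate authority.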
Couper teaches that sound detection features are useful for crime prevention and provides a system that leverages mobile devices for sound identification and public safety features (see Couper, ¶ 0001-0002, 0020-0021, and 0030).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le and Mumford with the teachings of Couper for the purpose of leveraging the wearable computer system of Le for sound identification and providing communication to appropriate authorities, such as a 911 call center (see Le, ¶ 0027-0030, 0037, and 0042, and Mumford, ¶ 0001, 0005, and 0009, in view of Couper, ¶ 0020-0021, 0030, and 0034).

Therefore, the combination of Le, Mumford, and Couper makes obvious the “system according to claim 1, wherein the trigger is a gunshot” because Le teaches that the device listens for user commands to record audio and upload the recorded audio data to another computer for later retrieval, and Couper makes it obvious that the wearable computer system is used to monitor ambient sounds for various types of sounds, such as a gunshot sound, that indicate an emergency, such that the appropriate authority, such as a 911 call center, is contacted and/or alerted to the identified incident (see Le, ¶ 0027-0030, 0037, and 0042, and Mumford, ¶ 0001, 0005, and 0009, in view of Couper, ¶ 0002, 0020-0021, 0026, 0030, and 0034).

Claims 17 and 18 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over the combination of Le and Mumford as applied to claim 16 above, and further in view of Bizjak, US 2002/0172374 A1 (previously cited).

Regarding claim 17, see the preceding rejection with respect to claim 16 above. The combination of Le and Mumford makes obvious the system of claim 16, but does not appear to teach or reasonably suggest that “the volume of the fourth signal is reduced when the trigger is detected”.
Bizjak teaches audio signal processing systems including companders, noise compensators, and methods of controlling systems that include companders, volume controls, and noise compensators (see Bizjak, abstract and ¶ 0002). Bizjak teaches that prior art audio signal processing systems do not provide features for a listener to comfortably hear all portions of an audio signal due to environmental audio noise, such that the quiet portions of the signal are masked by the environmental noise and that if or when the user manually increases the volume to hear the masked portions, the volume will be too loud once the noise has subsided or the audio signal becomes greater in amplitude (see Bizjak, ¶ 0008). Bizjak teaches improvements including noise compensation features to give priority to an audio signal over the environmental noise, and improvements including signal muting, where the audio signal is muted because it is secondary to the environment, such as muting the audio signal during a conversation (see Bizjak, ¶ 0049, 0222, 0240, 0249, 0402, and 0531).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le and Mumford with the teachings of Bizjak for the purpose of allowing a user to better hear a detected conversation by muting a competing audio signal (see Le, ¶ 0035 and 0041-0042, and Mumford, ¶ 0009, in view of Bizjak, ¶ 0049, 0240, 0249, and 0402).

Therefore, the combination of Le, Mumford, and Bizjak makes obvious the “system of claim 16, wherein the volume of the fourth signal is reduced when the trigger is detected” because the combination makes it obvious that the trigger is associated with a conversation and makes it obvious to reduce, or mute, the fourth signal in order for the listener to hear the other person during a conversation (see Le, ¶ 0035 and 0041-0042, and Mumford, ¶ 0009, in view of Bizjak, ¶ 0049, 0240, 0249, and 0402).
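The claim-17 feature (reducing the fourth signal's volume when the trigger is detected) is essentially audio ducking. A minimal sketch, assuming a simple fixed attenuation factor; the factor and the attenuate-rather-than-hard-mute choice are illustrative assumptions, not taken from Bizjak:

```python
def duck(samples, trigger_active, duck_factor=0.1):
    """Sketch of audio ducking: while the trigger (e.g., a detected
    conversation) is active, attenuate the playback signal so the
    wearer can hear the other speaker; otherwise pass it through."""
    gain = duck_factor if trigger_active else 1.0
    return [s * gain for s in samples]
```

A real implementation would ramp the gain over a few milliseconds to avoid audible clicks, but the control logic is the same.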
Regarding claim 18, see the preceding rejection with respect to claim 17 above. The combination makes obvious the “system of claim 17, wherein the first signal is sent to the speaker” because it is obvious to allow the user to clearly hear the ambient, or environmental, sound according to preferences (see Le, ¶ 0021, 0035, and 0038, in view of Bizjak, ¶ 0240, 0402, and 0531).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dr. Les E. Atlas, "Declaration of Dr. Les E. Atlas, PH.D. in support of petition for Inter Partes Review of U.S. Patent No. 9,124,982", Exhibit 1002 of IPR2022-00234 (previously cited and hereinafter Atlas), where Atlas discusses the knowledge of one of ordinary skill at the time of the instant invention with respect to one or more of the cited art above and cited art referenced in multiple IDS filed with the instant application (see Atlas, pp. 1-202).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniel R Sellers whose telephone number is (571)272-7528. The examiner can normally be reached Mon - Fri 10:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fan S Tsang, can be reached at (571)272-7547. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Daniel R Sellers/
Primary Examiner, Art Unit 2694

Prosecution Timeline

Nov 07, 2023
Application Filed
Nov 08, 2023
Response after Non-Final Action
Feb 06, 2025
Non-Final Rejection — §102, §103
May 08, 2025
Response Filed
Sep 30, 2025
Request for Continued Examination
Oct 05, 2025
Response after Non-Final Action
Oct 24, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604151
COMPUTER SYSTEM FOR PROCESSING AUDIO CONTENT AND METHOD THEREOF
2y 5m to grant · Granted Apr 14, 2026
Patent 12562144
ACOUSTIC ECHO CANCELLATION UNIT
2y 5m to grant · Granted Feb 24, 2026
Patent 12556879
SHARED POINT OF VIEW
2y 5m to grant · Granted Feb 17, 2026
Patent 12556190
Startup Calibration and Digital Temperature Compensation for an Open-Loop VCO Based ADC Architecture
2y 5m to grant · Granted Feb 17, 2026
Patent 12532139
AUDIO SIGNAL PROCESSING METHOD AND AUDIO SIGNAL PROCESSING APPARATUS
2y 5m to grant · Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
67%
Grant Probability
84%
With Interview (+16.9%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
