DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Response to Arguments
Applicant's arguments filed 10/2/2025 have been fully considered and an examination on the merits is provided herein.
Claim Objections
Claim 15 is objected to because of the following informalities: the last two lines of the claim recite:
“analyzing the third circular buffer for a third event and if a second event is detected then storing the second circular buffer into the data storage system” (italics added).
Examiner believes this should read, “if a third event is detected then storing the third circular buffer into the data storage system”. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6 and 25-28 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 6 recites the limitation "the audio content" in line 1. There is insufficient antecedent basis for this limitation in the claim.
Claim 25 recites the limitation "the audio content" in the last line. There is insufficient antecedent basis for this limitation in the claim.
Claims 26-28 depend from claim 25 and therefore inherit, without correcting, the deficiency of claim 25.
Claim Rejections - 35 USC § 103
The following is a quotation of pre-AIA 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
Claims 1-9, 11, 13, and 21-32 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Le et al., US 2003/0161097 A1 (previously cited and hereafter Le), in view of Stirnemann, US 2007/0036377 A1 (previously cited), in view of Mayer, US 2004/0042103 A1 (previously cited), and in view of Mumford et al., WO 01/88825 A2 (hereafter Mumford).
Regarding claim 1, Le teaches a wearable computer system that acts as an analysis system comprising a monitoring assembly with an ambient microphone to monitor the ambient environment and a personal microphone to monitor the user’s voice (see Le, ¶ 0008-0012, 0021-0024, and fig 2, units 36 and 38). Le further teaches a data storage device configured to continually and constantly buffer the ambient microphone signal using a scrolling buffer (see Le, ¶ 0010 and 0028), and the data storage device configured to record a user’s conversation based on a recognized voice command or user instruction (see Le, ¶ 0010, 0021, 0023, 0028, 0035, and 0037).
Le does not appear to teach or reasonably suggest a second ambient microphone, or the feature that “the first ambient sound microphone, the second ambient sound microphone, and the ear canal microphone are part of an earphone”.
Stirnemann discloses a hearing instrument and a method of adapting the gain according to a determined ear canal characteristic (see Stirnemann, abstract). Stirnemann teaches two outer microphones (e.g., first and second ambient microphones) to perform beamforming (see Stirnemann, ¶ 0082). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Le with the teachings of Stirnemann for the purpose of adapting the audio output of the earpiece to an individual’s ear canal properties (see Stirnemann, ¶ 0016 and 0018).
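As an illustrative sketch only of the two-outer-microphone beamforming cited from Stirnemann, a minimal delay-and-sum beamformer may be written as follows; the function name, the sample-based `delay` parameter, and the equal-weight averaging are assumptions for illustration and are not taken from the reference:

```python
def delay_and_sum(mic1, mic2, delay):
    """Minimal two-microphone delay-and-sum beamformer sketch.

    Delays the second channel by `delay` samples (zero-padded at the
    front), then averages the two channels, reinforcing sound arriving
    from the steered direction while attenuating off-axis sound.
    """
    # Shift mic2 later in time by `delay` samples, keeping the length equal.
    shifted = [0.0] * delay + list(mic2[: len(mic2) - delay])
    # Equal-weight average of the aligned channels.
    return [(a + b) / 2.0 for a, b in zip(mic1, shifted)]


# Usage: identical signals with a one-sample steering delay.
out = delay_and_sum([1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0], delay=1)
print(out)  # [0.5, 1.0, 1.0, 1.0]
```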
However, the combination of Le and Stirnemann does not appear to specifically teach the feature of multiple circular buffers, such as “a second circular buffer” and “a third circular buffer”.
Mayer teaches a system and method for improved retroactive recording and/or replay (see Mayer, abstract and ¶ 0002). Herein, Mayer provides an example for retroactive recording of songs from the radio, wherein a circular buffer is used to provide a retroactive recording capability so that the beginning of the song is not lost when a user decides to record a song after the song had started (see Mayer, ¶ 0005). Mayer also provides another example using multiple circular buffers in an electronic device (i.e., a wrist watch, cellular phone, or other common electronic device that a user carries) to record a conversation, where at least two circular buffers are used for constant recording of phone conversations and constant recording of sound in the environment using one or more non-directional microphones (see Mayer, ¶ 0016). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le and Stirnemann with the teachings of Mayer for the purpose of retroactively recording multiple audio inputs using multiple circular buffers in parallel so that portions of a conversation are not lost (see Le, ¶ 0010 and 0028, and Stirnemann, ¶ 0082, in view of Mayer, ¶ 0016).
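As an illustrative sketch only of the retroactive-recording behavior cited from Mayer, a circular buffer that constantly records and overwrites its oldest samples, so that a trigger can still recover audio from before the trigger, may be written as follows; the class and method names are assumptions for illustration and are not taken from the reference:

```python
from collections import deque


class CircularAudioBuffer:
    """Fixed-capacity buffer that constantly records samples,
    silently overwriting the oldest ones, so that an event trigger
    can retroactively capture audio recorded before the trigger."""

    def __init__(self, capacity):
        # deque with maxlen drops the oldest sample automatically on overflow
        self._buf = deque(maxlen=capacity)

    def write(self, sample):
        self._buf.append(sample)  # constant recording

    def snapshot(self):
        # On an event, copy out the retained history (oldest to newest)
        # for transfer to more permanent storage.
        return list(self._buf)


# Usage: the buffer retains only the last 5 samples, so a trigger at
# sample 9 still recovers samples 5-9, including pre-trigger audio.
buf = CircularAudioBuffer(capacity=5)
for s in range(10):
    buf.write(s)
print(buf.snapshot())  # [5, 6, 7, 8, 9]
```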
However, the combination of Le, Stirnemann, and Mayer does not appear to teach the features “wherein when an event is detected the event detector retrieves data from the circular buffer, embeds a time coded index into the data generating a new data, and stores the new data to the data storage system” because the combination does not specifically teach or reasonably suggest that the time and date information is embedded in, or stored within, the recorded audio.
Mumford teaches a system for communicating audio, video, and medical data between monitoring sites and/or network servers (see Mumford, ¶ 0001). Herein, Mumford teaches capturing the audio, video, and medical data of a patient for monitoring, bilateral communication and/or review and analysis (see Mumford, ¶ 0005 and 0009-0010). Specifically, Mumford teaches that the captured audio, video, and medical data has time coding embedded within its data stream to synchronize the streams to a common time signal (see Mumford, ¶ 0020-0021). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le, Stirnemann, and Mayer with the teachings of Mumford for the purpose of providing a wearable computer with audio and video capabilities with embedded time information, such that time synchronized audio and video allows for better communication and/or review and analysis by synchronization of the various data streams (see Le, ¶ 0027, 0035, and 0039, in view of Mumford, ¶ 0005, 0009-0010, and 0020).
Therefore, the combination of Le, Stirnemann, Mayer, and Mumford makes obvious the features:
“A system comprising:
a first ambient sound microphone, the first ambient sound microphone producing a first ASM signal” because the combination makes obvious a first ambient microphone that generates the first sound signal that is received by the system (see Le, ¶ 0021 and figure 2, unit 38 in view of Stirnemann, figure 7, unit 1.1 and ¶ 0082);
“a second ambient sound microphone, the second ambient sound microphone producing a second ASM signal” because the combination makes obvious a second ambient microphone that generates the second sound signal that is received by the system (see Le, ¶ 0021 in view of Stirnemann, figure 7, unit 1.2 and ¶ 0082);
“an ear canal microphone, the ear canal microphone producing an ear canal signal, wherein the first ambient sound microphone, the second ambient sound microphone, and the ear canal microphone are part of an earphone” because the combination makes obvious a personal, or in-ear, microphone that generates the third sound signal that is received by the system, and the three microphones are part of an earpiece (see Le, ¶ 0021 and figure 2, units 36 and 38 in view of Stirnemann, figure 7, units 1.1, 1.2, and 6 and ¶ 0069-0070 and 0082);
“a first circular buffer, wherein the first ambient sound signal is constantly recorded in the circular buffer during a mode operation of the earphone” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) in a scrolling buffer (see Le, ¶ 0010 and 0028), and the combination of Le, Stirnemann, and Mayer makes obvious that the first ambient sound signal is constantly buffered in a circular buffer during operation of the earpiece (see Le, ¶ 0021 and figure 1C, in view of Stirnemann, ¶ 0069-0070, and further in view of Mayer, ¶ 0016);
“a second circular buffer, wherein the second ambient sound signal is constantly recorded in the second circular buffer during the mode operation of the earphone” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) (see Le, ¶ 0010 and 0028), and because Stirnemann makes obvious the second ambient sound signal, it is obvious to use a second circular buffer to constantly record the second ambient sound signal (see Mayer, ¶ 0016);
“a third circular buffer, wherein the ear canal signal is constantly recorded in the third circular buffer during the mode operation of the earphone, wherein the mode operation of the earphone is when a user activates a mode of operation where the first ambient sound signal, the second ambient sound signal and the ear canal sound signal are all constantly recorded” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) (see Le, ¶ 0010 and 0028) and teaches that the data storage device records a user’s conversation based on a recognized voice command or user instruction (see Le, ¶ 0010, 0021, 0023, 0028, 0035, and 0037). Le further suggests recording an audio clip, such as the user's conversation, using both microphones; it would have been obvious to one of ordinary skill in the art at the time of the invention that recording a conversation including all participants would record the user’s voice using the personal microphone and record the other participants' voices using the ambient microphone (see Le, ¶ 0023 and 0024 in view of ¶ 0035 and 0037, and in view of Mayer, ¶ 0016). The combination therefore makes obvious constantly recording the ear canal signal in a third circular buffer to monitor the user’s speech with the personal, or in-ear, microphone for recording a conversation including all participants;
“a data storage system” because Le teaches the data storage for recording the buffered audio for more permanent storage (see Le, ¶ 0024 and 0028); and
“an event detector, wherein when an event is detected the event detector retrieves data from the circular buffer, embeds a time coded index into the data generating a new data, and stores the new data to the data storage system, wherein the data includes the first ambient sound signal” because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored signal is the other person’s voice from the ASM, such that the environmental microphone records the other person (see Le, ¶ 0010, 0021, 0028, and 0035); the combination makes obvious that the recorded conversation includes one or more of the first and second ambient sound signals or the ear canal sound signal (see Le, ¶ 0024 in view of Stirnemann, ¶ 0084 and Mayer, ¶ 0016); and the combination further makes it obvious to embed time coding or information in the first ambient sound signal for later recall and analysis (see Le, ¶ 0035 and 0039, in view of Mumford, ¶ 0010 and 0020-0021).
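As an illustrative sketch only of the claimed event-detector behavior as rendered obvious above (retrieving buffered data, embedding a time-coded index, and storing the result), the following may be written; the function name, the per-sample time-code scheme, and the JSON container are assumptions for illustration and are not taken from any cited reference:

```python
import json
import time


def on_event(circular_buffer, data_storage):
    """Hypothetical event handler: retrieve the buffered audio data,
    embed a time-coded index alongside it (the "new data"), and
    commit the result to more permanent storage."""
    data = list(circular_buffer)  # retrieve data from the circular buffer
    t0 = time.time()
    new_data = {
        # one time-code entry per retrieved sample (illustrative scheme)
        "index": [t0 + i for i in range(len(data))],
        "samples": data,
    }
    # store the new data to the data storage system
    data_storage.append(json.dumps(new_data))
    return new_data


# Usage: a plain list stands in for the data storage system.
storage = []
result = on_event([10, 20, 30], storage)
```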
Regarding claim 2, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system of claim 1, wherein the data includes the first ambient sound signal and the ear canal signal” because it is obvious to record both signals for recording a conversation (see Le, ¶ 0024 and 0028).
Regarding claim 3, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system of claim 1, wherein the data includes the first ambient sound signal and the second ambient sound signal” because the combination makes it obvious to use beamforming for both ambient microphones and it is obvious to record the beamformed audio signal (see Le, ¶ 0021 in view of Stirnemann, ¶ 0082).
Regarding claim 4, the present claim has been amended to depend from independent claim 21, and is addressed following the rejection of claim 21 below.
Regarding claim 5, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system of claim 1, wherein the event is at least one of breaking glass, an accident, a gun shot, a keyword, hand-clap, a whistle, or a user's voice or a combination thereof” because Le teaches an event such as the user’s voice command to record the audio from the circular buffers to the data storage (see Le, ¶ 0024 and 0028).
Regarding claim 6, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system of claim 1, wherein the audio content is at least one of a signal from a personal media player, or a signal from phone, or a warning signal or a combination thereof” because Le teaches that the system can receive phone calls and forward the audio to the system for the user to hear (see Le, ¶ 0041).
Regarding claim 7, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system according to claim 1 further comprising:
an audio forensics analysis system configured to analyze either a content of one or more of the first circular buffer, the second circular buffer or the third circular buffer, or wherein the analysis system is configured to analyze the new data stored on the data storage device” because Le makes obvious that the circular buffer is analyzed to detect voice commands issued by the user (see Le, ¶ 0024 and 0027-0028).
Regarding claim 8, see the preceding rejection with respect to claim 7 above. The combination makes obvious the “system according to claim 7, where the audio forensics analysis system includes a communication system configured to transmit the content of at least one or more of the first circular buffer, the second circular buffer or the third circular buffer or the new data of the data storage device to a remote server for analysis” because Le teaches that the recorded audio is uploaded to another computer (see Le, ¶ 0039).
Regarding claim 9, see the preceding rejection with respect to claim 8 above. The combination makes obvious the “system according to claim 8, wherein the analysis is to determine if speech is present in the content” because Le teaches an event such as the user’s voice command to record the audio from the circular buffers to the data storage (see Le, ¶ 0024 and 0028).
Regarding claim 11, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system according to claim 1, wherein the system is at least one of a phone, a computing device or a personal media player or a combination thereof” because Le teaches a computing device (see Le, ¶ 0021 and figure 1B, unit 10).
Regarding claim 13, see the preceding rejection with respect to claim 1 above. The combination makes obvious the “system according to claim 1, wherein the first circular buffer and the second circular buffer and the third circular buffer can each share a portion of memory in the system” because Mayer teaches multiple circular buffers for temporarily recording multiple audio sources, such as a telephone source and one or more microphones for recording sounds, and Mayer further teaches that the circular buffers are provided with flash memory or MRAM (see Mayer, ¶ 0016).
Regarding claim 21, see the preceding rejection with respect to claim 1 above. For the same reasons as stated above with respect to claim 1, the combination of Le, Stirnemann, Mayer, and Mumford makes obvious:
“A system comprising:
a first ambient sound microphone, the first ambient sound microphone producing a first ASM signal” because the combination makes obvious a first ambient microphone that generates the first sound signal that is received by the system (see Le, ¶ 0021 and figure 2, unit 38 in view of Stirnemann, figure 7, unit 1.1 and ¶ 0082);
“a second ambient sound microphone, the second ambient sound microphone producing a second ASM signal” because the combination makes obvious a second ambient microphone that generates the second sound signal that is received by the system (see Le, ¶ 0021 in view of Stirnemann, figure 7, unit 1.2 and ¶ 0082);
“an ear canal microphone, the ear canal microphone producing an ear canal signal, wherein the first ambient sound microphone, the second ambient sound microphone, and the ear canal microphone are part of an earphone” because the combination makes obvious a personal, or in-ear, microphone that generates the third sound signal that is received by the system, and the three microphones are part of an earpiece (see Le, ¶ 0021 and figure 2, units 36 and 38 in view of Stirnemann, figure 7, units 1.1, 1.2, and 6 and ¶ 0069-0070 and 0082);
“a first circular buffer, wherein the first ambient sound signal is constantly recorded in the circular buffer during a mode operation of the earphone” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) in a scrolling buffer (see Le, ¶ 0010 and 0028), and the combination of Le, Stirnemann, and Mayer makes obvious that the first ambient sound signal is constantly buffered in a circular buffer during operation of the earpiece (see Le, ¶ 0021 and figure 1C, in view of Stirnemann, ¶ 0069-0070, and further in view of Mayer, ¶ 0016);
“a second circular buffer, wherein the second ambient sound signal is constantly recorded in the second circular buffer during the mode operation of the earphone” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) (see Le, ¶ 0010 and 0028), and because Stirnemann makes obvious the second ambient sound signal, it is obvious to use a second circular buffer to constantly record the second ambient sound signal (see Mayer, ¶ 0016);
“a third circular buffer, wherein the ear canal signal is constantly recorded in the third circular buffer during the mode operation of the earphone, wherein the mode operation of the earphone is when a user activates a mode of operation where the first ambient sound signal, the second ambient sound signal and the ear canal sound signal are all constantly recorded” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) (see Le, ¶ 0010 and 0028) and teaches that the data storage device records a user’s conversation based on a recognized voice command or user instruction (see Le, ¶ 0010, 0021, 0023, 0028, 0035, and 0037). Le further suggests recording an audio clip, such as the user's conversation, using both microphones; it would have been obvious to one of ordinary skill in the art at the time of the invention that recording a conversation including all participants would record the user’s voice using the personal microphone and record the other participants' voices using the ambient microphone (see Le, ¶ 0023 and 0024 in view of ¶ 0035 and 0037, and in view of Mayer, ¶ 0016). The combination therefore makes obvious constantly recording the ear canal signal in a third circular buffer to monitor the user’s speech with the personal, or in-ear, microphone for recording a conversation including all participants;
“a data storage system” because Le teaches the data storage for recording the buffered audio for more permanent storage (see Le, ¶ 0024 and 0028); and
“an event detector, wherein when an event is detected the event detector retrieves data from the circular buffer, embeds a time coded index into the data generating a new data, and stores the new data to the data storage system, wherein the data includes the second ambient sound signal” because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored signal is the other person’s voice from the ASM, such that the environmental microphone records the other person (see Le, ¶ 0010, 0021, 0028, and 0035); the combination makes obvious that the recorded conversation includes one or more of the first and second ambient sound signals or the ear canal sound signal (see Le, ¶ 0024 in view of Stirnemann, ¶ 0084 and Mayer, ¶ 0016); and the combination further makes it obvious to embed time coding or information in the second ambient sound signal for later recall and analysis (see Le, ¶ 0035 and 0039, in view of Mumford, ¶ 0010 and 0020-0021).
Regarding claim 4, see the preceding rejection with respect to claim 21 above. The combination makes obvious the “system of claim 21, wherein the data includes the second ambient sound signal and the ear canal signal” because it is obvious to record both signals for recording a conversation (see Le, ¶ 0024 and 0028) where Stirnemann makes obvious the second ambient sound signal from the second external microphone (see Stirnemann, figure 7).
Regarding claim 22, see the preceding rejection with respect to claim 21 above. The combination makes obvious the “system according to claim 21 further comprising:
an audio forensics analysis system configured to analyze either a content of one or more of the first circular buffer, the second circular buffer or the third circular buffer, or wherein the analysis system is configured to analyze the new data stored on the data storage device” because Le makes obvious that the circular buffer is analyzed to detect voice commands issued by the user (see Le, ¶ 0024 and 0027-0028).
Regarding claim 23, see the preceding rejection with respect to claim 22 above. The combination makes obvious the “system according to claim 22, where the audio forensics analysis system includes a communication system configured to transmit the content of at least one or more of the first circular buffer, the second circular buffer or the third circular buffer or the new data of the data storage device to a remote server for analysis” because Le teaches that the recorded audio is uploaded to another computer (see Le, ¶ 0039).
Regarding claim 24, see the preceding rejection with respect to claim 23 above. The combination makes obvious the “system according to claim 23, wherein the analysis is to determine if speech is present in the content” because Le teaches an event such as the user’s voice command to record the audio from the circular buffers to the data storage (see Le, ¶ 0024 and 0028).
Regarding claim 25, see the preceding rejection with respect to claim 1 above. For the same reasons as stated above with respect to claim 1, the combination of Le, Stirnemann, Mayer, and Mumford makes obvious:
“A system comprising:
a first ambient sound microphone, the first ambient sound microphone producing a first ASM signal” because the combination makes obvious a first ambient microphone that generates the first sound signal that is received by the system (see Le, ¶ 0021 and figure 2, unit 38 in view of Stirnemann, figure 7, unit 1.1 and ¶ 0082);
“a second ambient sound microphone, the second ambient sound microphone producing a second ASM signal” because the combination makes obvious a second ambient microphone that generates the second sound signal that is received by the system (see Le, ¶ 0021 in view of Stirnemann, figure 7, unit 1.2 and ¶ 0082);
“an ear canal microphone, the ear canal microphone producing an ear canal signal, wherein the first ambient sound microphone, the second ambient sound microphone, and the ear canal microphone are part of an earphone” because the combination makes obvious a personal, or in-ear, microphone that generates the third sound signal that is received by the system, and the three microphones are part of an earpiece (see Le, ¶ 0021 and figure 2, units 36 and 38 in view of Stirnemann, figure 7, units 1.1, 1.2, and 6 and ¶ 0069-0070 and 0082);
“a first circular buffer, wherein the first ambient sound signal is constantly recorded in the circular buffer during a mode operation of the earphone” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) in a scrolling buffer (see Le, ¶ 0010 and 0028), and the combination of Le, Stirnemann, and Mayer makes obvious that the first ambient sound signal is constantly buffered in a circular buffer during operation of the earpiece (see Le, ¶ 0021 and figure 1C, in view of Stirnemann, ¶ 0069-0070, and further in view of Mayer, ¶ 0016);
“a second circular buffer, wherein the second ambient sound signal is constantly recorded in the second circular buffer during the mode operation of the earphone” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) (see Le, ¶ 0010 and 0028), and because Stirnemann makes obvious the second ambient sound signal, it is obvious to use a second circular buffer to constantly record the second ambient sound signal (see Mayer, ¶ 0016);
“a third circular buffer, wherein the ear canal signal is constantly recorded in the third circular buffer during the mode operation of the earphone, wherein the mode operation of the earphone is when a user activates a mode of operation where the first ambient sound signal, the second ambient sound signal and the ear canal sound signal are all constantly recorded” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) (see Le, ¶ 0010 and 0028) and teaches that the data storage device records a user’s conversation based on a recognized voice command or user instruction (see Le, ¶ 0010, 0021, 0023, 0028, 0035, and 0037). Le further suggests recording an audio clip, such as the user's conversation, using both microphones; it would have been obvious to one of ordinary skill in the art at the time of the invention that recording a conversation including all participants would record the user’s voice using the personal microphone and record the other participants' voices using the ambient microphone (see Le, ¶ 0023 and 0024 in view of ¶ 0035 and 0037, and in view of Mayer, ¶ 0016). The combination therefore makes obvious constantly recording the ear canal signal in a third circular buffer to monitor the user’s speech with the personal, or in-ear, microphone for recording a conversation including all participants;
“a data storage system” because Le teaches the data storage for recording the buffered audio for more permanent storage (see Le, ¶ 0024 and 0028); and
“an event detector, wherein when an event is detected the event detector retrieves data from the circular buffer, embeds a time coded index into the data generating a new data, and stores the new data to the data storage system, wherein the data includes the audio content” because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored signal is the other person’s voice from the ASM, such that the environmental microphone records the other person (see Le, ¶ 0010, 0021, 0028, and 0035); the combination makes obvious that the recorded conversation includes one or more of the first and second ambient sound signals or the ear canal sound signal (see Le, ¶ 0024 in view of Stirnemann, ¶ 0084 and Mayer, ¶ 0016); and the combination further makes it obvious to embed time coding or information in the audio content (e.g., an ambient sound signal) for later recall and analysis (see Le, ¶ 0035 and 0039, in view of Mumford, ¶ 0010 and 0020-0021).
Regarding claim 26, see the preceding rejection with respect to claim 25 above. The combination makes obvious the “system according to claim 25 further comprising:
an audio forensics analysis system configured to analyze either a content of one or more of the first circular buffer, the second circular buffer or the third circular buffer, or wherein the analysis system is configured to analyze the new data stored on the data storage device” because Le makes obvious that the circular buffer is analyzed to detect voice commands issued by the user (see Le, ¶ 0024 and 0027-0028).
Regarding claim 27, see the preceding rejection with respect to claim 26 above. The combination makes obvious the “system according to claim 26, where the audio forensics analysis system includes a communication system configured to transmit the content of at least one or more of the first circular buffer, the second circular buffer or the third circular buffer or the new data of the data storage device to a remote server for analysis” because Le teaches that the recorded audio is uploaded to another computer (see Le, ¶ 0039).
Regarding claim 28, see the preceding rejection with respect to claim 27 above. The combination makes obvious the “system according to claim 27, wherein the analysis is to determine if speech is present in the content” because Le teaches an event such as the user’s voice command to record the audio from the circular buffers to the data storage (see Le, ¶ 0024 and 0028).
Regarding claim 29, see the preceding rejection with respect to claim 1 above. For the same reasons as stated above with respect to claim 1, the combination of Le, Stirnemann, Mayer, and Mumford makes obvious:
“A system comprising:
a first ambient sound microphone, the first ambient sound microphone producing a first ASM signal” because the combination makes obvious a first ambient microphone that generates the first sound signal that is received by the system (see Le, ¶ 0021 and figure 2, unit 38 in view of Stirnemann, figure 7, unit 1.1 and ¶ 0082);
“a second ambient sound microphone, the second ambient sound microphone producing a second ASM signal” because the combination makes obvious a second ambient microphone that generates the second sound signal that is received by the system (see Le, ¶ 0021 in view of Stirnemann, figure 7, unit 1.2 and ¶ 0082);
“an ear canal microphone, the ear canal microphone producing an ear canal signal, wherein the first ambient sound microphone, the second ambient sound microphone, and the ear canal microphone are part of an earphone” because the combination makes obvious a personal, or in-ear, microphone that generates the third sound signal that is received by the system, and the three microphones are part of an earpiece (see Le, ¶ 0021 and figure 2, units 36 and 38 in view of Stirnemann, figure 7, units 1.1, 1.2, and 6 and ¶ 0069-0070 and 0082);
“a first circular buffer, wherein the first ambient sound signal is constantly recorded in the circular buffer during a mode operation of the earphone” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) in a circular buffer during operation of the earpiece, as made obvious by the combination of Le, Stirnemann, and Mayer (see Le, ¶ 0010, 0021, 0028, and figure 1C, in view of Stirnemann, ¶ 0069-0070, and further in view of Mayer, ¶ 0016);
“a second circular buffer, wherein the second ambient sound signal is constantly recorded in the second circular buffer during the mode operation of the earphone” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) (see Le, ¶ 0010 and 0028), and because Stirnemann makes obvious the second ambient sound signal, it is obvious to use a second circular buffer to constantly record the second ambient sound signal (see Mayer, ¶ 0016);
“a third circular buffer, wherein the ear canal signal is constantly recorded in the third circular buffer during the mode operation of the earphone, wherein the mode operation of the earphone is when a user activates a mode of operation where the first ambient sound signal, the second ambient sound signal and the ear canal sound signal are all constantly recorded” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) (see Le, ¶ 0010 and 0028) and teaches that the data storage device records a user’s conversation based on a recognized voice command or user instruction (see Le, ¶ 0010, 0021, 0023, 0028, 0035, and 0037). Le further suggests recording an audio clip, such as the user’s conversation, using both microphones. It would have been obvious to one of ordinary skill in the art at the time of the invention that recording a conversation including all participants would record the user’s voice using the personal microphone and record the other participants’ voices using the ambient microphone (see Le, ¶ 0023 and 0024 in view of ¶ 0035 and 0037, and in view of Mayer, ¶ 0016). Therefore, the combination makes obvious the feature of constantly recording the ear canal sound signal in a third circular buffer to monitor the user’s speech with the personal microphone when recording a conversation including all participants;
“a data storage system” because Le teaches the data storage for recording the buffered audio for more permanent storage (see Le, ¶ 0024 and 0028); and
“an event detector, wherein when an event is detected the event detector retrieves data from the circular buffer, embeds a time coded index into the data generating a new data, and stores the new data to the data storage system, wherein the data includes the ear canal sound signal” because Le teaches that the user triggers the recording to store a user conversation before and after the voice command is received, where the stored signal is the person’s own voice from the ear canal microphone, such that the microphone records the user’s voice (see Le, ¶ 0010, 0021, 0028, and 0035). The combination makes obvious that the recorded conversation includes one or more of the first and second ambient sound signals or the ear canal sound signal (see Le, ¶ 0024 in view of Stirnemann, ¶ 0084 and Mayer, ¶ 0016), and further makes obvious embedding time coding or information in the ear canal sound signal for later recall and analysis (see Le, ¶ 0035 and 0039, in view of Mumford, ¶ 0010 and 0020-0021).
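For illustration only, the buffer-and-flush mechanism described in the rationale above (a circular buffer that constantly overwrites its oldest audio, plus an event detector that snapshots the buffer, embeds a time coded index, and stores the result) can be sketched as follows. This sketch is not the claimed invention or any cited reference's implementation; the class and function names are hypothetical, a Python deque stands in for the ring buffer, and a wall-clock timestamp stands in for the time coded index.

```python
import time
from collections import deque

class CircularAudioBuffer:
    """Fixed-capacity ring buffer; the oldest samples are overwritten first."""
    def __init__(self, capacity):
        # deque with maxlen silently discards the oldest item once full
        self.buf = deque(maxlen=capacity)

    def write(self, sample):
        self.buf.append(sample)

    def snapshot(self):
        return list(self.buf)

def on_event(buffer, storage):
    """On a detected event, embed a time coded index and persist the data."""
    new_data = {"time_index": time.time(), "samples": buffer.snapshot()}
    storage.append(new_data)  # stand-in for the more permanent data storage system
    return new_data

# Usage: write past capacity, then trigger an event.
ring = CircularAudioBuffer(capacity=4)
storage = []
for s in range(6):            # writes 0..5; only the most recent 4 survive
    ring.write(s)
record = on_event(ring, storage)
print(record["samples"])      # -> [2, 3, 4, 5]
```

The same pattern would simply be instantiated three times for the first, second, and third circular buffers recited in the claim.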
Regarding claim 30, see the preceding rejection with respect to claim 29 above. The combination makes obvious the “system according to claim 29 further comprising:
an audio forensics analysis system configured to analyze either a content of one or more of the first circular buffer, the second circular buffer or the third circular buffer, or wherein the analysis system is configured to analyze the new data stored on the data storage device” because Le makes obvious that the circular buffer is analyzed to detect voice commands issued by the user (see Le, ¶ 0024 and 0027-0028).
Regarding claim 31, see the preceding rejection with respect to claim 30 above. The combination makes obvious the “system according to claim 30, where the audio forensics analysis system includes a communication system configured to transmit the content of at least one or more of the first circular buffer, the second circular buffer or the third circular buffer or the new data of the data storage device to a remote server for analysis” because Le teaches that the recorded audio is uploaded to another computer (see Le, ¶ 0039).
Regarding claim 32, see the preceding rejection with respect to claim 31 above. The combination makes obvious the “system according to claim 31, wherein the analysis is to determine if speech is present in the content” because Le teaches an event, such as the user’s voice command, that triggers recording the audio from the circular buffers to the data storage (see Le, ¶ 0024 and 0028).
Claims 10, 12, and 14-20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over the combination of Le, Stirnemann, Mayer, and Mumford as applied to claims 1, 5, and 9 above, and further in view of Shalon et al., US 2006/0064037 A1 (previously cited and hereafter Shalon).
Regarding claim 10, see the preceding rejection with respect to claim 9 above. The combination of Le, Stirnemann, Mayer, and Mumford makes obvious the system according to claim 9. However, the combination does not appear to teach the features “wherein if speech is present in the content then a first remote server converts the speech to text”.
Shalon teaches systems and methods for monitoring and modifying behavior (see Shalon, abstract, ¶ 0009, 0022, 0030, and 0093). More importantly, the system taught by Shalon is similar to the teachings of Le, wherein the system uses voice recognition capabilities for user interaction, performs functions as an interactive calendar including scheduling and recording verbal comments, allows a user to receive and send emails or voice messages, provides entertainment via CD or MP3 players, performs wireless communications, and provides a voice recording function for recording speech and/or conversations (see Shalon, ¶ 0157, 0314-0320, 0330, 0331, and 0343). Additionally, Shalon teaches speech to text processing via automatic transcription services (see Shalon, ¶ 0343), and teaches audio interfaces for external systems, such as using voice commands with an internet-enabled cell phone where the cell phone responds with requested information (see Shalon, ¶ 0357). Shalon also teaches distributed processing features, where the system has a processing unit in a remote server (see Shalon, ¶ 0237 and figure 1d, units 12, 14, 20, and 22). One of ordinary skill in the art at the time of the invention would have found it obvious to transmit the user’s recorded speech to a remote server with better processing capabilities (see Shalon, ¶ 0235-0237) for providing better and/or faster automated transcription services (see Shalon, ¶ 0343 and 0357 in view of ¶ 0235-0237). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le, Stirnemann, Mayer, and Mumford with the teachings of Shalon to provide better and/or faster automated transcription services for transcribing a user’s recorded conversation (see Le, ¶ 0038-0039 in view of Shalon, ¶ 0235-0237, 0343, and 0357).
Therefore, the combination of Le, Stirnemann, Mayer, Mumford, and Shalon makes obvious the “system according to claim 9, wherein if speech is present in the content then a first remote server converts the speech to text and either the first remote server sends the results to the system or a second remote server sends the results to the system” because Shalon makes obvious that a first remote server converts the speech to text via an automated transcription service and the response, or transcribed speech and/or conversation, is sent to the user’s system (see Le, ¶ 0038-0039 in view of Shalon, ¶ 0235-0237, 0343, and 0357 and figure 1d, units 12, 14, 20, and 22).
Regarding claim 12, see the preceding rejection with respect to claim 10 above. The combination of Le, Stirnemann, Mayer, Mumford, and Shalon makes obvious “The system according to claim 10, further including:
a text-to-speech translation system to translate the received results from the first or second remote server into speech” because Shalon makes obvious that a first remote server converts the speech to text via an automated transcription service and the response, or transcribed speech and/or conversation, is sent to the user’s system (see Le, ¶ 0038-0039 in view of Shalon, ¶ 0235-0237, 0327, 0331, 0343, and 0357 and figure 1d, units 12, 14, 20, and 22).
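For illustration only, the distributed round trip described in the claim 10 and claim 12 rationales (the device transmits recorded speech to a first remote server, the server converts the speech to text and returns the result, and the device translates the received result back into speech) can be sketched as below. Every function here is a hypothetical stand-in, not the implementation of Le or Shalon; a real deployment would run an actual speech recognition engine and text-to-speech engine at these points.

```python
def remote_transcription_service(audio_bytes):
    # Hypothetical stand-in for the first remote server's automated
    # transcription service (speech to text).
    return "transcript of %d bytes of speech" % len(audio_bytes)

def local_text_to_speech(text):
    # Hypothetical stand-in for the device-side text-to-speech
    # translation of the received results.
    return "spoken: " + text

def process_buffered_speech(audio_bytes):
    # 1) transmit the buffered audio to the remote server,
    # 2) receive the transcribed text back,
    # 3) render the received text as speech on the device.
    text = remote_transcription_service(audio_bytes)
    return local_text_to_speech(text)

result = process_buffered_speech(b"\x00\x01" * 50)
print(result)  # -> spoken: transcript of 100 bytes of speech
```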
Regarding claim 14, see the preceding rejection with respect to claim 5 above. The combination of Le, Stirnemann, Mayer, and Mumford makes obvious the system of claim 5, but does not appear to teach or reasonably suggest the features of “wherein the event is at least one of … an accident” such that the “when the accident is determined the system automatically transmits an indication to a remote location a message that a user has been involved in the accident”.
Shalon teaches systems and methods for monitoring and modifying behavior (see Shalon, abstract, ¶ 0009, 0022, 0030, and 0093). More importantly, the system taught by Shalon is similar to the teachings of Le, wherein the system uses voice recognition capabilities for user interaction, performs functions as an interactive calendar including scheduling and recording verbal comments, allows a user to receive and send emails or voice messages, provides entertainment via CD or MP3 players, performs wireless communications, and provides a voice recording function for recording speech and/or conversations (see Shalon, ¶ 0157, 0314-0320, 0330, 0331, and 0343). Shalon also teaches other health related aspects, where the system is used to monitor an elderly or disabled person living alone. It detects an event, like a fall, and subsequently helps the user call a preprogrammed number, send a message to that number, or otherwise allow communication with the user, such as sending the user's voice and ambient audio to the remote party (see Shalon, ¶ 0332). One of ordinary skill in the art at the time of the invention would have found it obvious that the system would record audio from at least the ambient microphone in the event of a fall, because the system would initiate a phone call to 911 or other monitoring service and it would be obvious that those services record phone calls for liability, legal, training, and/or other well-known reasons (see Shalon, ¶ 0317, 0331-0332, and 0343). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le, Stirnemann, Mayer, and Mumford with the teachings of Shalon for the purpose of providing a personal safety device that automatically calls emergency services in the event of a detected emergency (see Shalon, ¶ 0332 and 0343).
Therefore, the combination of Le, Stirnemann, Mayer, Mumford, and Shalon makes obvious the “system according to claim 5, wherein when the accident is determined the system automatically transmits an indication to a remote location a message that a user has been involved in the accident” because Shalon makes obvious that the system monitors sounds to detect that the user has been in an accident, such as a fall (see Shalon, ¶ 0317, 0331-0332, and 0343, and further see Shalon, ¶ 0430).
Regarding claim 15, see the preceding rejection with respect to claim 1 above. The combination of Le, Stirnemann, Mayer, and Mumford makes obvious the system of claim 1, and for the same reasons makes obvious similar features of the instant method.
However, the combination of Le, Stirnemann, Mayer, and Mumford does not appear to teach or reasonably suggest the features of “analyzing the second circular buffer for a second event” and “analyzing the third circular buffer for a third event”.
Shalon teaches systems and methods for monitoring and modifying behavior, where in one aspect, the system is used to monitor and optionally modify user behavior, such as eating behavior, by using a body mountable sensor to sense non-verbal energy (see Shalon, abstract, ¶ 0009, 0022, 0030, and 0093). In other aspects, the system is similar to the teachings of Le, wherein the system uses voice recognition capabilities for user interaction, performs functions as an interactive calendar including scheduling and recording verbal comments, allows a user to receive and send emails or voice messages, provides entertainment via CD or MP3 players, performs wireless communications, and provides a voice recording function for recording speech and/or conversations (see Shalon, ¶ 0157, 0314-0320, 0330, 0331, and 0343). Shalon also teaches other health related aspects, where the system is used to monitor an elderly or disabled person living alone. It detects an event, like a fall, and subsequently helps the user call a preprogrammed number, send a message to that number, or otherwise allow communication with the user, such as sending the user's voice and ambient audio to the remote party (see Shalon, ¶ 0332). One of ordinary skill in the art at the time of the invention would have found it obvious that the system would record audio from at least the ambient microphone in the event of a fall, because the system would initiate a phone call to 911 or other monitoring service and it would be obvious that those services record phone calls for liability, legal, training, and/or other well-known reasons (see Shalon, ¶ 0317, 0331-0332, and 0343). It would have been obvious to one of ordinary skill in the art at the time of the invention to modify the combination of Le, Stirnemann, Mayer, and Mumford with the teachings of Shalon for the purpose of providing a personal safety device that automatically calls emergency services in the event of a detected emergency (see Shalon, ¶ 0332 and 0343).
Therefore, the combination of Le, Stirnemann, Mayer, Mumford, and Shalon makes obvious:
“A method comprising:
receiving a first sound signal, wherein the first sound signal is generated by a first microphone” because the combination makes obvious a first ambient microphone that generates the first sound signal that is received by the system (see Le, ¶ 0021 and figure 2, unit 38 in view of Stirnemann, figure 7, unit 1.1 and ¶ 0082);
“receiving a second sound signal, wherein the second sound signal is generated by a second microphone” because the combination makes obvious a second ambient microphone that generates the second sound signal that is received by the system (see Le, ¶ 0021 in view of Stirnemann, figure 7, unit 1.2 and ¶ 0082);
“receiving a third sound signal, wherein the third sound signal is generated by a third microphone, wherein the first microphone and the second microphone and the third microphone are part of a device” because the combination makes obvious a personal, or in-ear, microphone that generates the third sound signal that is received by the system, and the three microphones are part of an earpiece (see Le, ¶ 0021 and figure 2, units 36 and 38 in view of Stirnemann, figure 7, units 1.1, 1.2, and 6 and ¶ 0069-0070 and 0082);
“constantly recording the first sound signal in a first circular buffer along with an embedded time coded index” because Le teaches the continual and constant buffering of the environmental sound signal (i.e., the sound signal from the environmental microphone of Le) in a circular buffer during operation of the earpiece, as made obvious by the combination (see Le, ¶ 0010, 0021, 0028, and figure 1C, Stirnemann, ¶ 0069-0070, Mayer, ¶ 0016, and Shalon, ¶ 0117-0118 and 0343), and because the combination further makes obvious embedding time coding or information in the first ambient sound signal for later recall and analysis (see Le, ¶ 0035 and 0039, in view of Mumford, ¶ 0010 and 0020-0021);
“constantly recording the second sound signal in a second circular buffer” (see Le, ¶ 0010 and 0028, Stirnemann, ¶ 0082, Mayer, ¶ 0016, and Shalon, ¶ 0317, 0331-0332, and 0343);
“constantly recording the second sound signal in a third circular buffer” (see Le, ¶ 0010 and 0028, Stirnemann, ¶ 0082, Mayer, ¶ 0016, and Shalon, ¶ 0317, 0331-0332, and 0343);
“analyzing the first circular buffer for a first event and if a first event is detected then storing the first circular buffer into a data storage system, wherein the first circular buffer is part of a memory that is part of the device, wherein the second circular buffer is part of a second memory that is part