DETAILED ACTION
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-19 are presented for examination on the merits.
Claim Objection
2. Claim 9 uses the term “capable of”. It has been held that the recitation that an element is capable of performing a function is not a positive limitation but only requires the ability to so perform. It does not constitute a limitation in any patentable sense.
Claim Rejections - 35 USC § 112
3. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
4. Claim 9 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which applicant regards as the invention. The term "such as a streaming service" renders the claim indefinite because it is unclear whether the limitation following "such as" is part of the claimed invention, thereby rendering the scope of the claim unascertainable. See MPEP § 2173.05(d).
5. As to claim 9, the phrase "such as" renders the claim indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d).
Claim Rejections - 35 USC § 103
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 1-17 are rejected under 35 U.S.C. 103 as being unpatentable over Rao (US 11614529 B1) in view of Kamath (US 11564036 B1) and further in view of Bradl (US 2002/0072301 A1).
As to claim 1, Rao, which is directed to controlling emission of ultrasonic signals for presence detection, discloses the claimed:
a. a presence detecting electronic device read on Col. 1, Lines 43-51, (FIG. 1 shows an illustrative presence-detection device in a user environment that is performing presence detection);
b. a microphone configured to receive acoustic signals from the environment read on Col. 1, Lines 43-51, (the architecture includes the presence-detection device controlling secondary devices physically situated in the user environment based on detecting presence of a user. In this example, the presence-detection device has a loudspeaker and a microphone that are used to detect presence, and/or lack of presence, of a user);
c. a processor connected to the microphone for analyzing the received signal read on Col. 6, Lines 3-47, (upon being emitted, the ultrasonic signal 114 will generally reflect off of objects in the user environment 102. As briefly mentioned above, when the ultrasonic signal 114 bounces off objects, various changes to the characteristics of the audio signal may occur. For instance, as mentioned above, the Doppler effect (or Doppler shift) is one such change in audio signal characteristics where the frequency or wavelength of a wave, such as an emitted sound wave, changes in relation to an emitting object upon bouncing off of a moving object. In the illustrated example, the ultrasonic signal 114 may experience a change in frequency upon reflecting off the user 106 if the user 106 is moving. Thus, because there is movement 120 by the user 106, the reflected ultrasonic signal 122 (or reflected ultrasonic sound) may experience a change in frequency. Generally, if the movement 120 of the user 106 is towards the loudspeaker, then the reflected ultrasonic signal 122 may have a higher frequency compared to the emitted signal 114 when detected at the presence-detection device 104). Rao does not explicitly recite wherein the device also includes means for directly detecting the transmitted signal from an acoustic source and comparing the directly transmitted signal with the received signal for identifying if an object or user is in the vicinity of the device based on the comparison between the directly transmitted signal and the signal reflected from the object.
However, Kamath, which is directed to presence-detection devices that detect movement of a person in an environment by emitting ultrasonic signals, cures this deficiency by teaching that it may be beneficial wherein the processor is configured to compare the directly transmitted signal with the received signal for identifying if an object or user is in the vicinity of the device based on the comparison between the directly transmitted signal and the signal reflected from the object read on Col. 5, Lines 43-63, (the reference signal may be generated using another microphone of the presence-detection device. The presence-detection device may include multiple microphones, some of which are located in closer proximity to the loudspeaker than others. In some examples, a microphone located in closest proximity, or in close proximity, to the loudspeaker may be used to generate audio signals that represent the audible sound with more strength as compared to the reflection signals. Accordingly, a microphone located further away from the loudspeaker may be used to generate the audio signal that represents the reflected signal and the audible sound, and a microphone in closer proximity to the loudspeaker may be used to generate the reference signal. The reference signal may represent the audible noise with more strength (e.g., higher decibels (dB)) as compared to the audio signal generated by the microphone located further from the loudspeaker. The audio signal may be processed using AEC techniques and the reference signal to remove (or attenuate) the portion of the audio signal representing the audible sound (e.g., the distortions) to help isolate the reflection signals for further analysis to detect movement).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the presence detection using ultrasonic signals with concurrent audio playback of Kamath into Rao in order to detect movement of a person in an environment by emitting ultrasonic signals using a loudspeaker that is concurrently outputting audible sound.
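For clarity of the record, the acoustic echo cancellation (AEC) principle Kamath relies on above can be illustrated by the following sketch: a reference signal captured near the loudspeaker is adaptively scaled and subtracted from the far microphone's signal to attenuate the audible sound and help isolate the reflections. This single-tap least-mean-squares (LMS) filter and the numeric values are illustrative assumptions only, not Kamath's actual implementation.

```python
# Illustrative sketch of the AEC idea in Kamath, Col. 5, Lines 43-63:
# a one-tap LMS filter estimates how strongly the reference (near-mic)
# signal appears in the far microphone's audio and subtracts it, leaving
# the weak "reflection" component for further analysis.
import math

def cancel_reference(audio, reference, mu=0.05):
    """Subtract an adaptively scaled copy of `reference` from `audio`
    using a one-tap LMS update; returns the residual signal."""
    w = 0.0              # adaptive gain estimate
    residual = []
    for x, r in zip(audio, reference):
        e = x - w * r    # error = audio minus estimated echo
        w += mu * e * r  # LMS weight update
        residual.append(e)
    return residual

# Toy example (assumed values): the far microphone hears a 0.8x copy of
# the reference plus a weak "reflection" tone.
n = 2000
reference = [math.sin(0.3 * i) for i in range(n)]
reflection = [0.05 * math.sin(1.1 * i) for i in range(n)]
audio = [0.8 * ref + refl for ref, refl in zip(reference, reflection)]

residual = cancel_reference(audio, reference, mu=0.05)
# After convergence the residual closely tracks the isolated reflection.
tail_err = sum(abs(e - refl)
               for e, refl in zip(residual[-200:], reflection[-200:])) / 200
```

After adaptation, the residual approximates the reflection component alone, which is the signal of interest for movement detection.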
Rao further discloses: the acoustic signals comprising a first signal transmitted directly from an acoustic source and a second acoustic signal representing a reflection of the transmitted acoustic signal broadly read on Col. 6, Lines 17-25, (generally, if the movement 120 of the user 106 is towards the loudspeaker, then the reflected ultrasonic signal 122 may have a higher frequency compared to the emitted signal 114 when detected at the presence-detection device 104. Conversely, the reflected ultrasonic signal 122 may have a lower frequency relative to the presence-detection device 104 compared to the emitted signal 114 when the movement 120 of the user 106 is away from the presence-detection device 104). Even though Rao broadly discloses this limitation as shown above, the examiner relies on Bradl to further clarify the record.
However, Bradl, which is directed to detecting the presence of a work piece, cures this deficiency by teaching that it may be beneficial:
d. wherein the acoustic signals comprising a first signal transmitted directly from an acoustic source and a second acoustic signal representing a reflection of the transmitted acoustic signal read on Claim 14, (wherein said detection device includes first means for determining an intensity of the reflected ultrasound waves received by said ultrasound receiver, second means for comparing the intensity with an intensity of the ultrasound waves emitted by said ultrasound transmitter, and third means for deciding about a presence and an absence of the work piece).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the apparatus and method for detecting a work piece in an automatic processing apparatus of Bradl into Rao in view of Kamath in order to detect whether a work piece is held in a holder of the processing apparatus by irradiating the holder with ultrasound waves, receiving the reflected ultrasound waves, and deciding about the presence or absence of the work piece on the basis of the reflected ultrasound waves.
As to claim 2, Rao further discloses:
a. wherein the device includes at least two microphones, the processor being configured to detect the direction of the incoming signals read on Col. 7, Lines 30-63, (for instance, the machine-learning model(s) may be trained to identify, based on a comparison between phase components representing the reflection 122 of the ultrasonic signal 114 detected by two different microphones 112, a direction of the user 106 as he or she moves through the environment 102. As an example, a first microphone 112 may capture audio data representing a reflected ultrasonic signal 122 for 8 seconds of time, and a second microphone 112 that is oriented in a different direction may capture audio data representing the reflected ultrasonic signal 122 for substantially the same 8 seconds of time. Feature vectors may be created for each of those audio channel sources that represent the phase of the frequency response of the reflected ultrasonic signal 122. The machine-learning model(s) may be trained to determine, based on a comparison (e.g., subtraction) of the feature vectors representing phase components, a direction of movement of the object as it moves during those 8 seconds. In this way, two (or more) microphones 112 in a microphone array may be utilized to determine the direction the user 106 (or another object) is moving in the environment 102).
As to claim 3, Kamath further teaches:
a. wherein the acoustic source is independent of the device and the processor is configured to detect the direct acoustic signal as well as the first reflection and from the difference in time of arrival calculate whether a user is likely to be in the vicinity of the device read on Col. 5, Lines 43-63, ( the reference signal may be generated using another microphone of the presence-detection device. The presence-detection device may include multiple microphones, some of which are located in closer proximity to the loudspeaker than others. In some examples, a microphone located in closest proximity, or in close proximity, to the loudspeaker may be used to generate audio signals that represent the audible sound with more strength as compared to the reflection signals. Accordingly, a microphone located further away from the loudspeaker may be used to generate the audio signal that represents the reflected signal and the audible sound, and a microphone in closer proximity to the loudspeaker may be used to generate the reference signal. The reference signal may represent the audible noise with more strength (e.g., higher decibels (dB)) as compared to the audio signal generated by the microphone located further from the loudspeaker. The audio signal may be processed using AEC techniques and the reference signal to remove (or attenuate) the portion of the audio signal representing the audible sound (e.g., the distortions) to help isolate the reflection signals for further analysis to detect movement).
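For clarity of the record regarding the claim 3 limitation of calculating from the difference in time of arrival, the underlying geometry can be illustrated by the following sketch: the delay between the directly received signal and its first reflection corresponds to an extra path length, from which the proximity of a reflector can be judged. The decision threshold below is an assumed illustration, not a value taken from the claims or the references.

```python
# Illustrative sketch of the time-of-arrival comparison recited in
# claim 3: the arrival-time difference between the direct signal and
# its first reflection yields an extra propagation path length.
SPEED_OF_SOUND = 343.0  # m/s, room-temperature air (assumed)

def extra_path_length_m(delta_t_s):
    """Extra distance travelled by the reflected signal relative to
    the direct path, given the arrival-time difference in seconds."""
    return SPEED_OF_SOUND * delta_t_s

def likely_in_vicinity(delta_t_s, max_extra_path_m=3.0):
    """Assumed decision rule: a short extra path implies the reflector
    (possibly a user) is close to the device."""
    return extra_path_length_m(delta_t_s) <= max_extra_path_m

# A reflection arriving 5 ms after the direct signal travelled about
# 1.7 m further, so the reflector would be judged nearby.
nearby = likely_in_vicinity(0.005)
far = likely_in_vicinity(0.02)  # ~6.9 m extra path: judged too far
```

The reflector lies on an ellipse whose foci are the source and the microphone; a small extra path length bounds the reflector to a small region around the device.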
As to claim 4, Rao further discloses:
a. wherein the processor is configured to detect the direction to the acoustic source and the direction of the reflected signal as well as the time difference between the signal arrivals at the detectors, and to analyze the likelihood of a user being present read on Col. 6, Lines 3-25, (Upon being emitted, the ultrasonic signal 114 will generally reflect off of objects in the user environment 102. As briefly mentioned above, when the ultrasonic signal 114 bounces off objects, various changes to the characteristics of the audio signal may occur. For instance, as mentioned above, the Doppler effect (or Doppler shift) is one such change in audio signal characteristics where the frequency or wavelength of a wave, such as an emitted sound wave, changes in relation to an emitting object upon bouncing off of a moving object. In the illustrated example, the ultrasonic signal 114 may experience a change in frequency upon reflecting off the user 106 if the user 106 is moving. Thus, because there is movement 120 by the user 106, the reflected ultrasonic signal 122 (or reflected ultrasonic sound) may experience a change in frequency. Generally, if the movement 120 of the user 106 is towards the loudspeaker, then the reflected ultrasonic signal 122 may have a higher frequency compared to the emitted signal 114 when detected at the presence-detection device 104).
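For clarity of the record, the Doppler relationship relied upon in the cited passages of Rao can be illustrated by the following sketch: a reflector moving toward the loudspeaker raises the observed frequency, and one moving away lowers it. The speed of sound, tone frequency, and velocities below are illustrative assumptions, not values taken from Rao.

```python
# Illustrative sketch of the Doppler shift described in Rao, Col. 6:
# the two-way (emit-and-reflect) Doppler approximation for reflector
# speeds much smaller than the speed of sound.
SPEED_OF_SOUND = 343.0  # m/s, room-temperature air (assumed)

def reflected_frequency(f_emitted_hz, v_reflector_ms):
    """Approximate frequency of an ultrasonic signal after reflecting
    off an object moving at v_reflector_ms (positive = toward the
    source), using the two-way Doppler approximation for v << c."""
    return f_emitted_hz * (1.0 + 2.0 * v_reflector_ms / SPEED_OF_SOUND)

f0 = 30_000.0                           # emitted ultrasonic tone, Hz (assumed)
toward = reflected_frequency(f0, 1.0)   # user walking toward the device
away = reflected_frequency(f0, -1.0)    # user walking away from the device
```

Consistent with Rao, `toward` exceeds the emitted frequency and `away` falls below it; a stationary reflector produces no shift.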
As to claim 5, Rao further discloses:
a. wherein the processor is also configured to filter the received acoustic signals at the microphones, the filter being a high pass filter selecting frequencies above a certain limit, improving the accuracy of the distinction between the different acoustic propagation paths read on Col. 11, Lines 21-48, (a signal-processing component 214 that, when executed by the processor(s) 202, perform various operations for processing audio data/signals generated by the microphone(s) 112. For example, the signal-processing component 214 may include components to perform low-pass filtering and/or high-pass filtering to ensure that speech and other sounds in the spectrum region of the ultrasonic signal does not affect baseband processing. For instance, the signal-processing component 214 may performing high-pass filtering for the audio data received in each audio channel for respective microphones 112 to remove sounds at lower frequencies that are outside or lower than of the frequency range of the ultrasonic signal and/or reflected signals that have shifted, such as speech (e.g., 100 Hz, 200 Hz, etc.) or other sounds in the environment 102).
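For clarity of the record, the high-pass filtering Rao describes (attenuating speech-band content so that only frequencies near the ultrasonic signal remain for baseband processing) can be illustrated by the following sketch. The one-pole RC-style filter, sample rate, and cutoff below are illustrative assumptions standing in for Rao's unspecified filter design.

```python
# Illustrative sketch of the high-pass filtering in Rao, Col. 11: remove
# low-frequency sounds (e.g., speech) so the ultrasonic band dominates.
import math

def high_pass(samples, cutoff_hz, sample_rate_hz):
    """First-order high-pass filter in difference-equation form."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

fs = 96_000  # sample rate high enough for a ~30 kHz tone (assumed)
t = [i / fs for i in range(4096)]
speech = [math.sin(2 * math.pi * 200 * x) for x in t]        # low-frequency sound
ultra = [0.1 * math.sin(2 * math.pi * 30_000 * x) for x in t]  # ultrasonic tone
mixed = [a + b for a, b in zip(speech, ultra)]

filtered = high_pass(mixed, cutoff_hz=15_000, sample_rate_hz=fs)
# The 200 Hz component is strongly attenuated; the 30 kHz tone passes.
rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
```

The residual energy after filtering is dominated by the ultrasonic component, improving the distinction between propagation paths as the claim recites.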
As to claim 6, Kamath further teaches:
a. wherein the processor is configured to receive the transmitted signal from a wireless or wired electronic connection for comparing with the received signal and to recognize the directly transmitted signal based on the electronic connection read on Col. 5, Lines 43-63 and Col. 17, Lines 54-65, ( the reference signal may be generated using another microphone of the presence-detection device. The presence-detection device may include multiple microphones, some of which are located in closer proximity to the loudspeaker than others. In some examples, a microphone located in closest proximity, or in close proximity, to the loudspeaker may be used to generate audio signals that represent the audible sound with more strength as compared to the reflection signals. Accordingly, a microphone located further away from the loudspeaker may be used to generate the audio signal that represents the reflected signal and the audible sound, and a microphone in closer proximity to the loudspeaker may be used to generate the reference signal. The reference signal may represent the audible noise with more strength (e.g., higher decibels (dB)) as compared to the audio signal generated by the microphone located further from the loudspeaker. The audio signal may be processed using AEC techniques and the reference signal to remove (or attenuate) the portion of the audio signal representing the audible sound (e.g., the distortions) to help isolate the reflection signals for further analysis to detect movement. The first microphone 206(1) may generate an audio signal 406 representing a reflected signal 408 and an audible signal 408. Further, the second microphone 206(2) may generate a reference signal 412. That is, the second microphone 206(2) may be located in closest proximity, or in close proximity, to the loudspeaker 110 and may be used to generate the reference signal 412 that represents the audible sound 116 with more strength as compared to the reflected signal 408. 
The reference signal 412 may represent the audible sound 116 with more strength (e.g., higher decibels (dB)) as compared to the audio signal 406 generated by the microphone 206(1) located further from the loudspeaker 110).
As to claim 7, Kamath further teaches:
a. wherein the wired or wireless connection is connected to the speaker or close to the speaker so as to compensate for any distortions caused by speaker protection processes or limitations in the signal processing and amplification read on Col. 5, Lines 43-63, (the reference signal may be generated using another microphone of the presence-detection device. The presence-detection device may include multiple microphones, some of which are located in closer proximity to the loudspeaker than others. In some examples, a microphone located in closest proximity, or in close proximity, to the loudspeaker may be used to generate audio signals that represent the audible sound with more strength as compared to the reflection signals. Accordingly, a microphone located further away from the loudspeaker may be used to generate the audio signal that represents the reflected signal and the audible sound, and a microphone in closer proximity to the loudspeaker may be used to generate the reference signal. The reference signal may represent the audible noise with more strength (e.g., higher decibels (dB)) as compared to the audio signal generated by the microphone located further from the loudspeaker. The audio signal may be processed using AEC techniques and the reference signal to remove (or attenuate) the portion of the audio signal representing the audible sound (e.g., the distortions) to help isolate the reflection signals for further analysis to detect movement).
As to claim 8, Rao further discloses:
a. wherein the processor is configured to recognize the transmitted acoustic signal by comparison with a database including a set of samples, and to compare the received signals with the sample to analyze the received signal and detecting reflected signals in the received signals read on Col. 12, Lines 39-47, (the computer-readable media 204 may further store or include an audio-data buffer 216 that is memory allocation which is configured to store audio data 220. The audio-data buffer 216 may store audio data that is configured by the signal-generation component 210 to be output by the loudspeaker(s) 110 (e.g., ultrasonic audio data, audible audio data, etc.). Further, the audio-data buffer 216 may store audio data that was generated using the microphone(s) 112 (e.g., reflected ultrasonic signals 114)).
As to claim 9, Rao further discloses:
a. wherein the samples are constituted by a music library, such as a streaming service, thus being capable of identifying a user based on the distortions in the received signals compared with the original read on Col. 4, Lines 51-65, (the techniques described herein may include various optimizations. For instance, when the presence-detection devices are playing audible music data, or otherwise outputting audio in a human-audible frequency range, the presence-detection devices may be configured to determine how to mix the audible audio data with the ultrasonic audio data in such a way that presence detection is still enabled. For instance, the presence-detection devices may determine at what power level (e.g., volume level) the audible audio is being output, and select a power level for the ultrasonic signal to ensure that reflections of the ultrasonic signal will be received at the device. Generally, the higher the power level at which the audible audio is output, the higher the power level at which the ultrasonic signal is to be output).
As to claim 10, Rao further discloses:
a. wherein the processor is configured to analyze the characteristics of the received signals so as to evaluate the size, position, posture or any gestures of the detected user read on Col. 6, Lines 3-25, (the Doppler effect (or Doppler shift) is one such change in audio signal characteristics where the frequency or wavelength of a wave, such as an emitted sound wave, changes in relation to an emitting object upon bouncing off of a moving object. In the illustrated example, the ultrasonic signal 114 may experience a change in frequency upon reflecting off the user 106 if the user 106 is moving. Thus, because there is movement 120 by the user 106, the reflected ultrasonic signal 122 (or reflected ultrasonic sound) may experience a change in frequency. Generally, if the movement 120 of the user 106 is towards the loudspeaker, then the reflected ultrasonic signal 122 may have a higher frequency compared to the emitted signal 114 when detected at the presence-detection device 104).
As to claim 11, the claim is interpreted and rejected as to claims 1 & 3.
As to claim 12, the claim is interpreted and rejected as to claim 2.
As to claim 13, the claim is interpreted and rejected as to claim 1.
As to claim 14, the claim is interpreted and rejected as to claim 8.
As to claim 15, Rao further discloses:
a. wherein the certain limit is 1 kHz read on Col. 5, Lines 45-67, (As shown in FIG. 1, the loudspeaker 110 of the presence-detection device 104 may transmit, or otherwise output, an ultrasonic signal 114. Generally, the loudspeaker 110 may comprise any type of electroacoustic transducer that converts an electric audio signal into a corresponding sound. In some instances, the loudspeaker 110 may be an existing on-board speaker configured to output sound within frequency ranges that are audible to humans, such as 35 Hz-20 kHz. However, in the illustrated example the ultrasonic signal 114 may include at least a pulsed, or a continuous, emission of the signal 114 at a frequency that is outside the frequency range in which humans can hear sound (e.g., over 20 kHz)).
As to claim 16, Rao further discloses:
a. wherein the characteristics are amplitude and frequency range read on Col. 11, Line 49 – Col. 12, Line 19, (for example, the signal-processing component 214 may perform various types of transforms to convert the audio signal from the time domain into the frequency domain, such as a Fourier transform, a fast Fourier transform, a Z transform, a Fourier series, a Hartley transform, and/or any other appropriate transform to represent or resolve audio signals into their magnitude (or amplitude) components and phase components in the frequency domain. Further, the signal-processing component 214 may utilize any type of windowing function on the audio data, such as the Hanning Window, the Hamming Window, Blackman window, etc. Additionally, the signal-processing component 214 may perform a logarithmic transform on the magnitude components to transform the magnitude components of the frequency of the reflected signal. For instance, due to the high-dynamic range of the magnitude components of the frequency of the reflected ultrasonic signal, and because the amount of reflection that occurs from movement of the user 106 is relatively small (may appear similar to noise), the logarithmic transform may transform the magnitude components into a larger range. After applying a logarithmic transform to the magnitude components, the change in magnitude caused by the reflection of the ultrasonic signal off of the moving object, or person, will be more easily identifiable).
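For clarity of the record, the frequency-domain processing Rao describes (windowing, transforming to the frequency domain, and applying a logarithmic transform to the magnitude components so that small reflection-induced changes become visible) can be illustrated by the following sketch. The naive discrete Fourier transform and Hann window below are illustrative stand-ins; Rao contemplates any appropriate transform and window.

```python
# Illustrative sketch of the processing in Rao, Col. 11-12: window the
# audio, compute a DFT, and take log-magnitudes of the frequency bins.
import cmath
import math

def log_magnitude_spectrum(samples):
    """Hann-window the samples, compute a naive DFT, and return the
    log-magnitudes of the first half of the frequency bins."""
    n = len(samples)
    windowed = [s * 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
                for i, s in enumerate(samples)]
    bins = []
    for k in range(n // 2):
        acc = sum(windowed[i] * cmath.exp(-2j * math.pi * k * i / n)
                  for i in range(n))
        bins.append(math.log(abs(acc) + 1e-12))  # offset avoids log(0)
    return bins

# A pure tone at bin 8 should dominate the log-magnitude spectrum.
n = 64
tone = [math.sin(2 * math.pi * 8 * i / n) for i in range(n)]
spectrum = log_magnitude_spectrum(tone)
```

The logarithm compresses the high dynamic range of the magnitude components, which, per Rao, makes the small magnitude changes caused by a moving reflector easier to identify.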
As to claim 17, Rao further discloses:
a. wherein the known characteristics are constituted by a sound or music stored in a database read on Col. 21, Lines 9-19, (the signal-generation component 210 may analyze audio data stored in the audio-data buffer 216, such as music data. For example, the presence-detection device 104 may buffer music data in the audio-data buffer 216 prior to causing the loudspeaker(s) 110 to convert the music data into audible sound. Further, an audio-player component 710 may receive a volume-level indication 702 via an input device 240 (e.g., voice command, volume knob, touch screen, etc.). The volume-level indication 702 may indicate a power-level, or volume level, at which the audible sound is to be output by the loudspeaker 110).
As to claim 18, the claim is interpreted and rejected as to claim 1.
As to claim 19, Rao further discloses:
a. wherein the acoustic source is a speaker read on Col. 1, Lines 43-51, (the architecture includes the presence-detection device controlling secondary devices physically situated in the user environment based on detecting presence of a user. In this example, the presence-detection device has a loudspeaker and a microphone that are used to detect presence, and/or lack of presence, of a user).
Response to Arguments
9. Applicant's arguments with respect to claims 1-17 have been considered but are moot in view of the new ground(s) of rejection that was necessitated by Applicant's amendment.
Citation of pertinent Prior Arts
10. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: see PTO-892 Notice of References Cited.
Conclusion
11. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Fekadeselassie Girma whose telephone number is (571)270-5886. The examiner can normally be reached on M-F 8:30am - 5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Davetta W. Goins can be reached on (571) 272-2957. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Fekadeselassie Girma/
Primary Examiner, Art Unit 2689