DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed 12/23/2025 has been entered.
Claim 8 is cancelled.
Claims 1 and 12-13 are amended.
Claims 1-7 and 9-13 are pending.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7 and 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over Chen (WO 2016/011433 A2) in view of Peterson (US 2011/0242305 A1).
Regarding claim 1, Chen teaches A system for controlling a sound-based sensing of subjects in a space[0033], the sensing being performed by a network of network devices, a plurality of the network devices each having a sound generating unit and a plurality of the network devices each having a sound detecting unit, the network devices being distributed in the space, the system comprising[0033 has network of multiple wireless transmitters and receivers]:
a sound generation controlling unit configured to control the at least one sound generating unit to generate a predetermined sound and configured to control the plurality of sound detecting units to detect the sound after a multi-channel propagation through at least a portion of the space and to generate a sensing signal indicative of the detected sound[0033 has network of wireless transmitters and receivers], the at least one sound generating unit being located at a position in the space different from the position of the sound detecting units in the space[0033, Fig 2A, 2B has wireless device #208 transmitting the signal for second device #210],
a subject determination unit configured to determine a status and/or position of at least one subject in the space based on the plurality of sensing signals and[0033, 0034, 0059 has status of objects];
a baseline providing unit configured to provide a baseline indicative of sensing signals detected by the sound detecting units with respect to at least one predetermined status and/or position of the at least one subject in the space, the subject determination unit being adapted to determine a status and/or position of the at least one subject further based on the provided baseline[0034 has frequencies and frequency comparison meaning baseline and comparison for identification];
wherein the sound generation controlling unit is adapted to control each of the sound generating units of the network devices to generate a predetermined sound subsequent to each other, and the sound detecting units of all other network devices to detect, respectively, the subsequently generated sounds[0033 has control over various receivers and communication in the network].
While Chen teaches wireless signals, it does not explicitly disclose sound signals.
Peterson teaches sound-based networked signals [Abstract; 0021, 0027, 0028 has acoustic communication].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the network devices of Chen with the acoustic-based signals of Peterson in order to have an acoustic-based network as an alternative to radio.
Regarding claim 12, Chen teaches A method for controlling a sound based sensing of subjects in a space[0033], the sensing being performed by a network of network devices, a plurality of the network devices each having a sound generating unit, and a plurality of the network devices each having a sound detecting unit, the network devices being distributed in the space, the method comprising[0033: has network of wireless transmitters and receivers]:
controlling each of the sound generating units to generate a predetermined sound subsequent to each other and controlling the sound detecting units of all other network devices to detect[0033 has network of wireless transmitters and receivers], respectively, the subsequently generated sounds after a multi-channel propagation through at least a portion of the space and to generate a sensing signal indicative of the detected sound, the at least one sound generating unit being located at a position in the space different from the position of the sound detecting units in the space[0033, Fig 2A, 2B has wireless device #208 transmitting the signal for second device #210], and
determining a status and/or position of at least one subject in the space based on the plurality of sensing signals and[0033, 0034, 0059 has status of objects];
providing a baseline indicative of sensing signals detected by the sound detecting units with respect to at least one predetermined status and/or position of the at least one subject in the space[0034 has frequencies and frequency comparison meaning baseline and comparison for identification], the subject determination unit being adapted to determine a status and/or position of the at least one subject further based on the provided baseline[0033 has control over various receivers and communication in the network; 0033, 0034, 0059 has status of objects].
While Chen teaches wireless signals, it does not explicitly disclose sound signals.
Peterson teaches sound-based networked signals [Abstract; 0021, 0027, 0028 has acoustic communication].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the network devices of Chen with the acoustic-based signals of Peterson in order to have an acoustic-based network as an alternative to radio.
Regarding claim 2, Chen, as modified, teaches wherein the status and/or position is determined based on i) the signal strength of the plurality of detected sensing signals[0034 has signal strength] and/or based on ii) channel state information derived from the plurality of detected sensing signals and the predetermined generated sound. [0034 has channel state information]
Regarding claim 3, Chen, as modified, teaches The system according to claim 1 wherein the sound generation controlling unit is adapted such that the sound generating unit generates the predetermined sound as a directed sound, wherein the directed sound is directed to the at least one subject. [0302 has directional antenna].
Peterson also teaches wherein the sound generation controlling unit is adapted such that the sound generating unit generates the predetermined sound as a directed sound, wherein the directed sound is directed to the at least one subject. [Fig 31; 0183 has directional sound]
Regarding claim 4, Chen, as modified, teaches wherein the sound generation controlling unit is adapted such that the sound generating unit generates the predetermined sound as an omnidirectional sound. [0302 has omnidirectional antenna]
Peterson also teaches wherein the sound generation controlling unit is adapted such that the sound generating unit generates the predetermined sound as an omnidirectional sound. [0257 has omnidirectional pattern]
Regarding claim 5, Chen, as modified, teaches wherein each sound detecting unit comprises a sound detection array such that the plurality of sensing signals are each indicative of a direction from which the detected sound has reached the detection array, wherein the subject determination unit is adapted to determine the status and/or position of the subject further based on the direction information provided by each sensing signal. [Fig 32 and 0122, 0183, 0257 has TOA with direction information]
Regarding claim 6, Chen, as modified, teaches wherein each network device comprises a sound detecting unit and a sound generating unit, wherein the sound generation controlling unit is adapted to control the sound generating units of the network devices to generate a predetermined sound and the sound detecting units of all other network devices to detect the generated sounds such that for each sound generated by a different sound generating unit a plurality of detected sensing signals are generated, wherein the status and/or position of the subject is determined based on each of the plurality of audio sensing signals. [00301 has network of devices working together].
Peterson also teaches wherein each network device comprises a sound detecting unit and a sound generating unit, wherein the sound generation controlling unit is adapted to control the sound generating units of the network devices to generate a predetermined sound and the sound detecting units of all other network devices to detect the generated sounds such that for each sound generated by a different sound generating unit a plurality of detected sensing signals are generated, wherein the status and/or position of the subject is determined based on each of the plurality of audio sensing signals. [0112, 0123, 0163 has multiple sound generation points and sensing points; See also Fig 32]
Regarding claim 7, Chen, as modified, teaches wherein the sound generation controlling unit is adapted to control the sound generating units of the network devices to subsequently generate different predetermined sounds and the sound detecting units of all other network devices to detect the subsequently generated different sounds. [0034, 00127 has different frequencies]
Regarding claim 9, Chen, as modified, teaches The system according to claim 1 wherein the subject determination unit is adapted to determine an open or closed status of a door, window and/or furniture, and/or to determine a position of a furniture and/or a living being, and/or to determine a breathing rate, a body movement, a gait, a gesture, a vital sign and/or activity of living being present in the space. [0051 and 00334 have open/closed door status; 0333 has position of object, 0007, 00145, 00333, 0341-0343 has gesture recognition; 00343 has living being monitoring].
Peterson also teaches The system according to claim 1 wherein the subject determination unit is adapted to determine an open or closed status of a door, window and/or furniture, and/or to determine a position of a furniture and/or a living being, and/or to determine a breathing rate, a body movement, a gait, a gesture, a vital sign and/or activity of living being present in the space. [Claim 17 has object movement; see also 0111, 0165; 0001-0011 has gesture. See also claims 9-12]
Regarding claim 10, Chen, as modified, teaches wherein at least one of the network devices comprises a lighting functionality. [Claim 80 as well has intended use for a lighting device]
Regarding claim 11, Chen, as modified, teaches wherein at least one network device comprises a sound generating unit and a plurality of the network devices comprises a sound detecting unit[00301 has network of devices working together], and a system for controlling a sound based sensing of objects according to claim 1[See Claim 1 rejection as above].
Peterson also teaches wherein at least one network device comprises a sound generating unit and a plurality of the network devices comprises a sound detecting unit[0112, 0123, 0163 has multiple sound generation points and sensing points; See also Fig 32], and a system for controlling a sound based sensing of objects according to claim 1 [See Claim 1 rejection as above].
Regarding claim 13, Chen, as modified, teaches a non-transitory computer readable medium comprising program code that executes the method according to claim 12 when run on a processor. [0008, 00311 has computer program].
Response to Arguments
Applicant's arguments filed 12/23/2025 have been fully considered but they are not persuasive.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant is reading the prior art overly narrowly with regard to the arguments on pages 7-8 of the Remarks. Applicant's invention concerns a system for sensing objects using sound, together with generation, communication, and control using sound. The concepts of sensing objects and of generation and communication using sound are basic concepts in the field of ultrasonics known to a person of ordinary skill. It is the combination of Chen and Peterson that renders the claims obvious.
Other pertinent references such as Shin (US 20210239831 A1)[Abstract, 0170-0171 has ultrasonic signals to detect visitor] or Grabowski (US 20220392327 A1)[Abstract, Claim 1 has ultrasonic sensing and communication regarding door being open] also show ultrasonic communication among devices for the purpose of sensing objects.
Applicant's remaining arguments amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. The rejections are maintained, and no allowable subject matter can be identified at this time.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VIKAS NMN ATMAKURI whose telephone number is (571)272-5080. The examiner can normally be reached Monday-Friday 7:30am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Isam Alsomiri can be reached at (571)272-6970. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VIKAS ATMAKURI/Examiner, Art Unit 3645
/JAMES R HULKA/Primary Examiner, Art Unit 3645