DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on December 18, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Double Patenting
A rejection based on double patenting of the “same invention” type finds its support in the language of 35 U.S.C. 101 which states that “whoever invents or discovers any new and useful process... may obtain a patent therefor...” (Emphasis added). Thus, the term “same invention,” in this context, means an invention drawn to identical subject matter. See Miller v. Eagle Mfg. Co., 151 U.S. 186 (1894); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Ockert, 245 F.2d 467, 114 USPQ 330 (CCPA 1957).
A statutory type (35 U.S.C. 101) double patenting rejection can be overcome by canceling or amending the claims that are directed to the same invention so they are no longer coextensive in scope. The filing of a terminal disclaimer cannot overcome a double patenting rejection based upon 35 U.S.C. 101.
Claims 1-20 are provisionally rejected under 35 U.S.C. 101 as claiming the same invention as that of claims 1-20 of copending Application No. 18/959,230 (reference application). This is a provisional statutory double patenting rejection since the claims directed to the same invention have not in fact been patented.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-2, 4, 7-12, 14, and 17-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Amir et al. (US 2023/0245541).
As per claim 1, Amir et al. disclose an apparatus (100, figures 1-2) comprising:
an image capture device (camera, 208, paragraph 0060); and
one or more processors (CPU, 202, paragraph 0059) coupled with memory (210) and configured to:
detect, using the image capture device, an entity (target object, 104) within an area (the CPU 202 is configured to control the camera 208 to capture an image (represented by image data) of the environment; the camera 208 is preferably a visible light camera in that it senses visible light, paragraphs 0071 and 0073);
determine that the entity corresponds to one or more criteria (the CPU 202 determines that an intruder is present in an area of interest, and the process 400 proceeds to step S408, paragraph 0104);
identify a first sound corresponding to the one or more criteria and a state of the area; play, by a speaker device, the first sound to deter the entity from perpetrating an event within the area ("Additionally or alternatively, at step S408 the CPU 202 controls the speaker 218 to emit audio as an audible deterrent to the intruder. The audio emitted by the speaker 218 may be a non-speech sound e.g. a warning siren. Additionally or alternatively the audio emitted by the speaker 218 may be an audible speech message e.g. "this is private property, please leave the area immediately!", paragraph 0112);
identify a second sound corresponding to the one or more criteria, the state of the area, and the first sound; and play, by the speaker device, the second sound to deter the entity from perpetrating the event. ("Taking the example where at step S408 the CPU 202 controls the speaker 218 to emit an audible speech message, at step S412 the CPU 202 may control the speaker to increase the volume of the emitted audible speech message and/or to output a different audible speech message. Alternatively or additionally, at step S412, the CPU 202 may control the speaker 218 to emit a non-speech sound e.g. a warning siren. Alternatively or additionally, at step S412 the CPU 202 may control the lighting device 216 to emit light as a visual deterrent to the intruder in a manner as described above with respect to the deterrent output at step S408.", figure 4 and paragraph 0137).
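For illustration only, the escalating two-sound deterrent flow quoted above from Amir et al. (the step S408 and S412 behavior mapped to the first and second sounds of claim 1) might be sketched as follows; all class, function, and sound names are hypothetical and do not appear in the reference or the claims.

```python
# A minimal, hypothetical sketch of the two-stage deterrent flow described in
# Amir et al. at steps S408 and S412; none of these names come from the reference.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Detection:
    meets_criteria: bool   # entity corresponds to the one or more criteria
    still_present: bool    # entity remains after the first sound is played


def select_sound(area_state: str, previous: Optional[str] = None) -> str:
    """Choose a deterrent sound from the area state and any previously played sound."""
    if previous is None:
        # First sound (step S408 analogue): e.g. a spoken warning or a siren.
        return "speech_warning" if area_state == "occupied" else "siren"
    # Second sound (step S412 analogue): escalate or switch relative to the first.
    return "siren" if previous == "speech_warning" else "louder_speech_warning"


def deter(detection: Detection, area_state: str) -> List[str]:
    """Return, in order, the sounds a speaker device would play."""
    played: List[str] = []
    if not detection.meets_criteria:
        return played
    first = select_sound(area_state)
    played.append(first)
    if detection.still_present:
        played.append(select_sound(area_state, previous=first))
    return played


# Example: an intruder who remains after the first warning triggers both sounds.
print(deter(Detection(meets_criteria=True, still_present=True), "unoccupied"))
# -> ['siren', 'louder_speech_warning']
```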
As per claim 2, Amir et al. disclose the one or more processors are configured to:
detect the entity within the area by detecting at least one of physical characteristics of the entity (the object has moved towards a second predetermined area) or behavioral characteristics of the entity (the deterrent output at step S408 may have been based on the object 104 being located in a first predetermined area 502 (e.g., a first region defined by a first virtual fence) within a field of view 500 of the active reflected wave detector 206, and the second predetermined condition may comprise that the object has moved towards a second predetermined area 504 (e.g., a second region defined by a second virtual fence) within the field of view 500 of the active reflected wave detector 206. If this example second predetermined condition is met, this indicates that the intruder has not moved away from the area of interest in a desired direction despite the device 102 outputting the deterrent at step S408 and has instead moved in a direction towards a sensitive area that is more of a security threat (e.g., they have got closer to a building), paragraph 0125).
As per claim 4, Amir et al. disclose the one or more processors being configured to generate a profile corresponding to the determined one or more criteria of the entity (The second predetermined condition may be based at least on kinetic information associated with the person e.g. their speed of travel. For example the second predetermined condition may be that the speed of the person does not exceed a predetermined threshold. If this example second predetermined condition is met, this may indicate that the intruder is moving out of the area of interest but are doing it too slowly, or they are simply not moving such that they are staying at the same location. The speed information may be provided by the tracking module referred to above, paragraph 0129).
As per claim 7, Amir et al. disclose the one or more processors are configured to determine, responsive to playing the first sound, one or more second criteria of the entity; and identify the second sound corresponding to the one or more criteria, the state of the area, and the one or more second criteria. ("Taking the example where at step S408 the CPU 202 controls the speaker 218 to emit an audible speech message, at step S412 the CPU 202 may control the speaker to increase the volume of the emitted audible speech message and/or to output a different audible speech message. Alternatively or additionally, at step S412, the CPU 202 may control the speaker 218 to emit a non-speech sound e.g. a warning siren. Alternatively or additionally, at step S412 the CPU 202 may control the lighting device 216 to emit light as a visual deterrent to the intruder in a manner as described above with respect to the deterrent output at step S408.", figure 4 and paragraph 0137).
As per claim 8, Amir et al. disclose the one or more processors are configured to: identify a third sound corresponding to the one or more criteria, the state of the area, the first sound, and the second sound; and play, by a second speaker device located separately from the speaker device, the third sound to deter the entity from perpetrating the event ("Whilst not shown in FIG. 4, after a predetermined time period has elapsed after output of the deterrent at step S412, the CPU 202 may process further measured wave reflection data accrued by the active reflected wave detector 206 to determine that an object has moved from the deterrent zone towards the device 102 into an alarm zone (which in this illustrative example is the inner most zone located closest to the device 102). In response to this determination the CPU 202 controls the speaker 218 to emit audio in the form of an alarm siren", paragraph 0148).
As per claim 9, Amir et al. disclose the one or more processors being configured to:
capture, by the image capture device (208), an image of the entity in the area (paragraph 0071); and
transmit the image of the entity to a client device (110, 114) associated with the area (paragraph 0114).
As per claim 10, Amir et al. disclose the first sound and the second sound being played in at least partial concurrence (figure 4; if both the first and second predetermined conditions are met close enough together, the output device will output both deterrents concurrently).
As per claims 11-12, 14, and 17-20, the method claims 11-12, 14, and 17-20 are essentially the same in scope as apparatus claims 1-2, 4, and 7-10 above and are rejected similarly.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 5-6 and 15-16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Amir et al. (US 2023/0245541) in view of Harel (US 2021/0201640).
As per claim 5, Amir et al. disclose the instant claimed invention except for the state being one or more of a time of day, a resident within the area, or a holiday. Harel discloses a perimeter protection apparatus (200) comprising a plurality of cameras (211, 212, 213, 214) and a vetting process that attempts face recognition and, if successful, searches previously trained data for a match to one of the homeowners, previously captured family members, hired help, or any other previously authorized person whose face is part of a "white listed" group of people (paragraph 0041). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to utilize the apparatus as taught by Harel in a system as disclosed by Amir et al. for the purpose of recognizing the homeowner's face as a resident within the area in order to prevent a false alarm.
As per claim 6, Harel discloses identifying the first sound corresponding to the one or more criteria and a state of the area includes providing the state and the one or more criteria as inputs to a machine learning model to generate the first sound ("The present invention, in embodiments thereof, provides a system and methods which may implement a combination of computer vision tools, machine learning techniques, canned messages announcements over a loudspeaker, speech recognition based dialoging, and interactive procedures, all together comprising an AI system which facilitates an automatic assessment of people who enter or leave a monitored perimeter", paragraph 0015).
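For illustration only, the claim 6 limitation of providing the state and the one or more criteria as inputs to a machine learning model that generates the first sound could be sketched as follows; the model class, its deterministic stand-in scoring, and all names are hypothetical and are not drawn from Harel or the claims.

```python
# A minimal, hypothetical sketch: the area state and entity criteria are given as
# inputs to a trained model that outputs the first deterrent sound. The hash-based
# scoring below merely stands in for a learned mapping; it is not from the reference.
from typing import Sequence


class SoundSelectionModel:
    """Stand-in for a trained classifier mapping (state, criteria) to a sound label."""

    def __init__(self, candidate_sounds: Sequence[str]):
        self.candidate_sounds = list(candidate_sounds)

    def predict(self, state: str, criteria: Sequence[str]) -> str:
        # A real model would score each candidate using learned parameters;
        # a deterministic hash of the inputs is used here purely for illustration.
        index = hash((state, tuple(criteria))) % len(self.candidate_sounds)
        return self.candidate_sounds[index]


model = SoundSelectionModel(["speech_warning", "siren", "dog_bark"])
first_sound = model.predict(state="nighttime_unoccupied",
                            criteria=["unrecognized_face", "loitering"])
print(first_sound)  # one of the candidate sounds, chosen from the state and criteria
```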
As per claims 15-16, the method claims 15-16 are essentially the same in scope as apparatus claims 5-6 above and are rejected similarly.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAI T. NGUYEN whose telephone number is (571)272-2961. The examiner can normally be reached Mon-Fri: 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Quan-Zhen Wang, can be reached at 571-272-3114. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TAI T NGUYEN/Primary Examiner, Art Unit 2685 February 3, 2026