Prosecution Insights
Last updated: April 19, 2026
Application No. 18/983,699

METHODS AND APPARATUS FOR DETECTING PETS

Non-Final OA (§103, §DP)

Filed: Dec 17, 2024
Examiner: ABDOU TCHOUSSOU, BOUBACAR
Art Unit: 2482
Tech Center: 2400 — Computer Networks
Assignee: SimpliSafe Inc.
OA Round: 1 (Non-Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 67% (294 granted / 436 resolved; +9.4% vs. TC average; above average)
Interview Lift: +14.2% (moderate) among resolved cases with interview
Avg Prosecution: 2y 5m (typical timeline)
Total Applications: 457 across all art units (21 currently pending)

Statute-Specific Performance

§101: 4.1% (-35.9% vs. TC avg)
§102: 24.1% (-15.9% vs. TC avg)
§103: 52.1% (+12.1% vs. TC avg)
§112: 15.2% (-24.8% vs. TC avg)

Tech Center average estimates shown for comparison. Based on career data from 436 resolved cases.

Office Action

Rejections: §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 8-16 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 11-15 of U.S. Patent No. 12211283 and claims 11-14 of U.S. Patent No. 11922642. Although the claims at issue are not identical, they are not patentably distinct from each other because the patent claims include all of the limitations of the instant application claims, respectively. The patent claims also include additional limitations. Hence, the instant application claims are generic to the species of invention covered by the respective patent claims. As such, the instant application claims are anticipated by the patent claims and are therefore not patentably distinct therefrom. (See Eli Lilly and Co. v. Barr Laboratories Inc., 58 USPQ2d 1869: "a later genus claim limitation is anticipated by, and therefore not patentably distinct from, an earlier species claim"; In re Goodman, 29 USPQ2d 2010: "Thus, the generic invention is 'anticipated' by the species of the patented invention," and the instant "application claims are generic to species of invention covered by the patent claim, and since without terminal disclaimer, extant species claims preclude issuance of generic application claims.")

Claims 1-7 and 17-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12211283, claims 1-15 of U.S. Patent No. 11922642, and claims 1-21 of U.S. Patent No. 12450756, in view of Zhang et al. (US 20230074386). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of this application are rendered obvious by the claims of the patents. The patent claims recite all limitations found in the instant claims except "determining whether a maximum number of attempts to classify the moving object has been reached; and deactivating the image capture device after reaching the maximum number of attempts." However, Zhang teaches determining whether a maximum number of attempts to classify the moving object has been reached (see [0034]: the identity recognition end condition may include ... time consumption of the current round of identity recognition reaches a preset duration, and the number of the current round of identity recognitions reaches a preset number of times) and deactivating the image capture device after reaching the maximum number of attempts (see [0034]: after the identity recognition end condition is met, the camera is turned off to stop image capture).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the instant claims using Zhang's teachings to determine whether a maximum number of attempts to classify the moving object has been reached, and to deactivate the image capture device after reaching the maximum number of attempts, in order to save computing power and reduce power consumption (Zhang; [0030]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20190130580) in view of Solh et al. (US 11257226), further in view of Zhang et al. (US 20230074386).

As to claim 1, Chen discloses a method comprising: acquiring, with an image capture device, an image of a scene (see [0136], video analytics system 100 receives video frames 102 from a video source 130); processing the image using first and second bounding boxes, the first bounding box being positioned about a moving object depicted in the image and the second bounding box being positioned about a pet depicted in the image (FIG. 13, blob bounding boxes 1324 and detector bounding boxes 1323; see [0194]; see FIGS. 15A-15B); and classifying, based on overlap of the first and second bounding boxes, the moving object as the pet (FIG. 13, bounding box aggregation engine 1325 and object tracking system 1206; see FIGS. 8A-9C and [0197]).

Chen fails to explicitly disclose that the image of the scene is acquired in response to detecting motion; determining whether a maximum number of attempts to classify the moving object has been reached; and deactivating the image capture device after reaching the maximum number of attempts. However, Solh teaches, in response to detecting motion, acquiring, with an image capture device, an image of a scene (FIG. 8, B802-B804). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Chen using Solh's teachings to acquire, with an image capture device, an image of a scene in response to detecting motion, in order to improve battery life for battery-powered devices (Solh; col. 2, lines 9-40; col. 5, lines 15-20).

The combination of Chen and Solh fails to explicitly disclose determining whether a maximum number of attempts to classify the moving object has been reached, and deactivating the image capture device after reaching the maximum number of attempts. However, Zhang teaches determining whether a maximum number of attempts to classify the moving object has been reached (see [0034]: the identity recognition end condition may include ... time consumption of the current round of identity recognition reaches a preset duration, and the number of the current round of identity recognitions reaches a preset number of times) and deactivating the image capture device after reaching the maximum number of attempts (see [0034]: after the identity recognition end condition is met, the camera is turned off to stop image capture).
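The claim 1 mapping turns on a simple geometric test: the moving object is classified as the pet when the motion (blob) bounding box and the pet-detector bounding box overlap. A minimal sketch of that test, assuming axis-aligned (x1, y1, x2, y2) boxes; the overlap-ratio criterion and the 0.5 threshold are illustrative assumptions, since neither the claims nor the cited references fix a specific measure here:

```python
def box_area(box):
    # Area of an axis-aligned (x1, y1, x2, y2) box.
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def overlap_area(a, b):
    # Intersection area of two boxes; zero when they do not overlap.
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def classify_as_pet(motion_box, pet_box, threshold=0.5):
    # Classify the moving object as the pet when the motion box and
    # the detector box overlap by at least `threshold` of the smaller
    # box's area (one of several reasonable overlap criteria).
    inter = overlap_area(motion_box, pet_box)
    smaller = min(box_area(motion_box), box_area(pet_box))
    return smaller > 0 and inter / smaller >= threshold
```

Measuring overlap relative to the smaller box (rather than intersection-over-union) makes the test robust when a tight detector box sits inside a larger motion blob; either choice would satisfy the "at least partially overlap" language.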
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combination of Chen and Solh using Zhang's teachings to determine whether a maximum number of attempts to classify the moving object has been reached, and to deactivate the image capture device after reaching the maximum number of attempts, in order to save computing power and reduce power consumption (Zhang; [0030]).

As to claim 2, the combination of Chen, Solh, and Zhang further discloses detecting the motion in the scene using a motion detector (Solh; col. 34, lines 1-6).

As to claim 3, the combination further discloses wherein the image is a first image (Chen; FIG. 12, video frames 1202), and further comprising: acquiring a second image of the scene using the image capture device (Chen; FIG. 12, video frames 1202); applying a motion detection process to the first and second images to generate the first bounding box (Chen; FIG. 13 and [0215], blob detection system 1204); and applying an object detection process to the first image to generate the second bounding box (Chen; FIG. 13, deep learning system 1208).

As to claim 4, Chen as modified by Solh and Zhang fails to explicitly disclose converting the first and second images to greyscale to produce first and second greyscale images; comparing pixel intensities between the first and second greyscale images; and detecting the moving object based on differences in at least some of the pixel intensities between the first and second greyscale images exceeding a predetermined threshold value. However, Solh teaches converting the first and second images to greyscale to produce first and second greyscale images (col. 43, lines 19-23; col. 6, lines 15-20; col. 27, lines 40-44); comparing pixel intensities between the first and second greyscale images (col. 43, lines 23-36); and detecting the moving object based on differences in at least some of the pixel intensities exceeding a predetermined threshold value (FIG. 1B, relevant motion; see col. 10, lines 9-12 and 21-22; see also col. 43, lines 36-43; col. 38, lines 36-41; col. 39, lines 11-20 and 33-46; col. 5, lines 44-65). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to further modify Chen in the combination using Solh's teachings to include these steps, in order to provide systems capable of performing motion classification on recorded video on devices with limited computing power and to reduce the number of false-positive motion event detections, which may in turn reduce the occurrence of irrelevant notifications, unnecessary video recording and/or uploading, and/or the amount of power consumed by motion-triggered actions (Solh; col. 2, lines 9-40; col. 5, lines 15-20).

As to claim 5, the combination of Chen, Solh, and Zhang further discloses: acquiring, with the image capture device and based on determining that the maximum number has not yet been reached (Zhang; [0034]), one or more additional images of the scene (Chen; FIG. 12, video frames 1202); and processing the one or more additional images of the scene to confirm classification of the moving object as the pet (Chen; FIG. 13, bounding box aggregation engine 1325 and object tracking system 1206; see FIGS. 8A-9C and [0197]).
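Claim 4's motion-detection steps (greyscale conversion, pixel-intensity comparison, thresholding) can be sketched as follows. The Rec. 601 luma weights and both threshold values are conventional assumptions for illustration; the claim recites only a "predetermined threshold value" and does not fix any of them:

```python
def to_greyscale(rgb_image):
    # Convert an RGB image (nested lists of (r, g, b) values) to
    # greyscale using Rec. 601 luma weights -- one common choice;
    # the claim only requires conversion to greyscale.
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def detect_motion(first_gray, second_gray, pixel_threshold=25, min_changed=50):
    # Count pixels whose intensity difference between the two
    # greyscale frames exceeds the predetermined threshold, and
    # report motion when enough pixels changed.
    changed = sum(
        1
        for row_a, row_b in zip(first_gray, second_gray)
        for pa, pb in zip(row_a, row_b)
        if abs(pa - pb) > pixel_threshold
    )
    return changed >= min_changed
```

In practice this would run on NumPy arrays or camera frame buffers rather than nested lists, but the per-pixel logic is the same.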
As to claim 6, the combination of Chen, Solh, and Zhang further discloses wherein processing the plurality of additional images comprises: applying a motion detection process to the plurality of additional images to confirm detection of the moving object (Chen; FIG. 13 and [0215], blob detection system 1204); and applying an object detection process to at least one of the plurality of additional images to confirm identification of the pet (Chen; FIG. 13, deep learning system 1208).

As to claim 7, the combination further discloses wherein applying the object detection process comprises applying an artificial neural network (Chen; [0194]).

As to claim 17, Chen discloses a device (FIG. 12) comprising: a camera (FIG. 12 and [0136], video source 1230); at least one processor (see [0130]); and a non-transitory data storage device storing instructions that, when executed by the at least one processor, configure the device to (see [0130]): activate the camera to acquire a plurality of images (see [0136]); apply a motion detection process to the plurality of images to produce a representation of a first bounding box that identifies a location, in at least one image of the plurality of images, of detected motion (FIG. 13, blob detection system 1204 and blob bounding boxes 1324; FIGS. 15A-15B); apply an object detection process to the at least one image to recognize a pet depicted in the at least one image, wherein applying the object detection process includes producing a representation of a second bounding box that identifies a location of the pet in the at least one image (FIG. 13, deep learning system 1208 and detector bounding boxes 1323; see [0194]); and identify a moving pet by pairing the detected motion with the pet based on determining that the first bounding box and the second bounding box at least partially overlap (FIG. 13, bounding box aggregation engine 1325 and object tracking system 1206; see FIGS. 8A-9C and [0197]; FIGS. 15A-15B).

Chen fails to explicitly disclose a motion detector; activating, based on detecting a motion event with the motion detector, the camera to acquire a plurality of images; determining whether a maximum number of attempts to identify the moving pet has been reached; and deactivating the camera based at least in part on reaching the maximum number of attempts. However, Solh teaches a motion detector (FIG. 8, B802) and activating, based on detecting a motion event with the motion detector, the camera to acquire a plurality of images (FIG. 8, B804). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Chen using Solh's teachings to include a motion detector and to activate the camera based on detecting a motion event with the motion detector, in order to improve battery life for battery-powered devices (Solh; col. 2, lines 9-40; col. 5, lines 15-20).

The combination of Chen and Solh fails to explicitly disclose determining whether a maximum number of attempts to identify the moving pet has been reached, and deactivating the camera based at least in part on reaching the maximum number of attempts. However, Zhang teaches determining whether a maximum number of attempts to identify the moving pet has been reached (see [0034]: the identity recognition end condition may include ... time consumption of the current round of identity recognition reaches a preset duration, and the number of the current round of identity recognitions reaches a preset number of times) and deactivating the camera based at least in part on reaching the maximum number of attempts (see [0034]: after the identity recognition end condition is met, the camera is turned off to stop image capture).
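The claim 17 control flow (wake the camera on motion, retry classification up to a limit, then deactivate) can be sketched as below. `camera` and `classify` are hypothetical stand-ins for the device's actual components; the `max_attempts` default is illustrative, since the claims recite only "a maximum number of attempts":

```python
def run_detection_cycle(camera, classify, max_attempts=5):
    # Sketch of the claimed loop: the camera is assumed already
    # triggered by a motion event; classification is attempted up to
    # `max_attempts` times, and the camera is deactivated once a pet
    # is confirmed or the limit is reached (power saving).
    camera.activate()
    try:
        for _ in range(max_attempts):
            image = camera.capture()
            if classify(image):
                return True   # moving object confirmed as the pet
        return False          # maximum number of attempts reached
    finally:
        camera.deactivate()   # deactivate in every exit path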
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combination of Chen and Solh using Zhang's teachings to determine whether a maximum number of attempts to identify the moving pet has been reached, and to deactivate the camera after reaching the maximum number of attempts, in order to save computing power and reduce power consumption (Zhang; [0030]).

As to claim 19, the combination of Chen, Solh, and Zhang further discloses wherein the motion detector is a passive infrared sensor (Solh; col. 19, lines 49-50).

As to claim 20, the combination further discloses a battery coupled to the motion detector, the camera, the non-transitory data storage device, and the at least one processor (Solh; FIG. 3, battery 342).

Claims 8-15 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20190130580) in view of Wu et al. (US 20210329193).

As to claim 8, Chen discloses a device (FIG. 12) comprising: an image capture device (FIG. 12, video source 1230); at least one processor (see [0130]); and a non-transitory data storage device storing instructions that, when executed by the at least one processor, configure the device to (see [0130]): generate a representation of a first bounding box that identifies a location in an image of a moving object depicted in the image (FIG. 13, blob bounding boxes 1324; see FIGS. 15A-15B); generate a representation of a second bounding box that identifies a location in the image of a pet depicted in the image (FIG. 13, detector bounding boxes 1323; see [0194]; see FIGS. 15A-15B); classify, based on overlap of the first and second bounding boxes, the moving object as the pet (FIG. 13, bounding box aggregation engine 1325 and object tracking system 1206; see FIGS. 8A-9C and [0197]); control the image capture device to acquire a plurality of additional images (FIG. 12, video frames 1202); and process the plurality of additional images to confirm classification of the moving object as the pet (FIG. 13, bounding box aggregation engine 1325 and object tracking system 1206; see FIGS. 8A-9C and [0197]).

Chen fails to explicitly disclose deactivating, based on confirming classification of the moving object as the pet, the image capture device. However, Wu teaches this feature (see [0093]). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Chen using Wu's teachings to deactivate the image capture device based on confirming classification of the moving object as the pet, in order to reduce power consumption (Wu; [0057], [0070]).

As to claim 9, the combination of Chen and Wu further discloses wherein the non-transitory data storage device further stores instructions that, when executed by the at least one processor, configure the device to acquire the image using the image capture device (Chen; see [0209]).

As to claim 10, Chen as modified by Wu fails to explicitly disclose a motion detector, and instructions that, when executed by the at least one processor, configure the device to activate the image capture device to acquire the image in response to detection of motion by the motion detector. However, Wu teaches a motion detector (FIG. 6, passive detector 62) and such instructions (FIG. 7, S206).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to further modify Chen using Wu's teachings to include a motion detector and instructions that activate the image capture device to acquire the image in response to detection of motion by the motion detector, in order to reduce power consumption (Wu; [0057], [0070]).

As to claim 11, the combination of Chen and Wu further discloses wherein the motion detector is a passive infrared sensor (Wu; FIG. 6, passive detector 62).

As to claim 12, the combination further discloses wherein the image is a first image (Chen; FIG. 12, video frames 1202), and wherein the non-transitory data storage device further stores instructions that, when executed by the at least one processor, configure the device to: apply a motion detection process to the first image and a second image to generate the representation of the first bounding box (Chen; FIG. 13 and [0215], blob detection system 1204); and apply an object detection process to the first image to generate the representation of the second bounding box (Chen; FIG. 13, deep learning system 1208).

As to claim 13, the combination further discloses wherein the non-transitory data storage device further stores instructions that, when executed by the at least one processor, configure the device to acquire the first and second images using the image capture device (Chen; see [0209]).
As to claim 14, Chen as modified by Wu fails to explicitly disclose a motion detector, and instructions that, when executed by the at least one processor, configure the device to activate, in response to detecting motion with the motion detector, the image capture device to acquire the first and second images. However, Wu teaches a motion detector (FIG. 6, passive detector 62) and such instructions (FIG. 7, S206). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to further modify Chen using Wu's teachings to include these features in order to reduce power consumption (Wu; [0057], [0070]).

As to claim 15, the combination of Chen and Wu further discloses wherein the non-transitory data storage device further stores instructions that, when executed by the at least one processor, cause the device to: operate in a low-power mode of operation in which the image capture device is deactivated (Wu; FIG. 7, S204 and [0090], sleep mode); detect the motion while operating in the low-power mode of operation (Wu; FIG. 7, S202); and, in response to detecting the motion, operate in a higher-power mode of operation in which the image capture device is activated (Wu; FIG. 7, S206 and [0090], wakeup mode).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (US 20190130580) in view of Wu et al. (US 20210329193), further in view of Solh et al. (US 11257226).

As to claim 16, the combination of Chen and Wu fails to explicitly disclose a battery coupled to the motion detector and to the image capture device and configured to supply operating power to the motion detector and to the image capture device. However, Solh teaches such a battery (FIG. 3, battery 342). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the combination of Chen and Wu using Solh's teachings to include this battery in order to provide a battery-powered device with improved battery life (Solh; col. 5, lines 15-21).

Allowable Subject Matter

Claim 18 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BOUBACAR ABDOU TCHOUSSOU, whose telephone number is (571) 272-7625. The examiner can normally be reached M-F, 8am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chris Kelley, can be reached at (571) 272-7331.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BOUBACAR ABDOU TCHOUSSOU/
Primary Examiner, Art Unit 2482

Prosecution Timeline

Dec 17, 2024
Application Filed
Feb 11, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604072: CAMERA AND INFRARED SENSOR SHUTTER
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12587755: VEHICLE-MOUNTED CONTROL DEVICE, AND THREE-DIMENSIONAL INFORMATION ACQUISITION METHOD
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12587724: DIGITALLY ENHANCED MICROSCOPY FOR MULTIPLEXED HISTOLOGY
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12574509: METHOD AND APPARATUS FOR ENCODING/DECODING VIDEO AND METHOD FOR TRANSMITTING BITSTREAM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12574476: VEHICULAR VISION SYSTEM
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
With Interview: 82% (+14.2%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 436 resolved cases by this examiner. Grant probability derived from career allow rate.
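As a sanity check, the headline percentages appear to be simple functions of the career counts shown above (294 granted of 436 resolved). The rounding and the additive interview lift are assumptions about how the page computes them, not documented formulas:

```python
granted, resolved = 294, 436
allow_rate = 100 * granted / resolved      # career allow rate, ~67.4%
interview_lift = 14.2                      # reported lift, percentage points

print(round(allow_rate))                   # 67 -> "Grant Probability"
print(round(allow_rate + interview_lift))  # 82 -> "With Interview"
```

If the lift were multiplicative rather than additive the "With Interview" figure would differ slightly, so the additive reading is only the simplest fit to the displayed numbers.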
