Prosecution Insights
Last updated: April 19, 2026
Application No. 18/156,529

MULTI RESOLUTION MOTION DETECTION

Status: Non-Final OA (§103)
Filed: Jan 19, 2023
Examiner: ISLAM, MEHRAZUL
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: SimpliSafe Inc.
OA Round: 3 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 58% (29 granted of 50 resolved; -4.0% vs TC avg)
Interview Lift: +28.3% in resolved cases with interview
Avg Prosecution: 3y 4m; 46 applications currently pending
Total Applications: 96, across all art units

Statute-Specific Performance

§101: 9.2% (-30.8% vs TC avg)
§103: 68.6% (+28.6% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Tech Center averages are estimates; based on career data from 50 resolved cases.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/11/2025 has been entered.

Information Disclosure Statement

The information disclosure statement ("IDS") filed on 12/11/2025 has been reviewed and the listed references have been considered.

Status of Claims

Claims 21-28, 30-38, and 40 are pending. Claims 21, 31, and 40 are amended. Claims 1-20, 29, and 39 are cancelled.

Response to Arguments

Applicant's amendment of independent claims 21, 31, and 40, which has altered the scope of the claims of the instant application, has necessitated the new ground(s) of rejection presented in this Office action. Accordingly, in response to Applicant's arguments directed to the amended portions of the claims, new analyses are presented below, which render Applicant's arguments moot.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 21-27, 30-37, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Mao et al. (US 2021/0365707 A1) in view of NPL Carvalho et al. ("Anomaly detection with a moving camera using multiscale video analysis," listed on IDS filed on 06/05/2024).
Regarding claim 21, Mao teaches: A method comprising: (Mao, ¶0361: "the method comprising:") defining a first set of bounding boxes of a first resolution (Mao, ¶0334: "4×4 boxes") based on a first set of motion detection results, the first set of motion detection results being generated in response to detection of motion (Mao, ¶0103: "video analytics can operate as an Intelligent Video Motion Detector by detecting moving objects and by tracking moving objects") using a first set of pixel blocks of an image (Mao, ¶0108: "a blob can refer to a contiguous group of pixels making up at least a portion of a background object in a frame of image data") and a first threshold (Mao, ¶0130: "the blob processing engine 418 can filter out one or more small blobs that are below a certain size threshold"; the threshold is interpreted as a minimum blob size).

However, Mao does not explicitly teach: wherein pixel blocks of the first set of pixel blocks have a first uniform size; defining a second set of bounding boxes of a second resolution higher than the first resolution based on a second set of motion detection results, the second set of motion detection results being generated in response to detection of motion using a second set of pixel blocks of the image and a second threshold different from the first threshold, wherein pixel blocks of the second set of pixel blocks have a second uniform size smaller than the first uniform size; and detecting motion of an object in a field of view of a surveillance device and represented in the image based on a combination of the first and second sets of bounding boxes.

In an analogous field of endeavor, Carvalho teaches: wherein pixel blocks of the first set of pixel blocks (Carvalho, page 330, ¶03: "set of pixel positions belonging to the bounding box of the detected mask of the object") have a first uniform size (Carvalho, page 320, ¶03: "perform a multiscale NCC computation, which employs a fixed window (K × K pixels)"; and ¶06: "In Fig. 9b an image that is subsampled by 64 is used, and only the biggest object, the backpack, is detected"); defining a second set of bounding boxes of a second resolution higher than the first resolution (Carvalho, page 313, ¶06: "larger abandoned objects are searched in lower video resolutions and smaller objects are searched in higher resolutions") based on a second set of motion detection results, the second set of motion detection results being generated in response to detection of motion (Carvalho, page 312, ¶07: "motion estimation to compensate the movement of the camera and detect the objects of interest"; also see Mao, ¶0103: "Intelligent Video Motion Detector") using a second set of pixel blocks of the image (Carvalho, page 330, ¶03: "set of pixel positions belonging to the bounding box of the detected mask of the object") and a second threshold different from the first threshold (Carvalho, page 320, ¶03: "smaller objects are then searched for with increasing resolution images"; ¶05: "f64 (suitable for the detection of larger objects), 32, 16, and 8 (smaller objects)"); wherein pixel blocks of the second set of pixel blocks have a second uniform size (Carvalho, page 320, ¶03: "perform a multiscale NCC computation, which employs a fixed window (K × K pixels)") smaller than the first uniform size (Carvalho, page 320, ¶06: "In Fig. 9c an image subsampled by 32 is used"); and detecting motion of an object in a field of view of a surveillance device (Carvalho, page 314, ¶03: "The proposed surveillance system consists of a high-definition (HD) 24 frame/s camera mounted on a robotic Roomba® platform") and represented in the image based on a combination of the first and second sets of bounding boxes (Carvalho, page 325, ¶02: "Finally, we perform the union of the detection masks obtained in all resolutions to generate a single, final detection mask").
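As mapped above, claim 21 recites a coarse pass and a fine pass over the same image, each with its own uniform block size and motion threshold, followed by a combination of the two sets of bounding boxes. A minimal sketch of that scheme, assuming frame-difference motion detection; the block sizes, thresholds, and function names are illustrative and are not drawn from Mao or Carvalho:

```python
import numpy as np

def detect_boxes(diff, block, threshold):
    """Flag motion per uniform block: a block is 'moving' when its mean
    absolute frame difference exceeds the threshold; each flagged block
    becomes a bounding box (x, y, w, h) at that resolution."""
    h, w = diff.shape
    boxes = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if diff[y:y + block, x:x + block].mean() > threshold:
                boxes.append((x, y, block, block))
    return boxes

def multi_resolution_motion(prev, curr):
    """Coarse pass (larger uniform blocks, higher threshold) plus fine pass
    (smaller uniform blocks, lower threshold); motion is reported from the
    combination (here, the union) of the two box sets."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    coarse = detect_boxes(diff, block=16, threshold=20)  # first resolution
    fine = detect_boxes(diff, block=4, threshold=10)     # second resolution
    return coarse + fine  # combined first and second sets of bounding boxes
```

Carvalho instead subsamples the image and scans a fixed K × K window at each scale; the fixed-image, variable-block form above is simply the shortest way to show two uniform block sizes with two different thresholds.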
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mao using the teachings of Carvalho to introduce multiscale object detection. A person skilled in the art would have been motivated to combine the known elements as described above to achieve the predictable result of analyzing different-sized targets in video frames. Therefore, it would have been obvious to combine the analogous arts of Mao and Carvalho to obtain the invention of claim 21.

Regarding claim 22, Mao in view of Carvalho teaches the method of claim 21, further comprising: generating a motion mask based on image sensor data (Mao, ¶0218: "optical flow maps (also referred to as motion vector maps) can be generated based on… the movement of points from a first frame to a second frame"), wherein the first set of pixel blocks is generated based on the motion mask (Carvalho, page 330, ¶03: "set of pixel positions belonging to the bounding box of the detected mask of the object") and the second set of pixel blocks is generated based on the motion mask (Carvalho, page 325, ¶02: "Finally, we perform the union of the detection masks obtained in all resolutions to generate a single, final detection mask"). The proposed combination, as well as the motivation for combining the Mao and Carvalho references presented in the rejection of claim 21, applies to claim 22 and is incorporated herein by reference. Thus, the method recited in claim 22 is met by Mao and Carvalho.

Regarding claim 23, Mao in view of Carvalho teaches the method of claim 22, wherein the motion mask is a greyscale representation of the image sensor data (Mao, ¶0120: "the foreground pixels of the foreground mask can be a different color than that used for the background pixels… the background pixels can be black (e.g., pixel color value 0 in 8-bit grayscale or other suitable value) and the foreground pixels can be white (e.g., pixel color value 255 in 8-bit grayscale or other suitable value)").

Regarding claim 24, Mao in view of Carvalho teaches the method of claim 21, further comprising: generating a first motion mask, wherein the first set of pixel blocks is generated based on the first motion mask; and generating a second motion mask, wherein the second set of pixel blocks is generated based on the second motion mask (Carvalho, page 322, ¶01: "Fig. 9 Example of detection masks using the multiscale approach: in image (b), only the largest object (backpack) is detected; in (c), the string roll is also detected… b image subsampled by 64, c image subsampled by 32"). The proposed combination, as well as the motivation for combining the Mao and Carvalho references presented in the rejection of claim 21, applies to claim 24 and is incorporated herein by reference. Thus, the method recited in claim 24 is met by Mao and Carvalho.

Regarding claim 25, Mao in view of Carvalho teaches the method of claim 21, further comprising: combining the first set of bounding boxes and the second set of bounding boxes (Mao, ¶0130: "two or more bounding boxes may be merged together"); and determining that a first bounding box in the first set of bounding boxes and a second bounding box in the second set of bounding boxes at least partially overlap (Mao, ¶0138: "two bounding boxes are overlapped geometrically"), wherein the combination of the first and second sets of bounding boxes includes a combined box that encompasses the first bounding box and the second bounding box (Mao, ¶0130: "a merging process to merge some connected components (represented as bounding boxes) into bigger bounding boxes").
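The overlap test and the encompassing combined box recited in claim 25 reduce to a few lines. A sketch, assuming an axis-aligned (x, y, w, h) box representation, which is an illustrative choice rather than anything specified by the references:

```python
def overlaps(a, b):
    """True when two axis-aligned (x, y, w, h) boxes at least partially overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def encompass(a, b):
    """Smallest single box that encompasses both input boxes
    (the 'combined box' of claim 25)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x0, y0 = min(ax, bx), min(ay, by)
    x1 = max(ax + aw, bx + bw)
    y1 = max(ay + ah, by + bh)
    return (x0, y0, x1 - x0, y1 - y0)
```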
Regarding claim 26, Mao in view of Carvalho teaches the method of claim 21, further comprising: combining the first set of bounding boxes and the second set of bounding boxes to produce a combined set of bounding boxes; determining that a first bounding box in the first set of bounding boxes does not overlap with the second set of bounding boxes; and adding the first bounding box to the combined set of bounding boxes (Mao, ¶0130: "two or more bounding boxes may be merged together based on certain rules even when the foreground pixels of the two bounding boxes are totally disconnected.").

Regarding claim 27, Mao in view of Carvalho teaches the method of claim 21, further comprising: combining the first set of bounding boxes and the second set of bounding boxes to produce a combined set of bounding boxes; determining that a first bounding box in the second set of bounding boxes does not overlap with the first set of bounding boxes (Mao, ¶0130: "the foreground pixels of the two bounding boxes are totally disconnected"); and discarding the first bounding box from the combined set of bounding boxes (Mao, ¶0258: "remove false alarms (e.g., by minimizing a wrongly detected/tracked object bounding box)").

Regarding claim 30, Mao in view of Carvalho teaches the method of claim 21, wherein the first set of bounding boxes includes one or more shapes created covering an area around at least a portion of a detected object (Mao, ¶0211: "Based on object detection and tracking, a bounding box 1014 is generated around the person in the subsequent frame"; see also Fig. 46C).

Regarding claim 31, it recites a system with elements corresponding to the steps of the method recited in claim 21. Therefore, the recited elements of system claim 31 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 21. Additionally, the rationale and motivation to combine Mao and Carvalho presented in the rejection of claim 21 apply to this claim.
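Claims 26 and 27, read together, state an asymmetric combination policy: a coarse (first-set) box survives even with no overlap in the fine set, while a fine (second-set) box with no coarse-set overlap is dropped. A minimal sketch under the same illustrative (x, y, w, h) box assumption; the function names are hypothetical:

```python
def overlaps(a, b):
    """True when two axis-aligned (x, y, w, h) boxes partially overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def combine_box_sets(first_set, second_set):
    """Illustrative combination policy from claims 26-27: every first-set
    (coarse) box is kept even without any second-set overlap (claim 26);
    a second-set (fine) box is kept only if it overlaps the first set and
    is otherwise discarded, e.g. as a likely false alarm (claim 27)."""
    combined = list(first_set)
    combined += [b for b in second_set
                 if any(overlaps(a, b) for a in first_set)]
    return combined
```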
Additionally, Mao teaches a system comprising: at least one hardware processor; and at least one memory storing instructions that, when executed by the at least one hardware processor, cause the at least one hardware processor to perform operations (Mao, ¶0344: "The storage device 4730 can include software services, servers, services, etc., that when the code that defines such software is executed by the processor 4710, it causes the system to perform a function").

Regarding claims 32-37, each recites a system with elements corresponding to the steps of the method recited in claims 22-27, respectively. Therefore, the recited elements of system claims 32-37 are mapped to the proposed combination in the same manner as the corresponding steps in method claims 22-27. Additionally, the rationale and motivation to combine Mao and Carvalho presented in the rejection of claim 21 apply to these claims.

Regarding claim 40, it recites a machine-storage medium including instructions corresponding to the steps of the method recited in claim 21. Therefore, the recited instructions of the machine-storage medium of claim 40 are mapped to the proposed combination in the same manner as the corresponding steps of method claim 21, and the rationale and motivation to combine Mao and Carvalho presented in the rejection of claim 21 apply to this claim. Additionally, Mao teaches a non-transitory machine-readable medium embodying instructions that, when executed by a machine, cause the machine to perform operations (Mao, ¶0007: "a non-transitory computer-readable medium is provided that has stored thereon instructions that, when executed by one or more processors, cause the one or more processors to").

Claims 28 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Mao et al. (US 2021/0365707 A1) in view of NPL Carvalho et al. ("Anomaly detection with a moving camera using multiscale video analysis," listed on IDS filed on 06/05/2024), and further in view of Amini et al. (US 2019/0259270 A1).

Regarding claim 28, Mao in view of Carvalho teaches the method of claim 21. However, the combination of Mao and Carvalho does not explicitly teach wherein image sensor data is received in response to motion detected by an infrared sensor. In an analogous field of endeavor, Amini teaches wherein image sensor data is received in response to motion detected by an infrared sensor (Amini, ¶0009: "the first sensor is an infrared (IR) sensor, the method further includes adjusting motion detection thresholds related to the IR sensor to change sensitivity of the IR sensor to motion occurring in the area"). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mao in view of Carvalho using the teachings of Amini to introduce an infrared sensor. A person skilled in the art would have been motivated to combine the known elements as described above to achieve the predictable result of object tracking in poor lighting conditions. Therefore, it would have been obvious to combine the analogous arts of Mao, Carvalho, and Amini to obtain the invention of claim 28.

Regarding claim 38, it recites a system with elements corresponding to the steps of the method recited in claim 28. Therefore, the recited elements of system claim 38 are mapped to the proposed combination in the same manner as the corresponding steps in method claim 28, and the rationale and motivation to combine Mao, Carvalho, and Amini presented in the rejection of claim 28 apply to this claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEHRAZUL ISLAM, whose telephone number is (571) 270-0489. The examiner can normally be reached Monday-Friday, 8am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/MEHRAZUL ISLAM/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

Jan 19, 2023: Application Filed
Jul 07, 2023: Response after Non-Final Action
Feb 22, 2025: Non-Final Rejection (§103)
May 17, 2025: Interview Requested
May 22, 2025: Examiner Interview Summary
May 22, 2025: Applicant Interview (Telephonic)
May 28, 2025: Response Filed
Jul 16, 2025: Final Rejection (§103)
Sep 17, 2025: Response after Non-Final Action
Dec 11, 2025: Request for Continued Examination
Jan 12, 2026: Response after Non-Final Action
Feb 17, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602808: METHOD FOR INSPECTING AN OBJECT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592075: REMOTE SENSING FOR INTELLIGENT VEGETATION TRIM PREDICTION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579695: Method of Generating Target Image Data, Electrical Device and Non-Transitory Computer Readable Medium (granted Mar 17, 2026; 2y 5m to grant)
Patent 12524900: METHOD FOR IMPROVING ESTIMATION OF LEAF AREA INDEX IN EARLY GROWTH STAGE OF WHEAT BASED ON RED-EDGE BAND OF SENTINEL-2 SATELLITE IMAGE (granted Jan 13, 2026; 2y 5m to grant)
Patent 12489964: PATH PLANNING (granted Dec 02, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 86% (+28.3%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 50 resolved cases by this examiner. Grant probability derived from career allow rate.
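The with-interview figure appears to be the career allow rate plus the interview lift applied as additive percentage points (58% + 28.3 ≈ 86%); this formula is inferred from the displayed numbers, not documented by the tool. The arithmetic:

```python
# Assumption: the "+28.3%" interview lift is additive in percentage points,
# which reproduces the displayed figures (58% -> 86%). Inferred from the
# dashboard numbers, not a documented formula.
base_grant_probability = 58.0   # career allow rate, percent
interview_lift = 28.3           # percentage points

with_interview = base_grant_probability + interview_lift
print(round(with_interview))    # 86
```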
