Prosecution Insights
Last updated: April 19, 2026
Application No. 18/973,228

CAMERA

Final Rejection §103
Filed: Dec 09, 2024
Examiner: HAGHANI, SHADAN E
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: Alarm.com Incorporated
OA Round: 2 (Final)
Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 60% (221 granted / 366 resolved; +2.4% vs TC avg)
Interview Lift: +18.6% for resolved cases with an interview (strong)
Avg Prosecution: 2y 11m typical timeline; 33 applications currently pending
Total Applications: 399 across all art units (career history)
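
The headline figures follow directly from the raw counts; a minimal sketch of the arithmetic in Python (variable names are illustrative, not from the source tool):

    # Career allow rate: granted cases as a share of resolved cases.
    granted, resolved = 221, 366
    allow_rate = granted / resolved        # 0.604 -> reported as 60%

    # Interview lift: +18.6 percentage points for resolved cases
    # that included an examiner interview.
    with_interview = allow_rate + 0.186    # 0.790 -> reported as 79%

    print(f"allow rate {allow_rate:.1%}, with interview {with_interview:.1%}")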

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 60.3% (+20.3% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 366 resolved cases.
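
The "vs TC avg" deltas imply the Tech Center baselines; a quick consistency check in Python (dictionary names are illustrative):

    # Examiner's statute-specific rates and deltas vs the Tech Center
    # average, as reported above (all values in percentage points).
    examiner = {"101": 2.1, "103": 60.3, "102": 13.8, "112": 16.1}
    delta = {"101": -37.9, "103": 20.3, "102": -26.2, "112": -23.9}

    # Implied Tech Center average = examiner rate minus delta.
    for statute, rate in examiner.items():
        print(f"§{statute}: examiner {rate:.1f}%, TC avg ~{rate - delta[statute]:.1f}%")

Every row implies the same ~40.0% baseline, which suggests the Tech Center average shown is a single aggregate estimate rather than per-statute data.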

Office Action

§103
DETAILED ACTION

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3 and 5-22 are rejected under 35 U.S.C. 103 as being unpatentable over Shimada (US PG Publication 2020/0177849) in view of Carter (US PG Publication 2022/0319260).

Regarding Claim 1, Shimada (US PG Publication 2020/0177849) discloses a method (transmission operation procedure, Fig. 23, [0251]) comprising: capturing, by a camera, a video sequence that depicts (capture image/during recording, S1-S2, Fig. 16, [0252]) …; while continuing to capture the video sequence (during recording, S2, Fig. 16, [0252]): detecting (determined that [0253]), by … (action information generator, S61, Fig. 23), a representation of an entity (list of detected actions in Figs. 8, 15) in the video sequence (there is a default event [0253]); and in response to detecting the representation of the entity in the video sequence (is there action information is YES, S7, Fig. 23), sending (transmits the generated thumbnail [0255]), to a video analysis system (to the investigation headquarter [0255]), a snapshot (camera generates a thumbnail image [0255]) a) from the video sequence (by using captured video [0255]) and b) that includes the representation of the entity (see thumbnails in Figs. 18, 19, 28 having entities; corresponding to the pieces of content [0147]; indicates a situation of a site [0149]; pursuing criminal [0152]; actions illustrated in Fig. 5, [0171]); and after a threshold period of time from (after the flow chart reaches step 45, Fig. 23) and in response to detection of the representation of the entity (YES there is action information at S7, Fig. 23), sending, to the video analysis system (to the investigation headquarter [0255]), at least a portion of the video sequence (stream, S46, Fig. 23; captured video, Fig. 23), wherein sending the snapshot comprises sending the snapshot (transmission of thumbnail at S44, Fig. 23) within a time period from detecting the representation of the entity in the video sequence (the time from “is there action information?” is YES at S7 to transmit thumbnail at S44, Fig. 23) that is less than a delay time period of the camera for sending data (is less than the time from “is there action information?” is YES at S7 to stream video at S46, Fig. 23).

Shimada does not disclose, but Carter (US PG Publication 2022/0319260) teaches video (camera sends images or video [0123]) … that depicts a property (a monitored designated area or multiple monitored designated areas created by the AI EM system in the view of the camera [0069]); detecting by the camera (upon detection of intruder, camera sends images or video to the AI EM [0123]) a representation of an entity (intruder [0123]).
One of ordinary skill in the art before the application was filed would have been motivated to supplement Shimada with the artificial intelligence image analysis of Carter ([0069]) to improve the event identification in Shimada, providing additional security in high-risk environments and engendering safer outcomes.

Regarding Claim 2, Shimada (US PG Publication 2020/0177849) discloses the method of claim 1, wherein the camera sends the snapshot only in response to detecting the representation of the entity (if YES at “is there information,” S7, proceed to “transmit thumbnail,” S44, Fig. 23).

Regarding Claim 3, Shimada (US PG Publication 2020/0177849) discloses the method of claim 1, wherein the portion of the video sequence includes the snapshot (thumbnail is a still image [0217], i.e., captured by the camera, inherent).

Regarding Claim 5, Shimada (US PG Publication 2020/0177849) discloses the method of claim 1, wherein a data size of the snapshot is less than a data size of the at least a portion of video sequence (inherent: thumbnails are smaller than video).

Regarding Claim 6, Shimada (US PG Publication 2020/0177849) discloses the method of claim 1, comprising: wherein sending the snapshot and triggering a beginning of the delay time period (transmitting the thumbnail at S44 is responsive to YES at S7; the delay time period starts at YES at S7, Fig. 23) are responsive to determining … (if YES at “is there information,” S7, Fig. 23, stream the video, Fig. 23; to the investigation headquarter [0255]). Shimada does not disclose, but Carter (US PG Publication 2022/0319260) teaches determining whether the entity is within a threshold distance of the property (creating the geofence or MDA monitored area, having a proximity distance from the AI EM device, and/or for detecting a breach of the geofence area or MDA [0070]-[0071]); wherein sending (collect audio-visual information that is recorded in the event of a breach of an access point or geofence, video is recorded that is later watched [0074]; Data recorded by the AI EM system including images, including still and videos, and audio is uploaded to the cloud [0297]) … are responsive to determining that the entity is within the threshold distance of the property (collect audio-visual information that is recorded in the event of a breach of an access point or geofence [0074]). One of ordinary skill in the art before the application was filed would have been motivated to supplement Shimada with the artificial intelligence image analysis of Carter ([0069]) to improve the event identification in Shimada, providing additional security in high-risk environments and engendering safer outcomes.

Regarding Claim 7, Shimada (US PG Publication 2020/0177849) discloses the method of claim 1, comprising: … in response to detecting (if YES at “is there information,” S7, Fig. 23) … sending the at least a portion of the video sequence (stream the video, Fig. 23) to the video analysis system (to the investigation headquarter [0255]).
Shimada does not disclose, but Carter (US PG Publication 2022/0319260) teaches further comprising: detecting, within the representation of the entity (upon detection of intruder, camera sends images or video to the AI EM [0123]), a predetermined feature (software of the AI device is operable to perform facial recognition [0094], [0123]); and in response to detecting the predetermined feature (collect audio-visual information that is recorded in the event of a breach of an access point or geofence [0074]), sending the at least a portion of the video sequence (a video is recorded that is later watched [0074]) …. One of ordinary skill in the art before the application was filed would have been motivated to supplement Shimada with the artificial intelligence image analysis of Carter ([0069]) to improve the event identification in Shimada, providing additional security in high-risk environments and engendering safer outcomes.

Regarding Claim 8, Shimada (US PG Publication 2020/0177849) discloses the method of claim 1, comprising sending, to a backend server (back end server 50 disposed in the investigation headquarter [0240], [0075]) in communication with the video analysis system (to the investigation headquarter [0255]; back end streaming server 60 disposed in the investigation headquarter [0240], [0075]), the snapshot (transmits the generated thumbnail [0255]).

Regarding Claim 9, Shimada (US PG Publication 2020/0177849) discloses one or more non-transitory computer storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform a method (processor 19 executing software [0098]-[0099]) …. The remainder of Claim 9 is rejected on the grounds provided in Claim 1.

Regarding Claim 10, the claim is rejected on the grounds provided in Claim 2. Regarding Claim 11, the claim is rejected on the grounds provided in Claim 3. Regarding Claim 13, the claim is rejected on the grounds provided in Claim 5. Regarding Claim 14, the claim is rejected on the grounds provided in Claim 6. Regarding Claim 15, the claim is rejected on the grounds provided in Claim 7. Regarding Claim 16, the claim is rejected on the grounds provided in Claim 8.

Regarding Claim 17, Shimada (US PG Publication 2020/0177849) discloses a system comprising one or more computers and one or more storage devices on which are stored instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform a method (software [0099]) comprising: capturing, by a camera, a video sequence that depicts (capture image/during recording, S1-S2, Fig. 16, [0252]) a property; while continuing to capture the video sequence (during recording, S2, Fig. 16, [0252]): detecting (determined that [0253]), by … (action information generator, S61, Fig. 23), a representation of an entity (list of detected actions in Figs. 8, 15) in the video sequence (there is a default event [0253]); in response to determining (“is there action information?” is YES at S7 [0253], Fig. 23) that the entity (list of trigger events in Figs. 5, 7, [0253]) …, triggering a beginning of a delay time period (the beginning of the time period is when the event is detected: “is there action information?” is YES at S7 [0253], Fig. 23); and in response to detecting the representation of the entity (“is there action information?” is YES at S7 [0253], Fig. 23) in the video sequence (video analysis result [0244] from captured video [0211]), sending, to a video analysis system (to the investigation headquarter [0255]), a snapshot (generate/transmit a thumbnail [0255]) a) from the video sequence (by using captured video data [0255]) and b) that includes the representation of the entity (thumbnails in Fig. 18 have entities); and after expiration of the delay time period (after step S45 is executed, Fig. 23), sending, to the video analysis system (to the investigation headquarter [0257]), at least a portion of the video sequence (stream captured video S46, Fig. 23, [0257]), wherein sending the snapshot is responsive to determining (“is there action information?” is YES at S7 [0253], generate and send the thumbnail S44, Fig. 23) ….

Shimada does not disclose, but Carter (US PG Publication 2022/0319260) teaches video sequence (camera sends images or video [0123]) that depicts a property (a monitored designated area or multiple monitored designated areas created by the AI EM system in the view of the camera [0069]); detecting by the camera (upon detection of intruder, camera sends images or video to the AI EM [0123]) a representation of an entity (intruder [0123]); determining whether the entity is within a threshold distance of the property (creating the geofence or MDA monitored area, having a proximity distance from the AI EM device, and/or for detecting a breach of the geofence area or MDA [0070]-[0071]); in response to determining that the entity is within the threshold distance of the property (in the event of a breach of the access point or geofence [0074]), triggering (collect audio-visual information that is recorded in the event [0074]; videos, and audio is uploaded to the cloud [0297]) …; wherein sending (collect audio-visual information that is recorded in the event [0074]; videos, and audio is uploaded to the cloud [0297]) … is responsive to determining that the entity is within the threshold distance of the property (in the event of a breach of the access point or geofence [0074]).

One of ordinary skill in the art before the application was filed would have been motivated to supplement Shimada with the artificial intelligence image analysis of Carter ([0069]) to improve the event identification in Shimada, providing additional security in high-risk environments and engendering safer outcomes. One of ordinary skill in the art before the application was filed would have been motivated to extend the detected events tables (Figs. 5, 7) of Shimada with a breach in geofence, as in Carter, because a breach in geofence is a possible trespass, which is a criminal or civil offense for which evidence must be obtained.

Regarding Claim 18, the claim is rejected on the grounds provided in Claim 2. Regarding Claim 19, the claim is rejected on the grounds provided in Claim 3. Regarding Claim 20, the claim is rejected on the grounds provided in Claim 4.

Regarding Claim 21, Shimada (US PG Publication 2020/0177849) discloses the system of claim 17, wherein a data size of the snapshot is less than a data size of the at least a portion of video sequence (thumbnail [0255], S44 Fig. 23, as opposed to captured video [0257], S46, Fig. 23; thumbnails refer to smaller images/files representative of the original image).
Regarding Claim 22, Shimada (US PG Publication 2020/0177849) discloses the system of claim 17, wherein the method comprises: detecting, within the representation of the entity, a predetermined feature (in a case of detecting whether police officer held a gun, video analysis will be considered [0112]); and in response to detecting the predetermined feature (YES there is action information at S7, Fig. 23), sending the at least a portion of the video sequence to the video analysis system (stream, S46, Fig. 23; captured video, Fig. 23 to the investigation headquarter [0257]).

Response to Arguments

Applicant’s remarks filed 3/18/2026 have been considered but are unpersuasive.

Applicant argues that Shimada is silent to a delay time period of the camera sending data. Remarks at 8. This is unpersuasive. The arrows in the flowchart of Fig. 23 disclose that the thumbnail image is sent before the captured video is streamed. Therefore, the time between detecting the event and sending the thumbnail is less than the delay between detecting the event and streaming the video. Note that the “delay” of Claim 1 is not limited by what causes the delay, or whether it is incidental or intentional. The claim also does not limit whether the time periods, delays, or threshold periods of time are fixed, static, dynamic, random, or incidental.

On Page 9 of Remarks, Applicant argues that Carter does not teach triggering a delay time period, but this is unpersuasive because Shimada teaches triggering a delay time period upon the detection of an event (Fig. 23). In the combination used to reject Claim 17, the events tables of Figs. 5 and 7 of Shimada are modified by Carter to include violations of a geofence among the detectable events. Claim 17, like Claim 1, is non-specific on the measures of the time and delays.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20060193534 A1 - image data captured during the moving-object detection period can be distributed in a stream
US 20140146172 A1 - transmitting still image of detected target in monitoring area

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHADAN E HAGHANI whose telephone number is (571)270-5631. The examiner can normally be reached M-F 9AM - 5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHADAN E HAGHANI/
Examiner, Art Unit 2485
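
The limitation in dispute is a timing relationship: the camera must send a lightweight snapshot within a period shorter than its delay for sending the video itself. A minimal sketch of that behavior in Python (every function, constant, and frame format here is hypothetical and for illustration only; this is not code from the application or the cited references):

    import time

    VIDEO_DELAY_S = 2.0  # hypothetical camera-side delay before video is streamed

    def detect_entity(frame):
        # Hypothetical on-camera detector flagging a representation of an entity.
        return frame.get("has_person", False)

    def send(label, t0):
        # Stand-in for the camera's uplink to the video analysis system.
        print(f"[t={time.monotonic() - t0:5.2f}s] sent {label}")

    t0 = time.monotonic()
    buffer = []
    for frame in ({"has_person": False}, {"has_person": True}):
        buffer.append(frame)
        if detect_entity(frame):
            # Snapshot goes out immediately: within a time period that is
            # less than the camera's delay time period for sending the video.
            send("snapshot: single frame containing the entity", t0)
            time.sleep(VIDEO_DELAY_S)  # threshold period after detection
            send(f"video: {len(buffer)} buffered frames", t0)

On the examiner's reading of Shimada's Fig. 23, the snapshot corresponds to the thumbnail at S44 and the later stream to S46; the asserted position is that the ordering of those steps alone supplies the claimed inequality.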

Prosecution Timeline

Dec 09, 2024: Application Filed
Dec 18, 2025: Non-Final Rejection — §103
Mar 18, 2026: Response Filed
Apr 03, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604020: VIDEO DECODING METHOD AND DECODER DEVICE
2y 5m to grant; granted Apr 14, 2026

Patent 12598323: INTER PREDICTION-BASED VIDEO ENCODING AND DECODING
2y 5m to grant; granted Apr 07, 2026

Patent 12586336: WEARABLE DEVICE, METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM CONTROLLING LIGHT RADIATION OF LIGHT SOURCE
2y 5m to grant; granted Mar 24, 2026

Patent 12574549: CHROMA INTRA PREDICTION WITH FILTERING
2y 5m to grant; granted Mar 10, 2026

Patent 12568225: LIMITING A NUMBER OF CONTEXT CODED BINS FOR RESIDUE CODING
2y 5m to grant; granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 60%
With Interview: 79% (+18.6%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 366 resolved cases by this examiner. Grant probability derived from career allow rate.
