Prosecution Insights
Last updated: April 19, 2026
Application No. 19/039,412

SYSTEMS AND METHODS FOR SHARING ANALYTICAL RESOURCES IN A CAMERA NETWORK

Non-Final OA: §101, §103, §DP
Filed: Jan 28, 2025
Examiner: TOPGYAL, GELEK W
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Tyco Fire & Security GmbH
OA Round: 1 (Non-Final)
Grant Probability: 59% (Moderate)
OA Rounds: 1-2
To Grant: 3y 8m
With Interview: 78%

Examiner Intelligence

Career allow rate: 59% (355 granted / 604 resolved), +0.8% vs TC avg
Interview lift: +19.3% for resolved cases with an interview (a strong lift)
Typical timeline: 3y 8m average prosecution; 35 applications currently pending
Career history: 639 total applications across all art units

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 56.2% (+16.2% vs TC avg)
§102: 25.4% (-14.6% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 604 resolved cases.
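
These percentages reduce to simple ratios over the examiner's resolved cases. A minimal Python sketch of how such metrics could be computed; the record fields, toy data, and methodology below are assumptions for illustration, not the vendor's actual schema:

```python
# Hypothetical reconstruction of the dashboard metrics from per-case records.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Share of resolved cases that issued as patents."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Allow-rate difference: cases with an interview vs without."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Toy data: 604 resolved cases, 355 granted -> 58.8% allow rate, as above.
cases = [ResolvedCase(granted=i < 355, had_interview=i % 3 == 0)
         for i in range(604)]
print(f"career allow rate: {allow_rate(cases):.1%}")    # 58.8%
print(f"interview lift:   {interview_lift(cases):+.1%}")
```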

Office Action

§101 §103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/12/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim recites, inter alia, "A computer-readable medium storing instructions for sharing analytical resources in a camera network …" After close inspection, the Examiner respectfully notes that the disclosure, as a whole, does not specifically identify what is and is not included as a computer-readable medium. An Examiner is obliged to give claims their broadest reasonable interpretation consistent with the specification during examination. The broadest reasonable interpretation of a claim drawn to a computer-readable medium (also called a machine-readable medium, among other variations) typically covers both non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of computer-readable media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. Therefore, given the silence of the disclosure, the computer-readable medium of the claim may include transitory propagating signals, and the claim pertains to non-statutory subject matter.

However, the Examiner respectfully submits that a claim drawn to such a computer-readable medium, covering both transitory and non-transitory embodiments, may be amended to cover only statutory embodiments, and thereby avoid a rejection under 35 U.S.C. § 101, by adding the limitation "non-transitory" to the claim. Such an amendment would typically not raise the issue of new matter, even where the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning, which includes signals per se. For additional information, see the Official Gazette notice published February 23, 2010 (1351 OG 212).
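
Applying the suggested amendment to the preamble quoted above would yield, for example, "A non-transitory computer-readable medium storing instructions for sharing analytical resources in a camera network …", with the remainder of claim 20 unchanged.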
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA, as explained in MPEP § 2159, and MPEP § 2146 et seq. for applications not so subject. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-19 of U.S. Patent No. 12,229,999. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown by the following comparison.

Claims 1 and 14 of the instant application:

An apparatus for sharing analytical resources in a camera network, comprising: at least one memory; and at least one processor in communication with the at least one memory and configured to: capture a video clip comprising a set of image frames via a first camera, wherein the first camera is part of the camera network comprising a plurality of cameras; transmit, via the first camera, the video clip to a second camera in the camera network for analysis, wherein the second camera is configured to analyze, for a given period of time, one or more frames of the video clip in place of one or more frames that are locally captured by the second camera that would have been analyzed during the given period of time; receive, by the first camera from the second camera, results of the analysis; and generate, for display on a user interface, the video clip with the results.

Claims 1 and 10 of the '999 patent:

An apparatus for sharing analytical resources in a camera network, comprising: a memory; and a processor in communication with the memory and configured to: capture a video clip comprising a set of image frames via a first camera having only non-artificial intelligence (A.I) features, wherein the first camera is part of a camera network comprising a plurality of cameras, and wherein the first camera is configured to present captured video clips through a user interface associated with the first camera; identify, in the camera network, a second camera that has an A.I feature; determine whether the second camera has bandwidth to analyze the video clip; transmit, via the first camera, the video clip to the second camera for analysis using the A.I feature, in response to determining that the second camera has the bandwidth, wherein the second camera is configured to accommodate the analysis of the video clip by: reducing, for a given period of time, an amount of frames to analyze that are locally captured by the second camera by a first number of frames; and analyzing, from the video clip, a subset of image frames equal in count to the first number of frames, wherein the subset of image frames are analyzed in place of frames that are locally captured by the second camera in the given period of time; receive, by the first camera from the second camera, metadata comprising results of the analysis using the A.I feature; and generate, for display on the user interface, the video clip with the metadata.

As seen in the comparison above, each and every limitation of claims 1 and 14 of the instant application is broader than and fully encompassed by the limitations of claims 1 and 10 of U.S. Patent No. 12,229,999, and these claims are therefore fully anticipated and rejected under the doctrine of nonstatutory double patenting. Similar congruency can be noted between claim 20 of the instant application and claim 19 of U.S. Patent No. 12,229,999. Regarding dependent claims 2-13 and 15-19 of the instant application, similar limitations are found in claims 1-9 of U.S. Patent No. 12,229,999, and these claims are also fully anticipated and rejected under the doctrine of nonstatutory double patenting.
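
The frame-substitution scheme recited in the '999 patent claims above (and in claims 4 and 17 of the instant application, per the Allowable Subject Matter section below) reduces to a simple batching rule. A minimal Python sketch, with all names invented for illustration; this is not code from the filings:

```python
# For a given period of time, the second (A.I.) camera reduces the frames it
# analyzes from its own local capture by a first number N, and analyzes N
# frames from the received clip in their place.

def build_analysis_batch(local_frames, clip_frames, n_substituted):
    """Frames the second camera analyzes in one period of time."""
    kept_local = local_frames[:len(local_frames) - n_substituted]  # reduced by N
    borrowed = clip_frames[:n_substituted]  # equal in count to the reduction
    return kept_local + borrowed            # total analysis load stays constant

# Example: 30 local frames, substitute 10 clip frames -> still 30 analyzed.
batch = build_analysis_batch(list(range(30)), list(range(100, 130)), 10)
assert len(batch) == 30
```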
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-6, 9-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 2022/0006960) in view of Jiao (US 2020/0403933) and further in view of Park et al. (US 2023/0115371).

Regarding claims 1 and 14, Kim teaches an apparatus/method for sharing analytical resources in a camera network, comprising: at least one memory (paragraph 61); and at least one processor in communication with the at least one memory (paragraph 61) and configured to: capture a video clip comprising a set of image frames via a first camera, wherein the first camera is part of the camera network comprising a plurality of cameras (Fig. 1 teaches a plurality of cameras 13-1, 13-2, 13-n capturing video frames); transmit, via the first camera, the video clip to a second camera in the camera network for analysis, wherein the second camera is configured to analyze (paragraphs 55-56 teach wherein the normal cameras transmit their video frames to the AI camera based on available resources reported by the AI cameras to the system), for a given period of time, one or more frames of the video clip in place of one or more frames that are locally captured by the second camera that would have been analyzed during the given period of time; receive, by the first camera from the second camera, results of the analysis; and generate, for display on a user interface, the video clip with the results (paragraphs 66 and 115 teach wherein the result of the analysis is sent back to the management server 15; however, Kim fails to explicitly teach that the result is sent all the way back to the first camera).

As discussed above, while Kim returns metadata to the server, it fails to explicitly teach sending it back to the camera. Jiao, in claims 18 and 24, teaches wherein the processing result from the second (slave) camera is transmitted all the way back to the first (master) camera. It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Jiao into the system of Kim, because said incorporation allows for the benefit of maintaining efficiency between camera devices based on their usage (Jiao: abstract).

However, while Kim and Jiao teach the sharing of resources between a normal camera and an AI camera, they fail to explicitly teach, but Park teaches, the claimed second camera being able to analyze, for a given period of time, one or more frames of the video clip in place of one or more frames that are locally captured by the second camera that would have been analyzed during the given period of time (paragraphs 51, 77 and 97 teach wherein, during high computation load, only a subset of, or fewer than all, the frames captured by a camera are processed in a neural network). Furthermore, it is the examiner's position that Kim and Jiao are relied upon to teach the ability to share resources between a first camera and a second camera, which includes the ability to process and analyze the video frames that are received by the AI camera (in Kim) and by the slave camera (in Jiao). Therefore, in incorporating Park into the proposed combination, with the processing frame rate reduced below the amount of frames locally captured by the second camera per Park, the number of frames received for processing is also reduced in the combination of Kim and Jiao. Additionally, since Kim and Jiao also teach their own versions of processing/analyzing the camera's own images (in addition to the frames received from non-AI cameras), the combination with Park would result in a lower number of frames being processed during the given period of time.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Park such that, when the AI camera of Kim is operating with limited resource availability (while still analyzing video frames captured by the AI camera itself), it can reduce the frame rate being processed below the frame rate locally captured by its own camera, because such an incorporation allows the system to increase its processing efficiency while under high demand/load (paragraph 77 of Park). The method of claim 14 is implemented by apparatus claim 1 and is therefore likewise rejected. Similarly, CRM claim 20 is also rejected, since it claims the same features for a medium storing computer-executable instructions.

Regarding claims 2 and 15, Kim teaches the claimed wherein transmitting the video clip to the second camera is in response to an event captured in the video clip (paragraphs 49 and 99).

Regarding claims 3 and 16, Kim teaches the claimed wherein the event comprises motion by an object (paragraphs 49 and 99).

Regarding claims 5 and 18, Kim teaches the claimed wherein the analysis comprises performing an artificial intelligence (A.I) feature (paragraphs 55-56 teach wherein the normal cameras transmit their video frames to the AI camera based on available resources reported by the AI cameras to the system).

Regarding claims 6 and 19, Kim teaches the claimed wherein the A.I feature comprises at least one of (the examiner notes the alternative language) object detection, object tracking, facial detection, biometric recognition, environmental event detection, or software-based image enhancement (paragraphs 49 and 99).

Regarding claim 9, Kim teaches the claimed wherein transmitting the video clip to the second camera is in response to determining that the second camera has bandwidth to analyze the video clip (paragraphs 52-56 at least teach wherein the AI cameras are checked for a level of available resources).
Regarding claim 10, Kim teaches the claimed determining whether the second camera has the bandwidth to analyze the video clip, further comprising: transmitting, via the first camera to the second camera, a bandwidth query comprising a request for information about at least one of storage space or hardware utilization on the second camera (paragraphs 84-87 teach wherein the master node of the AI cameras collects the use rates of the AI and main processors; the claimed transmitting of a query is done by the system when the normal cameras are identified to be paired with AI cameras); receiving a response to the bandwidth query from the second camera, wherein the response comprises at least one of an available storage space or an available hardware utilization (paragraphs 84-87; the claimed available storage space or hardware utilization is met by the master node AI camera having prepared and processed the use rate of the AI processor and the use rate of the main processor); and determining that the second camera has the bandwidth to analyze the video clip in response to at least one of determining that the available storage space is larger than a size of the video clip or determining that the hardware utilization is less than a threshold hardware utilization (paragraphs 84-87 teach wherein "The master node may allocate each of the first to n-th normal cameras 131 to 13n to any one of the AI cameras 121 to 123 with reference to the use rates of the AI processors and the use rates of the main processors of the cluster information CLSTIF", thereby clearly teaching that an AI camera has enough resources to be assigned to a normal camera based on its use rate).

Regarding claim 11, Kim and Jiao teach the claimed wherein the camera network includes a third camera, further comprising: transmitting, via the first camera, the video clip to the third camera for the analysis, in response to determining that the second camera does not have the bandwidth (Kim: paragraphs 84-87, as quoted above; if a first AI camera is beyond its use rate, the normal camera would be assigned to another AI camera with available resources); receiving, by the first camera from the third camera, the results of the analysis (paragraphs 66 and 115 teach wherein the result of the analysis is sent back to the management server 15; however, Kim fails to explicitly teach that the result is sent all the way back to the first camera); and generating, for display on the user interface, the video clip with the results from the third camera (Figs. 1, 10-11 and paragraphs 48-51 teach wherein the video clips and metadata are output to the user so they can select the metadata and discern the origins of the video that was captured). As discussed above, while Kim returns metadata to the server, it fails to explicitly teach sending it back to the camera. Jiao, in claims 18 and 24, teaches wherein the processing result from the second (slave) camera is transmitted all the way back to the first (master) camera. It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Jiao into the system of Kim, because said incorporation allows for the benefit of maintaining efficiency between camera devices based on their usage (Jiao: abstract).
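
The claim 10 exchange is a small query/response protocol. A hedged sketch of the decision step, assuming dictionary-shaped messages; the field names and the 0.8 threshold are illustrative, not from the application or Kim:

```python
def has_bandwidth(response, clip_size_bytes, max_utilization=0.8):
    """Decide whether the second camera can take the clip."""
    storage_ok = response.get("available_storage_bytes", 0) > clip_size_bytes
    utilization = response.get("hardware_utilization")
    utilization_ok = utilization is not None and utilization < max_utilization
    # The claim recites the check in the alternative: either condition suffices.
    return storage_ok or utilization_ok

# Assumed reply from the second camera to the first camera's bandwidth query:
reply = {"available_storage_bytes": 512_000_000, "hardware_utilization": 0.35}
print(has_bandwidth(reply, clip_size_bytes=48_000_000))  # True -> transmit clip
```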
Regarding claim 12, Kim teaches the claimed wherein the camera network includes a third camera, wherein identifying the second camera comprises: determining, based on a plurality of rules, whether to select the second camera or the third camera for providing the analysis, wherein the plurality of rules query one or more of feature availability, time restrictions, or network connectivity (paragraphs 84-87 teach wherein "The master node may allocate each of the first to n-th normal cameras 131 to 13n to any one of the AI cameras 121 to 123 with reference to the use rates of the AI processors and the use rates of the main processors of the cluster information CLSTIF", thereby clearly teaching that an AI camera has enough resources to be assigned to a normal camera based on its use rate; if a first AI camera is beyond its use rate, the normal camera would be assigned to another AI camera with available resources; the claimed "feature availability" is met by the use-rate resources available); and identifying the second camera to provide the analysis based on the plurality of rules (paragraphs 84-87, for the same reasons).

Regarding claim 13, Jiao, in its combination with Kim, teaches the claimed further comprising: transmitting, via the first camera to the camera network, a broadcast message comprising the video clip and a request for the analysis (paragraph 102 teaches a message from a normal camera in the form of a signal requesting other AI cameras to process the video signal associated with the event signal); and wherein the second camera is identified in response to receiving a response to the broadcast message from the second camera, the response indicating that the analysis will be performed by the second camera (paragraphs 14-16 teach the claimed transmitting of status messages from the AI cameras that perform the analysis). The prior motivation as discussed above is incorporated herein.
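
Claims 12-13 describe rule-based selection over cameras that respond to a broadcast request. An illustrative Python sketch; the rule predicates and candidate fields are invented for this example:

```python
# Rules mirror the claim's examples: feature availability, time restrictions,
# and network connectivity.
RULES = [
    lambda cam: "object_detection" in cam["features"],  # feature availability
    lambda cam: cam["accepts_jobs_now"],                # time restrictions
    lambda cam: cam["link_quality"] > 0.5,              # network connectivity
]

def select_analyzer(candidates):
    """Return the first responding camera that passes every rule, else None."""
    for cam in candidates:
        if all(rule(cam) for rule in RULES):
            return cam
    return None

responders = [
    {"id": "cam2", "features": {"object_detection"},
     "accepts_jobs_now": True, "link_quality": 0.9},
    {"id": "cam3", "features": set(),
     "accepts_jobs_now": True, "link_quality": 0.7},
]
print(select_analyzer(responders)["id"])  # cam2
```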
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 2022/0006960) in view of Jiao (US 2020/0403933), further in view of Park et al. (US 2023/0115371), and further in view of Grancharov et al. (US 2022/0044045).

Regarding claim 7, Kim teaches the claimed subject matter as discussed in claims 1 and 5 above; however, it fails to teach, but Grancharov teaches, wherein the A.I feature is object detection, and wherein receiving the results comprises receiving a plurality of object identifiers that label objects in each frame of the video clip (paragraph 29). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Grancharov into the proposed combination of Kim, Jiao and Park such that a detected object's boundary, position and dimension/size can be highlighted on a display device, because such an incorporation allows for the benefit of improving the user experience by improving the accuracy of object detection for highlighting a result of the object detection (paragraphs 4-12).

Regarding claim 8, Kim teaches the claimed subject matter as discussed in claims 1, 5 and 7 above; however, it fails to teach, but Grancharov teaches, wherein the plurality of object identifiers comprise dimensions and positions of boundary boxes that border objects in each frame of the video clip, and wherein generating the video clip with the results on the user interface comprises generating the boundary boxes in the one or more frames of the video clip based on the dimensions and positions (paragraph 29). The same motivation to combine Grancharov with Kim, Jiao and Park applies (paragraphs 4-12).
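
Claims 7-8 describe per-frame object identifiers that carry box positions and dimensions, rendered back onto the clip for display. A minimal sketch, with field names assumed rather than taken from the application or Grancharov:

```python
def draw_boxes(frames, results_by_frame):
    """Overlay each frame's labeled boxes; printing stands in for a renderer
    (a real implementation might call cv2.rectangle / cv2.putText)."""
    for idx, frame in enumerate(frames):
        for obj in results_by_frame.get(idx, []):
            x, y = obj["position"]      # top-left corner of the boundary box
            w, h = obj["dimensions"]    # box width and height
            print(f"frame {idx}: {obj['label']} at ({x}, {y}), size {w}x{h}")

results = {0: [{"label": "person", "position": (12, 40),
                "dimensions": (64, 128)}]}
draw_boxes(["frame0", "frame1"], results)
```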
Allowable Subject Matter

Claims 4 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Regarding claims 4 and 17, while the prior art discussed above teaches the ability to share resources between cameras based on bandwidth availability and based on whether cameras have A.I abilities or not, including the processing of frames sent from a first camera to a second camera, it fails to explicitly teach the claimed wherein analyzing the one or more frames of the video clip by the second camera further comprises: reducing, for the given period of time, an amount of frames that are locally captured by the second camera by a first number of frames; and analyzing, from the video clip, a subset of image frames equal in count to the first number of frames in place of the frames that are locally captured by the second camera in the given period of time. As indicated above, the closest prior art, either singularly or in combination, fails to anticipate or render obvious the above combination of features/limitations; additionally, applicant's arguments have been considered persuasive in light of the claim limitations as well as the enabling portions of the specification.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GELEK W TOPGYAL, whose telephone number is (571) 272-8891. The examiner can normally be reached M-F, 9:30-6 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at 571-272-3922. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center; unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/GELEK W TOPGYAL/
Primary Examiner, Art Unit 2481

Prosecution Timeline

Jan 28, 2025
Application Filed
Feb 21, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601836
RADIO-WAVE SENSOR INSTALLATION ASSISTANCE DEVICE, COMPUTER PROGRAM, AND RADIO-WAVE SENSOR INSTALLATION POSITION DETERMINATION METHOD
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12597341
INSTALLATION SUPPORT DEVICE FOR RADIO WAVE SENSOR, COMPUTER PROGRAM, METHOD OF DETERMINING INSTALLATION POSITION OF RADIO WAVE SENSOR, AND METHOD OF SUPPORTING INSTALLATION OF RADIO WAVE SENSOR
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12592263
VIDEO VARIATION EFFECTS
Granted Mar 31, 2026 • 2y 5m to grant
Patent 12586607
MULTIMEDIA PROCESSING METHOD AND APPARATUS BASED ON ARTIFICIAL INTELLIGENCE, AND ELECTRONIC DEVICE
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12567445
VIDEO REMIXING METHOD
Granted Mar 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 59%
With Interview: 78% (+19.3%)
Median Time to Grant: 3y 8m
PTA Risk: Low
Based on 604 resolved cases by this examiner. Grant probability derived from career allow rate.
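
These projections appear to compose directly from the examiner statistics above: 58.8% career allow rate + 19.3 percentage points of interview lift ≈ 78% with an interview.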
