Prosecution Insights
Last updated: April 19, 2026
Application No. 18/554,461

DYNAMIC EDGE-CLOUD COLLABORATION WITH KNOWLEDGE ADAPTATION

Status: Non-Final OA (§103)
Filed: Oct 06, 2023
Examiner: TOPGYAL, GELEK W
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: Wyze Labs Inc.
OA Round: 3 (Non-Final)
Grant Probability: 59% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 8m
Grant Probability with Interview: 78%

Examiner Intelligence

Career Allow Rate: 59% (355 granted / 604 resolved; +0.8% vs TC avg)
Interview Lift: +19.3% (strong; allow rate in resolved cases with an interview vs without)
Avg Prosecution: 3y 8m (35 applications currently pending)
Career History: 639 total applications across all art units
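The card's headline figures follow directly from per-case outcomes: 355 / 604 ≈ 58.8%, rounded to 59%, and the interview lift is the allow-rate gap between cases with and without an interview. Below is a minimal sketch of the presumed derivation; the `ResolvedCase` record and its fields are hypothetical stand-ins, not the dashboard's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # True if the application issued as a patent
    had_interview: bool  # True if an examiner interview was held

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that were granted."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate gap between cases with and without an interview."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Here: 355 grants out of 604 resolved cases -> 0.5877..., shown as 59%.
```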

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 56.2% (+16.2% vs TC avg)
§102: 25.4% (-14.6% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 604 resolved cases.
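The dashboard does not define this metric; the figures plausibly read as the share of this examiner's rejections that cite each statute, with each delta measured against an estimated Tech Center baseline. Under that assumption only, a sketch of the computation (function and field names are hypothetical):

```python
from collections import Counter

def statute_shares(rejections: list[str]) -> dict[str, float]:
    """Share of rejections citing each statute, e.g. '103' -> 0.562."""
    counts = Counter(rejections)
    total = sum(counts.values())
    return {statute: n / total for statute, n in counts.items()}

def delta_vs_tc(shares: dict[str, float], tc_avg: dict[str, float]) -> dict[str, float]:
    """Signed gap between this examiner's shares and the TC average estimate."""
    return {s: shares[s] - tc_avg.get(s, 0.0) for s in shares}
```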

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/16/2026 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 8-13 and 15-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 8-9, 12-13, 16-19 and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Chew et al. (US 2018/0181868) in view of Lee et al. (US 2020/0311546).

Regarding claim 8, Chew teaches a surveillance system (Fig. 4 and paragraph 33) comprising: an edge device (paragraphs 16-17, 20 and 33) that is configured to: generate a first set of images of an environment to be surveilled (paragraphs 15-17, 20 and 33 teach a camera capturing images/video), and apply a first model to each image in the first set of images, so as to produce a first set of outputs (Figs. 1-2, an analytics determiner (i.e., a "model") running on the local client device runs analytics to generate an output/result).

While Chew is explicit about using a surveillance system to perform camera- and server-based analysis of video frames, Chew fails to teach the remaining limitations. Lee, however, teaches: for each output in the first set of outputs, determine whether confidence in that output exceeds a threshold (Figs. 1-3 and paragraphs 4 and 38 teach determining whether a calculated confidence level exceeds a threshold), and generate a feature map corresponding to that output in response to a determination that the confidence does not exceed the threshold (Figs. 1-3 and paragraphs 4 and 38 teach determining whether a calculated confidence level exceeds a threshold and, if it is below the threshold, transmitting the inference to a cloud/server system); and cause transmission of the feature map to a server system (Figs. 1-3 and paragraphs 4 and 38 teach transmitting the feature map to the server/cloud); and the server system that is configured to: receive a set of feature maps from the edge device (Figs. 1-3 and paragraphs 4, 38 and 43 teach that the cloud/server system is sent the feature maps), wherein each feature map in the set of feature maps is generated, by the edge device, to consist of one or more features partially representing an image in the set of images (Figs. 1-3 and paragraphs 4, 38 and 43 teach an initial edge device 300 that works primarily in the imaging realm and sends the images/sensor data to the cloud device/server; the operation results from the various layers of the CNN/DDNN partially represent the input image in that they encode spatial feature information extracted from the image through convolution, while not preserving the raw pixel-level representation of the image), and for each feature map of the set of feature maps, providing that feature map as input to a second model so as to produce a second set of outputs, wherein the second set of outputs is representative of an inference made, by the second model, based on the feature map (Fig. 4 and paragraphs 33-39 teach that the cloud device/server runs its own analysis sequentially through secondary layers up to a final layer; each layer outputs an inference until a final output result for the inference is output as a result ("cloud exit")).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Lee into the system of Chew because combining Lee and Chew requires only routine skill and produces predictable results. The modification involves applying Chew's confidence threshold comparison mechanism, a discrete, well-defined software component comprising a confidence level comparator and a data sender (see paragraph 43 and Figs. 6 and 9), to Lee's existing confidence evaluation step. Lee already implements confidence comparison as a core architectural element (paragraph 38); the modification is the substitution of a user-configurable threshold value for a predetermined one. There is no teaching away in either reference, since Chew does not criticize user-defined thresholds as inferior to predetermined ones, and Lee does not teach that its predetermined threshold is structurally essential. The combination produces the predictable result of a more flexible and deployable distributed DNN inference system. See KSR Int'l Co. v. Teleflex Inc.

Regarding claim 9, Lee teaches the claimed wherein each feature map is provided as input to a middle layer of the second model (Figs. 2 and 6C and paragraphs 4, 38, 40 and 58 teach the claimed "intermediate layer" in the form of at least a "conv" layer as part of the cloud's model).

Regarding claim 12, Chew teaches a method performed by an edge device (Fig. 4 and paragraph 33) that generates samples while surveilling an environment, the method comprising: applying a first model to the samples to produce outputs (Figs. 1-2, an analytics determiner (i.e., a "model") running on the local client device runs analytics to generate an output/result). While Chew is explicit about using a surveillance system to perform camera- and server-based analysis of video frames, Chew fails to teach the remaining limitations. Lee, however, teaches: wherein each output is representative of an inference made in relation to a corresponding sample (Figs. 1-3 and paragraphs 4 and 38 teach determining an inference based on an initial model output in a first layer/model); determining whether confidence in each of the outputs exceeds a threshold (Figs. 1-3 and paragraphs 4 and 38 teach determining whether a calculated confidence level exceeds a threshold); and for each output for which the confidence does not exceed the threshold (Figs. 1-3 and paragraphs 4 and 38 teach determining whether a calculated confidence level exceeds a threshold and, if it is below the threshold, transmitting the inference to a cloud/server system), causing transmission of a feature map partially representing the corresponding sample to a server system for analysis by a second model (Figs. 1-3 and paragraphs 4 and 38 teach transmitting the feature map to the server/cloud).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Lee into the system of Chew because combining Lee and Chew requires only routine skill and produces predictable results. The modification involves applying Chew's confidence threshold comparison mechanism, a discrete, well-defined software component comprising a confidence level comparator and a data sender (see paragraph 43 and Figs. 6 and 9), to Lee's existing confidence evaluation step. Lee already implements confidence comparison as a core architectural element (paragraph 38); the modification is the substitution of a user-configurable threshold value for a predetermined one. There is no teaching away in either reference, since Chew does not criticize user-defined thresholds as inferior to predetermined ones, and Lee does not teach that its predetermined threshold is structurally essential. The combination produces the predictable result of a more flexible and deployable distributed DNN inference system. See KSR Int'l Co. v. Teleflex Inc.

Regarding claim 13, Chew teaches the claimed wherein the edge device is a camera, and wherein the model is trained to detect instances of an object in images (paragraphs 16-17, 20 and 33 teach a camera, and paragraphs 2-3 teach object detection).

Regarding claim 16, Chew teaches a method performed by a server system (Chew: Fig. 4 and paragraph 33), the method comprising: receiving a sample generated by an edge device while surveilling an environment (paragraphs 15-17, 20 and 33 teach a camera capturing images/video), wherein the sample is selected by applying a first model to the sample (Figs. 1-2, an analytics determiner (i.e., a "model") running on the local client device runs analytics to generate an output/result); and storing an indication of the inference in a data structure (paragraph 17). While Chew fails to teach the remaining limitations, Lee also teaches a method performed by a server system (Figs. 1-3), further performing: providing the sample to an intermediary layer of a second model as input, so as to produce an output that is representative of an inference made in relation to the sample (Figs. 2 and 6C and paragraphs 4, 38, 40 and 58 teach the claimed "intermediate layer" in the form of at least a "conv" layer as part of the cloud's model that receives the operation result from an edge device 300), so as to produce a second set of outputs (Fig. 4 and paragraphs 33-39 teach that the cloud device/server runs its own analysis sequentially through secondary layers up to a final layer; each layer outputs an inference until a final output result for the inference is output as a result ("cloud exit")).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Lee into the system of Chew because combining Lee and Chew requires only routine skill and produces predictable results. The modification involves applying Chew's confidence threshold comparison mechanism, a discrete, well-defined software component comprising a confidence level comparator and a data sender (see paragraph 43 and Figs. 6 and 9), to Lee's existing confidence evaluation step. Lee already implements confidence comparison as a core architectural element (paragraph 38); the modification is the substitution of a user-configurable threshold value for a predetermined one. There is no teaching away in either reference, since Chew does not criticize user-defined thresholds as inferior to predetermined ones, and Lee does not teach that its predetermined threshold is structurally essential. The combination produces the predictable result of a more flexible and deployable distributed DNN inference system. See KSR Int'l Co. v. Teleflex Inc.

Regarding claim 17, Chew teaches the claimed wherein the data structure is maintained in memory of the server system (paragraph 17).

Regarding claim 18, Chew teaches the claimed wherein the sample is representative of a digital image of the environment (paragraphs 15-17, 20 and 33 teach a camera capturing images/video).

Regarding claim 19, Chew teaches the claimed wherein the edge device is further configured to: for each output in the first set of outputs, indicate that the output is an appropriate inference in a data structure in response to a determination that the confidence does exceed the threshold (Figs. 1-3 and paragraphs 4 and 38 teach determining whether a calculated confidence level exceeds a threshold, in which case the result is appropriate and is output as an "exit").

Regarding claim 22, Chew and Lee teach the claimed wherein the first model requires fewer computational resources than the second model to produce an output when applied to a given image (as discussed above, the first model run on Chew's system is less robust than the cloud system of Lee). The prior motivation as discussed above is incorporated herein.

Regarding claim 23, Chew and Lee teach the claimed wherein the threshold is programmed in memory of the camera (Chew: Fig. 1 and paragraph 25, threshold data is stored on the client device; Lee: paragraphs 4 and 38 teach the threshold being part of the system, which includes the edge device 300). The prior motivation as discussed above is incorporated herein.

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Chew et al. (US 2018/0181868) in view of Lee et al. (US 2020/0311546) and further in view of Yang et al. (US 2017/0076195).

Regarding claim 10, Chew and Lee teach the claimed as discussed in claim 8, wherein Chew's system is able to perform classification/object detection operations and Lee's DNN is also processing images to determine inferences, but they fail to teach, while Yang teaches, the claimed wherein the first and second models are classification models (paragraphs 31 and 33, classifier functions).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Yang such that the generic functions of Chew and Lee would also allow for the use of classification functions in both the first and second models as taught by Yang, because said incorporation allows for the benefit of enhancing user experience when interacting with such devices (paragraphs 1-4).

Regarding claim 11, Chew and Lee teach the claimed as discussed in claim 8, wherein Chew's system is able to perform classification/object detection operations and Lee's DNN is also processing images to determine inferences, but Yang teaches the claimed wherein the first and second models are object detection models (paragraphs 36-37, object detection models). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Yang such that the generic functions of Chew and Lee would also allow for the use of classification functions in both the first and second models as taught by Yang, because said incorporation allows for the benefit of enhancing user experience when interacting with such devices (paragraphs 1-4).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Chew et al. (US 2018/0181868) in view of Lee et al. (US 2020/0311546) and further in view of Nitz et al. (US 2015/0213371).

Regarding claim 15, Chew teaches the claimed as discussed in claim 1 above, but fails to teach, while Nitz teaches, wherein said applying, said determining, and said causing are performed in real time as the samples are generated by the edge device (paragraphs 0043, 0075 and 0076). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Nitz into the system of Chew and Lee because said incorporation allows for the benefit of determining an inference context based on real-time input (abstract).

Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Chew et al. (US 2018/0181868) in view of Lee et al. (US 2020/0311546) and further in view of Takeuchi et al. (US 2010/0295944).

Regarding claim 20, Chew in its proposed combination with Lee fails to teach, but Takeuchi teaches, the claimed wherein the server system is further configured to: cause transmission of the second set of outputs to the edge device (Figs. 5-7 and paragraph 76 teach wherein analysis from server 130 is sent back to the camera 100); and wherein the camera is further configured to: establish that an activity or an object of interest is included in at least one image in the first set of images (Figs. 5-7 and paragraphs 62-76, a metadata combining section combines initial metadata generated by the camera 100 with metadata from the analysis server 130; paragraphs 79-81 teach wherein event information is generated by the rules engine section within the camera) based on an analysis of (i) confident outputs in the first set of outputs and (ii) the second set of outputs (Figs. 5-7 and paragraphs 76-79, the metadata combining section combines initial metadata generated by the camera 100 with metadata from the analysis server 130), and cause a notification that specifies the activity or the object of interest to be presented by a computer program executing on a mediatory device (Fig. 5, metadata and event data are sent to a central server 10, and paragraphs 80-81 and 121 teach wherein an alert is generated on server 10 to alert a user of an event). It would have been obvious to one of ordinary skill in the art before the effective filing date of the current application to incorporate the teachings of Takeuchi into the system of Chew and Lee such that the results of the analysis are sent back to the camera in Chew, because such an incorporation allows for the benefit of sharing resources to assist sink/edge devices with additional resources when required (paragraphs 7-19).

Regarding claim 21, Chew and Lee teach the claimed as discussed in claim 1 above, but fail to teach, while Takeuchi teaches, wherein the server system is further configured to: cause transmission of the second set of outputs to a computer program executing on a mediatory device (Figs. 5 and 12, wherein the first and second outputs (metadata A1 and metadata B3) and event data are sent to a central server 10); and wherein the camera is further configured to: cause transmission of confident outputs in the first set of outputs to the computer program executing on the mediatory device (Figs. 5 and 12, wherein the first and second outputs (metadata A1 and metadata B3) and event data are sent to a central server 10). The prior motivation as discussed above is incorporated herein.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GELEK W TOPGYAL, whose telephone number is (571) 272-8891. The examiner can normally be reached M-F, 9:30-6 PST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GELEK W TOPGYAL/
Primary Examiner, Art Unit 2481
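For orientation, the system the rejection maps onto Chew and Lee is a confidence-gated split-inference pipeline: the edge model exits locally when confident and otherwise ships intermediate activations (not raw pixels) to a larger server-side model, which takes them at a middle layer (claims 8, 9, 12, and 16). The sketch below illustrates that flow in self-contained Python/NumPy; all weights, shapes, and the threshold value are hypothetical stand-ins, not anything taken from the application or the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights standing in for trained models.
W_edge = rng.normal(size=(64, 16))        # edge feature extractor
W_edge_head = rng.normal(size=(16, 10))   # edge classifier head ("local exit")
W_mid = rng.normal(size=(16, 32))         # server-side middle layer
W_cloud_head = rng.normal(size=(32, 10))  # server classifier head ("cloud exit")

CONFIDENCE_THRESHOLD = 0.8  # claim 23 has this value stored in camera memory

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def edge_model(image):
    """First (lightweight) model: class probabilities plus the intermediate
    feature map that 'partially represents' the image (claim 8)."""
    features = np.tanh(image @ W_edge)
    probs = softmax(features @ W_edge_head)
    return probs, features

def server_model(features):
    """Second (larger) model: the feature map enters a middle layer
    rather than the input layer (claims 9 and 16)."""
    deep = np.tanh(features @ W_mid)
    return softmax(deep @ W_cloud_head)

def edge_inference(image):
    """Confidence-gated offload (claims 8 and 12): exit locally when
    confident, otherwise transmit only the feature map to the server."""
    probs, features = edge_model(image)
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return int(probs.argmax())        # confident local exit
    cloud_probs = server_model(features)  # stands in for network transmission
    return int(cloud_probs.argmax())

image = rng.normal(size=64)               # flattened stand-in "image"
print(edge_inference(image))
```

In the fuller claim set, the server also returns its outputs to the camera, which merges them with its confident local outputs to decide whether to raise a notification (claims 20-21); that feedback path is omitted from the sketch for brevity.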

Prosecution Timeline

Oct 06, 2023: Application Filed
May 17, 2025: Non-Final Rejection — §103
Jun 18, 2025: Examiner Interview Summary
Jun 18, 2025: Applicant Interview (Telephonic)
Jul 08, 2025: Response Filed
Oct 19, 2025: Final Rejection — §103
Dec 16, 2025: Applicant Interview (Telephonic)
Dec 16, 2025: Examiner Interview Summary
Jan 06, 2026: Response after Non-Final Action
Jan 16, 2026: Request for Continued Examination
Jan 18, 2026: Response after Non-Final Action
Mar 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601836
RADIO-WAVE SENSOR INSTALLATION ASSISTANCE DEVICE, COMPUTER PROGRAM, AND RADIO-WAVE SENSOR INSTALLATION POSITION DETERMINATION METHOD
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597341
INSTALLATION SUPPORT DEVICE FOR RADIO WAVE SENSOR, COMPUTER PROGRAM, METHOD OF DETERMINING INSTALLATION POSITION OF RADIO WAVE SENSOR, AND METHOD OF SUPPORTING INSTALLATION OF RADIO WAVE SENSOR
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592263
VIDEO VARIATION EFFECTS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586607
MULTIMEDIA PROCESSING METHOD AND APPARATUS BASED ON ARTIFICIAL INTELLIGENCE, AND ELECTRONIC DEVICE
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12567445
VIDEO REMIXING METHOD
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 59%
With Interview: 78% (+19.3%)
Median Time to Grant: 3y 8m
PTA Risk: High
Based on 604 resolved cases by this examiner. Grant probability derived from career allow rate.
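The displayed projections are reproducible if the dashboard simply adds the interview lift to the career allow rate, which matches the rounded figures shown; a quick check under that assumption:

```python
granted, resolved = 355, 604
base = granted / resolved        # 0.5877... -> displayed as 59%
lift = 0.193                     # examiner's observed interview lift
with_interview = base + lift     # 0.7807... -> displayed as 78%
print(f"{base:.1%} base, {with_interview:.1%} with interview")
```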
