Prosecution Insights
Last updated: April 19, 2026
Application No. 18/090,944

SYSTEMS AND METHODS FOR ON-DEVICE TRAINING MACHINE LEARNING MODELS

Final Rejection §103
Filed: Dec 29, 2022
Examiner: COLEMAN, PAUL
Art Unit: 2126
Tech Center: 2100 — Computer Architecture & Software
Assignee: The ADT Security Corporation
OA Round: 2 (Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70%, above average (7 granted / 10 resolved; +15.0% vs TC avg)
Interview Lift: +42.9% among resolved cases with interview
Avg Prosecution: 3y 6m typical; 23 applications currently pending
Total Applications: 33, across all art units

Statute-Specific Performance

§101: 36.3% (-3.7% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 10 resolved cases
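As a quick consistency check, the four statute rows can be reconciled against a single Tech Center average, assuming each delta is the examiner's rate minus the TC average (the dashboard does not state the formula):

```python
# Recover the implied Tech Center average from each statute's figures:
# examiner allow rate minus the stated "vs TC avg" delta.
rates = {
    "§101": (36.3, -3.7),
    "§103": (42.0, +2.0),
    "§102": (6.2, -33.8),
    "§112": (12.4, -27.6),
}
tc_avg = {k: round(rate - delta, 1) for k, (rate, delta) in rates.items()}
print(tc_avg)  # every statute implies the same ~40.0% TC average
```

All four rows imply the same 40.0% Tech Center average, so the deltas are internally consistent with one TC-average estimate.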

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The present application is being examined under the claims filed 01/07/2026. The status of the claims is as follows: Claims 1-20 are pending. Claims 1-3, 5-9, 12-16, and 19-20 are amended.

Response to Amendment

This Office Action is in response to Applicant's communication filed January 07, 2026, responding to the Office Action mailed October 07, 2025. The Applicant's remarks and any amendments to the claims or specification have been considered, with the results that follow.

Response to Arguments

Regarding 35 U.S.C. § 101: Applicant argues at pages 12-17 of the Remarks filed January 07, 2026 that claims 1-20 are not directed to an abstract idea, particularly because claims 1, 7, and 14 recite identifying an object using a pretrained machine learning model, and further recite generating an updated machine learning model by training the pretrained machine learning model, the training comprising adding at least one new classifier to the pretrained machine learning model. Applicant further argues that the claims recite a practical application and, in any event, amount to significantly more than any alleged abstract idea.

Examiner's response: Applicant's arguments under 35 U.S.C. § 101 have been considered and are persuasive. In view of the amendments to claims 1, 7, and 14, including the recitations of end-user feedback comprising at least one label entered by the end-user and training that comprises adding at least one new classifier to the pretrained machine learning model, the prior rejection under 35 U.S.C. § 101 set forth at pages 2-14 of the Non-Final Office Action is withdrawn.

Regarding 35 U.S.C. § 103: Applicant argues at pages 17-21 of the Remarks that Sulzer and Sanketi fail to disclose or suggest the amended features of independent claims 1, 7, and 14, including receiving an end-user feedback indication comprising at least one label entered by the end-user and generating an updated machine learning model by training the pretrained machine learning model, wherein the training comprises adding at least one new classifier to the pretrained machine learning model. Applicant further argues that Sulzer is directed to expert-driven server-side training, that Sanketi describes automated on-device training without specific user feedback, and that a person of ordinary skill in the art would not have been motivated to combine Sulzer with Sanketi to arrive at an end-user manual labeling system.

Examiner's response: Applicant's arguments have been fully considered but are not persuasive. The present rejection has been revised in view of Applicant's amendments and is based on the combined teachings of Sulzer et al., Sanketi et al., and Zhou et al. Sulzer continues to teach the premises-security/video-surveillance context, including object identification, annotation/labeling of objects in media, and training of object-detection models using labeled media. Sanketi continues to teach the on-device machine-learning framework, including device-resident example collection, on-device retraining, and use of locally stored training examples in retraining a model. Zhou teaches the additional amended feature that training of a pretrained neural network may comprise adding a task-specific classifier and generating a task-specific layer that is used to update the neural network.
In particular, Zhou teaches that "in the Classifier model, the classifier in the last layer of the neural network may be tuned … each task may add a task specific classifier and batch normalization layers …" and further teaches "for each task in the plurality of sequential tasks: generating a task specific layer in the task specific neural network; and updating the neural network with the task specific layer".

Applicant's argument that Sulzer and Sanketi alone fail to disclose or suggest the amended claims is not persuasive, because the present rejection does not rely on Sulzer and Sanketi alone for the newly added classifier limitation. Applicant's argument that Sanketi does not disclose an end-user manual labeling workflow also is not persuasive; the rejection does not rely on Sanketi alone for manual object labeling. Sulzer teaches manual annotation/labeling of objects in media with classification data, and Sanketi teaches receiving training examples from an application on the user device and using such examples in on-device retraining. The rejection relies on the combined teachings of these references to show that it would have been obvious to implement Sulzer's label-based object feedback in Sanketi's on-device training framework, while further using Zhou's task-specific classifier addition technique to update the pretrained model in accordance with the amended claim language. Accordingly, Applicant's arguments have been considered but are not persuasive in view of the revised rejection set forth below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sulzer et al. (US 2020/0364468 A1) in view of Sanketi et al. (US 2022/0004929 A1) and further in view of Zhou et al. (WO 2020/069039 A1).

Regarding claim 1: Sulzer in view of Sanketi and further in view of Zhou teaches a user device for a premises security system that comprises a plurality of premises devices, the user device being associated with an end-user and configured with a pretrained machine learning model for detecting object types, the user device comprising processing circuitry configured to:

"receive at least one media file from a media file database associated with the user device;" – Sulzer teaches this limitation in part (bolded). Sulzer teaches using security video/frames stored for training/testing in a database: "creating a database comprising said each frame … the database is searchable …" (Sulzer, pg. 1, ¶[0006]); "The recorded videos may be split into frames/images and … uploaded into the deep learning training server 230." (Sulzer, pg. 7, ¶[0095])

"identify, using the pretrained machine learning model, at least one dominant object in the at least one media file;" – Sulzer teaches this limitation.
Sulzer teaches identifying/detecting objects of interest in security imagery: "may be used to detect and/or identify one or more items of interest (such as … a weapon …)." (Sulzer, pg. 7, ¶[0094]); "bounding boxes and bounding polygons … added to … frames … are annotated/labeled with classification data" (Sulzer, pg. 12, ¶[0135])

"receive an end-user feedback indication associated with the at least one dominant object, the end-user feedback indication comprising at least one label entered by the end-user;" – Sulzer teaches this limitation in part (bolded). Sulzer teaches collecting label-based feedback linked to detected objects and iterating the dataset: "annotated/labeled with classification data" (Sulzer, pg. 12, ¶[0135]); "deployment/analytics phase includes model evaluation and may incorporate a feedback loop between model performance and database composition." (Sulzer, pg. 12, ¶[0133]). Sulzer further teaches manual labeling/subcategorization of detected objects: "the chosen frames may be annotated with a unique set of weapon labels and/or attributes which may separate out labeled objects into subcategories and allow the deep learning models to identify similar weapons with different characteristics …" (Sulzer, pg. 5, ¶[0066])

Sulzer does not teach: "and generate an updated machine learning model by training the pretrained machine learning model based at least in part of the feedback indication and the at least one media file," or "the training of the pretrained machine learning model comprising adding at least one new classifier to the pretrained machine learning model."

Sanketi, however, teaches the first of these limitations. Sanketi teaches retraining a device-resident machine-learned model based on training examples stored by the centralized example database: "receiving an instruction … to re-train the machine-learned model based at least in part on one or more training examples stored by a centralized example database; and … causing the machine-learned model to be re-trained …" (Sanketi, pg. 14, claim 22); "At 1002, the computing system can re-train the first machine-learned model based at least in part on the one or more training examples stored by the centralized example database. At 1004, the computing system can re-train the first machine-learned model based at least in part on the one or more training examples stored by the centralized example database." (Sanketi, pg. 13, ¶¶[0178]-[0179])

Neither Sulzer nor Sanketi teaches the limitation: "the training of the pretrained machine learning model comprising adding at least one new classifier to the pretrained machine learning model." Zhou, however, teaches this limitation. Zhou teaches training from a pretrained model and adding task-specific classifier structure during training: "In the Classifier model, the classifier in the last layer of the neural network may be tuned, while the former twenty file layers are transferred from the ImageNet pretrained model and are kept fixed during training. In this case, each task may add a task specific classifier and batch normalization layers, …" (Zhou, pg. 15, ¶[0060]). Zhou further teaches generating and storing new task-specific layers in the network and updating the network with those added structures: "for each task in the plurality of sequential tasks: generating a task specific layer in the task specific neural network; and updating the neural network with the task specific layer." (Zhou, pg. 20, claim 9); "structure optimizer 160 may maintain super network S so that the new task-specific layers and new shareable layers may be stored in super network S" (Zhou, pg. 8, ¶[0031]); "The 'new' choice may cause structure optimizer 160 to spawn one or more new parameters that do not exist in super network S." (Zhou, pg. 9, ¶[0033])

A POSITA would have been motivated at the time of the claimed invention to modify the Sulzer/Sanketi combination to use the known continual/class-incremental training technique of adding a task-specific classifier to a pretrained neural network, as taught by Zhou, in order to adapt a pretrained model to new labeled object types while preserving previously learned features and reducing the need to retrain the entire network, with the predictable benefit of efficient personalized updating on the user device.

A POSITA would likewise have been motivated to implement Sulzer's labeled object-detection training/feedback pipeline on Sanketi's on-device platform to (i) keep examples/labels locally for privacy and latency, (ii) enable device-managed storage and background training, and (iii) employ Zhou's known technique of adding a task-specific classifier to a pretrained model so that the device could efficiently adapt the pretrained model to newly labeled object categories, yielding predictable benefits in personalization, model extensibility, and efficient incremental learning.

Regarding claim 2: Sulzer in view of Sanketi and further in view of Zhou teaches the user device of claim 1, wherein the processing circuitry is further configured to:

"determine or modify an alarm configuration comprising a mapping of at least one detected object type to at least one corresponding premises security action;" – Sulzer teaches this limitation. Sulzer teaches determining alert/response parameters tied to detected object events. In particular, Sulzer teaches that metadata output results from video inference are filtered and analyzed, and that "the metadata output values are saved and used to determine the minimum and maximum parameters for triggering alerts." (Sulzer, p. 5, ¶[0073]), thus reducing false positives when triggering real-time weapon detection alerts.

"" – Sulzer teaches this limitation in part.
Sulzer teaches exporting/deploying trained detection models to an IVS server and using associated metadata-derived parameters for alerting. In particular, Sulzer teaches that "each high-performing model is further tested by exporting the model from the deep learning server and integrating it into an IVS server" (Sulzer, p. 5, ¶[0071]); "the metadata output from these testing runs may be exported …" (Sulzer, p. 5, ¶[0072]); "the metadata output values are saved and used to determine the minimum and maximum parameters for triggering alerts." (Sulzer, p. 5, ¶[0073]); and "triggering real-time weapon detection alerts." (Sulzer, p. 5, ¶[0073]). Sulzer therefore teaches transmitting/deploying a trained model to a server-side component and using associated configuration-like parameters derived from metadata for performing security actions.

Sulzer does not expressly teach: "and cause transmission of the alarm configuration and the updated machine learning model to a control device …"

Sanketi, however, teaches this part of the limitation. Sanketi teaches distribution flows for plans/model parameters/training results and return of an updated model: "Prediction and training plans, as well as model parameters, are provided by or otherwise managed by the artifact manager 434. The artifact manager 434 can support retrieving artifacts from a cloud server 210, from application assets, and/or from files. It can also support mutable artifacts, for example, for storing training results which are consumed by a predictor or another trainer." (Sanketi, p. 9, ¶[0104]); "The platform 122 can transmit the update to a central server computing device (e.g., "the cloud") for aggregation with other updates provided by other computing devices." (Sanketi, p. 7, ¶[0078]); "The updated global model can then be re-sent to the user device." (Sanketi, p. 8, ¶[0098])

A POSITA at the time of the claimed invention would have been motivated to combine Sulzer, Sanketi, and Zhou so that Sulzer's alert/action mapping in a premises-security context could be used with Sanketi's model/plan distribution framework and Zhou's updated-model training technique, thereby enabling the user device to transmit an alarm configuration together with the updated machine learning model to a control device for carrying out premises-security actions, with predictable benefits in deployment flexibility, personalization, and improved object-detection performance.

Regarding claim 3: Sulzer in view of Sanketi and further in view of Zhou teaches the user device of claim 1, wherein the processing circuitry is further configured to:

"cause transmission of the " – Sulzer teaches this limitation in part. Sulzer teaches: "Model A was then exported from the deep learning server and deployed on the IVS server." (Sulzer, p. 5, ¶[0074]); "Multiple filter scenarios are analyzed to determine the best configuration specific to each model that is trained." (Sulzer, p. 5, ¶[0074])

"responsive to causing transmission of the updated machine learning model to the other user device, " – Sulzer teaches this limitation in part. Sulzer teaches: "deployment/analytics phase includes model evaluation and may incorporate a feedback loop between model performance and database composition." (Sulzer, p. 12, ¶[0133]); "one or more of the frames are annotated/labeled with classification data" (Sulzer, p. 12, ¶[0134])

"and detect at least one object type of at least one additional media file based at least in part on the further updated machine learning model." – Sulzer teaches this limitation in part. Sulzer teaches: "may be used to detect and/or identify one or more items of interest" (Sulzer, p. 6, ¶[0077]). And Sulzer teaches use of recorded security-camera video in the surveillance environment: "the recorded video data is taken from actual security cameras in real-life environments" (Sulzer, p. 6, ¶[0077])

Sulzer does not teach: "… updated … to another user device of the premises security system … for further training … to generate a further updated machine learning model;" "… receive the further updated machine learning model, …" "… further updated machine learning model."

Sanketi, however, teaches these limitations in part. Sanketi teaches: "The federated server 212 provides the training plan 208 to the device 216. The device 216 can implement the training plan 208 to perform on-device training based on locally stored data." (Sanketi, p. 8, ¶[0090]); "the device 216 can provide an update to the federated server 212." (Sanketi, p. 8, ¶[0090]); "the update can describe one or more parameters of the re-trained model or one or more changes to the parameters of the model that occurred during the re-training of the model." (Sanketi, p. 8, ¶[0090])

Sanketi further teaches: "The federated server 212 can receive many of such updates from multiple devices and can aggregate the updates to generate an updated global model. The updated global model can then be re-sent to the device 216." (Sanketi, p. 8, ¶[0091]). Sanketi further teaches training based on additional examples collected on-device, namely that "The background process 404 can implement or interact with one or more training engines 406 to pull data from an example collection and execute a training plan." (Sanketi, p. 9, ¶[0112]) and "The plan can describe how to query the collection for training data" (Sanketi, p. 9, ¶[0112])

Sanketi also teaches use of the retrained/returned model for subsequent inference: "The updated global model can then be re-sent to the device 216." (Sanketi, p. 8, ¶[0091]); "The device 216 can implement the inference plan 206 to generate inferences." (Sanketi, p. 8, ¶[0089])

Sanketi does not teach: "updated", "further updated", and "updated machine learning model". Zhou, however, teaches these portions of the limitations. Zhou teaches that the model used after additional subsequent training is a successively updated network, including: "updating the neural network with the at least one parameter." (Zhou, p. 18, claim 1); "and updating the neural network with the task specific layer." (Zhou, p. 20, claim 9)

A POSITA would have been motivated to use the further updated model after additional distributed/sequential training to detect object types in additional media files, because that is the predictable purpose of retraining in Sanketi and detection in Sulzer, with Zhou supplying the inherited updated/further-updated network architecture.

Regarding claim 4: Sulzer in view of Sanketi and further in view of Zhou teaches the user device of claim 1, wherein the feedback indication comprises at least one of: "an object type label; an accuracy indication; and a threat level indicator." – Sulzer teaches this limitation. Sulzer teaches: "The chosen video frames are then processed to include bounding boxes and/or bounding polygons and labels." (Sulzer, pg. 5, ¶[0066]); "The output of a video inference test job replicates the output of real-time inference with metadata including … confidence score" (Sulzer, pg. 8, ¶[0101]). Because the claim recites "at least one of", Sulzer's labels and confidence score (accuracy indication) satisfy the limitation; no additional teaching is required.
It would have been obvious to a POSITA to implement Sulzer's feedback indications using Sanketi's standardized on-device platform, which exposes collection API / prediction API / training API services and a device-side centralized example database, in order to streamline capture, storage, and reuse of such labels/accuracy indications on user devices, yielding predictable benefits in manageability, latency, and privacy without altering the underlying functions taught by Sulzer. To the extent claim 4 depends from amended claim 1, the rejection further relies on Zhou as set forth above with respect to claim 1.

Regarding claim 6: Sulzer in view of Sanketi and further in view of Zhou teaches the user device of claim 1, wherein the processing circuitry is further configured to:

"cause transmission of the " – Sulzer teaches this limitation in part. Sulzer teaches, in the premises-security/surveillance context, that "The recorded videos may be split into frames/images and the frames/images are uploaded into the deep learning training server 230, where … neural networks are trained on the collected data." (Sulzer, p. 7-8, ¶[0095])

Sulzer does not teach these portions of the limitation: "… updated …" and "for providing on-device object detection for at least one additional device associated with a user account of the premises security system."

Sanketi, however, teaches the latter portion in part. Sanketi teaches that "given a URI for a training plan (e.g., instructions for training the model), the on-device machine learning platform 122 can run training of the model 132a-c (e.g., by interacting with a machine learning engine 128 to cause training of the model 132a-c by the engine 128) based on previously collected examples." (Sanketi, p. 7, ¶[0076]); "the re-trained model 132a-c can be used to provide inferences" (Sanketi, p. 7, ¶[0077]); "the machine learning platform 122 can upload logs or other updates regarding the machine-learned models 132a-c to the cloud" (Sanketi, p. 7, ¶[0078])

Sanketi further teaches that the background process can execute a training plan, that said plans may be downloaded from a federated learning server or a cloud server, and that "The federated server 212 provides the training plan 208 to the device 216. The device 216 can implement the training plan 208 to perform on-device training based on locally stored data." (Sanketi, p. 8, ¶[0090]). Sanketi also teaches artifact/model distribution via remote infrastructure: "Prediction and training plans, as well as model parameters, are provided by or otherwise managed by the artifact manager 434. The artifact manager 434 can support retrieving artifacts from a cloud server 210," (Sanketi, p. 9, ¶[0104])

Sanketi does not teach this portion of the limitation: "… updated …" Zhou, however, teaches this remaining portion. Zhou teaches: "for each task in the plurality of sequential tasks … retraining … the at least one parameter in the task specific neural network; and updating the neural network with the at least one parameter." (Zhou, p. 18, claim 1). Zhou further teaches: "for each task in the plurality of sequential tasks: generating a task specific layer in the task specific neural network; and updating the neural network with the task specific layer." (Zhou, p. 20, claim 9). Zhou also teaches that, in the Classifier model, "each task may add a task specific classifier and batch normalization layers" (Zhou, p. 15, ¶[0060])

A POSITA would have been motivated to combine Sulzer, Sanketi, and Zhou so that Sulzer's premises-security model deployment context could use Sanketi's remote-server/artifact-distribution framework to provide on-device detection to additional devices associated with the same user account, while Zhou supplies the inherited updated-model architecture from amended claim 1, with predictable benefits in manageability, account-wide personalization, and scalable deployment across devices.

Regarding claim 7: Claim 7 is rejected under 35 U.S.C. § 103 for substantially the same reasons set forth above with respect to claim 1. Claim 7 recites, in system form, the same amended user-device limitations analyzed above for claim 1, including receiving an end-user feedback indication comprising at least one label entered by the end-user and generating an updated machine learning model by training the pretrained machine learning model, the training comprising adding at least one new classifier to the pretrained machine learning model. The additional control-device recitations do not materially alter the obviousness analysis and are taught or suggested for the reasons previously set forth in the prior Office Action and above.

Regarding claims 8-13: Claims 8-13 are rejected under 35 U.S.C. § 103 as being unpatentable over Sulzer et al. in view of Sanketi et al. and further in view of Zhou et al. for substantially the same reasons discussed above with respect to amended claim 7. Claims 8-13 do not add material limitations that alter the obviousness analysis beyond the amendments to independent claim 7. To the extent claims 8, 9, 12, and 13 now expressly recite an updated machine learning model or a further updated machine learning model, and to the extent claims 10 and 11 depend from amended claim 7, Zhou teaches generating an updated neural network during training by adding task-specific classifier structure.
In particular, Zhou teaches that "In the Classifier model, the classifier in the last layer of the neural network may be tuned … each task may add a task specific classifier and batch normalization layers …" (Zhou, p. 15, ¶[0060]), and further teaches "for each task in the plurality of sequential tasks: generating a task specific layer in the task specific neural network; and updating the neural network with the task specific layer." (Zhou, p. 20, claim 9). Accordingly, claims 8-13 are unpatentable over Sulzer in view of Sanketi and further in view of Zhou.

Regarding claim 14: Claim 14 is rejected under 35 U.S.C. § 103 for substantially the same reasons set forth above with respect to claim 1. Claim 14 recites, in method form, the same amended core limitations analyzed above for claim 1, including receiving an end-user feedback indication comprising at least one label entered by the end-user and generating an updated machine learning model by training the pretrained machine learning model, the training comprising adding at least one new classifier to the pretrained machine learning model. The remaining method steps correspond to the previously analyzed system/device functionality and do not materially alter the obviousness analysis.

Regarding claims 15-20: Claims 15-20 are rejected under 35 U.S.C. § 103 as being unpatentable over Sulzer et al. in view of Sanketi et al. and further in view of Zhou et al. for substantially the same reasons discussed above with respect to amended claim 14. Claims 15-20 do not add material limitations that alter the obviousness analysis beyond the amendments to independent claim 14. To the extent claims 15, 16, 19, and 20 now expressly recite an updated machine learning model or a further updated machine learning model, and to the extent claims 17 and 18 depend from amended claim 14, Zhou teaches generating an updated neural network during training by adding task-specific classifier structure.
In particular, Zhou teaches that "In the Classifier model, the classifier in the last layer of the neural network may be tuned … each task may add a task specific classifier and batch normalization layers …" (Zhou, p. 15, ¶[0060]), and further teaches "for each task in the plurality of sequential tasks: generating a task specific layer in the task specific neural network; and updating the neural network with the task specific layer." (Zhou, p. 20, claim 9). Accordingly, claims 15-20 are unpatentable over Sulzer in view of Sanketi and further in view of Zhou.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Paul Coleman, whose telephone number is (571) 272-4687. The examiner can normally be reached Mon-Fri. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAUL COLEMAN/
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126
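The classifier-addition technique the rejection attributes to Zhou (adding a new task-specific classifier head while keeping the pretrained layers fixed) is a standard transfer-learning pattern. The following is a minimal NumPy sketch for orientation only; all shapes, data, and function names are hypothetical and are not taken from Zhou, Sanketi, or the application:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: weights kept frozen during on-device training.
W_frozen = rng.normal(size=(8, 4))

def extract_features(x):
    """Pretrained layers: fixed, never updated below."""
    return np.tanh(x @ W_frozen)

def train_new_classifier(examples, labels, n_classes, lr=0.5, epochs=200):
    """Add a new softmax classifier head and train only that head
    on end-user-labeled examples (gradient descent on cross-entropy)."""
    feats = extract_features(examples)
    W_head = np.zeros((feats.shape[1], n_classes))  # the newly added classifier
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = feats @ W_head
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W_head -= lr * feats.T @ (p - onehot) / len(labels)
    return W_head

def predict(x, W_head):
    return int(np.argmax(extract_features(x) @ W_head, axis=1)[0])

# Hypothetical end-user feedback: feature vectors with user-entered labels.
X = rng.normal(size=(40, 8))
y = (X[:, 0] > 0).astype(int)  # a made-up 2-way labeling
W_head = train_new_classifier(X, y, n_classes=2)
acc = np.mean([predict(X[i:i + 1], W_head) == y[i] for i in range(40)])
print(f"training accuracy: {acc:.2f}")
```

Only the newly added head is trained on the labeled examples; the pretrained weights stay untouched, which is why this style of update avoids retraining the entire network, the efficiency rationale the motivation-to-combine passages rely on.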

Prosecution Timeline

Dec 29, 2022
Application Filed
Oct 03, 2025
Non-Final Rejection — §103
Jan 07, 2026
Response Filed
Mar 17, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597489: METHOD, DEVICE, AND COMPUTER PROGRAM FOR PREDICTING INTERACTION BETWEEN COMPOUND AND PROTEIN
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12574861: METHOD AND SYSTEM FOR ACCELERATING DISTRIBUTED PRINCIPAL COMPONENTS WITH NOISY CHANNELS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12443678: STEPWISE UNCERTAINTY-AWARE OFFLINE REINFORCEMENT LEARNING UNDER CONSTRAINTS
Granted Oct 14, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 99% (+42.9%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
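The projection figures are mutually consistent if the interview lift is read as relative (multiplicative) on the base grant probability, with the displayed value capped at 99%. Both the multiplicative reading and the cap are assumptions; the dashboard does not state its formula:

```python
# Hedged sketch of how the projection figures may relate (assumptions:
# relative lift, display capped at 99%).
base_grant = 0.70   # career allow rate: 7 granted / 10 resolved
lift = 0.429        # "+42.9%" interview lift

with_interview = min(base_grant * (1 + lift), 0.99)
print(f"{with_interview:.0%}")
```

Under these assumptions, 70% times 1.429 slightly exceeds 100%, and the cap yields the displayed 99%.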
