Prosecution Insights
Last updated: April 19, 2026
Application No. 17/625,635

IMPLEMENTING MACHINE LEARNING IN A RESOURCE-CONSTRAINED ENVIRONMENT

Non-Final OA: §102, §103
Filed
Jan 07, 2022
Examiner
SACKALOSKY, COREY MATTHEW
Art Unit
2128
Tech Center
2100 — Computer Architecture & Software
Assignee
UNIVERSITY OF SURREY
OA Round
3 (Non-Final)
Grant Probability: 64% (Moderate)
OA Rounds: 3-4
To Grant: 4y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (16 granted / 25 resolved; +9.0% vs TC avg)
Interview Lift: +49.4% among resolved cases with interview (strong)
Avg Prosecution: 4y 2m typical timeline; 39 applications currently pending
Career History: 64 total applications across all art units
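The headline allow rate can be reproduced from the raw counts shown on this page (a quick sanity check, using no figures beyond those above):

```python
# Career counts from the examiner card: 16 granted out of 25 resolved cases.
granted, resolved = 16, 25
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # → Career allow rate: 64%
```

The "+9.0% vs TC avg" figure is reported directly by the dashboard; the underlying Tech Center average is not shown here, so it is not recomputed.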

Statute-Specific Performance

§101: 42.0% (+2.0% vs TC avg)
§103: 38.0% (-2.0% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 25 resolved cases.

Office Action

§102, §103
DETAILED ACTION

This Office Action is in response to the RCE filed on 07/15/2025. Claims 1, 14, and 15 are currently amended. Claims 1-15 are pending in this application and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

In reference to Applicant's arguments on pages 6-9 regarding rejections made under 35 U.S.C. 102 and 103:

Rejection under 35 U.S.C. §§ 102 & 103

Claims 1-3 and 9-15 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by US 11550614 B2, referenced herein as JR. Claims 4-7 are rejected under 35 U.S.C. § 103 as being unpatentable over JR as applied to claims 1-3 and 9-15 above, and further in view of EP 2449477 B1, referenced herein as VIJAYAN. Claim 8 is rejected under 35 U.S.C. § 103 as being unpatentable over JR as applied to claims 1-3 and 9-15 above, and further in view of US 20210027182 A1, referenced herein as HARRIS.

Applicant respectfully submits that the cited references fail to teach or anticipate: "loading the contents of a package comprising a computer program into an encapsulated execution environment, wherein the computer program includes instructions to obtain a trained machine learning model from a cloud storage service, the cloud storage service being external to the encapsulated execution environment, and wherein a data storage size of the contents of the package is constrained from exceeding a package data storage size limit, wherein the package data storage size limit is less than a storage capacity of the encapsulated execution environment," as recited in amended claim 1 as presented herein.

The Office Action alleges JR anticipates "loading the contents of a package comprising a computer program into an encapsulated execution environment," citing p. 24, col. 11, lines 27-31 and p. 24, col. 12, lines 38-42 of JR. (Office Action, pp. 9-10).
Applicant respectfully disagrees, but in the interest of progressing prosecution has amended independent claim 1 to further clarify "loading the contents of a package comprising a computer program into an encapsulated execution environment, wherein the computer program includes instructions to obtain a trained machine learning model from a cloud storage service, the cloud storage service being external to the encapsulated execution environment, and wherein a data storage size of the contents of the package is constrained from exceeding a package data storage size limit, wherein the package data storage size limit is less than a storage capacity of the encapsulated execution environment." The Office Action cites JR as anticipating "loading the contents of a package comprising a computer program into an encapsulated execution environment." (Office Action, pp. 9 - 10). However, JR fails to teach or disclose, "wherein the package data storage size limit is less than a storage capacity of the encapsulated execution environment," as recited in amended claim 1. Additionally, JR fails to teach or disclose, "wherein the computer program includes instructions to obtain a trained machine learning model from a cloud storage service, the cloud storage service being external to the encapsulated execution environment," as recited in amended claim 1. In particular, JR fails to teach or disclose "obtaining, from a cloud storage service, a trained machine learning model" and "loading, in to temporary storage of the encapsulated execution environment, the trained machine learning model," as the program of JR includes the machine learning model and thus it cannot be a program that obtains the machine learning model from cloud storage and loads it into temporary storage of the encapsulated execution environment. 
The amended claim limitation further distinguishes from the system disclosed in JR as the amended claim limitation clarifies "the computer program includes instructions to obtain a trained machine learning model from a cloud storage service, the cloud storage service being external to the encapsulated execution environment". The remaining cited references fail to cure the deficiencies of JR. In particular, the remaining cited references fail to teach or disclose "wherein the package data storage size limit is less than a storage capacity of the encapsulated execution environment" and/or "wherein the computer program includes instructions to obtain a trained machine learning model from a cloud storage service, the cloud storage service being external to the encapsulated execution environment" as claimed in amended claim 1. Therefore, the cited references, individually or in combination, fail to teach or disclose "loading the contents of a package comprising a computer program into an encapsulated execution environment, wherein the computer program includes instructions to obtain a trained machine learning model from a cloud storage service, the cloud storage service being external to the encapsulated execution environment, and wherein a data storage size of the contents of the package is constrained from exceeding a package data storage size limit, wherein the package data storage size limit is less than a storage capacity of the encapsulated execution environment," as claimed in amended claim 1. For at least these reasons, Applicant submits that independent claim 1 is patentably distinct from the cited references and is allowable. Independent claims 14 and 15 are amended to incorporate similar limitations to those amended into claim 1. In this regard, Applicant submits that independent claims 14 and 15 are patentably distinct from the cited references and are allowable. 
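The disputed limitation describes a concrete deployment pattern: a small deployment package whose code fetches the (much larger) trained model from external cloud storage at runtime, rather than bundling the model inside the package. A minimal, purely illustrative sketch of that pattern follows; the size values, URL, and function names are hypothetical assumptions, not taken from the application or the cited references:

```python
import os
import tempfile
import urllib.request

# Hypothetical limits: the package cap is deliberately smaller than the
# environment's storage capacity, so the model must be fetched at runtime.
PACKAGE_SIZE_LIMIT = 250 * 1024 * 1024   # assumed 250 MB package cap
ENV_STORAGE_CAPACITY = 10 * 1024 ** 3    # assumed 10 GB inside the environment

def package_within_limit(package_size: int) -> bool:
    """The package (code only, no bundled model) must not exceed the limit."""
    return package_size <= PACKAGE_SIZE_LIMIT

def fetch_model(model_url: str) -> str:
    """Download the trained model from cloud storage (external to the
    execution environment) into the environment's temporary storage."""
    dest = os.path.join(tempfile.gettempdir(), "model.bin")
    urllib.request.urlretrieve(model_url, dest)
    return dest

# The claimed constraint: limit < capacity, so a model larger than the
# package limit can still be loaded into temporary storage and applied.
assert PACKAGE_SIZE_LIMIT < ENV_STORAGE_CAPACITY
```

Under this reading, the package size limit constrains only the shipped code, while the environment's larger capacity accommodates the model fetched separately, which is the distinction the amendment draws against a program that bundles its model.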
Applicant further submits that for at least the reasons relating to corresponding independent claims, the pending dependent claims are in condition for allowance. However, Applicant also notes that the patentability of the dependent claims does not hinge on the patentability of the independent claims. In particular, it is believed that some or all of these claims may possess features that are independently patentable, regardless of the patentability of the independent claims.

Examiner's response: Applicant's arguments have been fully considered but are moot in light of the amendments made to the independent claims. Applicant argues that the presented prior art references do not teach the newly amended claim limitations as presented. Examiner agrees, and further search and consideration must be conducted. The rejections made under 35 U.S.C. 102 and 103 are withdrawn and new grounds for rejection are presented below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-3 and 9-15 are rejected under 35 U.S.C. 103 as being unpatentable over Faulhaber et al (US 11550614 B2, hereinafter Faulhaber), in view of Lupesko et al (US 11763154 B1, hereinafter Lupesko), and in view of Singh et al (US 11868629 B1, hereinafter Singh).
Regarding Claim 1: Faulhaber teaches A computer-implemented method for utilizing a machine learning model in a resource-constrained environment, the method comprising: loading the contents of a package comprising a computer program into an encapsulated execution environment (Faulhaber [Col 11 lines 27-31]: "the ML scoring containers 150 are logical units created within a virtual machine instance using the resources available on that instance, and can be utilized to isolate execution of a task from other processes (e.g., task executions) occurring in the instance") and executing, by one or more computing devices, the computer program in the encapsulated execution environment, wherein executing the computer program (Faulhaber [Col 11 lines 27-31]: "the ML scoring containers 150 are logical units created within a virtual machine instance using the resources available on that instance, and can be utilized to isolate execution of a task from other processes (e.g., task executions) occurring in the instance") obtaining, from the cloud storage service, the trained machine learning model, (Faulhaber [Col 12 lines 52-61]: "The model hosting system 140 further forms the ML scoring container(s) 150 by retrieving model data corresponding to the identified trained machine learning model(s) in some embodiments...In embodiments in which a single model data file is identified in the deployment request, the model hosting system 140 retrieves the identified model data file from the training model data store 175 and inserts the model data file into a single ML scoring container") wherein a combined data storage size of the trained machine learning model and the contents of the package exceeds the package data storage size limit (Faulhaber [Col 8 lines 41-43]: "For example, the resources used to train a particular machine learning model can exceed the limitations of a single virtual machine instance"); loading, in to temporary storage of the encapsulated execution environment, the 
trained machine learning model (Faulhaber [Col 11 lines 7-9]: "The model hosting system 140 can then execute machine learning models using the compute capacity"; (EN): it can be reasonably understood by one skilled in the art that “compute capacity” is analogous to temporary memory akin to an L1 or L2 cache); Faulhaber does not distinctly disclose wherein the computer program includes instructions to obtain a trained machine learning model from a cloud storage service, the cloud storage service being external to the encapsulated execution environment and applying the trained machine learning model to derive one or more vector outputs based on one or more vector inputs However, Lupesko teaches wherein the computer program includes instructions to obtain a trained machine learning model from a cloud storage service, the cloud storage service being external to the encapsulated execution environment (Lupesko [Col 9 lines 1-10]: "At block 418, the controlling device may deploy the machine learning service for processing requests with the new model. The machine learning service may include may be deployed in a virtual private cloud or other virtualized environment. Deployment may include activating a network address to receive requests including input information to be processed by the machine learning service. 
The virtualized environment may be instantiated within an execution container allocated for the domain associated with the client.") and applying the trained machine learning model to derive one or more vector outputs based on one or more vector inputs (Lupesko [Col 3 lines 6-8]: "The outputs may include objects associated with an input, vectors of output values classifying respective inputs, or the like.") Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the packaging and deploying algorithms utilizing containers for flexible machine learning of Faulhaber with the systems and methods for automated generation of a machine learning model based in part on a pretrained model of Lupesko in order to provide a comprehensive system for providing trained machine learning models to virtual environments. The systems and methods presented in Lupesko are beneficial for Faulhaber in that they allow for the use of pretrained machine learning models in distributed systems (Lupesko [Col 2 lines 4-12]: “Wide adoption, however, can be hindered in part because not all users in these domains have sufficient time or resources to deploy state-of-the-art solutions. 
The features described in this application provide an end-to-end solution to generate hosted machine learning services for users with little or no prior knowledge of artificial intelligence techniques based on pre-trained models that are dynamically adapted to the specific problem presented by the user”) Faulhaber + Lupesko does not distinctly disclose and wherein a data storage size of the contents of the package is constrained from exceeding a package data storage size limit wherein the package data storage size limit is less than a storage capacity of the encapsulated execution environment; However, Singh teaches and wherein a data storage size of the contents of the package is constrained from exceeding a package data storage size limit (Singh [Col 53, lines 53-59]: "based a particular capacity model result, C.sub.i, of the capacity model results (566), the storage system sizing service (404) may determine that a single storage system or a combination of storage systems of storage systems may provide storage capacity that satisfies a storage capacity specification after accounting for capacity overhead incurred in implementing all selected storage services") wherein the package data storage size limit is less than a storage capacity of the encapsulated execution environment (Singh [Col 53, lines 53-59]: "based a particular capacity model result, C.sub.i, of the capacity model results (566), the storage system sizing service (404) may determine that a single storage system or a combination of storage systems of storage systems may provide storage capacity that satisfies a storage capacity specification after accounting for capacity overhead incurred in implementing all selected storage services"); Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the packaging and deploying algorithms utilizing containers for flexible machine learning of Faulhaber + Lupesko with the methods, 
apparatus, and products for dynamic storage system sizing of Singh in order to provide a comprehensive system for distributed systems storage. The systems and methods presented in Singh are beneficial for Faulhaber + Lupesko in that they allow for the use of dynamic storage that doesn’t exceed certain limits (Singh [Col 14 lines 1-6]: “storage cluster 161 is scalable, meaning that storage capacity with non-uniform storage sizes is readily added, as described above. One or more storage nodes 150 can be plugged into or removed from each chassis and the storage cluster self-configures in some embodiments.”) Regarding Claim 2: Faulhaber teaches The method of claim 1, wherein the encapsulated execution environment is a container (Faulhaber [Col 2 lines 37-41]: "users can create or utilize relatively simple containers adhering to a specification of a provider network, where the containers include code for how a machine learning model is to be trained and/or executed"). Regarding Claim 3: Faulhaber teaches The method of claim 1, wherein the encapsulated execution environment is a virtual machine (Faulhaber [Col 2 lines 49-53]: "embodiments enable a single physical computing device (or multiple physical computing devices) to host one or more instances of virtual machines that appear and operate as independent computing devices to users"). 
Regarding Claim 9: Faulhaber teaches The method of claim 1, wherein the trained machine learning model has been trained using a first machine learning framework (Faulhaber [Col 31 lines 11-15]: "In some embodiments, the machine learning models may be “custom” algorithms developed by users, and/or use custom code to train using existing algorithms such as deep learning frameworks (e.g., TensorFlow, Apache MXNet, etc.).") and wherein applying the trained machine learning model uses a second machine learning framework (Faulhaber [Col 12 lines 1-6]: "In some embodiments, the OS 152 and the runtime 154 are the same as the OS 144 and runtime 146 utilized by the virtual machine instance 142. In some embodiments, the OS 152 and/or the runtime 154 are different than the OS 144 and/or runtime 146 utilized by the virtual machine instance 142"). Regarding Claim 10: Faulhaber teaches The method of claim 9, wherein a data storage size of the first machine learning framework is greater than a data storage size of the second machine learning framework (Faulhaber [Col 8 lines 41-53]: "For example, the resources used to train a particular machine learning model can exceed the limitations of a single virtual machine instance 122. However, the algorithm included in the container image can be in a format that allows for the parallelization of the training process. Thus, the model training system 120 can create multiple copies of the container image provided in a training request, initialize multiple virtual machine instances 122, and cause each virtual machine instance 122 to load a container image copy in one or more separate ML training containers 130. The virtual machine instances 122 can then each execute the code 136 stored in the ML training containers 130 in parallel."). Regarding Claim 11: Faulhaber teaches The method of claim 9, wherein the first machine learning framework is a development machine learning framework (Faulhaber [Col 3 lines 54-58]: "FIG. 
1 is a block diagram of an illustrative operating environment 100 in which machine learning models are trained and hosted, in some embodiments. The operating environment 100 includes end user devices 102, a model training system 120, a model hosting system 140") and the second machine learning framework is a production machine learning framework (Faulhaber [Col 3 lines 54-58]: "FIG. 1 is a block diagram of an illustrative operating environment 100 in which machine learning models are trained and hosted, in some embodiments. The operating environment 100 includes end user devices 102, a model training system 120, a model hosting system 140"). Regarding Claim 12: Faulhaber teaches The method of claim 1, wherein the contents of the package comprise a slimmed down machine learning framework (Faulhaber [Col 7 lines 1-4]: "the code 136 includes some or all of the executable instructions that form the container image of the ML training container 130 initialized therein") and applying the trained machine learning model uses the slimmed down machine learning framework (Faulhaber [Col 13 lines 55-63]: "a virtual machine instance 142 executes the code 156 stored in an identified ML scoring container 150 in response to the model hosting system 140 receiving the execution request. In particular, execution of the code 156 causes the executable instructions in the code 156 corresponding to the algorithm to read the model data file stored in the ML scoring container 150, use the input included in the execution request as an input parameter, and generate a corresponding output"). 
Regarding Claim 13: Faulhaber teaches The method of claim 12, wherein the slimmed down machine learning framework comprises a subset of a plurality of files of a full machine learning framework (Faulhaber [Col 7 lines 1-4]: "the code 136 includes some or all of the executable instructions that form the container image of the ML training container 130 initialized therein") wherein the subset excludes one or more of the plurality of files that are not accessed during one or more applications of the trained machine learning model using the full machine learning framework (Faulhaber [Col 12 lines 43-51]: "the model hosting system 140 forms the ML scoring container(s) 150 from one or more container images stored in the container data store 170 that are appropriate for executing the identified trained machine learning model(s). For example, an appropriate container image can be a container image that includes executable instructions that represent an algorithm that defines the identified trained machine learning model(s)"; (EN): including only appropriate files is analogous to excluding inappropriate files).

Regarding Claim 14: Due to claim language similar to that of claim 1, claim 14 is rejected for the same reasons as presented above in the rejection of claim 1, with the exception of the limitation(s) covered below. Faulhaber teaches A data processing system comprising one or more processors configured to perform a method for utilizing a machine learning model in a resource-constrained environment (Faulhaber [Col 36 lines 45-51]: "Each of the one or more electronic devices 1520 may include an operating system that provides executable program instructions for the general administration and operation of that device and typically will include computer-readable medium storing instructions that, when executed by a processor of the device, allow the device to perform its intended functions.")

Regarding Claim 15: Due to claim language similar to that of claims 1 and 14, claim 15 is rejected for the same reasons as presented above in the rejection of claims 1 and 14.

Claim Rejections - 35 USC § 103

Claims 4-7 are rejected under 35 U.S.C. 103 as being unpatentable over Faulhaber, Lupesko, and Singh as applied to claims 1, 14, and 15 above, and further in view of Vijayan et al (EP 2449477 B1, hereinafter Vijayan).

Regarding Claim 4: Lupesko teaches processing, with the trained machine learning model, the one or more vector inputs to derive the one or more vector outputs (Lupesko [Col 3 lines 6-8]: "The outputs may include objects associated with an input, vectors of output values classifying respective inputs, or the like.") Faulhaber + Lupesko + Singh does not distinctly disclose The method of claim 1, wherein applying the trained machine learning model comprises: obtaining the one or more vector inputs by querying a non-relational database; However, Vijayan teaches The method of claim 1, wherein applying the trained machine learning model comprises: obtaining the one or more vector inputs by querying a non-relational database (Vijayan [0078]: "The management light index 245 and SS light index 247 may be implemented in a non-relational database format, such as C-Tree from Faircom, Inc., SimpleDB from Amazon, Inc., or CouchDB from the Apache Software Foundation.
In this way, the storage manager 105 may provide a faster response to client 130 or other requests than if it were to query management index 211"); Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the packaging and deploying algorithms utilizing containers for flexible machine learning of Faulhaber + Lupesko + Singh with the systems and methods for performing data storage operations, including content indexing, containerized deduplication, and policy-driven storage, within a cloud environment of Vijayan. The systems and methods presented in Vijayan are beneficial for Faulhaber + Lupesko + Singh in that they allow for data transfer over wide area networks (Vijayan [0024]: “The systems support a variety of clients and storage devices that connect to the system in a cloud environment, which permits data transfer over wide area networks, such as the Internet, and which may have appreciable latency and/or packet loss.”) Regarding Claim 5: Faulhaber + Lupesko + Singh does not distinctly disclose The method of claim 4, wherein the non-relational database partitions data entries across a plurality of database partitions using a partition key wherein the partition key is an entity identifier. 
However, Vijayan teaches The method of claim 4, wherein the non-relational database partitions data entries across a plurality of database partitions using a partition key (Vijayan [0066]: "A secondary storage computing device 165 may also include a content indexing component 205 to perform content indexing of data in conjunction with the archival, restoration, migration, or copying of data, or at some other time") wherein the partition key is an entity identifier (Vijayan [0066]: "A secondary storage computing device 165 may also include a content indexing component 205 to perform content indexing of data in conjunction with the archival, restoration, migration, or copying of data, or at some other time"). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the packaging and deploying algorithms utilizing containers for flexible machine learning of Faulhaber + Lupesko + Singh with the systems and methods for performing data storage operations, including content indexing, containerized deduplication, and policy-driven storage, within a cloud environment of Vijayan. The systems and methods presented in Vijayan are beneficial for Faulhaber + Lupesko + Singh in that they allow for data transfer over wide area networks (Vijayan [0024]: “The systems support a variety of clients and storage devices that connect to the system in a cloud environment, which permits data transfer over wide area networks, such as the Internet, and which may have appreciable latency and/or packet loss.”) Regarding Claim 6: Faulhaber + Lupesko + Singh does not distinctly disclose The method of claim 4, wherein querying the non-relational database utilises a sort key based on incrementing integer timestamps. 
However, Vijayan teaches The method of claim 4, wherein querying the non-relational database utilises a sort key based on incrementing integer timestamps (Vijayan [0003]: "Secondary copies include point-in-time data and are typically for intended for long-term retention (e.g., weeks, months or years depending on retention criteria, for example as specified in a storage policy as further described herein) before some or all of the data is moved to other storage or discarded. Secondary copies may be indexed so users can browse, search and restore the data at another point in time"). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the packaging and deploying algorithms utilizing containers for flexible machine learning of Faulhaber + Lupesko + Singh with the systems and methods for performing data storage operations, including content indexing, containerized deduplication, and policy-driven storage, within a cloud environment of Vijayan. The systems and methods presented in Vijayan are beneficial for Faulhaber + Lupesko + Singh in that they allow for data transfer over wide area networks (Vijayan [0024]: “The systems support a variety of clients and storage devices that connect to the system in a cloud environment, which permits data transfer over wide area networks, such as the Internet, and which may have appreciable latency and/or packet loss.”) Regarding Claim 7: Faulhaber + Lupesko + Singh does not distinctly disclose The method of claim 6, wherein the incrementing integer timestamps are Unix timestamps. 
However, Vijayan teaches The method of claim 6, wherein the incrementing integer timestamps are Unix timestamps (Vijayan [0003]: "Secondary copies include point-in-time data and are typically for intended for long-term retention (e.g., weeks, months or years depending on retention criteria, for example as specified in a storage policy as further described herein) before some or all of the data is moved to other storage or discarded. Secondary copies may be indexed so users can browse, search and restore the data at another point in time"; [0130]: "the operating system (e.g., a Windows operating system, a Unix operating system, a Linux operating system, etc.)"). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the packaging and deploying algorithms utilizing containers for flexible machine learning of Faulhaber + Lupesko + Singh with the systems and methods for performing data storage operations, including content indexing, containerized deduplication, and policy-driven storage, within a cloud environment of Vijayan. The systems and methods presented in Vijayan are beneficial for Faulhaber + Lupesko + Singh in that they allow for data transfer over wide area networks (Vijayan [0024]: “The systems support a variety of clients and storage devices that connect to the system in a cloud environment, which permits data transfer over wide area networks, such as the Internet, and which may have appreciable latency and/or packet loss.”) Claim Rejections - 35 USC § 103 Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Faulhaber, Lupesko, and Singh as applied to claims 1, 14, and 15 above, and further in view of Harris et al (US 20210027182 A1, hereinafter Harris). 
Regarding Claim 8: Faulhaber + Lupesko + Singh does not distinctly disclose The method of claim 1, wherein the trained machine learning model comprises a representation of one or more computational graphs in a neural network exchange format. However, Harris teaches The method of claim 1, wherein the trained machine learning model comprises a representation of one or more computational graphs in a neural network exchange format (Harris [0003]: "The computer system can also build a predictive model based on the smoothed topological graph using a supervised machine learning algorithm"). Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the packaging and deploying algorithms utilizing containers for flexible machine learning of Faulhaber + Lupesko + Singh with the systems and methods for automated machine learning model building process in order to reduce complexity and improve model performance of Harris. The systems and methods presented in Harris are beneficial for Faulhaber + Lupesko + Singh in that they allow for tuning of parameters to improve performance of machine learning models (Harris [Abstract]: “In addition, the settings and parameters for implementing the automated machine learning model building process can be tuned to improve performance of future models. The model building process can also be monitored to ensure that the current build is based on new information compared to previously builds”) Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to COREY M SACKALOSKY whose telephone number is (703)756-1590. The examiner can normally be reached M-F 7:30am-3:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /COREY M SACKALOSKY/Examiner, Art Unit 2128 /OMAR F FERNANDEZ RIVAS/Supervisory Patent Examiner, Art Unit 2128
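Claims 4-7, discussed in the rejections above, recite a familiar key-value access pattern: entries partitioned across a non-relational database by an entity-identifier partition key and ordered by an incrementing integer (Unix) timestamp sort key. A generic, hedged illustration of that pattern using a plain in-memory dict (not any specific database product, and not code from the application):

```python
from collections import defaultdict

# Toy non-relational store: partition key = entity id, sort key = Unix timestamp.
store: dict[str, dict[int, dict]] = defaultdict(dict)

def put(entity_id: str, ts: int, item: dict) -> None:
    """Write an item into the partition for entity_id under sort key ts."""
    store[entity_id][ts] = item

def query(entity_id: str, since_ts: int) -> list[dict]:
    """Return one partition's items at or after since_ts, in timestamp order."""
    partition = store[entity_id]
    return [partition[ts] for ts in sorted(partition) if ts >= since_ts]

put("sensor-1", 1700000000, {"v": 1.0})
put("sensor-1", 1700000060, {"v": 2.0})
put("sensor-2", 1700000000, {"v": 9.9})
print([item["v"] for item in query("sensor-1", 1700000000)])  # → [1.0, 2.0]
```

The partition key confines each query to a single entity's data, and the integer timestamp sort key makes range queries ("everything since t") a cheap ordered scan, which is why the claims pair the two.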

Prosecution Timeline

Jan 07, 2022
Application Filed
Dec 11, 2024
Non-Final Rejection — §102, §103
Mar 19, 2025
Response Filed
Apr 09, 2025
Final Rejection — §102, §103
Jun 20, 2025
Interview Requested
Jul 01, 2025
Examiner Interview Summary
Jul 15, 2025
Request for Continued Examination
Jul 18, 2025
Response after Non-Final Action
Oct 09, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596932
METHOD AND SYSTEM FOR DEPLOYMENT OF PREDICTION MODELS USING SKETCHES GENERATED THROUGH DISTRIBUTED DATA DISTILLATION
2y 5m to grant Granted Apr 07, 2026
Patent 12591759
PARALLEL AND DISTRIBUTED PROCESSING OF PROPOSITIONAL LOGICAL NEURAL NETWORKS
2y 5m to grant Granted Mar 31, 2026
Patent 12572441
FULLY UNSUPERVISED PIPELINE FOR CLUSTERING ANOMALIES DETECTED IN COMPUTERIZED SYSTEMS
2y 5m to grant Granted Mar 10, 2026
Patent 12518197
INCREMENTAL LEARNING WITHOUT FORGETTING FOR CLASSIFICATION AND DETECTION MODELS
2y 5m to grant Granted Jan 06, 2026
Patent 12487763
METHOD AND APPARATUS WITH MEMORY MANAGEMENT AND NEURAL NETWORK OPERATION
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 99% (+49.4%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
