Prosecution Insights
Last updated: April 19, 2026
Application No. 17/246,680

PREDICTIVE MODELING AND ANALYTICS INTEGRATION PLATFORM

Final Rejection §101
Filed: May 02, 2021
Examiner: BYRD, UCHE SOWANDE
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Zeta Global Corp.
OA Round: 6 (Final)
Grant Probability: 23% (At Risk)
Projected OA Rounds: 7-8
Projected Time to Grant: 4y 8m
Grant Probability with Interview: 51%

Examiner Intelligence

Career Allow Rate: 23% (81 granted / 350 resolved; -28.9% vs TC avg)
Interview Lift: +27.9% allowance improvement in resolved cases with interview
Avg Prosecution: 4y 8m (typical timeline)
Total Applications: 401 across all art units (51 currently pending)
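The headline percentages on the examiner card above can be recomputed directly from the raw counts it shows. A quick sanity check (pure arithmetic on the card's own numbers; no external data):

```python
granted, resolved = 81, 350           # "81 granted / 350 resolved" from the card

allow_rate = granted / resolved       # career allowance rate
print(f"{allow_rate:.1%}")            # 23.1%, shown rounded on the card as 23%

# The card lists the examiner at -28.9 points versus the Tech Center
# average, which implies a TC-average allowance rate of roughly:
tc_avg = allow_rate + 0.289
print(f"{tc_avg:.1%}")                # about 52.0%
```

This also explains why the examiner reads as "At Risk": an implied TC average near 52% against a career rate of 23% is a large negative gap.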

Statute-Specific Performance

§101: 42.2% (+2.2% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 10.0% (-30.0% vs TC avg)
§112: 5.3% (-34.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 350 resolved cases

Office Action

§101
DETAILED ACTION

Status of the Application

Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This action is a Final Action on the merits in response to the application filed on 10/18/2024. Claims 1-17 have been amended. Claims 1-17 remain pending in this application.

Response to Amendment

Applicant's amendments are acknowledged. The 35 U.S.C. 101 rejections of claims 1-17 in the previous office action are withdrawn in light of Applicant's amendments; however, a new §101 rejection has been added.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are directed towards a method, claims 9-16 are directed towards an apparatus, and claim 17 is directed towards a machine-readable medium, all of which are among the statutory categories of invention.

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. The claim recites at least one step or act, including applying an API to models. Thus, the claim is to a process, which is one of the statutory categories of invention. (Step 1: YES).

Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception.
As explained in MPEP 2106.04, subsection II, a claim "recites" a judicial exception when the judicial exception is "set forth" or "described" in the claim. With respect to claims 1-17, the independent claims (claims 1, 9, and 17) are directed to managing modeling data. In independent claim 1, the limitations below correspond to the abstract ideas of the claimed invention:

Claim 1. A computer implemented method, comprising:
accessing, by a model management system, a model repository database, the model repository database storing a plurality of dynamic predictive models;
deploying, by the model management system using an Application Programming Interface (API), each of the plurality of dynamic predictive models to a respective one of a plurality of model evaluation servers, the deploying of each of the plurality of dynamic predictive models comprising:
dynamically allocating, by the model management system using the API, computing resources across the plurality of model evaluation servers based on a volume of scoring requests to maintain system performance through dynamic load balancing;
initiating, by the model management system using the API, a model release for at least one of the plurality of dynamic predictive models, the model release providing a new version of the at least one of the plurality of dynamic predictive models to the plurality of model evaluation servers, the model release including:
accessing a release notification for the new release of a dynamic predictive model by one of the plurality of model evaluation servers;
generating, by a model evaluation server, an API call configured to retrieve the new release of the dynamic predictive model from the model management system; and
synchronizing, by the model management system, the plurality of dynamic predictive models, the synchronizing including at least determining a same version of the plurality of dynamic predictive models is deployed and available to all model evaluation servers;
persisting the plurality of dynamic predictive models locally to a file storage component to alleviate a need for retrieving the new release on a restart of the plurality of model evaluation servers;
receiving, at a scoring server, a scoring request for a score of a lead from at least one client device;
separating, by the scoring server, the scoring request into a plurality of scoring sub-requests, each sub-request assigned to one of the deployed dynamic predictive models, each deployed dynamic predictive model executing on different processing cores to implement a parallelized workflow that improves data throughput and reduces system latency by reducing end-to-end response time;
generating, using each of the deployed dynamic predictive models executed by a corresponding model evaluation server, evaluation results based on the each sub-request assigned to the each of the deployed dynamic predictive models, wherein the result from each of the deployed dynamic predictive models includes a model evaluation response with respect to each sub-request assigned to the each of the deployed dynamic predictive models;
aggregating, using a RESTful API associated with the plurality of model evaluation servers executing the plurality of dynamic predictive models, the evaluation results from each of the deployed dynamic predictive models having a model release persisted in a local file storage component to minimize inter-server network communication latency;
evaluating, by the scoring server, the aggregated evaluation results; and
providing, by the scoring server, a response to the scoring request to the at least one client device based on evaluating the aggregated evaluation results.

These steps fall within and recite an abstract idea because they are directed to a method of organizing human activity, which includes commercial interactions such as behaviors and business relations (see MPEP 2106.04(a)(2), subsection II).
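For readers parsing the claim language, the parallelized scoring workflow that claim 1 recites (separating a scoring request into per-model sub-requests, evaluating them concurrently, then aggregating) can be sketched in miniature. This is an editorial illustration only: the model names and scoring logic below are hypothetical and do not come from the application, and a real deployment would fan out to separate evaluation servers rather than local threads.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the claimed "dynamic predictive models";
# each maps a lead's features to a partial score.
def recency_model(lead):
    return 2 * lead["visits"]

def value_model(lead):
    return 3 * lead["purchases"]

DEPLOYED_MODELS = [recency_model, value_model]

def score_lead(lead):
    """Claim 1's workflow in miniature: separate the scoring request into
    sub-requests, evaluate each with one deployed model concurrently,
    then aggregate and evaluate the results."""
    with ThreadPoolExecutor(max_workers=len(DEPLOYED_MODELS)) as pool:
        # "separating ... into a plurality of scoring sub-requests",
        # one sub-request assigned per deployed model
        results = list(pool.map(lambda model: model(lead), DEPLOYED_MODELS))
    # "aggregating ... the evaluation results" (a simple sum here)
    return sum(results)

print(score_lead({"visits": 10, "purchases": 3}))  # 2*10 + 3*3 = 29
```

The eligibility dispute below turns in part on whether this kind of fan-out/aggregate structure is a technical improvement or merely a generic computing environment for the abstract idea.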
Regarding the steps of claim 1, reproduced in full above: if a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior, then it falls within the "method of organizing human activity" grouping of abstract ideas. Because the identified limitations fall within the groupings of abstract ideas enumerated in the 2019 PEG, the analysis proceeds to Prong Two. (Step 2A, Prong One: YES).

Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is "directed to" the judicial exception.
This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).

The claim recites the additional elements of a server, model management system, database, web interface, client device, models, apparatus, processor, memory, and API, and recites that the steps are performed by those elements. The limitations of claim 1, reproduced in full above, are mere data gathering and output recited at a high level of generality, and thus correspond to insignificant extra-solution activity. See MPEP 2106.05(g) ("whether the limitation is significant"). In addition, all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05.

Further, the limitations are recited as being performed by the server, model management system, database, web interface, client device, models, apparatus, processor, memory, and API, each of which is recited at a high level of generality and used as a tool to perform the generic computer function of receiving data. See MPEP 2106.05(f). These elements are used to perform an abstract idea, as discussed above in Step 2A, Prong One, such that they amount to no more than mere instructions to apply the exception using a generic computer. Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).

Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As explained with respect to Step 2A, Prong Two, the additional elements are the server, model management system, database, web interface, client device, models, apparatus, processor, memory, and API. These additional elements were found to be insignificant extra-solution activity in Step 2A, Prong Two, because they were determined to amount to necessary data gathering and outputting. However, a conclusion that an additional element is insignificant extra-solution activity in Step 2A, Prong Two should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g).

As discussed in Step 2A, Prong Two above, the recitations of claim 1, reproduced in full above, are recited at a high level of generality. These elements amount to transmitting data and are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II. As discussed in Step 2A, Prong Two above, the recitation of the server, model management system, database, web interface, client device, models, apparatus, processor, memory, and API to perform the limitations amounts to no more than mere instructions to apply the exception using a generic computer component. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. (Step 2B: NO).

Dependent claims 2-8 and 10-16 do not contain any new additional elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims. In this case, the claims are rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Thus, the claims are not patent eligible. Regarding the dependent claims, claims 2-4, 6, and 7 recite determining and updating models; claim 5 recites models deployed on servers; claim 8 recites a server exposed via an API; claims 11-13, 14, and 15 recite an apparatus for determining and updating models; claim 13 recites an apparatus for models deployed on servers; and claim 16 recites an apparatus for a server exposed via an API. Dependent claims 2-8 and 10-16 recite limitations that are not technological in nature and merely limit the abstract idea to a particular environment.
Claims 2-8 and 10-16 recite the server, model management system, database, web interface, client device, models, apparatus, processor, memory, and API, which are considered insignificant extra-solution activities of collecting and analyzing data; see MPEP 2106.05(g). Claims 2-8 and 10-16 recite these elements merely as an instruction to apply the abstract idea using a generic computer component; see MPEP 2106.05(f). Additionally, claims 2-8 and 10-16 recite steps that further narrow the abstract idea. No additional elements are disclosed in the dependent claims that were not considered in independent claims 1, 9, and 17. Therefore, claims 2-8 and 10-16 do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself.

Response to Arguments

Applicant's arguments filed 10/16/2025 have been fully considered, but they are not persuasive. Applicant's arguments will be addressed below in the order in which they appear in the response filed 10/16/2025.

Regarding the 35 U.S.C. 101 rejection, at pgs. 9-13 Applicant argues that the claims at issue are not directed to an abstract idea. In response to the 35 U.S.C. § 101 claim rejection argument, the Examiner respectfully disagrees. The Examiner did consider each claim and every limitation, both individually and as a whole (the limitations are not evaluated in a vacuum), since the grounds of rejection clearly indicate that an abstract idea has been identified from elements recited in the claims. Using the two-part analysis, the Office has determined there are no elements in the claims sufficient to ensure that the claims amount to significantly more than the abstract idea itself.
As recited, the claims are directed to the computer implemented method of claim 1, reproduced in full above. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the computer as recited is a generic computer component performing generic functions. The Examiner finds the claims recite concepts that are described in the 2019 PEG as certain methods of organizing human activity. In particular, the claims recite limitations for managing modeling data, which constitute methods related to commercial or legal interactions, including behaviors and business relations, which are still considered an abstract idea under the 2019 PEG.
The computing servers and model management system comprise generic computer elements that perform an existing business process. The Examiner finds the claims recite mere instructions to implement the abstract idea on a computer and use the computer as a tool to perform the abstract idea, without reciting any improvements to a technology, technological process, or computer-related technology.

Regarding Ex parte Desjardins, the instant claims are not similar to Ex parte Desjardins. The Examiner finds the Board determined the improvements in Desjardins to be directed to addressing problems arising in the context of technical improvements to machine learning systems, which overcome a problem specifically arising in the realm of AI and machine learning inventions. There is no similar technological problem or solution here. The Examiner also considered the "ARP" decision and notes, and the examination falls in line with that decision and those notes.

Regarding the steps at pg. 11 that Applicant points to as a practical application, these merely narrow the abstract idea to a particular technological environment, which has been found ineffective to render an abstract idea eligible. Furthermore, the Examiner respectfully disagrees because the following argument: "These claim elements collectively prescribe specific improvements to server hardware operation, memory utilization patterns, and network communication architectures that enhance computer system performance beyond generic implementation. The claim elements thus integrate any alleged abstract idea into a practical application that improves the functioning of computers and related technology." seems to describe a "particular way" of implementing the service for managing modeling data as part of the abstract idea.
The Applicant is essentially relying on the system elements as integrating the abstract idea into a practical application, but those system elements are not utilized in any particular manner, and the specification indicates at 0055 that "In fact, processor 614 may include one or more of general-purpose computers," which indicates the lack of particularity in the application to the technological environment. Furthermore, at 0064 the Applicant recites that "One having ordinary skill in the art will readily understand that the invention as discussed above may be practiced with steps in a different order, and/or with hardware elements in configurations which are different than those which are disclosed. Therefore, although the invention has been described based upon these preferred embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions would be apparent, while remaining within the scope of the invention. In order to determine the metes and bounds of the invention, therefore, reference should be made to the appended claims." These citations are a strong indicator that the technical application is NOT particular.

Regarding the improvement at pgs. 10-11 ("First, the separation of scoring requests into sub-requests assigned to different deployed dynamic predictive models executing on different processing cores implements a parallelized workflow architecture that transforms sequential, single-threaded operations into concurrent, multi-core processing. This architectural improvement directly addresses CPU utilization limitations by distributing computational load across multiple processing cores, thereby improving data throughput and reducing system latency by reducing end-to-end response time. The Specification supports these improvements in paragraph 0046: "the evaluator-api 226 may be a web module that exposes restful API for scoring requests.
The evaluator 226 may also provide support for scoring with multiple models, support for scoring timeouts, and support for aggregators. Timeouts may happen in the event the servers take too long to process a request from the client. Additionally, aggregators may allow a scoring request to utilize multiple models and aggregate the results of each." These improvements are reflected at least in the following claim elements: "separating, by the scoring server, the scoring request into a plurality of scoring sub-requests, each sub-request assigned to one of the deployed dynamic predictive models, each deployed dynamic predictive model executing on different processing cores to implement a parallelized workflow that improves data throughput and reduces system latency by reducing end-to-end response time; generating, using each of the deployed dynamic predictive models executed by a corresponding model evaluation server, evaluation results based on the each sub-request assigned to the each of the deployed dynamic predictive models, wherein the result from each of the deployed dynamic predictive models includes a model evaluation response with respect to each sub-request assigned to the each of the deployed dynamic predictive models." Second, the local persistence of model releases in file storage components specifically minimizes inter-server network communication latency by eliminating repeated network requests for model retrieval. This addresses a concrete computer system problem where network I/O overhead creates performance bottlenecks. The Specification supports these improvements in paragraph 0049: "In these file models, model releases are persisted locally. Thus, on node restart, there would be no need to retrieve model releases from the admin-api," demonstrating that local persistence reduces both network communication and system restart latency.
These improvements are reflected at least in the following claim elements, "persisting the plurality of dynamic predictive models locally to a file storage component to alleviate a need for retrieving the new release on a restart of the plurality of model evaluation servers; aggregating, using a RESTful API associated with the plurality of model evaluation servers executing the plurality of dynamic predictive models, the evaluation results from each of the deployed dynamic predictive models having a model release persisted in a local file storage component to minimize inter-server network communication latency."

Third, the RESTful API coordination for aggregating evaluation results provides distributed system orchestration that allows for efficient result consolidation without centralized processing bottlenecks. The evaluator-api serves as a technical interface that "exposes restful API for scoring requests" (paragraphs 0038 and 0046) and provides "support for scoring with multiple models" (paragraphs 0046 and 0048), providing the claimed aggregation functionality while maintaining system scalability.).”

The Examiner respectfully disagrees, as the claimed invention does not “improve[] the functioning of a computer or improve[] another technology or technical field,” nor provide “an improvement to another technology or technical field.” The Examiner would like to point the Applicant to MPEP 2106.05(d)(II): “Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure. However, examples of claim language, although not exhaustive, that may raise a question as to the limiting effect of the language in a claim are: (A) "adapted to" or "adapted for" clauses; (B) "wherein" clauses; and (C) "whereby" clauses. The determination of whether each of these clauses is a limitation in a claim depends on the specific facts of the case.
See, e.g., Griffin v. Bertina, 285 F.3d 1029, 1034, 62 USPQ2d 1431 (Fed. Cir. 2002) (finding that a "wherein" clause limited a process claim where the clause gave "meaning and purpose to the manipulative steps"). In In re Giannelli, 739 F.3d 1375, 1378, 109 USPQ2d 1333, 1336 (Fed. Cir. 2014), the court found that an "adapted to" clause limited a machine claim where "the written description makes clear that 'adapted to,' as used in the [patent] application, has a narrower meaning, viz., that the claimed machine is designed or constructed to be used as a rowing machine whereby a pulling force is exerted on the handles." In Hoffer v. Microsoft Corp., 405 F.3d 1326, 1329, 74 USPQ2d 1481, 1483 (Fed. Cir. 2005), the court held that when a "‘whereby’ clause states a condition that is material to patentability, it cannot be ignored in order to change the substance of the invention." Id. However, the court noted that a "‘whereby clause in a method claim is not given weight when it simply expresses the intended result of a process step positively recited.’" Id. (quoting Minton v. Nat’l Ass’n of Securities Dealers, Inc., 336 F.3d 1373, 1381, 67 USPQ2d 1614, 1620 (Fed. Cir. 2003)).”

Here, the claims are directed to an intended result, as the amended claims have been written in terms of intended use; for example: “dynamically allocating, by the model management system using the API, computing resources across the plurality of model evaluation servers based on a volume of scoring requests to maintain system performance through dynamic load balancing;” and “persisting the plurality of dynamic predictive models locally to a file storage component to alleviate a need for retrieving the new release on a restart of the plurality of model evaluation servers;”. Additionally, the claims recite clear steps for selecting and managing model data, not an improvement to the servers, the models, or even the software.
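The parallelized workflow argued in the Applicant's remarks above (separating a scoring request into per-model sub-requests, evaluating them concurrently, and aggregating the responses) can be sketched minimally as follows. All function and model names here are hypothetical illustrations, not taken from the application; a CPU-bound deployment would typically substitute a ProcessPoolExecutor to execute on separate cores.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for two deployed dynamic predictive models.
def model_a(features):
    return {"model": "a", "score": 0.5 * sum(features)}

def model_b(features):
    return {"model": "b", "score": float(max(features))}

DEPLOYED_MODELS = [model_a, model_b]

def separate(scoring_request):
    """Separate one scoring request into one sub-request per deployed model."""
    return [(model, scoring_request["features"]) for model in DEPLOYED_MODELS]

def evaluate(sub_request):
    """Evaluate one sub-request with its assigned model."""
    model, features = sub_request
    return model(features)

def score(scoring_request):
    """Fan sub-requests out concurrently, then aggregate the evaluation responses.

    A ProcessPoolExecutor (rather than threads) would give true multi-core
    parallelism for CPU-bound model evaluation.
    """
    sub_requests = separate(scoring_request)
    with ThreadPoolExecutor(max_workers=len(sub_requests)) as pool:
        responses = list(pool.map(evaluate, sub_requests))
    aggregate = sum(r["score"] for r in responses) / len(responses)
    return {"responses": responses, "aggregate": aggregate}
```

For example, `score({"features": [1, 2, 3]})` yields one response per model plus their mean as the aggregated result.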
As such, the Examiner would like to point the Applicant to the 2019 PEG, under which the managing of model data and the implementing of services on behalf of a provider fall. The 2019 PEG states:

Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).
Adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g).
Generally linking the use of the judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Heytens et al., U.S. Pub. 20030220860 (discussing the evaluating and refreshing of modeling data).
Schauser et al., WO Pub. 2006127499 (discussing the processing of dynamic data of modeling data).
Bapat et al., A Tale of Migration to Cloud Computing for Sharing Experiences and Observations, https://www.researchgate.net/publication/234780970_A_tale_of_migration_to_cloud_computing_for_sharing_experiences_and_observations, Proceedings of the 2nd International Workshop on Software Engineering for Cloud Computing, 2011 (discussing the observing and evaluating of cloud computing data, including modeling).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to UCHE BYRD whose telephone number is (571) 272-3113. The examiner can normally be reached Mon.-Fri. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patricia Munson, can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/UCHE BYRD/
Examiner, Art Unit 3624

/PATRICIA H MUNSON/
Supervisory Patent Examiner, Art Unit 3624
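The local-persistence behavior quoted from paragraph 0049 of the specification (model releases persisted locally so that a node restart does not require retrieving them from the admin-api) amounts to a read-through file cache. A minimal sketch, with all class and function names invented for illustration:

```python
import json
from pathlib import Path

class ModelReleaseStore:
    """Read-through cache: fetch a model release once, persist it to local
    file storage, and serve later loads (including after a restart) from disk."""

    def __init__(self, cache_dir, fetch_remote):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)
        self.fetch_remote = fetch_remote  # e.g. a call out to an admin API
        self.remote_fetches = 0           # counts network round trips made

    def _path(self, release_id):
        return self.cache_dir / f"{release_id}.json"

    def load(self, release_id):
        path = self._path(release_id)
        if path.exists():                 # restart-safe: no network call needed
            return json.loads(path.read_text())
        release = self.fetch_remote(release_id)
        self.remote_fetches += 1
        path.write_text(json.dumps(release))  # persist locally for next time
        return release
```

A second store instance pointed at the same directory (simulating a node restart) serves the release from disk with no remote call, which is the latency saving the remarks describe.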

Prosecution Timeline

May 02, 2021
Application Filed
Jun 14, 2023
Non-Final Rejection — §101
Sep 13, 2023
Applicant Interview (Telephonic)
Sep 19, 2023
Response Filed
Sep 23, 2023
Examiner Interview Summary
Dec 23, 2023
Final Rejection — §101
Mar 27, 2024
Applicant Interview (Telephonic)
Apr 05, 2024
Examiner Interview Summary
Apr 29, 2024
Response after Non-Final Action
May 09, 2024
Applicant Interview (Telephonic)
May 15, 2024
Response after Non-Final Action
Jun 28, 2024
Request for Continued Examination
Jul 01, 2024
Response after Non-Final Action
Jul 12, 2024
Non-Final Rejection — §101
Oct 18, 2024
Response Filed
Jan 23, 2025
Final Rejection — §101
May 12, 2025
Interview Requested
May 23, 2025
Applicant Interview (Telephonic)
May 28, 2025
Request for Continued Examination
May 31, 2025
Examiner Interview Summary
Jun 02, 2025
Response after Non-Final Action
Jun 12, 2025
Non-Final Rejection — §101
Oct 16, 2025
Response Filed
Jan 16, 2026
Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12499469
DATA ANALYSIS TO DETERMINE OFFERS MADE TO CREDIT CARD CUSTOMERS
2y 5m to grant Granted Dec 16, 2025
Patent 12499460
INFORMATION DELIVERY METHOD, APPARATUS, AND DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Dec 16, 2025
Patent 12282930
USING A PICTURE TO GENERATE A SALES LEAD
2y 5m to grant Granted Apr 22, 2025
Patent 12236377
METHOD AND SYSTEM FOR SWITCHING AND HANDOVER BETWEEN ONE OR MORE INTELLIGENT CONVERSATIONAL AGENTS
2y 5m to grant Granted Feb 25, 2025
Patent 12147927
Machine Learning System and Method for Predicting Caregiver Attrition
2y 5m to grant Granted Nov 19, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
23%
Grant Probability
51%
With Interview (+27.9%)
4y 8m
Median Time to Grant
High
PTA Risk
Based on 350 resolved cases by this examiner. Grant probability derived from career allow rate.
