Prosecution Insights
Last updated: April 19, 2026
Application No. 17/336,573

METHOD OF GENERATING SOLUTIONS FOR COMPLEX BUSINESS PROBLEMS INVOLVING GROUPS OF EQUIPMENT AND PERSONNEL USING AN ARTIFICIAL INTELLIGENCE MODEL

Final Rejection (§101, §103)
Filed: Jun 02, 2021
Examiner: RAHMAN, IBRAHIM
Art Unit: 2122
Tech Center: 2100 (Computer Architecture & Software)
Assignee: AT&T Mobility II LLC
OA Round: 4 (Final)
Grant Probability: 0% (At Risk)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 10 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift among resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 29 currently pending
Career History: 39 total applications across all art units

Statute-Specific Performance

§101: 35.8% (-4.2% vs TC avg)
§103: 28.7% (-11.3% vs TC avg)
§102: 22.1% (-17.9% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)

Tech Center average values are estimates. Based on career data from 10 resolved cases.

Office Action

§101, §103
Detailed Action

This action is in response to the amendment filed 10/14/2025 for application 17/336,573, in which: Claims 1, 10, and 18 are independent claims. Claims 1, 10, and 18 have been amended. Claims 17 and 20 have been canceled. Claims 1-16, 18-19, and 21-22 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 10/14/2025 have been fully considered but they are not persuasive.

Regarding the 35 U.S.C. § 103 Rejections: Applicant's arguments regarding the 35 U.S.C. § 103 rejections of the previous office action have been fully considered, but are unpersuasive. Applicant notes the § 103 rejections and asserts (Pages 8-9) that Nakandala fails to explicitly disclose the specific use cases, which are not remedied by Wang; the alleged second use case taught by Wang merely describes travel demand modeling and does not recite the newly added limitation in terms of an algorithm supplying a list of new equipment, inventory constraints and availability. Examiner respectfully disagrees. Nakandala indeed does not explicitly teach the specific use cases; however, Wang remedies the deficiencies, including the second use case wherein, as part of the second use case, an algorithm supplies: a list of new equipment, inventory constraints and availability. The specific second use case is taught by Wang, where travel demand modeling is used to forecast travel demand. The deep learning traffic demand forecasting framework utilizes an algorithm to supply the data, as the system is designed to determine travel demand by accommodating resources for users. Figs. 1 and 2 show example snapshots of an order count used to create the heatmaps shown in Fig. 9, where the second use case is determining the destinations by the corresponding urban resource scheduling of the equipment (resources).
The resource scheduling utilizes the heatmap for travel demand (interpreted as inventory constraints and availability), which is shown in the matrix (list) for order count (new requested trips, which are interpreted as new requested resources, i.e., new equipment/trips); thus, the algorithm is supplying the data for resource scheduling.

Applicant asserts (Page 9) that Nakandala does not describe the limitation combining …, wherein each of the other sub-results is obtained based on an invocation of a specific application program interface (API) of a plurality of APIs that communicates with the AI model; specifically, the newly added limitation for the user-facing API in terms of registering workers and issuing a deep net model selection workload. Examiner respectfully disagrees. Nakandala does describe and explicitly discloses combining the sub-result with other sub-results within Figure 5 by depicting the schedulers, where the system's (Cerebro's) scheduler combines the task results (which are interpreted by the examiner as sub-results because the task results are validation results per task). However, Nakandala and Wang do not explicitly disclose the newly added limitation. Applicant's arguments with respect to the newly added limitation wherein each of the other sub-results is obtained based on an invocation of a specific application program interface (API) of a plurality of APIs that communicates with the AI model have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant asserts (Pages 10-11) that amended claim 1 is distinguishable from the applied art for at least the foregoing reasons, and that the same applies to similar independent claims 10 and 18.
Applicant further asserts that Claims 2-9, 11-16, 19, and 21-22 submitted herewith, which each depend from one of independent claims 1, 10, and 18, are distinguishable from the applied art for at least the same reasons as their respective base independent claims. Examiner respectfully disagrees. Applicant's arguments regarding the other independent and dependent claims rely upon the same assertions as with respect to the independent claims, and are thus likewise unpersuasive. Therefore, for the reasons given above and in the rejections below, the rejection of all Claims (including Claim 1, the similar independent claims, and all dependent Claims) is maintained and updated as necessitated by Claim amendments. More specific details are discussed below within the 35 U.S.C. § 103 Rejections.

Regarding the 35 U.S.C. § 101 Rejections: Applicant's arguments regarding the 35 U.S.C. § 101 rejections of the previous office action have been fully considered, but are unpersuasive. Applicant disagrees (Page 11) with the rejections of Claims 1-16, 18-19, and 21-22 as being directed to an abstract idea without significantly more, arguing that the independent claims have been amended to further direct the claimed subject matter to one or more practical applications, technological environments, and improvements, and thus that the rejection is rendered moot as inapplicable to the amended claims. Applicant further supports this assertion by noting that the Office Action fails to substantively address the previous remarks of the previously submitted paper (June 26, 2025) in any meaningful way, and states that more details and elaboration upon the previous remarks will follow. Examiner respectfully disagrees. The 35 U.S.C. § 101 rejection is not rendered moot, as the amended claims are directed to an abstract idea (Step 2A Prong 1) and do not integrate the abstract idea into a practical application (Step 2A Prong 2).
The rejection follows the steps of the analysis as laid out in the MPEP, which was followed for both the previous and current examination (see MPEP 2106). The remarks/arguments laid forth by the applicant have been substantively and fully responded to previously. Therefore, for the reasons given above and in the updated rejections below, the rejection of all Claims (including Claim 1, the similar independent claims, and all dependent Claims) is maintained and updated as necessitated by Claim amendments. More specific details are discussed below within the responses and the 35 U.S.C. § 101 Rejections.

(A) Applicant disagrees (Page 11) and notes that the Examiner is improperly distilling the claimed subject matter such that the rejection "falls out" from what little of the actual claim language is left after distillation; that even under BRI, the amended independent claims are directed to significantly more than "mathematical concepts" or "mental processes"; and that the examiner has failed to furnish actual proof/documentation to support the rejection. Applicant further supports this assertion by noting that the Office Action merely notes the Claim falls within the "mathematical concepts" or "mental processes" group of abstract ideas, that such a statement is merely contention/conclusion on the part of the Examiner, and that the Office Action thus fails to prove that the so-called "abstract idea" exception is applicable. Applicant further supports these assertions by noting that the § 102/103 analyses are probative of the examiner's understanding and comprehension of the subject matter; thus, the examiner's treatment of the claims regarding § 101 is in contradiction with the § 102/103 analyses provided by the examiner over the course of the prosecution history. Examiner respectfully disagrees.
The rejection follows the steps of the analysis as laid out in the MPEP, which was followed for both the previous and current examination (see MPEP 2106). The amended claims recite: selecting modeling logic for an artificial intelligence (AI) model that solves a use case of a plurality of use cases … (a human being can mentally apply evaluation and make a judgement to select modeling logic for an AI model that solves a use case); … forecasting groups of equipment and personnel to deploy … (a human being can mentally apply evaluation to forecast groups of equipment and personnel to deploy); … determining destinations that have slots available to accommodate the groups of equipment and personnel … (a human being can mentally apply evaluation to determine destinations that have slots available to accommodate specific groups); … determining transportation routes for bringing the groups of equipment and personnel from an origin to the destinations having the slots available … (a human being can mentally apply evaluation to determine transportation routes); evaluating the sub-result based on an evaluation metric (a human being can mentally apply evaluation to evaluate the sub-result based on a metric); combining the sub-result with other sub-results of the plurality of use cases to generate intermediate data … (a human being can mentally apply evaluation to combine results to generate intermediate data); invoking a cost function for a business problem corresponding to the plurality of use cases on the intermediate data to obtain a score, wherein the cost function includes a length of time needed to achieve a deployment created by the plurality of use cases (a mathematical relationship between variables and/or numbers using a mathematical formula/equations); determining, based on the invoking, that the score is representative of an improvement (a human being can mentally apply evaluation to determine the score is representative of an improvement); taking, based on the determining, a snapshot of a business solution corresponding to the … (a human being can mentally apply evaluation to take a snapshot of a business solution); and determining, based on the combining, whether an exit criteria has been met (a human being can mentally apply evaluation to determine whether an exit criteria has been met); which are all noted, per limitation, as an evaluation or judgement that can be performed in the human mind, or by a human using pen and paper, or as a mathematical relationship. The 35 U.S.C. § 102/103 rejections are irrelevant to the analysis as to whether a Claim recites an abstract idea.

(B) Applicant asserts (Page 12), noting the previous remarks, that the examiner has failed to furnish actual proof/documentation to support the rejection and merely notes that the Claim recites a "mathematical concept" or "mental process". Examiner respectfully disagrees. The abstract idea limitations were not merely noted as abstract ideas. As noted above, within the previous Office Action, and below, each limitation was evaluated with the reasoning of why it was interpreted as a mental process/mathematical concept (noted within the parentheses).

(C) Applicant asserts (Page 12) that the Office Action does not note what is impractical about the features and how they do not amount to an integration within an application; that this absence of actual proof implies that the Examiner cannot demonstrate that the exception applies; and that the skilled artisan will appreciate, based on a review of the disclosure, that the features of the independent claims are integrated as part of a practical application within the meaning of 35 U.S.C. § 101. Examiner respectfully disagrees.
The office action establishes a proper and well-supported prima facie case, as the claims are explained to be not patentable via the Patent Subject Matter Eligibility steps within MPEP 2106. The independent claim fails to recite the steps that achieve the alleged improvement. The independent claim is no more detailed than applying the selected modeling logic for specific/restricted use cases to evaluate a specific result via an evaluation metric/score (which is obtained based on specific restrictions), applying a mathematical relationship for the specific use cases to generate specific scores to determine a score which is associated with improvement, taking a snapshot, and exiting on specific criteria, with no detail on the application of the determination/business snapshot. The limitations are unable to provide an improvement, as they are currently being evaluated as either abstract idea(s) or additional elements that fall within MPEP 2106.05. The claims are directed towards the improvement of an abstract idea, and improvements to an abstract idea are still considered to be an abstract idea. Additionally, the Claims do not reflect any improvement in the functioning of a computer or hardware processor; rather, the additional elements merely use a generic computer component to perform the abstract idea and/or restrict the abstract idea to a particular technological environment. Integration as a practical application is discussed further within (D), below.

(D) Applicant asserts (Pages 12-13) that the failure to furnish proof also extends to the limitations that were characterized as "well-understood, routine, or conventional activities"; that in this respect, it is appreciated and understood that the claimed subject matter is directed to "significantly more" than, and to an improvement in technology over, the alleged abstract idea identified by the Examiner; and that, should the Examiner persist in a rejection under 35 U.S.C. § 101, the Examiner is requested to specifically point out what condition stated in 35 U.S.C. § 101 the Applicants have not complied with, particularly in view of the broad mandate "any" set forth in 35 U.S.C. § 101. Examiner respectfully disagrees. Applicant's arguments with respect to Claim 1's limitations being "well-understood, routine, or conventional activities" are moot, as the previous office action did not identify any additional limitations that were considered to fall within well-understood, routine, and conventional activity. The previous office action does not allege that any of the elements are well-understood, routine, and conventional activities. Therefore, the claims do not integrate the judicial exception into a practical application nor amount to significantly more. The claim is not patent eligible. Although the Claims are interpreted in light of the specification, limitations from the specification are not read into the Claims. MPEP 2106.05(a) recites: After the examiner has consulted the specification and determined that the disclosed invention improves technology, the claim must be evaluated to ensure the claim itself reflects the disclosed improvement in technology … the claim must include the components or steps of the invention that provide the improvement described in the specification … It is important to note that the judicial exception alone cannot provide the improvement; the improvement can be provided by one or more additional elements. See the discussion of Diamond v. Diehr, 450 U.S. 175, 187 and 191-92, 209 USPQ 1, 10 (1981) in subsection II, below. Applicant fails to show how any alleged technical improvement would be provided by anything more than the judicial exception on its own. Additionally, applicant fails to show how the claim includes components or steps that would provide the alleged improvement described in the specification. Per MPEP 2106.05(f)(1), "the claim recites only the idea of a solution or outcome, i.e. the claim fails to recite details of how a solution to a problem is accomplished". Moreover, the examiner maintains that the Claim does not impose any meaningful limits on the judicial exceptions. As noted in the rejection, the Claim does not include additional elements that are sufficient to amount to an integration of the identified abstract idea into a practical application; thus, the claim is directed to an abstract idea. Applicant's arguments regarding the other independent and dependent claims rely upon the same assertions as with respect to Claim 1, and are thus likewise unpersuasive. Therefore, for the reasons given above and in the updated rejections below, the rejection of all Claims (including Claim 1 and all dependent Claims) is maintained and updated as necessitated by Claim amendments. More specific details are discussed below within the 35 U.S.C. § 101 Rejections.

(E) Applicant asserts (Pages 13-14) that the Examiner was requested to identify where in the statute the "abstract idea" exception is contained, to ensure/confirm that the statute is not being rewritten outside the legislative process reserved for the people's representatives; that the Examiner apparently believes the MPEP supersedes the jurisprudence of the United States Supreme Court, a preposterous result to say the least; and that, as discussed above, it is in fact the Examiner's burden to demonstrate the applicability of a rejection with actual proof/evidence. Applicant cites Alice, Henry Schein, and Diamond v. Chakrabarty to further support these assertions. Examiner respectfully disagrees. As stated previously, the rejection follows the steps of the analysis as laid out in the MPEP, which was followed for both the previous and current examination (see MPEP 2106). Thus, the office action does not fail to establish a proper and well-supported prima facie case, as the claims are explained to be not patentable via the Patent Subject Matter Eligibility steps within MPEP 2106.
The remarks/arguments laid forth by the applicant have been substantively and fully responded to.

Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16, 18-19, and 21-22 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding Claim 1:

Subject Matter Eligibility Analysis Step 1: Claim 1 recites a device, thus a machine, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 1 further recites the device comprising: selecting modeling logic for an artificial intelligence (AI) model that solves a use case of a plurality of use cases … (a human being can mentally apply evaluation and make a judgement to select modeling logic for an AI model that solves a use case); … forecasting groups of equipment and personnel to deploy … (a human being can mentally apply evaluation to forecast groups of equipment and personnel to deploy); … determining destinations that have slots available to accommodate the groups of equipment and personnel … (a human being can mentally apply evaluation to determine destinations that have slots available to accommodate specific groups); … determining transportation routes for bringing the groups of equipment and personnel from an origin to the destinations having the slots available … (a human being can mentally apply evaluation to determine transportation routes); evaluating the sub-result based on an evaluation metric (a human being can mentally apply evaluation to evaluate the sub-result based on a metric); combining the sub-result with other sub-results of the plurality of use cases to generate intermediate data … (a human being can mentally apply evaluation to combine results to generate intermediate data); invoking a cost function for a business problem corresponding to the plurality of use cases on the intermediate data to obtain a score, wherein the cost function includes a length of time needed to achieve a deployment created by the plurality of use cases (a mathematical relationship between variables and/or numbers using a mathematical formula/equations); determining, based on the invoking, that the score is representative of an improvement (a human being can mentally apply evaluation to determine the score is representative of an improvement); taking, based on the determining, a snapshot of a business solution corresponding to the … (a human being can mentally apply evaluation to take a snapshot of a business solution); and determining, based on the combining, whether an exit criteria has been met (a human being can mentally apply evaluation to determine whether an exit criteria has been met). Claim 1 thus recites an abstract idea (that falls into the "mathematical concepts" or "mental processes" group of abstract ideas).
Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements consist of: (a) a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising (performing an abstract idea on a computer is no more than instructions to "apply it" on a computer, by MPEP 2106.05(f)); (b) wherein the plurality of use cases includes a first use case for … , a second use case for … , and a third use case for … and wherein as part of the second use case an algorithm supplies: a list of new equipment, inventory constraints and availability (which restricts the abstract idea to a Particular Technological Environment, by MPEP 2106.05(h)); (c) executing the AI model using holdout data to obtain a sub-result (performing an abstract idea on a computer is no more than instructions to "apply it" on a computer, by MPEP 2106.05(f)); and (d) wherein each of the other sub-results is obtained based on an invocation of a specific application program interface (API) of a plurality of APIs that communicates with the AI model (which restricts the abstract idea to a Particular Technological Environment, by MPEP 2106.05(h)).

Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself. Additional elements (a) and (c) merely apply the abstract idea on a computer (MPEP 2106.05(f)), which cannot provide significantly more. Additional elements (b) and (d) only restrict the abstract idea to a Particular Technological Environment (MPEP 2106.05(h)), which cannot provide significantly more.
Thus, the claim is subject-matter ineligible.

Regarding Claim 2:

Subject Matter Eligibility Analysis Step 1: Dependent Claim 2 recites the device of Claim 1. Claim 1 is a device, thus a machine, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 2 does not recite any additional abstract ideas and only inherits the abstract ideas from Claim 1. Claim 2 thus recites an abstract idea (that falls into the “mathematical concepts” or “mental processes” group of abstract ideas).

Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because the sole new additional element recited consists of wherein each use case in the plurality of use cases is determined based on a common pattern in the business problem (which is restricting the abstract idea to a Particular Technological Environment, by MPEP 2106.05(h)).

Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the sole new additional element recited, alone or in combination, does not provide significantly more than the abstract idea itself. The additional element is only restricting the abstract idea to a Particular Technological Environment (MPEP 2106.05(h)) which cannot provide significantly more. Thus, the claim is subject-matter ineligible.

Regarding Claim 3:

Subject Matter Eligibility Analysis Step 1: Dependent Claim 3 recites the device of Claim 2. Claim 2 is a device, thus a machine, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 3 does not recite any additional abstract ideas and only inherits the abstract ideas from Claim 2. Claim 3 thus recites an abstract idea (that falls into the “mathematical concepts” or “mental processes” group of abstract ideas).
Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because the sole new additional element recited consists of wherein the common pattern comprises regression, classification, optimization, or a combination thereof (which is restricting the abstract idea to a Particular Technological Environment, by MPEP 2106.05(h)).

Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the sole new additional element recited, alone or in combination, does not provide significantly more than the abstract idea itself. The additional element is only restricting the abstract idea to a Particular Technological Environment (MPEP 2106.05(h)) which cannot provide significantly more. Thus, the claim is subject-matter ineligible.

Regarding Claim 4:

Subject Matter Eligibility Analysis Step 1: Dependent Claim 4 recites the device of Claim 2. Claim 2 is a device, thus a machine, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 4 further recites wherein the operations further comprise ranking the other sub-results based on the evaluation metric (a human being can mentally apply evaluation to rank the other sub-results based on a metric). Claim 4 thus recites an abstract idea (that falls into the “mental processes” group of abstract ideas).

Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because there are no new additional elements recited.

Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited.
The judicial exception alone does not provide significantly more than the abstract idea itself. Thus, the claim is subject-matter ineligible.

Regarding Claim 5:

Subject Matter Eligibility Analysis Step 1: Dependent Claim 5 recites the device of Claim 4. Claim 4 is a device, thus a machine, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 5 further recites wherein the operations further comprise determining the exit criteria for the plurality of use cases … (a human being can mentally apply evaluation to determine the exit criteria for the plurality of use cases). Claim 5 thus recites an abstract idea (that falls into the “mental processes” group of abstract ideas).

Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because the sole new additional element recited consists of … wherein the exit criteria comprise options including: exit when the cost function is satisfied within a threshold, continue searching for better solutions until an execution time limit has expired, or execute for a predefined number of iterations (which is restricting the abstract idea to a Particular Technological Environment, by MPEP 2106.05(h)).

Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the sole new additional element recited, alone or in combination, does not provide significantly more than the abstract idea itself. The additional element is only restricting the abstract idea to a Particular Technological Environment (MPEP 2106.05(h)) which cannot provide significantly more. Thus, the claim is subject-matter ineligible.

Regarding Claim 6:

Subject Matter Eligibility Analysis Step 1: Dependent Claim 6 recites the device of Claim 5.
Claim 5 is a device, thus a machine, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 6 further recites wherein the device formulates the modeling logic for the AI model (a human being can mentally apply evaluation to formulate the modeling logic for the AI model). Claim 6 thus recites an abstract idea (that falls into the “mental processes” group of abstract ideas).

Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because there are no new additional elements recited.

Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited. The judicial exception alone does not provide significantly more than the abstract idea itself. Thus, the claim is subject-matter ineligible.

Regarding Claim 7:

Subject Matter Eligibility Analysis Step 1: Dependent Claim 7 recites the device of Claim 6. Claim 6 is a device, thus a machine, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 7 does not recite any additional abstract ideas and only inherits the abstract ideas from Claim 6. Claim 7 thus recites an abstract idea (that falls into the “mathematical concepts” or “mental processes” group of abstract ideas).

Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because the sole new additional element recited consists of wherein the operations further comprise training the AI model using training data (performing an abstract idea on a computer is no more than instructions to “apply it” on a computer, by MPEP 2106.05(f)).
Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the sole new additional element recited, alone or in combination, does not provide significantly more than the abstract idea itself. The additional element is merely applying the abstract idea on a computer (MPEP 2106.05(f)) which cannot provide significantly more. Thus, the claim is subject-matter ineligible.

Regarding Claim 8:

Subject Matter Eligibility Analysis Step 1: Dependent Claim 8 recites the device of Claim 7. Claim 7 is a device, thus a machine, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 8 further recites wherein the operations further comprise performing data wrangling on the training data and the holdout data (a human being can mentally apply evaluation to perform data wrangling on data). Claim 8 thus recites an abstract idea (that falls into the “mental processes” group of abstract ideas).

Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because there are no new additional elements recited.

Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited. The judicial exception alone does not provide significantly more than the abstract idea itself. Thus, the claim is subject-matter ineligible.

Regarding Claim 9:

Subject Matter Eligibility Analysis Step 1: Dependent Claim 9 recites the device of Claim 8. Claim 8 is a device, thus a machine, one of the four statutory categories of patentable subject matter.
Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 9 does not recite any additional abstract ideas and only inherits the abstract ideas from Claim 8. Claim 9 thus recites an abstract idea (that falls into the “mathematical concepts” or “mental processes” group of abstract ideas). Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because the sole new additional element recited consists of wherein the processing system comprises a plurality of processors operating in a distributed computing environment (which is restricting the abstract idea to a Particular Technological Environment, by MPEP 2106.05(h)). Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the sole new additional element recited, alone or in combination, does not provide significantly more than the abstract idea itself. The additional element is only restricting the abstract idea to a Particular Technological Environment (MPEP 2106.05(h)) which cannot provide significantly more. Thus, the claim is subject-matter ineligible. 
Regarding Claims 10-16: Claims 10-16 incorporate substantively all the limitations of Claims 1-8 in a non-transitory, machine-readable medium (thus, a manufacture) and further recite comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, the operations comprising (these claim limitations appear to perform a mental process, and the performance of an abstract idea on a computer is no more than instructions to “apply it” on a computer, by MPEP 2106.05(f)) and do not appear to integrate the abstract idea into a practical application; thus, the claims are subject-matter ineligible as they do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself; thus, Claims 10, 11, 12, 13-16 are rejected for reasons set forth in the rejections of Claims 1, 4, 2-3, 5-8, respectively. Regarding Claim 18: Subject Matter Eligibility Analysis Step 1: Claim 18 recites a method, thus a process, one of the four statutory categories of patentable subject matter. 
Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 18 further recites the method comprising: formulating … modeling logic for an artificial intelligence (AI) model that solves a use case of a plurality of use cases … (a human being can mentally apply evaluation and make a judgment to formulate modeling logic for an AI model that solves a use case) … forecasting groups of equipment and personnel to deploy … (a human being can mentally apply evaluation to forecast groups of equipment and personnel to deploy) … determining destinations that have slots available to accommodate the groups of equipment and personnel … (a human being can mentally apply evaluation to determine destinations that have slots available to accommodate specific groups) … determining transportation routes for bringing the groups of equipment and personnel from an origin to the destinations having the slots available … (a human being can mentally apply evaluation to determine transportation routes) evaluating … the sub-result based on an evaluation metric (a human being can mentally apply evaluation to evaluate the sub-result based on a metric) combining … plural sub-results of the plurality of use cases to generate intermediate data (a human being can mentally apply evaluation to combine results to generate intermediate data) invoking … a cost function for a business problem corresponding to the plurality of use cases on the intermediate data to obtain a score, wherein the cost function includes a length of time needed to achieve a deployment created by the plurality of use cases (a mathematical relationship between variables and/or numbers using a mathematical formula/equations) determining, … and based on the invoking, that the score is representative of an improvement (a human being can mentally apply evaluation to determine the score is representative of an improvement) taking, … and based on the determining, a snapshot of a business solution corresponding to the … (a 
human being can mentally apply evaluation to take a snapshot of a business solution) determining, … and based on the combining, whether an exit criteria has been met (a human being can mentally apply evaluation to determine whether an exit criteria has been met) Claim 18 thus recites an abstract idea (that falls into the “mathematical concepts” or “mental processes” group of abstract ideas). Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because the additional elements consist of: a. … by a processing system including a processor … (to perform a mental process and the performance of an abstract idea on a computer is no more than instructions to “apply it” on a computer, by MPEP 2106.05(f)) b. wherein the plurality of use cases includes a first use case for … , a second use case for … , and a third use case for … and wherein as part of the second use case an algorithm supplies: a list of new equipment, inventory constraints and availability (which is restricting the abstract idea to a Particular Technological Environment, by MPEP 2106.05(h)) c. executing … the AI model using holdout data … to obtain a sub-result (to perform a mental process and the performance of an abstract idea on a computer is no more than instructions to “apply it” on a computer, by MPEP 2106.05(f)) d. wherein each of the plural sub-results is obtained based on an invocation of a specific application program interface (API) of a plurality of APIs that communicates with the AI model (which is restricting the abstract idea to a Particular Technological Environment, by MPEP 2106.05(h)) Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, alone or in combination, do not provide significantly more than the abstract idea itself. 
Additional elements a and c are merely applying the abstract idea on a computer (MPEP 2106.05(f)) which cannot provide significantly more. Additional elements b and d are only restricting the abstract idea to a Particular Technological Environment (MPEP 2106.05(h)) which cannot provide significantly more. Thus, the claim is subject-matter ineligible. Regarding Claim 19: Subject Matter Eligibility Analysis Step 1: Dependent Claim 19 recites the method of Claim 18. Claim 18 is a method, thus a process, one of the four statutory categories of patentable subject matter. Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 19 further recites the method comprising: dividing … the business problem into the plurality of use cases (a human being can mentally apply evaluation to divide the business problem into the plurality of use cases) ranking … the plural sub-results based on the evaluation metric (a human being can mentally apply evaluation to rank the sub-results based on the evaluation metric) Claim 19 thus recites an abstract idea (that falls into the “mental processes” group of abstract ideas). Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because there are no new additional elements recited. Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited. The judicial exception alone does not provide significantly more than the abstract idea itself. Thus, the claim is subject-matter ineligible. Regarding Claim 21: Subject Matter Eligibility Analysis Step 1: Dependent Claim 21 recites the device of Claim 1. Claim 1 is a device, thus a machine, one of the four statutory categories of patentable subject matter. 
Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 21 further recites wherein the operations further comprise: cataloging the use case, resulting in a catalogued use case (a human being can mentally apply evaluation to organize and catalog the use case resulting in a cataloged use case). Claim 21 thus recites an abstract idea (that falls into the “mental processes” group of abstract ideas). Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because there are no new additional elements recited. Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited. The judicial exception alone does not provide significantly more than the abstract idea itself. Thus, the claim is subject-matter ineligible. Regarding Claim 22: Subject Matter Eligibility Analysis Step 1: Dependent Claim 22 recites the device of Claim 21. Claim 21 is a device, thus a machine, one of the four statutory categories of patentable subject matter. Subject Matter Eligibility Analysis Step 2A Prong 1: However, Claim 22 further recites wherein the operations further comprise: selecting second modeling logic for the AI model to solve a second business problem using the catalogued use case, wherein the second modeling logic is different from the modeling logic, and wherein the determining that the score is representative of an improvement is based on a comparison of the score with another score (a human being can mentally select modeling logic for a second business problem using the catalogued use case). Claim 22 thus recites an abstract idea (that falls into the “mental processes” group of abstract ideas). 
Subject Matter Eligibility Analysis Step 2A Prong 2: This judicial exception is not integrated into a practical application because there are no new additional elements recited. Subject Matter Eligibility Analysis Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because there are no new additional elements recited. The judicial exception alone does not provide significantly more than the abstract idea itself. Thus, the claim is subject-matter ineligible. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 1-16, 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Nakandala et al., “Cerebro: A Data System for Optimized Deep Learning Model Selection”, in view of Wang et al., “DeepSTCL: A Deep Spatio-temporal ConvLSTM for Travel Demand Prediction”, and further in view of Aarts et al., US-2021/0192314-A1. 
Regarding Claim 1: Nakandala teaches: A device, comprising: a processing system including a processor; and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising: (Nakandala, [4. SYSTEM OVERVIEW], Page 2163, Figure 4; [6. EXPERIMENTAL EVALUATION], Page 2166, Paragraph 4, “Experimental Setup. We use two clusters: CPU-only for Criteo and GPU-enabled for ImageNet, both on CloudLab [19]. Each cluster has 8 worker nodes and 1 master node. Each node in both clusters has two Intel Xeon 10-core 2.20 GHz CPUs, 192GB memory, 1TB HDD and 10 Gbps network” Figure 4 shows the system architecture of Cerebro containing the Cluster, Task Executor, and Scheduler. The clusters contain the processor/memory and interact with the Task Executor (unit training/validation on cluster and model hopping) which interacts with the Scheduler (responsible for workload). selecting modeling logic for an artificial intelligence (AI) model that solves a use case of a plurality of use cases … (Nakandala, [4. SYSTEM OVERVIEW], Page 2163, Paragraph 4, “We present an overview of Cerebro, an ML system that uses MOP to execute deep net model selection workloads”; [1. INTRODUCTION], Page 2159, “Case Study. We present a real-world model selection scenario. Our public health collaborators at UC San Diego wanted to try deep nets for identifying different activities (e.g., sitting, standing, stepping, etc.) of subjects from body-worn accelerometer data… During model selection, we tried different deep net architectures such as…” Cerebro is a machine learning system that executes model selection for deep neural networks that are selected to identify a plurality of use cases (different types of activities in this scenario) through parsing input data). 
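For orientation, the model-selection pattern the examiner maps to this limitation (trying several candidate "modeling logics" for a use case and keeping the best performer, per Nakandala's case study) can be sketched as follows. This is a minimal illustration only, not Cerebro's actual API; all function and variable names are hypothetical.

```python
# Minimal sketch of config-based model selection (illustrative only,
# not Cerebro's API): train each candidate config, score it on held-out
# data, and keep the best-scoring "modeling logic."

def select_modeling_logic(configs, train_fn, evaluate_fn):
    """Train each candidate config and return the best (config, score)."""
    best_config, best_score = None, float("-inf")
    for config in configs:
        model = train_fn(config)       # fit a model under this config
        score = evaluate_fn(model)     # validate on held-out data
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Toy usage: "configs" are learning rates; the evaluator prefers 0.1.
configs = [0.01, 0.1, 1.0]
train = lambda lr: lr                       # stand-in for real training
evaluate = lambda model: -abs(model - 0.1)  # best score at lr == 0.1
best, score = select_modeling_logic(configs, train, evaluate)
# best == 0.1
```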
executing the AI model using holdout data to obtain a sub-result; (Nakandala, [4.1 User-facing API], Page 2163, Paragraph 6, “Cerebro takes the reference to the dataset, set of initial training configs, the AutoML procedure, and 3 user defined functions: input_fn, model_fn, and train_fn. It first invokes input_fn to read and pre-process the data. It then invokes model_fn to instantiate the neural architecture… The train_fn is invoked to perform one sub-epoch of training. We assume validation data is also partitioned and use the same infrastructure for evaluation”; Page 2163, Figure 4. Cerebro invokes the neural architecture (shown in Figure 4), executes the AI model (neural network) with holdout data (the Examiner interprets holdout data as data that is separate from training data and used for validation; thus synonymous with validation/test data) to obtain validation results (as Nakandala notes that the validation data is partitioned the same way for evaluation). A validation result (task result) is a model performance result of the executed model’s task; thus, a sub-result as it is one result out of a set of results used to compare model performances). evaluating the sub-result based on an evaluation metric; (Nakandala, Page 2165, Algorithm 1 & 2; Page 2163, [6.2 Drilldown Experiments], Page 2168, Paragraph 3, “We evaluate 5 batch sizes and report makespans and the validation error of the best model for each batch size after 10 epochs”; Page 2168, Figure 9. The validation error and makespans (shown in Figure 9) are evaluating the sub-results based on makespans/scheduling and validation errors/loss (which are interpreted by the examiner as the evaluation metrics)). combining the sub-result with other sub-results of the plurality of use cases to generate intermediate data … (Nakandala, Page 2163, Figure 4; Page 2164, Figure 5. 
Figure 5 shows the combining of the sub-results by the schedulers which is scheduling the task (validation) result; thus, the Cerebro scheduler is combining the sub-results with other sub-results (scheduling tasks together) to generate intermediate data (which the examiner interprets as the scheduling data (as the data has not been processed and is in an intermediary form and merely scheduled to be executed within the Task Executor (Figure 4))). invoking a cost function for a business problem corresponding to the plurality of use cases on the intermediate data to obtain a score, (Nakandala, [5.1 Formal Problem Statement as MILP], Page 2164, Paragraph 6, “The objective and constraints of the MOP-based scheduling problem is as follows … [Equation 1: MILP objective and constraints] …”. Equation 1 is the objective function to minimize makespan workload (C) with respect to the constraints where the business problem is interpreted as scheduling for optimizing resource utilization and computational costs/runtimes with specific constraints; thus, the objective function is interpreted as a cost function for a business problem by the examiner where C is a makespan score). wherein the cost function includes a length of time needed to achieve a deployment created by the plurality of use cases; (Nakandala, [5.4 Comparing Different Scheduling Methods], Page 2165, Paragraph 8, “We set a maximum optimization time of 5min for tractability sake. We compare the scheduling methods on 3 dimensions … Sub-epoch training time (unit time) of a training config is directly proportional to the compute cost of the config and inversely proportional to compute capacity of the worker … heterogeneous setting, training config compute costs are randomly sampled (with replacement) from a set of popular deep CNNs (n=35) obtained from [3]. The costs vary from 360 MFLOPS to 21000 MFLOPS with a mean of 5939 MFLOPS and standard deviation of 5671 MFLOPS. 
Due to space constraints we provide these computational costs in the Appendix …”; Page 2166, Figure 6; Page 2167, Figure 7. C is the makespan score which includes a length of time as a makespan is a length of time (total time to complete a set of tasks (time needed to achieve a deployment of multi-model parallel task scheduling))). determining, based on the invoking, that the score is representative of an improvement; (Nakandala, Page 2166, Figure 6. Figure 6 shows a depiction of determining that the score (makespan) is representative of an improvement when scheduling with the randomized scheduler within Cerebro. Figure 9 shows runtime as well as validation error % to depict the improvement). taking, based on the determining, a snapshot of a business solution corresponding to the combining of the sub-result with the other sub-results; and (Nakandala, Page 2166, Figure 6. Figure 7 shows the results corresponding to the makespan schedules for the different systems; thus, a snapshot of a business solution corresponding to the combining of the sub-result with the other sub-results. Figure 9 shows runtime as well as validation error % for the snapshots of Cerebro compared to Horovod). determining, based on the combining, whether an exit criteria has been met. (Nakandala, Page 2165, Algorithm 1 & 2; Algorithm 1 and 2 are used in the Randomized Algorithm-based Scheduler and will continue to execute until the exit criteria is met (which is when Q (set of all validation units) is empty and leaves the workers and models idle)). While Nakandala teaches selecting modeling logic for an AI model that solves a use case of a plurality of use cases… Nakandala does not explicitly disclose the specific use cases. 
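The scheduler behavior the preceding citations rely on (a cost function that is a makespan, i.e., a length of time, and an exit criterion met when the queue Q of pending units is empty) can be sketched as follows. This is a greatly simplified, hypothetical illustration of that reading, not Nakandala's actual MILP or randomized scheduler; all names are illustrative.

```python
# Greatly simplified sketch of the cited scheduler behavior: tasks are
# drawn from a queue Q until it is empty (the exit criterion), each task
# goes to the least-loaded worker, and the "cost" is the makespan, i.e.,
# the total length of time to finish all tasks. Illustrative only.

def schedule_until_empty(task_times, num_workers):
    """Greedy list scheduling; returns the makespan as the cost score."""
    workers = [0.0] * num_workers          # accumulated time per worker
    queue = list(task_times)               # Q: all pending units
    while queue:                           # exit criterion: Q is empty
        task = queue.pop(0)
        idx = workers.index(min(workers))  # least-loaded worker
        workers[idx] += task
    return max(workers)                    # makespan = cost function value

# Toy usage: four unit-time tasks on two workers finish in 2 time units.
makespan = schedule_until_empty([1, 1, 1, 1], 2)
# makespan == 2.0
```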
However, Wang explicitly discloses: wherein the plurality of use cases includes a first use case for forecasting groups of equipment and personnel to deploy, (Wang, Page 1, Column 2, Paragraph 1, “Thus, the order data is used to predict travel demand, achieve appropriate urban resource scheduling and provide better services for passengers in this mode. In this paper, a Deep Spatio-Temporal Convolutional LSTM (DeepSTCL) is proposed to forecast travel demand which considers the time and space factors comprehensively and gets a great prediction performance”; Abstract, “Therefore, it is significant to predict travel demand for urban resource dispatching”. DeepSTCL is used for forecasting travel demand, where travel demand is the need or desire to travel based on geography, travel patterns, destinations, etc. The forecasting of travel demand is needed for urban resource dispatching (how a city manages and distributes (interpreted by the examiner as deploys) its resources, including personnel, equipment, vehicles and other assets to ensure effective service delivery). Thus, the use case of forecasting groups of equipment and personnel to deploy is taught by Wang). a second use case for determining destinations that have slots available to accommodate the groups of equipment and personnel, (Wang, Fig. 1, 2 & 9; Page 1, Column 2, Paragraph 2, “Travel demand data is typical spatio-temporal data”; Page 7, Paragraph 3, “Travel demand modeling is an inherent part of smarter transportation. Analyzing and forecasting travel demand can help us manage the hot spot of passenger demand in the next period, balance supply and demand and schedule vehicle resources for passengers”; Abstract, “Urban resource scheduling is an important part of the development of a smart city, and transportation resources are the main components of urban resources. 
Currently, a series of problems with transportation resources such as unbalanced distribution and road congestion disrupt the scheduling discipline”. The deep learning traffic demand forecasting framework is based on spatio-temporal data which allows for analyzing congestion and unbalanced deployment of equipment/personnel. Fig. 1 shows a pictorial example of a geographical rectangle and Fig. 2 shows an example of a snapshot of an order count (order demand/requests) where both are used to create the heatmaps shown in Fig. 9 (which highlights the forecasted scenarios/situations for travel demand). Thus, the method of DeepSTCL determines destinations (location for urban resource scheduling) that have slots available to accommodate (capacity based resource scheduling to avoid congestion/unbalanced distribution) the groups of equipment and personnel (deployable urban resources such as transportation resources)). and a third use case for determining transportation routes for bringing the groups of equipment and personnel from an origin to the destinations having the slots available, (Wang, Fig. 9; Page 1, Column 2, Paragraph 2, “Travel demand data is typical spatio-temporal data”; Abstract, “Urban resource scheduling is an important part of the development of a smart city, and transportation resources are the main components of urban resources. Currently, a series of problems with transportation resources such as unbalanced distribution and road congestion disrupt the scheduling discipline”. Urban resource scheduling’s main component is transportation resources as they cause issues such as unbalanced distribution and road congestion (both of which are capacity based and interpreted by the examiner as available slots). By forecasting travel demand accurately, transportation routes are able to be optimized for scheduling deployments (where scheduling is based off routing from origin to endpoint using travel demand prediction). Travel demand heatmaps (such as Fig. 
9) are utilized for equipment/personnel deployment (which is the scheduling of urban resources to avoid unbalanced deployments and road congestion)). and wherein as part of the second use case an algorithm supplies: a list of new equipment, inventory constraints and availability; (Wang, Fig. 1, 2 & 9; Page 5, Column 2, Paragraph 4, “ [Equation: order-count matrix] ”. As noted previously, Fig. 1 and 2 show example snapshots of an order count to create heatmaps shown in Fig. 9; where, the second use case is determining the destinations by the corresponding urban resource scheduling of the equipment (resources). The resource scheduling utilizes the heatmap for travel demand (interpreted as inventory constraints and availability), which is shown in the matrix (list) for order count (new requested trips, interpreted as new requested resources, i.e., new equipment/trips); thus, the algorithm is supplying the data for resource scheduling). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize Nakandala’s process of selecting the modeling logic to solve a use case, with the plurality of specific use cases taught by Wang to illustrate the importance of being able to analyze and forecast travel demand based on spatio-temporal data to manage efficiency and optimize distribution scheduling (see Wang, Page 7, Column 1, Paragraph 3, “Analyzing and forecasting travel demand can help us manage the hot spot of passenger demand in the next period, balance supply and demand and schedule vehicle resources for passengers … ConvLSTM-based deep learning model for travel demand (ST Data) prediction is proposed that takes advantage of both temporal and spatial properties … Our models’ performances are significantly beyond two baseline models, confirming that it is better and more flexible for the travel demand prediction”). 
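The order-count matrix the examiner points to in Wang (Figs. 1-2, aggregated into the heatmaps of Fig. 9) amounts to binning trip orders into a geographic grid of counts. A minimal sketch of that aggregation follows; the grid size, data, and names are hypothetical and purely illustrative, not Wang's actual implementation.

```python
# Hypothetical sketch of the order-count matrix cited from Wang: trip
# orders are binned into a geographic grid, and the per-cell counts form
# the demand "heatmap" that the rejection reads as inventory constraints
# and availability. Data and dimensions are illustrative only.

def order_count_matrix(orders, rows, cols):
    """Bin (row, col) order locations into a rows x cols count matrix."""
    grid = [[0] * cols for _ in range(rows)]
    for r, c in orders:
        if 0 <= r < rows and 0 <= c < cols:  # ignore out-of-area orders
            grid[r][c] += 1
    return grid

# Toy usage: three orders, two of which fall in the same grid cell.
orders = [(0, 1), (0, 1), (1, 0)]
heatmap = order_count_matrix(orders, 2, 2)
# heatmap == [[0, 2], [1, 0]]
```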
Nakandala/Wang do not explicitly teach: … wherein each of the other sub-results is obtained based on an invocation of a specific application program interface (API) of a plurality of APIs that communicates with the AI model; However, Aarts teaches: … wherein each of the other sub-results is obtained based on an invocation of a specific application program interface (API) of a plurality of APIs that communicates with the AI model; (Aarts, Page 58, [0556], “… a software layer may be implemented as a … API through which … may be invoked (e.g., called) … for performing compute, AI, or … to perform processing tasks in an effective and efficient manner”; Page 5, [0098], “… an application programming interface comprises a concatenation function for combining forward and reverse outputs of a ragged bidirectional recurrent neural network.”; FIG. 8. FIG. 8 depicts an example process where an API call (invocation) can occur for a specific API that communicates with the example RNN (recurrent neural network -> AI Model); where the graph definition and the recurrence attribute are considered the sub-results within this example). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the Nakandala/Wang process of selecting modeling logic to solve specific use cases, with the API calls of Aarts to generate optimized processes/results, automation, reduced complexity, technical advantages, etc. (see Aarts, FIG 8, Page 2, [0062], “In at least one embodiment, a graph is made to represent a recurrent neural network by associating a recurrence attribute 110 with the graph. In at least one embodiment, said graph is a nested graph. In at least one embodiment, association of a recurrence property with a graph effectively makes recurrence or looping an attribute of said graph. 
In at least one embodiment, use of a recurrence property in an application programming interface provides a technical advantage over use of a while loop, or similar programming construct, which may require construction of separate graphs to represent header, body, and exit portions of a while loop”). Regarding Claim 2: Nakandala/Wang/Aarts teach the device of Claim 1 and Nakandala further teaches: wherein each use case in the plurality of use cases is determined based on a common pattern in the business problem. (Nakandala, [1. INTRODUCTION], Page 2159, Paragraph 2, “Case Study. We present a real-world model selection scenario. Our public health collaborators at UC San Diego wanted to try deep nets for identifying different activities (e.g., sitting, standing, stepping, etc.) of subjects from body-worn accelerometer data…” The examiner interprets a business problem as a challenge an organization is facing. The case study taught by Nakandala notes a business problem (to identify different activities of subjects wearing accelerometers for the public health collaborators at UC San Diego) where the use cases are different activities based on a common pattern (e.g., sitting, standing, stepping, etc.)). Regarding Claim 3: Nakandala/Wang/Aarts teach the device of Claim 2 and Nakandala further teaches: wherein the common pattern comprises regression, classification, optimization, or a combination thereof. (Nakandala, [7. DISCUSSION AND LIMITATIONS], Page 2169, Paragraph 4, “Applications. Cerebro is in active use for time series analytics for our public health collaborators. In the case study from Section 1, Cerebro helped us pick 16 deep net configs to compare. To predict sitting vs. not-sitting, these configs had accuracies between 62% and 93%, underscoring the importance of rigorous model selection… However, note that MOP and Cerebro's ideas are directly usable for model selection of any ML models trainable with SGD. 
Examples include linear/logistic regression, some support vector machines, low-rank matrix factorization, and conditional random fields.” The case study for the public health collaborators is for classification with the use of stochastic gradient descent (optimization algorithm). Nakandala teaches optimization (the Examiner interprets optimization in terms of accuracy) and other machine learning models that can be used instead of stochastic gradient descent such as different regression algorithms). Regarding Claim 4: Nakandala/Wang/Aarts teach the device of Claim 2 and Nakandala further teaches: wherein the operations further comprise ranking the other sub-results based on the evaluation metric. (Nakandala, Page 2164, Figure 5 and [5. CEREBRO SCHEDULER] Paragraph 4, “Consider the model selection workload shown in Figure 5(A). Assume workers are homogeneous and there is no data replication. For one epoch of training, Figure 5(B) shows an optimal task-parallel schedule for this workload with a 9-unit makespan. Figure 5(C) shows a non-optimal MOP schedule with also 9 units makespan. But as Figure 5(D) shows, an optimal MOP schedule has a makespan of only 7 units. Overall, we see that MOP's training unit-based scheduling offers more flexibility to raise resource utilization”. Figure 5 denotes the different scheduling that occurs for model selection workloads. The ranking is done by optimization (resource utilization) and runtime (denoted in makespan units)). Regarding Claim 5: Nakandala/Wang/Aarts teach the device of Claim 4 and Nakandala further teaches: wherein the operations further comprise determining the exit criteria for the plurality of use cases, wherein the exit criteria comprise options including: exit when the cost function is satisfied within a threshold, continue searching for better solutions until an execution time limit has expired, or execute for a predefined number of iterations. 
(Nakandala, Page 2165, Algorithm 1 & 2; [5.4 Comparing Different Scheduling Methods], Page 2165, Paragraph 8, “We use simulations to compare the efficiency and makespans yielded by the three alternative schedulers… We set a maximum optimization time of 5min for tractability sake” Algorithm 1 & 2 are used within the randomized scheduler to schedule tasks and once Q is empty (all dataset units) the scheduler has met the exit criteria as all units were removed (leaving the workers and models idle). Section 5.4 discusses comparing different scheduling methods and Nakandala teaches time limit constraints (time limit expiry) when scheduling tasks for a model. Also, the amount of time it takes for Q to become empty can be interpreted as a cost function (the examiner interprets a cost function as mapping values with an event; in this scenario cost would be the total runtime of the scheduler)). Regarding Claim 6: Nakandala/Wang/Aarts teach the device of Claim 5 and Nakandala further teaches: wherein the device formulates the modeling logic for the AI model. (Nakandala, [4.2 System Architecture], Page 2163, Paragraph 8, “Supporting Multiple Deep Learning Tools. The functions input_fn, model_fn, and train_fn are written by users in the deep learning tool's APIs. We currently support TensorFlow and PyTorch (it is simple to add support for more). To support multiple such tools, we adopt a handler-based architecture…” Cerebro applies deep learning tools to formulate/configure the model logic that will be used by the scheduler). Regarding Claim 7: Nakandala/Wang/Aarts teach the device of Claim 6 and Nakandala further teaches: wherein the operations further comprise training the AI model using training data. (Nakandala, [4.1 User-facing API], Page 2163, Paragraph 6, “Cerebro takes the reference to the dataset, set of initial training configs, the AutoML procedure, and 3 user defined functions: input_fn, model_fn, and train_fn. 
It first invokes input_fn to read and pre-process the data. It then invokes model_fn to instantiate the neural architecture… The train_fn is invoked to perform one sub-epoch of training”).

Regarding Claim 8: Nakandala/Wang/Aarts teach the device of Claim 7 and Nakandala further teaches: wherein the operations further comprise performing data wrangling on the training data and the holdout data. (Nakandala, Page 2166, Table 4. Table 4 provides the dataset details (all data used within the benchmark datasets for experimenting, which contains training and holdout data). The values provided are after preprocessing, which includes data wrangling (the Examiner interprets data wrangling as transforming data), as Nakandala notes the data being encoded and densified in [6. EXPERIMENTAL EVALUATION], Page 2166, Paragraph 2).

Regarding Claim 9: Nakandala/Wang/Aarts teach the device of Claim 8 and Nakandala further teaches: wherein the processing system comprises a plurality of processors operating in a distributed computing environment. (Nakandala, [4.3 System Implementation Details], Page 2164, Paragraph 3, “We prototype Cerebro in Python using XML-RPC client-server package. Scheduler runs on the client. Each worker runs a single service. Scheduling follows a push-based model-Scheduler assigns tasks and periodically checks the responses from the workers. We use a shared network file system (NFS) as the central repository for models. Model hopping is realized implicitly by workers writing models to and reading models from this shared file system”; [6. EXPERIMENTAL EVALUATION], Page 2166, “Experimental Setup. We use two clusters: CPU-only for Criteo and GPU-enabled for ImageNet, both on CloudLab [19]. Each cluster has 8 worker nodes and 1 master node. Each node in both clusters has two Intel Xeon 10-core 2.20 GHz CPUs, 192GB memory, 1TB HDD and 10 Gbps network. Each GPU cluster worker node has an extra Nvidia P100 GPU.
All nodes run Ubuntu 16.04.” The Examiner interprets a distributed computing environment as a computer network setup where databases are located within multiple nodes, allowing access locally/remotely (Nakandala teaches both remote/local reading from the partitions within [6.2 Drill-down experiments], Page 2168, “In this setting, the dataset is partitioned, replicated, and stored on 8 workers. We then load all local data partitions into each worker's memory. Celery performs remote reads for nonlocal partitions”)).

Regarding Claims 10-16: Claims 10-16 incorporate substantively all the limitations of Claims 1-8 in a non-transitory, machine-readable medium and further recite executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, the operations comprising (Nakandala, Page 2163, Figure 4; [6. EXPERIMENTAL EVALUATION], Page 2166, Paragraph 4, “Experimental Setup. We use two clusters: CPU-only for Criteo and GPU-enabled for ImageNet, both on CloudLab [19]. Each cluster has 8 worker nodes and 1 master node. Each node in both clusters has two Intel Xeon 10-core 2.20 GHz CPUs, 192GB memory, 1TB HDD and 10 Gbps network”. Figure 4 shows the system architecture of Cerebro containing the Cluster, Task Executor, and Scheduler. The clusters contain the processor/memory and interact with the Task Executor (unit training/validation on cluster and model hopping), which interacts with the Scheduler (responsible for workload). Thus, the experiments done on the clusters are being done on a processor, and a CRM is inherent); thus, Claims 10, 11, 12, 13-16 are rejected for reasons set forth in the rejections of Claims 1, 4, 2-3, 5-8, respectively.

Regarding Claim 18: Nakandala teaches: A method, comprising: formulating, by a processing system including a processor, modeling logic for an artificial intelligence (AI) model that solves a use case of a plurality of use cases, … (Nakandala, [4.
SYSTEM OVERVIEW], Page 2163, Paragraph 4, “We present an overview of Cerebro, an ML system that uses MOP to execute deep net model selection workloads”; [1. INTRODUCTION], Page 2159, “Case Study. We present a real-world model selection scenario. Our public health collaborators at UC San Diego wanted to try deep nets for identifying different activities (e.g., sitting, standing, stepping, etc.) of subjects from body-worn accelerometer data… During model selection, we tried different deep net architectures such as…” Cerebro is a machine learning system that executes a model selection method for deep neural networks that are selected to identify a plurality of use cases (different types of activities in this scenario) through parsing input data).

executing, by the processing system, the AI model using holdout data to obtain a sub-result; (Nakandala, [4.1 User-facing API], Page 2163, Paragraph 6, “Cerebro takes the reference to the dataset, set of initial training configs, the AutoML procedure, and 3 user defined functions: input_fn, model_fn, and train_fn. It first invokes input_fn to read and pre-process the data. It then invokes model_fn to instantiate the neural architecture… The train_fn is invoked to perform one sub-epoch of training. We assume validation data is also partitioned and use the same infrastructure for evaluation”; Page 2163, Figure 4. Cerebro invokes the neural architecture (shown in Figure 4) and executes the AI model (neural network) with holdout data (the Examiner interprets holdout data as data that is separate from training data and used for validation; thus synonymous with validation/test data) to obtain validation results (as Nakandala notes that the validation data is partitioned the same way for evaluation). A validation result (task result) is a model performance result of the executed model’s task; thus, it is a sub-result, as it is one result out of a set of results used to compare model performances).
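The holdout-based evaluation described in this mapping can be illustrated with a minimal sketch. This is not Cerebro's actual API; the helper names (`holdout_split`, `evaluate`) and the toy threshold "model" are assumptions introduced purely to show how executing a model on held-out data yields a single validation score, i.e. one "sub-result".

```python
import random

def holdout_split(data, holdout_frac=0.2, seed=0):
    # Partition the dataset so the holdout portion stays separate from
    # training data (mirroring validation data partitioned for evaluation).
    rng = random.Random(seed)
    shuffled = list(data)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

def evaluate(model_fn, holdout):
    # Execute the model on holdout data; the accuracy is one
    # validation result, i.e. a single "sub-result".
    correct = sum(1 for x, y in holdout if model_fn(x) == y)
    return correct / len(holdout)

# Toy labeled data and a toy threshold "model" (purely illustrative).
data = [(x, int(x > 5)) for x in range(10)]
train, holdout = holdout_split(data)
sub_result = evaluate(lambda x: int(x > 5), holdout)
```

In a model-selection workload, one such sub-result would be produced per training configuration, and the set of sub-results compared to pick the best model.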
evaluating, by the processing system, the sub-result based on an evaluation metric; and (Nakandala, Page 2165, Algorithm 1 & 2; [6.2 Drill-down Experiments], Page 2168, Paragraph 3, “We evaluate 5 batch sizes and report makespans and the validation error of the best model for each batch size after 10 epochs”; Page 2168, Figure 9. The validation error and makespans (shown in Figure 9) evaluate the sub-results based on makespans/scheduling and validation errors/loss (which the Examiner interprets as the evaluation metrics)).

combining, by the processing system, plural sub-results of the plurality of use cases to generate intermediate data; (Nakandala, Page 2163, Figure 4; Page 2164, Figure 5. Figure 5 shows the combining of the sub-results by the schedulers, which schedule the task (validation) results; thus, the Cerebro scheduler is combining the sub-results with other sub-results (scheduling tasks together) to generate intermediate data (which the Examiner interprets as the scheduling data, as the data has not been processed, is in an intermediary form, and is merely scheduled to be executed within the Task Executor (Figure 4))).

invoking, by the processing system, a cost function for a business problem corresponding to the plurality of use cases on the intermediate data to obtain a score, (Nakandala, [5.1 Formal Problem Statement as MILP], Page 2164, Paragraph 6, “The objective and constraints of the MOP-based scheduling problem is as follows … [Equation 1, reproduced as an image in the original action] …”. Equation 1 is the objective function to minimize the makespan of the workload (C) with respect to the constraints, where the business problem is interpreted as scheduling for optimizing resource utilization and computational costs/runtimes with specific constraints; thus, the objective function is interpreted by the Examiner as a cost function for a business problem, where C is a makespan score).
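The makespan-as-cost-function reading above can be sketched concretely. The unit durations below are invented for illustration (they merely echo the 9-unit task-parallel versus 7-unit optimal MOP comparison of Nakandala's Figure 5), and the `makespan` helper is an assumption, not Nakandala's MILP formulation.

```python
def makespan(schedule):
    # Makespan C: completion time of the busiest worker; this is the
    # scalar score the examiner maps to the MILP cost-function objective.
    return max(sum(tasks) for tasks in schedule.values())

# Invented per-task unit durations for two candidate schedules of the
# same workload across three workers.
task_parallel = {"w1": [3, 3, 3], "w2": [3, 3], "w3": [3]}
mop_optimal = {"w1": [3, 3, 1], "w2": [3, 3], "w3": [3, 3]}

baseline = makespan(task_parallel)
score = makespan(mop_optimal)
improved = score < baseline  # a lower makespan is an improvement
```

Comparing the two scores in this way is the sense in which "the score is representative of an improvement" in the claim mapping: the candidate schedule's makespan is checked against another schedule's makespan.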
wherein the cost function includes a length of time needed to achieve a deployment created by the plurality of use cases; (Nakandala, [5.4 Comparing Different Scheduling Methods], Page 2165, Paragraph 8, “We set a maximum optimization time of 5min for tractability sake. We compare the scheduling methods on 3 dimensions … Sub-epoch training time (unit time) of a training config is directly proportional to the compute cost of the config and inversely proportional to compute capacity of the worker … heterogeneous setting, training config compute costs are randomly sampled (with replacement) from a set of popular deep CNNs (n=35) obtained from [3]. The costs vary from 360 MFLOPS to 21000 MFLOPS with a mean of 5939 MFLOPS and standard deviation of 5671 MFLOPS. Due to space constraints we provide these computational costs in the Appendix …”; Page 2166, Figure 6; Page 2167, Figure 7. C is the makespan score, which includes a length of time, as a makespan is a length of time (the total time to complete a set of tasks, i.e., the time needed to achieve a deployment of multi-model parallel task scheduling)).

determining, by the processing system and based on the invoking, that the score is representative of an improvement; (Nakandala, Page 2166, Figure 6. Figure 6 shows a depiction of determining that the score (makespan) is representative of an improvement when scheduling with the randomized scheduler within Cerebro. Figure 9 also shows runtime and validation error % to depict the improvement).

taking, by the processing system and based on the determining, a snapshot of a business solution corresponding to the combining of the sub-result with the other sub-results; and (Nakandala, Page 2167, Figure 7. Figure 7 shows the results corresponding to the makespan schedules for the different systems; thus, a snapshot of a business solution corresponding to the combining of the sub-result with the other sub-results.
Figure 9 also shows runtime and validation error % for the snapshots of Cerebro compared to Horovod).

determining, by the processing system and based on the combining, whether an exit criteria has been met. (Nakandala, Page 2165, Algorithm 1 & 2. Algorithms 1 and 2 are used in the Randomized Algorithm-based Scheduler and will continue to execute until the exit criteria is met (which is when Q (the set of all validation units) is empty, leaving the workers and models idle)).

While Nakandala teaches selecting modeling logic for an AI model that solves a use case of a plurality of use cases… Nakandala does not explicitly disclose the specific use cases. However, Wang explicitly discloses: wherein the plurality of use cases includes a first use case for forecasting groups of equipment and personnel to deploy, (Wang, Page 1, Column 2, Paragraph 1, “Thus, the order data is used to predict travel demand, achieve appropriate urban resource scheduling and provide better services for passengers in this mode. In this paper, a Deep Spatio-Temporal Convolutional LSTM (DeepSTCL) is proposed to forecast travel demand which considers the time and space factors comprehensively and gets a great prediction performance”; Abstract, “Therefore, it is significant to predict travel demand for urban resource dispatching”. DeepSTCL is used for forecasting travel demand, where travel demand is the need or desire to travel based on geography, travel patterns, destinations, etc. The forecasting of travel demand is needed for urban resource dispatching (how a city manages and distributes (interpreted by the Examiner as deploys) its resources, including personnel, equipment, vehicles and other assets to ensure effective service delivery). Thus, the use case of forecasting groups of equipment and personnel to deploy is taught by Wang).

a second use case for determining destinations that have slots available to accommodate the groups of equipment and personnel, (Wang, Fig.
1, 2 & 9; Page 1, Column 2, Paragraph 2, “Travel demand data is typical spatio-temporal data”; Page 7, Paragraph 3, “Travel demand modeling is an inherent part of smarter transportation. Analyzing and forecasting travel demand can help us manage the hot spot of passenger demand in the next period, balance supply and demand and schedule vehicle resources for passengers”; Abstract, “Urban resource scheduling is an important part of the development of a smart city, and transportation resources are the main components of urban resources. Currently, a series of problems with transportation resources such as unbalanced distribution and road congestion disrupt the scheduling discipline”. The deep learning traffic demand forecasting framework is based on spatio-temporal data, which allows for analyzing congestion and unbalanced deployment of equipment/personnel. Fig. 1 shows a pictorial example of a geographical rectangle, and Fig. 2 shows an example of a snapshot of an order count (order demand/requests); both are used to create the heatmaps shown in Fig. 9 (which highlight the forecasted scenarios/situations for travel demand). Thus, the method of DeepSTCL determines destinations (locations for urban resource scheduling) that have slots available to accommodate (capacity-based resource scheduling to avoid congestion/unbalanced distribution) the groups of equipment and personnel (deployable urban resources such as transportation resources)).

and a third use case for determining transportation routes for bringing the groups of equipment and personnel from an origin to the destinations having the slots available, (Wang, Fig. 9; Page 1, Column 2, Paragraph 2, “Travel demand data is typical spatio-temporal data”; Abstract, “Urban resource scheduling is an important part of the development of a smart city, and transportation resources are the main components of urban resources.
Currently, a series of problems with transportation resources such as unbalanced distribution and road congestion disrupt the scheduling discipline”. Urban resource scheduling’s main component is transportation resources, which cause issues such as unbalanced distribution and road congestion (both of which are capacity-based and interpreted by the Examiner as available slots). By forecasting travel demand accurately, transportation routes can be optimized for scheduling deployments (where scheduling is based on routing from origin to endpoint using travel demand prediction). Travel demand heatmaps (such as Fig. 9) are utilized for equipment/personnel deployment (which is the scheduling of urban resources to avoid unbalanced deployments and road congestion)).

and wherein as part of the second use case an algorithm supplies: a list of new equipment, inventory constraints and availability; (Wang, Fig. 1, 2 & 9; Page 5, Column 2, Paragraph 4, “[equation reproduced as an image in the original action]”. As noted previously, Fig. 1 and 2 show example snapshots of an order count used to create the heatmaps shown in Fig. 9, where the second use case is determining the destinations by the corresponding urban resource scheduling of the equipment (resources). The resource scheduling utilizes the heatmap for travel demand (interpreted as the inventory constraints and availability), which is shown in the matrix (list) of order counts (new requested trips, interpreted as new requested resources, i.e., new equipment/trips). Thus, the algorithm supplies the data for resource scheduling).
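The "order-count matrix as inventory constraints and availability" interpretation above can be illustrated with a toy sketch. The 3x3 grid, the per-cell capacity, and the `available_slots` helper are all hypothetical, loosely in the spirit of Wang's order-count snapshots (Figs. 1-2); Wang's paper does not define such a capacity check.

```python
# Hypothetical 3x3 grid of forecast order counts per city cell.
order_counts = [
    [12, 30, 7],
    [5, 44, 19],
    [2, 9, 25],
]
CAPACITY = 25  # assumed deployable-resource capacity per cell

def available_slots(counts, cap):
    # Cells whose forecast demand is under capacity have slots
    # available to accommodate newly deployed equipment/personnel.
    return [(r, c) for r, row in enumerate(counts)
            for c, v in enumerate(row) if v < cap]

slots = available_slots(order_counts, CAPACITY)
```

Under this reading, the matrix of order counts plays the role of the "list" the algorithm supplies, and the capacity comparison plays the role of the inventory constraint that determines which destinations can accommodate the deployment.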
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize Nakandala’s process of selecting the modeling logic to solve a use case with the plurality of specific use cases taught by Wang, to illustrate the importance of being able to analyze and forecast travel demand based on spatio-temporal data to manage efficiency and optimize distribution scheduling (see Wang, Page 7, Column 1, Paragraph 3, “Analyzing and forecasting travel demand can help us manage the hot spot of passenger demand in the next period, balance supply and demand and schedule vehicle resources for passengers … ConvLSTM-based deep learning model for travel demand (ST Data) prediction is proposed that takes advantage of both temporal and spatial properties … Our models’ performances are significantly beyond two baseline models, confirming that it is better and more flexible for the travel demand prediction”).

Nakandala/Wang do not explicitly teach: … wherein each of the other sub-results is obtained based on an invocation of a specific application program interface (API) of a plurality of APIs that communicates with the AI model; However, Aarts teaches: … wherein each of the other sub-results is obtained based on an invocation of a specific application program interface (API) of a plurality of APIs that communicates with the AI model; (Aarts, Page 58, [0556], “… a software layer may be implemented as a … API through which … may be invoked (e.g., called) … for performing compute, AI, or … to perform processing tasks in an effective and efficient manner”; Page 5, [0098], “… an application programming interface comprises a concatenation function for combining forward and reverse outputs of a ragged bidirectional recurrent neural network.”; FIG. 8. FIG.
8 depicts an example process where an API call (invocation) can occur for a specific API that communicates with the example RNN (recurrent neural network -> AI model); the graph definition and the recurrence attribute are considered the sub-results within this example).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the Nakandala/Wang process of selecting modeling logic to solve specific use cases with the API calls of Aarts to generate optimized processes/results, enable automation, reduce complexity, and provide technical advantages (see Aarts, FIG. 8; Page 2, [0062], “In at least one embodiment, a graph is made to represent a recurrent neural network by associating a recurrence attribute 110 with the graph. In at least one embodiment, said graph is a nested graph. In at least one embodiment, association of a recurrence property with a graph effectively makes recurrence or looping an attribute of said graph. In at least one embodiment, use of a recurrence property in an application programming interface provides a technical advantage over use of a while loop, or similar programming construct, which may require construction of separate graphs to represent header, body, and exit portions of a while loop”).

Regarding Claim 19: Nakandala/Wang/Aarts teach the method of Claim 18 and Nakandala further teaches: dividing, by the processing system, a business problem into the plurality of use cases; and (Nakandala, [1. INTRODUCTION], Page 2159, Paragraph 2, “Case Study. We present a real-world model selection scenario. Our public health collaborators at UC San Diego wanted to try deep nets for identifying different activities (e.g., sitting, standing, stepping, etc.) of subjects from body-worn accelerometer data…” The Examiner interprets a business problem as a challenge an organization is facing.
The case study taught by Nakandala notes a business problem (to identify different activities of subjects wearing accelerometers) from the public health collaborators at UC San Diego, which is divided into different types of activities (e.g., sitting, standing, stepping, etc.)).

ranking, by the processing system, the plural sub-results based on the evaluation metric. (Nakandala, Page 2164, Figure 5 and [5. CEREBRO SCHEDULER] Paragraph 4, “Consider the model selection workload shown in Figure 5(A). Assume workers are homogeneous and there is no data replication. For one epoch of training, Figure 5(B) shows an optimal task-parallel schedule for this workload with a 9-unit makespan. Figure 5(C) shows a non-optimal MOP schedule with also 9 units makespan. But as Figure 5(D) shows, an optimal MOP schedule has a makespan of only 7 units. Overall, we see that MOP's training unit-based scheduling offers more flexibility to raise resource utilization”. Figure 5 denotes the different scheduling that occurs for model selection workloads. The ranking is done by optimization (resource utilization) and runtime (denoted in makespan units)).

Claims 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Nakandala et al., “Cerebro: A Data System for Optimized Deep Learning Model Selection”, in view of Wang et al., “DeepSTCL: A Deep Spatio-temporal ConvLSTM for Travel Demand Prediction”, in view of Aarts et al., US-2021/0192314-A1, and further in view of Deshpande et al., “A linearized framework and a new benchmark for model selection for fine-tuning”.

Regarding Claim 21: Nakandala/Wang/Aarts teach the device of Claim 1 but do not explicitly teach: wherein the operations further comprise: cataloging the use case, resulting in a catalogued use case. However, Deshpande teaches: wherein the operations further comprise: cataloging the use case, resulting in a catalogued use case.
(Deshpande, Page 1, Column 1, Paragraph 1, “A “model zoo” is a collection of pre-trained models, obtained by training different architectures on many datasets covering a variety of tasks and domains. … typical use of a model zoo is to provide a good initialization which can then be fine-tuned for a new target task, for which we have few training data”; Page 5, Column 1, Paragraph 2, “We evaluate model selection and fine-tuning with both, a model zoo of single-domain experts … and a model zoo of multi-domain experts … We include publicly available large source … from different domains, e.g. … consists of aerial imagery, … consist of food, plant images, … contain scene images. This allows us to maximize the coverage of our model zoo to different domains and enables more effective transfer when fine-tuning on different target tasks. Model zoo of single-domain experts. We build a model zoo of a total of 30 models … to evaluate our model selection”. The model zoo is a collection/repository of pretrained models (different configurations/initializations that are saved); thus, the model zoo is cataloging the use cases, resulting in a catalogued use case).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize Nakandala’s process of selecting the modeling logic to solve a use case, with the plurality of specific use cases taught by Wang to illustrate the importance of being able to analyze and forecast travel demand based on spatio-temporal data, and with the model selection utilizing catalogued use cases of Deshpande, to manage efficiency, boost efficiency, save cost, optimize distribution scheduling, use prior historical data to initialize model selections for other use cases, and compare scores (see Deshpande, Page 8, Column 2, Paragraph 5, “Fine-tuning using model zoo is a simple method to boost accuracy.
We show that while a model zoo may have modest gains in the high-data regime, it outperforms Imagenet experts networks in the low-data regime. We show that simple baseline methods derived from a linear approximation of fine-tuning – Label-Gradient Correlation (LGC) and Label-Feature Correlation (LFC) – can select good models (single-domain) or parameters (multi-domain) to fine-tune, and match or outperform relevant model selection methods in the literature. Our model selection saves the cost of bruteforce fine-tuning and makes model zoos viable”).

Regarding Claim 22: Nakandala/Wang/Aarts/Deshpande teach the device of Claim 21. Nakandala in view of Wang fails to explicitly teach: wherein the operations further comprise: selecting second modeling logic for the AI model to solve a second business problem using the catalogued use case, wherein the second modeling logic is different from the modeling logic, and wherein the determining that the score is representative of an improvement is based on a comparison of the score with another score.

However, Deshpande teaches: wherein the operations further comprise: selecting second modeling logic for the AI model to solve a second business problem using the catalogued use case, wherein the second modeling logic is different from the modeling logic, and wherein the determining that the score is representative of an improvement is based on a comparison of the score with another score. (Deshpande, Page 2, Figure 2; Page 8, Figure 6; Page 5, Column 1, Paragraph 2, “This allows us to maximize the coverage of our model zoo to different domains and enables more effective transfer when fine-tuning on different target tasks. … We build a model zoo of a total of 30 models … to evaluate our model selection”.
Figure 2 shows the selecting of the second modeling logic using pretrained model configurations (catalogued use cases) for initialization versus different architectures; thus, selecting second modeling logic for the AI model to solve a second business problem using the catalogued use case, wherein the second modeling logic is different from the modeling logic. Figure 6 shows the LFC scores having the highest Spearman’s ranking correlation, used to compare the predicted model selection method score versus the actual performance score after fine-tuning; thus, determining that the score is representative of an improvement is based on a comparison of the score with another score). The motivation for Claim 21’s combination of Nakandala/Wang/Aarts/Deshpande is still maintained.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM RAHMAN, whose telephone number is (703) 756-1646. The examiner can normally be reached M-F 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/I.R./ Examiner, Art Unit 2122
/KAKALI CHAKI/ Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Jun 02, 2021
Application Filed
Aug 22, 2024
Non-Final Rejection — §101, §103
Nov 21, 2024
Response Filed
Nov 21, 2024
Response after Non-Final Action
Jan 07, 2025
Response Filed
Mar 18, 2025
Final Rejection — §101, §103
Jun 26, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Jul 11, 2025
Non-Final Rejection — §101, §103
Oct 14, 2025
Response Filed
Jan 24, 2026
Final Rejection — §101, §103 (current)


Prosecution Projections

5-6
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.
