Prosecution Insights
Last updated: April 19, 2026
Application No. 17/968,944

MOVEMENT OF OPERATIONS BETWEEN CLOUD AND EDGE PLATFORMS

Final Rejection: §101, §102, §103, §112
Filed: Oct 19, 2022
Examiner: PHAKOUSONH, DARAVANH
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 50% (1 granted / 2 resolved; grants 50% of resolved cases), -5.0% vs TC avg
Interview Lift: +100.0% (resolved cases with interview)
Typical timeline: 4y 0m avg prosecution; 33 currently pending
Career history: 35 total applications across all art units

Statute-Specific Performance

§101: 31.2% (-8.8% vs TC avg)
§103: 38.1% (-1.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 2 resolved cases

Office Action

§101 §102 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment/Arguments

1. Amendments to claims 15 and 16 are no longer interpreted under 35 U.S.C. 112(f).

2. Applicant’s arguments against the rejection under 35 U.S.C. 101, filed on December 4, 2025, have been fully considered but are not persuasive. Applicant contends that the Office improperly characterized limitations such as “training” and “transferring” as mental processes and further argues that the claims are not directed to a judicial exception because the recited operations cannot be practically performed in the human mind. These arguments mischaracterize the Office Action. The Office did not identify the limitations of training a machine learning model or transferring execution between a cloud computing platform and an edge platform as mental processes. Rather, as set forth in the Office Action, these limitations were analyzed as computer-implemented operations corresponding to generic data processing and conventional workload redirection performed on generic computing infrastructure. The judicial exception identified by the Office resides in the analytical and determinative logic that governs those operations, not in the operations themselves. In particular, the claims recite analyzing results, determining whether a machine learning algorithm should be additionally trained, and making a negative determination that governs whether execution is transferred. These limitations recite evaluation, judgment, and decision-making logic, which constitute mental processes under MPEP 2106.04(a)(2)(III), even when automated or implemented on a computing platform. The amendments to the claims do not alter the nature of these limitations and do not move the claims outside the mental process grouping.
Further, the claims as a whole include validation and evaluation steps such as computing prediction errors, generating learning curves, and assessing model performance, which constitute mathematical calculations used to evaluate results and make determinations. Per MPEP 2106.04(a)(2)(I), mathematical relationships, formulas, and calculations are judicial exceptions. Such calculations can be performed in the human mind, including through the use of basic computational tools such as a calculator or a spreadsheet, and therefore fall within the mental process grouping. The fact that these calculations are expressed in words rather than equations does not alter their abstract nature, as words operating on data to solve a problem serve the same purpose as a mathematical formula. Accordingly, calculations such as statistical validation remain abstract mathematical concepts and mental processes whether performed using basic tools or automated on a generic cloud computing platform. Applicant’s arguments regarding system complexity and the use of cloud and edge platforms are not persuasive. As explained in Mortgage Grader, Benson, and Versata, the courts do not distinguish between mental processes performed entirely in the human mind and those performed with the assistance of basic tools, such as pen and paper or a calculator, nor do they distinguish between mental processes performed by humans and those automated on a computer. Thus, implementing evaluative and decision-making logic on generic cloud or edge computing platforms does not remove such logic from the mental process grouping. Further, as explained in Electric Power Group, claims reciting collecting information, analyzing it, and making a determination based on the analysis at a high level of generality are directed to abstract ideas.
Here, the claimed platforms merely act as tools to execute abstract evaluations and decisions, such as making a negative determination based on analyzed results, and do not improve the functioning of the platforms themselves. Additionally, the claims do not recite a particular algorithm or specific technical implementation for performing the recited training, validation, or transfer operations. Instead, the claims are drafted in a result-oriented manner, claiming the outcome of a trained and transferred model rather than a specific implementation that improves computer functionality. Accordingly, the claims as a whole are directed to abstract ideas, including mental processes and mathematical concepts, under Step 2A. The additional elements, whether considered individually or in combination, merely automate these abstract ideas using generic computing infrastructure and therefore do not amount to significantly more under Step 2B. The rejections of claims 1-20 under 35 U.S.C. 101 are maintained.

3. Applicant’s arguments filed on December 4, 2025 regarding the rejection of claims 1-20 under 35 U.S.C. 102 and 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant asserts that Sharma only pushes a “completed” model to the edge for live execution and therefore does not disclose transferring operations for “further training.” However, under the broadest reasonable interpretation, the term “further training” is not limited to full retraining performed exclusively on the edge platform. The claims reasonably encompass continued refinement of model execution based on performance, including iterative update cycles, drift detection, and deployment of updated models. Sharma teaches detecting degradation in model accuracy, triggering retraining or reevaluation, and deploying updated models to the edge following a negative determination regarding model performance. These disclosures satisfy the claimed transfer “for further training” under BRI.
Applicant has not shown that the claims require a specific training protocol or edge-based retraining mechanism that is absent from the prior art. To the extent Applicant relies on a narrower interpretation of “further training,” such an interpretation is not commensurate with the broadest reasonable interpretation of the claims. The additional references of record (e.g., Brownlee) further confirm that determining whether additional training is warranted based on model performance was well known. Accordingly, Applicant has not identified error in the anticipation or obviousness rejections. The rejections under 35 U.S.C. 102 and 35 U.S.C. 103 are therefore maintained.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-5, 10, 14 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The claims recite, inter alia, “transferring…the given operation to an edge computing platform for further training of the machine learning algorithm of the given operation executing on the edge computing platform.” The phrase “for further training of the machine learning algorithm of the given operation” renders the scope of the claims unclear.
It is unclear what constitutes the “given operation,” whether the “given operation” is distinct from the “machine learning algorithm,” or whether the two refer to the same subject matter. It is further unclear how transferring the “given operation” to the edge computing platform results in “further training” of the machine learning algorithm, and whether such “further training” requires retraining at the edge platform, retraining in the cloud followed by redeployment, continued execution with iterative updating, or some other mechanism. The claims do not clearly define the relationship between the “given operation,” the “machine learning algorithm,” the transfer step, and the alleged “further training.” Accordingly, one of ordinary skill in the art would not be reasonably apprised of the metes and bounds of the claimed invention. Claims 6-9, 11-13, 15-17, and 19-20 depend directly or indirectly from claims 1, 14, or 18 and therefore incorporate the indefinite limitation. Accordingly, the claims are likewise indefinite.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

101 Subject Matter Eligibility Analysis

Step 1: Claims 1-20 are within the four statutory categories (a process, machine, manufacture, or composition of matter). Claims 1-13 are directed to a method consisting of a series of steps and therefore fall within the statutory category of process. Claims 14-20 are directed to storage media and processors, which are machines.
Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon). If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represents an improvement to technology.

Regarding claim 1, the following claim elements are abstract ideas: analyzing…results of executing the machine learning algorithm of the given operation executing on the cloud computing platform (This is an abstract idea of a “mental process.” It amounts to reviewing a model’s outputs and judging their quality (e.g., compare outputs against the correct answers, tally right/wrong, compute a simple score, and decide if a threshold is met). Because this can be done in the mind or with pen and paper, it falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).); determining…based at least in part on the analysis, whether the machine learning algorithm should be additionally trained (The limitation recites deciding, based on prior analysis, whether additional training is warranted. Such evaluation (e.g., comparing a performance score to a threshold and choosing “continue” or “stop”) can be performed in the human mind or with pen and paper. Accordingly, it falls within the mental process grouping of abstract ideas.
See MPEP 2106.04(a)(2)(III).); The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: cloud computing platform (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); training…a machine learning algorithm of a given operation executing on a cloud platform (This limitation amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f). It also confines the idea to a technological environment (“on a cloud platform”) and thus is a field-of-use limitation (see MPEP 2106.05(h)). The step of “executing a machine learning algorithm” is merely a generic data operation that amounts to receiving data, running code, and storing/returning results, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II).); transferring…based at least in part on a negative determination, the given operation to an edge computing platform for further training of the machine learning algorithm of the given operation executing on the edge computing platform (The step of “transferring the execution” is merely an instruction to apply the abstract idea and does not provide a meaningful limitation. See MPEP 2106.05(f). “Transferring the execution” is conventional workload redirection in distributed systems (for example, sending a job or control signal or redeploying a model or container from cloud to edge), which is a well-understood, routine, and conventional activity (see MPEP 2106.05(d)(II)) and merely confines the concept to a technological environment of cloud and edge (field of use, see MPEP 2106.05(h)).).
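The evaluative logic the Office characterizes above as a mental process — comparing a performance score against a threshold and making a negative determination that triggers transfer — can be illustrated with a minimal sketch. The 0.95 threshold, function names, and return strings below are hypothetical assumptions for illustration only; they are not drawn from the claims or the specification.

```python
# Minimal sketch of the threshold-based "additional training" determination
# described above. The 0.95 accuracy threshold and all names are
# illustrative assumptions, not taken from the claims.

def should_train_further(accuracy: float, threshold: float = 0.95) -> bool:
    """Return True when model accuracy falls short of the target threshold."""
    return accuracy < threshold

def decide_placement(accuracy: float) -> str:
    """A negative determination (no further cloud training warranted)
    triggers transfer of the given operation to the edge platform."""
    if should_train_further(accuracy):
        return "continue training on cloud"
    return "transfer operation to edge"

print(decide_placement(0.90))  # accuracy below threshold
print(decide_placement(0.97))  # accuracy meets threshold
```

The entire decision reduces to one comparison, which is the Office's point: the logic itself is a simple evaluation, separate from the computing infrastructure that automates it.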
Regarding claim 2, the rejection of claim 1 is incorporated herein; the following claim elements are abstract ideas: generating…in response to the negative determination (This is an abstract idea of a “mental process.” It involves mentally producing or formulating something – such as an alternative plan, revised data, or a new set of instructions – after concluding that a prior result did not meet a desired standard. For example, a person could determine that a test score is too low and, in response, mentally create a new strategy or generate a revised solution. Such responsive generation based on an earlier conclusion can be carried out entirely in the human mind or with pen and paper and therefore falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: the cloud computing platform (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); a request that the further training of the machine learning algorithm be performed on the edge computing platform (The step of “requesting” further training on a particular platform is merely a generic communication between computing components and amounts to insignificant extra-solution activity, as discussed in MPEP 2106.05(g). Sending a request from one system to another is a well-understood, routine, and conventional computer function that does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception.
See MPEP 2106.05(d)(II)(i).); transmitting…the request to the edge computing platform (The step of transmitting a request between platforms is merely a generic communication between computing devices and amounts to insignificant extra-solution activity, as discussed in MPEP 2106.05(g). Transmitting information from one system to another is a well-understood, routine, and conventional computer function that does not integrate the judicial exception into a practical application or amount to significantly more than the judicial exception. See MPEP 2106.05(d)(II)(i).). Regarding claim 3, the rejection of claim 1 is incorporated herein; the following claim elements are abstract ideas: determining…based at least in part on the edge device resource availability, whether to transfer the given operation to the edge computing platform for further training of the machine learning algorithm from the cloud platform to the edge platform (The step of “determining,” based at least in part on the edge device resource availability, whether to transfer the further execution of the machine learning algorithm from the cloud platform to the edge platform, is an act that can be performed in the human mind or by a human using pen and paper, such as evaluating available resources and deciding whether to move a task from one location to another. As such, this step describes a mental process and falls within the mental process grouping of abstract ideas.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: cloud computing platform (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).);
receiving…data corresponding to edge device resource availability from the edge computing platform (The step of receiving data corresponding to edge device resource availability from the edge platform describes receiving data over a network, which is a well-understood, routine, conventional computer function. Such activity has been recognized by the courts as insignificant extra-solution activity and falls within the category of generic computer functions, i.e., “receiving or transmitting data over a network,” as set forth in MPEP 2106.05(d)(II)(i).). Regarding claim 4, the rejection of claim 1 is incorporated herein; the following claim elements are abstract ideas: determining…based at least in part on the amount of data being processed by the machine learning algorithm, whether to transfer the given operation to the edge computing platform for further training of the machine learning algorithm (The step of determining, based at least in part on the amount of data being processed by the machine learning algorithm, whether to transfer the further execution of the machine learning algorithm from the cloud platform to the edge platform, is an act that can be performed in the human mind, such as evaluating the amount of work being performed in one location and deciding whether to move the task to another location. As such, this step describes a mental process and thus falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).).
The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: obtaining…data corresponding to an amount of data being processed by the machine learning algorithm of the given operation executing on the cloud computing platform (This limitation amounts to adding insignificant extra-solution activity to a judicial exception, as discussed in MPEP 2106.05(g). Receiving data (i.e., mere data gathering in conjunction with the abstract idea) is a well-understood, routine, and conventional activity (data transmission); see MPEP 2106.05(d)(II)(i).). Regarding claim 5, the rejection of claim 1 is incorporated herein; the following claim elements are abstract ideas: determining…based at least in part on the frequency of requests for utilizing the machine learning algorithm, whether to transfer the given operation to the edge computing platform for further training of the machine learning algorithm (This is an abstract idea of a “mental process.” It involves reviewing how often requests are made and deciding whether to shift execution to a different platform. A person could mentally assess request frequency and make a decision to transfer based on that assessment. This type of evaluation can readily be performed in the human mind or with pen and paper, and thus constitutes an abstract idea of a mental process.). The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: cloud computing platform (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).);
obtaining…data corresponding to a frequency of requests for execution of the machine learning algorithm on the cloud platform (This limitation amounts to adding insignificant extra-solution activity to a judicial exception, as discussed in MPEP 2106.05(g). Receiving data (i.e., mere data gathering in conjunction with the abstract idea) is a well-understood, routine, and conventional activity (data transmission); see MPEP 2106.05(d)(II)(i).). Regarding claim 6, the rejection of claim 1 is incorporated herein; the following claim elements are abstract ideas: wherein the analyzing of the results of training the machine learning algorithm comprises computing a prediction error of the machine learning algorithm over a period of time (This is an abstract idea of a “mental process.” It involves reviewing the results, calculating the difference between predicted and actual values, and determining the error over a given period of time. A person could mentally or manually compute such an error using basic arithmetic over recorded data. This type of evaluation can be readily performed in the human mind or with pen and paper and thus constitutes an abstract idea of a mental process.). Regarding claim 7, the rejection of claim 6 is incorporated herein; the following claim elements are abstract ideas: wherein computing the prediction error of the machine learning algorithm over the period of time is performed for a testing data set and a training data set (This is an abstract idea of a “mental process.” It involves performing basic computations to compare predicted and actual results for two different data sets over a period of time. A person could compute such prediction errors using pen and paper or simple tools, and thus this limitation constitutes an abstract idea of a mental process.).
Regarding claim 8, the rejection of claim 7 is incorporated herein; the following claim elements are abstract ideas: generating a learning curve based at least in part on the computed prediction error (This is an abstract idea of a “mental process.” It involves creating a visual representation of model performance over time by plotting prediction errors, and could be performed by using pen and paper. Such activity – computing a prediction error and drawing the corresponding curve – can be readily performed mentally or with simple tools, and thus constitutes an abstract idea of a mental process.). Regarding claim 9, the rejection of claim 8 is incorporated herein; the following claim elements are abstract ideas: identifying a point on the learning curve corresponding to where the machine learning algorithm is between underfitting and overfitting the training data set (This is an abstract idea of a “mental process.” It involves reviewing a plotted graph of the learning curve and locating a specific point that represents the balance between underfitting and overfitting, and could be performed by a person entirely in the mind or using pen and paper. Such analysis and identification from a visual plot can be readily performed mentally or with simple tools, and thus constitutes an abstract idea of a mental process.); making the negative determination responsive to the identifying (This is an abstract idea of a “mental process.” It involves, after identifying a point on a plotted learning curve, using observation and judgment to decide that the result is unfavorable. Such a determination could be performed by a person entirely in the mind or using pen and paper, and thus constitutes an abstract idea of a mental process.).
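The prediction-error and learning-curve analysis recited in claims 6-9 can be sketched in a few lines of plain Python. This is an illustrative reconstruction only: the error metric (mean absolute error), the synthetic epoch/error values, and the choice of minimum test error as the point between underfitting and overfitting are all assumptions, not taken from the application.

```python
# Illustrative sketch of computing prediction error for training and testing
# data sets over time, and locating a point between underfitting and
# overfitting on the resulting learning curve. All data are synthetic.

def prediction_error(predictions, actuals):
    """Mean absolute error between predicted and actual values."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

# Synthetic learning curve: (epoch, train_error, test_error). Test error
# typically falls, bottoms out, then rises again as the model overfits
# while training error keeps shrinking.
curve = [(1, 0.40, 0.45), (2, 0.25, 0.30), (3, 0.15, 0.22),
         (4, 0.10, 0.20), (5, 0.06, 0.24), (6, 0.04, 0.29)]

# Take the "point between underfitting and overfitting" to be the epoch
# with minimum test error (one common reading, assumed here).
best_epoch, _, best_test = min(curve, key=lambda row: row[2])
print(best_epoch, best_test)  # epoch 4, test error 0.20
```

A "negative determination" in the claims' sense could then follow from observing that later epochs only increase test error, i.e., further cloud training is not warranted.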
Regarding claim 10, the rejection of claim 1 is incorporated herein; the following claim elements are abstract ideas: generating, based at least in part on the negative determination, a recommendation whether to transfer the further execution of the machine learning algorithm from the cloud platform to the edge platform, wherein the recommendation comprises a confidence score (This is an abstract idea of a “mental process.” It involves using observation and judgment to decide, based on an unfavorable determination, whether a task should be moved from one location to another, and assigning a confidence score to that recommendation. Such decision-making and scoring could be performed by a person entirely in the mind or using pen and paper, and thus constitutes an abstract idea of a mental process.). Regarding claim 11, the rejection of claim 10 is incorporated herein; the following claim elements are abstract ideas: cloud computing platform (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); wherein the confidence score is computed using a conformal prediction model (This is an abstract idea of a “mental process.” It involves performing mathematical calculations to generate a confidence score based on a statistical model. Such computations could be carried out by a person using pen and paper or simple tools, and thus constitute an abstract idea of a mental process.). Regarding claim 12, the rejection of claim 10 is incorporated herein; the following claim elements are abstract ideas: wherein the recommendation is generated using one or more machine learning classifiers (This is an abstract idea of a “mental process.” It involves classifying information and generating a corresponding recommendation, which could be performed by a person using observation and judgment entirely in the mind or with pen and paper. Thus, it constitutes an abstract idea of a mental process.).
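The conformal-prediction confidence score of claim 11 can likewise be reduced to simple arithmetic. The sketch below uses a split-conformal empirical p-value as one possible instantiation; the claims do not specify the conformal method, and the calibration residuals here are synthetic.

```python
# Simplified split-conformal sketch of a confidence score, as one possible
# reading of claim 11. The calibration residuals and the new prediction's
# residual are synthetic; the claims do not specify the conformal method.

def conformal_confidence(calibration_residuals, new_residual):
    """Confidence = fraction of calibration residuals at least as large as
    the new residual (an empirical p-value in split conformal prediction)."""
    n = len(calibration_residuals)
    at_least = sum(1 for r in calibration_residuals if r >= new_residual)
    return at_least / n

cal = [0.5, 0.8, 1.2, 0.3, 0.9, 1.5, 0.7, 1.1]  # held-out residuals
print(conformal_confidence(cal, 0.6))  # 6 of 8 residuals >= 0.6 -> 0.75
```

Counting and dividing in this way is exactly the kind of calculation the Office action describes as performable with pen and paper, whatever one makes of that characterization legally.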
Regarding claim 13, the rejection of claim 10 is incorporated herein. Further, claim 13 recites the following additional elements, which taken alone or in combination with other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: cloud computing platform (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); transmitting the recommendation to one or more user devices (This limitation amounts to adding insignificant extra-solution activity to a judicial exception, as discussed in MPEP 2106.05(g). Transmitting to a user (i.e., mere data output in conjunction with the abstract idea) is a well-understood, routine, and conventional activity (data transmission); see MPEP 2106.05(d)(II)(i).). Regarding claim 14, the following claim elements are abstract ideas: analyzing results of training the machine learning algorithm of a given operation executing on the cloud computing platform (This is an abstract idea of a “mental process.” It amounts to reviewing a model’s outputs and judging their quality (e.g., compare outputs against the correct answers, tally right/wrong, compute a simple score, and decide if a threshold is met). Because this can be done in the mind or with pen and paper, it falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).); determining, based at least in part on the analysis, whether the machine learning algorithm should be additionally trained (The limitation recites deciding, based on prior analysis, whether additional training is warranted. Such evaluation (e.g., comparing a performance score to a threshold and choosing “continue” or “stop”) can be performed in the human mind or with pen and paper. Accordingly, it falls within the mental process grouping of abstract ideas.
See MPEP 2106.04(a)(2)(III).); The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: at least one processor (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); at least one memory storing computer program instructions (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).); training a machine learning algorithm of a given operation executing on the cloud computing platform (This limitation amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f). It also confines the idea to a technological environment (“on a cloud platform”) and thus is a field-of-use limitation (see MPEP 2106.05(h)). The step of “executing a machine learning algorithm” is merely a generic data operation that amounts to receiving data, running code, and storing/returning results, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II).); transferring, based at least in part on a negative determination, the given operation to an edge computing platform for further training of the machine learning algorithm of the given operation executing on the edge computing platform (The step of “transferring the execution” is merely an instruction to apply the abstract idea and does not provide a meaningful limitation. See MPEP 2106.05(f). “Transferring the execution” is conventional workload redirection in distributed systems (for example, sending a job or control signal or redeploying a model or container from cloud to edge), which is a well-understood, routine, and conventional activity (see MPEP 2106.05(d)(II)) and merely confines the concept to a technological environment of cloud and edge (field of use, see MPEP 2106.05(h)).).
Regarding claim 15, the rejection of claim 14 is incorporated herein; the following claim elements are abstract ideas: wherein analyzing the results of training the machine learning algorithm comprises computing a prediction error of the machine learning algorithm over a period of time (This is an abstract idea of a “mental process.” It involves reviewing the results, calculating the difference between predicted and actual values, and determining the error over a given period of time. A person could mentally or manually compute such an error using basic arithmetic over recorded data. This type of evaluation can be readily performed in the human mind or with pen and paper and thus constitutes an abstract idea of a mental process.). Regarding claim 16, the rejection of claim 15 is incorporated herein; the following claim elements are abstract ideas: generating a learning curve based at least in part on the computed prediction error (This is an abstract idea of a “mental process.” It involves creating a visual representation of model performance over time by plotting prediction errors, and could be performed by using pen and paper. Such activity – computing a prediction error and drawing the corresponding curve – can be readily performed mentally or with simple tools, and thus constitutes an abstract idea of a mental process.). Regarding claim 17, the rejection of claim 16 is incorporated herein; the following claim elements are abstract ideas: identifying a point on the learning curve corresponding to where the machine learning algorithm is between underfitting and overfitting a training data set (This is an abstract idea of a “mental process.” It involves reviewing a plotted graph of the learning curve and locating a specific point that represents the balance between underfitting and overfitting, and could be performed by a person entirely in the mind or using pen and paper.
Such analysis and identification from a visual plot can be readily performed mentally or with simple tools, and thus constitutes an abstract idea of a mental process.); making the negative determination responsive to the identifying (This is an abstract idea of a “mental process.” It involves, after identifying a point on a plotted learning curve, using observation and judgment to decide that the result is unfavorable. Such a determination could be performed by a person entirely in the mind or using pen and paper, and thus constitutes an abstract idea of a mental process.). Regarding claim 18, the following claim elements are abstract ideas: analyzing results of training the machine learning algorithm of the given operation executing on the cloud computing platform (This is an abstract idea of a “mental process.” It amounts to reviewing a model’s outputs and judging their quality (e.g., comparing outputs to the correct answers, tallying right/wrong, computing a simple score, and deciding if a threshold is met). Because this can be done in the mind or with pen and paper, it falls within the mental process grouping of abstract ideas. See MPEP 2106.04(a)(2)(III).); determining, based at least in part on the analysis, whether the machine learning algorithm should be additionally trained (The limitation recites deciding, based on prior analysis, whether additional training is warranted. Such evaluation (e.g., comparing a performance score to a threshold and choosing “continue” or “stop”) can be performed in the human mind or with pen and paper. Accordingly, it falls within the mental process grouping of abstract ideas.
See MPEP 2106.04(a)(2)(III).); The following claim elements are additional elements which, taken alone or in combination with the other elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception: A computer program product (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).) cloud computing platform (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).) a non-transitory computer-readable medium (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).) machine executable instructions (This is a high-level recitation of generic computer components for performing the abstract idea. See MPEP 2106.05(f).) training a machine learning algorithm of a given operation executing on the cloud computing platform (This limitation amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f). It also confines the idea to a technological environment (“on a cloud platform”) and thus is a field-of-use limitation (see MPEP 2106.05(h)). The step of “executing a machine learning algorithm” is merely a generic data operation that amounts to receiving data, running code, and storing/returning results, which has been recognized by the courts as well-understood, routine, and conventional activity. See MPEP 2106.05(d)(II).); transferring, based at least in part on a negative determination, the given operation to an edge computing platform for further training of the machine learning algorithm of the given operation executing on the edge computing platform (The step of “transferring the execution” is merely an instruction to apply the abstract idea and does not provide a meaningful limitation. See MPEP 2106.05(f).
“Transferring the execution” is conventional workload redirection in distributed systems (for example, sending a job or control signal or redeploying a model or container from cloud to edge), which is a well-understood, routine, and conventional activity (see MPEP 2106.05(d)(II)) and merely confines the concept to a technological environment of cloud and edge (field of use, see MPEP 2106.05(h)).). Regarding claim 19, the rejection of claim 18 is incorporated herein; the following claim elements are abstract ideas: wherein, analyzing the results of training the machine learning algorithm comprises computing a prediction error of the machine learning algorithm over a period of time (This is an abstract idea of a “mental process.” It involves reviewing the results, calculating the difference between predicted and actual values, and determining the error over a given period of time. A person could mentally or manually compute such an error using basic arithmetic over recorded data. This type of evaluation can be readily performed in the human mind or with pen and paper and thus constitutes an abstract idea of a mental process.). Regarding claim 20, the rejection of claim 18 is incorporated herein; the following claim elements are abstract ideas: generating a learning curve based at least in part on the computed prediction error (This is an abstract idea of a “mental process.” It involves creating a visual representation of model performance over time by plotting prediction errors, and could be performed by using pen and paper. Such activity – computing a prediction error and drawing the corresponding curve – can be readily performed mentally or with simple tools, and thus constitutes an abstract idea of a mental process.). Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 1-4, 6, 14, 15, 18, and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sharma et al. (Pub. No.: US 20200327371 A1 (Filed: 2019)). Regarding claim 1, Sharma discloses: A method, comprising: training, by a cloud computing platform, a machine learning algorithm of a given operation executing on the cloud computing platform (Sharma, paragraph [0180] mentions “a machine learning model to be deployed to and executed on the example edge platform 406, 609 may be suitably developed and trained in the cloud 412 using a model creation component 902, model training component 904, and data storage and aggregation component 573.
Model creation and training components 902, 904 may comprise known high-level programming or model development software, or both, such as Python™ (Python Software Foundation), R, RStudio, Matlab® (The MathWorks, Inc.), TensorFlow™ (Google), and Spark™ MLlib (Apache).”); analyzing, by the cloud computing platform, results of training the machine learning algorithm of a given operation executing on the cloud platform (Sharma, paragraph [0232] mentions “Once an edge-converted ML model is deployed to the edge platform and begins operating on live sensor data, it may be desirable to periodically evaluate the accuracy of the predictions, inferences, and other outputs generated by the model and iteratively update the model as necessary.” [0233] “a closed-loop arrangement between the edge platform 406, 609 and the cloud platform 412 provides for periodic evaluation and iterative updating of ML models on the edge platform. Predictions, inferences, and other model outputs, sensor data from which the predictions and inferences were generated, and other analytics results can be transferred periodically from the edge platform 406, 609 to the cloud platform 412.”); determining, by the cloud computing platform, based at least in part on the analysis, whether the machine learning algorithm should be additionally trained (Sharma, paragraph [0020] “one or more of the first inferences produced by the edge-converted model…may be transmitted to the remote cloud network for evaluation.
At the remote cloud network, the inferences may be evaluated for accuracy using a remote version of the edge-converted machine learning model…The one or more first and second inferences produced by the edge-converted model and the remote version of the model may be compared for accuracy.” [0023] “A model update cycle may be initiated by a trigger…Analytics expressions implementing selected logic, math, statistical or other functions may be applied to a stream of inferences generated by the model on the edge platform. The analytics expressions define what constitutes an unacceptable level of drift or degradation of model accuracy and track the model output to determine if the accuracy has degraded beyond an acceptable limit. In response, the edge platform can automatically take action, such as recording raw sensor data and sending it with the corresponding inferences to the cloud for re-training or re-evaluation, or both, of the model.”) transferring, by the cloud computing platform, based at least in part on a negative determination, the given operation to an edge computing platform for further training of the machine learning algorithm of the given operation executing on the edge computing platform (Sharma paragraph [0023] mentions “A model update cycle may be initiated by a trigger, which may comprise a manual trigger, a time-based trigger, or a trigger derived from evaluating inferences generated by the model at the edge. Analytics expressions implementing selected logic, math, statistical or other functions may be applied to a stream of inferences generated by the model on the edge platform. The analytics expressions define what constitutes an unacceptable level of drift or degradation of model accuracy and track the model output to determine if the accuracy has degraded beyond an acceptable limit. 
In response… such as recording raw sensor data and sending it with the corresponding inferences to the cloud for re-training or re-evaluation, or both, of the model.” [0085] “As examples, an application executing on an example intelligent edge platform according to the invention may monitor and analyze locally and in real-time sensor data from pumps in an industrial IIoT environment. In one example, based on the real-time analysis of the data, which may include the use of machine learning models, an application may output in real-time a predictive maintenance schedule for the pumps, or may automatically take action in the local network to redirect flow around a pump to prevent costly damage due to a cavitation or other event detected or predicted. In another example, an application may monitor a wind energy management system and may output recommendations or automatically take action to alter operating parameters to maximize power generation, extend equipment life, and apply historical analysis for accurate energy forecasting. “ [0159] “the models themselves are converted and optimized to execute efficiently and rapidly on the edge platform on the streaming sensor data in real-time. This may include optimizing the model computations and entirely or partially converting the models from typical high-level cloud-based machine learning model languages…The converted and optimized models are thus able to execute very rapidly in real-time on the streaming sensor data received at the edge platform and to provide immediate outputs at rates sufficient for edge-based machine learning applications to trigger immediate actions. Model creation and training may still be accomplished in the cloud, where significant compute and storage resources are available. 
Once a model is trained, it can then be “edge-ified” as described herein and pushed to the edge for live execution.” [0233] “a closed-loop arrangement between the edge platform 406, 609 and the cloud platform 412 provides for periodic evaluation and iterative updating of ML models on the edge platform. Predictions, inferences, and other model outputs, sensor data from which the predictions and inferences were generated, and other analytics results can be transferred periodically from the edge platform 406, 609 to the cloud platform 412.” – In Sharma, the “given operation” corresponds to the model’s real-time prediction function that processes streaming sensor data to generate outputs such as predictive maintenance schedules, cavitation detection, or energy optimization recommendations, as shown in paragraph [0085]. Paragraph [0023] teaches evaluating the inferences produced by this operation at the edge to determine whether the accuracy has degraded beyond an acceptable limit, which constitutes a negative determination. Paragraph [0023] further explains that, in response to such drift or degradation, the edge platform sends raw sensor data and corresponding inferences to the cloud for re-training or re-evaluation of the model. Paragraph [0233] describes a closed-loop arrangement in which predictions, inferences, and sensor data are periodically transferred from the edge to the cloud to support periodic evaluation and iterative updating of the model by the cloud platform. Paragraph [0159] confirms that the model training is performed in the cloud, after which the cloud transfers the updated model back to the edge for execution. Thus, Sharma teaches a model-update cycle in which a negative determination at the edge triggers further training and periodic updating in the cloud, and the cloud computing platform transfers the updated operation back to the edge.).
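The drift-triggered update cycle Sharma describes in paragraphs [0023] and [0234] can be sketched in a few lines. The window size, drift limit, and error values below are hypothetical illustrations, not parameters disclosed by Sharma:

```python
# Illustrative sketch only: hypothetical window, limit, and error stream.
# An "analytics expression" (here, a windowed mean of inference errors)
# tracks model output over time and flags when accuracy degrades beyond
# an acceptable limit, triggering a model update cycle.

from statistics import mean

def drift_exceeded(inference_errors, window=5, limit=0.2):
    """Apply the analytics expression to the most recent window of
    inference errors and report whether drift exceeds the limit."""
    recent = inference_errors[-window:]
    return mean(recent) > limit

stream = [0.05, 0.06, 0.08, 0.21, 0.25, 0.30, 0.28, 0.33]
if drift_exceeded(stream):
    # In Sharma's arrangement, the edge platform would record raw sensor
    # data and send it, with the corresponding inferences, to the cloud
    # for re-training or re-evaluation of the model.
    print("trigger model update cycle")
```

The statistical characteristic here is a simple mean over a time window; Sharma's paragraph [0234] also contemplates ranges and statistical variation as the tracked characteristic.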
Regarding claim 2, Sharma discloses: The method of claim 1, further comprising: generating, by the cloud computing platform, in response to the negative determination, a request that the further training of the machine learning algorithm be performed on the edge computing platform (Sharma paragraph [0023] mentions “A model update cycle may be initiated by a trigger, which may comprise a manual trigger, a time-based trigger, or a trigger derived from evaluating inferences generated by the model at the edge… The analytics expressions define what constitutes an unacceptable level of drift or degradation of model accuracy and track the model output to determine if the accuracy has degraded beyond an acceptable limit. In response, the edge platform can automatically take action” [0159] “The converted and optimized models are thus able to execute very rapidly in real-time on the streaming sensor data received at the edge platform and to provide immediate outputs at rates sufficient…to trigger immediate actions…Once a model is trained, it can then be “edge-ified” as described herein and pushed to the edge for live execution.” [0233] “ a closed-loop arrangement between the edge platform 406, 609 and the cloud platform 412 provides for periodic evaluation and iterative updating of ML models on the edge platform.” – Sharma teaches that when the cloud platform pushes the edge-ified model to the edge platform, this action inherently generates a request or command for the edge platform to begin further execution and iterative updating of the model as part of Sharma’s closed-loop learning arrangement.) transmitting, by the cloud computing platform, the request to the edge computing platform (Sharma, paragraph [0159] mentions “Once a model is trained, it can then be “edge-ified” as described herein and pushed to the edge for live execution.” [0081] “the cloud 412 may comprise edge provisioning and orchestration 443 functionality. 
Such functionality incorporates a remote management console backed by microservices to remotely manage various hardware and software aspects of one or more edge computing platforms in communication with the cloud. Using this functionality, multiple different edge installations can be configured, deployed, managed and monitored via the remote management console.” – Sharma discloses that once a model is trained, it is “edge-ified” and “pushed to the edge for live execution,” and Sharma further discloses that the cloud includes remote provisioning and orchestration functionality to remotely configure, deploy, and manage edge computing platforms, thereby transmitting a request from the cloud computing platform to the edge computing platform.) Regarding claim 3, Sharma discloses: The method of claim 1, further comprising: receiving, by the cloud computing platform, data corresponding to edge device resource availability from the edge computing platform (Sharma, paragraph [0012] “A machine learning model is created and trained in the remote network using aggregated sensor data and deployed to the edge platform. Before being deployed, the model is edge-converted (“edge-ified”) to run optimally with the constrained resources of the edge device and with the same or better level of accuracy.” [0082] “the edge software services also may include management services and a management console with a user interface (UI). The management services may reside and run on the edge platform, in the cloud, on on-premises computing environments, or a combination of these.
The management services provide for remotely deploying, setting up, configuring, and managing edge platforms and components, including resource provisioning.” – together, these disclosures teach that the cloud computing platform manages and provisions edge platform resources and adapts models to constrained edge device resources, which necessarily requires the cloud computing platform to receive data corresponding to edge device resource availability from the edge computing platform in order to perform such remote management and resource provisioning.); determining, by the cloud computing platform, based at least in part on the edge device resource availability, whether to transfer the given operation to the edge computing platform for further training of the machine learning algorithm (Sharma, paragraph [0012] “ A machine learning model is created and trained in the remote network using aggregated sensor data and deployed to the edge platform. Before being deployed, the model is edge-converted (“edge-ified”) to run optimally with the constrained resources of the edge device and with the same or better level of accuracy. The “edge-ified” model is adapted to operate on continuous streams of sensor data in real-time and produce inferences.” [0159] “The converted and optimized models are thus able to execute very rapidly in real-time on the streaming sensor data received at the edge platform and to provide immediate outputs at rates sufficient for edge-based machine learning applications to trigger immediate actions… Once a model is trained, it can then be “edge-ified” as described herein and pushed to the edge for live execution.” – teaches that the cloud computing platform determines whether the machine learning model (algorithm) should be converted (“edge-ified”) and transferred to the edge platform based at least in part on edge device resource constraints, since the model is expressly optimized to operate within constrained edge resources prior to deployment. 
Thus, Sharma teaches determining, by the cloud computing platform and based at least in part on edge device resource availability, whether to transfer the given operation to the edge computing platform for further training of the associated machine learning algorithm. As discussed with respect to claim 1, the “given operation” corresponds to the machine learning model’s real-time inference function (e.g., predictive maintenance or event detection) performed on streaming sensor data). Regarding claim 4, Sharma discloses: The method of claim 1, further comprising: obtaining, by the cloud computing platform, data corresponding to an amount of data being processed by the machine learning algorithm of the given operation executing on the cloud computing platform (Sharma, paragraph [0097] mentions “Before further describing the data processing layer 515, a discussion regarding the communication of data between the layers is in order. The system and method described herein employs a queuing system to account for issues of latency and throughput in communicating data between the layers.
The relationship between latency and throughput across different levels or layers is very complicated as defined by Little's law.” – teaches that the system measures and accounts for throughput, which corresponds to the amount of data being processed over time, and because the queuing system operates within the cloud-based data processing layers that execute the machine learning algorithm, the cloud computing platform obtains data corresponding to the amount of data being processed by the machine learning algorithm executing on the cloud computing platform.); determining, by a cloud computing platform, based at least in part on the amount of data being processed by the machine learning algorithm, whether to transfer the given operation to the edge computing platform for further training of the machine learning algorithm (Sharma, paragraph [0087] “Before further describing the data processing layer 515, a discussion regarding the communication of data between the layers is in order. The system and method described herein employs a queuing system to account for issues of latency and throughput in communicating data between the layers…” [0012] “In another aspect, the system and method provide a closed loop arrangement for continuously evaluating the accuracy of the model on the edge-computing platform, generating an updated or modified model, and iteratively updating or replacing the model on the edge computing platform to improve accuracy.
In yet another aspect, the system and method provide for updating a model on the edge computing platform non-disruptively without interrupting the real-time processing of any data by the model” [0159] “The converted and optimized models are thus able to execute very rapidly in real-time on the streaming sensor data received at the edge platform and to provide immediate outputs at rates sufficient for edge-based machine learning applications to trigger immediate actions… Once a model is trained, it can then be “edge-ified” as described herein and pushed to the edge for live execution.” – teach that the cloud computing platform monitors throughput, which corresponds to the amount of data being processed, as part of the data processing and queuing system, and determines, within the disclosed closed-loop arrangement, whether the machine learning model (algorithm) should be converted (“edge-ified”) and transferred to the edge computing platform. Once transferred, the associated machine learning algorithm continues to be evaluated and iteratively updated. Thus, Sharma teaches determining, by the cloud computing platform and based at least in part on the amount of data being processed by the machine learning algorithm, whether to transfer the given operation to the edge computing platform for further training of the machine learning algorithm. As discussed with respect to claim 1, the “given operation” corresponds to the machine learning model’s real-time inference function (e.g., predictive maintenance or event detection) performed on streaming sensor data).
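Since the rationale above invokes Little's law for the throughput/latency behavior of the queuing system, a brief sketch may help. The function names, rates, and capacity threshold below are hypothetical and not drawn from Sharma:

```python
# Illustrative sketch only: hypothetical names, rates, and capacity.
# Little's law relates the average number of items in a system (L) to the
# arrival rate (lambda) and the average time each item spends in the
# system (W): L = lambda * W.

def items_in_system(arrival_rate, avg_time_in_system):
    """Little's law: L = lambda * W."""
    return arrival_rate * avg_time_in_system

def should_offload_to_edge(arrival_rate, avg_time_in_system, capacity=100.0):
    """Hypothetical decision: transfer the operation when the amount of
    data queued in the cloud processing layer exceeds its capacity."""
    return items_in_system(arrival_rate, avg_time_in_system) > capacity

# 500 records/s arriving, each spending 0.25 s in the processing layer:
assert items_in_system(500, 0.25) == 125.0
assert should_offload_to_edge(500, 0.25)      # 125 items queued > 100
assert not should_offload_to_edge(200, 0.25)  # 50 items queued <= 100
```

The decision function illustrates how a measured amount of data being processed could, in principle, drive a transfer determination of the kind the claim recites.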
Regarding claim 6, Sharma discloses: The method of claim 1, wherein the analyzing of the results of training the machine learning algorithm comprises computing a prediction error of the machine learning algorithm over a period of time (Sharma, paragraph [0234] “The transfer of such data and information…for example based on a predetermined period of time elapsing… The analytics expressions can be selected to effectively define what constitutes an unacceptable level of drift or degradation of accuracy for the model in response to selected input sensor data and to track the model output to determine if the accuracy has degraded or drifted beyond the acceptable limit. For example, the analytics expressions may determine a statistical characteristic of the inferences over time, such as a mean, average, statistically significant range, or statistical variation.” – teaches computing prediction error over a period of time, because determining degradation or drift in model accuracy by evaluating statistical characteristics of inferences over time necessarily involves computing deviations between predicted outputs and expected behavior, i.e., prediction error, across a temporal window.) Regarding claim 14, Sharma discloses: An apparatus, comprising: at least one processor and at least one memory storing computer program instructions which are executed by the at least one processor to implement operations that are performed by a cloud computing platform, the operations comprising: (Sharma, paragraph [0055] “A computer-readable medium may include any medium that participates in providing instructions to one or more processors for execution. Such a medium may take many forms including, but not limited to, nonvolatile, volatile, and transmission media. Non-volatile media may include, for example, flash memory, or optical or magnetic disks.
Volatile media includes static or dynamic memory, such as cache memory or RAM.“) training a machine learning algorithm of a given operation executing on the cloud computing platform (Sharma, paragraph [0180] mentions “a machine learning model to be deployed to and executed on the example edge platform 406, 609 may be suitably developed and trained in the cloud… Model creation and training components 902, 904 may comprise known high-level programming or model development software, or both, such as Python™ (Python Software Foundation), R, RStudio, Matlab® (The MathWorks, Inc.), TensorFlow™ (Google), and Spark™ MLlib (Apache).”); analyzing results of training the machine learning algorithm of the given operation executing on the cloud computing platform (Sharma, paragraph [0232] mentions “ Once an edge-converted ML model is deployed to the edge platform and begins operating on live sensor data, it may be desirable to periodically evaluate the accuracy of the predictions, inferences, and other outputs generated by the model and iteratively update the model as necessary.” [0233] “ a closed-loop arrangement between the edge platform 406, 609 and the cloud platform 412 provides for periodic evaluation and iterative updating of ML models on the edge platform…Such information may be useful in evaluating prediction accuracy or as a check of model output against required specifications.”); determining, based at least in part on the analysis, whether the machine learning algorithm should be additionally trained (Sharma, paragraph [0023] “A model update cycle may be initiated by a trigger…Analytics expressions implementing selected logic, math, statistical or other functions may be applied to a stream of inferences generated by the model on the edge platform. The analytics expressions define what constitutes an unacceptable level of drift or degradation of model accuracy and track the model output to determine if the accuracy has degraded beyond an acceptable limit. 
In response, the edge platform can automatically take action, such as recording raw sensor data and sending it with the corresponding inferences to the cloud for re-training or re-evaluation, or both, of the model.”); transferring, based at least in part on a negative determination, the given operation to an edge computing platform for further training of the machine learning algorithm of the given operation executing on the edge computing platform (Sharma paragraph [0023] mentions “A model update cycle may be initiated by a trigger, which may comprise a manual trigger, a time-based trigger, or a trigger derived from evaluating inferences generated by the model at the edge. Analytics expressions implementing selected logic, math, statistical or other functions may be applied to a stream of inferences generated by the model on the edge platform. The analytics expressions define what constitutes an unacceptable level of drift or degradation of model accuracy and track the model output to determine if the accuracy has degraded beyond an acceptable limit. In response… such as recording raw sensor data and sending it with the corresponding inferences to the cloud for re-training or re-evaluation, or both, of the model.” [0085] “As examples, an application executing on an example intelligent edge platform according to the invention may monitor and analyze locally and in real-time sensor data from pumps in an industrial IIoT environment. In one example, based on the real-time analysis of the data, which may include the use of machine learning models, an application may output in real-time a predictive maintenance schedule for the pumps, or may automatically take action in the local network to redirect flow around a pump to prevent costly damage due to a cavitation or other event detected or predicted. 
In another example, an application may monitor a wind energy management system and may output recommendations or automatically take action to alter operating parameters to maximize power generation, extend equipment life, and apply historical analysis for accurate energy forecasting.” [0159] “the models themselves are converted and optimized to execute efficiently and rapidly on the edge platform on the streaming sensor data in real-time. This may include optimizing the model computations and entirely or partially converting the models from typical high-level cloud-based machine learning model languages…The converted and optimized models are thus able to execute very rapidly in real-time on the streaming sensor data received at the edge platform and to provide immediate outputs at rates sufficient for edge-based machine learning applications to trigger immediate actions. Model creation and training may still be accomplished in the cloud, where significant compute and storage resources are available. Once a model is trained, it can then be “edge-ified” as described herein and pushed to the edge for live execution.” [0233] “a closed-loop arrangement between the edge platform 406, 609 and the cloud platform 412 provides for periodic evaluation and iterative updating of ML models on the edge platform. Predictions, inferences, and other model outputs, sensor data from which the predictions and inferences were generated, and other analytics results can be transferred periodically from the edge platform 406, 609 to the cloud platform 412.” – In Sharma, the “given operation” corresponds to the model’s real-time prediction function that processes streaming sensor data to generate outputs such as predictive maintenance schedules, cavitation detection, or energy optimization recommendations, as shown in paragraph [0085].
Paragraph [0023] teaches evaluating the inferences produced by this operation at the edge to determine whether the accuracy has degraded beyond an acceptable limit, which constitutes a negative determination. Paragraph [0023] further explains that, in response to such drift or degradation, the edge platform sends raw sensor data and corresponding inferences to the cloud for re-training or re-evaluation of the model. Paragraph [0233] describes a closed-loop arrangement in which predictions, inferences, and sensor data are periodically transferred from the edge to the cloud to support periodic evaluation and iterative updating of the model by the cloud platform. Paragraph [0159] confirms that the model training is performed in the cloud, after which the cloud transfers the updated model back to the edge for execution. Thus, Sharma teaches a model-update cycle in which a negative determination at the edge triggers further training and periodic updating in the cloud, and the cloud computing platform transfers the updated operation back to the edge.). Regarding claim 15, Sharma discloses: The apparatus of claim 14, wherein, analyzing the results of training the machine learning algorithm comprises computing a prediction error of the machine learning algorithm over a period of time (Sharma, paragraph [0234] “The transfer of such data and information…for example based on a predetermined period of time elapsing… The analytics expressions can be selected to effectively define what constitutes an unacceptable level of drift or degradation of accuracy for the model in response to selected input sensor data and to track the model output to determine if the accuracy has degraded or drifted beyond the acceptable limit.
For example, the analytics expressions may determine a statistical characteristic of the inferences over time, such as a mean, average, statistically significant range, or statistical variation.” – tracking degradation of accuracy and determining statistical characteristics of inferences over time constitutes computing a prediction error of the machine learning algorithm over a period of time under the broadest reasonable interpretation.) Regarding claim 18, Sharma discloses: A computer program product stored on a non-transitory computer-readable medium and comprising machine executable instructions, the machine executable instructions, when executed by at least one processing device, cause the at least one processing device to implement operations that are performed by a cloud computing platform, the operations comprising (Sharma, paragraph [0055] “A computer-implemented or computer-executable version or computer program product incorporating the invention or aspects thereof may be embodied using, stored on, or associated with computer-readable medium. A computer-readable medium may include any medium that participates in providing instructions to one or more processors for execution… Non-volatile media may include, for example, flash memory, or optical or magnetic disks. 
Volatile media includes static or dynamic memory, such as cache memory or RAM.”): training a machine learning algorithm of a given operation executing on the cloud computing platform (Sharma, paragraph [0180] mentions “a machine learning model to be deployed to and executed on the example edge platform 406, 609 may be suitably developed and trained in the cloud… Model creation and training components 902, 904 may comprise known high-level programming or model development software, or both, such as Python™ (Python Software Foundation), R, RStudio, Matlab® (The MathWorks, Inc.), TensorFlow™ (Google), and Spark™ MLlib (Apache).”); analyzing results of training the machine learning algorithm of the given operation executing on the cloud computing platform (Sharma, paragraph [0232] mentions “Once an edge-converted ML model is deployed to the edge platform and begins operating on live sensor data, it may be desirable to periodically evaluate the accuracy of the predictions, inferences, and other outputs generated by the model and iteratively update the model as necessary.” [0234] “The predictions, inferences, and any other data may be transferred to the cloud over the Internet or via another suitable network or other connection.” [0235] “On the cloud platform 412, the transferred predictions, inferences, data and analytics results, and other information can be aggregated in cloud storage 573.” – evaluating accuracy of predictions and transferring those results to the cloud platform for aggregation constitutes analyzing results of training the machine learning algorithm by the cloud computing platform under BRI.); determining, based at least in part on the analysis, whether the machine learning algorithm should be additionally trained (Sharma, paragraph [0023] “A model update cycle may be initiated by a trigger…Analytics expressions implementing selected logic, math, statistical or other functions may be applied to a stream of inferences generated by the model on the edge 
platform. The analytics expressions define what constitutes an unacceptable level of drift or degradation of model accuracy and track the model output to determine if the accuracy has degraded beyond an acceptable limit. In response, the edge platform can automatically take action, such as recording raw sensor data and sending it with the corresponding inferences to the cloud for re-training or re-evaluation, or both, of the model.” – determining whether model accuracy has degraded beyond an acceptable limit and initiating re-training or re-evaluation constitutes determining whether the machine learning algorithm should be additionally trained under BRI.); transferring, based at least in part on a negative determination, the given operation to an edge computing platform for further training of the machine learning algorithm of the given operation executing on the edge computing platform (Sharma paragraph [0023] mentions “A model update cycle may be initiated by a trigger, which may comprise a manual trigger, a time-based trigger, or a trigger derived from evaluating inferences generated by the model at the edge. Analytics expressions implementing selected logic, math, statistical or other functions may be applied to a stream of inferences generated by the model on the edge platform. The analytics expressions define what constitutes an unacceptable level of drift or degradation of model accuracy and track the model output to determine if the accuracy has degraded beyond an acceptable limit. In response… such as recording raw sensor data and sending it with the corresponding inferences to the cloud for re-training or re-evaluation, or both, of the model.” [0085] “As examples, an application executing on an example intelligent edge platform according to the invention may monitor and analyze locally and in real-time sensor data from pumps in an industrial IIoT environment. 
In one example, based on the real-time analysis of the data, which may include the use of machine learning models, an application may output in real-time a predictive maintenance schedule for the pumps, or may automatically take action in the local network to redirect flow around a pump to prevent costly damage due to a cavitation or other event detected or predicted. In another example, an application may monitor a wind energy management system and may output recommendations or automatically take action to alter operating parameters to maximize power generation, extend equipment life, and apply historical analysis for accurate energy forecasting. “ [0159] “the models themselves are converted and optimized to execute efficiently and rapidly on the edge platform on the streaming sensor data in real-time. This may include optimizing the model computations and entirely or partially converting the models from typical high-level cloud-based machine learning model languages…The converted and optimized models are thus able to execute very rapidly in real-time on the streaming sensor data received at the edge platform and to provide immediate outputs at rates sufficient for edge-based machine learning applications to trigger immediate actions. Model creation and training may still be accomplished in the cloud, where significant compute and storage resources are available. Once a model is trained, it can then be “edge-ified” as described herein and pushed to the edge for live execution.” [0233] a closed-loop arrangement between the edge platform 406, 609 and the cloud platform 412 provides for periodic evaluation and iterative updating of ML models on the edge platform. Predictions, inferences, and other model outputs, sensor data from which the predictions and inferences were generated, and other analytics results can be transferred periodically from the edge platform 406, 609 to the cloud platform 412. 
” – In Sharma, the “given operation” corresponds to the model’s real-time prediction function that processes streaming sensor data to generate outputs such as predictive maintenance schedules, cavitation detection, or energy optimization recommendations as shown in paragraph [0085]. Paragraph [0023] teaches evaluating the inferences produced by this operation at the edge to determine whether the accuracy has degraded beyond an acceptable limit, which constitutes a negative determination. Paragraph [0023] further explains that, in response to such drift or degradation, the edge platform sends raw sensor data and corresponding inferences to the cloud for re-training or re-evaluation of the model. Paragraph [0233] describes a closed-loop arrangement in which predictions, inferences, and sensor data are periodically transferred from the edge to the cloud to support periodic evaluation and iterative updating of the model by the cloud platform. Paragraph [0159] confirms that the model training is performed in the cloud, after which the cloud transfers the updated model back to the edge for execution. Thus, Sharma teaches a model-update cycle in which a negative determination at the edge triggers further training and periodic updating in the cloud, and the cloud computing platform transfers the updated operation back to the edge.). 
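For illustration only, the drift-evaluation logic quoted above from Sharma’s paragraphs [0023] and [0234] can be sketched in Python. This is a minimal sketch under stated assumptions: the window size, the absolute-error metric, the threshold value, and all function names are illustrative and do not appear in Sharma.

```python
import statistics

# Hypothetical "analytics expression": track prediction error over a recent
# window of inferences and decide whether accuracy has drifted beyond the
# acceptable limit (all parameters here are assumptions, not Sharma's).
def accuracy_degraded(inferences, actuals, window=50, error_limit=0.2):
    recent = list(zip(inferences, actuals))[-window:]
    errors = [abs(pred - actual) for pred, actual in recent]
    mean_error = statistics.mean(errors)  # statistical characteristic over time
    return mean_error > error_limit       # True -> accuracy beyond acceptable limit

# Edge-side check: a negative determination (accuracy degraded) triggers
# sending raw sensor data and inferences to the cloud for re-training.
def model_update_trigger(inferences, actuals):
    if accuracy_degraded(inferences, actuals):
        return "send_to_cloud_for_retraining"
    return "continue_edge_execution"
```

Under this sketch, the closed-loop cycle reduces to a periodic threshold test on a running error statistic; Sharma leaves the choice of statistic (mean, range, variation) open.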
Regarding claim 19, Sharma discloses: The computer program product of claim 18, wherein analyzing the results of training the machine learning algorithm comprises computing a prediction error of the machine learning algorithm over a period of time (Sharma, paragraph [0234] “The transfer of such data and information…for example based on a predetermined period of time elapsing… The analytics expressions can be selected to effectively define what constitutes an unacceptable level of drift or degradation of accuracy for the model in response to selected input sensor data and to track the model output to determine if the accuracy has degraded or drifted beyond the acceptable limit. For example, the analytics expressions may determine a statistical characteristic of the inferences over time, such as a mean, average, statistically significant range, or statistical variation.”) Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 
Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (Pub. No.: US 20200327371 A1 (Filed: 2019)) in view of Dirac et al. (Pub. No.: US 20150379424 A1 (Filed: 2014)). Regarding claim 5, Sharma, as outlined above, teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1, mutatis mutandis. However, Sharma does not teach but Sharma in view of Dirac teaches the limitations: obtaining, by the cloud computing platform, data corresponding to a frequency of requests for execution of the machine learning algorithm on the cloud computing platform (Dirac, paragraph [0066] “a client 164 of the MLS may submit a model execution request 812 to the MLS control plane 180 via a programmatic interface 861…In response the MLS may generate a plan for model execution… For example, a client may indicate via a parameter of the model execution/creation request that up to 100 prediction requests per day are expected on data sets of 1 million records each, and the servers selected for the model may be chosen to handle the specified request rate.” – specifying an expected number of prediction requests per day constitutes data corresponding to a frequency of requests for execution of the machine learning algorithm under BRI.); determining, by the cloud computing platform, based at least in part on the frequency of requests for utilizing the machine learning algorithm of the given operation (Dirac, paragraph [0066] “a client 164 of the MLS may submit a model execution request 812 to the MLS control plane 180 via a programmatic interface 861…In response the MLS may generate a plan for model execution… For example, a client may indicate via a parameter of the model execution/creation request that up to 100 prediction requests per day are expected on data sets of 1 million records each, and the servers selected for the model may be chosen to handle the specified request rate.”), whether to transfer the 
given operation to the edge computing platform for further training of the machine learning algorithm (Sharma, paragraph [0159] “The converted and optimized models are thus able to execute very rapidly in real-time on the streaming sensor data received at the edge platform and to provide immediate outputs at rates sufficient for edge-based machine learning applications to trigger immediate actions… Once a model is trained, it can then be “edge-ified” as described herein and pushed to the edge for live execution.” – determining execution planning based on request frequency and transferring the machine learning model (algorithm) to the edge computing platform for execution constitutes determining, based at least in part on the frequency of requests, whether to transfer the given operation under the broadest reasonable interpretation. Once transferred, the associated machine learning algorithm continues to be evaluated and iteratively updated in the closed-loop arrangement disclosed in Sharma, thereby corresponding to further training of the machine learning algorithm. As discussed with respect to claim 1, the “given operation” corresponds to the machine learning model’s real-time inference function (e.g., predictive maintenance or event detection) performed on streaming sensor data). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, having a combination of Sharma and Dirac before them, to incorporate Dirac’s request-rate parameter into Sharma’s cloud-to-edge migration system to trigger transfer of a cloud-trained, edge-converted model to the edge platform when request volume exceeds a threshold. One would have been motivated to make such a combination to meet latency and throughput targets while reducing backhaul and compute cost. 
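The request-frequency placement decision discussed above can be sketched as follows. This is an illustrative assumption-laden sketch: Dirac discloses only a client-specified request-rate parameter (e.g., 100 prediction requests per day) and a resulting execution plan; the threshold, function names, and return structure below are hypothetical.

```python
# Hypothetical decision rule: transfer the operation to the edge when the
# expected request rate exceeds a threshold (threshold value is assumed,
# loosely echoing Dirac's "100 prediction requests per day" example).
def should_transfer_to_edge(requests_per_day, threshold=100):
    return requests_per_day > threshold

# Cloud-side planning step: choose a placement for the model based on the
# client-specified expected request rate (cf. Dirac [0066]).
def plan_execution(requests_per_day):
    if should_transfer_to_edge(requests_per_day):
        return {"placement": "edge", "reason": "high request frequency"}
    return {"placement": "cloud", "reason": "request rate within cloud capacity"}
```

In the combined system the examiner describes, the output of such a plan would then feed Sharma’s edge-conversion and deployment step.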
Regarding claim 7, Sharma, as outlined above, teaches all the elements of claim 6, therefore is rejected for the same reasons as those presented for claim 6, mutatis mutandis. However, Sharma does not teach, but Sharma in view of Dirac teaches the limitation: wherein computing the prediction error of the machine learning algorithm over the period of time (Sharma, paragraph [0234] “The transfer of such data and information to the cloud… for example based on a predetermined period of time elapsing,…The analytics expressions can be selected to effectively define what constitutes an unacceptable level of drift or degradation of accuracy for the model in response to selected input sensor data and to track the model output to determine if the accuracy has degraded or drifted beyond the acceptable limit. For example, the analytics expressions may determine a statistical characteristic of the inferences over time, such as a mean, average, statistically significant range, or statistical variation.”) is performed for a testing data set and a training data set (Dirac, paragraph [0077] “the MLS control plane may comprise a set of monitoring agents that collect performance and other metrics from the resources used for the various phases of machine learning operations (element 1054)…quantitative measures of model predictive effectiveness such as the area under receiver operating characteristic (ROC) curves for various classifiers may also be collected…some of the information regarding quality may be deduced or observed implicitly by the MLS instead of being obtained via explicit client feedback, e.g., by keeping track of the set of parameters that are changed during training iterations before a model is finally used for a test data set.” – Collecting predictive effectiveness metrics during training iterations and subsequently evaluating the trained model on a test data set constitutes computing prediction error for both a training data set and a testing data set under BRI.). 
Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Sharma and Dirac before them, to compute the prediction error of a machine learning model over a period of time for both a training dataset and a testing dataset. One would have been motivated to combine these teachings to ensure comprehensive model validation, both during training and after training, by computing prediction error on both the training and test data, thereby improving model robustness and reliability, with a reasonable expectation of success. Claims 10, 11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (Pub. No.: US 20200327371 A1 (Filed: 2019)) in view of Shafer et al. (NPL: “A Tutorial on Conformal Prediction,” (Published: 2008)). Regarding claim 10, Sharma, as outlined above, teaches all the elements of claim 1, therefore is rejected for the same reasons as those presented for claim 1, mutatis mutandis. However, Sharma does not teach, but Sharma in view of Shafer teaches the limitation: generating, by the cloud computing platform, based at least in part on the negative determination, a recommendation whether to transfer the given operation to the edge computing platform for further training of the machine learning algorithm on the edge computing platform (Sharma, paragraph [0023] mentions “A model update cycle may be initiated by a trigger, which may comprise a manual trigger, a time-based trigger, or a trigger derived from evaluating inferences generated by the model at the edge. Analytics expressions implementing selected logic, math, statistical or other functions may be applied to a stream of inferences generated by the model on the edge platform. 
The analytics expressions define what constitutes an unacceptable level of drift or degradation of model accuracy and track the model output to determine if the accuracy has degraded beyond an acceptable limit.” [0085] “As examples, an application executing on an example intelligent edge platform according to the invention may monitor and analyze locally and in real-time sensor data…based on the real-time analysis of the data, which may include the use of machine learning models, an application may output in real-time a predictive maintenance schedule for the pumps, or may automatically take action in the local network…In another example, an application may monitor a wind energy management system and may output recommendations or automatically take action to alter operating parameters” [0159] “the models themselves are converted and optimized to execute efficiently and rapidly on the edge platform on the streaming sensor data in real-time. This may include optimizing the model computations and entirely or partially converting the models from typical high-level cloud-based machine learning model languages…The converted and optimized models are thus able to execute very rapidly in real-time on the streaming sensor data received at the edge platform and to provide immediate outputs at rates sufficient for edge-based machine learning applications to trigger immediate actions. Model creation and training may still be accomplished in the cloud, where significant compute and storage resources are available. Once a model is trained, it can then be “edge-ified” as described herein and pushed to the edge for live execution.”, wherein the recommendation comprises a confidence score (confidence level 1 - ε ) (Shafer, Abstract “Conformal prediction uses past experience to determine precise levels of confidence in new predictions. 
Page 2, paragraph 2 mentions “Conformal prediction can be used with any method of point prediction for classification or regression, including support vector machines, decision trees, boosting, neural networks, and Bayesian prediction. Starting from the method for point prediction, we construct a nonconformity measure, which measures how unusual an example looks relative to previous examples, and the conformal algorithm turns this nonconformity measure into prediction regions. Given a conformity measure, the conformal algorithm produces a prediction region Γԑ for every probability of error ԑ. The region Γԑ is a (1-ԑ)-prediction region; it contains y with probability at least 1-ԑ….the corresponding value of 1-ԑ is the confidence we assert in the predicted label.” – determining unacceptable model accuracy and generating a recommendation as to whether the machine learning-enabled workload (i.e., a given operation), including the trained machine learning model and its associated logic, should be transferred to the edge computing platform corresponds to generating a recommendation whether to transfer the given operation under the broadest reasonable interpretation. Further, Shafer teaches associating such a recommendation with a confidence score (1-ԑ), thereby meeting the limitation that the recommendation comprises a confidence score. As discussed with respect to claim 1, the “given operation” corresponds to the machine learning model’s real-time inference function (e.g., predictive maintenance or event detection) performed on streaming sensor data.). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Sharma and Shafer before them, to modify Sharma’s cloud-to-edge migration recommendation to include a confidence level (1-ԑ), which directly corresponds to a numerical confidence score for the model’s prediction, where ԑ is the significance threshold (e.g., ԑ = 0.05 → confidence score = 95%). 
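The conformal-confidence idea quoted from Shafer can be sketched in Python. This is a minimal illustration under assumptions: the calibration scores, the migration-decision wrapper, and all function names are hypothetical; only the p-value construction and the 1-ԑ confidence follow Shafer’s description.

```python
# Conformal p-value per Shafer's tutorial: the fraction of calibration
# nonconformity scores at least as large as the new example's score
# (counting the new example itself).
def conformal_p_value(calibration_scores, new_score):
    n = len(calibration_scores) + 1
    greater_or_equal = sum(1 for s in calibration_scores if s >= new_score) + 1
    return greater_or_equal / n

# Hypothetical wrapper: attach the confidence score (1 - epsilon) to a
# go/no-go migration recommendation; recommend transfer only when the new
# example conforms at the chosen significance level epsilon.
def migration_recommendation(calibration_scores, new_score, epsilon=0.05):
    p = conformal_p_value(calibration_scores, new_score)
    conforms = p > epsilon  # inside the (1 - epsilon) prediction region
    return {"recommend_transfer": conforms, "confidence": 1 - epsilon}
```

With ԑ = 0.05 the attached confidence is 0.95, matching the “ԑ = 0.05 → confidence score = 95%” correspondence drawn above.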
Thus, attaching such a score to the predicted outcome is equivalent to providing the claimed confidence score. One would have been motivated to do so in order to obtain calibrated, model-agnostic uncertainty for automated go/no-go migration policies (e.g., migrate only if confidence ≥ τ), thereby improving decision reliability with a reasonable expectation of success. Regarding claim 11, Sharma in view of Shafer, as outlined above, teaches all the elements of claim 10, therefore is rejected for the same reasons as those presented for claim 10, mutatis mutandis. Shafer further teaches: wherein the confidence score is computed using a conformal prediction model (Shafer, pages 1-2, Introduction mentions “In machine learning, these questions are usually answered in a fairly rough way from past experience. We expect new predictions to fare about as well as past predictions. Conformal prediction uses past experience to determine precise levels of confidence in predictions. Given a method for prediction ŷ, conformal prediction produces a 95% prediction region – a set Γ0.05 that contains y with probability at least 95%...Conformal prediction can be used with any method of point prediction for classification or regression, including support-vector machines, decision trees, boosting, neural networks, and Bayesian prediction.”) Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Sharma and Shafer before them, to compute the confidence score via conformal prediction – i.e., take 1-ԑ derived from nonconformity-based p-values – as a calibrated, model-agnostic measure of reliability for the migration recommendation. Regarding claim 13, Sharma in view of Shafer, as outlined above, teaches all the elements of claim 10, therefore is rejected for the same reasons as those presented for claim 10, mutatis mutandis. 
Sharma further teaches: further comprising transmitting, by the cloud computing platform, the recommendation to one or more user devices (Sharma, paragraph [0230] “Edge-converted models deployed to the example edge platform 406… The machine learning platform can be accessed via a suitable user interface (UI)…The edge-converted models execute efficiently on the CEP engine to generate predictions and inferences that can be used by edge applications alone or combined with other inferences, analytics results, and intelligence generated by other models, applications, analytics expressions, and others in real-time from live streaming sensor or other data… They can also be used to determine whether to take actions in the local network with respect to control systems, machines, sensors and devices 523, or the like, or to provide information, or a combination, alarms, warnings, and others, for example to a management system user interface 908 of the local network.” – providing information, alarms, and warnings to a management system user interface accessed via networked user devices constitutes transmitting the recommendation from the cloud computing platform to one or more user devices under BRI.) Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (Pub. No.: US 20200327371 A1 (Filed: 2019)) in view of Dirac et al. (Pub. No.: US 20150379424 A1 (Filed: 2014)) further in view of Shafer et al. (NPL: “A Tutorial on Conformal Prediction,” (Published: 2008)). Regarding claim 12, Sharma in view of Shafer, as outlined above, teaches all the elements of claim 10, therefore is rejected for the same reasons as those presented for claim 10, mutatis mutandis. 
However, Sharma in view of Shafer does not teach but Sharma in view of Shafer further in view of Dirac teaches the limitation: wherein the recommendation is generated using one or more machine learning classifiers (Dirac, paragraph [0077] mentions “In some embodiments, quantitative measures of model predictive effectiveness such as the area under receiver operating characteristic (ROC) curves for various classifiers may also be collected…” [0066] “In response the MLS may generate a plan for model execution and select the appropriate resources to implement the plan.” - recommendation). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Sharma, Dirac, and Shafer before them, to generate Sharma’s cloud-to-edge recommendation using one or more machine-learning classifiers as in Dirac and to attach a confidence level (1-ԑ) from conformal prediction per Shafer, thereby driving the migration decision with classifier outputs and calibrated confidence – a routine integration yielding predictable improvements in automated placement. Claims 16, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (Pub. No.: US 20200327371 A1 (Filed: 2019)) in view of Brownlee (NPL: “How to use Learning Curves to Diagnose Machine Learning Model Performance,” (Published: 2019)). Regarding claim 16, Sharma, as outlined above, teaches all the elements of claim 15, therefore is rejected for the same reasons as those presented for claim 15, mutatis mutandis. 
However, Sharma does not teach but Sharma in view of Brownlee teaches the limitation: wherein the operations performed by the cloud computing platform further comprise generating a learning curve based at least in part on the computed prediction error (Brownlee teaches a learning curve as a plot of model performance over time and says a model is evaluated on the training and hold-out validation set after each update, with “plots of the measured performance” created as learning curves. It also explains that the metric is commonly a minimized score such as “loss or error,” and that dual learning curves are typically generated for train and validation – i.e., the curve is based (at least in part) on the computed prediction error.) Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Sharma and Brownlee before them, to generate a learning curve based on the computed prediction error (loss over epochs) for the cloud-trained model within Sharma’s cloud-to-edge pipeline. One would have been motivated to do so to objectively determine when training is sufficient (detect under/overfitting), choose a stopping point that triggers edge deployment, and avoid pushing undertrained/overfit models to the edge, thereby reducing cloud compute cost and improving edge accuracy and responsiveness, with predictable results. Regarding claim 17, Sharma in view of Brownlee, as outlined above, teaches all the elements of claim 16, therefore is rejected for the same reasons as those presented for claim 16, mutatis mutandis. Sharma in view of Brownlee further teaches: identifying a point on the learning curve corresponding to where the machine learning algorithm is between underfitting and overfitting a training data set; and making the negative determination responsive to the identifying. 
(Brownlee states “Learning curves of model performance on the train and validation datasets can be used to diagnose an underfit, overfit, or well-fit model.” Brownlee further discloses “A good fit is the goal of the learning algorithm and exists between an overfit and underfit model.” Brownlee explains how that point is identified: “A good fit is identified by a training and validation loss that decreases to a point of stability with a minimal gap between the two final loss values.” Brownlee additionally states “The inflection point in validation loss may be the point at which training could be halted as experience after that point shows the dynamics of overfitting.” – Brownlee expressly teaches analyzing training and validation learning curves to identify a “good fit” point that exists between underfitting and overfitting, where losses stabilize and the generalization gap is minimal. Identifying this stabilization point corresponds to identifying where the machine learning algorithm is between underfitting and overfitting. Further, Brownlee teaches that training may be halted at the identified point, which under the broadest reasonable interpretation constitutes making a negative determination that additional training is not required responsive to identifying that point on the learning curve.). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Sharma and Brownlee before them, to generate a learning curve based on the computed prediction error (loss over epochs) for the cloud-trained model within Sharma’s cloud-to-edge pipeline. One would have been motivated to do so to objectively determine when training is sufficient (detect under/overfitting), choose a stopping point that triggers edge deployment, and avoid pushing undertrained/overfit models to the edge, thereby reducing cloud compute cost and improving accuracy and responsiveness, with predictable results. 
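The learning-curve diagnosis Brownlee describes can be sketched as follows. This sketch makes assumptions beyond the reference: Brownlee teaches the qualitative criteria (validation-loss minimum, small train/validation gap), while the concrete stopping rule, gap threshold, and function names here are hypothetical.

```python
# Index of the validation-loss minimum: the point on the learning curve
# between underfitting (loss still falling) and overfitting (loss rising).
def good_fit_point(val_losses):
    return min(range(len(val_losses)), key=lambda i: val_losses[i])

# Hypothetical stopping rule: the negative determination (no further
# training needed) is made once validation loss has passed its minimum
# and the train/validation gap at that point is small.
def needs_more_training(train_losses, val_losses, gap_limit=0.1):
    stop = good_fit_point(val_losses)
    overfitting_observed = stop < len(val_losses) - 1  # loss rose after the minimum
    small_gap = abs(train_losses[stop] - val_losses[stop]) <= gap_limit
    return not (overfitting_observed and small_gap)
```

In the combination the examiner proposes, a `False` result from such a check would be the trigger for deploying the model to the edge rather than continuing cloud training.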
Regarding claim 20, Sharma, as outlined above, teaches all the elements of claim 19, therefore is rejected for the same reasons as those presented for claim 19, mutatis mutandis. However, Sharma does not teach but Sharma in view of Brownlee teaches the limitation: wherein the operations performed by the cloud computing platform further comprise generating a learning curve based at least in part on the computed prediction error. (Brownlee teaches a learning curve as a plot of model performance over time and says a model is evaluated on the training and hold-out validation set after each update, with “plots of the measured performance” created as learning curves. It also explains that the metric is commonly a minimized score such as “loss or error,” and that dual learning curves are typically generated for train and validation – i.e., the curve is based (at least in part) on the computed prediction error.) Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Sharma and Brownlee before them, to generate a learning curve based on the computed prediction error (loss over epochs) for the cloud-trained model within Sharma’s cloud-to-edge pipeline. One would have been motivated to do so to objectively determine when training is sufficient (detect under/overfitting), choose a stopping point that triggers edge deployment, and avoid pushing undertrained/overfit models to the edge, thereby reducing cloud compute cost and improving edge accuracy and responsiveness, with predictable results. Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (Pub. No.: US 20200327371 A1 (Filed: 2019)) in view of Dirac et al. (Pub. No.: US 20150379424 A1 (Filed: 2014)) further in view of Brownlee (NPL: “How to use Learning Curves to Diagnose Machine Learning Model Performance,” (Published: 2019)). 
Regarding claim 8, Sharma in view of Dirac, as outlined above, teaches all the elements of claim 7, therefore is rejected for the same reasons as those presented for claim 7, mutatis mutandis. However, Sharma in view of Dirac does not teach but Sharma in view of Dirac further in view of Brownlee teaches the limitation: generating a learning curve based at least in part on the computed prediction error (Brownlee teaches a learning curve as a plot of model performance over time and says a model is evaluated on the training and a hold-out validation set after each update, with “plots of measured performance” created as learning curves. It also explains that the metric is commonly a minimized score such as “loss or error,” and that dual learning curves are typically generated for train and validation – i.e., the curve is based (at least in part) on the computed prediction error.). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Sharma, Dirac, and Brownlee before them, to generate a learning curve based on the computed prediction error in Sharma’s cloud-to-edge system. One would have been motivated to do so to visualize training versus validation error, detect underfitting or overfitting, and identify an appropriate point to trigger migration decisions, thereby improving efficiency and accuracy with predictable results. Regarding claim 9, Sharma in view of Dirac further in view of Brownlee, as outlined above, teaches all the elements of claim 8, therefore is rejected for the same reasons as those presented for claim 8, mutatis mutandis. 
Sharma in view of Dirac further in view of Brownlee further teaches: identifying a point on the learning curve corresponding to where the machine learning algorithm is between underfitting and overfitting the training data set; and making the negative determination responsive to the identifying (Brownlee teaches using dual train/validation learning curves to diagnose underfit, overfit, and a “good fit.” Overfitting is shown when validation loss decreases and then increases, and the “inflection point in validation loss may be the point at which training could be halted,” while a “good fit” – i.e., between under- and overfitting – is when both losses decrease to stability with a small gap. Thus, identifying the validation-loss minimum/stability point provides the basis to decide no further training is needed (negative determination).). Accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date, having Sharma, Dirac, and Brownlee before them, to identify a point on the learning curve where the model lies between underfitting and overfitting, so that deployment decisions in Sharma’s cloud-to-edge system could be made at a stage of balanced generalization. One would have been motivated to do so to avoid transferring weak or overfit models to the edge, thereby improving accuracy and reducing wasted compute, with predictable results. Conclusion THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
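The mapping above turns on locating the validation-loss stability point and then making the negative determination. A minimal sketch, under assumed names and toy data (not code from any cited reference), of that diagnostic: validation loss decreases, stabilizes, and then rises at the onset of overfitting; the epoch where it stopped improving is the point between under- and overfitting, and finding it supports the determination that no additional training is required.

```python
# Illustrative sketch of identifying the good-fit point on a validation
# learning curve and deriving the "negative determination" from it.
# Function name, `patience` parameter, and loss values are assumptions.

def find_good_fit_epoch(val_curve, patience=2):
    """Return the epoch at which validation loss stops improving,
    i.e. the stabilization point between under- and overfitting."""
    best_epoch, best_loss = 0, float("inf")
    for epoch, loss in enumerate(val_curve):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss
        elif epoch - best_epoch >= patience:
            break  # loss has failed to improve for `patience` epochs
    return best_epoch

# Toy validation losses: decrease, stabilize near epoch 4, then rise.
val_curve = [0.95, 0.70, 0.50, 0.42, 0.40, 0.41, 0.43]
stop_epoch = find_good_fit_epoch(val_curve)
# If the best epoch is not the last one, further training is not required.
additional_training_needed = stop_epoch == len(val_curve) - 1
```

In this sketch `additional_training_needed` comes out `False`, i.e. the negative determination that the Office reads onto halting training at the identified point.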
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daravanh Phakousonh whose telephone number is (571)272-6324. The examiner can normally be reached Mon - Thurs 7 AM - 5 PM, Every other Friday 7 AM - 4PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li B Zhen can be reached at 571-272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Daravanh Phakousonh/Examiner, Art Unit 2121 /Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Oct 19, 2022
Application Filed
Aug 19, 2025
Non-Final Rejection — §101, §102, §103
Dec 04, 2025
Response Filed
Feb 19, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572821
ACCURACY PRIOR AND DIVERSITY PRIOR BASED FUTURE PREDICTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.

Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
99%
With Interview (+100.0%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
