Prosecution Insights
Last updated: April 19, 2026
Application No. 18/306,138

PAUSE, RESUME, AND REPLAY FUNCTIONS FOR PIPELINE EXECUTION IN AN ORCHESTRATION FRAMEWORK

Non-Final OA: §101, §103, §112
Filed: Apr 24, 2023
Examiner: XU, ZUJIA
Art Unit: 2195
Tech Center: 2100 (Computer Architecture & Software)
Assignee: GM Cruise Holdings LLC
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 6m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allowance Rate: 68% (above average; 114 granted / 169 resolved; +12.5% vs TC avg)
Interview Lift: +81.5% across resolved cases with interview
Typical Timeline: 3y 6m average prosecution; 33 applications currently pending
Career History: 202 total applications across all art units

Statute-Specific Performance

§101: 16.0% (-24.0% vs TC avg)
§103: 46.2% (+6.2% vs TC avg)
§102: 2.0% (-38.0% vs TC avg)
§112: 31.0% (-9.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 169 resolved cases.
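As a quick sanity check, the "vs TC avg" deltas above are all consistent with a single estimated Tech Center average of 40.0% per statute (back-computed from the displayed rate/delta pairs; the dashboard does not state this number explicitly):

```python
# Examiner's statute-specific allowance rates, taken from the figures above.
examiner_rates = {"101": 16.0, "103": 46.2, "102": 2.0, "112": 31.0}

# Estimated Tech Center average implied by every displayed rate/delta pair
# (e.g., 16.0% with a -24.0% delta implies 40.0%).
tc_avg_estimate = 40.0

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg_estimate
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```

Each printed delta matches the dashboard figure, which suggests the tool compares against one pooled Tech Center estimate rather than per-statute baselines.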

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending for examination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1, Statutory Category: Yes. Claim 17 is directed to a system and therefore falls in the statutory category of a machine.

Step 2A, Prong 1, Judicial Exception Recited: Yes. The claim recites “identify at least one active node from the plurality of nodes that is currently executing”. As drafted, the claim as a whole recites steps that, but for the recitation of generic computing components, could be performed in the human mind. The human mind can readily judge, evaluate, determine, or identify that at least one active node from the plurality of nodes is currently executing. Therefore, but for the recitation of generic computing components, these steps are mental processes that can be performed in the human mind (including an observation, evaluation, judgment, or opinion). Accordingly, the claims do recite judicial exceptions.

Step 2A, Prong 2, Integrated into a Practical Application: No, this judicial exception is not integrated into a practical application.
In particular, the claim recites the additional limitations “receive, by a resolver in an orchestration framework, a pause request for stopping execution of a first pipeline that includes a plurality of nodes” and “receive paused state data that is associated with the at least one active node”, which are insignificant pre-solution data gathering (see MPEP § 2106.05(g)). In addition, the limitations “a memory; and one or more processors coupled to the memory, the one or more processors being configured to”, “wherein the paused state data corresponds to an intermediate execution state of the at least one active node” and “wherein the pipeline checkpoint data includes the paused state data associated with the at least one active node” merely add the words “apply it” (or an equivalent) to the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a generic computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). Further, the limitations “send a pause command to the at least one active node from the plurality of nodes in the first pipeline” and “send pipeline checkpoint data corresponding to the first pipeline to a server” are insignificant extra-solution activity (i.e., transmitting data); see MPEP 2106.05(g). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to the abstract idea.

Step 2B, Inventive Concept: No.
The additional elements “a memory; and one or more processors coupled to the memory, the one or more processors being configured to”, “wherein the paused state data corresponds to an intermediate execution state of the at least one active node” and “wherein the pipeline checkpoint data includes the paused state data associated with the at least one active node” merely add the words “apply it” (or an equivalent) to the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a generic computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). In addition, the limitations “receive, by a resolver in an orchestration framework, a pause request for stopping execution of a first pipeline that includes a plurality of nodes” and “receive paused state data that is associated with the at least one active node” are insignificant pre-solution data gathering (see MPEP § 2106.05(g)). Further, the limitations “send a pause command to the at least one active node from the plurality of nodes in the first pipeline” and “send pipeline checkpoint data corresponding to the first pipeline to a server” are insignificant extra-solution activity (i.e., transmitting data; see MPEP 2106.05(g)) and are well-understood, routine, conventional activity (see MPEP § 2106.05(d)). Courts have identified “receiving and transmitting data, storing and retrieving information”, et cetera, as well-understood, routine, and conventional, and as mere instructions to implement an abstract idea on a computer or mere use of a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). These additional elements, alone and in combination, do not amount to significantly more than the exception itself or provide an inventive concept under Step 2B. Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B.
Here, the “receive” and “send” steps were considered extra-solution activity in Step 2A as insignificant data gathering and communication, and they are well-understood, routine, conventional activity in the field. The “receive” and “send” steps serve the purpose of communication and data transmission, activities the courts have recognized as conventional (receiving or transmitting data over a network, e.g., using the Internet to gather data: Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016); see MPEP § 2106.05(d)(II)). Accordingly, the conclusion that the “receive” and “send” steps are well-understood, routine, conventional activity is supported under Berkheimer option 2. For these reasons, there is no inventive concept in the claim, and thus the claim is ineligible.

With respect to dependent claim 18, the claim elaborates that the at least one active node from the plurality of nodes is identified based on a directed acyclic graph that is associated with the first pipeline. The limitation “identified based on a directed acyclic graph” is treated as part of the abstract idea and is analogous to a mental process, such that the concept can be performed in the human mind. The claim as a whole remains a mental process that can be performed in the human mind (including an observation, evaluation, judgment, or opinion).
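For orientation, the pause sequence recited in claims 17-18 (identify the active nodes of a pipeline from its DAG, pause them, collect their intermediate execution state, and assemble pipeline checkpoint data) can be sketched roughly as follows. This is an editor's minimal illustration with hypothetical names throughout; it is not taken from the application's specification:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Node:
    name: str
    running: bool = False                       # whether the node is currently executing
    state: dict = field(default_factory=dict)   # intermediate execution state

    def pause(self) -> dict:
        """Stop execution and return this node's paused state data."""
        self.running = False
        return {"node": self.name, **self.state}

@dataclass
class Pipeline:
    nodes: Dict[str, Node]       # node name -> Node
    dag: Dict[str, List[str]]    # node name -> downstream node names

def active_nodes(pipeline: Pipeline) -> List[Node]:
    """Identify the currently executing nodes by walking the pipeline's DAG."""
    return [pipeline.nodes[name] for name in pipeline.dag
            if pipeline.nodes[name].running]

def handle_pause_request(pipeline: Pipeline) -> dict:
    """Pause each active node, collect its intermediate state, and
    assemble pipeline checkpoint data (which would then be sent to a server)."""
    paused = [node.pause() for node in active_nodes(pipeline)]
    return {"pipeline_checkpoint": paused}

# Example: node "b" is mid-training at epoch 3 when the pause request arrives.
nodes = {"a": Node("a"), "b": Node("b", running=True, state={"epoch": 3})}
pipe = Pipeline(nodes=nodes, dag={"a": ["b"], "b": []})
print(handle_pause_request(pipe))  # {'pipeline_checkpoint': [{'node': 'b', 'epoch': 3}]}
```

The sketch only illustrates the claimed sequence of operations; the dispute in the rejection is whether such a sequence, recited at this level of generality, is more than a mental process implemented on generic hardware.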
With respect to dependent claim 19, the claim elaborates that the pipeline checkpoint data includes input data and output data corresponding to each of the plurality of nodes. This limitation merely adds the words “apply it” (or an equivalent) to the judicial exception, amounts to mere instructions to implement an abstract idea on a computer, or merely uses a generic computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). With respect to dependent claim 20, the claim elaborates that the paused state data includes at least one of a machine learning model, machine learning model weights, input data received by the at least one active node, output data generated by the at least one active node, and data corresponding to a last completed epoch. These limitations likewise merely add the words “apply it” (or an equivalent) to the judicial exception, amount to mere instructions to implement an abstract idea on a computer, or merely use a generic computer as a tool to perform an abstract idea (see MPEP 2106.05(f)).

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the limitations use a generic placeholder coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “resolver that is configured to” in claim 10, and “second pipeline is configured to” in claims 12 and 16. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The claim limitations “resolver” in claim 10 and “second pipeline” in claims 12 and 16 invoke 35 U.S.C. 112(f).
The specification at paragraph [0060] discloses “In some aspects, process 400 may start at step 402, which may include initializing of hardware or software systems associated with an orchestration framework (e.g., orchestration framework 200) and/or with any other component or device that may be configured to execute one or more steps in process 400”, and at paragraph [0095] discloses “Other examples of the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics” as the purportedly corresponding structure. However, the “resolver” and “second pipeline”, without detail about the means to accomplish the claimed functions, are not an adequate disclosure of corresponding structure: they amount to a general purpose computer, which is not sufficient structure to serve as corresponding structure under 112(f). That is, the general purpose computer must be transformed into a specially programmed computer by way of an algorithm. MPEP § 2181(II)(B) specifically states that “For a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b). See Net MoneyIN, Inc. v. VeriSign, Inc., 545 F.3d 1359, 1367, 88 USPQ2d 1751, 1757 (Fed. Cir. 2008). See also In re Aoyama, 656 F.3d 1293, 1297, 99 USPQ2d 1936, 1939 (Fed. Cir. 2011) ("[W]hen the disclosed structure is a computer programmed to carry out an algorithm, ‘the disclosed structure is not the general purpose computer, but rather that special purpose computer programmed to perform the disclosed algorithm.’") (quoting WMS Gaming, Inc. v. Int’l Game Tech., 184 F.3d 1339, 1349, 51 USPQ2d 1385, 1391 (Fed. Cir.
1999))”, and that “The corresponding structure is not simply a general purpose computer by itself but the special purpose computer as programmed to perform the disclosed algorithm. Aristocrat, 521 F.3d at 1333, 86 USPQ2d at 1239. Thus, the specification must sufficiently disclose an algorithm to transform a general purpose microprocessor to the special purpose computer.” Therefore, claims 10-16 are indefinite and are rejected under 35 U.S.C. 112(b).

Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f);
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).

If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function.
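To illustrate the level of algorithmic detail that MPEP § 2181(II)(B) contemplates for a computer-implemented means-plus-function element, a “resolver ... configured to monitor execution of the second pipeline” could, for example, be specified as a polling loop like the one below. This is a purely hypothetical editor's sketch, not something disclosed in the application's specification:

```python
import time

def monitor_pipeline(nodes, poll_interval=1.0, on_failure=None):
    """Poll each node's status until the pipeline completes or a node fails.

    `nodes` maps node names to objects exposing a status() method that
    returns "running", "completed", or "failed" (hypothetical interface).
    """
    while True:
        statuses = {name: node.status() for name, node in nodes.items()}
        if any(s == "failed" for s in statuses.values()):
            if on_failure is not None:
                on_failure(statuses)   # e.g., trigger checkpointing or replay
            return statuses
        if all(s == "completed" for s in statuses.values()):
            return statuses            # pipeline finished normally
        time.sleep(poll_interval)
```

An algorithm disclosed at roughly this step-by-step level (poll, classify, branch, report) is the kind of special-purpose programming the cited cases require; a bare recitation that a component is “configured to monitor” is not.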
For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 10-16 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claims 10, 12 and 16 contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention. As described above under 112(f) (i.e., “resolver” in claim 10 and “second pipeline” in claims 12 and 16), the disclosure does not provide adequate structure to perform the claimed functions.
The specification does not demonstrate that applicant has made an invention that achieves the claimed function, because the invention is not described with sufficient detail for one of ordinary skill in the art to reasonably conclude that the inventor had possession of the claimed invention. See MPEP § 2181(II)(B): “When a claim containing a computer-implemented 35 U.S.C. 112(f) claim limitation is found to be indefinite under 35 U.S.C. 112(b) for failure to disclose sufficient corresponding structure (e.g., the computer and the algorithm) in the specification that performs the entire claimed function, it will also lack written description under 35 U.S.C. 112(a)”. Claims 11-16 depend on claim 10 and do not overcome its deficiencies; therefore they are rejected for the same reasons as claim 10 above.

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 10-16 are rejected under 35 U.S.C. 112(b), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. As per claims 10-16: as described above under 112(f) (i.e., “resolver” in claim 10 and “second pipeline” in claims 12 and 16), these limitations, without detail about the means to accomplish the claimed functions, lack an adequate disclosure of corresponding structure. MPEP § 2181(II)(B) specifically states that “For a computer-implemented 35 U.S.C. 112(f) claim limitation, the specification must disclose an algorithm for performing the claimed specific computer function, or else the claim is indefinite under 35 U.S.C. 112(b). See Net MoneyIN, Inc. v. VeriSign, Inc., 545 F.3d 1359, 1367, 88 USPQ2d 1751, 1757 (Fed. Cir. 2008).
See also In re Aoyama, 656 F.3d 1293, 1297, 99 USPQ2d 1936, 1939 (Fed. Cir. 2011) ("[W]hen the disclosed structure is a computer programmed to carry out an algorithm, ‘the disclosed structure is not the general purpose computer, but rather that special purpose computer programmed to perform the disclosed algorithm.’") (quoting WMS Gaming, Inc. v. Int’l Game Tech., 184 F.3d 1339, 1349, 51 USPQ2d 1385, 1391 (Fed. Cir. 1999))”. Therefore, claims 10-16 are indefinite and are rejected under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 103

The following is a quotation of pre-AIA 35 U.S.C. 103(a), which forms the basis for all obviousness rejections set forth in this Office action: (a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 and 10-13 are rejected under 35 U.S.C. 103 as being unpatentable over Liguori et al. (US Pub. 2020/0310845 A1) in view of Cseri et al. (US Patent 12,461,940 B2).

As per claim 1, Liguori teaches the invention substantially as claimed, including: A system comprising (Liguori, Fig. 1, 100): a memory; and one or more processors coupled to the memory, the one or more processors being configured to (Liguori, Claim 5: A system, comprising: one or more processors; and memory that stores computer-executable instructions that, as a result of being executed, cause the one or more processors to): receive, by a server in an orchestration framework, a request for execution of a first workflow that was previously executed within the orchestration framework (Liguori, Fig.
2, 212 Call API, 214 resume function from state; Fig. 3, 320 receive request with the handle to continue the workflow (as request for execution of a first pipeline that was previously executed); Fig. 6, 600 (as orchestration framework); Abstract, A resume workflow request is received from the entity, where the resume workflow request includes a handle to a snapshot that corresponds to a state of execution of the software code and a response to the operation request to the entity. Using the handle to the snapshot and the response to the operation request, a second instance is caused to execute the software code from the first state to perform a second portion of the workflow; Fig. 1, 104, 106 and 108 (as provided by server; also see Fig. 6, functions of workflow are executed within the virtual machine of the server); [0028] lines 1-8, the VM configuration 104 is under the control of a service such as a serverless compute service provided by the computing resource service provider to its customers to perform various functions on behalf of the customers. Examples of serverless compute services include AWS Lambda, Google Cloud Functions, IBM Cloud Functions, Fn or Fn Project, platform-as-a-service service providers; also see [0040] lines 1-2, all of process 200 may be performed by any suitable system, such as a server in a data center, by various components of the system 900 described in conjunction with FIG. 9, such as the web server 906 or the application server 908 (as server in an orchestration framework)); wherein the first workflow includes a plurality of nodes (Liguori, [0013] lines 1-3, perform a workflow over a series of stages (as plurality of nodes); [0044] lines 1-7, the resuming 214 of the function may comprise the instantiation of a second virtual machine instance different from the first virtual machine instance to perform the one or more functions of the function. 
The different virtual machine instances may further execute on different physical computing devices (“hosts”). In this manner, a virtual computer system service that provides the virtual machines to customers may use any available physical hosts to efficiently host the functions at different stages of the workflow; also see Fig. 6, different functions as plurality of nodes). retrieve, by the server in the orchestration framework, workflow checkpoint data corresponding to the first workflow (Liguori, Fig. 2, 206 store state, 214 resume function from state; Fig. 3, 322 instantiate the next configuration to execute the function from the most recent state; [0040] lines 1-2, all of process 200 may be performed by any suitable system, such as a server in a data center; [0060] lines 1-10, receive 320 a request with the handle to continue the workflow. In various embodiments, the request may originate from the service to which the request 317 was submitted. Using the handle, the system may then instantiate 322 the next configuration to execute the function from the most recent state. In some examples, instantiating the next configuration may comprise preparing one or more physical systems, such as a server computer, and/or one or more virtual systems, such as a virtual machine to execute the function. In an embodiment, the system utilizes the handle the request comprises to instantiate the next configuration; the handle the request may provide the system with access to the most recent state of the previous configuration (as to retrieve, by the server in the orchestration framework (see Fig. 6), workflow checkpoint data (i.e., state) corresponding to the first workflow to continue execution of the workflow)); initiate, based on the workflow checkpoint data corresponding to the first workflow (Liguori, Fig. 3, 324 cause the next configuration to execute the function to continue the workflow). 
Liguori fails to specifically teach that the workflow is a pipeline; that the initiating is initiating a second pipeline that is a clone of the first pipeline; and initiating a resolver that is configured to monitor execution of the second pipeline. However, Cseri teaches that the workflow is a pipeline and that the initiating is initiating a second pipeline that is a clone of the first pipeline (Cseri, Abstract, lines 2-6, detecting a committed version of recurrently executed tasks of a first data pipeline on a primary deployment that is hosted on a first cloud service, and replicating the committed version of the recurrently executed tasks to a second data pipeline on a secondary deployment that is hosted on a second cloud service; Fig. 9, data pipeline including tasks/nodes; Col 3, lines 17-22, A customer decides how frequently they wish to replicate and hence they also need sufficient information to decide at failover if they want to use the replicated checkpoint or externally upgrade the checkpoint to a later state if they so desire. In this way, they get the minimum configured loss of recency with the ability to override it for a newer pipeline version; Col 13, lines 50-55, data pipeline replicator 232 identifies a latest committed version of a primary pipeline in response to (or created by) a “resume” or “execute” command.
In block 604, the data pipeline replicator 232 replicates the identified latest committed version of the primary pipeline to a secondary pipeline; Col 23, lines 1-6, the first data pipeline and the second data pipeline comprising same computing tasks; receiving a confirmation from the client device indicating that a version of the second data pipeline is acceptable for resuming execution of the first data pipeline on the second cloud service (as initiate a second pipeline that is a clone of the first pipeline)), and initiating a resolver that is configured to monitor execution of the second pipeline (Cseri, Col 7, lines 19-21, the management console service 208 (as resolver) may receive a request to execute a job and monitor the workload on the system (as monitoring execution of the second pipeline)). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Liguori with Cseri, because Cseri’s teaching of replicating the first pipeline with its checkpoint to a second pipeline and resuming execution by executing the second pipeline would have provided Liguori’s system with the advantage and capability of continuing execution of the pipeline when the system is at failover, in order to provide the minimum configured loss of recency with the ability to override it for a newer pipeline version (see Cseri, Col 3, lines 16-24).

As per claim 2, Liguori and Cseri teach the invention according to claim 1 above. Liguori further teaches wherein the request corresponds to a resume request for restarting execution of the first workflow from an intermediate execution state of at least one node from the plurality of nodes in the first workflow (Liguori, Fig. 2, 206 store state (as the intermediate execution state), 214 resume function from state; Fig.
3, 324, 310, 316; Abstract, A resume workflow request is received from the entity, where the resume workflow request includes a handle to a snapshot that corresponds to a state of execution of the software code and a response to the operation request to the entity. Using the handle to the snapshot and the response to the operation request, a second instance is caused to execute the software code from the first state to perform a second portion of the workflow; [0096] lines 1-10, the client 740 invokes 716 a handle to resume the terminated function. The terminated function may be resumed by the instantiation of a virtual machine instance to perform a second generation 742B of the function. In some examples, the invoke handle may be utilized to continue 714 the operation of the previously terminated function. For example, the previously terminated function may have comprised analyzing a portion of a large set of data; the continued operation of the second generation 742B of the function may continue to analyze additional portions of the large set of data. In an embodiment, the continued operation of the second generation 742B of the function comprises the performance of one or more operations, and a storage 718 of a continuation or state of the second generation 742B of the function following the performance of the one or more operations. The stored state, which in some examples can be referred to as a continuation). In addition, Cseri teaches the workflow is pipeline (Cseri, Col 13, lines 50-55, data pipeline replicator 232 identifies a latest committed version of a primary pipeline in response to (or created by) a “resume” or “execute” command. 
In block 604, the data pipeline replicator 232 replicates the identified latest committed version of the primary pipeline to a secondary pipeline; Col 23, lines 1-6, the first data pipeline and the second data pipeline comprising same computing tasks; receiving a confirmation from the client device indicating that a version of the second data pipeline is acceptable for resuming execution of the first data pipeline on the second cloud service (as initiate a second pipeline that is a clone of the first pipeline). As per claim 3, Liguori and Cseri teach the invention according to claim 2 above. Liguori further teaches allocate compute resources for execution of the first workflow wherein the first workflow is configured to commence execution from a resume state corresponding to the intermediate execution state of the at least one node (Liguori, Fig. 2, 214 as resume state; [0028] lines 1-16, the VM configuration 104 is under the control of a service such as a serverless compute service provided by the computing resource service provider to its customers to perform various functions on behalf of the customers. Examples of serverless compute services include AWS Lambda, Google Cloud Functions, IBM Cloud Functions, Fn or Fn Project, platform-as-a-service service providers, and more. A serverless compute service may be serverless in the sense that computing resources are dynamically allocated to perform functions (also referred to as serverless compute functions, serverless functions, Lambda functions) triggered by the events such as invocation of an endpoint from a client (e.g., a web API call via a network such as the Internet). In an embodiment, a serverless compute function is triggered when a serverless compute endpoint is invoked and computing resources in which the function can run are provisioned in response to the trigger being detected. 
Note, however, that embodiments of the present disclosure need not be limited to use with serverless compute services, but may also be implemented on some other virtual computing service platform, such as a software container service or virtual computer system service. The computing resources utilized may be in accordance with a computing environment that is suitable to execute the function. The computing resources can be physical, which may include physical server computers, or virtual, which may include virtual machines; Abstract, A resume workflow request is received from the entity, where the resume workflow request includes a handle to a snapshot that corresponds to a state of execution of the software code and a response to the operation request to the entity. Using the handle to the snapshot and the response to the operation request, a second instance is caused to execute the software code from the first state to perform a second portion of the workflow; [0042] lines 2-3, the stored state may be accessed by the usage of an invoke handle; [0044] lines 1-3, the resuming 214 of the function may comprise the instantiation of a second virtual machine instance different from the first virtual machine instance to perform the one or more functions of the function). In addition, Cseri teaches the first workflow is a second pipeline (Cseri, Abstract, lines 2-6, detecting a committed version of recurrently executed tasks of a first data pipeline on a primary deployment that is hosted on a first cloud service, and replicating the committed version of the recurrently executed tasks to a second data pipeline on a secondary deployment that is hosted on a second cloud service; Fig. 
9, data pipeline including tasks/nodes; Col 3, lines 17-22, A customer decides how frequently they wish to replicate and hence they also need sufficient information to decide at failover if they want to use the replicated checkpoint or externally upgrade the checkpoint to a later state if they so desire. In this way, they get the minimum configured loss of recency with the ability to override it for a newer pipeline version; Col 13, lines 50-55, data pipeline replicator 232 identifies a latest committed version of a primary pipeline in response to (or created by) a “resume” or “execute” command. In block 604, the data pipeline replicator 232 replicates the identified latest committed version of the primary pipeline to a secondary pipeline; Col 23, lines 1-6, the first data pipeline and the second data pipeline comprising same computing tasks; receiving a confirmation from the client device indicating that a version of the second data pipeline is acceptable for resuming execution of the first data pipeline on the second cloud service). As per claim 4, Liguori and Cseri teach the invention according to claim 2 above. Liguori further teaches wherein the workflow checkpoint data includes paused state data that corresponds to the intermediate execution state (Liguori, Fig. 2, 206 store state (as intermediate execution state), 214 resume function from state (as checkpoint data includes paused state data); also see Fig. 3, 316, 320, 322, 324 and back to 310 again and 316). In addition, Cseri further teaches the workflow checkpoint data is pipeline checkpoint data (Cseri, Col 3, lines 17-22, A customer decides how frequently they wish to replicate and hence they also need sufficient information to decide at failover if they want to use the replicated checkpoint or externally upgrade the checkpoint to a later state if they so desire. 
In this way, they get the minimum configured loss of recency with the ability to override it for a newer pipeline version; Col 13, lines 50-55, data pipeline replicator 232 identifies a latest committed version of a primary pipeline in response to (or created by) a “resume” or “execute” command. In block 604, the data pipeline replicator 232 replicates the identified latest committed version of the primary pipeline to a secondary pipeline; Col 23, lines 1-6, the first data pipeline and the second data pipeline comprising same computing tasks; receiving a confirmation from the client device indicating that a version of the second data pipeline is acceptable for resuming execution of the first data pipeline on the second cloud service (as initiating a second pipeline that is a clone of the first pipeline)). As per claim 5, Liguori and Cseri teach the invention according to claim 4 above. Liguori further teaches wherein the paused state data includes at least one of a machine learning model, machine learning model weights, input data received by the at least one node, output data generated by the at least one node, and data corresponding to a last completed epoch (Liguori, [0038] an embodiment where service A 106 is called repeatedly by the function, which stores its state to resume from the stored state when service A 106 responds with the invoke handle and its response; [0060] the system may then instantiate 322 the next configuration to execute the function from the most recent state; [0068] the results of the first stage 402A of the function may be stored with its state such that the results are accessible to the next stage of the function upon revival from the stored state (as output data generated by the at least one node, and data corresponding to a last completed epoch)). As per claim 10, it is a method claim of claim 1 above. Therefore, it is rejected for the same reason as claim 1 above. 
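For illustration only, the categories of paused state data recited in claim 5 (a machine learning model or its weights, node input/output data, and data for a last completed epoch) can be pictured as a minimal checkpoint record. The sketch below uses hypothetical names and is not drawn from Liguori or Cseri:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class PausedNodeState:
    """Illustrative record covering the claim 5 categories of paused state data."""
    node_id: str
    model_weights: dict[str, list[float]] = field(default_factory=dict)
    input_data: list[Any] = field(default_factory=list)   # input received by the node
    output_data: list[Any] = field(default_factory=list)  # output generated by the node
    last_completed_epoch: int = 0

    def resume_epoch(self) -> int:
        # Resumption would begin at the epoch after the last completed one.
        return self.last_completed_epoch + 1


state = PausedNodeState(node_id="train-node", last_completed_epoch=4)
```

Under this sketch, a resumed node would restart its loop at `state.resume_epoch()` rather than from epoch zero.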
As per claims 11-13, they are method claims of claims 2-5 above (i.e., claim 13, which corresponds to claims 4 and 5 above). Therefore, they are rejected for the same reason as claims 2-5 above. Claims 6, 8-9, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Liguori and Cseri, as applied to claims 1 and 10 respectively above, and further in view of Uhrenholt (US Patent 11,132,835 B1). As per claim 6, Liguori and Cseri teach the invention according to claim 1 above. Liguori and Cseri fail to specifically teach wherein the request corresponds to a replay request for repeating execution of at least a portion of the first pipeline. However, Uhrenholt teaches wherein the request corresponds to a replay request for repeating execution of at least a portion of the first pipeline (Uhrenholt, Col 10, lines 45-56, primitives currently in the second section of the graphics processing pipeline when the suspend command is received were simply discarded, and then repeated from the beginning when processing is resumed without, e.g., first clearing the associated fragment state (data) the repeated processing of the primitives may introduce observable artefacts into the render output. The technology described herein thus provides an efficient mechanism for clearing the intermediate fragment state (data) from the second section of the graphics processing pipeline (and thereby allowing the processing of the primitives to be safely stopped and then repeated from the beginning; Col 32, lines 19-25, issuing the primitives for which it is determined that the processing was previously suspended into the graphics processing pipeline and repeating at least some of the processing for the primitives in the graphics processing pipeline to generate from the primitives respective sets of one or more graphics fragments for rendering). 
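The suspend-then-replay behavior attributed to Uhrenholt (in-flight items are discarded on suspend and repeated from the beginning on resume) can be sketched in simplified form. All names here are hypothetical; this is an analogue of the cited mechanism, not the reference's implementation:

```python
class ReplayPipeline:
    """Minimal sketch: items suspended mid-flight are replayed from scratch."""

    def __init__(self, items):
        self.pending = list(items)   # items whose processing is in flight
        self.completed = []

    def suspend(self):
        # Discard any intermediate state; record which items must be replayed.
        return list(self.pending)

    def resume(self, replayed):
        # Repeat processing from the beginning for each replayed item.
        for item in replayed:
            self.completed.append(item)
            self.pending.remove(item)


p = ReplayPipeline(["prim_a", "prim_b"])
to_replay = p.suspend()   # suspend: both in-flight items are marked for replay
p.resume(to_replay)       # resume: both items are reprocessed from the start
```

The design choice mirrored here is that no partial results survive the suspend; correctness comes from repeating the work in full.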
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Liguori and Cseri with Uhrenholt because Uhrenholt’s teaching of a resume request corresponding to a repeat/replay request for repeating execution of at least some portions of the processing pipeline would have provided Liguori and Cseri’s system with the advantage and capability to allow the processing of the primitives to be safely stopped and then repeated from the beginning, thereby improving data processing efficiency (see Uhrenholt, Col 12, lines 33-40, “improved overall suspend mechanism”). As per claim 8, Liguori, Cseri and Uhrenholt teach the invention according to claim 6 above. Liguori further teaches allocate compute resources for execution of the first workflow pipeline, wherein the first workflow is configured to commence execution from a state, wherein the pause request was received while the first workflow was previously executed within the orchestration framework (Liguori, Fig. 2, 208 terminate function (as pause), 214 resume function from state; [0028] lines 1-16, the VM configuration 104 is under the control of a service such as a serverless compute service provided by the computing resource service provider to its customers to perform various functions on behalf of the customers. Examples of serverless compute services include AWS Lambda, Google Cloud Functions, IBM Cloud Functions, Fn or Fn Project, platform-as-a-service service providers, and more. A serverless compute service may be serverless in the sense that computing resources are dynamically allocated to perform functions (also referred to as serverless compute functions, serverless functions, Lambda functions) triggered by the events such as invocation of an endpoint from a client (e.g., a web API call via a network such as the Internet). 
In an embodiment, a serverless compute function is triggered when a serverless compute endpoint is invoked and computing resources in which the function can run are provisioned in response to the trigger being detected. Note, however, that embodiments of the present disclosure need not be limited to use with serverless compute services, but may also be implemented on some other virtual computing service platform, such as a software container service or virtual computer system service. The computing resources utilized may be in accordance with a computing environment that is suitable to execute the function. The computing resources can be physical, which may include physical server computers, or virtual, which may include virtual machines; Abstract, A resume workflow request is received from the entity, where the resume workflow request includes a handle to a snapshot that corresponds to a state of execution of the software code and a response to the operation request to the entity. Using the handle to the snapshot and the response to the operation request, a second instance is caused to execute the software code from the first state to perform a second portion of the workflow; [0042] lines 2-3, the stored state may be accessed by the usage of an invoke handle; [0044] lines 1-3, the resuming 214 of the function may comprise the instantiation of a second virtual machine instance different from the first virtual machine instance to perform the one or more functions of the function). 
In addition, Cseri teaches the first workflow is a second pipeline, and wherein the pause request was received while the first pipeline was previously executed within the orchestration framework (Cseri, Abstract, lines 2-6, detecting a committed version of recurrently executed tasks of a first data pipeline on a primary deployment that is hosted on a first cloud service, and replicating the committed version of the recurrently executed tasks to a second data pipeline on a secondary deployment that is hosted on a second cloud service; Fig. 9, data pipeline including tasks/nodes; Col 3, lines 17-22, A customer decides how frequently they wish to replicate and hence they also need sufficient information to decide at failover if they want to use the replicated checkpoint or externally upgrade the checkpoint to a later state if they so desire. In this way, they get the minimum configured loss of recency with the ability to override it for a newer pipeline version; Col 13, lines 50-55, data pipeline replicator 232 identifies a latest committed version of a primary pipeline in response to (or created by) a “resume” or “execute” command. In block 604, the data pipeline replicator 232 replicates the identified latest committed version of the primary pipeline to a secondary pipeline; Col 15, line 37, SUSPEND the task; Col 23, lines 1-6, the first data pipeline and the second data pipeline comprising same computing tasks; receiving a confirmation from the client device indicating that a version of the second data pipeline is acceptable for resuming execution of the first data pipeline on the second cloud service [Examiner noted: the pause request also applies to the second pipeline, and that pause request was received while the first pipeline was previously executed, due to dynamic changes to the database system (see Col 5, lines 42-43, This architecture supports dynamic changes to the database system)]). 
Further, Uhrenholt teaches that execution commences from a replay state that is prior to an intermediate execution state of at least one node that was active during a pause request (Uhrenholt, Col 7, lines 1-5, writing out a set of suspend operation state information (as include intermediate execution state) for the set of one or more graphics fragments indicating that the processing in respect of the set of one or more graphics fragments was suspended; Col 10, lines 45-56, primitives currently in the second section of the graphics processing pipeline when the suspend command is received were simply discarded, and then repeated from the beginning (as from a replay state that is prior to an intermediate execution state) when processing is resumed without, e.g., first clearing the associated fragment state (data) the repeated processing of the primitives may introduce observable artefacts into the render output. The technology described herein thus provides an efficient mechanism for clearing the intermediate fragment state (data) from the second section of the graphics processing pipeline (and thereby allowing the processing of the primitives to be safely stopped and then repeated from the beginning; Col 32, lines 19-25, issuing the primitives for which it is determined that the processing was previously suspended into the graphics processing pipeline and repeating at least some of the processing for the primitives in the graphics processing pipeline to generate from the primitives respective sets of one or more graphics fragments for rendering). As per claim 9, Liguori, Cseri and Uhrenholt teach the invention according to claim 8 above. 
Uhrenholt further teaches wherein the replay state that is prior to the intermediate execution state of the at least one node corresponds to at least one other node from the plurality of nodes, wherein the at least one other node completed execution prior to the pause request (Uhrenholt, Col 7, lines 1-5, writing out a set of suspend operation state information (as include intermediate execution state) for the set of one or more graphics fragments (As nodes) indicating that the processing in respect of the set of one or more graphics fragments was suspended; Col 8, lines 18-19, (repeatedly) suspend/resume the processing of a (and each) render output; Col 11, lines 46-54, when processing is to be resumed, the graphics primitives for which processing was suspended in this way, i.e. that were in the second section of the graphics processing pipeline when the suspend command was received, can safely be (and in the technology described herein are) re-issued into the graphics processing pipeline and processed again from the beginning in order to rebuild the necessary intermediate fragment state (data) for the render output (as prior to the intermediate execution state of the at least one node corresponds to at least one other node from the plurality of nodes, wherein the at least one other node completed execution prior to the pause request); also see Col 10, lines 45-56, primitives currently in the second section of the graphics processing pipeline when the suspend command is received were simply discarded, and then repeated from the beginning when processing is resumed without, e.g., first clearing the associated fragment state (data) the repeated processing of the primitives may introduce observable artefacts into the render output. 
The technology described herein thus provides an efficient mechanism for clearing the intermediate fragment state (data) from the second section of the graphics processing pipeline (and thereby allowing the processing of the primitives to be safely stopped and then repeated from the beginning; Col 32, lines 19-25, issuing the primitives for which it is determined that the processing was previously suspended into the graphics processing pipeline and repeating at least some of the processing for the primitives in the graphics processing pipeline to generate from the primitives respective sets of one or more graphics fragments for rendering). As per claim 14, it is a method claim of claim 6 above. Therefore, it is rejected for the same reason as claim 6 above. As per claim 16, it is a method claim of claim 8 above. Therefore, it is rejected for the same reason as claim 8 above. Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Liguori, Cseri and Uhrenholt, as applied to claims 6 and 14 respectively above, and further in view of Bradshaw et al. (US Pub. 2015/0277965 A1). As per claim 7, Liguori, Cseri and Uhrenholt teach the invention according to claim 6 above. Liguori, Cseri and Uhrenholt fail to specifically teach wherein the pipeline checkpoint data includes input data and output data corresponding to each of a plurality of nodes in the first pipeline. However, Bradshaw teaches wherein the pipeline checkpoint data includes input data and output data corresponding to each of a plurality of nodes in the first pipeline (Bradshaw, Fig. 1, 106 pipeline; [0040] the process 400 includes determining a pipeline state in response to executing the pipeline on the first input data set, the pipeline state including a representation of the first input data set and the first output data set. 
The pipeline state may be updated in response to executing the pipeline on the set of differences from the first input data set to generate an updated pipeline state, the updated pipeline state including a representation of the second input data set and the second output data set. In some cases, a pipeline object state may be determined for each of the one or more pipeline objects in response to executing the pipeline on the first input data set, the pipeline object state including a representation of the input data set and the output data set for the pipeline object. The pipeline object state may also be updated in response to executing the pipeline on the set of differences from the first input data set to generate an updated pipeline object state, the updated pipeline object state including differences from the input data set and the output data set for the pipeline object). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Liguori, Cseri and Uhrenholt with Bradshaw because Bradshaw’s teaching of pipeline object state (as pipeline checkpoint data) including a representation of the input data set and the output data set for the pipeline object would have provided Liguori, Cseri and Uhrenholt’s system with the advantage and capability to allow the system to quickly resume the execution based on the saved input and output state information, thereby improving system performance and efficiency. As per claim 15, it is a method claim of claim 7 above. Therefore, it is rejected for the same reason as claim 7 above. Claims 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Uhrenholt (US Patent 11,132,835 B1) in view of CAIN et al. (US Pub. 2016/0373833 A1) and further in view of MERCURI (US Pub. 2012/0116980 A1). As per claim 17, Uhrenholt teaches the invention substantially as claimed including A system comprising (Uhrenholt, Fig. 
1): a memory; and one or more processors coupled to the memory, the one or more processors being configured to (Uhrenholt, Fig. 1, GPU and memory; Col 37, lines 5-15, the technology described herein is in an embodiment implemented in a system comprising a memory system and a graphics processing unit (GPU) (a graphics processor). Data for a render output (e.g. image to be displayed) is in an embodiment stored in a memory of the memory system. The GPU is in an embodiment arranged to read required data from the memory system for generating the render output (e.g. in the manner described above). The render output, once generated in this way, is then in an embodiment displayed, e.g. on a display such as a screen or the like); also see Col 40, lines 12-16, the graphics processor will typically receive commands and data from a driver, e.g. executing on a host processor, that indicates to the graphics processor the operations that it is to carry out and the data to be used for those operations): receive, by a resolver in an orchestration framework, a pause request for stopping execution of a first pipeline that includes a plurality of nodes (Uhrenholt, Fig. 2, 21 Fragment shader endpoint, 26 primitive reorder unit (as resolver in an orchestration framework, i.e., Fig. 2, 10) receives VTILE_SUSPEND from 21; Abstract, To suspend processing of a sequence of primitives being processed in a graphics processing pipeline; (As a first pipeline that includes a plurality of nodes (i.e., sequence of primitives)); identify at least one active node from the plurality of nodes that is currently executing (Uhrenholt, Col 43, lines 47-61, At the same time, the suspend operation is signaled to the primitive re-order unit 26 (step 303), which then determines a suitable suspend operation boundary primitive at which to suspend the current sequence of primitives. 
In particular, in the present embodiments, the selected boundary primitive is the last primitive in the primitive re-order unit 26 that is guaranteed to still be in order, and for which the processing thus far has not produced any observable effects for the render output. The primitive re-order unit 26 then responds with a primitive identifier (“primitive_id”) identifying the position of the selected boundary primitive within the sequence of primitives for the rendering tile that is currently being processed, as well as a tile identifier (“VTILE_id”) identifying the tile in question. The tile buffer 33 is then notified which tile is to be suspended); receive paused state data that is associated with the at least one active node, wherein the paused state data corresponds to an intermediate execution state of the at least one active node and the paused state data associated with the at least one active node (Uhrenholt, Abstract, To suspend processing of a sequence of primitives (as including the at least one active node) being processed in a graphics processing pipeline; Col 10, lines 29-36, provide a more efficient mechanism for handling the relatively large amounts of intermediate fragment state (data) that may typically be generated by the (out of order) processing of primitives in the second section of the graphics processing pipeline, which intermediate fragment state (data) may otherwise need to be written out in order to be able to safely resume processing of these primitives; Col 13, lines 30-35, each execution thread of the programmable execution unit in an embodiment has an associated set of registers for storing data for the execution thread. 
The data may be loaded into the registers, and written out from the registers from and to an appropriate memory system of or accessible to the graphics processor; Col 14, lines 51-61, Once the execution of the fragment shader program for a set of graphics fragments has been suspended, the content of the registers associated with the threads of the group of one or more execution threads executing the fragment shader program for the set of one or more graphics fragments is in an embodiment then written out, e.g., to storage. This can then allow the execution of the fragment shader program for the set of one or more graphics fragments to be resumed from the next instruction to be executed in a straightforward and efficient manner (as receive paused state data that is associated with the at least one active node)). Uhrenholt fails to specifically teach send a pause command to the at least one active node from the plurality of nodes in the first pipeline. However, CAIN teaches send a pause command to the at least one active node from the plurality of nodes in the first pipeline (CAIN, [0656] The resource manager 6830 retrieves the previously allocated resource from the first pipeline 6840 according to the step S6818 or the step S6820 and the retrieved resource is allocated to the second pipeline 6850 again (S6822). In this case, the resource manager 6830 transmits a pause command to the first application 6810 related to the first pipeline (as send a pause command to the at least one active node from the plurality of nodes in the first pipeline) 6840 (S6824). And, the resource manager 6830 suspends the first pipeline 6840 (S6826)). 
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Uhrenholt with CAIN because CAIN’s teaching of sending the pause command directly to the application/node of the pipeline for suspending the pipeline would have provided Uhrenholt’s system with the advantage and capability to allow the system to easily suspend the selected node/application from the pipeline, thereby improving system performance and efficiency. Uhrenholt and CAIN fail to specifically teach send pipeline checkpoint data corresponding to the first pipeline to a server, wherein the pipeline checkpoint data includes the paused state data. However, MERCURI teaches send pipeline checkpoint data corresponding to the first pipeline to a server, wherein the pipeline checkpoint data includes the paused state data (MERCURI, Fig. 1, 130, 132 as server; Fig. 3, 340 receive stop command, 346 store workflow state (as pipeline checkpoint data), 348 transmit workflow state, 350 receive workflow state, 354, 356, 358 and 360 continue executing workflow with new state; [0099] The workflow state may be received in block 350 by the workflow manager 302 and may be transmitted to the second workflow provider 306 in block 354. The second workflow provider 306 may receive the workflow state in block 356 and may perform a translation or conversion in block 358 to match the local schema. The second workflow provider 306 may continue executing the workflow with the new state in block 360 (please note: pipeline was taught by Uhrenholt)). 
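The claim 17 limitation of sending pipeline checkpoint data, including the paused state data, to a server can be pictured with a short sketch. All names here are hypothetical and the transport is abstracted to a callable; MERCURI's actual mechanism transmits workflow state between workflow providers, and this is only a rough analogue:

```python
import json


def build_checkpoint(pipeline_id, paused_states):
    # Pipeline checkpoint data bundles the paused state data of each node,
    # mirroring the claim language "the pipeline checkpoint data includes
    # the paused state data".
    return {"pipeline": pipeline_id, "paused_states": paused_states}


def send_checkpoint(checkpoint, transport):
    # `transport` is any callable that ships the serialized payload to a
    # server; a real system might POST it over HTTP instead.
    payload = json.dumps(checkpoint)
    transport(payload)
    return payload


sent = []
send_checkpoint(build_checkpoint("p1", {"n1": {"epoch": 3}}), sent.append)
```

Serializing the checkpoint before transmission is what would let a different server, or a different provider as in MERCURI, resume the paused pipeline from the same state.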
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Uhrenholt and CAIN with MERCURI because MERCURI’s teaching of sending the workflow checkpoint data to another server provider to continue execution would have provided Uhrenholt and CAIN’s system with the advantage and capability to allow the system to send the paused state of the workflow to a low-cost server for continued execution, thereby reducing the execution cost and improving the system performance (see MERCURI, [0045], “when there may be a cost advantage to change providers”). As per claim 20, Uhrenholt, CAIN and MERCURI teach the invention according to claim 17 above. Uhrenholt further teaches wherein the paused state data includes at least one of a machine learning model, machine learning model weights, input data received by the at least one active node, output data generated by the at least one active node, and data corresponding to a last completed epoch (Uhrenholt, Abstract, To suspend processing of a sequence of primitives being processed in a graphics processing pipeline; Col 9, lines 57-61, in response to receiving a command to suspend processing, rather than simply at that point immediately, e.g., storing out any and all required data, state, etc. needed to allow the processing of the primitives to be resumed at a later time; Col 50, line 66 – Col 51, line 4, whose shader program execution is to be resumed, the previously stored warp state will first be loaded from memory (step 80). The corresponding register file content for the thread group in question will then also be loaded from memory into the appropriate registers for the thread group (step 81). (as data corresponding to a last completed epoch)). Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Uhrenholt, CAIN and MERCURI, as applied to claim 17 above, and further in view of Zorin et al. (US Pub. 
2023/0409386 A1). As per claim 18, Uhrenholt, CAIN and MERCURI teach the invention according to claim 17 above. Uhrenholt teaches wherein the at least one active node from the plurality of nodes is identified (Uhrenholt, Col 43, lines 47-61, At the same time, the suspend operation is signaled to the primitive re-order unit 26 (step 303), which then determines a suitable suspend operation boundary primitive at which to suspend the current sequence of primitives. In particular, in the present embodiments, the selected boundary primitive is the last primitive in the primitive re-order unit 26 that is guaranteed to still be in order, and for which the processing thus far has not produced any observable effects for the render output. The primitive re-order unit 26 then responds with a primitive identifier (“primitive_id”) identifying the position of the selected boundary primitive within the sequence of primitives for the rendering tile that is currently being processed, as well as a tile identifier (“VTILE_id”) identifying the tile in question. The tile buffer 33 is then notified which tile is to be suspended). Uhrenholt, CAIN and MERCURI fail to specifically teach the identification is based on a directed acyclic graph that is associated with the first pipeline. However, Zorin teaches the identification is based on a directed acyclic graph that is associated with the first pipeline (Zorin, [0004] The orchestration interface iteratively identifies dependent tasks that depend on already completed tasks as per the runtime dependencies by traversing the DAG and determining the status of each node and the dependencies to and from respective nodes and, accordingly, instructs the TES to execute the dependent tasks identified; [0042] pipelines of tasks). 
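The claim 18 limitation, identifying the active node based on a directed acyclic graph associated with the pipeline, can be illustrated with a small sketch. The structures below are hypothetical (not Zorin's code): the DAG maps each node to its upstream dependencies, and a status map records each node's execution state:

```python
def active_nodes(dag, status):
    """Return the nodes that are currently executing and whose upstream
    dependencies have all completed, roughly analogous to traversing the
    DAG and checking the status of each node and its dependencies."""
    return [
        node
        for node, deps in dag.items()
        if status.get(node) == "running"
        and all(status.get(d) == "done" for d in deps)
    ]


# A three-node pipeline: load -> train -> eval.
dag = {"load": [], "train": ["load"], "eval": ["train"]}
status = {"load": "done", "train": "running", "eval": "pending"}
```

With this DAG and status map, the traversal identifies `train` as the active node, which is the node whose intermediate state a pause request would need to capture.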
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Uhrenholt, CAIN and MERCURI with Zorin because Zorin’s teaching of identifying the status of the tasks/node based on the DAG would have provided Uhrenholt, CAIN and MERCURI’s system with the advantage and capability to allow the system to easily determine the execution status of the different tasks within the pipeline, thereby improving system performance and efficiency. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Uhrenholt, CAIN and MERCURI, as applied to claim 17 above, and further in view of Bradshaw et al. (US Pub. 2015/0277965 A1). As per claim 19, Uhrenholt, CAIN and MERCURI teach the invention according to claim 17 above. Uhrenholt, CAIN and MERCURI fail to specifically teach wherein the pipeline checkpoint data includes input data and output data corresponding to each of the plurality of nodes. However, Bradshaw teaches wherein the pipeline checkpoint data includes input data and output data corresponding to each of the plurality of nodes (Bradshaw, Fig. 1, 106 pipeline; [0040] the process 400 includes determining a pipeline state in response to executing the pipeline on the first input data set, the pipeline state including a representation of the first input data set and the first output data set. The pipeline state may be updated in response to executing the pipeline on the set of differences from the first input data set to generate an updated pipeline state, the updated pipeline state including a representation of the second input data set and the second output data set. In some cases, a pipeline object state may be determined for each of the one or more pipeline objects in response to executing the pipeline on the first input data set, the pipeline object state including a representation of the input data set and the output data set for the pipeline object. 
The pipeline object state may also be updated in response to executing the pipeline on the set of differences from the first input data set to generate an updated pipeline object state, the updated pipeline object state including differences from the input data set and the output data set for the pipeline object).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Uhrenholt, CAIN and MERCURI with Bradshaw, because Bradshaw's teaching of a pipeline object state (as pipeline checkpoint data) including a representation of the input data set and the output data set for the pipeline object would have provided Uhrenholt, CAIN and MERCURI's system with the capability to quickly resume execution based on the saved input and output state information, thereby improving system performance and efficiency.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZUJIA XU, whose telephone number is (571) 272-0954. The examiner can normally be reached M-F 9:30-5:30 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee J Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZUJIA XU/
Examiner, Art Unit 2195

Prosecution Timeline

Apr 24, 2023
Application Filed
Nov 26, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602249
Hardware Resource Allocation System for Allocating Resources to Threads
2y 5m to grant Granted Apr 14, 2026
Patent 12541397
THREAD MANAGEMENT
2y 5m to grant Granted Feb 03, 2026
Patent 12504983
SUPERVISORY DEVICE WITH DEPLOYED INDEPENDENT APPLICATION CONTAINERS FOR AUTOMATION CONTROL PROGRAMS
2y 5m to grant Granted Dec 23, 2025
Patent 12498971
COMPUTING TASK SCHEDULING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Dec 16, 2025
Patent 12436805
COMPUTER SYSTEM WITH PROCESSING CIRCUIT THAT WRITES DATA TO BE PROCESSED BY PROGRAM CODE EXECUTED ON PROCESSOR INTO EMBEDDED MEMORY INSIDE PROCESSOR
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
68%
Grant Probability
99%
With Interview (+81.5%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 169 resolved cases by this examiner. Grant probability derived from career allow rate.
