Prosecution Insights
Last updated: April 19, 2026
Application No. 18/434,452

RUNTIME LAYER OPERATING CONVERGENCE LOOP FOR APPLICATION DEPLOYMENT AND ROLLBACK

Non-Final OA: §101, §103
Filed
Feb 06, 2024
Examiner
NGUYEN, MONGBAO
Art Unit
2192
Tech Center
2100 — Computer Architecture & Software
Assignee
Prodvana
OA Round
1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (482 granted / 562 resolved), +30.8% vs TC avg, above average
Interview Lift: +43.1% among resolved cases with interview (a strong lift)
Typical Timeline: 2y 9m avg prosecution; 24 applications currently pending
Career History: 586 total applications across all art units
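The headline figures above can be cross-checked from the raw counts shown. The sketch below assumes the "+30.8% vs TC avg" delta is expressed in absolute percentage points (an assumption; the page does not say):

```python
# Career allow rate from the raw counts shown above.
granted, resolved = 482, 562
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # ~85.8%, displayed as 86%

# If the "+30.8% vs TC avg" delta is in absolute percentage points,
# it implies the Tech Center average allow rate.
tc_average = allow_rate - 0.308
print(f"Implied TC average: {tc_average:.1%}")  # ~55.0%
```

That implied TC average of roughly 55% is consistent with the statute-level "vs TC avg" deltas reported below being measured against Tech Center 2100 baselines.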

Statute-Specific Performance

§101: 17.1% (-22.9% vs TC avg)
§103: 58.4% (+18.4% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Based on career data from 562 resolved cases; Tech Center averages are estimates.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

This initial Office action is based on the application filed on 02/06/2024, in which claims 1-20 have been presented for examination.

Status of Claims

Claims 1-20 are pending in the application and have been examined below; claims 1, 12 and 20 are presented in independent form.

Priority

The present application claims priority to U.S. provisional patent application serial number 63/489,934, filed March 13, 2023.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 10/09/2024. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Examiner Notes

The examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or discussed by the examiner.

Abstract Objection

Line 1 of the Abstract recites "disclosed for…". The applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet, within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need to consult the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid phrases that can be implied, such as "The disclosure concerns," "The disclosure defined by this invention," "The disclosure describes," etc. In addition, the form and legal phraseology often used in patent claims, such as "means" and "said," should be avoided.

Claim Objections

Claims 1-20 are objected to because of the following informalities:

Claim 1, line 9: "the plurality of clusters" lacks proper antecedent basis. It should have been --the plurality of cluster capabilities--.

Claim 12, line 1: replace "configured to store" with --storing--. Further, in line 11, "the plurality of clusters" lacks proper antecedent basis. It should have been --the plurality of cluster capabilities--.

Claim 20, line 14: "the plurality of clusters" lacks proper antecedent basis. It should have been --the plurality of cluster capabilities--.

Claims 2-11 and 13-19 depend on the objected claims and inherit the same issues. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis specific to claims 1, 12 and 20 is presented below.

Claims 1, 12 and 20:

Step 1 Analysis: Claim 1 of the instant application is directed to a process. Claim 12 of the instant application is directed to a product. Claim 20 of the instant application is directed to an apparatus.
Step 2 Analysis: Claims 1, 12 and 20 recite:

(a) receiving, by a convergence tool, using an API of the application layer, configuration information for a service, the configuration information received using the runtime layer;
(b) accessing, by the convergence tool, a database storing a plurality of cluster capabilities available for operating the service, the database populated by the runtime layer maintaining a state of capabilities for each of the plurality of clusters;
(c) determining, by the convergence tool, from the configuration information, a plurality of deployment conditions for the service;
(d) instantiating, by the convergence tool, a convergence loop, the convergence loop monitoring for each deployment condition of the plurality of deployment conditions in parallel;
(e) determining, by the convergence tool, that convergence has occurred based on each of the deployment conditions being met;
(f) in response to determining that convergence has occurred, instructing, by the convergence tool, using an API of the infrastructure layer, the infrastructure layer to implement the service.

Step 2A -- Prong 1: Claims 1, 12 and 20 recite the limitations of:

(c) determining, by the convergence tool, from the configuration information, a plurality of deployment conditions for the service;
(e) determining, by the convergence tool, that convergence has occurred based on each of the deployment conditions being met.

Limitations (c) and (e), as drafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitation in the mind. That is, nothing in the claim elements precludes the steps from practically being performed in the mind or with pen and paper; i.e., "determining" can be performed in the human mind through observation, evaluation, judgment, and opinion, with the aid of pen and paper. As such, these limitations fall within the "Mental Processes" grouping of abstract ideas.
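Purely for orientation (this is an editor's illustration, not the applicant's disclosed implementation, and every name in it is hypothetical), limitations (b)-(f) describe a control flow of roughly this shape, with condition checks run in parallel per limitation (d):

```python
from concurrent.futures import ThreadPoolExecutor

def derive_conditions(config):
    # (c) hypothetical rule: one readiness condition per required capability
    return [lambda caps, cap=cap: cap in caps for cap in config["requires"]]

def converge(config, cluster_capabilities, implement_service):
    # (b) cluster_capabilities stands in for the database state the
    #     runtime layer maintains for each cluster
    conditions = derive_conditions(config)
    # (d) monitor each deployment condition in parallel
    with ThreadPoolExecutor() as pool:
        met = list(pool.map(lambda cond: cond(cluster_capabilities), conditions))
    # (e) convergence: every deployment condition is met
    if all(met):
        implement_service(config["service"])  # (f) instruct infrastructure layer
        return True
    return False
```

For example, `converge({"service": "svc-a", "requires": ["dns", "tls"]}, {"dns", "tls", "lb"}, deploy_fn)` deploys, while a missing capability leaves the service undeployed until a later pass converges.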
Step 2A -- Prong 2: Claim 1 recites the additional limitation of "a convergence tool". The limitation "a convergence tool" is recited as a tool performing the abstract idea.

Claim 12 recites the additional limitations of "a non-transitory computer readable medium" and "one or more processors". These limitations are recited at a high level of generality; i.e., they are merely instructions to implement the abstract idea on a generic computer, or merely use a computer as a tool to perform the abstract idea. The limitation "a convergence tool" is recited as a tool performing the abstract idea.

Claim 20 recites the additional limitations of "a system", "a non-transitory medium comprising memory" and "one or more processors". These limitations are recited at a high level of generality; i.e., they are merely instructions to implement the abstract idea on a generic computer, or merely use a computer as a tool to perform the abstract idea. The limitation "a convergence tool" is recited as a tool performing the abstract idea.

Additionally, limitation (a) is well-understood, routine and conventional activity; limitation (b) is merely insignificant extra-solution activity of accessing data; limitation (d) is merely insignificant extra-solution activity of observing data; and limitation (f) is merely insignificant extra-solution activity of processing data. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Step 2B: As explained with respect to Step 2A Prong Two, the additional elements in the claims are recited at a high level of generality and amount to no more than mere instructions to apply the exception using generic computer components.
Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The same analysis applies here in Step 2B: simply adding extra-solution activity, well-understood, routine and conventional activity, or generic computer components does not integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B, since the courts have identified functions such as gathering, displaying, updating, transmitting/receiving and storing data as well-understood, routine, conventional activity. See MPEP 2106.05(d) and MPEP 2106.05(g). Therefore, the claims are ineligible.

Dependent claims

Claims 2 and 13 recite "wherein at least two of the plurality of clusters use different data schemas with respect to one another", which is merely insignificant extra-solution activity. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 2 and 13 are ineligible.

Claims 3 and 14 recite "wherein the state of capabilities maintained by the database are abstracted to a normalized data schema, and wherein data deployment of the service comprises performing a data mutation to the different data schemas on a per-cluster basis for the at least two of the plurality of clusters", which is merely insignificant extra-solution activity of data performance. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea.
As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 3 and 14 are ineligible.

Claims 4 and 15 recite "wherein the plurality of deployment conditions comprise interdependencies between at least two interdependent deployment conditions", which is merely insignificant extra-solution activity of evaluating data. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 4 and 15 are ineligible.

Claims 5 and 16 recite "wherein instructing the infrastructure layer to implement the service comprises automatically authorizing the infrastructure layer to deploy the service without prompting a human", which is merely insignificant extra-solution activity of evaluating data. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 5 and 16 are ineligible.

Claims 6 and 17 recite "wherein the configuration information comprises one or more preconditions for deploying the service, the one or more preconditions being a subset of the plurality of deployment conditions for the service, the plurality of deployment conditions comprising protections for deploying the service", which is merely insignificant extra-solution activity of evaluating data.
Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 6 and 17 are ineligible.

Claims 7 and 18 recite "wherein instructing the infrastructure layer to implement the service comprises prompting a human with an alert that convergence has occurred based on the protections", which is merely insignificant extra-solution activity of displaying data. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 7 and 18 are ineligible.

Claims 8 and 19 recite "wherein the alert comprises a selectable option, and wherein the method further comprises responsive to detecting selection of the selectable option, deploying the service", which is merely insignificant extra-solution activity of delivering data. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claims 8 and 19 are ineligible.
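Dependent claims 10 and 11, addressed next, recite detecting non-compliance with a deployment condition and rolling back to a last known working version. Mechanically that is a small guard; a hypothetical sketch (all names are illustrative, not from the application):

```python
def enforce_compliance(service, deployment_conditions, version_history):
    """Claim 10: detect that the service is out of compliance with any
    deployment condition; claim 11: roll back to the last known working
    version. version_history holds known-working versions, newest last."""
    if all(condition(service) for condition in deployment_conditions):
        return False                               # compliant; nothing to do
    service["version"] = version_history[-1]       # last known working version
    return True
```

A health-check lambda such as `lambda s: s["healthy"]` is one hypothetical deployment condition; a real system would monitor the full set continuously rather than on a single pass.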
Claim 10 recites "further comprising: detecting that the service is not in compliance with at least one deployment condition of the plurality of deployment conditions; and responsive to detecting that the service is not in compliance with at least one deployment condition of the plurality of deployment conditions, rolling back the service to an earlier version", which is merely insignificant extra-solution activity of restoring data. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claim 10 is ineligible.

Claim 11 recites "wherein the earlier version is a last known working version", which is merely insignificant extra-solution activity of defining data. Accordingly, these limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea or provide an inventive concept, and thus do not amount to significantly more than the abstract idea. As such, these claims fail both Step 2A Prong 2 and Step 2B. Therefore, claim 11 is ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7, 9-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Fu et al. (US Pub. No. 2021/0224093 A1 – IDS filed on 10/09/2024; hereinafter Fu) in view of White et al. (US Pub. No. 2023/0409409 A1 – hereinafter White).

Regarding claim 1, Fu discloses a method for orchestrating application protocol interfaces (APIs) of an infrastructure layer and an application layer using a runtime layer between the infrastructure layer and the application layer (facilitate specification, configuration, orchestration, deployment, and management of composable distributed systems – Abstract. PaaS providers typically manage the platform (infrastructure and software stack), while the application run-time/execution environment may be user-managed – See paragraphs [0030-0031]. Cloud adapters, which may run on pilot cluster 259 and/or be invoked by pilot cluster 279 (e.g. via application programming interfaces (APIs)) may be used to build cloud specific cluster images for the specified cloud(s) (e.g. in system composition specification S 150) – See paragraph [0173]), the method comprising: receiving, by a convergence tool (a declarative implementation of the composable distributed system may ensure that a system converges – See paragraph [0042]. Examiner respectfully notes that the convergence tool reads on the declarative implementation/model – See paragraph [0060]. Receive system composition specification S 150 and/or information to obtain system composition specification S 150 – See paragraph [0114]), using an API of the application layer (lead node(s) 270.sub.i.sup.l for cluster T1 may be instantiated (e.g.
based on the cloud specific images) by appropriate cloud specific commands/APIs for the cloud provider 510 – See paragraphs [0173-0175]), configuration information for a service (Containers may take the form of a package (e.g. an image), which may include the application, application dependencies (e.g. services used by the application), the application's runtime environment (e.g. environment variables, privileges etc.), application libraries, other executables, and configuration files – See paragraph [0031]), the configuration information received using the runtime layer (PaaS providers typically manage the platform (infrastructure and software stack), while the application run-time/execution environment may be user-managed – See paragraphs [0030-0032]. A GUI may facilitate selection and/or configuration of components associated with a corresponding layer pack. For each layer, cluster profile layer selection menu 102 may facilitate selection of the corresponding available layer components or implementation choices or “Packs”. Packs represent available implementation choices for a corresponding layer. In some embodiments, (a) packs may be built and managed by providers and/or system operators (which are referred to herein as “default packs”), and/or (b) users may define, build and manage packs (which are referred to herein as “custom packs”). User selection of pack components/implementations may be facilitated by cluster profile layer selection menu 102, which may be provided using a GUI – See paragraph [0047]); accessing, by the convergence tool (a declarative implementation of the composable distributed system may ensure that a system converges – see paragraph [0042]. 
Examiner respectfully notes that the convergence tool reads on the declarative implementation/model – See paragraph [0060]), a database storing a plurality of cluster capabilities available for operating the service (Deployment refers to the process of enabling access to functionality provided by the distributed system (e.g. cloud infrastructure, cloud platform, applications, and/or services). Orchestration refers to the coordination of tasks associated with a distributed system/distributed applications including instantiation, task sequencing, task scheduling, task distribution, scaling, etc. – See paragraph [0038]. The components associated with each layer of cluster profile 104 may be selected and configured by a user (e.g. through a Graphical User Interface (GUI)) using cluster profile layer selection menu 102, and the components selected and/or configured may be stored in a file such as a JavaScript Object Notation (JSON) file… A GUI may facilitate selection and/or configuration of components associated with a corresponding layer pack. For each layer, cluster profile layer selection menu 102 may facilitate selection of the corresponding available layer components or implementation choices or “Packs”. Packs represent available implementation choices for a corresponding layer. In some embodiments, (a) packs may be built and managed by providers and/or system operators (which are referred to herein as “default packs”), and/or (b) users may define, build and manage packs (which are referred to herein as “custom packs”) – See paragraph [0047]), the database populated by the runtime layer maintaining a state of capabilities for each of the plurality of clusters (cluster profile selections and/or layer implementations that meet or exceed the specified security policy parameters may be displayed to the user for selection/configuration (e.g. during cluster configuration and/or in cluster profile layer selection menu 102), when composing the distributed system/applications (e.g.
using a UI). When DPE 202 is implemented as an SaaS, then policies and/or policy parameters that affect user menu choices or user cluster configuration options may be stored in a database (e.g. associated with DPE 202) – See paragraph [0100]. Manage the running distributed system to maintain consistency with a target state. In some embodiments, the DPE may use cluster profile B, the cluster specification C with associated parameters to build a cluster image for each cluster, which may be used to instantiate and deploy the cluster(s) – See paragraphs [0045-0046]. if a security policy specifies one or more parameters to be met (e.g. “security hardened”), then, cluster profile selections and/or layer implementations that meet or exceed the specified security policy parameters may be displayed to the user for selection/configuration (e.g. during cluster configuration and/or in cluster profile layer selection menu 102), when composing the distributed system/applications (e.g. using a UI). When DPE 202 is implemented as an SaaS, then policies and/or policy parameters that affect user menu choices or user cluster configuration options may be stored in a database (e.g. associated with DPE 202) – See paragraphs [0097-0100]); determining, by the convergence tool, from the configuration information, a plurality of deployment conditions for the service (a declarative model implementation may: (a) periodically monitor distributed system composition and/or system state during distributed system deployment, orchestration, run time, maintenance, and/or tear down (e.g. over the system lifecycle); (b) determine that a current system composition and/or current system state is not in compliance with a system composition specification and/or target system state specification, respectively – See paragraph [0060]. 
Based on stored user specified cluster and/or node pool configurations, hardware specifications associated with a node 270.sub.i.sup.w_k may be used to assign nodes to node pools/clusters and/or to designate one or more nodes as lead nodes for a cluster (e.g. in conformance with cluster specification 180/node pool related specification information 180-k) – See paragraphs [0115-0118]); instantiating, by the convergence tool, a convergence [[loop]], the convergence [[loop]] monitoring for each deployment condition of the plurality of deployment conditions in parallel (a declarative model implementation may: (a) periodically monitor distributed system composition and/or system state during distributed system deployment, orchestration, run time, maintenance, and/or tear down (e.g. over the system lifecycle); (b) determine that a current system composition and/or current system state is not in compliance with a system composition specification and/or target system state specification, respectively; and (c) effectuate remedial action to bring system composition into compliance with the system composition specification and/or the target system state specification, respectively – See paragraph [0060]. The configuration of node pools in a cluster may be performed in parallel. In some embodiments, when the distributed system includes a plurality of clusters, clusters may be configured in parallel – See paragraph [0121]. Multiple node pools for a cluster may be instantiated (e.g. in parallel) using the approach described in FIG. 4 – See paragraphs [0168 and 0205]); determining, by the convergence tool, that convergence has occurred based on each of the deployment conditions being met (the use of cluster profiles, which may be tested, published, and re-used, facilitates consistency, repeatability, and facilitates system wide maintenance (e.g. rollbacks/updates).
Further, by using a declarative model to realize the distributed system (as composed), compliance with the system composition specification (e.g. as outlined in the cluster profile and cluster specification) can be ensured – See paragraph [0060]); and in response to determining that convergence has occurred, instructing, by the convergence tool, using an API of the infrastructure layer, the infrastructure layer to implement the service (during system operation, the composition and state of the composable distributed system may be monitored and brought into compliance with the specified composition (e.g. as specified or updated) and/or declared state (e.g. as specified or updated) – See paragraph [0028]. Orchestration may involve various resources associated with the distributed system including infrastructure, software, and/or services. In general, application deployment may depend on various operational parameters including orchestration (e.g. for cloud-native applications), availability, resource management, persistence, performance, scalability, networking, security, monitoring, etc. These operational parameters may also apply to containers… to facilitate compliance, containers may be deployed along with VMs or over physical hardware – See paragraph [0032]. Cloud adapters, which may run on pilot cluster 259 and/or be invoked by pilot cluster 279 (e.g. via application programming interfaces (APIs)) may be used to build cloud specific cluster images for the specified cloud(s) (e.g. in system composition specification S 150) – See paragraph [0173]).

Fu discloses that a declarative model implementation may: (a) periodically monitor distributed system composition and/or system state during distributed system deployment, orchestration, run time, maintenance, and/or tear down (e.g. over the system lifecycle) – See paragraph [0060]. Fu does not, however, disclose a convergence loop.
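The declarative-model passages of Fu relied on above describe the familiar desired-state reconciliation pattern: compare the observed composition/state against the specification and effect remedial action until they match. A minimal illustrative sketch of that pattern (all names are hypothetical, not Fu's):

```python
def reconcile(desired, actual, apply_change):
    """Desired-state reconciliation: detect drift between the declared
    specification and the observed state, then apply remedial changes."""
    drift = {key: value for key, value in desired.items()
             if actual.get(key) != value}
    for key, value in drift.items():
        apply_change(key, value)   # remedial action toward the specification
        actual[key] = value
    # converged when the observed state matches the declaration
    return actual == desired
```

In a real system this body runs periodically (the "loop" that White supplies in the combination); a single pass, as here, only shows the compare-and-correct step.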
White discloses instantiating, by the convergence tool, a convergence loop, the convergence loop monitoring for each deployment condition of the plurality of deployment conditions in parallel (the order in which the NFs and other resources are to be instantiated is also very complex and includes many streams of tasks that are able to work in parallel and other streams that are not. The SCA creates an ordering which is based on constraints specified in the NS design – See paragraph [0043]. As the process proceeds, the SCA convergence loop 314 stores data in a working set in a data store within the SCA. The working set stores a record of where it is, i.e. what resources it has already created, updated, and/or deleted. The store is internal and allows the SCA to track which operations have happened, and what they have returned. The working set and the SCA desired state are compared from time to time to establish progress. The SCA uses the NS site convergence state 304 to report back what happened at the NS level, for example: “convergence is ongoing”, “convergence has completed”, or “convergence has failed” – See paragraph [0057]). White also discloses determining, by the convergence tool, that convergence has occurred based on each of the deployment conditions being met (determines a state of convergence to the required network via the SCA convergence loop 234.
The SCA convergence loop 234 loops back to the NS site convergence state 226 – See paragraph [0041]); and in response to determining that convergence has occurred, instructing, by the convergence tool, using an API of the infrastructure layer, the infrastructure layer to implement the service (executing a site convergence agent (SCA) convergence loop operable to, for each of the required resources, implement the required resource by calling for the required resource to be created, updated, or deleted; determining a convergence of the required resources to the sequence of the required resources by comparing the implemented required resources with the list of the required resources; and deploying the requested network service by instantiating the implemented required resources – See paragraph [0084]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate White's teaching into Fu's invention, because doing so would enhance Fu by enabling execution of a convergence loop operable, for each of the required resources, to implement that resource, and deployment of the requested service by instantiating the implemented required resources, as suggested by White (paragraph [0084]).

Regarding claim 2, the method of claim 1, Fu discloses wherein at least two of the plurality of clusters use different data schemas with respect to one another (the distributed system as composed includes clusters: Cluster 1 207-1 . . . Cluster-r 207-r . . . and Cluster N. Each cluster 207-i may be associated with a corresponding cluster specification C.sub.i 180-i and cluster profile B.sub.i 104-i – See paragraphs [0181-0185]).

Regarding claim 3, the method of claim 2, Fu discloses wherein the state of capabilities maintained by the database are abstracted to a normalized data schema (the composition and state of the composable distributed system may be monitored and brought into compliance with the specified composition (e.g.
as specified or updated) and/or declared state (e.g. as specified or updated) – See paragraph [0028]), and wherein data deployment of the service comprises performing a data mutation to the different data schemas on a per-cluster basis for the at least two of the plurality of clusters (specification of components/resources associated with each layer. In some embodiments, the specification of layers and/or the specification of components/resources associated with each layer may be cluster-specific. For example, a first cluster may be specified as being composed with a configuration (e.g. layers and layer components) that is different from the configuration associated with one or more second clusters – See paragraphs [0036-0037]).

Regarding claim 4, the method of claim 1, Fu discloses wherein the plurality of deployment conditions comprise interdependencies between at least two interdependent deployment conditions (pilot cluster 279 may include one or more pilot sub-clusters, which may coordinate to deploy the distributed system in accordance with system composition specification S 150 – See paragraphs [0128-0132]).

Regarding claim 5, the method of claim 1, Fu discloses wherein instructing the infrastructure layer to implement the service comprises automatically authorizing the infrastructure layer to deploy the service without prompting a human (cluster configuration related information 288 may include version numbers and/or version metadata (e.g. “latest”, “stable” etc.), credentials, and/or other parameters for configuration of a selected layer implementation. In some embodiments, adapters for various layers/implementations may be specified and stored as part of cluster configuration related information 288. Adapters may be managed using cluster profile management block 232. Adapters may facilitate installation and/or configuration of layer implementations on a composed distributed system – See paragraph [0095].
System composition specification S 150 and/or cluster specification C.sub.i 180 may indicate that the cluster is to be deployed on an Amazon AWS cloud, and the cloud credentials may be shared parameters among clusters C.sub.i. Cluster configuration Q.sub.i may include implementation details and/or other parameters specific to the cloud provider to deploy the cluster T.sub.i on AWS – See paragraph [0190]). Regarding claim 6, the method of claim 1, Fu discloses wherein the configuration information comprises one or more preconditions for deploying the service (containers may create additional layers of complexity. For example, applications may use multiple containers, which can potentially be deployed across multiple servers based on various system parameters. Thus, container operation and deployment can be complex. To ensure proper deployment, realize resource utilization efficiencies, and optimal run time performance, containers are orchestrated – See paragraph [0032]), the one or more preconditions being a subset of the plurality of deployment conditions for the service (application deployment may depend on various operational parameters including orchestration (e.g. for cloud-native applications), availability, resource management, persistence, performance, scalability, networking, security, monitoring, etc. These operational parameters may also apply to containers. Accordingly, the use and deployment of containers may also involve extensive customization to ensure compliance with operational parameters – See paragraph [0032]), the plurality of deployment conditions comprising protections for deploying the service (such operational parameters can lead to an increase in distributed application deployment complexity, and/or decrease resource utilization/performance, and/or result in deployment errors (e.g. due to the complexity) that may expose the application to unwanted risks (e.g. security risks) – See paragraph [0032]). 
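The precondition model described for claim 6 (configuration information carrying preconditions that are a subset of the broader deployment conditions, which act as protections) can be sketched roughly as follows. This is a minimal illustration under assumed names; `DeploymentCondition`, `is_precondition`, and `ready_to_deploy` are hypothetical and do not come from the application or the cited references.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DeploymentCondition:
    """One protection gating a service deployment (names are illustrative)."""
    name: str
    is_precondition: bool       # must hold *before* deploy, vs. monitored after
    check: Callable[[], bool]   # returns True when the condition is satisfied

def preconditions(conditions: List[DeploymentCondition]) -> List[DeploymentCondition]:
    """Preconditions are the subset of deployment conditions that must be
    verified before the service is deployed."""
    return [c for c in conditions if c.is_precondition]

def ready_to_deploy(conditions: List[DeploymentCondition]) -> bool:
    """Deploy only when every precondition in the subset passes."""
    return all(c.check() for c in preconditions(conditions))

conditions = [
    DeploymentCondition("replicas-available", True, lambda: True),
    DeploymentCondition("error-rate-low", False, lambda: True),  # monitored post-deploy
]
print(ready_to_deploy(conditions))  # True: the single precondition passes
```

The design point this illustrates is only the subset relationship the claim recites: every precondition is a deployment condition, but not every deployment condition is checked up front.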
Regarding claim 7, the method of claim 6, Fu discloses wherein instructing the infrastructure layer to implement the service comprises prompting a human with an alert that convergence has occurred based on the protections (configuration engines 281.sub.i.sup.w_k monitor and report a configuration and state of a tenant node 270.sub.i.sup.w_k, and provide cluster profile updates (e.g. received from an external entity) – See paragraph [0105]). White also discloses wherein instructing the infrastructure layer to implement the service comprises prompting a human with an alert that convergence has occurred based on the protections (the working set and the SCA desired state are compared from time to time to establish progress. The SCA uses the NS site convergence state 304 to report back what happened at the NS level, for example: “convergence is ongoing”, “convergence has completed”, or “convergence has failed” – See paragraph [0057]). Regarding claim 9, the method of claim 1, further comprising: Fu discloses deploying the service (automated deployment of end-to-end composable distributed systems – See paragraphs [0123-0125]); and while the service is deployed, monitoring the service for compliance with the plurality of deployment conditions (automated deployment of end-to-end composable distributed systems, while continuing to support orchestration, deployment, and scaling of applications, including containerized applications – See paragraphs [0123-0125]). 
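The convergence loop recited above (implement each required resource by creating, updating, or deleting it; determine convergence by comparing implemented resources against the required list; then report "convergence has completed" or "convergence has failed") can be sketched as follows. This is an assumed minimal implementation using dict-keyed resources, not the applicant's or the cited references' actual code.

```python
def converge(desired: dict, actual: dict, max_iterations: int = 10) -> str:
    """Drive `actual` toward `desired` by creating, updating, or deleting
    resources, then report a convergence status string (illustrative sketch)."""
    for _ in range(max_iterations):
        # Implement each required resource: create or update as needed.
        for name, spec in desired.items():
            if name not in actual:
                actual[name] = spec      # create
            elif actual[name] != spec:
                actual[name] = spec      # update
        # Delete resources no longer required.
        for name in list(actual):
            if name not in desired:
                del actual[name]
        # Determine convergence by comparing implemented vs. required.
        if actual == desired:
            return "convergence has completed"
    return "convergence has failed"

actual = {"lb": "v1", "stale": "v0"}
print(converge({"lb": "v2", "db": "v1"}, actual))  # convergence has completed
```

In a real convergence loop the actual state would be re-observed from the infrastructure between iterations (so transient failures can surface as "convergence is ongoing" or "convergence has failed"); here the in-memory dict stands in for that observed state.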
Regarding claim 10, the method of claim 9, further comprising: Fu discloses detecting that the service is not in compliance with at least one deployment condition of the plurality of deployment conditions (determine that a current system composition and/or current system state is not in compliance with a system composition specification and/or target system state specification, respectively; and (c) effectuate remedial action to bring system composition into compliance with the system composition specification and/or the target system state specification –See paragraph [0060]); and responsive to detecting that the service is not in compliance with at least one deployment condition of the plurality of deployment conditions, rolling back the service to an earlier version (the updates may be applied in a rolling fashion to bring the system in compliance with the new declared state (e.g. as reflected by cluster specification updates 278). For example, nodes 270 may be updated one at a time, so that other nodes can continue running thus ensuring system availability. Thus, the composable distributed system and applications executing on the composable distributed system may continue running as the system is updated. In some embodiments, cluster specification updates 278 may specify that upon detection of any failures, or errors, a rollback to a prior state (e.g. prior to the attempted update) should be initiated – See paragraph [0123]). Regarding claim 11, the method of claim 10, Fu discloses wherein the earlier version is a last known working version (cluster specification updates 278 may specify that upon detection of any failures, or errors, a rollback to a prior state (e.g. prior to the attempted update) should be initiated – See paragraphs [0123-0124]). Regarding claim 12. 
A non-transitory computer readable medium configured to store instructions, the instructions for orchestrating application protocol interfaces (APIs) of an infrastructure layer and an application layer using a runtime layer between the infrastructure layer and the application layer, the instructions, when executed by one or more processors, causing the processor to perform operations, the instructions comprising instructions to: Regarding claim 12, recites the same limitations as rejected claim 1 above. Regarding claim 13, recites the same limitations as rejected claim 2 above. Regarding claim 14, recites the same limitations as rejected claim 3 above. Regarding claim 15, recites the same limitations as rejected claim 4 above. Regarding claim 16, recites the same limitations as rejected claim 5 above. Regarding claim 17, recites the same limitations as rejected claim 6 above. Regarding claim 18, recites the same limitations as rejected claim 7 above. Regarding claim 20. A system comprising: a non-transitory medium comprising memory with instructions encoded thereon for orchestrating application protocol interfaces (APIs) of an infrastructure layer and an application layer using a runtime layer between the infrastructure layer and the application layer; and one or more processors that, when executing the instructions, are caused to perform operations comprising: Regarding claim 20, recites the same limitations as rejected claim 1 above. Claim(s) 8 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Fu and White as applied to claims 1 and 12 respectively above, and further in view of Jha et al. (US Pub. No. 2022/0318202 A1 – hereinafter Jha). Regarding claim 8, the method of claim 7, Jha discloses wherein the alert comprises a selectable option, and wherein the method further comprises responsive to detecting selection of the selectable option (The event type can then be used, as indicated by curved arrow 2006 in FIG. 
20A, to select a parsing function ƒ( ) for the event type that can be used to extract the high-information-content, variable values from the log/event message 2008… The event type, or ID, is used to select, as indicated by curved arrow 2024, a message-restoration function ƒ.sup.1( ) which can be applied 2026 to the expression 2018 obtained by the event-tuple-generation process to generate the original message 2028 – See paragraph [0090]. The Logstash tool also provides functionalities for transforming input log/event messages into event tuples. The regular-expression patterns, as mentioned above, can be specified by log/event-message-system users, such as administrative personnel, can be generated by user interfaces manipulated by log/event-message-system users, or may be automatically generated by machine-learning-based systems that automatically develop efficient compression methods based on analysis of log/event-message streams – See paragraph [0096]), deploying the service (provide services that are distributed across multiple clouds – See paragraph [0014]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Jha’s teaching into Fu’s and White’s inventions because doing so would enhance Fu and White by enabling population of content-pack information known to developers and users of the log/event-message system, which is updated by those developers and users as new content packs become available, as suggested by Jha (paragraph [0145]). Regarding claim 19, recites the same limitations as rejected claim 8 above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Bawa et al. (US Pub. No. 
2024/0272887 A1) discloses generating metadata for the selected extensible application pattern; generating a configuration for the selected extensible application pattern; creating execution isolations for deployments based on the metadata; applying security policies to the selected extensible application pattern; generating source code for the selected extensible application pattern; creating and linking the source code for the selected extensible application pattern to a continuous integration/continuous development pipelines; initializing the source code to an infrastructure; and deploying an application artifacts infrastructure, wherein the deployment achieves an immutable infrastructure – See Abstract and specification for more details. Saha et al. (US Pub. No. 2024/0303401 A1) discloses due to the complex nature of circuit design, not all runs through a particular stage using an EDA tool will converge to a solution. In such situations, a circuit engineer provides changes to the input design state data provided to the EDA tool before rerunning the tool to see if the changes will enable the tool to converge to an optimized solution. Thus, circuit design is an iterative process that can involve an EDA tool being executed multiple times (e.g., 5 times, 10 times, 15 times, 20 times, 30 times, etc.) for any given stage in the construction process – see paragraph [0025]. Krishnaumrthy et al. (US Pub. No. 2024/0022472 A1) discloses a stop condition may include: (1) a set number of iterations or attempts have been performed; (2) an amount of processing time has been reached; (3) convergence (e.g., the difference between consecutive iterations is less than a first threshold value); (4) divergence (e.g., the performance deteriorates); and (5) an acceptable outcome has been reached – See paragraph [0040]. Darji (US Pub. No. 
2023/0214246 A1) discloses Readers will appreciate that the various components described above may be grouped into one or more optimized computing packages as converged infrastructures. Such converged infrastructures may include pools of computers, storage and networking resources that can be shared by multiple applications and managed in a collective manner using policy-driven processes. Such converged infrastructures may be implemented with a converged infrastructure reference architecture, with standalone appliances, with a software driven hyper-converged approach (e.g., hyper-converged infrastructures), or in other ways – See paragraph [0156]. Brebner (US Pub. No. 2020/0285977 A1) discloses the simulation module 1206 may perform conformance simulation to generate an abstract representation that converges to one or more fitness criteria – See paragraphs [0508-0509]. Schibler et al. (US Pub. No. 2019/0312800 A1) discloses this provides the ability for a user or system to configure a weighted degree of preference between performance and cost (e.g., using a slider in a UI). The general form of this function allows for separately normalizing performance and cost, normalizing a particular score to a particular value (e.g., normalize such that the score of the first runtime configuration is 0), and scaling the exponential scores into a usable/fixed range – See paragraph [0030]. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MONGBAO NGUYEN whose telephone number is (571)270-7180. The examiner can normally be reached Monday-Friday 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S. Sough can be reached at 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MONGBAO NGUYEN/ Examiner, Art Unit 2192

Prosecution Timeline

Feb 06, 2024: Application Filed
Feb 07, 2026: Non-Final Rejection under §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596626: HIGH-SPEED DEBUG PORT TRACE CIRCUIT (2y 5m to grant; granted Apr 07, 2026)
Patent 12596639: SELF-GENERATING ROBOTIC PROCESS ENVIRONMENTS (2y 5m to grant; granted Apr 07, 2026)
Patent 12585442: Display Interface Layout Method and Electronic Device (2y 5m to grant; granted Mar 24, 2026)
Patent 12578961: DYNAMIC REVIEW OF SOFTWARE UPDATES AFTER PULL REQUESTS (2y 5m to grant; granted Mar 17, 2026)
Patent 12572344: Cloud-Phone-Based Application Installation Method, Cloud Platform, and Related Device (2y 5m to grant; granted Mar 10, 2026)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+43.1%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 562 resolved cases by this examiner. Grant probability derived from career allow rate.
