Prosecution Insights
Last updated: April 19, 2026
Application No. 18/356,551

ENABLING LIFECYCLE MANAGEMENT SERVICES FOR HETEROGENEOUS CLOUD RESOURCES USING BLUEPRINTS

Non-Final OA (§103)
Filed: Jul 21, 2023
Examiner: CAO, DIEM K
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80%, above average (531 granted / 663 resolved; +25.1% vs TC avg)
Interview Lift: +19.4% across resolved cases with an interview (a strong lift)
Typical Timeline: 3y 7m avg prosecution; 29 applications currently pending
Career History: 692 total applications across all art units
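The headline allow rate can be sanity-checked from the raw counts above. A minimal sketch; the implied Tech Center average is back-derived here by treating the reported "+25.1% vs TC avg" as percentage points, which is an assumption of this note, not a figure stated in the report:

```python
# Sanity-check the examiner statistics quoted in this report.
granted = 531
resolved = 663

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")    # ~80.1%, reported as 80%

# Assuming "+25.1% vs TC avg" means percentage points, the implied
# Tech Center 2100 average allow rate would be roughly:
implied_tc_avg = allow_rate - 0.251
print(f"Implied TC average: {implied_tc_avg:.1%}")
```

The 80.1% result matches the rounded 80% headline, lending some confidence that "resolved" (not "total applications") is the denominator used throughout.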

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 20.5% (-19.5% vs TC avg)

Tech Center averages are estimates; figures based on career data from 663 resolved cases.

Office Action (§103)
DETAILED ACTION

Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Aronov et al. (US 2021/0367846 A1) in view of Maes et al. (US 2016/0239595 A1).

As to claim 1, Aronov teaches a method for provisioning and managing resources on resource providers, comprising: obtaining, by a blueprint orchestrator, a blueprint generation request associated with a plurality of resource providers (RPs) from a user (“during the first communication 102, the user device 101 initiates a request to provision a resource on the cloud resource(s) 105.
Moreover, during the second communication 104, the user device 101 communicates the request to provision to a resource on the cloud resource(s) 105 to a cloud management platform 103. Typically, the request to provision a resource provided by the user device 101 includes details on the blueprint that specify the desired resource to be provisioned and/or resource configurations”; paragraphs [0021], [0024], [0029] and “Provisioning and maintenance of resources are automated through creation of blueprints (e.g., models) that include components of requested services along with their relationships”; paragraph [0018]); in response to the determination (During a third communication 106, the cloud management platform 103 decides, based on a set of rules and/or the obtained blueprint; paragraph [0021]): obtaining resource information associated with the plurality of RPs (the cloud management platform 103 decides, based on a set of rules and/or the obtained blueprint, the cloud resource(s) 105 to be utilized. In response, the cloud management platform 103 provides a request to the cloud resource(s) 105 in order to provision a resource on the cloud resource(s) 105 during the fourth communication 108. In some implementations, the cloud management platform 103 may execute a series of requests to the cloud resource(s) 105 in order to provision a resource on the cloud resource(s) 105; paragraphs [0021], [0037], [0047]); generating a blueprint based on the resource information and a software repository associated with the plurality of RPs (“Having obtained the requests from the cloud management platform 103, the cloud resource(s) 105 then provisions the resource during the fifth communication 110. Once provisioned, the cloud resource(s) 105 communicate(s) with the cloud management platform 103 to indicate the resource has been provisioned during the sixth communication 112”; paragraph [0021] and “Such a later time may be determined when the provisioning request 203 includes a satisfactory blueprint and/or no allocation flag 205. As such, the selected one(s) of the cloud resource(s) 208a, 208b, 208c are to be fully provisioned. (i.e., the blueprint is created)”; paragraph [0051]).

Aronov does not teach in response to obtaining the blueprint generation request: making a determination that the blueprint generation request is associated with lifecycle management (LCM) services; in response to the determination: obtaining resource information associated with the LCM services; generating a blueprint based on the resource information and a software repository associated with the plurality of RPs; composing RP resources and LCM resources on the plurality of RPs based on a generic composition request using the blueprint; and performing LCM operations on the RP resources using the LCM resources and the blueprint.

However, Maes teaches the created blueprint is associated with lifecycle management (LCM) services (see abstract and “The different elements of infrastructure, platforms, applications, and services are described in the context of lifecycle management topologies. In the topologies, elements in each layer are defined as nodes. Nodes may be defined by a data model that defines what the nodes are and how to manage them, or using metadata that decorates the nodes in the topology or is associated or referred to by a metadata document. In general, expressing topologies using metadata amounts to explicitly or implicitly decorating the nodes with lifecycle management logic.
The lifecycle management logic may comprise a number of workflows that are combinations of conditions and actions associated with each management operation such as provisioning, managing, updating, retiring, among others, and properties for these operations”; paragraphs [0021], [0039], [0045], [0059]), obtaining resource information associated with the LCM services (the topology (302) with its associated policies (303) may be an input (501) to a provisioning policy engine (502). In this example, the blueprints (100, 1000) are the input at block 501. A policy provisioning engine (502) may be a stand alone device or incorporated into a device of FIG. 2A such as, for example, the resource offering manager (308). The policy provisioning engine (502) may obtain a number of provisioning policies from a resource provider called resource provider policies (PR); paragraphs [0032] and [0016]), composing RP resources and LCM resources on the plurality of RPs based on a generic composition request using the blueprint (The topology-derived blueprints are modified per the received provisioning policies (308-1) by the provisioning policy engine (502) as indicated by arrow 507, and sent to an interpreter (503). The interpreter (503) is any hardware device or a combination of hardware and software that interprets the provisioning policies to create an execution plan as indicated by arrow 508. The result is then interpreted and converted into an execution plan (508) that comprises a workflow or sequence of serial and/or parallel scripts in order to create an instance of the topology; paragraph [0033] and “With the above-described sequence based topology, an execution plan (508) may be represented as a blueprint. Conversely, a blueprint may be expressed as an execution plan (508). A blueprint with nodes expanded by policies (303) and LCMAs (304) may be similarly processed, instead, in a manner similar to the processing of an infrastructure topology; paragraph [0034]), and performing LCM operations on the RP resources using the LCM resources and the blueprint (Assuming the workflow or sequence of serial and/or parallel scripts is executable, which it should be in the case of an architecture descriptive topology, the actions associated with the workflow or sequence of serial and/or parallel scripts are executed by the LCM engine (311); paragraph [0033] and “LCMAs are expressed as a number of application programming interfaces (APIs), wherein the LCMAs are called during execution of the topology”; paragraph [0072]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Maes to the system of Aronov because Maes teaches a method that avoids mistakes and irreconcilable issues between the application and the underlying infrastructure by providing a cloud service manager through which cloud services provided to users over a network may be designed, provisioned, deployed, and managed (paragraphs [0014]-[0015]).
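For readers mapping the claim-1 language at issue onto an implementation, the sequence of steps the examiner walks through (detect an LCM-service request, gather resource information, generate a blueprint, compose resources, then drive LCM operations) can be sketched in Python. All class and method names below are hypothetical illustrations, drawn from neither reference nor the application:

```python
# Hypothetical sketch of the claim-1 flow: a blueprint orchestrator that
# detects LCM-service requests, builds a blueprint from resource
# information and a software repository, and composes RP resources.
from dataclasses import dataclass

@dataclass
class Blueprint:
    resource_info: dict       # per-RP resource information
    lcm_info: dict            # LCM-service resource information
    request_mappings: dict    # generic -> RP-native request mappings

class BlueprintOrchestrator:
    def __init__(self, resource_providers, software_repository):
        self.rps = resource_providers    # the "plurality of RPs"
        self.repo = software_repository  # holds RP-native LCM requests

    def handle(self, request):
        # "making a determination that the blueprint generation request
        #  is associated with lifecycle management (LCM) services"
        if not request.get("lcm_services"):
            return None
        # "obtaining resource information associated with the LCM services"
        resource_info = {rp.name: rp.describe() for rp in self.rps}
        lcm_info = {rp.name: rp.describe_lcm() for rp in self.rps}
        # "generating a blueprint based on the resource information and a
        #  software repository associated with the plurality of RPs"
        bp = Blueprint(resource_info, lcm_info, self.repo.mappings())
        # "composing RP resources and LCM resources on the plurality of
        #  RPs ... using the blueprint"
        for rp in self.rps:
            rp.compose(bp)
        return bp
```

The point of contention in the rejection maps onto the `if not request.get("lcm_services")` branch: the examiner relies on Maes, not Aronov, for everything conditioned on that LCM determination.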
As to claim 2, Aronov as modified by Maes teaches the method of claim 1, wherein the blueprint comprises: first RP resource information of the resource information associated with a first RP resource of a first RP of the plurality of RPs (see Aronov: resource data and/or constraints of resources; paragraph [0040]); second RP resource information of the resource information associated with a second RP resource of a second RP of the plurality of RPs (see Aronov: resource data and/or constraints of resources; paragraph [0040]); first LCM resource information of the resource information associated with a first LCM resource of the first RP (see Maes: metadata of the node of plurality of nodes … lifecycle management logic; paragraphs [0031] and [0114]); second LCM resource information of the resource information associated with a second LCM resource of the second RP (see Maes: metadata of the node of plurality of nodes … lifecycle management logic; paragraphs [0031], [0077] and [0114]); and request mappings associated with the plurality of RPs (see Maes: Blueprints describe services in terms of the collections of workflows that are to be executed to provision or manage all the components that make up the service in order to perform a particular lifecycle management action. Some of the functions of the workflows defined by blueprints are actual life cycle management actions that are then performed as calls to a resource provider. The resource provider converts the calls into well formed and exchanged instructions specific to the particular resource or instance offered by a resource provider; paragraph [0016]).

As to claim 3, Aronov as modified by Maes teaches the method of claim 2, wherein the request mappings comprise: a mapping of generic LCM requests to first RP native LCM requests associated with the first RP; and a mapping of the generic LCM requests to second RP native LCM requests associated with the second RP (see Maes: Blueprints describe services in terms of the collections of workflows that are to be executed to provision or manage all the components that make up the service in order to perform a particular lifecycle management action. Some of the functions of the workflows defined by blueprints are actual life cycle management actions that are then performed as calls to a resource provider. The resource provider converts the calls into well formed and exchanged instructions specific to the particular resource or instance offered by a resource provider; paragraph [0016] and “service providers”; paragraph [0017]).

As to claim 4, Aronov as modified by Maes teaches the method of claim 3, wherein the first RP native LCM requests are different from the second RP native LCM requests (see Maes: there are a plurality of different service providers, thus, the first RP native LCM requests are different from the second RP native LCM requests; paragraphs [0016]-[0017]).

As to claim 5, Aronov as modified by Maes teaches the method of claim 4, wherein the software repository comprises the first RP native LCM requests and the second RP native LCM request (see Maes: A number of policies are associated with a number of nodes within the topology (302) formed from the blueprint. In one example, the policies are added as an additional node within the topology. A number of LCMAs may be associated with a number of nodes within the topology (302) formed from the blueprint. LCMAs may be linked to a number of resource providers using the policies. The association of the policies (303) and LCMAs (104) with the topology (302) may be performed as described below.
In this manner, even though a number of policies and LCMAs may be derived from the blueprint by way of derivation of the containment relationships, the temporal dependency relationships, and the additional relationships from the blueprint, a number of additional policies and LCMAs may be added to the topology (302) by, for example, the topology designer (301) and the resource offering manager (108) in order to create a topology (302) that, when instantiated, will perform as desired or expected; paragraph [0059]).

As to claim 6, Aronov as modified by Maes does not clearly teach wherein performing LCM operations on the RP resources using the blueprint comprises: obtaining a generic LCM request associated with the first RP resource and the second RP resource from the user; mapping the generic LCM request to a first RP native LCM request of the first RP native requests using the request mappings and the LCM request type; providing the first RP native LCM request to the first RP; mapping the generic LCM request to a second RP native LCM request of the second RP native requests using the request mappings; and sending the second RP native LCM request to the second RP. However, Maes teaches blueprints describe services in terms of the collections of workflows that are to be executed to provision or manage all the components that make up the service in order to perform a particular lifecycle management action. Some of the functions of the workflows defined by blueprints are actual life cycle management actions that are then performed as calls to a resource provider. The resource provider converts the calls into well formed and exchanged instructions specific to the particular resource or instance offered by a resource provider (paragraph [0016]), and “Each object (102-1, 102-2, 102-3, 102-4, 102-5, 102-6, 102-7, 102-8, 102-9, 102-10, 102-11, 102-12) in the blueprint may be associated with action workflows that call resource providers.” (paragraph [0017]). Thus, the LCM actions/requests are transformed into the resource provider’s format before they can be performed/executed. Therefore, the system of Aronov as modified by Maes teaches the limitations of claim 6.

As to claim 7, Aronov as modified by Maes teaches the method of claim 6, wherein: sending the first RP native LCM request to the first RP causes the first LCM resource to perform a LCM operation on the first RP resource; and sending the second RP native LCM request to the second RP causes the second LCM resource to perform the LCM operation on the second RP resource (see Maes: Blueprints describe services in terms of the collections of workflows that are to be executed to provision or manage all the components that make up the service in order to perform a particular lifecycle management action. Some of the functions of the workflows defined by blueprints are actual life cycle management actions that are then performed as calls to a resource provider. The resource provider converts the calls into well formed and exchanged instructions specific to the particular resource or instance offered by a resource provider; paragraph [0016] and “service providers”; paragraph [0017] and “In one example, the LCMAs are associated with the aspects of the topology by default by virtue of what computing device the node or nodes (302-1, 302-2, 302-3, 302-4, 302-5, 302-6, 302-7) represent. In another example, the LCMAs are associated with the aspects of the topology by explicitly providing a number of functions, F_Action, that define how to select a resource provider to implement the action based on the policies associated with the aspects of the topology and the policies of the different relevant resource providers. These functions define how a resource provider is selected to implement the action based on the policies associated with the aspect of the topology and the policies of the different relevant resource providers.”; paragraph [0073]).
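The "request mappings" of claims 3-6, in which one generic LCM request fans out to a different native request format per resource provider, resemble a conventional adapter table. A minimal sketch under assumed names; the provider keys and native request strings below are invented for illustration and appear in neither Aronov, Maes, nor the application:

```python
# Hypothetical request-mapping table: one generic LCM request type maps
# to a different native request per resource provider (claims 3-4 require
# that the first and second RPs' native requests differ).
REQUEST_MAPPINGS = {
    "upgrade": {"rp_vmware": "vcenter.UpgradeVM",
                "rp_k8s":    "apps/v1.Deployment.patch"},
    "telemetry": {"rp_vmware": "vcenter.CollectMetrics",
                  "rp_k8s":    "metrics.k8s.io/v1beta1.query"},
}

def dispatch(generic_request: str, target_rps: list[str]) -> dict[str, str]:
    """Map a generic LCM request to each target RP's native request."""
    mapping = REQUEST_MAPPINGS[generic_request]
    return {rp: mapping[rp] for rp in target_rps}
```

Under this reading, the claim-6 steps reduce to one `dispatch("upgrade", ["rp_vmware", "rp_k8s"])` call followed by sending each native request to its provider, which is why the examiner treats Maes's call-conversion passage (paragraph [0016]) as covering them.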
As to claim 8, Aronov as modified by Maes teaches the method of claim 7, wherein the LCM operation comprises generating telemetry data associated with the first RP resource and sending the telemetry data to a telemetry collector (see Maes: monitoring different types of data/information/usage of nodes; paragraphs [0085]-[0090]).

As to claim 9, Aronov as modified by Maes teaches the method of claim 7, wherein the LCM operation comprises upgrading the first RP resource (see Maes: updating; paragraph [0031]).

As to claim 10, Aronov as modified by Maes teaches the method of claim 7, wherein the LCM operation comprises changing security information associated with the first RP resource (see Maes: “when a security threat is detected by a monitoring system (313), a remediation option may comprise making changes to a number of access control policies”; paragraph [0077]).

As to claim 11, Aronov teaches a system for provisioning and managing resources on resource providers (a system; abstract), comprising: a resource provider (RP) environment (Cloud resources; Fig. 1); and a blueprint orchestrator, comprising a processor (processor 912; paragraph [0088]) and memory (memory; paragraph [0089]), and configured to (Cloud management platform; Fig. 1 and paragraphs [0087]-[0088]): obtain a blueprint generation request associated with a plurality of resource providers (RPs) from a user (“during the first communication 102, the user device 101 initiates a request to provision a resource on the cloud resource(s) 105. Moreover, during the second communication 104, the user device 101 communicates the request to provision to a resource on the cloud resource(s) 105 to a cloud management platform 103.
Typically, the request to provision a resource provided by the user device 101 includes details on the blueprint that specify the desired resource to be provisioned and/or resource configurations”; paragraphs [0021], [0024], [0029] and “Provisioning and maintenance of resources are automated through creation of blueprints (e.g., models) that include components of requested services along with their relationships”; paragraph [0018]); in response to the determination (During a third communication 106, the cloud management platform 103 decides, based on a set of rules and/or the obtained blueprint; paragraph [0021]): obtaining resource information associated with the plurality of RPs (the cloud management platform 103 decides, based on a set of rules and/or the obtained blueprint, the cloud resource(s) 105 to be utilized. In response, the cloud management platform 103 provides a request to the cloud resource(s) 105 in order to provision a resource on the cloud resource(s) 105 during the fourth communication 108. In some implementations, the cloud management platform 103 may execute a series of requests to the cloud resource(s) 105 in order to provision a resource on the cloud resource(s) 105; paragraphs [0021], [0037], [0047]); generating a blueprint based on the resource information and a software repository associated with the plurality of RPs (“Having obtained the requests from the cloud management platform 103, the cloud resource(s) 105 then provisions the resource during the fifth communication 110. Once provisioned, the cloud resource(s) 105 communicate(s) with the cloud management platform 103 to indicate the resource has been provisioned during the sixth communication 112”; paragraph [0021] and “Such a later time may be determined when the provisioning request 203 includes a satisfactory blueprint and/or no allocation flag 205. As such, the selected one(s) of the cloud resource(s) 208a, 208b, 208c are to be fully provisioned. (i.e., the blueprint is created)”; paragraph [0051]).

Aronov does not teach in response to obtaining the blueprint generation request: making a determination that the blueprint generation request is associated with lifecycle management (LCM) services; in response to the determination: obtaining resource information associated with the LCM services; generating a blueprint based on the resource information and a software repository associated with the plurality of RPs; composing RP resources and LCM resources on the plurality of RPs based on a generic composition request using the blueprint; and performing LCM operations on the RP resources using the LCM resources and the blueprint.

However, Maes teaches the created blueprint is associated with lifecycle management (LCM) services (see abstract and “The different elements of infrastructure, platforms, applications, and services are described in the context of lifecycle management topologies. In the topologies, elements in each layer are defined as nodes. Nodes may be defined by a data model that defines what the nodes are and how to manage them, or using metadata that decorates the nodes in the topology or is associated or referred to by a metadata document. In general, expressing topologies using metadata amounts to explicitly or implicitly decorating the nodes with lifecycle management logic.
The lifecycle management logic may comprise a number of workflows that are combinations of conditions and actions associated with each management operation such as provisioning, managing, updating, retiring, among others, and properties for these operations”; paragraphs [0021], [0039], [0045], [0059]), obtaining resource information associated with the LCM services (the topology (302) with its associated policies (303) may be an input (501) to a provisioning policy engine (502). In this example, the blueprints (100, 1000) are the input at block 501. A policy provisioning engine (502) may be a stand alone device or incorporated into a device of FIG. 2A such as, for example, the resource offering manager (308). The policy provisioning engine (502) may obtain a number of provisioning policies from a resource provider called resource provider policies (PR); paragraphs [0032] and [0016]), composing RP resources and LCM resources on the plurality of RPs based on a generic composition request using the blueprint (The topology-derived blueprints are modified per the received provisioning policies (308-1) by the provisioning policy engine (502) as indicated by arrow 507, and sent to an interpreter (503). The interpreter (503) is any hardware device or a combination of hardware and software that interprets the provisioning policies to create an execution plan as indicated by arrow 508. The result is then interpreted and converted into an execution plan (508) that comprises a workflow or sequence of serial and/or parallel scripts in order to create an instance of the topology; paragraph [0033] and “With the above-described sequence based topology, an execution plan (508) may be represented as a blueprint. Conversely, a blueprint may be expressed as an execution plan (508). A blueprint with nodes expanded by policies (303) and LCMAs (304) may be similarly processed, instead, in a manner similar to the processing of an infrastructure topology; paragraph [0034]), and performing LCM operations on the RP resources using the LCM resources and the blueprint (Assuming the workflow or sequence of serial and/or parallel scripts is executable, which it should be in the case of an architecture descriptive topology, the actions associated with the workflow or sequence of serial and/or parallel scripts are executed by the LCM engine (311); paragraph [0033] and “LCMAs are expressed as a number of application programming interfaces (APIs), wherein the LCMAs are called during execution of the topology”; paragraph [0072]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Maes to the system of Aronov because Maes teaches a method that avoids mistakes and irreconcilable issues between the application and the underlying infrastructure by providing a cloud service manager through which cloud services provided to users over a network may be designed, provisioned, deployed, and managed (paragraphs [0014]-[0015]).

As to claim 12, see rejection of claim 2 above.
As to claim 13, see rejection of claim 3 above.
As to claim 14, see rejection of claim 4 above.
As to claim 15, see rejection of claim 5 above.
As to claim 16, Aronov teaches a non-transitory computer readable medium comprising computer readable program code, which when executed by a computer processor enables the computer processor to perform a method for provisioning and managing resources on resource providers (a non-transitory computer readable storage medium; paragraph [0095] and claim 8), the method comprising: obtaining, by a blueprint orchestrator, a blueprint generation request associated with a plurality of resource providers (RPs) from a user (“during the first communication 102, the user device 101 initiates a request to provision a resource on the cloud resource(s) 105. Moreover, during the second communication 104, the user device 101 communicates the request to provision to a resource on the cloud resource(s) 105 to a cloud management platform 103. Typically, the request to provision a resource provided by the user device 101 includes details on the blueprint that specify the desired resource to be provisioned and/or resource configurations”; paragraphs [0021], [0024], [0029] and “Provisioning and maintenance of resources are automated through creation of blueprints (e.g., models) that include components of requested services along with their relationships”; paragraph [0018]); in response to the determination (During a third communication 106, the cloud management platform 103 decides, based on a set of rules and/or the obtained blueprint; paragraph [0021]): obtaining resource information associated with the plurality of RPs (the cloud management platform 103 decides, based on a set of rules and/or the obtained blueprint, the cloud resource(s) 105 to be utilized. In response, the cloud management platform 103 provides a request to the cloud resource(s) 105 in order to provision a resource on the cloud resource(s) 105 during the fourth communication 108. 
In some implementations, the cloud management platform 103 may execute a series of requests to the cloud resource(s) 105 in order to provision a resource on the cloud resource(s) 105; paragraphs [0021], [0037], [0047]); generating a blueprint based on the resource information and a software repository associated with the plurality of RPs (“Having obtained the requests from the cloud management platform 103, the cloud resource(s) 105 then provisions the resource during the fifth communication 110. Once provisioned, the cloud resource(s) 105 communicate(s) with the cloud management platform 103 to indicate the resource has been provisioned during the sixth communication 112”; paragraph [0021] and “Such a later time may be determined when the provisioning request 203 includes a satisfactory blueprint and/or no allocation flag 205. As such, the selected one(s) of the cloud resource(s) 208a, 208b, 208c are to be fully provisioned. (i.e., the blueprint is created)”; paragraph [0051]).

Aronov does not teach in response to obtaining the blueprint generation request: making a determination that the blueprint generation request is associated with lifecycle management (LCM) services; in response to the determination: obtaining resource information associated with the LCM services; generating a blueprint based on the resource information and a software repository associated with the plurality of RPs; composing RP resources and LCM resources on the plurality of RPs based on a generic composition request using the blueprint; and performing LCM operations on the RP resources using the LCM resources and the blueprint.
However, Maes teaches the created blueprint is associated with lifecycle management (LCM) services (see abstract and “The different elements of infrastructure, platforms, applications, and services are described in the context of lifecycle management topologies. In the topologies, elements in each layer are defined as nodes. Nodes may be defined by a data model that defines what the nodes are and how to manage them, or using metadata that decorates the nodes in the topology or is associated or referred to by a metadata document. In general, expressing topologies using metadata amounts to explicitly or implicitly decorating the nodes with lifecycle management logic. The lifecycle management logic may comprise a number of workflows that are combinations of conditions and actions associated with each management operation such as provisioning, managing, updating, retiring, among others, and properties for these operations”; paragraphs [0021], [0039], [0045], [0059]), obtaining resource information associated with the LCM services (the topology (302) with its associated policies (303) may be an input (501) to a provisioning policy engine (502). In this example, the blueprints (100, 1000) are the input at block 501. A policy provisioning engine (502) may be a stand alone device or incorporated into a device of FIG. 2A such as, for example, the resource offering manager (308). The policy provisioning engine (502) may obtain a number of provisioning policies from a resource provider called resource provider policies (PR); paragraphs [0032] and [0016]), composing RP resources and LCM resources on the plurality of RPs based on a generic composition request using the blueprint (The topology-derived blueprints are modified per the received provisioning policies (308-1) by the provisioning policy engine (502) as indicated by arrow 507, and sent to an interpreter (503). 
The interpreter (503) is any hardware device or a combination of hardware and software that interprets the provisioning policies to create an execution plan as indicated by arrow 508. The result is then interpreted and converted into an execution plan (508) that comprises a workflow or sequence of serial and/or parallel scripts in order to create an instance of the topology; paragraph [0033] and “With the above-described sequence based topology, an execution plan (508) may be represented as a blueprint. Conversely, a blueprint may be expressed as an execution plan (508). A blueprint with nodes expanded by policies (303) and LCMAs (304) may be similarly processed, instead, in a manner similar to the processing of an infrastructure topology; paragraph [0034]), and performing LCM operations on the RP resources using the LCM resources and the blueprint (Assuming the workflow or sequence of serial and/or parallel scripts is executable, which it should be in the case of an architecture descriptive topology, the actions associated with the workflow or sequence of serial and/or parallel scripts are executed by the LCM engine (311); paragraph [0033] and “LCMAs are expressed as a number of application programming interfaces (APIs), wherein the LCMAs are called during execution of the topology”; paragraph [0072]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Maes to the system of Aronov because Maes teaches a method that avoids mistakes and irreconcilable issues between the application and the underlying infrastructure by providing a cloud service manager through which cloud services provided to users over a network may be designed, provisioned, deployed, and managed (paragraphs [0014]-[0015]).

As to claim 17, see rejection of claim 2 above.
As to claim 18, see rejection of claim 3 above.
As to claim 19, see rejection of claim 4 above.
As to claim 20, see rejection of claim 5 above.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Seago et al. (US 2012/0005359 A1) teaches a system and method for aggregating cloud system resources by providing mapping from a generic to a specific format.
Stefanov et al. (US 2019/0068458 A1) teaches a method and apparatus to generate user-interface virtual resource provisioning request forms.
Dasgupta et al. (US 2021/0243088 A1) teaches an infrastructure resource simulation mechanism.
Opsenica et al. (US 2021/0232438 A1) teaches a serverless lifecycle management dispatcher.
McPeak et al. (US 2024/0036749 A1) teaches a machine learning approach to blueprint selection for resource generation with guardrail enforcement.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIEM K CAO, whose telephone number is (571) 272-3760. The examiner can normally be reached Monday-Friday, 8:00 am-4:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/DIEM K CAO/
Primary Examiner, Art Unit 2196
December 23, 2025

Prosecution Timeline

Jul 21, 2023
Application Filed
Dec 23, 2025
Non-Final Rejection — §103
Mar 19, 2026
Interview Requested
Mar 26, 2026
Examiner Interview Summary
Mar 26, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596576
TECHNIQUES TO EXPOSE APPLICATION TELEMETRY IN A VIRTUALIZED EXECUTION ENVIRONMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12596585
DATA PROCESSING AND MANAGEMENT
2y 5m to grant Granted Apr 07, 2026
Patent 12561178
SYSTEM AND METHOD FOR MANAGING DATA RETENTION IN DISTRIBUTED SYSTEMS
2y 5m to grant Granted Feb 24, 2026
Patent 12547445
AUTO TIME OPTIMIZATION FOR MIGRATION OF APPLICATIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12541396
RESOURCE ALLOCATION METHOD AND SYSTEM AFTER SYSTEM RESTART AND RELATED COMPONENT
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+19.4%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 663 resolved cases by this examiner. Grant probability derived from career allow rate.
