DETAILED ACTION
This Office action is in response to the application filed on 11/4/2022.
Claims 1–20 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 5–8, 10, 11, 14–18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Vegara et al. (US 20230035486, hereinafter Vegara), in view of Acharya et al. (US 20200042348, hereinafter Acharya).
As per claim 1, Vegara discloses: A method, comprising:
storing, by a processor set, results of a task in a pipeline running in a computing environment; generating, by the processor set, a check point for the task; (Vegara [0034]: “The system accesses a state store describing a previous execution of the pipeline. The state store maps a context for a stage to an execution status of the stage. The context represents inputs of the stage and the execution status indicates whether the stage successfully executed in the previous execution of the pipeline.”)
and re-executing, by the processor set, the pipeline from the check point for the task. (Vegara [0034]: “The system selects a stage. The system determines a context for the stage based on inputs of the stage for the subsequent execution. The system accesses an execution status of the stage from the state store. The system determines based on the execution status of the stage, whether to select the stage as a candidate stage for the subsequent execution of the pipeline or whether to skip the stage during the subsequent execution of the pipeline.”; [0154]: “The subsequent execution of the pipeline is performed such that the system skips the execution of stages that executed successfully. As a result, the system executes only a subset of the stages of the pipeline in the subsequent execution, the subset including stages that did not complete successful execution in the previous run of the pipeline.”)
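For illustration, the mechanism quoted above (a state store mapping a stage's input context to its execution status, with previously successful stages skipped on re-execution) can be sketched as follows. The code and all of its names are illustrative only and do not appear in the cited references:

```python
# Sketch of checkpoint-based pipeline re-execution: a state store maps each
# task's input context to its last execution status, and a subsequent run
# skips tasks whose context already executed successfully.

def run_pipeline(tasks, state_store):
    """tasks: list of (name, inputs, fn); state_store: dict context -> status."""
    for name, inputs, fn in tasks:
        context = (name, tuple(sorted(inputs.items())))  # context from inputs
        if state_store.get(context) == "success":
            continue  # check point hit: skip stage that already succeeded
        try:
            fn(inputs)
            state_store[context] = "success"  # record check point
        except Exception:
            state_store[context] = "failed"
            raise  # stop; a later run resumes from this stage

state = {}
log = []
tasks = [
    ("build", {"src": "app"}, lambda i: log.append("build")),
    ("test", {"suite": "unit"}, lambda i: log.append("test")),
]
run_pipeline(tasks, state)  # first run executes both stages
run_pipeline(tasks, state)  # second run skips both (contexts unchanged)
# log == ["build", "test"]
```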
Vegara does not explicitly disclose:
Wherein the task in the pipeline is executed in a container
Acharya teaches:
Wherein the task in the pipeline is executed in a container. (Acharya [0027])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Acharya into that of Vegara so that the task in the pipeline is executed in a container. Vegara [0090] teaches that the pipeline stages are executed on platforms of a cloud computing environment. Acharya [0017] teaches that the cloud nodes (platforms) may be containers used to execute tasks. It is well known in the art that cloud nodes may be software-defined nodes used to execute specific tasks; such virtualization of resources allows greater flexibility and efficiency in allocating resources independent of the underlying hardware. It would therefore have been obvious to one of ordinary skill in the art to execute tasks in containers using the well-known techniques of virtualization of computing resources for resource allocation. Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results and to gain the commonly understood benefits of improved flexibility and efficiency of resource allocation through virtualization.
As per claim 2, the combination of Vegara and Acharya further teaches:
The method of claim 1, further comprising: determining another task in the pipeline failed during execution of the pipeline; and re-executing the pipeline from the determined another task. (Vegara [0063]: “During execution of the pipeline, if the stage fails, the stage execution is retried according to the retry strategy. Since a pipeline may be an aggregate pipeline, each stage can itself be a pipeline, which in turn includes stages that are further pipelines and so on. A stage may fail due to failure of any stage of a nested pipeline within the stage. The retry module 350 also implements idempotency in execution of the pipeline such that if a pipeline is executed a subsequent time after a previous failure, the stages that previously executed successfully are skipped and only the stages that did not complete execution successfully in the previous runs are executed in a subsequent run.”)
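The retry-with-idempotency behavior quoted from Vegara [0063] (a failed stage is recorded, and a subsequent run skips prior successes and re-executes only from the failure) can be sketched as follows. The stage names and helper are hypothetical, not taken from the references:

```python
# Sketch of idempotent retry: a failing stage is recorded in the state store,
# and a subsequent run skips earlier successes and resumes at the failed stage.

state = {}  # stage name -> "success" | "failed"
runs = []   # records which stages actually executed in each run

def execute(stages, state, fail_on=None):
    executed = []
    for name in stages:
        if state.get(name) == "success":
            continue  # idempotency: skip stages that already succeeded
        if name == fail_on:
            state[name] = "failed"
            executed.append(name)
            break  # failure aborts this run
        state[name] = "success"
        executed.append(name)
    runs.append(executed)

stages = ["fetch", "build", "deploy"]
execute(stages, state, fail_on="build")  # run 1: fetch succeeds, build fails
execute(stages, state)                   # run 2: skips fetch, retries build, then deploy
# runs == [["fetch", "build"], ["build", "deploy"]]
```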
As per claim 5, the combination of Vegara and Acharya further teaches:
The method of claim 1, further comprising: generating a template pipeline to re-execute the pipeline. (Vegara [0054]: subsequent execution; [0114]: pipeline template.)
As per claim 6, the combination of Vegara and Acharya further teaches:
The method of claim 5, further comprising: updating the results of the pipeline based on template results of the template pipeline. (Vegara [0112]: test case results.)
As per claim 7, the combination of Vegara and Acharya further teaches:
The method of claim 5, further comprising: generating a virtual task with results of the check point to start the template pipeline. (Vegara [0154]: “The subsequent execution of the pipeline is performed such that the system skips the execution of stages that executed successfully. As a result, the system executes only a subset of the stages of the pipeline in the subsequent execution, the subset including stages that did not complete successful execution in the previous run of the pipeline.”; [0155]: “The system selects 1520 a stage. Across the different iterations, the system selects the stages in an order in which the stages are sequenced in the pipeline, i.e., starting from the input of the pipeline and proceeding along the pipeline to the end of the pipeline.”)
As per claim 8, the combination of Vegara and Acharya further teaches:
The method of claim 1, further comprising: setting a start task of the pipeline based on the check point. (Vegara [0154]: “The subsequent execution of the pipeline is performed such that the system skips the execution of stages that executed successfully. As a result, the system executes only a subset of the stages of the pipeline in the subsequent execution, the subset including stages that did not complete successful execution in the previous run of the pipeline.”; [0155]: “The system selects 1520 a stage. Across the different iterations, the system selects the stages in an order in which the stages are sequenced in the pipeline, i.e., starting from the input of the pipeline and proceeding along the pipeline to the end of the pipeline.”)
As per claim 10, it is the computer readable storage media variant of claim 1 and is therefore rejected under the same rationale. (Vegara [0190]: CRM.)
As per claim 11, it is the computer readable storage media variant of claim 2 and is therefore rejected under the same rationale.
As per claim 14, it is the computer readable storage media variant of claim 5 and is therefore rejected under the same rationale.
As per claim 15, it is the computer readable storage media variant of claim 6 and is therefore rejected under the same rationale.
As per claim 16, it is the computer readable storage media variant of claim 7 and is therefore rejected under the same rationale.
As per claim 17, Vegara discloses: A system comprising: a processor set, one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media (Vegara Figure 20 and [0182]), the program instructions executable to:
store results of a task in a pipeline executed in a computing environment; generate a check point for the task; (Vegara [0034]: “The system accesses a state store describing a previous execution of the pipeline. The state store maps a context for a stage to an execution status of the stage. The context represents inputs of the stage and the execution status indicates whether the stage successfully executed in the previous execution of the pipeline.”; [0034]: “The system selects a stage. The system determines a context for the stage based on inputs of the stage for the subsequent execution. The system accesses an execution status of the stage from the state store. The system determines based on the execution status of the stage, whether to select the stage as a candidate stage for the subsequent execution of the pipeline or whether to skip the stage during the subsequent execution of the pipeline.”)
and generate a template pipeline to re-execute the pipeline from the check point for the task reusing the results. (Vegara [0154]: “The subsequent execution of the pipeline is performed such that the system skips the execution of stages that executed successfully. As a result, the system executes only a subset of the stages of the pipeline in the subsequent execution, the subset including stages that did not complete successful execution in the previous run of the pipeline.”)
Vegara does not explicitly disclose:
Wherein the task in the pipeline is executed in a container
Acharya teaches:
Wherein the task in the pipeline is executed in a container. (Acharya [0027])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Acharya into that of Vegara so that the task in the pipeline is executed in a container. Vegara [0090] teaches that the pipeline stages are executed on platforms of a cloud computing environment. Acharya [0017] teaches that the cloud nodes (platforms) may be containers used to execute tasks. It is well known in the art that cloud nodes may be software-defined nodes used to execute specific tasks; such virtualization of resources allows greater flexibility and efficiency in allocating resources independent of the underlying hardware. It would therefore have been obvious to one of ordinary skill in the art to execute tasks in containers using the well-known techniques of virtualization of computing resources for resource allocation. Applicants have thus merely claimed a combination of known parts in the field to achieve predictable results and to gain the commonly understood benefits of improved flexibility and efficiency of resource allocation through virtualization.
As per claim 18, the combination of Vegara and Acharya further teaches:
The system of claim 17, wherein the program instructions are executable to: determine another task in the pipeline failed during execution of the pipeline; and re-execute the pipeline from the determined another task. (Vegara [0063]: “During execution of the pipeline, if the stage fails, the stage execution is retried according to the retry strategy. Since a pipeline may be an aggregate pipeline, each stage can itself be a pipeline, which in turn includes stages that are further pipelines and so on. A stage may fail due to failure of any stage of a nested pipeline within the stage. The retry module 350 also implements idempotency in execution of the pipeline such that if a pipeline is executed a subsequent time after a previous failure, the stages that previously executed successfully are skipped and only the stages that did not complete execution successfully in the previous runs are executed in a subsequent run.”)
As per claim 20, the combination of Vegara and Acharya further teaches:
The system of claim 17, wherein the program instructions are executable to: update the results of the pipeline based on template results of the template pipeline. (Vegara [0112]: test case results.)
Claims 3, 4, 12, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Vegara and Acharya, in view of Sedayao et al. (US 20220222105, hereinafter Sedayao).
As per claim 3, the combination of Vegara and Acharya does not teach:
The method of claim 1, wherein the storing results includes: retrieving image information based on image name and/or tag; handling environment variables; and computing a hash value for a command, the image information, and the environment variables of the task.
However, Sedayao teaches:
The method of claim 1, wherein the storing results includes: retrieving image information based on image name and/or tag; handling environment variables; and computing a hash value for a command, the image information, and the environment variables of the task. (Sedayao col 3, line 49 – col 4, line 16.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Sedayao into that of Vegara and Acharya so that the storing of results includes: retrieving image information based on image name and/or tag; handling environment variables; and computing a hash value for a command, the image information, and the environment variables of the task. Sedayao has shown that the claimed limitations are merely commonly known data of a container and can easily be combined into the container execution system of Acharya; such a combination merely claims a combination of known parts in the field to achieve predictable results and is therefore rejected under 35 U.S.C. 103.
As per claim 4, the combination of Vegara, Acharya, and Sedayao further teaches:
The method of claim 3, further comprising: comparing the hash value to another computed hash value of the check point. (Vegara [0148]: “the system determines a hash value based on a canonical representation of the structure that represents the inputs of the stage. The hash value may be a checksum based on numerical representation of various attributes of the stage. The system maps the hash value identifying the stage and its inputs to the execution status of the stage.”.)
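The hash-based check point comparison described in the cited passages (a checksum over a canonical representation of a task's command, image information, and environment variables, compared against the stored check point hash) can be sketched as follows. The function and field names here are hypothetical and do not come from the references:

```python
import hashlib
import json

def task_hash(command, image_info, env_vars):
    """Compute a stable hash over a canonical (sorted, deterministic)
    representation of a task's command, image info, and environment variables."""
    canonical = json.dumps(
        {"command": command,
         "image": image_info,
         "env": dict(sorted(env_vars.items()))},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

# Identical inputs hash identically, so the stored check point can be reused;
# any change (e.g. a new image tag) yields a different hash, forcing re-execution.
h1 = task_hash("make test", {"name": "builder", "tag": "1.2"}, {"CI": "1"})
h2 = task_hash("make test", {"name": "builder", "tag": "1.2"}, {"CI": "1"})
h3 = task_hash("make test", {"name": "builder", "tag": "1.3"}, {"CI": "1"})
# h1 == h2, while h1 != h3
```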
As per claim 12, it is the computer readable storage media variant of claim 3 and is therefore rejected under the same rationale.
As per claim 13, it is the computer readable storage media variant of claim 4 and is therefore rejected under the same rationale.
As per claim 19, the combination of Vegara and Acharya does not teach:
The system of claim 17, wherein the program instructions are executable to: retrieve image information based on image name and/or tag; handle environment variables; and compute a hash value for a command, the image information, and the environment variables.
However, Sedayao teaches:
The system of claim 17, wherein the program instructions are executable to: retrieve image information based on image name and/or tag; handle environment variables; and compute a hash value for a command, the image information, and the environment variables. (Sedayao col 3, line 49 – col 4, line 16.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Sedayao into that of Vegara and Acharya so that the program instructions are executable to: retrieve image information based on image name and/or tag; handle environment variables; and compute a hash value for a command, the image information, and the environment variables. Sedayao has shown that the claimed limitations are merely commonly known data of a container and can easily be combined into the container execution system of Acharya; such a combination merely claims a combination of known parts in the field to achieve predictable results and is therefore rejected under 35 U.S.C. 103.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Vegara and Acharya, in view of Vadapandeshwara et al. (US 20230161596, hereinafter Vadapandeshwara).
As per claim 9, the combination of Vegara and Acharya does not teach:
The method of claim 1, further comprising: setting an end task of the pipeline based on a user selection.
However, Vadapandeshwara teaches:
The method of claim 1, further comprising: setting an end task of the pipeline based on a user selection. (Vadapandeshwara [0025]: user selection of nodes and links of the pipeline.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Vadapandeshwara into that of Vegara and Acharya in order to set an end task of the pipeline based on a user selection. Vadapandeshwara has shown that the claimed limitations are merely commonly known methods and steps for configuring an execution pipeline; applicants have merely claimed a combination of known parts in the field to achieve predictable results, and the claim is therefore rejected under 35 U.S.C. 103.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Feldman et al (US 20210103482) teaches “requesting, by a node, a first lease from a first set of nodes; based at least on obtaining at least one first lease, requesting, by the node, a second lease from a second set of nodes; based at least on the node obtaining at least one second lease, determining a majority holder of second leases; and based at least on obtaining the majority of second leases, executing, by the node, a task associated with the at least one second lease. In some examples, the nodes comprise online processing units (NPUs). In some examples, if a first node begins executing the task and fails, another node automatically takes over to ensure completion.”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES M SWIFT whose telephone number is (571)270-7756. The examiner can normally be reached Monday - Friday: 9:30 AM - 7PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHARLES M SWIFT/Primary Examiner, Art Unit 2196