Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The previous 35 U.S.C. 112(a) rejections are withdrawn due to Applicant’s amendments.
Response to Arguments
Applicant’s arguments, filed 11/26/2025 on page 9 of the Remarks, regarding the rejection of claims 1-18 under 35 U.S.C. 101 have been fully considered but are not persuasive. See updated rejection below.
Beginning on page 9 of the Remarks, Applicant asserts that, under 101 Step 2A Prong One, claim 1 is not directed to an abstract idea but rather to a technological process involving intelligent orchestration in distributed computing environments. However, Examiner respectfully disagrees. MPEP 2106.04(a)(2)(III)(C) addresses mental processes performed on a generic computer. See also MPEP 2106.04(d), 2106.05(f), and 2106.05(g). The above-mentioned sections of the MPEP set forth that a claim may recite a mental process even with the use of a generic computer. The “checking, determining, classifying, clustering, identifying, validating, predicting” steps taken to determine whether a computing environment is to operate better if a new container is instantiated are steps that can be performed mentally, perhaps with the aid of pen and paper.
Applicant asserts that, under 101 Step 2A Prong Two, claim 1 is not directed to the judicial exception. However, Examiner respectfully disagrees. See MPEP 2106.04(d) and 2106.05(f). The above-mentioned sections of the MPEP set forth that a claim may recite a mental process even with the use of a generic computer. Specifically, the claims amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools) to automatically perform the mental process. See also MPEP 2106.05(g). The additional elements of "receiving container instantiation data" and “receiving computer environment context data” amount to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(d)(II), which recognizes that receiving data is a well-understood, routine, and conventional activity.
Applicant’s arguments on pages 14-16 regarding the rejection of claims 1-18 under 35 U.S.C. 103 have been fully considered but are not persuasive and are moot in view of the new grounds of rejection. Applicant asserts that Bandari does not teach the amended claim language. However, in Bandari, the data collected regarding “various factors, such as the current utilization of resources, the current demand on the system, and any constraints or limitations imposed by the underlying infrastructure” constitutes the knowledge corpus built from the crowd-sourced experience. Bandari discusses historical learning on page 6: “The AI algorithms used for predictive maintenance analyze the collected data to identify patterns and anomalies that may indicate potential performance issues. This analysis takes into account various factors, such as resource utilization, performance metrics, and error logs, to determine the health and performance of the containers and microservices.” New references Bai and Vaithiyanathan have been incorporated below to teach the newly presented limitation. See updated rejection below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1,
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 1 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“checking security related aspects for effective network traffic management and [automatically creating an open standard file format/data interchange format for orchestrated deployments]”
“[applying an artificial intelligence algorithm] to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set, [wherein the AI algorithm] considers external or internal influencing factors, and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed;”
“[wherein the AI algorithm has augmented intelligence to] classify and cluster tasks for a respective user persona to view an assigned task, make corrections, and approve decisions;”
“identifying how the activity is to be performed from a knowledge corpus, wherein the knowledge corpus is prebuilt from a crowd sourced experience;”
“utilizing [historical learning and] the change in contextual situation to identify where the one or more users perform the activity differently or the different activities are performed;”
“validating comments, from the respective user persona, against the contextual situation and generated output files;”
“utilizing a code repository to identify which code element is appropriate for a new mode of activity and if a required code element is not found in the code repository, [the AI algorithm sends a notification to develop the required code element;]”
“predicting a codebase, from multifarious repositories, [to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by] performing real time prediction on a codebase;”
“determining, [by machine logic and based,] at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and”
As drafted and under their broadest reasonable interpretation, these limitations cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., checking, determining, classifying, clustering, identifying, validating, predicting). In the context of this claim, the above limitations encompass, inter alia, checking security related aspects for effective network traffic, determining a predicted future status, classifying and clustering tasks, identifying how the activity is to be performed, validating comments, predicting a codebase, and determining if the environment is to operate better (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“[checking security related aspects for effective network traffic management and] automatically creating an open standard file format/data interchange format for orchestrated deployments”
“applying an artificial intelligence algorithm [to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set,] wherein the AI algorithm [considers external or internal influencing factors, and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed;]”
“wherein the AI algorithm has augmented intelligence to [classify and cluster tasks for a respective user persona to view an assigned task, make corrections, and approve decisions;]”
“utilizing historical learning [and the change in contextual situation to identify where the one or more users perform the activity differently or the different activities are performed;]”
“[predicting a codebase, from multifarious repositories,] to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by [performing real time prediction on a codebase;]”
“[determining,] by machine logic and based, [at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and]”
“automatically instantiating, in the computing environment and based, at least in part, upon the container instantiation data set, a first container of the first type.”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm and machine learning (e.g., by using these elements as tools).
The limitations:
“receiving a container instantiation data set that includes the data needed to instantiate at least a first type of container;”
“receiving a computing environment context data set including information relevant to computing operations currently being performed in a computer environment, with the computer environment context data set including information indicative of at least first internal factor;”
“[utilizing a code repository to identify which code element is appropriate for a new mode of activity and if a required code element is not found in the code repository,] the AI algorithm sends a notification to develop the required code element;”
As drafted, these limitations amount to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. For example, the additional elements of "receiving a data set" and “sending a notification” amount to mere data gathering and mere data output, respectively, which are insignificant extra-solution activities that do not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to
amount to significantly more than the judicial exception.
The limitations:
“[checking security related aspects for effective network traffic management and] automatically creating an open standard file format/data interchange format for orchestrated deployments”
“applying an artificial intelligence (AI) algorithm [to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set,] wherein the AI algorithm [considers external or internal influencing factors, and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed;]”
“wherein the AI algorithm has augmented intelligence to [classify and cluster tasks for a respective user persona to view an assigned task, make corrections, and approve decisions;]”
“utilizing historical learning [and the change in contextual situation to identify where the one or more users perform the activity differently or the different activities are performed;]”
“[predicting a codebase, from multifarious repositories,] to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by [performing real time prediction on a codebase;]”
“[determining,] by machine logic and based, [at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and]”
“automatically instantiating, in the computing environment and based, at least in part, upon the container instantiation data set, a first container of the first type.”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools).
As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements merely describe a unit for applying the abstract ideas). Insignificant extra-solution activities and mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) ("The courts have recognized the following computer functions as well-understood, routine, and conventional functions ... i. Receiving or transmitting data over a network") (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
The claim is not patent eligible.
Regarding Claim 2,
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 2 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: Please see the corresponding analysis of Claim 1.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“wherein the container instantiation data set includes a container image file.”
As drafted, this limitation amounts to an insignificant extra-solution activity, which does not integrate a judicial exception into a practical application. For example, the additional element of "receiving a data set" amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements merely describe a unit for applying the abstract ideas). Insignificant extra-solution activities and mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) ("The courts have recognized the following computer functions as well-understood, routine, and conventional functions ... i. Receiving or transmitting data over a network") (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
The claim is not patent eligible.
Regarding Claim 3,
Claim 3 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 3 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“[applying the artificial intelligence algorithm] to predict a codebase from a set of repositories;”
“[automatically] identifying a set of unit test(s) to be performed;”
“identifying appropriate codes and security rules from the set of repositories;”
As drafted and under their broadest reasonable interpretation, these limitations cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., predicting, identifying). In the context of this claim, the above limitations encompass, inter alia, predicting a codebase and identifying unit tests, codes, and security rules (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“applying the artificial intelligence algorithm to [predict a codebase from a set of repositories;]”
“automatically [identifying a set of unit test(s) to be performed;]”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools).
The limitations:
“creating a first microservice; and”
“automatically deploying the first microservice.”
As drafted, these limitations amount to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. For example, the additional elements of “creating a microservice” and “deploying the first microservice” amount to selecting a particular data source or type of data to be manipulated, which are insignificant extra-solution activities that do not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to
amount to significantly more than the judicial exception.
The limitations:
“applying the artificial intelligence algorithm to [predict a codebase from a set of repositories;]”
“automatically [identifying a set of unit test(s) to be performed;]”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools).
As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements merely describe a unit for applying the abstract ideas). Insignificant extra-solution activities and mere instructions to apply an exception cannot provide an inventive concept. Moreover, selecting a particular data source or type of data to be manipulated is an insignificant extra-solution activity that is well-understood, routine, and conventional. See MPEP 2106.05(d)(II) ("The courts have recognized the following computer functions as well-understood, routine, and conventional functions ... iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display”) (citing Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016)).
The claim is not patent eligible.
Regarding Claim 4,
Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 4 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“[using historical learning] to identify pattern of change indicated by a computer environment contextual data set.”
As drafted and under their broadest reasonable interpretation, these limitations cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., identifying). In the context of this claim, the above limitations encompass, inter alia, identifying a pattern of change (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“using historical learning [to identify pattern of change indicated by a computer environment contextual data set.]”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using machine learning (e.g., by using these elements as tools).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitations:
“using historical learning [to identify pattern of change indicated by a computer environment contextual data set.]”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using machine learning (e.g., by using these elements as tools).
The claim is not patent eligible.
Regarding Claim 5,
Claim 5 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 5 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“[applying the artificial intelligence algorithm] to predict system requirements.”
As drafted and under their broadest reasonable interpretation, these limitations cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., predicting). In the context of this claim, the above limitations encompass, inter alia, predicting system requirements (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“applying the artificial intelligence algorithm [to predict system requirements.]”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitations:
“applying the artificial intelligence algorithm [to predict system requirements.]”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools).
The claim is not patent eligible.
Regarding Claim 6,
Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 6 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“predicting functional requirements;”
“predicting non-functional requirements;”
“predicting security requirements;”
“predicting a number of users to use; and”
“predicting a workflow sequence.”
As drafted and under their broadest reasonable interpretation, these limitations cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., predicting). In the context of this claim, the above limitations encompass, inter alia, predicting requirements, a number of users, and a workflow sequence (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: Please see the corresponding analysis of Claim 1.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Regarding Claim 7,
Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 7 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“checking security related aspects for effective network traffic management and [automatically creating an open standard file format/data interchange format for orchestrated deployments]”
“[applying an artificial intelligence (AI) algorithm] to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set, [wherein the AI algorithm] considers external or internal influencing factors, and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed;”
“predicting a codebase, from multifarious repositories, [to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by] performing real time prediction on a codebase;”
“determining, [by machine logic and based,] at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and”
As drafted and under their broadest reasonable interpretation, these limitations cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion, e.g., checking, determining, predicting). In the context of this claim, the above limitations encompass, inter alia, checking security related aspects for effective network traffic, determining a predicted future status, predicting a codebase, and determining if the environment is to operate better (corresponding to mental processes which can be performed mentally or with pen and paper).
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“[checking security related aspects for effective network traffic management and] automatically creating an open standard file format/data interchange format for orchestrated deployments”
“applying an artificial intelligence (AI) algorithm [to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set,] wherein the AI algorithm [considers external or internal influencing factors, and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed;]”
“[predicting a codebase, from multifarious repositories,] to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by [performing real time prediction on a codebase;]”
“[determining,] by machine logic and based, [at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and]”
“automatically instantiating, in the computing environment and based, at least in part, upon the container instantiation data set, a first container of the first type.”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools).
The limitations:
“receiving a container instantiation data set that includes the data needed to instantiate at least a first type of container;”
“receiving a computing environment context data set including information relevant to computing operations currently being performed in a computer environment, with the computer environment context data set including information indicative of at least first external factor;”
As drafted, these limitations amount to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. For example, the additional elements of "receiving a data set" amount to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to
amount to significantly more than the judicial exception.
The limitations:
“[checking security related aspects for effective network traffic management and] automatically creating an open standard file format/data interchange format for orchestrated deployments”
“applying an artificial intelligence (AI) algorithm [to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set,] wherein the AI algorithm [considers external or internal influencing factors, and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed;]”
“[predicting a codebase, from multifarious repositories,] to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by [performing real time prediction on a codebase;]”
“[determining,] by machine logic and based, [at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and]”
“automatically instantiating, in the computing environment and based, at least in part, upon the container instantiation data set, a first container of the first type.”
As drafted, these limitations are additional elements that amount to no more than mere instructions to apply the exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools).
As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements merely describe a unit for applying the abstract ideas). Insignificant extra-solution activities and mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) ("The courts have recognized the following computer functions as well-understood, routine, and conventional functions ... i. Receiving or transmitting data over a network") (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
The claim is not patent eligible.
Claims 8-12 recite substantially similar subject matter to claims 2-6 and are rejected under the same rationale, mutatis mutandis, as failing to integrate the abstract idea into a practical application or to amount to significantly more than the abstract idea.
Regarding Claim 13,
Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 Analysis: Claim 13 is directed to a method, i.e., a process, one of the statutory categories.
Step 2A Prong One Analysis: The limitations:
“checking security related aspects for effective network traffic management and [automatically creating an open standard file format/data interchange format for orchestrated deployments]”
“[applying an artificial intelligence (AI) algorithm] to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set, [wherein the AI algorithm] considers external or internal influencing factors, and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed;”
“predicting a codebase, from multifarious repositories, [to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by] performing real time prediction on a codebase;”
“determining, [by machine logic and based,] at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and”
As drafted, under their broadest reasonable interpretation, these limitations cover concepts performed in the human mind (including an observation, evaluation, judgment, or opinion), e.g., checking, determining, predicting. The above limitations, in the context of this claim, encompass, inter alia, checking security related aspects for effective network traffic management, determining a predicted future status, predicting a codebase, and determining whether the environment is to operate better, corresponding to mental processes which can be performed mentally or with pen and paper.
Step 2A Prong Two Analysis: The judicial exceptions are not integrated into a practical application.
The limitations:
“[checking security related aspects for effective network traffic management and] automatically creating an open standard file format/data interchange format for orchestrated deployments”
“applying an artificial intelligence (AI) algorithm [to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set,] wherein the AI algorithm [considers external or internal influencing factors, and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed;]”
“[predicting a codebase, from multifarious repositories,] to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by [performing real time prediction on a codebase;]”
“[determining,] by machine logic and based, [at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and]”
“automatically instantiating, in the computing environment and based, at least in part, upon the container instantiation data set, a first container of the first type.”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools).
The limitations:
“receiving a container instantiation data set that includes the data needed to instantiate at least a first type of container;”
“receiving a computing environment context data set including information relevant to computing operations currently being performed in a computer environment, with the computer environment context data set including information indicative of at least a first internal factor and a first external factor;”
As drafted, these amount to insignificant extra-solution activities, which do not integrate a judicial exception into a practical application. For example, the additional element of "receiving a data set" amounts to mere data gathering, which is an insignificant extra-solution activity that does not integrate a judicial exception into a practical application. See MPEP 2106.05(g).
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The limitations:
“[checking security related aspects for effective network traffic management and] automatically creating an open standard file format/data interchange format for orchestrated deployments”
“applying an artificial intelligence (AI) algorithm [to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set,] wherein the AI algorithm [considers external or internal influencing factors, and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed;]”
“[predicting a codebase, from multifarious repositories,] to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by [performing real time prediction on a codebase;]”
“[determining,] by machine logic and based, [at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and]”
“automatically instantiating, in the computing environment and based, at least in part, upon the container instantiation data set, a first container of the first type.”
As drafted, these are additional elements that amount to no more than mere instructions to apply the judicial exception. See MPEP 2106.05(f). Specifically, they amount to mere instructions to apply the exception using an artificial intelligence algorithm (e.g., by using these elements as tools).
As discussed above with respect to integration of the abstract idea into a practical application, all of the additional elements are insignificant extra-solution activities or mere instructions to apply an exception (i.e., the additional elements describe a unit for applying the abstract ideas). Insignificant extra-solution activities and mere instructions to apply an exception cannot provide an inventive concept. Moreover, receiving, communicating, and storing data are insignificant extra-solution activities that are well-understood, routine, and conventional. See MPEP 2106.05(d)(II) ("The courts have recognized the following computer functions as well-understood, routine, and conventional functions ... i. Receiving or transmitting data over a network") (citing OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)).
The claim is not patent eligible.
Claims 14-18 recite substantially similar subject matter to claims 2-6 and are rejected under the same rationale, mutatis mutandis, as failing to integrate the abstract idea into a practical application or to amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 16, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Bandari et al. (A Comprehensive Review of AI Applications in Automated Container Orchestration, Predictive Maintenance, Security and Compliance, Resource Optimization, and Continuous Deployment and Testing), hereinafter Bandari, in view of Zhu et al. (US10719423B2), hereinafter Zhu, in view of Dobrev et al. (US20180165122A1), hereinafter Dobrev, in view of Gungabeesoon et al. (US20210124576), hereinafter Gungabeesoon, in view of Bai et al. (US20190356555A1), hereinafter Bai, and in further view of Vaithiyanathan et al. (US20210256217A1), hereinafter Vaithiyanathan.
Claim 1 is rejected over Bandari, Zhu, Dobrev, Gungabeesoon, Bai and Vaithiyanathan.
Regarding claim 1, Bandari teaches receiving a container instantiation data set that includes the data needed to instantiate at least a first type of container; (“The container manager is responsible for pulling images from a registry, creating containers, and managing their lifecycle.”; page 3, Automated container orchestration; Note: The images are the container instantiation data set)
receiving a computing environment context data set including information relevant to computing operations currently being performed in a computer environment, with the computer environment context data set including information indicative of at least first internal factor; (“Dynamic resource optimization starts with the continuous monitoring of resource utilization across containers and microservices. This includes monitoring factors such as CPU utilization, memory usage, and network bandwidth, as well as other performance metrics that are relevant to the specific container orchestration system. The data is collected in real-time and analyzed by the optimization algorithms, which identify patterns and trends in the utilization of resources.”; page 8; Note: The information regarding CPU utilization, memory usage, and network bandwidth of the containers are internal factors of the container environment.)
checking security related aspects for effective network traffic management and (“One of the ways that AI algorithms can be used for security and compliance is by monitoring and enforcing security policies. AI algorithms can analyze logs and performance data from containers and applications in real-time, helping organizations to identify and mitigate security threats. Additionally, AI algorithms can enforce security policies, such as those related to access control, data encryption, and firewall configurations. This helps organizations to ensure that their applications and data are secure, and that they are in compliance with industry standards and regulations.”; page 12, column 2)
applying an artificial intelligence (AI) algorithm to determine a predicted future status of the computing environment based, at least in part, upon the computer environment context data set, wherein the AI algorithm considers external or internal influencing factors, (“The AI algorithms used for predictive maintenance analyze the collected data to identify patterns and anomalies that may indicate potential performance issues. This analysis takes into account various factors, such as resource utilization, performance metrics, and error logs, to determine the health and performance of the containers and microservices. Based on the results of the analysis, the AI algorithms generate alerts or recommendations for addressing any potential issues.”; page 6; Note: The collected data can be the computer environment context data set that contains external or internal influencing factors that is analyzed to see if in the future there are potential performance issues. The various factors such as resource utilization and performance metrics are the different activities performed.)
and predicts a change in a contextual situation where one or more users perform an activity differently or different activities are performed; (“Automatic scaling, on the other hand, increases or decreases the number of containers as necessary to meet changing demand (different activities), ensuring that resources are utilized efficiently and that applications remain highly available.”; page 4, column 1; and “Improved User Experience: Dynamic resource optimization can also help organizations deliver better outcomes for their customers. By improving the performance and availability of their container orchestration systems, organizations can ensure that their customers have a positive and seamless experience, regardless of the demands being placed on the system.”; page 9, column 2; Note: The different activities that are in a contextual situation include user traffic on a system.)
identifying how the activity is to be performed from a knowledge corpus, wherein the knowledge corpus is prebuilt from a crowd sourced experience; (“This includes monitoring factors such as CPU utilization, memory usage, and network bandwidth, as well as other performance metrics that are relevant to the specific container orchestration system. The data is collected in real-time and analyzed by the optimization algorithms, which identify patterns and trends in the utilization of resources. Based on the collected data, the optimization algorithms analyze the utilization of resources and make decisions about how to optimize them dynamically. The algorithms take into account various factors, such as the current utilization of resources, the current demand on the system, and any constraints or limitations imposed by the underlying infrastructure. They use this information to make decisions about how to allocate resources dynamically, based on the specific needs of the containers and microservices.”; page 8 Dynamic Resource optimization; Note: The data collected is the knowledge corpus used and built from this crowd sourced experience.)
utilizing historical learning and the change in contextual situation to identify where the one or more users perform the activity differently or the different activities are performed; (“The AI algorithms used for predictive maintenance analyze the collected data to identify patterns and anomalies that may indicate potential performance issues. This analysis takes into account various factors, such as resource utilization, performance metrics, and error logs, to determine the health and performance of the containers and microservices.”; page 6, Predictive maintenance; and “Dynamic resource optimization starts with the continuous monitoring of resource utilization across containers and microservices. This includes monitoring factors such as CPU utilization, memory usage, and network bandwidth, as well as other performance metrics that are relevant to the specific container orchestration system. The data is collected in real-time and analyzed by the optimization algorithms, which identify patterns and trends in the utilization of resources. Based on the collected data, the optimization algorithms analyze the utilization of resources and make decisions about how to optimize them dynamically. The algorithms take into account various factors, such as the current utilization of resources, the current demand on the system, and any constraints or limitations imposed by the underlying infrastructure. They use this information to make decisions about how to allocate resources dynamically, based on the specific needs of the containers and microservices.”; page 8 Dynamic Resource optimization)
determining, by machine logic and based, at least in part upon the predicted future status, that the computing environment is to operate better if a new container of the first type is instantiated; and (“Automated container orchestration also enables load balancing and automatic scaling of containers. Load balancing distributes incoming traffic across multiple containers to ensure that no single container becomes overwhelmed. Automatic scaling, on the other hand, increases or decreases the number of containers as necessary to meet changing demand, ensuring that resources are utilized efficiently and that applications remain highly available,” and “The use of automated container orchestration also helps to improve the security of applications and services. With the ability to manage containers at scale, organizations can ensure that they are using the latest security measures, without having to worry about manually updating them. This can reduce the risk of security breaches and help to protect sensitive information.”; pages 3-4, Automated container orchestration)
automatically instantiating, in the computing environment and based, at least in part, upon the container instantiation data set, a first container of the first type. (“they can implement a centralized container management platform that can automate the deployment, scaling, and management of containers. They can also adopt DevOps methodologies and tools that can streamline the software development and deployment process and ensure that containers are deployed and updated quickly and reliably. Additionally, organizations can implement monitoring and logging tools that can help them identify and resolve performance issues and security threats in real-time.”; page 16, column 1; and “The container manager is responsible for pulling images from a registry, creating containers, and managing their lifecycle.”; page 3, Automated container orchestration; Note: The images are the container instantiation data set; and “The first step in container orchestration is to define the desired state of the container environment. This includes the number of containers, their configuration, and their relationship with other services (a first container of the first type).”; page 2, column 1)
Bandari does not teach automatically creating an open standard file format/data interchange format for orchestrated deployments;
However, Zhu teaches automatically creating an open standard file format/data interchange format for orchestrated deployments; (“the application deployment specification may include a modified YAML™ or JSON file (open standard file format) that may be generated by automatically integrating the workload profile 140 and the deployment parameters 134 into the application specification file 132, thereby affording an application deployment specification in the form of a modified application specification file 132. As will become apparent, the generation of such “all-in-one” application deployment specification permits a more automated (and thus more effective/efficient) application deployment and statistics generation, for assessment purposes.”; col. 6, lines 11-21)
It would have been obvious before the effective filing date to combine the artificial intelligence algorithms for automated container orchestration of Bandari with the automated creation of open standard file formats of Zhu for efficient configuration of computing environments (Zhu, col. 6, lines 17-21). Bandari and Zhu are analogous art because they both concern automated orchestrated deployments.
Bandari does not teach [wherein the AI algorithm has augmented intelligence] to classify and cluster tasks for a respective user persona to view an assigned task, make corrections, and approve decisions;
However, Dobrev teaches [wherein the AI algorithm has augmented intelligence] to classify and cluster tasks for a respective user persona to view an assigned task, make corrections, and approve decisions; (“FIG. 21 shows example programming language in an automation plan configuration file (e.g., a configuration file of one of the automation plans 128 of FIGS. 1 and 2) that includes a user-configurable tasks-by-group parameter definition 2102 to identify a plurality of tasks as part of a set or group of tasks via a single example user-configurable parameter.”; [0117] and “FIG. 10 illustrates an example troubleshooting user interface 1000 that may be used to diagnose failures during the automated deployment of the SDDC 202. In the illustrated example, the customer user 204 can click on a status bar to view detailed information concerning a failure of a task. Such information can include an out-of-storage error, a missing resource error, etc. In some instances, the customer user 204 can fix an error or errors, and click on a continue button to continue execution of the task.”; Note: Clustering is also classifying. Clicking on a continue button to continue execution of the task is approving.)
It would have been obvious before the effective filing date to combine the artificial intelligence algorithms for automated container orchestration of Bandari with the automation plan configurations of Dobrev for time-efficient task grouping and execution (Dobrev, [0068]). Bandari and Dobrev are analogous art because they both concern automated orchestrated deployments.
Bandari does not teach validating comments, from the respective user persona, against the contextual situation and generated output files;
However, Bai teaches validating comments, from the respective user persona, against the contextual situation and generated output files; (“FIG. 13 is a high-level block diagram of an interactive workflow system 1300 associated with a cloud computing environment 1390 according to some embodiments. As before, a machine learning architecture platform 1350 may receive input from a designer 1320 and and generate a recommended microservice architecture 1360. A dynamic recommendation computing component 1360 may interactively and iteratively interact with the designer 1320. For example, FIG. 14 is a flow diagram of an interactive workflow process in accordance with some embodiments. At 51410, a system may receive an initial design requirement from a designer. At S1420, the system may output a potential microservice architecture design. If the design is accepted by the designer at S1430, the model may be deployed at S1450. If the design is not accepted by the designer at S1430, the system may receive an adjusted design requirement from the designer at S1450 and the process continues iteratively at S1420 until an acceptable design is achieved. “; [0064]; and “Note that the system may blend requests from multiple designers when suggesting an architecture. Moreover, the system may also monitor comments from one designer to another during a design session to better understand the context and goals of design requirements”; [0066])
It would have been obvious before the effective filing date to combine the artificial intelligence algorithms for automated container orchestration of Bandari with the monitoring of comments of Bai to accurately and efficiently improve microservice architecture design (Bai, [0075]). Bandari and Bai are analogous art because they both concern microservice deployments.
Bandari does not teach utilizing a code repository to identify which code element is appropriate for a new mode of activity and if a required code element is not found in the code repository, the AI algorithm sends a notification to develop the required code element;
However, Vaithiyanathan teaches utilizing a code repository to identify which code element is appropriate for a new mode of activity and if a required code element is not found in the code repository, the AI algorithm sends a notification to develop the required code element; (“a system includes a source code repository which stores source code entries, which include instructions in a programming language for performing computing tasks. A code generator receives, from a user, an input which includes a request in a natural language to perform a first computing task.”; [0004]; and “The determination 214 may employ machine learning or artificial intelligence to determine whether the new code 204 a,b has a style that corresponds to that of the appropriate style profile 128 a,b and can, thus, reliably be stored in the source code repository 122.”; [0042]; and “if an anomaly is detected at determination 214, the style analyzer 114 may provide an alert 218 indicating review of the code 204 a,b is needed. For instance, having been determined to be anomalous, the code 204 a,b may be provided to an administrator for review. The administrator may determine whether the code 204 a,b is acceptable (e.g., whether anomalies in the code 204 a,b are associated with malicious intent (not acceptable) or whether detected anomalies are associated with error or some other non-malicious intent. The results 220 of this review may be used to determine whether the style analyzer 114 should proceed to prevention 222 of storage of the source code 204 a,b or to editing 224 the source code 204 a,b.”; [0048]; Note: The computing task is a new mode of activity and editing the source code is developing the required code element.)
It would have been obvious before the effective filing date to combine the artificial intelligence algorithms for automated container orchestration of Bandari with the code generation from repositories of Vaithiyanathan for efficient custom source code generation (Vaithiyanathan, [0006]). Bandari and Vaithiyanathan are analogous art because they both concern using artificial intelligence for optimal deployment of computing resources.
Bandari does not teach predicting a codebase, from multifarious repositories, to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by performing a real time prediction on a codebase;
However, Gungabeesoon teaches predicting a codebase, from multifarious repositories, to automatically generate a microservice based on a contextual need identified for a dynamic or ad-hoc requirement accommodation by performing a real time prediction on a codebase; (“A repository generator module 226 generates the SCM repository in the code versioning system 260 for the microservice by creating the required folders and files based on one or more previously defined repository templates. In the present embodiment, the SCM repository for the source code and files generated for the microservice can be stored into the SCM server 262.”; [0075]; and “The microservice generator module 242 is a stage and component within the microservice delivery pipeline 240 operable to generate source code and its corresponding shared object code for a desired microservice (codebase).”; [0078] and “In the “patch” mode of operation, the microservice generator module 242 is configured to keep the existing interface and implementation of the microservice but apply improvements/patches such as security patches for the microservice operational framework or library updates. Changes to microservice operation/method are applied manually by the microservice developers. If no changes to the operation/method are applied, then the microservice generator module 242 will apply the aforementioned solutions to the microservice. This mode of operation automatically preserves business logic code and regenerates/re-assembles code dependencies as required for the business logic code (dynamic requirement accommodation), as needed, based on the security patches or library updates applied to the microservice project.”; [0094])
It would have been obvious before the effective filing date to combine the artificial intelligence algorithms for automated container orchestration of Bandari with the automated microservice deployment of Gungabeesoon to efficiently generate desired microservices (Gungabeesoon, [0060]). Bandari and Gungabeesoon are analogous art because they both concern automated orchestrated deployments.
Claim 2 is rejected over Bandari, Zhu, Dobrev, Gungabeesoon, Bai and Vaithiyanathan with the incorporation of claim 1.
Regarding claim 2, Bandari teaches wherein the container instantiation data set includes a container image file. (“The container manager is responsible for pulling images from a registry, creating containers, and managing their lifecycle.”; page 3, Automated container orchestration)
Claim 4 is rejected over Bandari, Zhu, Dobrev, Gungabeesoon, Bai and Vaithiyanathan with the incorporation of claim 1.
Regarding claim 4, Bandari teaches using historical learning to identify pattern of change indicated by a computer environment contextual data set. (“The AI algorithms used for predictive maintenance analyze the collected data to identify patterns and anomalies that may indicate potential performance issues. This analysis takes into account various factors, such as resource utilization, performance metrics, and error logs, to determine the health and performance of the containers and microservices.”; page 6, Predictive maintenance)
Claim 5 is rejected over Bandari, Zhu, Dobrev, Gungabeesoon, Bai and Vaithiyanathan with the incorporation of claim 1.
Regarding claim 5, Bandari teaches applying the artificial intelligence algorithm to predict system requirements. (“The AI algorithms used for predictive maintenance analyze the collected data to identify patterns and anomalies that may indicate potential performance issues. This analysis takes into account various factors, such as resource utilization, performance metrics, and error logs, to determine the health and performance of the containers and microservices.”; page 6, Predictive maintenance; and “Dynamic resource optimization can also introduce performance overhead, as the optimization algorithms and systems use computational resources to monitor and adjust resource utilization.”; page 10, Performance Overhead)
Claim 6 is rejected over Bandari, Zhu, Dobrev, Gungabeesoon, Bai and Vaithiyanathan with the incorporation of claim 1.
Regarding claim 6, Bandari teaches predicting functional requirements; (“Dynamic resource optimization starts with the continuous monitoring of resource utilization across containers and microservices. This includes monitoring factors such as CPU utilization, memory usage, and network bandwidth, as well as other performance metrics that are relevant to the specific container orchestration system. The data is collected in real-time and analyzed by the optimization algorithms, which identify patterns and trends in the utilization of resources.”; page 8, Dynamic Resource Optimization)
predicting non-functional requirements; (“Improved Uptime: One of the main benefits of predictive maintenance is improved uptime. By identifying and addressing potential performance issues proactively, organizations can reduce the likelihood of outages and downtime.”; page 6, Functions of predictive maintenance; Note: Performance and efficiency are included in non-functional requirements)
predicting security requirements; (“One of the ways that AI algorithms can be used for security and compliance is by monitoring and enforcing security policies. AI algorithms can analyze logs and performance data from containers and applications in real-time, helping organizations to identify and mitigate security threats.”; page 12, Security and compliance)
predicting a number of users to use; and (“With the ability to scale up or down as needed, organizations can ensure that they have the resources they need to meet the demands of their users without having to manually intervene. This not only helps to improve the reliability of their applications and services, but also reduces the cost of running them, as resources are only used when they are needed.”; page 4)
Bandari does not teach predicting a workflow sequence.
However, Gungabeesoon teaches predicting a workflow sequence. (“the microservice delivery pipeline (workflow sequence) is generated based on one or more pipeline generation templates.”; [0022])
It would have been obvious before the effective filing date to combine the artificial intelligence algorithms for automated container orchestration of Bandari with the automated microservice pipeline of Gungabeesoon to efficiently generate desired microservices (Gungabeesoon, [0060]). Bandari and Gungabeesoon are analogous art because they both concern automated orchestrated deployments.
Claim 7 is rejected over Bandari, Zhu, Dobrev, Gungabeesoon, Bai and Vaithiyanathan. Claim 7 is a method claim corresponding to claim 1 and is rejected for the same reasons as given in the rejection of claim 1, with the exception of the following limitations.
Regarding claim 7, Bandari teaches receiving a computing environment context data set including information relevant to computing operations currently being performed in a computer environment, with the computer environment context data set including information indicative of at least first external factor; (“Based on the collected data, the optimization algorithms analyze the utilization of resources and make decisions about how to optimize them dynamically. The algorithms take into account various factors, such as the current utilization of resources, the current demand on the system, “; page 8 and “The use of automated container orchestration makes it possible for businesses to respond quickly to changes in demand for their applications and services. With the ability to scale up or down as needed, organizations can ensure that they have the resources they need to meet the demands of their users without having to manually intervene.”; page 4; Note: The user demands are the external factor.)
The remainder of claim 7 is substantially similar to claim 1 and is rejected with the same rationale, mutatis mutandis.
Dependent claim 8 recites a method for performing steps substantially similar to those of claim 2 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 7, see the rejection of claim 7 above.
Dependent claim 10 recites a method for performing steps substantially similar to those of claim 4 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 7, see the rejection of claim 7 above.
Dependent claim 11 recites a method for performing steps substantially similar to those of claim 5 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 7, see the rejection of claim 7 above.
Dependent claim 12 recites a method for performing steps substantially similar to those of claim 6 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 7, see the rejection of claim 7 above.
Claim 13 is a method claim corresponding to claim 1 and is rejected for the same reasons as given in the rejection of claim 1, with the exception of the following limitations.
Regarding claim 13, Bandari teaches receiving a computing environment context data set including information relevant to computing operations currently being performed in a computer environment, with the computer environment context data set including information indicative of at least a first internal factor and (“Dynamic resource optimization starts with the continuous monitoring of resource utilization across containers and microservices. This includes monitoring factors such as CPU utilization, memory usage, and network bandwidth, as well as other performance metrics that are relevant to the specific container orchestration system. The data is collected in real-time and analyzed by the optimization algorithms, which identify patterns and trends in the utilization of resources.”; page 8; Note: The information regarding CPU utilization, memory usage, and network bandwidth of the containers are internal factors of the container environment.)
a first external factor; (“Based on the collected data, the optimization algorithms analyze the utilization of resources and make decisions about how to optimize them dynamically. The algorithms take into account various factors, such as the current utilization of resources, the current demand on the system, “; page 8 and “The use of automated container orchestration makes it possible for businesses to respond quickly to changes in demand for their applications and services. With the ability to scale up or down as needed, organizations can ensure that they have the resources they need to meet the demands of their users without having to manually intervene.”; page 4; Note: The user demands are the external factor.)
The remainder of claim 13 is substantially similar to claim 1 and is rejected with the same rationale, mutatis mutandis.
Dependent claim 14 recites a method for performing steps substantially similar to those of claim 2 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 13, see the rejection of claim 13 above.
Dependent claim 16 recites a method for performing steps substantially similar to those of claim 4 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 13, see the rejection of claim 13 above.
Dependent claim 17 recites a method for performing steps substantially similar to those of claim 5 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 13, see the rejection of claim 13 above.
Dependent claim 18 recites a method for performing steps substantially similar to those of claim 6 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 13, see the rejection of claim 13 above.
Claims 3, 9 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bandari, Zhu, Dobrev, Gungabeesoon, Bai and Vaithiyanathan in view of Thomason et al. (US10140159B1); hereinafter Thomason.
Claim 3 incorporates the limitations of claim 1 and is rejected over Bandari, Zhu, Dobrev, Gungabeesoon, Bai, Vaithiyanathan and Thomason.
Regarding claim 3, Bandari teaches automatically identifying a set of unit test(s) to be performed; (“Continuous Testing: The AI algorithms also automate the testing process, by continuously monitoring the performance of containers and applications and evaluating the results of tests. This allows organizations to catch performance issues and bugs early, before they impact end users. The AI algorithms can also analyze test results and identify patterns and trends, helping organizations to identify areas where they can improve the quality of their code and applications.”; page 10, Continuous deployment and testing)
applying the artificial intelligence algorithm to predict (“The AI algorithms used for predictive maintenance analyze the collected data to identify patterns and anomalies that may indicate potential performance issues. This analysis takes into account various factors, such as resource utilization, performance metrics, and error logs, to determine the health and performance of the containers and microservices. Based on the results of the analysis, the AI algorithms generate alerts or recommendations for addressing any potential issues.”; page 6)
Bandari does not teach a codebase from a set of repositories;
creating a first microservice; and
automatically deploying the first microservice.
However, Gungabeesoon teaches a codebase from a set of repositories; (“A repository generator module 226 generates the SCM repository in the code versioning system 260 for the microservice by creating the required folders and files based on one or more previously defined repository templates. In the present embodiment, the SCM repository for the source code and files generated for the microservice can be stored into the SCM server 262.”; [0075]; and “The microservice generator module 242 is a stage and component within the microservice delivery pipeline 240 operable to generate source code and its corresponding shared object code for a desired microservice.”; [0078])
creating a first microservice; and (“The microservice delivery pipeline 240 is intended to automate generation of the microservice, including microservice code generation, build, deploy, test, publish, and promotion.”; [0077])
automatically deploying the first microservice. (“The microservice delivery pipeline 240 is intended to automate generation of the microservice, including microservice code generation, build, deploy, test, publish, and promotion.”; [0077])
It would have been obvious before the effective filing date to combine the artificial intelligence algorithms for automated container orchestration of Bandari with the automated microservice deployment of Gungabeesoon to efficiently generate desired microservices (Gungabeesoon, [0060]). Bandari and Gungabeesoon are analogous art because they both concern automated orchestrated deployments.
Bandari does not teach identifying appropriate codes and security rules from the set of repositories.
However, Thomason teaches identifying appropriate codes and security rules from the set of repositories; (“In general, the one or more data stores 150 can include any information collected, stored or used by the central management system 140. For example, in various embodiments, the one or more data stores 150 can include container images, deployment settings for applications (e.g., on a tenant-specific and/or application-specific basis), applications for deployment, metadata for applications to be deployed (e.g., metadata which indicates interrelated containers of an application), combinations of the same and/or the like. In certain embodiments, data stored in the one or more data stores 150 can take the form of repositories, flat files, databases, etc. An example of the data store(s) 150 will be described in greater detail in relation to FIG. 1B.”; column 6, lines 13-25; and “The encryption key store 156 can include security keys used to validate signatures contained in container manifests. The container registry 158 can include information about containers that are known to an organization, attributes of those containers, deployment settings, combinations of same and/or the like.”; column 6, lines 38-43)
It would have been obvious before the effective filing date to combine the computing environment predictive maintenance of Bandari with the container settings in repositories of Thomason for effective configuration of deployed containers (Thomason, column 3, lines 1-14). Bandari and Thomason are analogous art because they both concern the deployment of containers.
Bandari does not teach creating a first microservice.
However, Thomason teaches creating a first microservice; and (“an application can be distributed across multiple containers with each container providing a microservice in support of the application. In general, containers of an application can be deployed on managed resources of the central management system 140.”; column 3, lines 60-65)
It would have been obvious before the effective filing date to combine the computing environment predictive maintenance of Bandari with the microservice creation of Thomason for effective configuration of deployed containers (Thomason, column 3, lines 1-14). Bandari and Thomason are analogous art because they both concern the deployment of containers.
Bandari does not teach automatically deploying the first microservice.
However, Thomason teaches automatically deploying the first microservice. (“an application can be distributed across multiple containers with each container providing a microservice in support of the application. In general, containers of an application can be deployed on managed resources of the central management system 140.”; column 3, lines 60-65)
Dependent claim 9 recites a method for performing steps substantially similar to those of claim 3 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 7, see the rejection of claim 7 above.
Dependent claim 15 recites a method for performing steps substantially similar to those of claim 3 and is rejected with the same rationale, mutatis mutandis. For the rejection of the limitations specifically pertaining to the method of claim 13, see the rejection of claim 13 above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
NPL: Shanmugam, Aravind Samy. “Docker Container Reactive Scalability and Prediction of CPU Utilization Based on Proactive Modelling.” (2016).
NPL: Buchaca et al. “Proactive Container Auto-scaling for Cloud Native Machine Learning Services.” (2020).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID H TRAN whose telephone number is (703)756-1525. The examiner can normally be reached M-F 9:30 am - 5:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID H TRAN/Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147