NON-FINAL REJECTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. § 119(a)-(d). The certified copy has been filed in priority Application No. IN202221015542, filed on 03/21/2022.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-15 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).
Claim 1
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
Claim 1 recites “A processor implemented method for generating a contextual advisory for Everything as a service (XaaS), the method comprising:” and is therefore directed to the statutory category of a process.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia:
“wherein each successive question among the set of questions is identified based on the response of the user to a previous question and is mapped to an initial inference node among a plurality of inference nodes preset in an inference table,” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which can practically be performed in the human mind with the assistance of pen and paper. A human is able to communicate with another person and evaluate the conversation. Further, a human is able to provide judgments and opinions on the conversation based on previous statements and/or questions presented by a user. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).
“identifying, via the one or more hardware processors, a final inference node and a corresponding decision node among the plurality of decision nodes as an initial decision node, post querying the user with the set of questions, wherein the final interference node indicates initial user inclination and a current architecture of the user in the domain of interest;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which can practically be performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and identify patterns or processes within the presented data. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).
“utilizing, via the one or more hardware processors, a decision path identified from the decision graph and the information in the attribute value form associated with each decision response for providing the contextual advisory to build XaaS for the domain of interest using document templates.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which can practically be performed in the human mind with the assistance of pen and paper. A human is able to evaluate the data and provide opinions and judgments based on that evaluation. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “obtaining, via one or more hardware processors, a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,” which is an insignificant extra-solution activity required for any use of the mental processes (see MPEP § 2106.05(g)). As such, the additional element does not integrate the judicial exception into a practical application.
The limitation “wherein each inference node corresponds to a set of inferences with each inference among the set of inferences having an initial inference weightage,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “wherein the initial inference node and the initial inference weightage of each inference node is iteratively updated in accordance with the response of the user to each question, and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “wherein each inference node among the set of interference nodes is mapped to a decision node from among a plurality of decision nodes;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “generating in the form of knowledge graph (a) a global knowledge repository for the domain of interest based on artifacts provided by a Subject Matter Expert (SME), and (b) a local knowledge repository for the current architecture based on artifacts provided by the user, via the one or more hardware processors;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “generating a feature matrix, by extracting one or more entities from the global knowledge repository in accordance with a plurality of inputs, provided by the SME, and comprising a list of properties, a weightage of each of the list of properties and a list of components for the domain of interest, via the one or more hardware processors;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “sequentially querying the user, via the one or more hardware processors, with a set of decision questions starting with context of the initial decision node and receiving a corresponding decision response for each decision question among the set of decision questions, wherein each decision question has an associated decision response type, one or more choices for a decision response, dependency links of a decision question with remaining decision questions among the set of decision questions, a decision question weightage and information in an attribute value form associated with each decision response, wherein each successive decision question among the set of decision questions is identified based on the decision response of the user to a previous decision question;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “generating, via the one or more hardware processors, a decision graph tracing a plurality of decision nodes starting from the initial decision node based on each decision response for each of the decision questions; and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “obtaining, via one or more hardware processors, a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,” is an insignificant extra-solution activity required for any use of abstract ideas (see MPEP § 2106.05(g)), and is a well-understood, routine, and conventional activity (see MPEP § 2106.05(d)(II): “Receiving or transmitting data over a network, e.g., using the Internet to gather data”).
The limitation “wherein each inference node corresponds to a set of inferences with each inference among the set of inferences having an initial inference weightage,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “wherein the initial inference node and the initial inference weightage of each inference node is iteratively updated in accordance with the response of the user to each question, and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “wherein each inference node among the set of interference nodes is mapped to a decision node from among a plurality of decision nodes;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “generating in the form of knowledge graph (a) a global knowledge repository for the domain of interest based on artifacts provided by a Subject Matter Expert (SME), and (b) a local knowledge repository for the current architecture based on artifacts provided by the user, via the one or more hardware processors;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “generating a feature matrix, by extracting one or more entities from the global knowledge repository in accordance with a plurality of inputs, provided by the SME, and comprising a list of properties, a weightage of each of the list of properties and a list of components for the domain of interest, via the one or more hardware processors;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “sequentially querying the user, via the one or more hardware processors, with a set of decision questions starting with context of the initial decision node and receiving a corresponding decision response for each decision question among the set of decision questions, wherein each decision question has an associated decision response type, one or more choices for a decision response, dependency links of a decision question with remaining decision questions among the set of decision questions, a decision question weightage and information in an attribute value form associated with each decision response, wherein each successive decision question among the set of decision questions is identified based on the decision response of the user to a previous decision question;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “generating, via the one or more hardware processors, a decision graph tracing a plurality of decision nodes starting from the initial decision node based on each decision response for each of the decision questions; and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and the claim is therefore not patent eligible.
Claim 2
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the claim from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein relevant and contextual choices are generated for the decision response by trained Machine Learning (ML) models using combination of the local knowledge repository and the global knowledge repository in accordance with a plurality of features present in the feature matrix, wherein the choices are ranked according to probability of generating best possible decision, and wherein for any unanswered question, the decision response is identified by the trained ML model.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein relevant and contextual choices are generated for the decision response by trained Machine Learning (ML) models using combination of the local knowledge repository and the global knowledge repository in accordance with a plurality of features present in the feature matrix, wherein the choices are ranked according to probability of generating best possible decision, and wherein for any unanswered question, the decision response is identified by the trained ML model.” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and the claim is therefore not patent eligible.
Claim 3
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the claim from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user.” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and the claim is therefore not patent eligible.
Claim 4
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the claim from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein, the feature matrix is a structure to host data on a graph database for comparing the one or more entities and associated information obtained from the global knowledge repository across the plurality of inputs.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein, the feature matrix is a structure to host data on a graph database for comparing the one or more entities and associated information obtained from the global knowledge repository across the plurality of inputs.” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and the claim is therefore not patent eligible.
Claim 5
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A process, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the claim from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein each question among the set of questions has an associated response type, one or more choices for a response, dependency links of a question with remaining questions among the set of questions, and a question weightage.” This limitation amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein each question among the set of questions has an associated response type, one or more choices for a response, dependency links of a question with remaining questions among the set of questions, and a question weightage.” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept, and the claim is therefore not patent eligible.
Claim 6
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
Claim 6 recites “A system for generating a contextual advisory for Everything as a service (XaaS), the system comprising: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to:” and is therefore directed to the statutory category of a machine.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia:
“wherein each successive question among the set of questions is identified based on the response of the user to a previous question and is mapped to an initial inference node among a plurality of inference nodes preset in an inference table,” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which can practically be performed in the human mind with the assistance of pen and paper. A human is able to communicate with another person and evaluate the conversation. Further, a human is able to provide judgments and opinions on the conversation based on previous statements and/or questions presented by a user. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).
“identify a final inference node and a corresponding decision node among the plurality of decision nodes as an initial decision node, post querying the user with the set of questions, wherein the final interference node indicates initial user inclination and a current architecture of the user in the domain of interest;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which can practically be performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and identify patterns or processes within the presented data. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).
“utilize a decision path identified from the decision graph and the information in the attribute value form associated with each decision response for providing the contextual advisory to build XaaS for the domain of interest using document templates.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which can practically be performed in the human mind with the assistance of pen and paper. A human is able to evaluate the data and provide opinions and judgments based on that evaluation. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(C).
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “obtain a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,” which is an insignificant extra-solution activity required for any use of the mental processes (see MPEP § 2106.05(g)). As such, the additional element does not integrate the judicial exception into a practical application.
The limitation “wherein each inference node corresponds to a set of inferences with each inference among the set of inferences having an initial inference weightage,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “wherein the initial inference node and the initial inference weightage of each inference node is iteratively updated in accordance with the response of the user to each question, and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “wherein each inference node among the set of interference nodes is mapped to a decision node from among a plurality of decision nodes;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “generate in the form of knowledge graph (a) a global knowledge repository for the domain of interest based on artifacts provided by a Subject Matter Expert (SME), and (b) a local knowledge repository for the current architecture based on artifacts provided by the user;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “generate a feature matrix, by extracting one or more entities from the global knowledge repository in accordance with a plurality of inputs, provided by the SME, and comprising a list of properties, a weightage of each of the list of properties and a list of components for the domain of interest;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “sequentially query with a set of decision questions starting with context of the initial decision node and receiving a corresponding decision response for each decision question among the set of decision questions, wherein each decision question has an associated decision response type, one or more choices for a decision response, dependency links of a decision question with remaining decision questions among the set of decision questions, a decision question weightage and information in an attribute value form associated with each decision response, wherein each successive decision question among the set of decision questions is identified based on the decision response of the user to a previous decision question;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
The limitation “generate a decision graph tracing a plurality of decision nodes starting from the initial decision node based on each decision response for each of the decision questions; and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “obtain a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,” is an insignificant extra-solution activity required for any use of abstract ideas (see MPEP § 2106.05(g)), and is a well-understood, routine, and conventional activity (see MPEP § 2106.05(d)(II): “Receiving or transmitting data over a network, e.g., using the Internet to gather data”).
“wherein each inference node corresponds to a set of inferences with each inference among the set of inferences having an initial inference weightage,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“wherein the initial inference node and the initial inference weightage of each inference node is iteratively updated in accordance with the response of the user to each question, and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“wherein each inference node among the set of interference nodes is mapped to a decision node from among a plurality of decision nodes;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“generate in the form of knowledge graph (a) a global knowledge repository for the domain of interest based on artifacts provided by a Subject Matter Expert (SME), and (b) a local knowledge repository for the current architecture based on artifacts provided by the user;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“generate a feature matrix, by extracting one or more entities from the global knowledge repository in accordance with a plurality of inputs, provided by the SME, and comprising a list of properties, a weightage of each of the list of properties and a list of components for the domain of interest;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“sequentially query with a set of decision questions starting with context of the initial decision node and receiving a corresponding decision response for each decision question among the set of decision questions, wherein each decision question has an associated decision response type, one or more choices for a decision response, dependency links of a decision question with remaining decision questions among the set of decision questions, a decision question weightage and information in an attribute value form associated with each decision response, wherein each successive decision question among the set of decision questions is identified based on the decision response of the user to a previous decision question;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“generate a decision graph tracing a plurality of decision nodes starting from the initial decision node based on each decision response for each of the decision questions; and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 7
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein the one or more hardware processors are configured to generate relevant and contextual choices for the decision response by trained Machine Learning (ML) models using combination of the local knowledge repository and the global knowledge repository in accordance with a plurality of features present in the feature matrix, wherein the choices are ranked according to probability of generating best possible decision, and wherein for any unanswered question, the decision response is identified by the trained ML model,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein the one or more hardware processors are configured to generate relevant and contextual choices for the decision response by trained Machine Learning (ML) models using combination of the local knowledge repository and the global knowledge repository in accordance with a plurality of features present in the feature matrix, wherein the choices are ranked according to probability of generating best possible decision, and wherein for any unanswered question, the decision response is identified by the trained ML model,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 8
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 9
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein, the feature matrix is a structure to host data on a graph database for comparing the one or more entities and associated information obtained from the global knowledge repository across the plurality of inputs,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein, the feature matrix is a structure to host data on a graph database for comparing the one or more entities and associated information obtained from the global knowledge repository across the plurality of inputs,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 10
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein each question among the set of questions has an associated response type, one or more choices for a response, dependency links of a question with remaining questions among the set of questions, and a question weightage,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein each question among the set of questions has an associated response type, one or more choices for a response, dependency links of a question with remaining questions among the set of questions, and a question weightage,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 11
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
Claim 11 recites “One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause:”; therefore, it is directed to the statutory category of a machine.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites, inter alia:
“wherein each successive question among the set of questions is identified based on the response of the user to a previous question and is mapped to an initial inference node among a plurality of inference nodes preset in an inference table,” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to communicate with another person and evaluate the conversation. Further, a human is able to form judgments and opinions in the conversation based on previous statements and/or questions presented by a user. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
“identifying a final inference node and a corresponding decision node among the plurality of decision nodes as an initial decision node, post querying the user with the set of questions, wherein the final interference node indicates initial user inclination and a current architecture of the user in the domain of interest;” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and identify patterns or processes within the presented data. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
“utilizing a decision path identified from the decision graph and the information in the attribute value form associated with each decision response for providing the contextual advisory to build XaaS for the domain of interest using document templates.” Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of evaluating and observing data, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. A human is able to evaluate data and provide opinions and judgments based on that evaluation. The limitation merely applies an abstract idea on a generic computer system. See MPEP 2106.04(a)(2)(III)(c).
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “obtaining, a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,” which is an insignificant extra-solution activity required for any use of a mental process (see MPEP § 2106.05(g)). As such, the claim is ineligible.
“wherein each inference node corresponds to a set of inferences with each inference among the set of inferences having an initial inference weightage,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“wherein the initial inference node and the initial inference weightage of each inference node is iteratively updated in accordance with the response of the user to each question, and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“wherein each inference node among the set of interference nodes is mapped to a decision node from among a plurality of decision nodes;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“generating in the form of knowledge graph (a) a global knowledge repository for the domain of interest based on artifacts provided by a Subject Matter Expert (SME), and (b) a local knowledge repository for the current architecture based on artifacts provided by the user;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“generating a feature matrix, by extracting one or more entities from the global knowledge repository in accordance with a plurality of inputs, provided by the SME, and further comprising a list of properties, a weightage of each of the list of properties and a list of components for the domain of interest;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“sequentially querying the user with a set of decision questions starting with context of the initial decision node and receiving a corresponding decision response for each decision question among the set of decision questions, wherein each decision question has an associated decision response type, one or more choices for a decision response, dependency links of a decision question with remaining decision questions among the set of decision questions, a decision question weightage and information in an attribute value form associated with each decision response, wherein each successive decision question among the set of decision questions is identified based on the decision response of the user to a previous decision question;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“generating a decision graph tracing a plurality of decision nodes starting from the initial decision node based on each decision response for each of the decision questions; and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “obtaining, a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,” is an insignificant extra-solution activity required for any use of an abstract idea (see MPEP § 2106.05(g)), and is a well-understood, routine, and conventional activity (see MPEP § 2106.05(d)(II), “Receiving or transmitting data over a network, e.g., using the Internet to gather data”).
“wherein each inference node corresponds to a set of inferences with each inference among the set of inferences having an initial inference weightage,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“wherein the initial inference node and the initial inference weightage of each inference node is iteratively updated in accordance with the response of the user to each question, and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“wherein each inference node among the set of interference nodes is mapped to a decision node from among a plurality of decision nodes;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“generating in the form of knowledge graph (a) a global knowledge repository for the domain of interest based on artifacts provided by a Subject Matter Expert (SME), and (b) a local knowledge repository for the current architecture based on artifacts provided by the user;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“generating a feature matrix, by extracting one or more entities from the global knowledge repository in accordance with a plurality of inputs, provided by the SME, and further comprising a list of properties, a weightage of each of the list of properties and a list of components for the domain of interest;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“sequentially querying the user with a set of decision questions starting with context of the initial decision node and receiving a corresponding decision response for each decision question among the set of decision questions, wherein each decision question has an associated decision response type, one or more choices for a decision response, dependency links of a decision question with remaining decision questions among the set of decision questions, a decision question weightage and information in an attribute value form associated with each decision response, wherein each successive decision question among the set of decision questions is identified based on the decision response of the user to a previous decision question;” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
“generating a decision graph tracing a plurality of decision nodes starting from the initial decision node based on each decision response for each of the decision questions; and” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 12
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein relevant and contextual choices are generated for the decision response by trained Machine Learning (ML) models using combination of the local knowledge repository and the global knowledge repository in accordance with a plurality of features present in the feature matrix, wherein the choices are ranked according to probability of generating best possible decision, and wherein for any unanswered question, the decision response is identified by the trained ML model,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein relevant and contextual choices are generated for the decision response by trained Machine Learning (ML) models using combination of the local knowledge repository and the global knowledge repository in accordance with a plurality of features present in the feature matrix, wherein the choices are ranked according to probability of generating best possible decision, and wherein for any unanswered question, the decision response is identified by the trained ML model,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 13
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 14
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein, the feature matrix is a structure to host data on a graph database for comparing the one or more entities and associated information obtained from the global knowledge repository across the plurality of inputs,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein, the feature matrix is a structure to host data on a graph database for comparing the one or more entities and associated information obtained from the global knowledge repository across the plurality of inputs,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent), or is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim 15
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
A machine, as above.
Step 2A Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
The claim recites the abstract ideas of the preceding claims from which it depends.
Step 2A Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
The claim recites the additional element, “wherein each question among the set of questions has an associated response type, one or more choices for a response, dependency links of a question with remaining questions among the set of questions, and a question weightage,” which amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
Finally, the claim taken as a whole does not contain an inventive concept which provides significantly more than the abstract idea. The additional element, “wherein each question among the set of questions has an associated response type, one or more choices for a response, dependency links of a question with remaining questions among the set of questions, and a question weightage,” amounts to generic computer components used as a tool to perform an existing process. Thus, the additional element amounts to no more than a recitation of the words "apply it" (or an equivalent) and is no more than mere instructions to implement an abstract idea or other exception on a computer (see MPEP § 2106.05(f)).
Taken alone or in combination, the additional elements of the claim do not provide an inventive concept and thus the claim is subject-matter ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Polleri et al. (US 2021/0081819 A1, “CHATBOT FOR DEFINING A MACHINE LEARNING (ML) SOLUTION”, filed Jun. 4, 2020, hereinafter “Polleri”) in view of Debnath et al. (“A XaaS savvy Automated Approach to Composite Applications”, 2015, hereinafter “Debnath”).
Regarding claim 1, Polleri discloses, “wherein each successive question among the set of questions is identified based on the response of the user to a previous question and is mapped to an initial inference node among a plurality of inference nodes preset in an inference table,” (Detailed Description, pp. 5, [0065]; “At 204, the functionality includes receiving a second user input identifies a problem for which a solution can be generated by the machine learning application. In various embodiments the second user input can specify a type of problem that the user would like to implement machine learning for. In various embodiments, the problem can be identified through input of text via a user interface. In various embodiments, the problems can be entered as native language speech or text (e.g., through the use of a chatbot).” This application also helps a user develop a program with no prior experience. This will use a chatbot to communicate with the user. The chatbot will pose questions about the project and will answer questions accordingly.)
“wherein each inference node corresponds to a set of inferences with each inference among the set of inferences having an initial inference weightage,” (Detailed Description, pp. 6, [0089]; “The technique can decipher the native language to understand the goals of the machine learning model. Some of types of problems that machine learning can solve can include classification, regression, product recommendations, medical diagnosis, financial analysis, predictive maintenance, image and sound recognition, text recognition, and tabular data analysis. The techniques will recognize one or more keywords in the native language speech to recommend or select a particular machine learning algorithm.” This application uses a chatbot as an LLM to process the information the user inputs. This chatbot can be any form of chatbot that uses modern LLM architecture. This will save the user's conversation with the chatbot, which is later analyzed by an analytic engine to pull more information out of the conversation.)
“wherein the initial inference node and the initial inference weightage of each inference node is iteratively updated in accordance with the response of the user to each question, and” (Detailed Description, pp. 5, [0067]-[0068]; “At 206, the functionality includes receiving a third input of one or more performance requirements for the machine learning application. The third input can be entered as native language speech or text (e.g., through the use of a chatbot) or selected via an interface (e.g., a graphical user interface). [0068] The performance requirements can include Quality of Service (QoS) metrics. QoS metrics refer to objective, system-related characteristics that provide insight into the performance of the delivery service at the network/transmission level. QoS metrics are parameters that reflect the quality of service on the sender side, rather than on the application side. Example QoS metrics can include system latency and reliability.” This is an example of an input to the chatbot. The user can add Quality of Service metrics to the program to generate a more complete model. This chatbot will take in this information and save and weight it for later use. This will take the information from the user and determine the value and look for keywords in the text to help develop the model.)
“wherein each inference node among the set of inference nodes is mapped to a decision node from among a plurality of decision nodes;” (Detailed Description, pp. 5-6, [0079]; “The library components 168 can include metadata that identifies features and functions of each of the library components 168. The technique can determine the one or more library components 168 to select based at least in part on the identified problem received via the second input to achieve the performance metrics of the third input. One or more variables of each of the library components can be adjusted to customize the machine learning model to achieve a solution to the identified problem.” This model will take in users’ statements and requirements for the machine learning model to be generated. This will save the responses and keywords for later analysis and use. This teaches that depending on what is given to the system, the system will store important text, giving certain things more weight.)
“identifying, via the one or more hardware processors, a final inference node and a corresponding decision node among the plurality of decision nodes as an initial decision node, post querying the user with the set of questions, wherein the final inference node indicates initial user inclination and a current architecture of the user in the domain of interest;” (Introduction, pp. 9, [0118]; “The analytic engine may incorporate the statistics into the aggregate path diagram to determine additional information such as how many conversations flowed through the intent-specific paths of the dialog flow for a given period, the number of conversations maintained between each state, and the different execution paths taken because the conversation branched due to values getting set (or not set), or dead-ended because of some other problem like a malfunctioning custom component. Optionally, the bot system may be retrained using the statistics and aggregated path diagram to improve the performance of the bot system, such as retraining the intent classification models of the bot system to more accurately determining the user intents.” The user’s conversation with the chatbot is analyzed by the system. It is stated that any chatbot can be used for the data collection and this system will analyze the text. This will review the information the chatbot collected and analyze it for further user intent and interests or attributes. This application will take this information and use it to help build the machine learning model proposed.)
“generating in the form of knowledge graph (a) a global knowledge repository for the domain of interest based on artifacts provided by a Subject Matter Expert (SME), and (b) a local knowledge repository for the current architecture based on artifacts provided by the user, via the one or more hardware processors;” (Detailed Description, pp. 4, [0064]; “In various embodiments, the user can use the interface to identify the one or more locations of data that will be used for generating the machine learning model. As described above, the data can be stored locally or remotely. In various embodiments, the user can enter a network location for the data (e.g., Internet Protocol (IP) address). In various embodiments, the user can select a folder from a plurality of folders on a storage device (e.g., a cloud-storage device).” This system will take in a location of data and create a repository to help build the system. This repository can be local, contained on a USB device or a local computer. The repository can also be global as in contained on a cloud server or programming repository like GitHub.)
“generating a feature matrix, by extracting one or more entities from the global knowledge repository in accordance with a plurality of inputs, provided by the SME, and comprising a list of properties, a weightage of each of the list of properties and a list of components for the domain of interest, via the one or more hardware processors;” (Machine Learning Infrastructure Platform, pp. 3, [0050]; “A model composition engine 132 can be executed on one or more computing systems (e.g., infrastructure 128). The model composition engine 132 can receive inputs from a user 116 through an interface 104. The interface 104 can include various graphical user interfaces with various menus and user selectable elements. The interface 104 can include a chatbot (e.g., a text based or voice-based interface). The user 116 can interact with the interface 104 to identify one or more of: a location of data, a desired prediction of machine learning application, and various performance metrics for the machine learning model. The model composition engine 132 can interface with library components 168 to identify various pipelines 136, micro service routines 140, software modules 144, and infrastructure models 148 that can be used in the creation of the machine learning model 112.” The Machine learning composition engine will take in data and constraints to develop a machine learning model based on the user’s intent. This will use data from the repositories given by the user and develop an application according to the user’s intent and conversation. This system will extract information from different sources in order to develop the initial machine learning model.)
“sequentially querying the user, via the one or more hardware processors, with a set of decision questions starting with context of the initial decision node and receiving a corresponding decision response for each decision question among the set of decision questions, wherein each decision question has an associated decision response type, one or more choices for a decision response, dependency links of a decision question with remaining decision questions among the set of decision questions, a decision question weightage and information in an attribute value form associated with each decision response, wherein each successive decision question among the set of decision questions is identified based on the decision response of the user to a previous decision question;” (Detailed Description, pp. 8, [0107-0108]; “The techniques can receive multiple inputs from the user. Based on the multiple inputs, the techniques can determine the intentions of the user to establish a machine learning architecture. In the technique, the intelligent assistant can analyze the inputs and recommend various options for a user based on the analysis. The techniques can generate code for the machine learning architecture. The code can be stored and reused for one or more different machine learning processes. The disclosed techniques simplify the process of developing intelligent applications. An intelligent assistant can employ a chatbot. A chatbot is software module that conducts a conversation via auditory or textual methods as a dialog system for interacting with a user. The chatbots can use sophisticated natural language processing systems, or can scan for keywords within the input, then pull a reply with the most matching keywords, or the most similar wording pattern, from a database. The chatbot can be used to set up an artificial intelligence system that can answer a question. 
In this way, the artificial intelligence can be used for translating information provided to a software module and hardware infrastructure in a plain language manner.” This system will use a chatbot to communicate with the user about the developing machine learning model. This will ask and answer questions based on the current project the user is working on. This will save the answers in some form and will analyze that conversation later. Each question is generated dynamically and is not preset. The questions posed by the system will be based on the information already gathered during the conversation. This can use many different forms of chatbots, which can take into account the user's level of skill in the field or domain.)
“generating, via the one or more hardware processors, a decision graph tracing a plurality of decision nodes starting from the initial decision node based on each decision response for each of the decision questions; and” (Chatbot for Defining a Machine Learning Solution, pp. 7, [0105]; “Machine learning models are trained for generating predictive outcomes for code integration requests. In one aspect, techniques can be used for defining a machine learning solution, including receiving a first input (e.g., aural, textual, or GUI) describing a problem for the machine learning solution. A model composition engine 132, as shown in FIG. 1, can transcribe the first input into one or more text fragments. The model composition engine 132 can determine an intent of a user to create a machine learning architecture based at least in part on the one or more text fragments.” This system will take in the data from the users and analyze it. The system will generate a model based on the keywords and the output of the analytic engine. This teaches the use of reviewing the conversation and saving important data for later use in a particular structure.)
“utilizing, via the one or more hardware processors, a decision path identified from the decision graph and the information in the attribute value form associated with each decision response for providing the contextual advisory to build XaaS for the domain of interest using document templates.” (Chatbot for Defining a Machine Learning Solution, pp. 7-8, [0105]; “The techniques can include correlating the one or more text fragments to one or more machine learning frameworks of a plurality of models. The techniques can include presenting (e.g., interface or audio) the one or more machine learning model to the user. The model composition engine 132 can receive a selection of one or more machine learning model (e.g., classification, recommender, reinforcement learning). The model composition engine 132 can receive several other user inputs including a second input identifying a data source for the machine learning architecture and a third input of one or more constraints (e.g., resources, location, security, or privacy) for the machine learning architecture. The model composition engine 132 can generate a plurality of code for the machine learning architecture based at least in part on the selected model, the second input identifying the data source, and the third input identifying the one or more constraints. The generated code can be stored in a memory.” After the conversation with the user is completed and the data has been input into the model generator an output model is produced. This will be presented to the user as a final result. The model generator will also train the model before sending it to the user.)
Polleri fails to explicitly disclose, “A processor implemented method for generating a contextual advisory for Everything as a service (XaaS), the method comprising:” and “obtaining, via one or more hardware processors, a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,”.
However, Debnath discloses, “A processor implemented method for generating a contextual advisory for Everything as a service (XaaS), the method comprising:” (Introduction, pp. 734; “In this paper, we explore ways to overcome the aforementioned challenges based on new technologies and platforms and provide an algorithm for creating a composite application through automated service composition with minimal human intervention. Our efforts are based on first studying the XaaS platform landscape and then coming up with a XaaS relevant and practical schema for service descriptions and matching. We have implemented our approach as a prototype called AutoComp and we demonstrate its use on an internal XaaS.” This paper proposes a method to generate XaaS for a user. This will gather information from the user and will generate a manifest based on the interaction and requirements from one or more users.)
“obtaining, via one or more hardware processors, a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,” (Services in Modern Cloud Platforms, pp. 737; “Based upon our study and considerations, we have assumed the following web service description schema, which is further used for our use-case services and consumed by our proposed algorithm. The documentation attributes are: [Name, Category, Description, Tags, #Inputs, #Outputs, InputDesc, OutputDesc] Here, Name is the service name, Category is the name of the category heading under which a service belongs to. Just like the other platforms, we define our own set of predefined categories. Description is a natural language text summary of the service. We have incorporated the concept of tags in our implementation. The core features of a service can be expressed as multiple number of tags. This benefits in searching for services with at least one matching tag between the requested and the already available services. Therefore, Tags attribute stores the associated tags to a particular web service. #Inputs is the total number of input items accepted by the service and InputDesc shows the input parameter names with their corresponding data types (whether a string or a number or a date, etc.). #Outputs and OutputDesc are defined similarly in context of service outputs. Figure 4 portrays a sample REST API of a Validator service and the values for all the mentioned parameters.” This program will help a user develop a program without any experience. 
Initially the user will input different data into a form so the computer can evaluate that information. The information gathered includes user inclination, domain, level of detail, attributes and other tags.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Polleri and Debnath. Polleri teaches a system that is able to communicate with a user with a chatbot to design a complex system based on the conversation with the user. Debnath teaches a system that is able to communicate with a user to design a complex system or an XaaS. One of ordinary skill would have been motivated to combine a system that is able to communicate with a user via a chatbot to produce a generated complex machine learning model to be used for services or software with a system that is able to design a complex system or XaaS for later use, “Automatically composing applications will gain center stage as new technologies and platforms evolve and users need for nimble and highly customized applications increase. The upsurge in the cloud computing and the ever-increasing demand and supply of services through XaaS, has led us to look at an approach to re-use the existing applications to achieve a higher objective quickly and efficiently, instead of the traditional way of creating a new application (or parts of it) from scratch whenever requirements arrive or change. In this paper, we have attempted to provide a solution and discuss various aspects of the challenges and how we address them. We also presented a prototype implementation of our approach called AutoComp that automates creation of composite application on a XaaS. Our current work focuses on obtaining requirements for the composite app, after which appropriate services are assembled into a single composite plan, followed by the deployment and execution of the composed app on the cloud; this essentially completes the complete app delivery cycle. We have used services in the form of REST APIs which is the current trend for Web services. 
Also, as there are no global standards to document a service, we have started working with the basic minimum information as of now, so that we stick to the more real word practical ones.”, (Debnath, Conclusions and Future Work, pp. 740).
Regarding claim 2, Polleri discloses, “wherein relevant and contextual choices are generated for the decision response by trained Machine Learning (ML) models using combination of the local knowledge repository and the global knowledge repository in accordance with a plurality of features present in the feature matrix, wherein the choices are ranked according to probability of generating best possible decision, and wherein for any unanswered question, the decision response is identified by the trained ML model.” (Safe Serialization of the Predicted Pipeline, pp. 23-24, [0266]; “Aspects of the present disclosure provide various techniques (e.g., methods, systems, devices, computer-readable media storing computer-executable instructions used to perform computing functions, etc.) for generating and using machine learning models to predict outcomes of code integration requests. As discussed in more detail below, machine learning models may be generated and trained based on previous code integration requests submitted to and processed by a software architecture authorization system. Based on the machine learning and artificial intelligence-based techniques used, one or more models may be trained which may be developer-specific, project-specific, and organization- specific, meaning that trained models may output different outcome predictions, confidence levels, causes, and suggestions depending on the current developer, project, and organization. The machine learning models also may be trained based on specific inputs received in connection with previous code integration requests (e.g., the software library to be integrated, the target source code module, the reason for the code integration requests and/or functionality to be used within the library, etc.). Then, following the generation and training of one or more machine learning models, such models may be used to predict outcomes (e.g., approval or denial for authorization) for a potential code integration request. 
Such models may also be used to autonomously and independent identify the reasons associated with the predictions (e.g., security vulnerabilities, license incompatibility, etc.), and/or to suggest alternative software libraries that may be integrated instead to provide the desired functionality.” The system in this application uses machine learning models to evaluate user intent and develop a machine learning model based on their intent. This will use the repositories stated by the user and will use that data to develop the new machine learning model. The generated machine learning model will be developed based on the users’ requirements and the constraints of the data given to the system.)
Regarding claim 3, Polleri fails to explicitly disclose, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user.”.
However, Debnath discloses, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user.” (An Algorithm for Composite Manifest/Plan Generation, pp. 739; “An interaction with the composer is required at the end of each iteration to confirm with him, whether the service shortlisted by the algorithm is to be added to the manifest. Also, when a service set contains multiple services and only one service needs to be selected, the composer is consulted for his choice in this case. This is done by the SelectServ method which is called from multiple points in Algorithm 1 and it is explained by Algorithm 2. One more place where composer discretion is important, is when ServOT is not null. It means that there is at least one service which is satisfying the output requirements, as intended output, expressed by the composer. The composer needs to confirm if composition has ended, therefore if endComp is true, then Success becomes true and the algorithm ends. The result is a composite manifest which is a sequence of a set of services in a particular order, that achieves the higher objective as intended by the composer.” This system will use user data, statements and requirements to produce a manifest. This manifest is a sequence of services to be used to develop the proposed XaaS system. The output is a textual roadmap on how to implement the suggested XaaS system.)
Regarding claim 4, Polleri discloses, “wherein, the feature matrix is a structure to host data on a graph database for comparing the one or more entities and associated information obtained from the global knowledge repository across the plurality of inputs.” (Machine Learning Infrastructure Platform, pp. 3, [0050]; “A model composition engine 132 can be executed on one or more computing systems (e.g., infrastructure 128). The model composition engine 132 can receive inputs from a user 116 through an interface 104. The interface 104 can include various graphical user interfaces with various menus and user selectable elements. The interface 104 can include a chatbot (e.g., a text based or voice-based interface). The user 116 can interact with the interface 104 to identify one or more of: a location of data, a desired prediction of machine learning application, and various performance metrics for the machine learning model. The model composition engine 132 can interface with library components 168 to identify various pipelines 136, micro service routines 140, software modules 144, and infrastructure models 148 that can be used in the creation of the machine learning model 112.” The model composition engine will use the information and requirements from the user. It will also use the data from the given repositories to develop a machine learning model for the user. The system will develop and train the model with the data from the given repositories.)
Regarding claim 5, Polleri discloses, “wherein each question among the set of questions has an associated response type, one or more choices for a response, dependency links of a question with remaining questions among the set of questions, and a question weightage.” (Bot and Analytic System, pp. 10, [0124]; “In some embodiments, the bot system may intelligently handle end user interactions without interaction with an administrator or developer of the bot system. For example, an end user may send one or more messages to the bot system in order to achieve a desired goal. A message may include certain content, such as text, emojis, audio, image, video, or other method of conveying a message. In some embodiments, the bot system may convert the content into a standardized form (e.g., a representational state transfer (REST) call against enterprise services with the proper parameters) and generate a natural language response. The bot system may also prompt the end user for additional input parameters or request other additional information. In some embodiments, the bot system may also initiate communication with the end user, rather than passively responding to end user utterances. Described herein are various techniques for identifying an explicit invocation of a bot system and determining an input for the bot system being invoked. In certain embodiments, explicit invocation analysis is performed by a master bot based on detecting an invocation name in an utterance. In response to detection of the invocation name, the utterance may be refined for input to a skill bot associated with the invocation name.” This system uses a conversational chatbot to communicate with the user. The user will communicate requirements, architecture and other machine learning components to develop the model for the user. The user can ask questions to the chatbot and the chatbot can use clarifying questions and generated questions to gain more information from the user. 
Depending on the user's answers in the conversation, certain keywords are given more weight and will be further analyzed by the analytic engine proposed in the application.)
Regarding claim 6, Polleri discloses, “wherein each successive question among the set of questions is identified based on the response of the user to a previous question and is mapped to an initial inference node among a plurality of inference nodes preset in an inference table,” (Detailed Description, pp. 5, [0065]; “At 204, the functionality includes receiving a second user input identifies a problem for which a solution can be generated by the machine learning application. In various embodiments the second user input can specify a type of problem that the user would like to implement machine learning for. In various embodiments, the problem can be identified through input of text via a user interface. In various embodiments, the problems can be entered as native language speech or text (e.g., through the use of a chatbot).” This application also helps a user develop a program with no prior experience. This will use a chatbot to communicate with the user. The chatbot will pose questions about the project and will answer questions accordingly.)
“wherein each inference node corresponds to a set of inferences with each inference among the set of inferences having an initial inference weightage,” (Detailed Description, pp. 6, [0089]; “The technique can decipher the native language to understand the goals of the machine learning model. Some of types of problems that machine learning can solve can include classification, regression, product recommendations, medical diagnosis, financial analysis, predictive maintenance, image and sound recognition, text recognition, and tabular data analysis. The techniques will recognize one or more keywords in the native language speech to recommend or select a particular machine learning algorithm.” This application uses a chatbot as an LLM to process the information the user inputs. This chatbot can be any form of chatbot that uses modern LLM architecture. This will save the user's conversation with the chatbot, which is later analyzed by an analytic engine to pull more information out of the conversation.)
“wherein the initial inference node and the initial inference weightage of each inference node is iteratively updated in accordance with the response of the user to each question, and” (Detailed Description, pp. 5, [0067]-[0068]; “At 206, the functionality includes receiving a third input of one or more performance requirements for the machine learning application. The third input can be entered as native language speech or text (e.g., through the use of a chatbot) or selected via an interface (e.g., a graphical user interface). [0068] The performance requirements can include Quality of Service (QoS) metrics. QoS metrics refer to objective, system-related characteristics that provide insight into the performance of the delivery service at the network/transmission level. QoS metrics are parameters that reflect the quality of service on the sender side, rather than on the application side. Example QoS metrics can include system latency and reliability.” This is an example of an input to the chatbot. The user can add Quality of Service metrics to the program to generate a more complete model. This chatbot will take in this information and save and weight it for later use. This will take the information from the user, determine its value, and look for keywords in the text to help develop the model.)
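The iterative update recited in this limitation, each user response adjusting the preset inference weightages, can be sketched as follows; the inference table entries, keyword matching, and boost value are invented solely for illustration:

```python
# Inference table: each inference node holds a set of inferences, each with a
# preset initial inference weightage.
inference_table = {
    "cloud-native": {"uses-containers": 1.0, "uses-ci-cd": 1.0},
    "on-premise":   {"legacy-stack": 1.0, "manual-deploy": 1.0},
}

def apply_response(table, response_keywords, boost=0.5):
    """Iteratively boost every inference matching a keyword from the response."""
    for inferences in table.values():
        for inf in inferences:
            if any(kw in inf for kw in response_keywords):
                inferences[inf] += boost

def current_node(table):
    """The inference node whose inferences carry the most accumulated weight."""
    return max(table, key=lambda n: sum(table[n].values()))

apply_response(inference_table, ["containers"])  # answer to question 1
apply_response(inference_table, ["ci-cd"])       # answer to question 2
print(current_node(inference_table))  # cloud-native
```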
“wherein each inference node among the set of interference nodes is mapped to a decision node from among a plurality of decision nodes;” (Detailed Description, pp. 5-6, [0079]; “The library components 168 can include metadata that identifies features and functions of each of the library components 168. The technique can determine the one or more library components 168 to select based at least in part on the identified problem received via the second input to achieve the performance metrics of the third input. One or more variables of each of the library components can be adjusted to customize the machine learning model to achieve a solution to the identified problem.” This model will take in users’ statements and requirements for the machine learning model to be generated. This will save the responses and keywords for later analysis and use. This teaches that depending on what is given to the system the system will store important text, giving certain things more weight.)
“identify a final inference node and a corresponding decision node among the plurality of decision nodes as an initial decision node, post querying the user with the set of questions, wherein the final interference node indicates initial user inclination and a current architecture of the user in the domain of interest;” (Introduction, pp. 9, [0118]; “The analytic engine may incorporate the statistics into the aggregate path diagram to determine additional information such as how many conversations flowed through the intent-specific paths of the dialog flow for a given period, the number of conversations maintained between each state, and the different execution paths taken because the conversation branched due to values getting set (or not set), or dead-ended because of some other problem like a malfunctioning custom component. Optionally, the bot system may be retrained using the statistics and aggregated path diagram to improve the performance of the bot system, such as retraining the intent classification models of the bot system to more accurately determining the user intents.” The user’s conversation with the chatbot is analyzed by the system. It is stated that any chatbot can be used for the data collection and this system will analyze the text. This will review the information the chatbot collected and analyze it for further user intent and interests or attributes. This application will take this information and use it to help build the machine learning model proposed.)
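Selecting the final inference node after the last question, and mapping it to the initial decision node, can be shown with a minimal sketch; the accumulated weights and the node-to-decision mapping are hypothetical values, not taken from either reference:

```python
# Accumulated weights after the final question, and a preset mapping from
# each inference node to its corresponding decision node.
node_weights = {"beginner": 1.2, "intermediate": 3.4, "advanced": 2.1}
node_to_decision = {
    "beginner": "D-guided-setup",
    "intermediate": "D-migration-plan",
    "advanced": "D-optimization",
}

final_inference_node = max(node_weights, key=node_weights.get)  # user inclination
initial_decision_node = node_to_decision[final_inference_node]
print(final_inference_node, initial_decision_node)  # intermediate D-migration-plan
```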
“generate in the form of knowledge graph (a) a global knowledge repository for the domain of interest based on artifacts provided by a Subject Matter Expert (SME), and (b) a local knowledge repository for the current architecture based on artifacts provided by the user;” (Detailed Description, pp. 4, [0064]; “In various embodiments, the user can use the interface to identify the one or more locations of data that will be used for generating the machine learning model. As described above, the data can be stored locally or remotely. In various embodiments, the user can enter a network location for the data (e.g., Internet Protocol (IP) address). In various embodiments, the user can select a folder from a plurality of folders on a storage device (e.g., a cloud-storage device).” This system will take in a location of data and create a repository to help build the system. This repository can be local, contained on a USB device or a local computer. The repository can also be global as in contained on a cloud server or programming repository like GitHub.)
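The two claimed repositories, a global knowledge graph from SME artifacts and a local one from user artifacts, can be modeled as simple triple stores; all node and edge names below are illustrative assumptions:

```python
# Two triple-style knowledge graphs: global (SME artifacts for the domain of
# interest) and local (user artifacts for the current architecture).
def add_triple(graph, subj, pred, obj):
    graph.setdefault(subj, []).append((pred, obj))

global_kg, local_kg = {}, {}

# SME-provided domain artifacts
add_triple(global_kg, "XaaS-platform", "requires", "container-orchestration")
add_triple(global_kg, "container-orchestration", "implemented-by", "Kubernetes")

# User-provided current-architecture artifacts
add_triple(local_kg, "current-arch", "runs-on", "virtual-machines")

print(global_kg["XaaS-platform"])  # [('requires', 'container-orchestration')]
```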
“generate a feature matrix, by extracting one or more entities from the global knowledge repository in accordance with a plurality of inputs, provided by the SME, and comprising a list of properties, a weightage of each of the list of properties and a list of components for the domain of interest;” (Machine Learning Infrastructure Platform, pp. 3, [0050]; “A model composition engine 132 can be executed on one or more computing systems (e.g., infrastructure 128). The model composition engine 132 can receive inputs from a user 116 through an interface 104. The interface 104 can include various graphical user interfaces with various menus and user selectable elements. The interface 104 can include a chatbot (e.g., a text based or voice-based interface). The user 116 can interact with the interface 104 to identify one or more of: a location of data, a desired prediction of machine learning application, and various performance metrics for the machine learning model. The model composition engine 132 can interface with library components 168 to identify various pipelines 136, micro service routines 140, software modules 144, and infrastructure models 148 that can be used in the creation of the machine learning model 112.” The Machine learning composition engine will take in data and constraints to develop a machine learning model based on the user’s intent. This will use data from the repositories given by the user and develop an application according to the user’s intent and conversation. This system will extract information from different sources in order to develop the initial machine learning model.)
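The claimed feature matrix, entities (components) scored against an SME-provided list of properties with per-property weightages, can be sketched as follows; the components, properties, and weightage values are fabricated for illustration only:

```python
# Rows are components extracted from the global repository; columns are the
# SME-listed properties; each property carries an SME-assigned weightage.
properties = ["scalability", "cost", "security"]
property_weightage = {"scalability": 0.5, "cost": 0.3, "security": 0.2}
components = {
    "managed-db":  {"scalability": 4, "cost": 2, "security": 5},
    "self-hosted": {"scalability": 2, "cost": 4, "security": 3},
}

def weighted_score(component):
    """Weightage-adjusted total score for one component."""
    return sum(property_weightage[p] * components[component][p] for p in properties)

scores = {c: round(weighted_score(c), 2) for c in components}
print(scores)  # {'managed-db': 3.6, 'self-hosted': 2.8}
```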
“sequentially query with a set of decision questions starting with context of the initial decision node and receiving a corresponding decision response for each decision question among the set of decision questions, wherein each decision question has an associated decision response type, one or more choices for a decision response, dependency links of a decision question with remaining decision questions among the set of decision questions, a decision question weightage and information in an attribute value form associated with each decision response, wherein each successive decision question among the set of decision questions is identified based on the decision response of the user to a previous decision question;” (Detailed Description, pp. 8, [0107-0108]; “The techniques can receive multiple inputs from the user. Based on the multiple inputs, the techniques can determine the intentions of the user to establish a machine learning architecture. In the technique, the intelligent assistant can analyze the inputs and recommend various options for a user based on the analysis. The techniques can generate code for the machine learning architecture. The code can be stored and reused for one or more different machine learning processes. The disclosed techniques simplify the process of developing intelligent applications. An intelligent assistant can employ a chatbot. A chatbot is software module that conducts a conversation via auditory or textual methods as a dialog system for interacting with a user. The chatbots can use sophisticated natural language processing systems, or can scan for keywords within the input, then pull a reply with the most matching keywords, or the most similar wording pattern, from a database. The chatbot can be used to set up an artificial intelligence system that can answer a question. 
In this way, the artificial intelligence can be used for translating information provided to a software module and hardware infrastructure in a plain language manner.” This system will use a chatbot to communicate with the user about the developing machine learning model. This will ask and answer questions based on the current project the user is working on. This will save the answers in some form and will analyze that conversation later. Each question is generated dynamically and is not preset. The questions posed by the system will be based on the information already gathered during the conversation. This can use many different forms of chatbots, which can take into account the user's level of skill in the field or domain.)
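The branching recited in this limitation, each successive decision question identified from the decision response to the previous one, reduces to a lookup over dependency links; the question bank and branch keys below are hypothetical:

```python
# Each decision question maps possible decision responses to the next question.
question_bank = {
    "dq1": {"text": "Deploy on public or private cloud?",
            "branches": {"public": "dq2", "private": "dq3"}},
    "dq2": {"text": "Which public provider?", "branches": {}},
    "dq3": {"text": "Do you have on-prem Kubernetes?", "branches": {}},
}

def next_question(current_qid, response):
    """Identify the successive decision question from the previous response."""
    branches = question_bank[current_qid]["branches"]
    return branches.get(response)  # None ends the questioning

print(next_question("dq1", "public"))  # dq2
```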
“generate a decision graph tracing a plurality of decision nodes starting from the initial decision node based on each decision response for each of the decision questions; and” (Chatbot for Defining a Machine Learning Solution, pp. 7, [0105]; “Machine learning models are trained for generating predictive outcomes for code integration requests. In one aspect, techniques can be used for defining a machine learning solution, including receiving a first input (e.g., aural, textual, or GUI) describing a problem for the machine learning solution. A model composition engine 132, as shown in FIG. 1, can transcribe the first input into one or more text fragments. The model composition engine 132 can determine an intent of a user to create a machine learning architecture based at least in part on the one or more text fragments.” This system will take in the data from the users and analyze it. The system will generate a model based on the keywords and the output of the analytic engine. This teaches the use of reviewing the conversation and saving important data for later use in a particular structure.)
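Tracing the decision graph from the initial decision node through each decision response can be sketched minimally; the edge set and responses below are invented for illustration:

```python
# Decision-graph edges: (current node, decision response) -> next decision node.
edges = {
    ("D-start", "migrate"): "D-assess",
    ("D-assess", "lift-and-shift"): "D-rehost",
    ("D-assess", "refactor"): "D-replatform",
}

def trace(start, responses):
    """Trace the decision path from the initial decision node."""
    path, node = [start], start
    for r in responses:
        node = edges[(node, r)]
        path.append(node)
    return path

print(trace("D-start", ["migrate", "refactor"]))
# ['D-start', 'D-assess', 'D-replatform']
```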
“utilize a decision path identified from the decision graph and the information in the attribute value form associated with each decision response for providing the contextual advisory to build XaaS for the domain of interest using document templates.” (Chatbot for Defining a Machine Learning Solution, pp. 7-8, [0105]; “The techniques can include correlating the one or more text fragments to one or more machine learning frameworks of a plurality of models. The techniques can include presenting (e.g., interface or audio) the one or more machine learning model to the user. The model composition engine 132 can receive a selection of one or more machine learning model (e.g., classification, recommender, reinforcement learning). The model composition engine 132 can receive several other user inputs including a second input identifying a data source for the machine learning architecture and a third input of one or more constraints (e.g., resources, location, security, or privacy) for the machine learning architecture. The model composition engine 132 can generate a plurality of code for the machine learning architecture based at least in part on the selected model, the second input identifying the data source, and the third input identifying the one or more constraints. The generated code can be stored in a memory.” After the conversation with the user is completed and the data has been input into the model generator an output model is produced. This will be presented to the user as a final result. The model generator will also train the model before sending it to the user.)
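The final claimed step, a static document template with marked dynamic sections populated from the decision path and the attribute-value pairs, can be illustrated with Python's standard `string.Template`; the placeholders and attribute values are hypothetical:

```python
from string import Template

# Static advisory template with marked dynamic sections ($-placeholders),
# populated from the decision path and collected attribute-value pairs.
advisory_template = Template(
    "Recommended roadmap for $domain:\n"
    "1. Target platform: $platform\n"
    "2. Decision path followed: $path"
)

attributes = {"domain": "retail XaaS", "platform": "Kubernetes"}
decision_path = ["D-start", "D-assess", "D-replatform"]

advisory = advisory_template.substitute(
    **attributes, path=" -> ".join(decision_path))
print(advisory)
```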
Polleri fails to explicitly disclose, “A system for generating a contextual advisory for Everything as a service (XaaS), the system comprising: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to:” and “obtain a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,”.
However, Debnath discloses, “A system for generating a contextual advisory for Everything as a service (XaaS), the system comprising: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to:” (AutoComp- An Implementation of our Composition Approach, pp. 739; “We decided to implement our approach a prototype and deployed it on our internal Xaas. This XaaS mimics deployment of various applications and their services on a private Cloud and contains some typical enterprise services which may be used in different composite applications. The architecture and deployment diagram of AutoComp as well as a prototype execution for the chosen use-case is described in the Figure 8.” This paper discloses a system which implements their method. This is seen in figure 8. This was designed to be used on a large system containing servers and multiple end user devices. For the experiment, they implemented the system on a generic computing system. This system contains memory linked to processors to execute machine code stored in the memory.)
“obtain a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,” (Services in Modern Cloud Platforms, pp. 737; “Based upon our study and considerations, we have assumed the following web service description schema, which is further used for our use-case services and consumed by our proposed algorithm. The documentation attributes are: [Name, Category, Description, Tags, #Inputs, #Outputs, InputDesc, OutputDesc] Here, Name is the service name, Category is the name of the category heading under which a service belongs to. Just like the other platforms, we define our own set of predefined categories. Description is a natural language text summary of the service. We have incorporated the concept of tags in our implementation. The core features of a service can be expressed as multiple number of tags. This benefits in searching for services with at least one matching tag between the requested and the already available services. Therefore, Tags attribute stores the associated tags to a particular web service. #Inputs is the total number of input items accepted by the service and InputDesc shows the input parameter names with their corresponding data types (whether a string or a number or a date, etc.). #Outputs and OutputDesc are defined similarly in context of service outputs. Figure 4 portrays a sample REST API of a Validator service and the values for all the mentioned parameters.” This program will help a user develop a program without any experience. Initially the user will input different data into a form so the computer can evaluate that information. 
The information gathered includes user inclination, domain, level of detail, attributes and other tags.)
Regarding claim 7, Polleri discloses, “wherein the one or more hardware processors are configured to generate relevant and contextual choices for the decision response by trained Machine Learning (ML) models using combination of the local knowledge repository and the global knowledge repository in accordance with a plurality of features present in the feature matrix, wherein the choices are ranked according to probability of generating best possible decision, and wherein for any unanswered question, the decision response is identified by the trained ML model.” (Safe Serialization of the Predicted Pipeline, pp. 23-24, [0266]; “Aspects of the present disclosure provide various techniques (e.g., methods, systems, devices, computer-readable media storing computer-executable instructions used to perform computing functions, etc.) for generating and using machine learning models to predict outcomes of code integration requests. As discussed in more detail below, machine learning models may be generated and trained based on previous code integration requests submitted to and processed by a software architecture authorization system. Based on the machine learning and artificial intelligence-based techniques used, one or more models may be trained which may be developer-specific, project-specific, and organization- specific, meaning that trained models may output different outcome predictions, confidence levels, causes, and suggestions depending on the current developer, project, and organization. The machine learning models also may be trained based on specific inputs received in connection with previous code integration requests (e.g., the software library to be integrated, the target source code module, the reason for the code integration requests and/or functionality to be used within the library, etc.). 
Then, following the generation and training of one or more machine learning models, such models may be used to predict outcomes (e.g., approval or denial for authorization) for a potential code integration request. Such models may also be used to autonomously and independent identify the reasons associated with the predictions (e.g., security vulnerabilities, license incompatibility, etc.), and/or to suggest alternative software libraries that may be integrated instead to provide the desired functionality.” The system in this application uses machine learning models to evaluate user intent and develop a machine learning model based on that intent. This will use the repositories stated by the user and will use that data to develop the new machine learning model. The generated machine learning model will be developed based on the user’s requirements and the constraints of the data given to the system.)
Regarding claim 8, Polleri fails to explicitly disclose, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user.”.
However, Debnath discloses, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user.” (An Algorithm for Composite Manifest/Plan Generation, pp. 739; “An interaction with the composer is required at the end of each iteration to confirm with him, whether the service shortlisted by the algorithm is to be added to the manifest. Also, when a service set contains multiple services and only one service needs to be selected, the composer is consulted for his choice in this case. This is done by the SelectServ method which is called from multiple points in Algorithm 1 and it is explained by Algorithm 2. One more place where composer discretion is important, is when ServOT is not null. It means that there is at least one service which is satisfying the output requirements, as intended output, expressed by the composer. The composer needs to confirm if composition has ended, therefore if endComp is true, then Success becomes true and the algorithm ends. The result is a composite manifest which is a sequence of a set of services in a particular order, that achieves the higher objective as intended by the composer.” This system will use user data, statements, and requirements to produce a manifest. This manifest is a sequence of services to be used to develop the proposed XaaS system. The output is a textual roadmap on how to implement the suggested XaaS system.)
Regarding claim 9, Polleri discloses, “wherein, the feature matrix is a structure to host data on a graph database for comparing the one or more entities and associated information obtained from the global knowledge repository across the plurality of inputs.” (Machine Learning Infrastructure Platform, pp. 3, [0050]; “A model composition engine 132 can be executed on one or more computing systems (e.g., infrastructure 128). The model composition engine 132 can receive inputs from a user 116 through an interface 104. The interface 104 can include various graphical user interfaces with various menus and user selectable elements. The interface 104 can include a chatbot (e.g., a text based or voice-based interface). The user 116 can interact with the interface 104 to identify one or more of: a location of data, a desired prediction of machine learning application, and various performance metrics for the machine learning model. The model composition engine 132 can interface with library components 168 to identify various pipelines 136, micro service routines 140, software modules 144, and infrastructure models 148 that can be used in the creation of the machine learning model 112.” The model composition engine will use the information and requirements from the user. It will also use the data from the given repositories to develop a machine learning model for the user. The system will develop and train the model with the data from the given repositories.)
Regarding claim 10, Polleri discloses, “wherein each question among the set of questions has an associated response type, one or more choices for a response, dependency links of a question with remaining questions among the set of questions, and a question weightage.” (Bot and Analytic System, pp. 10, [0124]; “In some embodiments, the bot system may intelligently handle end user interactions without interaction with an administrator or developer of the bot system. For example, an end user may send one or more messages to the bot system in order to achieve a desired goal. A message may include certain content, such as text, emojis, audio, image, video, or other method of conveying a message. In some embodiments, the bot system may convert the content into a standardized form (e.g., a representational state transfer (REST) call against enterprise services with the proper parameters) and generate a natural language response. The bot system may also prompt the end user for additional input parameters or request other additional information. In some embodiments, the bot system may also initiate communication with the end user, rather than passively responding to end user utterances. Described herein are various techniques for identifying an explicit invocation of a bot system and determining an input for the bot system being invoked. In certain embodiments, explicit invocation analysis is performed by a master bot based on detecting an invocation name in an utterance. In response to detection of the invocation name, the utterance may be refined for input to a skill bot associated with the invocation name.” This system uses a conversational chatbot to communicate with the user. The user will communicate requirements, architecture and other machine learning components to develop the model for the user. The user can ask questions to the chatbot, and the chatbot can use clarifying questions and generated questions to gain more information from the user. Depending on the user's answers in the conversation, certain keywords are given more weight and will be further analyzed by the analytic engine proposed in the application.)
Regarding claim 11, Polleri discloses, “wherein each successive question among the set of questions is identified based on the response of the user to a previous question and is mapped to an initial inference node among a plurality of inference nodes preset in an inference table,” (Detailed Description, pp. 5, [0065]; “At 204, the functionality includes receiving a second user input identifies a problem for which a solution can be generated by the machine learning application. In various embodiments the second user input can specify a type of problem that the user would like to implement machine learning for. In various embodiments, the problem can be identified through input of text via a user interface. In various embodiments, the problems can be entered as native language speech or text (e.g., through the use of a chatbot).” This application also helps a user develop a program with no prior experience. This will use a chatbot to communicate with the user. The chatbot will pose questions about the project and will answer questions accordingly.)
“wherein each inference node corresponds to a set of inferences with each inference among the set of inferences having an initial inference weightage,” (Detailed Description, pp. 6, [0089]; “The technique can decipher the native language to understand the goals of the machine learning model. Some of types of problems that machine learning can solve can include classification, regression, product recommendations, medical diagnosis, financial analysis, predictive maintenance, image and sound recognition, text recognition, and tabular data analysis. The techniques will recognize one or more keywords in the native language speech to recommend or select a particular machine learning algorithm.” This application uses a chatbot as an LLM to process the information the user inputs. This chatbot can be any form of chatbot that uses modern LLM architecture. This will save the user's conversation with the chatbot, which is later analyzed by an analytic engine to pull more information out of the conversation.)
“wherein the initial inference node and the initial inference weightage of each inference node is iteratively updated in accordance with the response of the user to each question, and” (Detailed Description, pp. 5, [0067]-[0068]; “At 206, the functionality includes receiving a third input of one or more performance requirements for the machine learning application. The third input can be entered as native language speech or text (e.g., through the use of a chatbot) or selected via an interface (e.g., a graphical user interface). [0068] The performance requirements can include Quality of Service (QoS) metrics. QoS metrics refer to objective, system-related characteristics that provide insight into the performance of the delivery service at the network/transmission level. QoS metrics are parameters that reflect the quality of service on the sender side, rather than on the application side. Example QoS metrics can include system latency and reliability.” This is an example of an input to the chatbot. The user can add Quality of Service metrics to the program to generate a more complete model. This chatbot will take in this information and save and weight it for later use. This will take the information from the user, determine its value, and look for keywords in the text to help develop the model.)
“wherein each inference node among the set of interference nodes is mapped to a decision node from among a plurality of decision nodes;” (Detailed Description, pp. 5-6, [0079]; “The library components 168 can include metadata that identifies features and functions of each of the library components 168. The technique can determine the one or more library components 168 to select based at least in part on the identified problem received via the second input to achieve the performance metrics of the third input. One or more variables of each of the library components can be adjusted to customize the machine learning model to achieve a solution to the identified problem.” This model will take in users’ statements and requirements for the machine learning model to be generated. This will save the responses and keywords for later analysis and use. This teaches that depending on what is given to the system the system will store important text, giving certain things more weight.)
“identifying a final inference node and a corresponding decision node among the plurality of decision nodes as an initial decision node, post querying the user with the set of questions, wherein the final interference node indicates initial user inclination and a current architecture of the user in the domain of interest;” (Introduction, pp. 9, [0118]; “The analytic engine may incorporate the statistics into the aggregate path diagram to determine additional information such as how many conversations flowed through the intent-specific paths of the dialog flow for a given period, the number of conversations maintained between each state, and the different execution paths taken because the conversation branched due to values getting set (or not set), or dead-ended because of some other problem like a malfunctioning custom component. Optionally, the bot system may be retrained using the statistics and aggregated path diagram to improve the performance of the bot system, such as retraining the intent classification models of the bot system to more accurately determining the user intents.” The user’s conversation with the chatbot is analyzed by the system. It is stated that any chatbot can be used for the data collection and this system will analyze the text. This will review the information the chatbot collected and analyze it for further user intent and interests or attributes. This application will take this information and use it to help build the machine learning model proposed.)
“generating in the form of knowledge graph (a) a global knowledge repository for the domain of interest based on artifacts provided by a Subject Matter Expert (SME), and (b) a local knowledge repository for the current architecture based on artifacts provided by the user;” (Detailed Description, pp. 4, [0064]; “In various embodiments, the user can use the interface to identify the one or more locations of data that will be used for generating the machine learning model. As described above, the data can be stored locally or remotely. In various embodiments, the user can enter a network location for the data (e.g., Internet Protocol (IP) address). In various embodiments, the user can select a folder from a plurality of folders on a storage device (e.g., a cloud-storage device).” This system will take in a location of data and create a repository to help build the system. This repository can be local, contained on a USB device or a local computer. The repository can also be global as in contained on a cloud server or programming repository like GitHub.)
“generating a feature matrix, by extracting one or more entities from the global knowledge repository in accordance with a plurality of inputs, provided by the SME, and further comprising a list of properties, a weightage of each of the list of properties and a list of components for the domain of interest;” (Machine Learning Infrastructure Platform, pp. 3, [0050]; “A model composition engine 132 can be executed on one or more computing systems (e.g., infrastructure 128). The model composition engine 132 can receive inputs from a user 116 through an interface 104. The interface 104 can include various graphical user interfaces with various menus and user selectable elements. The interface 104 can include a chatbot (e.g., a text based or voice-based interface). The user 116 can interact with the interface 104 to identify one or more of: a location of data, a desired prediction of machine learning application, and various performance metrics for the machine learning model. The model composition engine 132 can interface with library components 168 to identify various pipelines 136, micro service routines 140, software modules 144, and infrastructure models 148 that can be used in the creation of the machine learning model 112.” The model composition engine takes in data and constraints to develop a machine learning model based on the user’s intent. It uses data from the repositories given by the user and develops an application according to the user’s intent and conversation. The system extracts information from different sources in order to develop the initial machine learning model.)
“sequentially querying the user with a set of decision questions starting with context of the initial decision node and receiving a corresponding decision response for each decision question among the set of decision questions, wherein each decision question has an associated decision response type, one or more choices for a decision response, dependency links of a decision question with remaining decision questions among the set of decision questions, a decision question weightage and information in an attribute value form associated with each decision response, wherein each successive decision question among the set of decision questions is identified based on the decision response of the user to a previous decision question;” (Detailed Description, pp. 8, [0107-0108]; “The techniques can receive multiple inputs from the user. Based on the multiple inputs, the techniques can determine the intentions of the user to establish a machine learning architecture. In the technique, the intelligent assistant can analyze the inputs and recommend various options for a user based on the analysis. The techniques can generate code for the machine learning architecture. The code can be stored and reused for one or more different machine learning processes. The disclosed techniques simplify the process of developing intelligent applications. An intelligent assistant can employ a chatbot. A chatbot is software module that conducts a conversation via auditory or textual methods as a dialog system for interacting with a user. The chatbots can use sophisticated natural language processing systems, or can scan for keywords within the input, then pull a reply with the most matching keywords, or the most similar wording pattern, from a database. The chatbot can be used to set up an artificial intelligence system that can answer a question. 
In this way, the artificial intelligence can be used for translating information provided to a software module and hardware infrastructure in a plain language manner.” This system uses a chatbot to communicate with the user about the machine learning model under development, asking and answering questions based on the user’s current project. The answers are saved in some form, and the conversation is analyzed later. Each question is generated dynamically rather than preset; the questions posed by the system are based on the information already gathered during the conversation. Many different forms of chatbots can be used, which can take into account the user’s level of skill in the field or domain.)
“generating a decision graph tracing a plurality of decision nodes starting from the initial decision node based on each decision response for each of the decision questions; and” (Chatbot for Defining a Machine Learning Solution, pp. 7, [0105]; “Machine learning models are trained for generating predictive outcomes for code integration requests. In one aspect, techniques can be used for defining a machine learning solution, including receiving a first input (e.g., aural, textual, or GUI) describing a problem for the machine learning solution. A model composition engine 132, as shown in FIG. 1, can transcribe the first input into one or more text fragments. The model composition engine 132 can determine an intent of a user to create a machine learning architecture based at least in part on the one or more text fragments.” This system takes in data from the users and analyzes it. The system generates a model based on the keywords and the output of the analytic engine. This teaches reviewing the conversation and saving important data in a particular structure for later use.)
“utilizing a decision path identified from the decision graph and the information in the attribute value form associated with each decision response for providing the contextual advisory to build XaaS for the domain of interest using document templates.” (Chatbot for Defining a Machine Learning Solution, pp. 7-8, [0105]; “The techniques can include correlating the one or more text fragments to one or more machine learning frameworks of a plurality of models. The techniques can include presenting (e.g., interface or audio) the one or more machine learning model to the user. The model composition engine 132 can receive a selection of one or more machine learning model (e.g., classification, recommender, reinforcement learning). The model composition engine 132 can receive several other user inputs including a second input identifying a data source for the machine learning architecture and a third input of one or more constraints (e.g., resources, location, security, or privacy) for the machine learning architecture. The model composition engine 132 can generate a plurality of code for the machine learning architecture based at least in part on the selected model, the second input identifying the data source, and the third input identifying the one or more constraints. The generated code can be stored in a memory.” After the conversation with the user is completed and the data has been input into the model generator, an output model is produced and presented to the user as a final result. The model generator also trains the model before sending it to the user.)
Polleri fails to explicitly disclose, “One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause:” and “obtaining, a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,”.
However, Debnath discloses, “One or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause:” (AutoComp- An Implementation of our Composition Approach, pp. 739; “We decided to implement our approach a prototype and deployed it on our internal Xaas. This XaaS mimics deployment of various applications and their services on a private Cloud and contains some typical enterprise services which may be used in different composite applications. The architecture and deployment diagram of AutoComp as well as a prototype execution for the chosen use-case is described in the Figure 8.” This paper discloses a prototype system implementing the disclosed method, as shown in Figure 8. It was designed to be used on a large system containing servers and multiple end user devices. For the experiment, the authors implemented the system on a generic computing system, which contains memory linked to processors to execute machine code stored in the memory.)
“obtaining, a domain of interest and an initial inclination of a user indicating maturity of the user in the domain of interest when user requests for the contextual advisory for a platform, a process, a technology, and technical components to build XaaS for the domain of interest, wherein the initial inclination is obtained by sequentially querying the user with a set of questions and receiving a corresponding response for each question among the set of questions,” (Services in Modern Cloud Platforms, pp. 737; “Based upon our study and considerations, we have assumed the following web service description schema, which is further used for our use-case services and consumed by our proposed algorithm. The documentation attributes are: [Name, Category, Description, Tags, #Inputs, #Outputs, InputDesc, OutputDesc] Here, Name is the service name, Category is the name of the category heading under which a service belongs to. Just like the other platforms, we define our own set of predefined categories. Description is a natural language text summary of the service. We have incorporated the concept of tags in our implementation. The core features of a service can be expressed as multiple number of tags. This benefits in searching for services with at least one matching tag between the requested and the already available services. Therefore, Tags attribute stores the associated tags to a particular web service. #Inputs is the total number of input items accepted by the service and InputDesc shows the input parameter names with their corresponding data types (whether a string or a number or a date, etc.). #Outputs and OutputDesc are defined similarly in context of service outputs. Figure 4 portrays a sample REST API of a Validator service and the values for all the mentioned parameters.” This program helps a user without any experience develop a program. Initially, the user inputs data into a form so the computer can evaluate that information. The information gathered includes user inclination, domain, level of detail, attributes, and other tags.)
Regarding claim 12, Polleri discloses, “wherein relevant and contextual choices are generated for the decision response by trained Machine Learning (ML) models using combination of the local knowledge repository and the global knowledge repository in accordance with a plurality of features present in the feature matrix, wherein the choices are ranked according to probability of generating best possible decision, and wherein for any unanswered question, the decision response is identified by the trained ML model.” (Safe Serialization of the Predicted Pipeline, pp. 23-24, [0266]; “Aspects of the present disclosure provide various techniques (e.g., methods, systems, devices, computer-readable media storing computer-executable instructions used to perform computing functions, etc.) for generating and using machine learning models to predict outcomes of code integration requests. As discussed in more detail below, machine learning models may be generated and trained based on previous code integration requests submitted to and processed by a software architecture authorization system. Based on the machine learning and artificial intelligence-based techniques used, one or more models may be trained which may be developer-specific, project-specific, and organization- specific, meaning that trained models may output different outcome predictions, confidence levels, causes, and suggestions depending on the current developer, project, and organization. The machine learning models also may be trained based on specific inputs received in connection with previous code integration requests (e.g., the software library to be integrated, the target source code module, the reason for the code integration requests and/or functionality to be used within the library, etc.). Then, following the generation and training of one or more machine learning models, such models may be used to predict outcomes (e.g., approval or denial for authorization) for a potential code integration request. 
Such models may also be used to autonomously and independent identify the reasons associated with the predictions (e.g., security vulnerabilities, license incompatibility, etc.), and/or to suggest alternative software libraries that may be integrated instead to provide the desired functionality.” The system in Polleri uses machine learning models to evaluate user intent and to develop a machine learning model based on that intent. It uses the repositories identified by the user and uses that data to develop the new machine learning model. The generated machine learning model is developed based on the user’s requirements and the constraints of the data given to the system.)
Regarding claim 13, Polleri fails to explicitly disclose, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user.”.
However, Debnath discloses, “wherein the contextual advisory comprises roadmap, blueprint, reference architecture and miscellaneous advisory documents, wherein the contextual advisory is a combination of static document with marked dynamics sections populated based on the decision path of the user.” (An Algorithm for Composite Manifest/Plan Generation, pp. 739; “An interaction with the composer is required at the end of each iteration to confirm with him, whether the service shortlisted by the algorithm is to be added to the manifest. Also, when a service set contains multiple services and only one service needs to be selected, the composer is consulted for his choice in this case. This is done by the SelectServ method which is called from multiple points in Algorithm 1 and it is explained by Algorithm 2. One more place where composer discretion is important, is when ServOT is not null. It means that there is at least one service which is satisfying the output requirements, as intended output, expressed by the composer. The composer needs to confirm if composition has ended, therefore if endComp is true, then Success becomes true and the algorithm ends. The result is a composite manifest which is a sequence of a set of services in a particular order, that achieves the higher objective as intended by the composer.” This system uses user data, statements, and requirements to produce a manifest. The manifest is a sequence of services to be used to develop the proposed XaaS system. The output is a textual roadmap on how to implement the suggested XaaS system.)
Regarding claim 14, Polleri discloses, “wherein, the feature matrix is a structure to host data on a graph database for comparing the one or more entities and associated information obtained from the global knowledge repository across the plurality of inputs.” (Machine Learning Infrastructure Platform, pp. 3, [0050]; “A model composition engine 132 can be executed on one or more computing systems (e.g., infrastructure 128). The model composition engine 132 can receive inputs from a user 116 through an interface 104. The interface 104 can include various graphical user interfaces with various menus and user selectable elements. The interface 104 can include a chatbot (e.g., a text based or voice-based interface). The user 116 can interact with the interface 104 to identify one or more of: a location of data, a desired prediction of machine learning application, and various performance metrics for the machine learning model. The model composition engine 132 can interface with library components 168 to identify various pipelines 136, micro service routines 140, software modules 144, and infrastructure models 148 that can be used in the creation of the machine learning model 112.” The model composition engine uses the information and requirements from the user, together with the data from the given repositories, to develop and train a machine learning model for the user.)
Regarding claim 15, Polleri discloses, “wherein each question among the set of questions has an associated response type, one or more choices for a response, dependency links of a question with remaining questions among the set of questions, and a question weightage”. (Bot and Analytic System, pp. 10, [0124]; “In some embodiments, the bot system may intelligently handle end user interactions without interaction with an administrator or developer of the bot system. For example, an end user may send one or more messages to the bot system in order to achieve a desired goal. A message may include certain content, such as text, emojis, audio, image, video, or other method of conveying a message. In some embodiments, the bot system may convert the content into a standardized form (e.g., a representational state transfer (REST) call against enterprise services with the proper parameters) and generate a natural language response. The bot system may also prompt the end user for additional input parameters or request other additional information. In some embodiments, the bot system may also initiate communication with the end user, rather than passively responding to end user utterances. Described herein are various techniques for identifying an explicit invocation of a bot system and determining an input for the bot system being invoked. In certain embodiments, explicit invocation analysis is performed by a master bot based on detecting an invocation name in an utterance. In response to detection of the invocation name, the utterance may be refined for input to a skill bot associated with the invocation name.” This system uses a conversational chatbot to communicate with the user. The user communicates requirements, architecture, and other machine learning components used to develop the model. The user can ask questions of the chatbot, and the chatbot can use clarifying and generated questions to gain more information from the user. Depending on the user’s answers in the conversation, certain keywords are given more weight and are further analyzed by the analytic engine proposed in the application.)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAUL MICHAEL GALVIN-SIEBENALER whose telephone number is (571)272-1257. The examiner can normally be reached Monday - Friday 8AM to 5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PAUL M GALVIN-SIEBENALER/Examiner, Art Unit 2147
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147