DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-9, 13-21, and 23-24 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Siebel et al. (US Patent No. 12,265,570), hereinafter Siebel.
Regarding claim 1, Siebel discloses a method comprising:
receiving, at a processor, data characterizing a query at a user interface (Column 6, lines 32-33, "In some embodiments, the query comprises a natural language query received through a graphical user interface."; Column 33, lines 5-8, "In some embodiments, a system comprises one or more processors, and memory storing instructions that, when executed by the one or more processors, cause the system to perform the functionality described herein.");
obtaining, by an informational model, a dataset characterizing an enterprise associated with the received query (Column 5, lines 51-59, "The generative AI models interact with one or more of the one or more retrieval multimodal models. One or more retrieval models are used for understanding underlying data, documents, and applications of an enterprise information environment. Underlying data of the enterprise information environment can be embedded by an orchestration module, for example, by a model driven architecture for the conceptual representation of enterprise and external data sets for data virtualization and access control."; Column 25, lines 28-34, "In step 604, the enterprise generative artificial intelligence system identifies, based on the query, one or more enterprise data sets, one or more artificial intelligence applications, and one or more data models from a plurality of different data domains of an enterprise information environment. In some embodiments, the orchestrator module identifies the enterprise data sets and/or artificial intelligence applications."; A retrieval model reads on an informational model, and identifying one or more enterprise data sets based on the query reads on obtaining a dataset characterizing an enterprise associated with the received query.);
obtaining, by the informational model, a dataset characterizing a parameter set for the enterprise associated with the received query (Column 6, lines 48-58, "Each data model of the plurality of data models can correspond to a different data domain of the plurality of different data domains. In some embodiments, each data model represents respective relationships and attributes of the corresponding different data domain of the plurality of different data domains. The respective relationships and attributes include any of data types, data formats, and industry-specific information. In some embodiments, the natural language output comprises a summary of at least one of the respective portions of the one or more enterprise data sets associated with a relevance score."; Column 25, lines 35-50, "In step 606, the enterprise generative artificial intelligence system determines, based on the data models from the plurality of different data domains, a plurality of relevance scores associated with at least a portion of the one or more enterprise data sets. In some embodiments, a retrieval module (e.g., retrieval module 406) determines the relevance scores (e.g., using a similarity machine learning model implementing a similarity algorithm). In step 608, the enterprise generative artificial intelligence system determines, by one or more generative artificial intelligence models, based on the plurality of relevance scores and one or more enterprise access control protocols, particular information from the plurality of different data domains of the enterprise information environment. In one example, the enterprise comprehension module determines the particular information."; A retrieval model reads on an informational model, and data types, data formats, industry-specific information, and particular information from the plurality of different data domains of the enterprise information environment read on a dataset characterizing a parameter set for the enterprise associated with the received query.);
determining, by a trained foundational model, a response to the received query based on at least one of the trained foundational model, the obtained dataset characterizing the enterprise, and the obtained dataset characterizing the parameter set (Column 5, lines 51-59, "The generative AI models interact with one or more of the one or more retrieval multimodal models. One or more retrieval models are used for understanding underlying data, documents, and applications of an enterprise information environment. Underlying data of the enterprise information environment can be embedded by an orchestration module, for example, by a model driven architecture for the conceptual representation of enterprise and external data sets for data virtualization and access control."; Column 25, lines 51-55, "In step 610, the enterprise generative artificial intelligence system generates a natural language output based on the particular information from the relevant data domains. In some embodiments, the enterprise comprehension module generates the natural language output."; A generative model reads on a trained foundational model, and the generative model generating a natural language output based on the particular information from the relevant data domains reads on determining a response to the received query based on at least one of the trained foundational model, the obtained dataset characterizing the enterprise, and the obtained dataset characterizing the parameter set.);
and providing the determined response to the received query to a user (Column 25, lines 56-61, "In step 612, the enterprise generative artificial intelligence system facilitates presentation of the natural language output. A presentation module (e.g., presentation module 430) may facilitate the presentation (e.g., at least partially cause the natural language output to be presented through a graphical user interface of another system).").
Regarding claim 2, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein obtaining the dataset characterizing the enterprise associated with the received query further comprises: determining the enterprise associated with the received query (Column 25, lines 28-32, "In step 604, the enterprise generative artificial intelligence system identifies, based on the query, one or more enterprise data sets, one or more artificial intelligence applications, and one or more data models from a plurality of different data domains of an enterprise information environment."; Identifying enterprise data sets based on the query reads on determining the enterprise associated with the received query.);
providing the informational model associated with the enterprise the data characterizing the query (Column 18, lines 46-50, "In some embodiments, the enterprise comprehension module 412 comprises a reasoning engine that determines the intent of a query and constructs of a request that interacts with one or more retrieval models of the retrieval module 406 to locate and synthesize data."; A retrieval model reads on an informational model, and determining the intent of a query and constructing a request that interacts with a retrieval model reads on providing the informational model associated with the enterprise the data characterizing the query.);
and receiving, by the informational model, data related to the enterprise responsive to the received query (Column 18, lines 50-57, "The enterprise comprehension module 412 and retrieval models may execute a series of interactions for a complex multi-level request to iteratively develop context-specific constructs that are responsive to the request. For example, in responding to a request, the enterprise comprehension module 412 may infer that a category of data is needed and request a specific retrieval model to retrieve data of the inferred category."; A retrieval model reads on an informational model, and inferring that a category of data is needed and retrieving data of the inferred category reads on receiving data related to the enterprise responsive to the received query.).
Regarding claim 3, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein obtaining the dataset characterizing the parameter set for the enterprise associated with the received query further comprises: determining the enterprise associated with the received query (Column 25, lines 28-32, "In step 604, the enterprise generative artificial intelligence system identifies, based on the query, one or more enterprise data sets, one or more artificial intelligence applications, and one or more data models from a plurality of different data domains of an enterprise information environment."; Identifying enterprise data sets based on the query reads on determining the enterprise associated with the received query.);
providing the informational model associated with the enterprise the data characterizing the query (Column 18, lines 46-50, "In some embodiments, the enterprise comprehension module 412 comprises a reasoning engine that determines the intent of a query and constructs of a request that interacts with one or more retrieval models of the retrieval module 406 to locate and synthesize data."; A retrieval model reads on an informational model, and determining the intent of a query and constructing a request that interacts with a retrieval model reads on providing the informational model associated with the enterprise the data characterizing the query.);
and receiving, by the informational model, parameters related to the enterprise responsive to the received query (Column 16, lines 51-58, "One or more retrieval models are used for understanding underlying data, documents, and applications of an enterprise information environment. Underlying data of the enterprise information environment can be embedded by an orchestrator module, for example, by a model driven architecture for the conceptual representation of enterprise and external data sets for data virtualization and access control."; A retrieval model reads on an informational model, and retrieving external data sets for data virtualization and access control reads on receiving parameters related to the enterprise responsive to the received query.).
Regarding claim 4, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein the informational model is communicatively coupled to at least one of a network or a database associated with the enterprise, and wherein the foundational model has limited access to the network or database associated with the enterprise (Column 18, line 66 - Column 19, line 3, "In some implementations, an orchestrator can control, regulate, or limit the interactive sequence between the enterprise comprehension modules and retrieval models to ensure performance or confidence thresholds."; Column 19, lines 41-51, "The crawling module 414 can crawl and index a corpus of data records (e.g., data records of one or more enterprise systems) using contextual information (e.g., contextual metadata) along with data record embeddings to provide access control (e.g., role-based access), provide improved data record identification and retrieval, and map relationships between data records. In one example, contextual information may prevent some users from accessing (e.g., viewing, retrieving) certain data records, and improve similarity evaluations used in retrieval operation (e.g., of a generative artificial intelligence process)."; Column 27, lines 54-59, "The enterprise generative artificial intelligence system can provide that input to a retrieval module 804 which can then reach out and “retrieve” information from various enterprise data sources (e.g., data stores, databases, artificial intelligence applications, and/or the like)."; A retrieval model reads on an informational model, enterprise data sources including databases read on a database associated with the enterprise, and limiting interaction between the enterprise comprehension modules and retrieval models to ensure performance or confidence thresholds reads on the foundational model having limited access to the database associated with the enterprise.).
Regarding claim 5, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
further comprising: validating, by the trained foundational model, the response to the received query (Column 24, lines 33-36, "In step 504, the enterprise generative artificial intelligence system determines validation data for the set of potential responses. The validation data is from the plurality of data domains of an enterprise information environment."; Determining validation data for the set of potential responses reads on validating the response to the received query),
wherein the validating further comprises: encoding data from at least one of the dataset characterizing the enterprise or the dataset characterizing the parameter set with an identifier (Column 9, lines 59-65, "The generative enterprise search result portion 254 includes a type of generative artificial intelligence response 260, a generative artificial intelligence response status 262, a generative artificial intelligence enterprise search result 266, source data portions 268 that were used the generate the response, source identifications 269, and generative artificial intelligence response feedback elements 270."; Source identifications read on encoding data with an identifier.);
maintaining the identifier in the generated response (Column 9, lines 41-51, "FIG. 2B depicts an example enterprise generative artificial intelligence response graphical user interface 250 according to some embodiments. In some embodiments, the enterprise generative artificial intelligence response graphical user interface 250 can be generated at least in part by the enterprise generative artificial intelligence systems described herein. In the example of FIG. 2B, the enterprise generative artificial intelligence response graphical user interface 250 includes an enterprise search query input portion 252, a generative enterprise search result portion 254, and an interactive query portion 256."; Column 9, lines 59-65, "The generative enterprise search result portion 254 includes a type of generative artificial intelligence response 260, a generative artificial intelligence response status 262, a generative artificial intelligence enterprise search result 266, source data portions 268 that were used the generate the response, source identifications 269, and generative artificial intelligence response feedback elements 270."; A graphical user interface including a generative enterprise search result portion, where the search result portion includes source identifications, reads on maintaining the identifier in the generated response.);
and providing the identifier within the query response (Column 9, lines 59-65, "The generative enterprise search result portion 254 includes a type of generative artificial intelligence response 260, a generative artificial intelligence response status 262, a generative artificial intelligence enterprise search result 266, source data portions 268 that were used the generate the response, source identifications 269, and generative artificial intelligence response feedback elements 270."; A graphical user interface including a generative enterprise search result portion, where the search result portion includes an enterprise search result and source identifications, reads on providing the identifier within the query response.).
Regarding claim 6, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein determining the response to the received query further comprises applying the trained foundational model on data characterizing at least one of a historical query or a response to a historical query (Column 28, lines 28-35, "A query and rationale generator 812 of the enterprise comprehension module 806 can process the information and generate a rationale for why it produced the result that it did. That rationale can be stored by the enterprise generative artificial intelligence system in an historical rationale datastore 810 and provide the foundation for the context of subsequent iterations."; Generating a rationale for why the enterprise generative artificial intelligence system produced the result that it did and storing the rationale in a historical rationale datastore to provide context of subsequent iterations reads on applying the trained foundational model on data characterizing a response to a historical query.).
Regarding claim 7, Siebel discloses the method as claimed in claim 5.
Siebel further discloses:
wherein validating the response to the received query further comprises adjusting the language, context, or variable naming of the response to the received query to conform to the language, context, or variable naming conventions of the enterprise (Column 8, lines 3-15, "The enterprise generative artificial intelligence system can also perform similar functionality based on the context of users and/or systems submitting the query. For example, a director and engineer may submit the same query (e.g., “what projects are past due?”), and the enterprise generative artificial intelligence system 102 can use contextual information (e.g., user role, permissions, domain associated with the user, and the like) to provide a response that is based on context both substantively (e.g., provide information on overdue projects for the particular requester) and/or with respect to presentation of the response (e.g., an engineer may receive more detailed technical information while a director may receive fewer technical details)."; Using contextual information to provide a response that is based on context reads on adjusting the context of the response to the received query to conform to the context conventions of the enterprise.).
Regarding claim 8, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein the trained foundational model comprises at least one of a generative model, a multimodal model, a reinforcement learning model, transfer learning model, and a large language model (Column 5, lines 3-12, "Example aspects include systems and methods to implement machine learning models such as multimodal models, large language models (LLMs), and other machine learning models with enterprise grade integrity including access control, traceability, anti-hallucination, and data-leakage protections. Machine learning models can include some or all of the different types or modalities of models described herein (e.g., multimodal machine learning models, large language models, data models, statistical models, audio models, visual models, audiovisual models, etc.).").
Regarding claim 9, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein the informational model comprises at least one of a descriptive model, diagnostic model, predictive model, prescriptive model, optimization model, a cost-benefit model, a constraint model, or a digital twin (Column 16, line 65 - Column 17, line 10, "At search time, language models are used to understand the request to create one or more queries for the retrieval model to retrieve results. In an example implementation, the retrieval model uses machine learning to return or more results in the embedded space based on relevance to the query. The enterprise comprehension module calls the knowledge base to generate new content based on inferences and insights of relevant data domains. The retrieval model employs large language models to embed a search query. A Demonstrate-Search-Predict method can be used to allow both the large language model and retrieval model to understand and generate natural language such that they interact to improve results quality."; The retrieval model employing large language models to embed a search query using a Demonstrate-Search-Predict method reads on the informational model comprising a predictive model.).
Regarding claim 13, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
further comprising: generating a query for the informational model based on the received query at the user interface by applying context specific data (Column 4, lines 18-22, "The framework described herein uses machine learning techniques to navigate enterprise information and applications, comprehend organization specific context queues (e.g., acronyms, nicknames, jargon, etc.), and locate information most relevant to a request."; Column 8, lines 3-23, "The enterprise generative artificial intelligence system can also perform similar functionality based on the context of users and/or systems submitting the query. For example, a director and engineer may submit the same query (e.g., “what projects are past due?”), and the enterprise generative artificial intelligence system 102 can use contextual information (e.g., user role, permissions, domain associated with the user, and the like) to provide a response that is based on context both substantively (e.g., provide information on overdue projects for the particular requester) and/or with respect to presentation of the response (e.g., an engineer may receive more detailed technical information while a director may receive fewer technical details). In some embodiments, the enterprise generative artificial intelligence system 102 can crawl, index, and/or map a corpus of data records (e.g., data records of one or more enterprise systems or environments) using contextual information (e.g., contextual metadata) along with data record embeddings to provide access control (e.g., role-based access), provide improved data record identification and retrieval, and map relationships between data records."; A user submitting a query reads on a received query, and contextual information reads on context specific data.).
Regarding claim 14, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein the dataset characterizing the enterprise comprises at least one of key performance indicators, revenue, win rate, costs, budgets, statistics, inventory levels, logistics datasets, collections metrics and lead conversions (Column 10, lines 35-47, "The interactive query portion 256 can also include a status portion 278 indicating a process status of the additional related query 276. The status can include, for example, processing query (e.g., as shown in FIG. 2B), processed query, evaluated metric, searched documents, generated answer (e.g., result), generated visualization (e.g., a time series visualization for presentation in a response graphical user interface), and/or finished generating. Time series refers to a list of data points in time order that can represent the change in value over time of data relevant to a particular problem, such as inventory levels, equipment temperature, financial values, or customer transactions."; Column 11, line 66 - Column 12, line 6, "For example, supply chain data may be obtained from a first artificial intelligence application (e.g., an inventory management and optimization or supply chain application) and the impact information may be obtained based at least in part on the supply chain data and information from another artificial intelligence application, such as an artificial intelligence application used to monitor and predict maintenance needs for a fleet of vehicles."; Obtaining supply chain data, including data that represents the change in value over time of data relevant to a particular problem such as inventory levels, reads on the dataset characterizing the enterprise comprising inventory levels.).
Regarding claim 15, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein at least one of the dataset characterizing the enterprise and the dataset characterizing the parameter set comprises text summaries, images, number, tables, or formulae (Column 21, lines 49-56, "The extractor module 420 can function to process, extract and/or transform different types of data (e.g., text, database tables, images, video, code, and/or the like). For example, the extractor module 420 may take in a database table as input and transform it into natural language describing the database table which can then be provided to the orchestrator module 404, which can then process that transformed input to “answer,” or otherwise satisfy a query."; Extracting database tables reads on the dataset characterizing the enterprise and the dataset characterizing the parameter set comprising tables.).
Regarding claim 16, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein the query is provided by the user interface in natural language form (Column 6, lines 32-33, "In some embodiments, the query comprises a natural language query received through a graphical user interface.").
Regarding claim 17, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein the dataset characterizing the parameter set comprises key performance indicators for the enterprise (Column 12, lines 12-24, "Example information from different data domains or application objects may include key performance metrics (KPIs) (e.g., from left to right—a fleet readiness score, unscheduled maintenance avoided (hours) over a time period, a number of flights gained (e.g., due to avoided maintenance), operation time at risk, and/or the like), aircraft status risk score information, component risk score and ranking (e.g., by risk score) information, information associated with artificial intelligence alerts, flight capability information (e.g., by geographic region), case information, supply chain data, and impact information regarding aircraft being impacted by effects within the supply chain."; Information from different data domains including key performance metrics reads on the dataset characterizing the parameter set comprising key performance indicators for the enterprise.).
Regarding claim 18, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein the foundational model comprises a learning model, the learning model trained via reinforcement learning from human feedback (Column 22, lines 4-10, "The model generation module 424 can function to generate and/or modify some or all of the different types of models described herein (e.g., machine learning models, large language models, data models). In some implementations, the model generation module 424 can use a variety of machine learning techniques or algorithms to generate models."; Column 22, lines 53-67, "The model optimization module 428 can tune some or all of the models described herein, including models of the enterprise comprehension modules (e.g., large language models). For example, the model optimization module 428 may tune generative artificial intelligence models based on tracking user interactions within the system, by capturing explicit feedback (e.g., through a training user interface), implicit feedback, and/or the like. In some example implementations, a reinforcement learning module can optionally be used to accelerate knowledge base bootstrapping. Reinforcement learning can be used for explicit bootstrapping of the system with instrumentation of time spent, results clicked on, and the like. Example aspects can include an innovative learning framework that can bootstrap models for different enterprise environments."; Tuning generative artificial intelligence models based on tracking user interactions within the system, by capturing explicit feedback using a reinforcement learning module, reads on the foundational model comprising a learning model trained via reinforcement learning from human feedback.).
Regarding claim 19, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
wherein the determined response is provided to the user interface in natural language form (Column 25, lines 56-61, "In step 612, the enterprise generative artificial intelligence system facilitates presentation of the natural language output. A presentation module (e.g., presentation module 430) may facilitate the presentation (e.g., at least partially cause the natural language output to be presented through a graphical user interface of another system).").
Regarding claim 20, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
further comprising: modifying at least one of the parameters of the foundational model based on the dataset characterizing the parameter set (Column 33, lines 18-25, "The one or more data models comprises multiple models trained for different data domains of the plurality of data domains, wherein each data model represents respective relationships and attributes of the corresponding different data domain of the plurality of different data domains, and the respective relationships and attributes include any of data types, data formats, and industry-specific information."; Column 37, lines 30-38, "The selected deterministic response and the corresponding validation data which is output is dependent on the technical functioning of the one or more data models. The one or more data models may have been trained using a machine learning algorithm. The operation of the one or more data models in generating a set of potential responses may therefore be based on parameters which the one or more data models have learned through training (as opposed to parameters which have been set by a human programmer)."; Training data models for different data domains of the plurality of data domains, where the data models generate a set of potential responses based on parameters which the data models have learned through training, reads on modifying at least one of the parameters of the foundational model based on the dataset characterizing the parameter set.).
Regarding claim 21, Siebel discloses the method as claimed in claim 1.
Siebel further discloses:
further comprising: training at least a portion of the foundational model based on the dataset characterizing the enterprise (Column 33, lines 18-25, "The one or more data models comprises multiple models trained for different data domains of the plurality of data domains, wherein each data model represents respective relationships and attributes of the corresponding different data domain of the plurality of different data domains, and the respective relationships and attributes include any of data types, data formats, and industry-specific information."; Training data models for different data domains of the plurality of data domains reads on training at least a portion of the foundational model based on the dataset characterizing the enterprise.).
Regarding claim 23, arguments analogous to claim 1 are applicable. In addition, Siebel discloses a system (Column 6, lines 24-26, “FIG. 1 depicts a diagram of an example enterprise generative artificial intelligence system architecture and environment 100 according to some embodiments.”) comprising:
at least one data processor (Column 30, lines 66-67, “The processor 1104 is configured to execute executable instructions (e.g., programs).”);
and memory coupled to the at least one data processor and storing instructions which, when executed by the at least one data processor, cause the at least one data processor to perform operations (Column 31, lines 13-16, “Each of the memory system 1106 and the storage system 1108 comprises a computer-readable medium, which stores instructions or programs executable by processor 1104.”) comprising the steps of claim 1.
Regarding claim 24, arguments analogous to claim 1 are applicable. In addition, Siebel discloses a non-transitory computer readable storage medium storing computer readable instructions which, when executed by at least one data processor, cause the at least one data processor to perform operations (Column 31, lines 13-16, “Each of the memory system 1106 and the storage system 1108 comprises a computer-readable medium, which stores instructions or programs executable by processor 1104.”) comprising the steps of claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Siebel in view of Wray et al. (US Patent Application Publication No. 2021/0019662), hereinafter Wray.
Regarding claim 10, Siebel discloses the method as claimed in claim 9, but does not specifically disclose: wherein an optimization model comprises a set of models trained on a dataset using a set of resourcing levels and performance indicators.
Wray teaches:
wherein an optimization model comprises a set of models trained on a dataset using a set of resourcing levels and performance indicators (Paragraph 0003, lines 1-9, "In an aspect, data characterizing a set of models trained on a dataset using a set of resourcing levels can be received. The set of resourcing levels can specify a condition on outputs of models in the set of models. Performance of the set of models can be assessed using the set of resourcing levels. A feasible performance region can be determined using the assessment. The feasible performance region can associate each resourcing level in the set of resourcing levels with a model in the set of models.").
Wray is considered to be analogous to the claimed invention because it is in the same field of training machine learning models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Siebel to incorporate the teachings of Wray to train models on a dataset using a set of resourcing levels and performance indicators. Doing so would allow a single ensemble model, trained to define the feasible performance region for a business, to be applied to the current business reality at multiple levels of granularity to optimize performance, thereby greatly reducing the complexity, time, cost, and data required to generate individualized models (Wray; Paragraph 0045, lines 32-47).
Claims 11 – 12 are rejected under 35 U.S.C. 103 as being unpatentable over Siebel in view of Sengupta et al. (US Patent Application Publication No. 2022/0284368), hereinafter Sengupta.
Regarding claim 11, Siebel discloses the method as claimed in claim 9, but does not specifically disclose: wherein a cost-benefit model comprises a model trained to classify an event as belonging to a first event type or a second event type, wherein the classification of the event is responsive to at least one of an impact of correctly treating the event as belonging to a first event, an impact of erroneously treating the event as belonging to the first event, a cost of erroneously treating an event as not belonging to the first event, and a benefit of correctly treating an event as not belonging to the first event.
Sengupta teaches:
wherein a cost-benefit model comprises a model trained to classify an event as belonging to a first event type or a second event type, wherein the classification of the event is responsive to at least one of an impact of correctly treating the event as belonging to a first event, an impact of erroneously treating the event as belonging to the first event, a cost of erroneously treating an event as not belonging to the first event, and a benefit of correctly treating an event as not belonging to the first event (Paragraph 0057, lines 1-12, "As noted above, in some implementations, efficient frontier models can be utilized. In these implementations, the predictive model can form part of a set of models trained according to respective capacity levels and/or cost-benefit tradeoffs. The updated capacity and/or cost-benefit tradeoff can cause the predictive system to select, in response to determining the updated capacity and/or cost-benefit tradeoff, a new model from the set of models according to the updated capacity and/or cost-benefit tradeoff. In other words, with the updated capacity and/or cost-benefit tradeoff, a different model from the efficient frontier can be selected."; Paragraph 0009, lines 1-10, "A new model from a set of models can be selected in response to determining the updated capacity and according to the updated capacity. The event can include a sales opportunity and the first class indicates that the sales opportunity should be pursued. The capacity can characterize a number of events the user processes within a given period of time. The cost-benefit can characterize an impact of treating the event as belonging to the first class, the impact characterized by a cost of a false positive, a cost of a false negative, a benefit of a true positive, and a benefit of a true negative."; A cost of a false positive reads on an impact of erroneously treating the event as belonging to the first event, a cost of a false negative reads on a cost of erroneously treating an event as not belonging to the first event, a benefit of a true positive reads on an impact of correctly treating the event as belonging to a first event, and a benefit of a true negative reads on a benefit of correctly treating an event as not belonging to the first event.).
Sengupta is considered to be analogous to the claimed invention because it is in the same field of training machine learning models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Siebel to incorporate the teachings of Sengupta to implement a model trained according to cost-benefit tradeoffs that characterize an impact of treating an event as belonging to the first class, the impact characterized by a cost of a false positive, a cost of a false negative, a benefit of a true positive, and a benefit of a true negative. Doing so would enable monitoring of model performance and user compliance separately and learning different insights from each of these cases, both improving model performance and inducing better model compliance by users (Sengupta; Paragraph 0062, lines 1-16).
Regarding claim 12, Siebel discloses the method as claimed in claim 9, but does not specifically disclose: wherein a constraint model comprises a model trained based on one or more resource constraints of the enterprise.
Sengupta teaches:
wherein a constraint model comprises a model trained based on one or more resource constraints of the enterprise (Paragraph 0077, lines 1-13, "FIG. 5 is a system block diagram illustrating an example implementation of a system 500 for training, assessing, and deploying a set of resourcing models. System 500 can include graphical user interface (GUI) 520, storage 530, training system 540, and prediction system 550. By training and assessing multiple models under different resourcing levels and providing an intuitive representation of the performance of the models under the different resource constraints, the model most appropriate for a given operational constraint can be selected and deployed. As such, the performance of the models can be improved and computational resources, production time, and production costs can be saved.").
Sengupta is considered to be analogous to the claimed invention because it is in the same field of training machine learning models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Siebel to incorporate the teachings of Sengupta to train models under different resourcing levels and provide an intuitive representation of the performance of the models under the different resource constraints. Doing so would enable monitoring of model performance and user compliance separately and learning different insights from each of these cases, both improving model performance and inducing better model compliance by users (Sengupta; Paragraph 0062, lines 1-16).
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Siebel in view of Shabat et al. (US Patent Application Publication No. 2024/0203404), hereinafter Shabat.
Regarding claim 22, Siebel discloses the method as claimed in claim 21, but does not specifically disclose: wherein the foundational model comprises a transfer learning model and the dataset characterizing the enterprise comprises additional training data for the transfer learning model.
Shabat teaches:
wherein the foundational model comprises a transfer learning model and the dataset characterizing the enterprise comprises additional training data for the transfer learning model (Paragraph 0003, lines 1-11, "Large, pre-trained transformer-based language models (such as LaMDA, BERT, T5, Meena, GPT-3, etc.), which may also be referred to as large language models, or LLMs, may be used in order to perform Natural Language Processing (NLP). These models can enable transfer learning of general-purpose knowledge into a specific NLP task. This may be achieved by fine-tuning a pre-trained LLM model using examples from the target NLP task. For instance, an NLU module of a SLU system may utilize a pre-trained LLM that is fine-tuned based on the target NLP task."; Using transfer learning to fine-tune a pre-trained large language model based on a target natural language processing task using examples from the target natural language processing task reads on the dataset characterizing the enterprise comprises additional training data for a transfer learning model.).
Shabat is considered to be analogous to the claimed invention because it is in the same field of training machine learning models. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Siebel to incorporate the teachings of Shabat to use transfer learning to fine-tune a pre-trained large language model based on a target natural language processing task using examples from the target natural language processing task. Doing so would improve a large language model based spoken language understanding system (Shabat; Paragraph 0004, lines 1-15).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Gruber et al. (US Patent No. 12,087,308)
Patel et al. (US Patent Application Publication No. 2022/0309391)
Zamani et al. ("Retrieval-Enhanced Machine Learning")
Lewis et al. ("Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks")
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Boggs whose telephone number is (571)272-2968. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn can be reached at (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JAMES BOGGS/Examiner, Art Unit 2657