DETAILED ACTION
This Office action is in response to the above-identified application filed on February 13, 2025. The application contains claims 1-18.
Claims 1-18 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The present application claims priority to Provisional Application No. 63/562,856, filed 03/08/2024.
Information Disclosure Statement
The information disclosure statement (IDS) was submitted on 02/13/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Objections
Claims 1, 2, 6, 7, 13, and 14 are objected to because of the following informalities:
The limitation “if the received question …” recited in the following claims is a contingent limitation. Per MPEP 2111.04(II), "The broadest reasonable interpretation of a method (or process) claim having contingent limitations requires only those steps that must be performed and does not include steps that are not required to be performed because the condition(s) precedent are not met." It is suggested that Applicant amend the limitation if the contingency is unintended.
Claim 1, line 12
Claim 2, line 1
Claim 6, line 14
Claim 7, lines 2-3
Claim 13, line 12
Claim 14, lines 2-3
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 13-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
Claim 13 recites the limitation "the agent" in line 15. There is insufficient antecedent basis for this limitation in the claim. Therefore, claim 13 is indefinite and rejected under 35 U.S.C. 112(b).
Claims 14-18 each recite the limitation "The system of claim 13/16/17 …" in line 1. There is insufficient antecedent basis for "the system" in these claims, because claim 13, from which claims 14-18 depend directly or indirectly, is directed to a computer-readable medium rather than a system. Therefore, claims 14-18 are indefinite and rejected under 35 U.S.C. 112(b).
Dependent claims 14-18 are also rejected because each inherits the deficiency of independent claim 13.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Ragukumar et al. (US 20250200355 A1), in view of GANGWAR et al. (US 20220207066 A1).
With regard to claim 1,
Ragukumar teaches
generating, by a processor (Fig. 13: processor 1302), a plurality of contextual relationships that exist between data points in a dataset by applying a large language model (LLM) to the dataset (Fig. 3; [0059]: the embedding LLM 302 is configured to convert the tokenized data inputs into an embedding vector. The embeddings represent the meaning and context of the tokens in a high-dimensional vector space. This embedding process generates contextual relationships among tokens extracted from data inputs and represents the relationships in the embeddings);
determining, by the processor, at least one insight based on the dataset with the generated plurality of contextual relationships, by applying the LLM on the dataset with the generated plurality of contextual relationships (Fig. 3; [0061]: the completion LLM 306 is trained to understand context, generate coherent text, and perform various language-related tasks. Fig. 1; [0049]: The AI/ML engine 110 is configured to assess data stored in the index search database 106 and then extract insights from the data. The extracted insights may include, for example, patterns within the data, correlated events in the data, anomalies within the data, and so forth);
Ragukumar does not teach
a method of generating an agent, the method comprising:
determining, by the processor, at least one query for each determined at least one insight, by applying the LLM on the dataset with the generated plurality of contextual relationships and determined at least one insight;
receiving, by the processor, a question for the dataset;
if the received question is associated with the determined at least one insight, applying, by the processor, the LLM on the determined at least one query for the associated determined at least one insight;
generating, by the processor, the agent by the LLM based on the determined at least one query; and
updating the LLM based on performance of the generated agent.
GANGWAR teaches
a method of generating an agent (Abstract: generate an executable bot application, wherein the executable bot application corresponds to “an agent”), the method comprising:
determining, by the processor, at least one query for each determined at least one insight, by applying the LLM on the dataset with the generated plurality of contextual relationships and determined at least one insight (Fig. 6; [0075]: at step 604, through a machine learning (ML) model, process training data comprising the set of potential queries, the video frame responses corresponding to each of the set of potential queries, and the intent that is mapped to each of the set of potential queries to generate a trained model, wherein generating the trained model determines a mapping between a query and an intent, e.g., “insight”);
receiving, by the processor, a question for the dataset (Fig. 6; [0075]: at step 606 process an end-user query);
if the received question is associated with the determined at least one insight, applying, by the processor, the LLM on the determined at least one query for the associated determined at least one insight (Fig. 6; [0075]: at step 606, generate a prediction engine configured to process an end-user query and predict, from the plurality of intents, an intent associated with the end-user query);
generating, by the processor, the agent by the LLM based on the determined at least one query (Fig. 6; [0075]: at step 608, auto-generate, using the prediction engine, the executable bot application by the entity); and
updating the LLM based on performance of the generated agent ([0023]: the entity can add a new potential query to the set of potential queries and associate a corresponding video frame response to said new potential query, based on which the trained model is updated. The entity can also edit an existing potential query from the set of potential queries and associate a new or edited or the same corresponding video frame response to said edited potential query, based on which the trained model is updated).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ragukumar to incorporate the teachings of GANGWAR to determine, by the processor, at least one query for each determined at least one insight, by applying the LLM on the dataset with the generated plurality of contextual relationships and determined at least one insight; receive, by the processor, a question for the dataset; if the received question is associated with the determined at least one insight, apply, by the processor, the LLM on the determined at least one query for the associated determined at least one insight; generate, by the processor, the agent by the LLM based on the determined at least one query; and update the LLM based on performance of the generated agent. Doing so would facilitate self-generation of entity/user-specific bots that can be customized with one or more entity-specific automated visual responses to user queries, that are computationally convenient and time-efficient to generate without requiring any external help/vendor, and that ensure effective/informative responses are transmitted to end-user queries for an enhanced user experience, as taught by GANGWAR ([0008]).
With regard to claim 2,
As discussed in claim 1, Ragukumar and GANGWAR teach all the limitations therein.
GANGWAR further teaches
the method of claim 1, further comprising if the received question is not associated with the determined at least one insight, generating, by the processor, a new query by the LLM for the associated determined at least one insight ([0023]: the entity can add a new potential query to the set of potential queries and associate a corresponding video frame response to said new potential query, based on which the trained model is updated).
With regard to claim 3,
As discussed in claim 1, Ragukumar and GANGWAR teach all the limitations therein.
GANGWAR further teaches
the method of claim 1, further comprising applying, by the processor, the generated agent on at least one received user query (Fig. 4; [0070]; Fig. 6; [0075]: at step 606, generate a prediction engine configured to process an end-user query and predict, from the plurality of intents, an intent associated with the end-user query).
With regard to claim 4,
As discussed in claim 1, Ragukumar and GANGWAR teach all the limitations therein.
GANGWAR further teaches
the method of claim 1, wherein the dataset comprises a semantic layer of information of the dataset (Fig. 3A; [0066]: extract some information or features (308) and convert texts to sequences (318) are examples of a semantic layer).
With regard to claim 5,
As discussed in claim 4, Ragukumar and GANGWAR teach all the limitations therein.
Ragukumar further teaches
the method of claim 4, wherein the information is received from a dedicated database (Fig. 3; [0057]: the AI/ML engine 110 retrieves data from the index search database 106 and the domain-specific database 108).
With regard to claim 6,
As discussed in claim 5, Ragukumar and GANGWAR teach all the limitations therein.
Ragukumar further teaches
the method of claim 5, wherein the at least one insight is determined by a corresponding at least one query to the dedicated database (Fig. 1; [0049]: the AI/ML engine 110 is configured to assess data stored in the index search database 106 and then extract insights from the data).
With regard to claim 7,
Ragukumar teaches
generate a plurality of contextual relationships that exist between data points in the dataset by applying a large language model (LLM) to the dataset (Fig. 3; [0059]: the embedding LLM 302 is configured to convert the tokenized data inputs into an embedding vector. The embeddings represent the meaning and context of the tokens in a high-dimensional vector space. This embedding process generates contextual relationships among tokens extracted from data inputs and represents the relationships in the embeddings);
determine at least one insight based on the dataset with the generated plurality of contextual relationships, by applying the LLM on the dataset with the generated plurality of contextual relationships (Fig. 3; [0061]: the completion LLM 306 is trained to understand context, generate coherent text, and perform various language-related tasks. Fig. 1; [0049]: The AI/ML engine 110 is configured to assess data stored in the index search database 106 and then extract insights from the data. The extracted insights may include, for example, patterns within the data, correlated events in the data, anomalies within the data, and so forth);
Ragukumar does not teach
a system for generating an agent, the system comprising:
a server, comprising a dataset with a plurality of data points; and
a processor, in communication with the server, wherein the processor is configured to:
determine at least one query for each determined at least one insight, by applying the LLM on the dataset with the generated plurality of contextual relationships and determined at least one insight;
receive a question for the dataset;
if the received question is associated with the determined at least one insight, apply the LLM on the determined at least one query for the associated determined at least one insight;
generate the agent by the LLM based on the determined at least one query; and
update the LLM based on performance of the generated agent.
GANGWAR teaches
a system for generating an agent (Abstract: generate an executable bot application, wherein the executable bot application corresponds to “an agent”), the system comprising:
a server, comprising a dataset with a plurality of data points (Fig. 1: centralized server. Fig. 2; [0054]: a plurality of datasets based on one or more pre-defined visual/video frame responses and pre-defined/potential queries received from the computing device (104), i.e., a plurality of data points); and
a processor (Fig. 8: processor 870), in communication with the server, wherein the processor is configured to:
determine at least one query for each determined at least one insight, by applying the LLM on the dataset with the generated plurality of contextual relationships and determined at least one insight (Fig. 6; [0075]: at step 604, through a machine learning (ML) model, process training data comprising the set of potential queries, the video frame responses corresponding to each of the set of potential queries, and the intent that is mapped to each of the set of potential queries to generate a trained model, wherein generating the trained model determines a mapping between a query and an intent, e.g., “insight”);
receive a question for the dataset (Fig. 6; [0075]: at step 606 process an end-user query);
if the received question is associated with the determined at least one insight, apply the LLM on the determined at least one query for the associated determined at least one insight (Fig. 6; [0075]: at step 606, generate a prediction engine configured to process an end-user query and predict, from the plurality of intents, an intent associated with the end-user query);
generate the agent by the LLM based on the determined at least one query (Fig. 6; [0075]: at step 608, auto-generate, using the prediction engine, the executable bot application by the entity); and
update the LLM based on performance of the generated agent ([0023]: the entity can add a new potential query to the set of potential queries and associate a corresponding video frame response to said new potential query, based on which the trained model is updated. The entity can also edit an existing potential query from the set of potential queries and associate a new or edited or the same corresponding video frame response to said edited potential query, based on which the trained model is updated).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ragukumar to incorporate the teachings of GANGWAR to determine, by the processor, at least one query for each determined at least one insight, by applying the LLM on the dataset with the generated plurality of contextual relationships and determined at least one insight; receive, by the processor, a question for the dataset; if the received question is associated with the determined at least one insight, apply, by the processor, the LLM on the determined at least one query for the associated determined at least one insight; generate, by the processor, the agent by the LLM based on the determined at least one query; and update the LLM based on performance of the generated agent. Doing so would facilitate self-generation of entity/user-specific bots that can be customized with one or more entity-specific automated visual responses to user queries, that are computationally convenient and time-efficient to generate without requiring any external help/vendor, and that ensure effective/informative responses are transmitted to end-user queries for an enhanced user experience, as taught by GANGWAR ([0008]).
With regard to claim 8,
As discussed in claim 7, Ragukumar and GANGWAR teach all the limitations therein.
GANGWAR further teaches
the system of claim 7, wherein the processor is further configured to generate a new query by the LLM for the associated determined at least one insight, if the received question is not associated with the determined at least one insight ([0023]: the entity can add a new potential query to the set of potential queries and associate a corresponding video frame response to said new potential query, based on which the trained model is updated).
With regard to claim 9,
As discussed in claim 7, Ragukumar and GANGWAR teach all the limitations therein.
GANGWAR further teaches
the system of claim 7, wherein the processor is further configured to apply the generated agent on at least one received user query (Fig. 4; [0070]; Fig. 6; [0075]: at step 606, generate a prediction engine configured to process an end-user query and predict, from the plurality of intents, an intent associated with the end-user query).
With regard to claim 10,
As discussed in claim 7, Ragukumar and GANGWAR teach all the limitations therein.
GANGWAR further teaches
the system of claim 7, wherein the dataset comprises a semantic layer of information of the dataset (Fig. 3A; [0066]: extract some information or features (308) and convert texts to sequences (318) are examples of a semantic layer).
With regard to claim 11,
As discussed in claim 10, Ragukumar and GANGWAR teach all the limitations therein.
Ragukumar further teaches
the system of claim 10, wherein the information is received from a dedicated database (Fig. 3; [0057]: the AI/ML engine 110 retrieves data from the index search database 106 and the domain-specific database 108).
With regard to claim 12,
As discussed in claim 11, Ragukumar and GANGWAR teach all the limitations therein.
Ragukumar further teaches
the system of claim 11, wherein the at least one insight is determined by a corresponding at least one query to the dedicated database (Fig. 1; [0049]: the AI/ML engine 110 is configured to assess data stored in the index search database 106 and then extract insights from the data).
With regard to claim 13,
Ragukumar teaches
generate a plurality of contextual relationships that exist between data points in a dataset by applying a large language model (LLM) to the dataset (Fig. 3; [0059]: the embedding LLM 302 is configured to convert the tokenized data inputs into an embedding vector. The embeddings represent the meaning and context of the tokens in a high-dimensional vector space. This embedding process generates contextual relationships among tokens extracted from data inputs and represents the relationships in the embeddings);
determine at least one insight based on the dataset with the generated plurality of contextual relationships, by applying the LLM on the dataset with the generated plurality of contextual relationships (Fig. 3; [0061]: the completion LLM 306 is trained to understand context, generate coherent text, and perform various language-related tasks. Fig. 1; [0049]: The AI/ML engine 110 is configured to assess data stored in the index search database 106 and then extract insights from the data. The extracted insights may include, for example, patterns within the data, correlated events in the data, anomalies within the data, and so forth);
Ragukumar does not teach
a computer-readable medium comprising instructions which, when executed by a processor, cause the processor to:
determine at least one query for each determined at least one insight, by applying the LLM on the dataset with the generated plurality of contextual relationships and determined at least one insight;
receive a question for the dataset;
if the received question is associated with the determined at least one insight, apply the LLM on the determined at least one query for the associated determined at least one insight;
generate the agent by the LLM based on the determined at least one query; and
update the LLM based on performance of the generated agent.
GANGWAR teaches
a computer-readable medium comprising instructions which, when executed by a processor (Fig. 8: processor 870), cause the processor to:
determine at least one query for each determined at least one insight, by applying the LLM on the dataset with the generated plurality of contextual relationships and determined at least one insight (Fig. 6; [0075]: at step 604, through a machine learning (ML) model, process training data comprising the set of potential queries, the video frame responses corresponding to each of the set of potential queries, and the intent that is mapped to each of the set of potential queries to generate a trained model, wherein generating the trained model determines a mapping between a query and an intent, e.g., “insight”);
receive a question for the dataset (Fig. 6; [0075]: at step 606 process an end-user query);
if the received question is associated with the determined at least one insight, apply the LLM on the determined at least one query for the associated determined at least one insight (Fig. 6; [0075]: at step 606, generate a prediction engine configured to process an end-user query and predict, from the plurality of intents, an intent associated with the end-user query);
generate the agent by the LLM based on the determined at least one query (Fig. 6; [0075]: at step 608, auto-generate, using the prediction engine, the executable bot application by the entity); and
update the LLM based on performance of the generated agent ([0023]: the entity can add a new potential query to the set of potential queries and associate a corresponding video frame response to said new potential query, based on which the trained model is updated. The entity can also edit an existing potential query from the set of potential queries and associate a new or edited or the same corresponding video frame response to said edited potential query, based on which the trained model is updated).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Ragukumar to incorporate the teachings of GANGWAR to determine, by the processor, at least one query for each determined at least one insight, by applying the LLM on the dataset with the generated plurality of contextual relationships and determined at least one insight; receive, by the processor, a question for the dataset; if the received question is associated with the determined at least one insight, apply, by the processor, the LLM on the determined at least one query for the associated determined at least one insight; generate, by the processor, the agent by the LLM based on the determined at least one query; and update the LLM based on performance of the generated agent. Doing so would facilitate self-generation of entity/user-specific bots that can be customized with one or more entity-specific automated visual responses to user queries, that are computationally convenient and time-efficient to generate without requiring any external help/vendor, and that ensure effective/informative responses are transmitted to end-user queries for an enhanced user experience, as taught by GANGWAR ([0008]).
With regard to claim 14,
As discussed in claim 13, Ragukumar and GANGWAR teach all the limitations therein.
GANGWAR further teaches
the system of claim 13, wherein the processor is further configured to generate a new query by the LLM for the associated determined at least one insight, if the received question is not associated with the determined at least one insight ([0023]: the entity can add a new potential query to the set of potential queries and associate a corresponding video frame response to said new potential query, based on which the trained model is updated).
With regard to claim 15,
As discussed in claim 13, Ragukumar and GANGWAR teach all the limitations therein.
GANGWAR further teaches
the system of claim 13, wherein the processor is further configured to apply the generated agent on at least one received user query (Fig. 4; [0070]; Fig. 6; [0075]: at step 606, generate a prediction engine configured to process an end-user query and predict, from the plurality of intents, an intent associated with the end-user query).
With regard to claim 16,
As discussed in claim 13, Ragukumar and GANGWAR teach all the limitations therein.
GANGWAR further teaches
the system of claim 13, wherein the dataset comprises a semantic layer of information of the dataset (Fig. 3A; [0066]: extract some information or features (308) and convert texts to sequences (318) are examples of a semantic layer).
With regard to claim 17,
As discussed in claim 16, Ragukumar and GANGWAR teach all the limitations therein.
Ragukumar further teaches
the system of claim 16, wherein the information is received from a dedicated database (Fig. 3; [0057]: the AI/ML engine 110 retrieves data from the index search database 106 and the domain-specific database 108).
With regard to claim 18,
As discussed in claim 17, Ragukumar and GANGWAR teach all the limitations therein.
Ragukumar further teaches
the system of claim 17, wherein the at least one insight is determined by a corresponding at least one query to the dedicated database (Fig. 1; [0049]: the AI/ML engine 110 is configured to assess data stored in the index search database 106 and then extract insights from the data).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOQIN HU whose telephone number is (571)272-1792. The examiner can normally be reached on Monday-Friday 7:00am-3:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones can be reached on (571) 272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAOQIN HU/Examiner, Art Unit 2168
/CHARLES RONES/Supervisory Patent Examiner, Art Unit 2168