Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 08, 2025, has been entered.
REMARKS
Response filed October 22, 2025
Applicant’s summary of the interview of October 22, 2025, on page 9, is acknowledged.
The claim amendment overcomes the 35 USC § 112, First Paragraph, rejection of record. The 35 USC § 112, First Paragraph, rejection has been withdrawn.
PENDING MATTERS
Claim 7 is cancelled.
Claims 1-6 and 8-20, filed October 22, 2025, are examined on the merits.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 13, and 17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Caron et al. (US 2025/0216955 A1).
Regarding claim 1, Caron discloses a method comprising:
obtaining a user input ([0004], e.g. receiving, at a computer system from a terminal, a question);
obtaining, from a database, data indicative of a plurality of capabilities ([0016], e.g. the system has access to a database of available computer functions (Examiner interpreted functions as capabilities), and [0028], e.g. system using chatbot embeddings to narrow down the list of related functions based on the user input 306);
identifying, via a machine learning model, a subset of the plurality of capabilities based on the user input ([0016], e.g. LLM chatbot then responds to the request with a list of one or more functions which should be called), wherein a second capability of the subset of the plurality of capabilities is dependent on a first capability of the subset of the plurality of capabilities such that the second capability receives, as an input, an output of the first capability or the second capability performs a function that requires, for success, a function of the first capability to have been successfully completed prior to performance of the function of the second capability ([0027], e.g. an example analogy of the system illustrated in FIG. 1. In this example, the user 202 (a Car Owner) makes a request 204 to a service assistant 206 to fix a problem. The service assistant 206 receives the request 204, processes the request 204, and reports the request 208 to a service manager 210, who asks the service assistant 206 to order a part 212. The service assistant orders the part 214 from the parts department 216, who then reports when the part is ready 218. The service assistant 206 reports back to the service manager 210 that the part is ready 220, and the service manager 210 assigns a mechanic 226 to replace the part 222. The assignment 222 is conveyed 224 by the service assistant 206 to the mechanic 226, and the mechanic 226 reports 228 to the service assistant 206 when the task is done, and [0028]);
determining a graph representing dependencies regarding the subset of the plurality of capabilities, wherein the graph represents the second capability as dependent on the first capability ([0032], e.g. system 606 calls the one or more functions with the arguments 614 using a graph 616. The graph can be a query language and server to interface with micro-services, and can be used to talk to micro-services and create subscriptions. The graph can return the function results 618 to the system 606);
generating an application based on an output of the machine learning model and the graph, wherein the application includes the subset of the plurality of capabilities ([0032], e.g. system 606 sends the results 620 to the LLM chatbot 610, which provides a response to the question 604 based on the results 620 in the form of a Natural Language Answer 622 to the system 606. The system 606 then sends the natural language answer 624 to the terminal 602 as an answer to the question 604); and
executing the application, wherein executing the application comprises executing the first capability prior to executing the second capability ([0016], e.g. system receives the list of one or more functions and executes those functions. The result of those functions is then passed back to the LLM chatbot, which interprets the results and provides the system with a natural language response. The system then passes the natural language response back to the user, and [0030], e.g. The bot 406 parses the received function(s) within the response from the chatbot 410 and performs the corresponding action (e.g., calling a query) 432, which can be accomplished by calling on one or more additional applications using the External/Internal Services/API 412. In this example, the External/Internal Services/API 412 returns the action result 434 to the bot 406, which responds to the chatbot 410 with the function result 436, and the chatbot generates a final answer to the question in natural language 438. The bot 406 then passes the answer 440 back to the User Interface 404, which provides the answer 442 to the user 402).
Regarding claims 13 and 17, Caron discloses a computer-readable medium and a system (Figure 8) comprising the cited steps.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-6, 8, 11, 12, 14, 15, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Caron et al. (US 2025/0216955 A1), as applied to claims 1, 13, and 17 above, in view of Ramanasankaran et al. (US 2024/0345551 A1; Ramanasankaran hereafter).
Regarding claim 2, Caron discloses the claimed invention except for: performing a semantic search to identify a preliminary set of the plurality of capabilities that are semantically similar to the user input;
applying, to the machine learning model, the user input and a representation of the preliminary set of capabilities to generate an additional model output; and based on the additional model output, identifying the subset of the plurality of capabilities.
Ramanasankaran discloses that identifying the plurality of capabilities comprises:
performing a semantic search to identify a preliminary set of the plurality of capabilities that are semantically similar to the user input (Ramanasankaran, [0193], e.g. users can provide prompts in plain English language (e.g., a natural language representation or a semantic representation as opposed to a specific programming language input query) to request a variety of information pertaining to data stored within the system 1200 (e.g., requesting samples of various datasets, requesting the model system 1232 to do a preliminary exploratory data analysis));
applying, to the machine learning model, the user input and a representation of the preliminary set of capabilities to generate an additional model output (Ramanasankaran, [0194], e.g. the output(s) provided by the system 1200 (e.g., via the model system 1232 and the interface 1236) may be utilized as input(s) to one or more additional models to train and/or obtain more in-depth analyses of the data); and
based on the additional model output, identifying the subset of the plurality of capabilities (Ramanasankaran, [0194], e.g. by using the output provided by the model system 1232 and the interface 1236, subsequent models can be trained and provide corresponding outputs more quickly as compared to using the entire data set captured by the system 1200. Similarly, in some instances, the output(s) provided via the model system 1232 and the interface 1236 can enable data as a service functionality or be provided to other systems generally for use in data analytics).
Ramanasankaran discloses that the system can implement various automated and/or expert-based thresholds and data quality management processes to improve the accuracy and quality of generated outputs and to update training of the machine learning models accordingly ([0023]). One of ordinary skill in the art, before the effective filing date of the instant invention, would have been motivated by Ramanasankaran to improve the system of Caron. Therefore, it would have been obvious to one of ordinary skill in the art to use the system of Caron with the teachings of Ramanasankaran. The benefit would be to improve the accuracy and quality of generated outputs and to update training of the machine learning models accordingly.
Regarding claim 3, Caron as modified discloses that performing the semantic search to identify the preliminary set of capabilities comprises:
determining an embedding of the user input in a multi-dimensional semantic space (Ramanasankaran, [0141], e.g. the natural language text may be provided to an embedding model to convert the natural language text into indexable vectors that can be easily utilized by the machine learning models described herein. The embedding model can receive the natural language prompts (and/or one or more tokens thereof as generated by a tokenizer, such as a byte pair encoding tokenizer, etc.), and apply the natural language prompts as input to the embedding model to cause the embedding model to generate respective vectors representing the natural language prompts in an n-dimensional space. The embedding model can include any of various functions, algorithms, rules, and/or machine learning models configured to generate vectors representative of text or semantic data, including but not limited to models such as CNNs, word2vec, or BERT); and
comparing the embedding of the user input in the multi-dimensional semantic space to embeddings, in the multi-dimensional semantic space, that represent respective capabilities of the plurality of capabilities to identify the preliminary set of the plurality of capabilities whose embeddings are near the embedding of the user input in the multi-dimensional semantic space (Ramanasankaran, [0154], e.g. the workflow 1100 allows for the system to provide accurate responses to user prompts based on relevant nodes and/or edges and associated enrichment data, while requiring less overall data processing and analysis compared to the full ahead-of-time training process discussed above, with respect to FIGS. 9 and 10, thereby reducing computational burden placed on the system, as well as system power consumption and time-to-compute durations).
Regarding claim 4, Caron as modified discloses that applying, to the machine learning model, the user input and the representation of the preliminary set of capabilities to generate the additional model output comprises applying, to the machine learning model, textual descriptions of functions of the preliminary set of capabilities (Ramanasankaran, [0194], e.g. the output(s) provided by the system 1200 (e.g., via the model system 1232 and the interface 1236) may be utilized as input(s) to one or more additional models to train and/or obtain more in-depth analyses of the data. In some instances, by using the output provided by the model system 1232 and the interface 1236, subsequent models can be trained and provide corresponding outputs more quickly as compared to using the entire data set captured by the system 1200).
Regarding claim 5, Caron as modified discloses that the plurality of capabilities comprise a user-generated capability, wherein data indicative of the user-generated capability in the database includes a textual description of a function of the user-generated capability, and wherein identifying the plurality of capabilities comprises:
applying, to the machine learning model, the user input and the textual description of the function of the user-generated capability to generate an additional model output (Ramanasankaran, [0023], e.g. system can enable real-time messaging and/or conversational interfaces for users to input various user prompts and receive corresponding informational completions via machine learning models trained using building knowledge graph data that is translated into natural language text prompts); and
based on the additional model output, identifying the subset of the plurality of capabilities (Ramanasankaran, [0149], e.g. By enriching the various graph nodes, additional context about each node is provided within the natural language prompts that are ultimately generated for each graph node).
Regarding claim 6, Caron as modified discloses that the user input includes information relating to at least one input or configuration parameter of one of the subset of the plurality of capabilities (Ramanasankaran, [0185], e.g. receive a query input), and wherein identifying the plurality of capabilities comprises applying the user input to the machine learning model to generate an additional model output that identifies the subset of the plurality of capabilities and that also indicates the at least one input or configuration parameter of the one of the subset of the plurality of capabilities (Ramanasankaran, [0194], e.g. the output(s) provided by the system 1200 (e.g., via the model system 1232 and the interface 1236) may be utilized as input(s) to one or more additional models to train and/or obtain more in-depth analyses of the data. In some instances, by using the output provided by the model system 1232 and the interface 1236, subsequent models can be trained and provide corresponding outputs more quickly as compared to using the entire data set captured by the system 1200).
Regarding claim 8, Caron as modified discloses that executing the application comprises: for a particular capability of the subset of the plurality of capabilities, obtaining, from a user that supplied the user input, at least one input or configuration parameter of the particular capability (Ramanasankaran, [0185], e.g. receive a query input); and executing the particular capability based on the at least one input or configuration parameter (Ramanasankaran, Figure 13, and [0188], e.g. An output can then be generated responsive to the operation of the fine-tuned LLM, at step 1325. The output can be generated and decoded by the model system 1232 (e.g., the fine-tuned LLM) prior to being transmitted to the interface 1236 for display as a human-readable output that is responsive to the prompt received from the user).
Regarding claim 11, Caron as modified discloses wherein identifying the plurality of capabilities comprises: applying, to the machine learning model, the user input and textual descriptions of functions of the plurality of capabilities to generate an additional model output; and based on the additional model output, identifying the subset of the plurality of capabilities (Ramanasankaran, [0194], e.g. the output(s) provided by the system 1200 (e.g., via the model system 1232 and the interface 1236) may be utilized as input(s) to one or more additional models to train and/or obtain more in-depth analyses of the data. In some instances, by using the output provided by the model system 1232 and the interface 1236, subsequent models can be trained and provide corresponding outputs more quickly as compared to using the entire data set captured by the system 1200).
Regarding claim 12, Caron as modified discloses that the graph representing dependencies regarding the subset of the plurality of capabilities represents every capability of the subset of capabilities as at least one of a dependency for at least one other capability or dependent upon at least one other capability (Ramanasankaran, [0149], e.g. By enriching the various graph nodes, additional context about each node is provided within the natural language prompts that are ultimately generated for each graph node).
Claims 14, 15, 18, and 19 recite a computer-readable medium and a system comprising the same steps as claims 2-6, 8, 11, and 12 above. Caron as modified discloses a computer-readable medium and a system (Caron, Figure 8) for implementing the above-cited steps.
CONCLUSION
Patent applicants with problems or questions regarding electronic images that can be viewed in the Patent Application Information Retrieval system (PAIR) can now contact the USPTO's Patent Electronic Business Center (Patent EBC) for assistance. Representatives are available to answer your questions daily from 6 AM to midnight (EST). The toll-free number is (866) 217-9197. When calling, please have your application serial number or patent number, the type of document you are having an image problem with, the number of pages, and the specific nature of the problem. The Patent Electronic Business Center will notify applicants of the resolution of the problem within 5-7 business days. Applicants can also check PAIR to confirm that the problem has been corrected.
The USPTO's Patent Electronic Business Center is a complete service center supporting all patent business on the Internet. The USPTO's PAIR system provides Internet-based access to patent application status and history information. It also enables applicants to view the scanned images of their own application file folder(s) as well as general patent information available to the public.
For all other customer support, please call the USPTO Call Center (UCC) at 800-786-9199. The USPTO's official fax number is 571-272-8300.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Cheyne D. Ly, whose telephone number is (571) 272-0716. The examiner can normally be reached Monday through Friday from 8 AM to 4 PM ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neveen Abel-Jalil, can be reached at (571) 270-0474.
/Cheyne D Ly/
Primary Examiner, Art Unit 2152
1/22/2026