Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. Claims 1, 2, 5, 13, 14, 16, 19, 20, 23, and 30 are presently amended. Claims 12, 15, and 17 are presently canceled without prejudice or disclaimer of the subject matter contained therein. No claims were previously canceled. New claims 31 through 33 are presently added. As a result, upon entry of the foregoing amendments, claims 1-11, 13-14, 16, and 18-33 will be pending in this application.
3. This Office action is in response to the Remarks (REM) filed 11/26/2025.
4. Claims 1, 19, and 30 are independent claims.
5. This Office action is made FINAL.
Information Disclosure Statement
6. The information disclosure statement (IDS) submitted on 11/26/2025 was considered by the examiner.
Claim Rejections - 35 USC § 103
7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. Claims 1-11, 13-14, 16, and 18-33 are rejected under 35 U.S.C. 103 as being unpatentable over Ayed et al. (US 20250245446 A1), hereinafter Ayed, in view of Bell et al. (US 20240406166 A1), hereinafter Bell.
10. Regarding claim 1, Ayed teaches a system comprising: at least one hardware processor; and at least one memory storing instructions that cause the at least one hardware processor to perform operations (Fig 1, [0025]) comprising:
causing presentation of a graphical user interface for an artificial intelligence-based assistant ([0002], [0024], “artificial intelligence (e.g., an LLM)”, [0027], “chat interface 122 (presentation of a graphical user interface for an artificial intelligence-based assistant)” Fig 9A, [0076], “FIG. 9A illustrates a chat interface 900”);
receiving from a user, by the graphical user interface, a natural language request (Fig 9A, element 902 (a natural language request), [0076], “The chat interface 900 is shown to include a first prompt 902 provided by a user (a natural language request)”, Fig 10, step 1002, [0080], “receiving a user-provided prompt, where the user-provided prompt is provided in natural language (block 1002)”),
the schema being associated with a select database ([0059], “schema information associated with the user or user role (which may be stored in the data catalog KG 518)”, [0060], [0068], [0073], “obtain the schema associated with a user, extract the source types within the schema, and store the source types in storage 807”);
determining context data for responding to the natural language request, the context data comprising metadata associated with the schema ([0038], “determine the objective of the prompt 128 and provide the prompt 128 to the appropriate pipeline (set of LLM)”, Fig 2C, [0048], “user-provided prompt 280 may recite, “What does the inputlookup command do?” Upon receipt of the prompt 280, the interface module 204 provides the prompt 280 to the objective determinator 207, which determines the objective of the prompt 280”, Fig 3, step 310, [0054], “An auto-generated prompt is then constructed based on the user-provided prompt with the relevant historical search queries included therein to provide additional context (block 310).”, [0059], “The RAG module 504 may then seek to augment the prompt using best practice context (which may be stored in the IT/security practices data store 514), examples of historical search queries (which may be stored in the few shots database 516), and/or schema information associated with the user or user role (which may be stored in the data catalog KG 518).”, Fig 10, step 1004, [0080], “The process 1000 continues with an operations of identifying an objective of the user-provided prompt (context data), and based on the objective, providing the user-provided prompt to a first operational pipeline of a plurality of operational pipelines, wherein each operational pipeline is associated with a unique prompt template (blocks 1004, 1006).”, [0082], “identifying the objective of the user-provided prompt includes identifying a keyword provided with the user-provided prompt, and wherein the keyword is associated with the objective.”, [0125], “event metadata can include one or more key-value pairs that describe the data source 1402 or the event data included in the request.”);
using a set of large language models to generate a response to the natural language request based on the context data and the natural language request (Fig 2, “LLM 234”, [0032-0033], “The analysis performed by the LLM 120 resulting in generation of an auto-generated response to the user-provided prompt”, [0054], “An auto-generated prompt is then constructed based on the user-provided prompt with the relevant historical search queries included therein to provide additional context (block 310).”),
the response comprising:
a structured language data query for the select database, the structured language data query comprising a structured query language (SQL) query and a natural language explanation of the structured language data query ([0055], Fig 9A, [0076], “The chat interface 900 is shown to include a first prompt 902 provided by a user (query/question) and a first response that includes a first portion 904 being auto-generated software code in a structured query language, e.g., SPL (a structured language data query), and a second portion 906 being a natural language description of the SPL of the first portion 904 (a natural language explanation of the structured language data query).”, Fig 9B, [0078], “FIG. 9B illustrates an extension of the chat interface 900 as including a second prompt 910 provided by the user and a second response that includes a first portion 912 being auto-generated software code in a structured query language, e.g., SPL (a structured language data query), and a second portion 914 being a natural language description of the SPL of the first portion 912 (a natural language explanation of the structured language data query).”);
causing presentation of the response in the graphical user interface without executing the structured language data query in the response (Fig 9A, [0076], and Fig 9B, [0078], the response (a structured language data query and a natural language explanation of the structured language data query) are presented without executing the structured language data query in the response); and
causing presentation of a graphical user interface element in the graphical user interface, the graphical user interface element being configured to cause, upon selection of the graphical user interface element by the user (Fig 9A, elements 903, 905, and 907 (a graphical user interface element); see specifically Fig 9B, element 914, pasting a query into the Splunk search bar and pressing Enter (the selection of the graphical user interface element by the user)):
execution of the structured language data query on the select database (Fig 9B, element 914, “running a search which execute the query”, see also, [0055], “the query relates to joining two tables of data, and the response includes information from a table joined from the two tables.”); and
display of a query result in the graphical user interface for the artificial intelligence-based assistant, the query result comprising tabular data, the query result being received in response to the execution of the structured language data query (Fig 9B, element 914, “the result is a table”, see also, [0055], “the query relates to joining two tables of data, and the response includes information from a table joined from the two tables.”).
Ayed does not specifically teach:
receiving from a user, by the graphical user interface, a selection of a schema;
However, Bell teaches receiving from a user, by the graphical user interface, a selection of a schema, the schema being associated with a select database ([0060], “The hub component may be used in conjunction with the AI-enabled clinical assistant to allow physicians to interact using conversational language including natural language inputs”, [0133], [0159], “In some embodiments, an agent module is configured to automatically populate a structured query (e.g., an SQL query) from a user query (a natural language request) and transmit the structured query to a structured database. For example, the agent module may obtain a particular schema, obtain inclusion and exclusion criteria, and generate a structured query for a database based on the criteria identified from the query and the schema of the database to be searched (the schema being associated with a select data store). For example, a user query of “how many patients are older than 18?” may be converted to an SQL query “SELECT COUNT (*) FROM demographic WHERE age >18.””, Fig 11A-11C, [0180], “FIGS. 11A-11C, the digital assistant may be provided with various tools that are selected by the user (e.g., the cohort builder in FIG. 11B (the schema being associated with a select data store (e.g., database) that the user intends to query) and the table builder in FIG. 11C).”, [0192], “As shown in FIG. 15, users may submit prompts (e.g., via voice and/or text inputs). Automatic speech recognition (ASR) may be performed on the prompts (e.g., using an agent module) to identify an intent of the prompt.”).
Also, Bell teaches the amended elements:
causing presentation of a graphical user interface element in the graphical user interface (Fig 10A-10C, the commands “ask a question,” “clear,” and “submit” (graphical user interface elements)), the graphical user interface element being configured to cause, upon selection of the graphical user interface element by the user (Fig 10A-10C, upon selection of the “submit” command):
execution of the structured language data query on the select database ([0090], “the corresponding logic 6112 allows for connecting to a corresponding database, e.g., by using an access token associated with the corresponding agent module 6102, communicating at least a portion of the obtained data to one or more nodes 6108, and/or execute one or more queries to identify/analyze such data.”, [0159], “an agent module is configured to automatically populate a structured query (e.g., an SQL query) from a user query and transmit the structured query to a structured database. For example, the agent module may obtain a particular schema, obtain inclusion and exclusion criteria, and generate a structured query for a database based on the criteria identified from the query and the schema of the database to be searched.”, [0184], “an SQL database agent configured to query one or more SQL databases, tables, and views in response to a natural language input”, [0210], “the verified assembly is passed to an execution block (e.g., block 1896) configured to execute assemblies. For example, the verified assembly and the user query (or a follow-up query) are provided to the execution block to obtain a response to the user query. In some embodiments, the response is provided to a formatting block (e.g., block 1898) (e.g., to generate a natural language response).”); and
display of a query result in the graphical user interface for the artificial intelligence-based assistant (Fig 8A, [0149], “the agent module 6102 sends the context, query, and optionally chat history to a node 6108 associated model 228 component (e.g., a large language model). The model 228 outputs a response to the agent module 6102, such as a terminal node, which transmits the response to the client device 102, in accordance with some embodiments.”, [0109], “the user may enter a text prompt such as “patients diagnosed with colorectal cancer greater than 45 years old treated with atezolizumab or durvalumab or interferon” and the agent module 6102 provides a response in accordance with a corresponding node architecture 6106 associated with the agent module 6102 (which may be visually represented in other user interfaces of the application (e.g., a workflow representation).”, Fig 10A-10C, “Answer”, [0210], “the response is provided to a formatting block (e.g., block 1898) (e.g., to generate a natural language response)”),
the query result comprising tabular data, the query result being received in response to the execution of the structured language data query ([0178], “with the table builder template, a user could select from their deliveries and then type in various data concepts that they'd like to pull across numerous tables in their delivery into a primary table that they can use for their analysis. With the template, the digital assistant is configured to determine which delivery the user is working with and therefore which columns are available, giving it a higher chance of successfully completing the task of creating a new table for the user (e.g., identifying a cohort-building agent module to answer cohort-related queries).”, [0264], “the report provides one or more real-time clinical summaries directly to a patient via a user interface display at the client device, in which the report includes information updated with self-reported outcomes and data from external services and/or databases associated with the subject, such as a connected fitness client application. In some embodiments, the report is configured to provide the patient with a diagnosis, track the health data of the patient during a third epoch, and/or visualize a health summary in real-time, such as through one or more charts or tables of the report.”, [0344], “the query relates to joining two tables of data, and the response includes information from a table joined from the two tables. For example, the agent joins the two tables and provides the joined table and/or data from the joined table.”, Fig 12C, [0360]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Bell into the system of Ayed because both systems relate to LLM-based assistants, and doing so would provide systems and methods that allow a user to query medical information (and other types of information) using natural language, intuitive interfaces, and follow-up questions (Bell, [0009]).
11. Regarding claim 2, Ayed and Bell teach the invention as claimed in claim 1 above and further Bell teaches wherein the determining of the context data for responding to the natural language request comprises:
performing a search, on a metadata database, for the metadata associated with the schema, the search being performed using a query string that comprises the natural language request ([0082], “one or more user databases 244 for storing user data such as user preferences, user settings, and other metadata.”, [0120], “agent module types include transform agent modules (e.g., performing functions such as data transformations, regular expressions, and string templating)”, [0231], “referring briefly to FIG. 15, the prompt may be received from one or more client devices in text form (e.g., one or more text strings)”, [0249], “the request comprises a plurality of text data comprising one or more text strings inputted by the user.” [0155], “embeddings are also generated from metadata corresponding to the documents and stored in a vector database 904.”, [0366], [0379], [0403], [0413]); and
receiving a result to the search, the result comprising the metadata ([0155], “embeddings are also generated from metadata corresponding to the documents and stored in a vector database 904.”, [0366], [0379], “metadata should be provided in the output”).
12. Regarding claim 3, Ayed and Bell teach the invention as claimed in claim 2 above and further Bell teaches wherein the context data comprises a set of text from chat history data associated with the user ([0120], “the language model agent modules provide/store context information such as conversation history, user preferences, subject details, and the like.”, [0152], “one or more conversation histories of a chat history associated with a user of the client device 102 are obtained from the client device 102 or the server system 106.”, [0154], “the query, context, and chat history is formatted (e.g., vectorized) and input to the model 228”, [0346], “the context data includes a chat history for the end user (e.g., as illustrated in FIGS. 8A and 8B).”, [0394], “informational affordance indicating a conversation history that will be provided as context to the task-specific machine-learning model while the task-specific orchestration is being used”).
13. Regarding claim 4, Ayed and Bell teach the invention as claimed in claim 2 above and further Bell teaches wherein the query string comprises information regarding the user ([0154], “the query, context, and chat history is formatted (e.g., vectorized) and input to the model 228”, [0346], “the context data includes a chat history for the end user (e.g., as illustrated in FIGS. 8A and 8B).”, [0394], “informational affordance indicating a conversation history that will be provided as context to the task-specific machine-learning model while the task-specific orchestration is being used”).
14. Regarding claim 5, Ayed and Bell teach the invention as claimed in claim 2 above and further Bell teaches wherein the metadata comprises information regarding at least one of: the select database; one or more tables on the select database and relevant to the query string; one or more columns on the select database and relevant to the query string; or one or more views on the select data store and relevant to the query string ([0082], “one or more user databases 244 for storing user data such as user preferences, user settings, and other metadata.”, [0155], “embeddings are also generated from metadata corresponding to the documents and stored in a vector database 904.”, [0324], [0336]).
15. Regarding claim 6, Ayed and Bell teach the invention as claimed in claim 5 above and further Bell teaches wherein the context data comprises at least one of: a first set of sample values for the one or more columns; a second set of sample values for a first set of columns of the one or more tables; or a third set of sample values for a second set of columns of the one or more views ([0108], “provider samples (e.g., information about types of samples that can be processed by the provider)”, [0375], “while the user is interacting with the chat user interface 1838 in FIG. 18F, a clinical sample may be added to Input B (represented by the block 1834B), and based on the additional data, a message may be presented within the chat user interface indicating that the data has been updated.”, [0528], “To identify natural groupings, two issues can be addressed. First, a way to measure similarity (or dissimilarity) between two samples can be determined. This metric (e.g., similarity measure) can be used to ensure that the samples in one cluster are more like one another than they are to samples in other clusters.”).
16. Regarding claim 7, Ayed and Bell teach the invention as claimed in claim 2 above and further Bell teaches wherein the metadata comprises a set of comments associated with at least one table, column, or view relevant to the query string ([0108], “Advantageously, by utilizing multiple datasets associated with different domains of subject matter and/or applying a classification system to the datasets, the knowledge database provides a storage system for data, such as medical records and clinical documentation that one or more agent modules 6102 can retrieve based on a task-specific requirement associated with a respective domain or classification.”, [0155], “embeddings are also generated from metadata corresponding to the documents and stored in a vector database 904.”, [0205], [0324], “document metadata (e.g., document classifications)”, [0379]).
17. Regarding claim 8, Ayed and Bell teach the invention as claimed in claim 2 above and further Bell teaches wherein the metadata comprises a set of tags associated with at least one table, column, or view relevant to the query string ([0058], “The platform may also be configured to track and/or catalog relevant therapies (e.g., on label and/or off label use) for a set of disease state”, [0088], “parsing and/or evaluating the incoming data for recognized keywords, phrases, ground truth labels, etc.”, [0168], [0179], “the digital assistant uses information about previous interactions as context information for a query”, [0204], “each input and output has a corresponding data type (e.g., indicated by a color and/or label).”).
18. Regarding claim 9, Ayed and Bell teach the invention as claimed in claim 1 above and further Bell teaches wherein the context data comprises information from at least one of: user feedback data associated with the user; or a structured language data query history associated with the user ([0179], “the digital assistant uses information about previous interactions as context information for a query”, [0194], “FIGS. 18A to 18F illustrate an example sequence of a user's interaction with an example agent-builder application 1800”, [0209], “the chat user interface is configured to facilitate an interaction between the user and the orchestration 1850 that the user modifying via the user interface 1830.”).
19. Regarding claim 10, Ayed and Bell teach the invention as claimed in claim 1 above and further Bell teaches wherein the context data comprises information from verified query repository data, the verified query repository data comprising one or more individual structured language queries paired with natural language descriptions of the one or more individual structured language queries ([0184], “(v) a copilot assistant agent configured to be embedded in an application and use a knowledge base and/or question and answer pairs to assist a user of the application”, [0210], “the verified assembly is passed to an execution block (e.g., block 1896) configured to execute assemblies. For example, the verified assembly and the user query (or a follow-up query) are provided to the execution block to obtain a response to the user query. In some embodiments, the response is provided to a formatting block (e.g., block 1898) (e.g., to generate a natural language response).”, [0222], “a verification block is used to determine whether a conclusion in the response is consistent with the cited source material.”, [0269], “the agent module 6102 provides the recommendation and human verifiable support for the decisions made using the logic 6112 to arrive at the recommendation.”, [0515], “a network node sums up the products of all pairs of inputs, xi, and their associated parameters”).
20. Regarding claim 11, Ayed and Bell teach the invention as claimed in claim 1 above and further Bell teaches wherein the context data comprises one or more pre-instructions provided by the user ([0252], “the input node is configured to receive a prompt from a user associated with the specific clinical task. In some embodiments, the input node is an initial terminal node in the node architecture that receives the prompt from the user in a raw format (e.g., as instructed data).”).
21. Regarding claim 12, (Canceled)
22. Regarding claim 13, Ayed and Bell teach the invention as claimed in claim 1 above and further Bell teaches wherein the graphical user interface for the artificial intelligence-based assistant is presented within a software application environment, and wherein the context data comprises information regarding a current context of the software application environment ([0075-0079], [0184], “(v) a copilot assistant agent configured to be embedded in an application and use a knowledge base and/or question and answer pairs to assist a user of the application”, [0192], [0420]).
23. Regarding claim 14, Ayed and Bell teach the invention as claimed in claim 1 above and further Bell teaches wherein the operations comprise: determining a set of accessible schemas accessible to the user; and providing the set of accessible schemas for selection by the user via the graphical user interface, the selection of the schema being selected from the set of accessible schemas ([0056], “An environment may be defined by access to data sources and/or users. The agent configuration may be stored in a control plane.”, [0059], [0090], “the corresponding logic 6112 allows for connecting to a corresponding database, e.g., by using an access token associated with the corresponding agent module 6102, communicating at least a portion of the obtained data to one or more nodes 6108, and/or execute one or more queries to identify/analyze such data.” [0100], “one or more server data modules 330 for handling the storage of and/or access to data (e.g., clinical and user data).”, [0117], “the agent builder frontend includes an access component (e.g., an administrative console, such as the user interface 1830 in FIG. 
18D, which may be a home user interface that a user is presented with upon providing access credentials to the application)… a plurality of task-specific orchestrations to which the user has access, e.g., based on the access credentials provided to the application), an agent builder component (e.g., either or both of the user interfaces 1812 and 1822 respectively, which may include a first representation of the node architecture 6106 (e.g., a form-builder representation) and a second representation of the node architecture 6106 (e.g., a workflow representation))”, [0184], [0195], “the data collections (e.g., accessible via a user input directed to a data collections user interface element 1806) available to the user are based on the specific credentials provided by the user to access the agent-builder application 1800.”, [0275], “the user identifier is checked against the access control lists to determine what data the user is authorized to access.”).
24. Regarding claim 15, (Canceled)
25. Regarding claim 16, Ayed and Bell teach the invention as claimed in claim 1 above and further Bell teaches wherein the graphical user interface for the artificial intelligence-based assistant is presented as a first graphical user interface within a software application environment, and wherein the operations comprise: causing presentation of the response in the first graphical user interface; and causing presentation of a graphical user interface element in the first graphical user interface, the graphical user interface element being configured to cause, upon selection of the graphical user interface element by the user, insertion of the structured language data query from the response to a second graphical user interface of the software application environment that is external to the first graphical user interface of the artificial intelligence-based assistant ([0014], “a user interface framework, which allows for provisioning access to secure data based on user credentials, allowing access to different user interface elements to users based on tools and/or external services, and allowing creation and/or storage of personalized agent modules, such as for collaboration within third party users.”, [0112], “a system (e.g., the platform 100 or a component thereof) determines (e.g., using a machine-learning model different than the model receiving the prompt) that a prompt provided by the user includes a natural language description of a cohort (e.g., a patient cohort, including a set of one or more patients) the user wants to build… users of the application are able to more effectively and efficiently interact with a data source by using natural language prompts to cause operations that would otherwise require multiple user inputs to a plurality of different user interface elements and/or navigating through different user interfaces (e.g., a first set of user interfaces for determining the filtering operation based on a natural language prompt, and a second set of
user interfaces for implementing the filtering operation (e.g., within a different web or desktop application)).”, [0128], “a first user may wish to employ a user interface that includes one or more user interface elements described with respect to the application (e.g., the user interface 500) by directly embedding the components within a web page, and a second user may wish to interact with an API that is configured to receive user requests and provide responses in the form of data structures, which the second user may integrate into different user interface elements not associated with the application.”, [0197-0198], [0227-0228], [0357], “(b) presenting a different user interface to the user for communicating with the respective task-specific orchestration (e.g., the user interface shown in FIG. 11); and (iv) in accordance with receiving a prompt provided by the user at the different user interface, presenting a response object, where the response object is generated by the respective task-specific orchestration based on (1) the prompt provided by the user, and (2) at least some of the data from the one or more data collections.”, [0359], [0362], [0366], [0370-0372], [0377-0389]).
26. Regarding claim 17, (Canceled)
27. Regarding claim 18, Ayed and Bell teach the invention as claimed in claim 1 above and further Bell teaches wherein the set of large language models comprises a chain of large language models, wherein a first large language model of the chain of large language models generates a first output based on a first input that comprises the natural language request and the context data, and wherein a second large language model of the chain of large language models generates a second output based on a second input that comprises the natural language request and the first output from the first large language model ([0135-0136], LLMChain, [0156], “FIG. 9B shows the question and relevant chunks being input to a large language model (LLM) and the LLM generating an answer (e.g., the relevant chunks are used as context by the LLM for answering the question).”, [0165], “By generating individual outputs for each snippet, the chain can extract specific information that contributes to a more comprehensive final result.”, [0166], “map-reduce model chain”, [0218], [0263], [0323], “an output from the large-language model represented by the block 1834C in FIG. 18D can be provided as an input to another large-language model for performing a different task.”).
28. Regarding claims 19-29, those claims recite a method that performs the operations of system claims 1-12, respectively, and are rejected under the same rationale.
29. Regarding claims 30-33, those claims recite a machine-readable storage medium, the machine-readable storage medium including instructions that, when executed by a machine, cause the machine to perform the operations of claims 1, 2, 5, and 6, and are rejected under the same rationale.
Response to Amendments and Arguments
30. In the REM received 11/26/2025, Claims 1, 2, 5, 13, 14, 16, 19, 20, 23, and 30 are presently amended. Claims 12, 15, and 17 are presently canceled without prejudice or disclaimer of the subject matter contained therein. No claims were previously canceled. New claims 31 through 33 are presently added. As a result, upon entry of the foregoing amendments, claims 1-11, 13-14, 16, and 18-33 will be pending in this application.
Applicant argued that Ayed and Bell do not teach the invention recited in the claims for a number of reasons, including, but not limited to, the following.
Ayed and Bell, whether considered alone or in any combination, fail to teach or suggest "causing presentation of the response in the graphical user interface without executing the structured language data query in the response," and the claimed separation of execution behind a distinct user-invoked GUI element, as recited by independent claim 1 as amended.
31. Examiner presents the following responses to Applicant’s arguments:
Applicant's arguments received on 11/26/2025 have been fully considered, but they are not persuasive. Referring to the previous Office action, Examiner has cited relevant portions of the references to illustrate the systems taught by the prior art. To further clarify what is taught by the references applied in the first Office action, Examiner has expanded the cited teachings for comprehensibility while maintaining the same grounds of rejection of the claims, except as noted above in the section labeled "Status of Claims." This information is intended to illuminate the teachings of the references while providing evidence that establishes further support for the rejections of the claims.
CONCLUSION
32. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yu et al (US 12204565 B1) discloses a system that facilitates a process for automatically generating artificial intelligence (AI) models.
Shrivastava et al (US 20230245654 A1) discloses instructions to receive a user input, process the user input using the ASR module, the NLU module, the dialog manager, one or more of the agents, the arbitrator, and the delivery system, and provide a response to the user input.
ZIOLKOWSKI et al (US 20240411528 A1) discloses a content editor, or a plugin thereto, that automatically generates authorship tokens identifying content authored by a human author or an artificial author.
Zaremoodi et al (US 20230098783 A1) discloses focused training of language models and end-to-end hypertuning of the framework. In one aspect, a method is provided that includes obtaining a machine learning model pre-trained for language modeling, and post-training the machine learning model for various tasks to generate a focused machine learning model.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HICHAM SKHOUN whose telephone number is (571)272-9466. The examiner can normally be reached Mon-Fri, 10am-6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amy Ng, can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HICHAM SKHOUN/ Primary Examiner, Art Unit 2164