DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-20 are pending, of which claims 1, 8, and 15 are in independent form.
Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to an abstract idea.
Claims 1-20 are rejected under 35 U.S.C. 103.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The claim(s) recite(s) parallel interaction with machine learning models.
With respect to step 1 of the patent subject matter eligibility analysis, the claims are directed to a process, machine, manufacture, or composition of matter.
Independent claim 1 is directed to a method, which is a process.
Independent claim 8 is directed to a system including a memory and one or more processing devices, which falls within one of the four statutory categories.
Independent claim 15 is directed to a non-transitory computer readable medium, which falls within one of the four statutory categories.
All other claims depend on claims 1, 8, and 15. As such, claims 1-20 are directed to a statutory category.
Regarding claims 1, 8 and 15:
With respect to step 2A, prong one (Judicial Exception), the claims recite an abstract idea, law of nature, or natural phenomenon. Specifically, the following limitations recite mathematical concepts and/or mental processes and/or certain methods of organizing human activity.
The claims recite a sequence of operations that amounts to information organization, searching, evaluation, and ranking, and is directed to an abstract idea:
Receiving data in multiple ML interface windows;
Using identifiers to reference data from another interface;
Generating a prompt based on the combined data;
Sending the prompt to an ML model;
Receiving a response; and
Displaying the response in another UI window.
At a high level, the claims in substance recite: collecting information; combining information; generating a request based on the information; and presenting results.
These steps fall into recognized abstract idea groupings:
Mental Process: associating data with identifiers, referencing prior information, reformulating a prompt from existing inputs;
Information Organization, Processing and Presentation: coordinating interactions between multiple models/interfaces resembles managing an interaction or communication workflow.
Data Manipulation/Information Processing: the claims manipulate information rather than improving computing technology.
There are no steps that provide a technical improvement to the computing system itself.
Thus, the claims recite an abstract idea (mental process/information organization).
With respect to step 2A, Prong Two (Particular Application), the claims do not recite additional elements that integrate the judicial exception into a practical application. The following limitations are considered “additional elements” and explanation will be given as to why these “additional elements” do not integrate the judicial exception into a practical application.
The claims add:
ML model interface window;
Prompt entry field;
Identifiers;
ML model;
Displaying response in UI.
The claims:
Use generic UI windows and generic ML models.
Do not improve:
Model architecture
Prompt processing algorithms
Memory structures
Latency
Network efficiency
Parallel computation mechanisms
These components are merely conventional computer components used as tools to execute the abstract idea; the claims merely apply the abstract idea in a routine UI environment.
The “parallel interaction” language appears to describe how users interact with the models, not an improvement to the underlying computer or ML technology.
The limitations fail to transform the exception into a practical application. There is also no improvement to computer functionality or any specific technical solution to a computer-centric problem.
Instead, the claims recite conventional and generic computer functions performed in a routine manner, which does not amount to a practical application.
With respect to Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The recited components are merely generic computer/database elements performing their routine, well-understood, and conventional functions. See Alice; MPEP § 2106.05(d).
The steps recited in the independent claims merely constitute generic ML models, generic interface windows, and generic data receiving and displaying steps. Courts have consistently held that such high-level information management operations are conventional.
The claims provide no improvement to computer functionality, do not solve a technical problem in the computing environment, and do not add unconventional architecture. All are routine, conventional operations.
Considering the claims as a whole, the ordered combination of elements also reflects nothing more than the typical workflow of distributed systems and therefore does not add “significantly more” than the abstract idea.
Such generic, high‐level, and nominal involvement of a computer or computer‐based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent‐eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo. Further, see, e.g., Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 134 S. Ct. 2347, 2359‐60, 110 USPQ2d 1976, 1984 (2014). See also OIP Techs. v. Amazon.com, 788 F.3d 1359, 1364, 115 USPQ2d 1090, 1093‐94 (Fed. Cir. 2015) ("Just as Diehr could not save the claims in Alice, which were directed to 'implement[ing] the abstract idea of intermediated settlement on a generic computer', it cannot save OIP's claims directed to implementing the abstract idea of price optimization on a generic computer.") (citations omitted). See also Affinity Labs of Texas LLC v. DirecTV LLC, 838 F.3d 1253, 1257‐1258 (Fed. Cir. 2016) (mere recitation of a GUI does not make a claim patent‐eligible); Intellectual Ventures I LLC v. Capital One Bank, 792 F.3d 1363, 1370 (Fed. Cir. 2015) ("the interactive interface limitation is a generic computer element.").
The additional elements are broadly applied to the abstract idea at a high level of generality (similar to how the recitation of the computer in the claims in Alice amounted to mere instructions to apply the abstract idea of intermediated settlement on a generic computer, as explained in MPEP § 2106.05(f)), and they operate in a well‐understood, routine, and conventional manner.
MPEP § 2106.05(d)(II) sets forth the following:
The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity.
• Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec ... ; TLI Communications LLC v. AV Auto. LLC ... ; OIP Techs., Inc., v. Amazon.com, Inc ... ; buySAFE, Inc. v. Google, Inc ... ;
• Performing repetitive calculations, Flook ... ; Bancorp Services v. Sun Life ... ;
• Electronic recordkeeping, Alice Corp ... ; Ultramercial ... ;
• Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc ... ;
• Electronically scanning or extracting data from a physical document, Content Extraction and Transmission, LLC v. Wells Fargo Bank ... ; and
• A web browser's back and forward button functionality, Internet Patents Corp. v. Active Network, Inc. ...
. . . Courts have held computer-implemented processes not to be significantly more than an abstract idea (and thus ineligible) where the claim as a whole amounts to nothing more than generic computer functions merely used to implement an abstract idea, such as an idea that could be done by a human analog (i.e., by hand or by merely thinking).
In addition, the ordered combination of elements adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application, nor does the ordered combination amount to significantly more than the abstract idea itself.
The dependent claims have been fully considered as well; however, similar to the findings for the claims above, these claims are likewise directed to the “Mental Processes” grouping of abstract ideas set forth in the 2019 PEG, without integrating it into a practical application and with, at most, a general purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea.
Viewing the claims as a whole does not change this conclusion, and the claims are ineligible.
Regarding dependent claims 2-7, 9-14 and 16-20:
These claims primarily add information management, UI options, or data storage features.
The claims do not:
Improve ML model operation,
Improve computer architecture,
Reduce processing load,
Solve a technical problem in computing.
All the dependent claims remain directed to the same abstract idea as claims 1, 8 and 15 and fail to add an inventive concept.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over (AAPA from IDS) Md Mahadi HASSAN et al., "ChatGPT as your Personal Data Scientist", arxiv.org, Cornell University Library, 201 Olin Library, Cornell University, Ithaca, NY 14853, 23 May 2023, XP091515688. [Hassan] in view of Yannam; Ramakrishna R. et al. (US 11893356 B1) [Yannam].
Regarding claims 1, 8 and 15, Hassan discloses, a method for parallel interaction with machine learning models, comprising: receiving first data in a first machine learning (ML) model [interface window of a parallel interaction user interface], wherein the first ML model [interface window] is associated with a first identifier (Hassan: section 3.2.1 on page 9, "Dataset Summarizer micro-agent" corresponds to the first MLM agent which generates as first data a comprehensive summary of a given dataset. This first data can be referenced in the prompt of another MLM agent using the placeholders {microprocess} and {mp_resp}; i.e. the {microprocess} corresponds to the first identifier and the {mp_resp} corresponds to the first data. Regarding “interface window of a parallel interaction user interface” (implicitly discloses): page 5, 3 MODEL ARCHITECTURE: VIDS employs stateless global micro-agents, functioning independently of any state-related data or history, to create an overarching structure that enables fluid transitions throughout the dialogue, irrespective of the specific state. This stateless design ensures a smooth narrative flow and avoids complications of state-dependent biases or entanglements, thus bolstering the versatility and adaptability of our dialogue system. Alongside these global agents, local micro-agents, each tailored to a specific dialogue state, proficiently handle the nuances of user utterances and conversation contexts, facilitating smooth transitions between states in line with the evolving dialogue. Examiner specifies that, this description implies the use of a parallel interaction user interface, where multiple micro-agents (global and local) work simultaneously to ensure seamless interaction and fluid transitions between different dialogue states. 
These micro-agents interact in parallel to manage different aspects of the user-system interaction, such as data visualization, task formulation, prediction engineering, and result summarization [Fig. 2]);
receiving, within a prompt entry field in a second ML model [interface window of the parallel interaction user interface], second data, wherein the second data includes the first identifier (the Directive of the "Conversation Manager" in section 3.1.3 references the output of a MLM agent using the identifiers {microprocess} and {mp_resp}; i.e. the Directive is the second data);
responsive to a presence of the first identifier, generating a first ML model prompt based on the first data and the second data (caption of Table 3 on page 8, "The details of prompt design for the Conversation Manager microprocess. In the prompt, {state}, {input}, {microprocess}, and {mp_resp} are placeholders which will be replaced dynamically during the conversation");
providing the first ML model prompt to an ML model (page 19, 3.4.3 AutoML interfacer Micro-Agent: AutoML platforms automate the process of applying machine learning to real-world problems, making the technology accessible to non-experts and improving efficiency of experts. In this step, the prepared dataset is fed into an AutoML system, which automatically selects the most suitable machine learning algorithm, optimizes its parameters, and trains the model. Examiner specifies that, this step is where the system first provides the ML model prompt (in the form of the PeTEL representation and prepared dataset) to the AutoML system for training and model generation.);
receiving, from the ML model, a first model response (Page 19, 3.5.1 Result Summarizer Micro-Agent: Currently, we have implemented the Result Summarization micro-agent, where the system produces a comprehensive summary of the findings once the machine learning tasks have been executed. Utilizing an AutoML library such as Auto-SKLearn, the system trains all specified models, equipping users with a broad comparison to discern the most effective solution. This process distills the results into an accessible format, enabling users to grasp the essence of the findings quickly. Examiner specifies that, this subsection describes how the system receives the first model response from the AutoML system after training the models. The Result Summarizer micro-agent processes the response, summarizes the findings, and presents them to the user in a comprehensible format); and
displaying the first model response in the second ML model interface window (page 20, 3.5.2 Result Visualizer Micro-Agent (Future work): Looking forward, we aim to implement the Result Visualization micro-agent. Visualizing the outcomes can significantly aid users’ understanding and facilitate more informed decision-making. We plan to develop a process that generates suitable visualizations based on the results, such as performance metrics or feature importance, offering a more intuitive perspective of the findings. Examiner specifies that, this subsection outlines the future work of implementing a Result Visualizer micro-agent, which will display the model response (e.g., performance metrics or feature importance) in a visual format).
However, Hassan does not explicitly disclose an interface window of a parallel interaction user interface; an interface window; or an interface window of the parallel interaction user interface.
Yannam discloses, interface window of a parallel interaction user interface; interface window; interface window of the parallel interaction user interface (The AI system may include second machine executable instructions. The second machine executable instructions may implement an agent interface. The agent interface may include software tools for a first agent to concurrently interact directly with the user and directly with the automated chatbot [col. 3, ll. 59-64]. The agent interface may present the first, second, third and fourth interface windows concurrently. Based on the second automated response generated in response to the agent inputs, the agent interface may determine a location for each of the first, second, third and fourth interface windows within a user interface of the agent interface [col. 6, ll. 9-14]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Yannam's system would have allowed Hassan to provide an interface window of a parallel interaction user interface; an interface window; and an interface window of the parallel interaction user interface. The motivation to combine is apparent in Hassan's reference, because there is a need for improved apparatus and methods for efficiently integrating a chatbot system into an agent-user interaction.
Claim(s) 2 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Hassan in view of Yannam, and further in view of Madisetti; Vijay et al. (US 20240370473 A1) [Madisetti].
Regarding claims 2 and 16, the combination of Hassan and Yannam teaches all the limitations of claims 1 and 15.
However, neither Hassan nor Yannam explicitly discloses receiving the second data in the second ML model interface window that corresponds to a second ML model; generating the first ML model prompt based on the first data and the second data; establishing communication between the first ML model and the second ML model; providing the first ML prompt to the first ML model and the second ML model; parsing the first ML prompt by the first ML model and the second ML model; and receiving, from the first ML model and the second ML model, the first model response.
Madisetti discloses, receiving the second data in the second ML model interface window that corresponds to a second ML model (Referring now to FIG. 10 is an illustration of using multiple h-LLMs to answer questions from specific input documents, is described in more detail. User 900 enters a prompt in user interface 902. The prompt is sent to AI Input Broker 810 which generates multiple derived prompts for different categories 924. The prompts are converted into embeddings using multiple embedding models 926. The prompt embeddings 928 are sent to a vector database 930 which returns a list of knowledge documents 934 that are relevant to the prompt based on the similarity of their embeddings to the user's prompt. The knowledge documents 934 are sent to the AI Input Broker 810 which creates new context-aware prompts based on the user's initial prompt 916, derived prompts 924 and the retrieved knowledge documents 934 as context and sends it to multiple h-LLMs 912. The results produced by multiple h-LLMs are processed by the AI Output Broker 908 and the best result is sent to the user 900 along with citations from the knowledge documents 934 ¶ [0063]);
generating the first ML model prompt based on the first data and the second data (Referring now to FIG. 9 is an illustration of generating derived prompts for different categories and using them with multiple h-LLMs to generate the best results, is described in more detail. User 800 enters a prompt in user interface 802. The prompt is sent to the AI Input Broker 810 which generates multiple derived prompts for different categories. The derived prompts 822 are sent multiple h-LLMs 824 which produce the results. The results 816 are sent to the AI Output Broker 814 which processes the results and performs tasks such as filtering, ranking, weighting, assigning priorities, and then sends the best results to the user 800. The h-LLMs 824 can have varying levels of accuracy, and optimized for different tasks such as Question Answering, Information Extraction, Sentiment Analysis, Image Captioning, Object Recognition, Instruction Following, Classification, Inferencing, and Sentence Similarity, for instance ¶ [0062]);
establishing communication between the first ML model and the second ML model (A method for training large language models (LLMs) and using them for inference including training a base LLM, creating connected models while training the base LLM by coupling multiple trained base LLMs in a parallel, series, or hybrid architecture, creating specialized connected language models for different specialized processing tasks by supplementally training two or more connected models with different respective training sets, supplementally training a subset of connected language models by selectively routing one or more training data inputs to one or more specialized connected language models of the plurality of specialized connected language models responsive to at least one of accuracy optimization and task specialization, and routing prompts or derived prompts to the subset [Abstract]. Also see Figs. 5 and 7);
providing the first ML prompt to the first ML model and the second ML model (see Fig. 10, elements 924-926);
parsing the first ML prompt by the first ML model and the second ML model (see Fig. 10, elements 926-934); and
receiving, from the first ML model and the second ML model, the first model response (see Fig. 10, elements 914-900; see Fig. 9, elements 824-800).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Madisetti's system would have allowed Hassan and Yannam to facilitate receiving the second data in the second ML model interface window that corresponds to a second ML model; generating the first ML model prompt based on the first data and the second data; establishing communication between the first ML model and the second ML model; providing the first ML prompt to the first ML model and the second ML model; parsing the first ML prompt by the first ML model and the second ML model; and receiving, from the first ML model and the second ML model, the first model response. The motivation to combine is apparent in the Hassan and Yannam references, because there is a need for efficient ways to improve response times and accuracy and to reduce computational load, improving the cost, scalability, and expandability of existing AI models.
Claim(s) 3, 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Hassan in view of Yannam, and further in view of Gurtin; Liza et al. (US 10331303 B1) [Gurtin].
Regarding claims 3, 9 and 17, the combination of Hassan and Yannam teaches all the limitations of claims 1, 8 and 15.
However, neither Hassan nor Yannam explicitly discloses parsing the second data to determine the presence of one or more identifiers, including the first identifier, wherein each of the one or more identifiers is associated with a respective ML model interface window.
Gurtin discloses parsing the second data to determine the presence of one or more identifiers, including the first identifier, wherein each of the one or more identifiers is associated with a respective ML model interface window (The apparatus is optionally configured to determine whether the common message UI identifier is associated with at least one of the plurality of group-based communication interface elements. In circumstances where the common message UI identifier is determined to be not associated with any one of the plurality of group-based communication interface elements [col. 2, ll. 30-35]. Also see [col. 9, ll. 36-41] and [col. 22, ll. 24-29]. Each message sent or posted to a group-based communication channel of the group-based communication system includes metadata comprising the following: a sending user identifier, a message identifier, message contents, a group identifier, and a group-based communication channel identifier. Each of the foregoing identifiers may comprise ASCII text, a pointer, a memory address, and the like [col. 7, ll. 18-25]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Gurtin's system would have allowed Hassan and Yannam to facilitate parsing the second data to determine the presence of one or more identifiers, including the first identifier, wherein each of the one or more identifiers is associated with a respective ML model interface window. The motivation to combine is apparent in the Hassan and Yannam references, because there is a need for maintaining and updating a common message user interface (UI) shared among a plurality of group-based communication interfaces in a group-based communication system.
Claim(s) 4-7, 10-14 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hassan in view of Yannam, and further in view of Qadrud [Qadrud].
Regarding claims 4, 10 and 18, the combination of Hassan and Yannam teaches all the limitations of claims 1, 8 and 15.
However, neither Hassan nor Yannam explicitly discloses receiving an indication to save the first ML model prompt to a prompt database; saving the first ML model prompt to the prompt database; and retrieving the saved first ML model prompt from the prompt database for: editing the saved first ML model prompt; sharing the saved first ML model prompt; commenting on the saved first ML model prompt, or any combination thereof.
Qadrud discloses, receiving an indication to save the first ML model prompt to a prompt database; saving the first ML model prompt to the prompt database; and retrieving the saved first ML model prompt from the prompt database for: editing the saved first ML model prompt; sharing the saved first ML model prompt; commenting on the saved first ML model prompt, or any combination thereof (In some embodiments, the prompt testing utility 226 is configured to perform operations such as testing prompts created based on prompt templates against tests stored in the test repository 224 ¶ [0057]. At 410, one or more prompts based on the prompt templates are determined. In some embodiments, a prompt may be determined by supplementing and/or modifying a prompt template based on the input text. For instance, a portion of input text may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to generate a correspondence document. The prompt template may be modified to determine a prompt by adding a portion of input text that characterizes the nature of the correspondence document to be generated. The added input text may identify information such as the correspondence recipient, source, topic, and discussion points ¶ [0077]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references because Qadrud's system would have allowed Hassan and Yannam to facilitate receiving an indication to save the first ML model prompt to a prompt database; saving the first ML model prompt to the prompt database; and retrieving the saved first ML model prompt from the prompt database for: editing the saved first ML model prompt; sharing the saved first ML model prompt; commenting on the saved first ML model prompt, or any combination thereof. The motivation to combine is apparent in the Hassan and Yannam references, because improved techniques for accessing information encoded in natural language are desired.
Regarding claims 5, 11 and 19, the combination of Hassan, Yannam and Qadrud discloses, receiving metadata regarding the first ML model prompt; saving the metadata in the prompt database in association with the first ML model prompt; and retrieving the saved metadata from the prompt database for: editing the saved metadata; sharing the saved metadata; commenting on the saved metadata, or any combination thereof (Qadrud: If the external text generation modeling system imposes a threshold (e.g., 8,193 tokens), the text generation interface system 230 may need to impose a somewhat lower threshold when dividing input text in order to account for the metadata included in the prompt and/or the response provided by the large language model ¶ [0087]. An input metadata extraction prompt is determined at 1310 based on the text chunk and a clause splitting prompt template. In some embodiments, the input metadata extraction prompt may be determined by supplementing and/or modifying the input metadata extraction prompt based on the one or more clauses and the one or more data fields. For instance, the one or more clauses and a description of the one or more data fields may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to identify values for the one or more data fields based on the one or more clauses ¶ [0284]).
Regarding claims 6, 12 and 20, the combination of Hassan, Yannam and Qadrud discloses, receiving a mode selection in the second ML model interface window enabling a chat mode, wherein generating the first ML model prompt based on the first data and the second data is further based on a configured number of previous interactions with the ML model (Qadrud: At 806, the text generation interface system 210 determines a chat prompt 808 based on the chat input message 804. The chat prompt 808 may include one or more instructions for implementation by the text generation modeling system 270. Additionally, the chat prompt 808 includes a chat message 810 determined based on the chat input message 804 ¶ [0116]-[0122]).
Regarding claims 7 and 13, the combination of Hassan, Yannam and Qadrud discloses, receiving a mode selection in the second ML model interface window enabling a chat mode, wherein generating the first ML model prompt based on the first data and the second data is further based on a configured number of previous interactions with the ML model (Qadrud: In some embodiments, determining the chat prompt at 806 may involve selecting a chat prompt template configured to instruct the text generation modeling system 270 to suggest one or more skills ¶ [0124], [0126], [0139]-[0140]).
Regarding claim 14, the combination of Hassan, Yannam and Qadrud discloses, receive a selection of a second ML model prompt stored in a prompt database; edit the second ML model prompt; provide the edited second ML model prompt to the ML model; and receive, from the ML model, a second model response (At 410, one or more prompts based on the prompt templates are determined. In some embodiments, a prompt may be determined by supplementing and/or modifying a prompt template based on the input text. For instance, a portion of input text may be added to a prompt template at an appropriate location. As one example, a prompt template may include a set of instructions for causing a large language model to generate a correspondence document. The prompt template may be modified to determine a prompt by adding a portion of input text that characterizes the nature of the correspondence document to be generated. The added input text may identify information such as the correspondence recipient, source, topic, and discussion points ¶ [0077]. An input metadata extraction prompt is determined at 1310 based on the text chunk and a clause splitting prompt template. In some embodiments, the input metadata extraction prompt may be determined by supplementing and/or modifying the input metadata extraction prompt based on the one or more clauses and the one or more data fields. For instance, the one or more clauses and a description of the one or more data fields may be added to a prompt template at an appropriate location ¶ [0284]).
Conclusion
The examiner requests that, in response to this Office action, support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S ROSTAMI whose telephone number is (571)270-1980. The examiner can normally be reached Mon-Fri from 9 a.m. to 5 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney can be reached at (571)270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
2/16/2026
/MOHAMMAD S ROSTAMI/ Primary Examiner, Art Unit 2154