Prosecution Insights
Last updated: April 19, 2026
Application No. 18/606,363

METHOD AND SYSTEM FOR ARTIFICIAL INTELLIGENCE ASSISTED CONTENT LIFECYCLE MANAGEMENT

Non-Final OA: §102, §103
Filed: Mar 15, 2024
Examiner: James Boggs Jr.
Art Unit: 2657
Tech Center: 2600 (Communications)
Assignee: JPMorgan Chase Bank, N.A.
OA Round: 1 (Non-Final)
Grant Probability: 60% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 60% of resolved cases (64 granted / 107 resolved; -2.2% vs TC avg)
Interview Lift: strong, +38.8% for resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline)
Career History: 135 total applications across all art units (28 currently pending)
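The headline figures above fit together arithmetically. As a quick check (an editor's sketch; it assumes the interview lift is additive in percentage points, which matches the displayed numbers):

```python
# Reconciling the dashboard's headline figures (editor's sketch; assumes
# the +38.8% interview lift is additive in percentage points).
granted, resolved = 64, 107
career_allow_rate = granted / resolved                         # 0.598..., shown as 60%
interview_lift = 0.388                                         # +38.8 points with interview
with_interview = round(career_allow_rate, 2) + interview_lift  # 0.988, shown as 99%
```

Rounding 0.988 up gives the 99% "With Interview" probability shown in the prediction panel.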

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 107 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference characters not mentioned in the description: “500” in Figure 5, “600” in Figure 6, and “700” in Figure 7. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference characters in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 6-12 and 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Hudetz et al. (US Patent Application Publication No. 2024/0370479), hereinafter Hudetz.

Regarding claim 1, Hudetz discloses a method for facilitating content lifecycle management via artificial intelligence, the method being implemented by at least one processor (Paragraph 0057, lines 6-8, "The one or more processors can be internal or external to the apparatus and can execute at least a part of the software or firmware application."), the method comprising: receiving, by the at least one processor via an application programming interface, at least one inquiry in a natural language format, each of the at least one inquiry including freeform data (Paragraph 0062, lines 1-7, "The system 100 may implement various search tools and algorithms designed to search for information within an electronic document or across a collection of electronic documents. Within the context of a cloud computing system, the system 100 may implement a cloud search service accessible to users via a web interface or web portal front-end server system."; Paragraph 0083, lines 1-6, "In general operation, the search manager 124 may receive a search query 144 to search for information within an electronic document 142 by a cloud search service, such as an online electronic document management system.
The search query 144 may comprise any free form text in a natural language representation of a human language."; A web interface reads on an application programming interface, and a search query comprising free form text in a natural language representation reads on at least one inquiry in a natural language format including freeform data.); vectorizing, by the at least one processor, the at least one inquiry to generate at least one numeric sequence (Paragraph 0083, lines 6-11, "The search manager 124 may generate a contextualized embedding for the search query 144 to form a search vector. A contextualized embedding may comprise a vector representation of a sequence of words in the search query 144 that includes contextual information for the sequence of words."; Generating a contextualized embedding for the search query to form a search vector reads on vectorizing the at least one inquiry to generate at least one numeric sequence.); identifying, by the at least one processor using at least one model, at least one topic for each of the at least one inquiry based on the corresponding at least one numeric sequence, each of the at least one topic including a subject matter value and a sentiment value (Paragraph 0226, lines 1-10, "The logic flow 1300 may also include retrieving the set of candidate document vectors that are semantically similar to the search vector using a semantic ranking algorithm. A semantic ranking algorithm is a type of algorithm that ranks search results or recommendations based on their semantic relevance to the search query 144. 
Semantic ranking algorithms may use various NLP techniques, such as entity recognition, sentiment analysis, and topic modeling, to extract meaningful features and representations from the query and documents."; Using topic modeling and sentiment analysis to rank search results or recommendations based on their semantic relevance to the search query vector reads on identifying at least one topic for the inquiry.); aggregating, by the at least one processor, information that corresponds to the at least one topic from at least one source, the at least one source including a preconfigured data lake (Paragraph 0083, lines 12-21, "The search manager 124 may search a document index of contextualized embeddings for the electronic document 142 with the search vector. Each contextualized embedding may comprise a vector representation of a sequence of words in the electronic document that includes contextual information for the sequence of words. The search process may produce a set of search results 146. The search results 146 may include a set of candidate document vectors that are semantically similar to the search vector of the search query 144."; Paragraph 0143, lines 1-4, "In one embodiment, for example, the document index 730 may be implemented as an inverted index. 
An inverted index is a data structure used to efficiently search through and retrieve information from a large corpus of text."; Search results including a set of candidate document vectors reads on aggregating information that corresponds to the at least one topic from at least one source, and a large corpus of text with a document index reads on a preconfigured data lake.); determining, by the at least one processor using the at least one model, at least one solution in the natural language format for each of the at least one inquiry based on the aggregated information, the at least one solution including at least one recommended action based on a predetermined setting (Paragraph 0106, lines 1-12, "As previously described with reference to FIGS. 1, 2, the systems 100, 200 may implement some or all of the artificial intelligence architecture 300 to support various use cases and solutions for various AI/ML tasks suitable for supporting or automating document management operations. In various embodiments, the artificial intelligence architecture 300 may be implemented by the search manager 124 of the server device 102 for the systems 100, 200. In one embodiment, for example, the search manager 124 may implement the artificial intelligence architecture 300 to train and deploy an ML model 312 as a neural network, as described in more detail with reference to FIG. 4."; Paragraph 0226, lines 1-6, "The logic flow 1300 may also include retrieving the set of candidate document vectors that are semantically similar to the search vector using a semantic ranking algorithm. A semantic ranking algorithm is a type of algorithm that ranks search results or recommendations based on their semantic relevance to the search query 144."; Paragraph 0271, lines 1-6, "In some embodiments, the UI 1806 may be configured to display not only the summary of the electronic document, but also, as shown in FIGS. 
26-28, a table of contents of the document, a search window, one or more common or typical queries that may be requested in relation to this type of electronic document."; An artificial intelligence architecture reads on at least one model, providing search recommendations based on their semantic relevance to the search query reads on at least one solution in the natural language format for each of the at least one inquiry based on the aggregated information, and the user interface display being configured reads on a predetermined setting.); and generating, by the at least one processor using the at least one model, a response that includes the at least one solution (Paragraph 0266, lines 1-10, "In some embodiments, a content summary of the electronic document may be generated as a result of the query being executed. The summary may be provided to one or more generative AI models to generate an abstractive summary of content in the electronic document. The abstractive summary may then be used by the generative AI model to generate a response to the specific query. Once the response to the query is generated, it may be transmitted or sent for presentation on the GUI view of the user's computing device."; Generate a response to the specific query reads on generating a response that includes the at least one solution.).

Regarding claim 2, Hudetz discloses the method as claimed in claim 1.
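The technique at the heart of the claim 1 mapping is an inquiry vectorized into a numeric sequence, then matched against indexed document vectors by semantic similarity. A minimal sketch follows; this is an editor's illustration, not code from the application or from Hudetz. The term-frequency embedding and the sample corpus are invented stand-ins, whereas the cited art contemplates trained contextualized embeddings.

```python
import math
from collections import Counter

def embed(text, vocab):
    # Toy vectorizer: a normalized term-frequency vector over a fixed
    # vocabulary. A real system would use a trained contextual model.
    counts = Counter(text.lower().split())
    vec = [float(counts[w]) for w in vocab]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def semantic_rank(query, corpus):
    # Vectorize the query and each passage, then rank passages by cosine
    # similarity to the query vector (the "candidate document vectors"
    # idea in the cited art).
    vocab = sorted({w for text in [query] + corpus for w in text.lower().split()})
    q = embed(query, vocab)
    scored = [(sum(a * b for a, b in zip(q, embed(p, vocab))), p) for p in corpus]
    return [p for _, p in sorted(scored, key=lambda s: s[0], reverse=True)]

corpus = [
    "termination clause requires ninety days notice",
    "payment terms are net thirty days",
    "termination for convenience with written notice",
]
ranked = semantic_rank("notice required for termination", corpus)
# ranked[0] is the convenience-termination passage, the closest match
```

The toy scores fall out of shared vocabulary only; a contextualized model would also rank paraphrases with no word overlap.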
Hudetz further discloses: displaying, by the at least one processor via a graphical user interface, the response together with at least one graphical element that is configured to receive a user input, wherein the at least one graphical element includes at least one from among an edit graphical element that enables modification of the response, a regenerate response graphical element that generates a new response, an update response graphical element that incorporates the at least one recommended action into the response, and a publish graphical element that persists the response as documentation (Paragraph 0318, lines 1-19, "As depicted in FIG. 27, the GUI view 2700 may comprise a GUI element 2702 which is a main view presenting an electronic document 2704 and text information 2706 for the electronic document 2706, similar to the GUI view 2600. In addition, the GUI view 2700 may include a GUI element 2722 that is a sub-view for a document summary 2712 of the entire text information 2706 of the electronic document 2704, such as an abstractive summary 148. The GUI element 2722 may replace the AI sub-view when the arrow icon for the GUI element 2612 is activated by a user. The GUI element 2722 further includes a GUI element 2714 that presents text information “Was this helpful?” with associated thumbs up and thumbs down icons that when selected by a user gives user feedback for the document summary 2712. 
The GUI element 2722 also includes a GUI element 2716 with a text description of “12 RELATED RESULTS” which when selected by a user presents a GUI view of related search results or alternative document summaries."; A graphical user interface element that is a sub-view for a document summary reads on displaying the response via a graphical user interface, and a graphical user interface element that when selected by a user presents alternative document summaries reads on a graphical element that is configured to receive a user input that is a regenerate response graphical element that generates a new response.).

Regarding claim 3, Hudetz discloses the method as claimed in claim 2. Hudetz further discloses: collecting, by the at least one processor, feedback data in real-time for each of the at least one inquiry, wherein the feedback data includes feedback information that corresponds to the at least one solution, the at least one recommended action, and the user input (Paragraph 0275, lines 1-19, "In some embodiments, the UI 1806 may be configured to generate a GUI view that may include at least one of the following: one or more views of the one or more electronic documents, one or more views of text information associated with the one or more electronic documents, one or more views of graphical elements associated with one or more AI assistants, one or more views of one or more tables of contents associated with the one or more electronic documents, one or more views of common searches, one or more views of one or more defined search queries, one or more views of one or more document summaries associated with entireties of the one or more electronic documents, one or more views associated with one or more portions of the one or more electronic documents, one or more views of one or more feedback icons, one or more views of one or more search queries, one or more views of one or more text snippets associated with the one or more electronic documents and related to
the abstractive summary, and any combinations thereof."; Paragraph 0318, lines 1-15, "As depicted in FIG. 27, the GUI view 2700 may comprise a GUI element 2702 which is a main view presenting an electronic document 2704 and text information 2706 for the electronic document 2706, similar to the GUI view 2600. In addition, the GUI view 2700 may include a GUI element 2722 that is a sub-view for a document summary 2712 of the entire text information 2706 of the electronic document 2704, such as an abstractive summary 148. The GUI element 2722 may replace the AI sub-view when the arrow icon for the GUI element 2612 is activated by a user. The GUI element 2722 further includes a GUI element 2714 that presents text information “Was this helpful?” with associated thumbs up and thumbs down icons that when selected by a user gives user feedback for the document summary 2712."; A graphical user interface that includes views of defined search queries, views of document summaries, and views of feedback icons, reads on collecting feedback data in real-time for each of the at least one inquiry, wherein the feedback data includes feedback information that corresponds to the at least one solution, the at least one recommended action, and the user input.).

Regarding claim 6, Hudetz discloses the method as claimed in claim 1. Hudetz further discloses: wherein the at least one recommended action includes at least one from among an editorial action that relates to recommended phrasing based on a predetermined style guide, a proof-reading action that identifies a plurality of transcription errors, and a summarization action that outlines the at least one topic (Paragraph 0050, lines 1-17, "While semantic searching provides clear technical advantages over lexical searches, semantic search by itself may not provide a user, such as a legal representative or business person, with a clear understanding of the entire context of the information for which they are searching.
Consequently, as an addition or alternative, the AI/ML techniques are designed to implement a generative artificial intelligence (AI) platform that uses a large language module (LLM) to assist in contract management. A combination of both semantic search capabilities with a short summary of the relevant information based on a search query provides an optimal solution. This combination provides an overview of the information and highlights it in the agreement to make sure none of the details are missed. A user may use the semantic search capability to quickly locate relevant information and then use the summarization to get a clear understanding of the details."; Generating a summary of the relevant information based on a search query reads on the at least one recommended action including a summarization action that outlines the at least one topic.).

Regarding claim 7, Hudetz discloses the method as claimed in claim 1. Hudetz further discloses: wherein the at least one solution is determined by using the at least one model based on the aggregated information and user historical data, the user historical data including at least one from among aggregated historical information from a plurality of users and personal historical information from a user (Paragraph 0083, lines 12-21, "The search manager 124 may search a document index of contextualized embeddings for the electronic document 142 with the search vector. Each contextualized embedding may comprise a vector representation of a sequence of words in the electronic document that includes contextual information for the sequence of words. The search process may produce a set of search results 146. The search results 146 may include a set of candidate document vectors that are semantically similar to the search vector of the search query 144."; Paragraph 0090, lines 1-8, "FIG. 3 illustrates an artificial intelligence architecture 300 suitable for use by the search manager 124 of the server device 102.
The artificial intelligence architecture 300 is an example of a system suitable for implementing various artificial intelligence (AI) techniques and/or machine learning (ML) techniques to perform various document management tasks on behalf of the various devices of the systems 100, 200."; Paragraph 0098, lines 1-3, "As depicted in FIG. 3, the artificial intelligence architecture 300 includes a set of data sources 302 to source data 304 for the artificial intelligence architecture 300."; Paragraph 0099, lines 1-4, "The data sources 302 may source difference types of data 304. For instance, the data 304 may comprise structured data from relational databases, such as customer profiles, transaction histories, or product inventories."; The artificial intelligence architecture using a set of search results reads on determining a solution by using the at least one model based on the aggregated information, and the artificial intelligence architecture using data sources, where the data sources include transaction histories, reads on determining a solution by using the at least one model based on the personal historical information from a user.).

Regarding claim 8, Hudetz discloses the method as claimed in claim 1. Hudetz further discloses: initiating, by the at least one processor, at least one function to modify the information that corresponds to the at least one topic in the at least one source based on the at least one solution, wherein the at least one function includes at least one from among a generation function, an update function, and a delete function (Paragraph 0050, lines 1-17, "While semantic searching provides clear technical advantages over lexical searches, semantic search by itself may not provide a user, such as a legal representative or business person, with a clear understanding of the entire context of the information for which they are searching.
Consequently, as an addition or alternative, the AI/ML techniques are designed to implement a generative artificial intelligence (AI) platform that uses a large language module (LLM) to assist in contract management. A combination of both semantic search capabilities with a short summary of the relevant information based on a search query provides an optimal solution. This combination provides an overview of the information and highlights it in the agreement to make sure none of the details are missed. A user may use the semantic search capability to quickly locate relevant information and then use the summarization to get a clear understanding of the details."; Generating a summary of the relevant information based on a search query reads on initiating a function to modify the information that corresponds to the at least one topic in the at least one source based on the at least one solution, wherein the at least one function is a generation function.).

Regarding claim 9, Hudetz discloses the method as claimed in claim 1. Hudetz further discloses: the at least one model includes at least one from among a large language model, a deep learning model, a neural network model, a natural language processing model, a machine learning model, a mathematical model, and a process model (Paragraph 0054, lines 1-14, "The method may further include sending a natural language generation (NLG) request to a generative artificial intelligence (AI) model. The generative AI model may comprise a machine learning model that implements a large language model (LLM) to support natural language processing (NLP) operations, such as natural language understanding (NLU), natural language generation (NLG), and other NLP operations. The NLG request may request an abstractive summary of document content for a subset of candidate document vectors from the set of candidate document vectors. The abstractive summary may comprise a natural language representation of the human language.
The method may include receiving a NLG response with the abstractive summary from the generative AI model."; A machine learning model that implements a large language model (LLM) to support natural language processing (NLP) operations reads on a model including a large language model and a natural language processing model.).

Regarding claim 10, arguments analogous to claim 1 are applicable. In addition, Hudetz discloses a computing device (Paragraph 0064, lines 1-3, “As depicted in FIG. 1, the system 100 may comprise a server device 102 communicatively coupled to a set of client devices 112 via a network 114.”) configured to implement an execution of a method for facilitating content lifecycle management via artificial intelligence, the computing device comprising: a processor (Paragraph 0057, lines 6-8, "The one or more processors can be internal or external to the apparatus and can execute at least a part of the software or firmware application."); a memory (Paragraph 0067, lines 1-5, “The memory 106 may store a set of software components, such as computer executable instructions, that when executed by the processing circuitry 104, causes the processing circuitry 104 to implement various operations for an electronic document management platform.”); and a communication interface coupled to each of the processor and the memory (Paragraph 0269, lines 1-5, “In some example embodiments, one or more components of the system 1800 may execute one or more applications, such as software applications, that enable, for example, network communications with one or more components of system 1800 and transmit and/or receive data.”), wherein the processor is configured to perform the steps of claim 1.

Regarding claim 11, arguments analogous to claim 2 are applicable. Regarding claim 12, arguments analogous to claim 3 are applicable. Regarding claim 15, arguments analogous to claim 6 are applicable. Regarding claim 16, arguments analogous to claim 7 are applicable.
Regarding claim 17, arguments analogous to claim 8 are applicable. Regarding claim 18, arguments analogous to claim 9 are applicable.

Regarding claim 19, arguments analogous to claim 1 are applicable. In addition, Hudetz discloses a non-transitory computer readable storage medium storing instructions (Paragraph 0325, lines 1-9, “Apparatus 3000 may comprise any non-transitory computer-readable storage medium 3002 or machine-readable storage medium, such as an optical, magnetic or semiconductor storage medium. In various embodiments, apparatus 3000 may comprise an article of manufacture or a product. In some embodiments, the computer-readable storage medium 3002 may store computer executable instructions with which circuitry can execute.”) for facilitating content lifecycle management via artificial intelligence, the storage medium comprising executable code which, when executed by a processor (Paragraph 0057, lines 6-8, "The one or more processors can be internal or external to the apparatus and can execute at least a part of the software or firmware application."), causes the processor to perform the steps of claim 1.

Regarding claim 20, arguments analogous to claim 2 are applicable.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Hudetz in view of Menon et al.
(US Patent No. 11,855,934), hereinafter Menon.

Regarding claim 4, Hudetz discloses the method as claimed in claim 3, but does not specifically disclose further comprising: determining, by the at least one processor, whether at least one data inconsistency exists in the response based on the collected feedback data, the at least one data inconsistency corresponding to a data point in the response; updating, by the at least one processor, the response by removing the data point when the corresponding at least one data inconsistency is determined; and training, by the at least one processor, the at least one model based on the updated response.

Menon teaches: determining, by the at least one processor, whether at least one data inconsistency exists in the response based on the collected feedback data, the at least one data inconsistency corresponding to a data point in the response (Column 5, lines 14-24, "The reinforcement learning at operation 125 can convert the detected answers and feedback to the knowledge of the ML models to train the ML models. Every suggestion, option, answer that customers entered in a conversation will be captured in real-time by the RL mechanism as a new pattern to learn and train. As a result, the chatbot can understand new answers, suggestions, options from users in subsequent conversations, and, in turn, provide more meaningful responses to the users. In this way, the chatbot corrects responses/recommendations with user feedback as depicted at operation 130."; Column 10, line 50 - Column 11, line 2, "At operation 512, feedback analyzer 404 performs sentiment analysis on the conversation lines in the tail end of the conversation to identify negative and positive sentiments or parts. If feedback analyzer 404 determines only positive sentiment(s) from the conversation lines, the recommendation presented by a chatbot agent to a user should not be changed.
Therefore, next time when a user starts a conversation with the same or similar intent or goal, feedback analyzer 404 would notify the chatbot agent to regenerate the option with positive sentiment. However, if feedback analyzer 404 determines that the conversation lines in the tail end contain negative sentiment(s), this feedback will be used through reinforcement learning to change (i.e., self-correct) the response/recommendation provided by the chatbot agent. In some embodiments, one or more of the sequence of questions, the way that questions are phrased, the sequence of options in a recommendation, the options provided in a recommendation, or other factors related to a recommendation (e.g., patterns, formats) can be changed to self-correct a recommendation/response."; Determining that conversation lines contain negative sentiment based on user feedback reads on determining at least one data inconsistency exists in the response based on the collected feedback data, the at least one data inconsistency corresponding to a data point in the response.); updating, by the at least one processor, the response by removing the data point when the corresponding at least one data inconsistency is determined (Column 10, line 50 - Column 11, line 2, "At operation 512, feedback analyzer 404 performs sentiment analysis on the conversation lines in the tail end of the conversation to identify negative and positive sentiments or parts. If feedback analyzer 404 determines only positive sentiment(s) from the conversation lines, the recommendation presented by a chatbot agent to a user should not be changed. Therefore, next time when a user starts a conversation with the same or similar intent or goal, feedback analyzer 404 would notify the chatbot agent to regenerate the option with positive sentiment. 
However, if feedback analyzer 404 determines that the conversation lines in the tail end contain negative sentiment(s), this feedback will be used through reinforcement learning to change (i.e., self-correct) the response/recommendation provided by the chatbot agent. In some embodiments, one or more of the sequence of questions, the way that questions are phrased, the sequence of options in a recommendation, the options provided in a recommendation, or other factors related to a recommendation (e.g., patterns, formats) can be changed to self-correct a recommendation/response."; Changing the response provided by the chatbot agent by determining that conversation lines contain negative sentiment based on user feedback reads on updating the response by removing the data point when the corresponding at least one data inconsistency is determined.); and training, by the at least one processor, the at least one model based on the updated response (Column 10, line 50 - Column 11, line 2, "At operation 512, feedback analyzer 404 performs sentiment analysis on the conversation lines in the tail end of the conversation to identify negative and positive sentiments or parts. If feedback analyzer 404 determines only positive sentiment(s) from the conversation lines, the recommendation presented by a chatbot agent to a user should not be changed. Therefore, next time when a user starts a conversation with the same or similar intent or goal, feedback analyzer 404 would notify the chatbot agent to regenerate the option with positive sentiment. However, if feedback analyzer 404 determines that the conversation lines in the tail end contain negative sentiment(s), this feedback will be used through reinforcement learning to change (i.e., self-correct) the response/recommendation provided by the chatbot agent. 
In some embodiments, one or more of the sequence of questions, the way that questions are phrased, the sequence of options in a recommendation, the options provided in a recommendation, or other factors related to a recommendation (e.g., patterns, formats) can be changed to self-correct a recommendation/response."; Using reinforcement learning to change the response provided by the chatbot agent by determining that conversation lines contain negative sentiment based on user feedback reads on training the at least one model based on the updated response.).

Menon is considered to be analogous to the claimed invention because it is in the same field of natural language response generation systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hudetz to incorporate the teachings of Menon to determine that conversation lines contain negative sentiment based on user feedback and to use reinforcement learning to change the response provided by the chatbot agent accordingly. Doing so would allow for improving the efficiency of communication or information search and discovery using chatbot conversations (Menon; Column 3, line 64 - Column 4, line 1).

Regarding claim 13, arguments analogous to claim 4 are applicable.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Hudetz in view of De Luis Balaguer et al. (US Patent Application Publication No. 2025/0245215), hereinafter De Luis Balaguer.

Regarding claim 5, Hudetz discloses the method as claimed in claim 1, but does not specifically disclose: wherein the at least one recommended action includes at least one automatically generated prompt that represents the at least one topic, the at least one automatically generated prompt including a new phrasing that is different than the corresponding at least one inquiry.
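The sentiment-driven self-correction loop quoted above from Menon can be sketched roughly as follows. This is an editor's illustration only: the keyword lexicon and function names are invented stand-ins, whereas Menon describes trained sentiment analysis and reinforcement learning, not a lookup table.

```python
NEGATIVE_CUES = {"wrong", "unhelpful", "no", "bad"}  # invented toy lexicon

def tail_sentiment(conversation_tail):
    # Stand-in for sentiment analysis of the conversation's tail end:
    # flag negative feedback by simple keyword lookup.
    words = {w.strip(".,!?").lower() for line in conversation_tail for w in line.split()}
    return "negative" if words & NEGATIVE_CUES else "positive"

def self_correct(recommendation, conversation_tail, alternatives):
    # Positive tail: keep the recommendation for similar future intents.
    # Negative tail: swap in an alternative, mimicking the reinforcement
    # step that changes option order, phrasing, or content.
    if tail_sentiment(conversation_tail) == "positive" or not alternatives:
        return recommendation
    return alternatives[0]
```

For example, a tail of ["that was unhelpful"] flips the recommendation to the first alternative, while a tail of ["great, thanks!"] leaves it unchanged.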
De Luis Balaguer teaches: wherein the at least one recommended action includes at least one automatically generated prompt that represents the at least one topic, the at least one automatically generated prompt including a new phrasing that is different than the corresponding at least one inquiry (Paragraph 0029, lines 1-17, "FIG. 2 also provides an example of a custom prompt 200. The system of this disclosure does not pass the text of the natural language query 102 directly to the LLM. The query 102 provided by the user (or another computer system) is processed to generate a custom prompt 200 that is provided to the LLM. The custom prompt 200 guides the LLM to use database look-up tools provided to answer the natural language query 102. Prompt engineering, or intentional design of a string of natural language input to elicit a specific type of response from a LLM, can significantly affect the output of an LLM. The custom prompt 200 is generated from the natural language query 102 and any additional queries 106 to create an input that will cause the LLM to access a database and retrieve the desired information. For example, the custom prompt 200 may rephrase the natural language query 102, the additional query 106, and responses from the LLM into a single prompt."; Generating large language model prompts that rephrase the natural language query reads on automatically generated prompt that represents the at least one topic, the at least one automatically generated prompt including a new phrasing that is different than the corresponding at least one inquiry.).

De Luis Balaguer is considered to be analogous to the claimed invention because it is in the same field of natural language response generation systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hudetz to incorporate the teachings of De Luis Balaguer to generate large language model prompts that rephrase the natural language query. Doing so would allow for performing information retrieval from a database using natural language queries and refining or modifying queries through a conversational interaction (De Luis Balaguer; Paragraph 0003, lines 1-9).

Regarding claim 14, arguments analogous to claim 5 are applicable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Neervannan et al. (US Patent No. 11,861,148)
Roy et al. (US Patent No. 11,120,229)
Eggebraaten et al. (US Patent No. 9,400,841)
Nagaraj et al. (US Patent Application Publication No. 2025/0245084)
Khosla et al. (US Patent Application Publication No. 2025/0005057)
Yang et al. (US Patent Application Publication No. 2020/0210524)
Relangi et al. (US Patent Application Publication No. 2019/0222540)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Boggs, whose telephone number is (571) 272-2968. The examiner can normally be reached M-F 8:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at (571) 272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES BOGGS/
Examiner, Art Unit 2657
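For illustration, the query-rephrasing custom-prompt construction described in the De Luis Balaguer excerpt can be sketched as follows. This is a minimal, assumption-laden sketch: the prompt template, tool names, and function signature are hypothetical, not the publication's actual implementation:

```python
# Minimal sketch of building a "custom prompt" that rephrases a natural
# language query before it reaches an LLM, loosely following the
# De Luis Balaguer excerpt quoted in the Office Action. The template
# and the tool names are hypothetical.

def build_custom_prompt(query, additional_queries=(), tools=("sql_lookup",)):
    """Combine the user's query, any follow-up queries, and the available
    database look-up tools into a single engineered prompt string."""
    lines = [
        "You may answer only by using these database tools: "
        + ", ".join(tools) + ".",
        # Rephrase the original question as a declarative request.
        "Rephrased request: " + query.strip().rstrip("?") + ".",
    ]
    for q in additional_queries:
        lines.append("Additional constraint: " + q.strip())
    return "\n".join(lines)

prompt = build_custom_prompt(
    "What were Q3 sales?", additional_queries=["Limit to EMEA region"]
)
print(prompt)
```

The point the excerpt makes is that the user's query is never passed to the model verbatim; a single engineered prompt is assembled from the query, any additional queries, and the tool context, which is exactly what the combining step above models.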

Prosecution Timeline

Mar 15, 2024
Application Filed
Jan 14, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586600
Streaming Vocoder
2y 5m to grant Granted Mar 24, 2026
Patent 12573406
VOICE AUTHENTICATION BASED ON ACOUSTIC AND LINGUISTIC MACHINE LEARNING MODELS
2y 5m to grant Granted Mar 10, 2026
Patent 12572752
DYNAMIC CONTENT GENERATION METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12562170
BIOMETRIC AUTHENTICATION DEVICE, BIOMETRIC AUTHENTICATION METHOD, AND RECORDING MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12554931
Method and System of Improving Communication Skills for High Client Conversation Rate
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
99%
With Interview (+38.8%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
