Prosecution Insights
Last updated: April 19, 2026
Application No. 18/644,436

SYSTEM AND METHOD FOR GENERATING DYNAMIC CONVERSATIONAL AI EXPERIENCES USING LARGE LANGUAGE MODELS AND DECISIONING SYSTEMS

Non-Final OA: §101, §103
Filed: Apr 24, 2024
Examiner: SINGH, SATWANT K
Art Unit: 2653
Tech Center: 2600 (Communications)
Assignee: Verizon Patent and Licensing Inc.
OA Round: 1 (Non-Final)

Grant probability: 90% (Favorable)
Expected OA rounds: 1-2
Estimated time to grant: 2y 6m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 90%, above average (707 granted / 788 resolved; +27.7% vs TC avg)
Interview lift: +9.7% for resolved cases with interview (moderate, roughly +10% lift)
Typical timeline: 2y 6m avg prosecution; 13 applications currently pending
Career history: 801 total applications across all art units

Statute-Specific Performance

§101: 20.2% (-19.8% vs TC avg)
§103: 26.4% (-13.6% vs TC avg)
§102: 34.8% (-5.2% vs TC avg)
§112: 3.0% (-37.0% vs TC avg)

Baseline: Tech Center average estimate. Based on career data from 788 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-10, 12-17, and 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The independent claims 1, 8, and 15 relate to the statutory categories of method/process and machine/apparatus. The independent claims 1, 8, and 15 recite “…receiving a natural language question from a user; determining, using a large language model, whether the natural language question is a transactional question or an informational question; generating, using a first generative artificial intelligence (AI) model, a first response to the natural language question when the natural language question is an informational question; generating, using a transaction generative AI model that interfaces with a decisioning system based on contracts of the decisioning system, a second response to the natural language question when the natural language question is a transactional question, wherein the decisioning system contracts define business logic, rules, and decision-making processes for handling transactional requests; generating, using a sentiment-based response generator, a third response based on one of the first response or the second response and a sentiment of the natural language question; and presenting the third response to the user”. The limitations of claims 1, 8, and 15 of “…receiving…”, “…determining…”, “…generating…”, “…generating…”, “…generating…”, and “…presenting…”, as drafted, cover mental activity. 
More specifically, for claim 1, a human, after receiving a question from another person, can determine whether the question is asking about information about a topic or about financial transactions dealing with business contracts, decisions, and rules. The nature of the question determines the response. Once the response to the question is determined (informational or transactional), the sentiment or emotion behind the original question is determined, and another response is determined based on the original response and that sentiment/emotion.

This judicial exception is not integrated into a practical application. In particular, claims 8 and 15 recite the additional elements of “processor” and “storage medium”, which are recited generally in the specification. For example, in paragraphs [0018] and [0144] of the as filed specification, there is a description of using a general purpose operating system. Additionally, claims 1, 8, and 15 recite “large language model” and “generative artificial intelligence (AI) model”. In paragraphs [0025]-[0026] of the as filed specification, there is a description of simulating a conversation using generative AI and LLMs. However, no functional specifics on how the models are trained are recited in the specification. A general operating system is being used for simulating the conversation. Therefore, these additional elements are interpreted as being applied similarly to the general operating system recited above. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a computer as a general computer is noted. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.

With respect to claims 2, 9, and 16, the claims relate to generating a new, compassionate response based on the original question and the sentiment/emotion behind the question, and confirming that the response is acceptable. The claims relate to a mental activity of showing compassion when responding to someone’s question. No additional limitations are present.

With respect to claims 3, 10, and 17, the claims relate to generating a response to a transactional question based on the body of the question, dialog with the person asking the question so as to determine what exactly they want, etc. The claims relate to a mental activity of gathering the necessary information in order to provide the correct information. No additional limitations are present.

With respect to claims 5, 12, and 19, the claims relate to retrieving the information necessary to generate a response to an informational question. The claims relate to a mental activity of collecting the information necessary in order to provide the correct response. No additional limitations are present.

With respect to claims 6, 13, and 20, the claims relate to accessing logs, databases, records, etc. to gather the information necessary to provide the correct response and updating the information being used based on what is being collected. The claims relate to a mental activity of gathering necessary information and updating it as it changes. No additional limitations are present.

With respect to claims 7 and 14, the claims relate to having a dialog to set up and determine goals based on one’s knowledge about the information being requested. Action items are determined in order to meet the goals, and the performance is evaluated. The goals are updated based on feedback from the evaluation. The claims relate to a mental activity of setting goals and evaluating whether they have been met. No additional limitations are present.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 6, 8-10, 12, 13, 15-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Galitsky (US 10,817,670) in view of Kanagovi et al. (US 2025/0245670).

Regarding Claim 1, Galitsky teaches a method comprising: receiving a natural language question from a user (For example, users can interact with the Intelligent Bots Platform through a conversational interaction. This interaction, also called the conversational user interface (UI), is a dialog between the end user and the chatbot, just as between two human beings) (col. 
14, lines 9-13); determining, whether the natural language question is a transactional question or an informational question (It could be as simple as the end user saying “Hello” to the chatbot and the chatbot responding with a “Hi” and asking the user how it can help, or it could be a transactional interaction in a banking chatbot, such as transferring money from one account to the other, or an informational interaction in a HR chatbot, such as checking for vacation balance, or asking an FAQ in a retail chatbot, such as how to handle returns) (col. 14, lines 13-20); generating, using a first generative artificial intelligence (AI) model (A chatbot (which may also be called intelligent bots or virtual assistant, etc.) is an “intelligent” machine that, for example, replaces human B and to various degrees mimics the conversation between two humans) (col. 13, lines 62-65), a first response to the natural language question when the natural language question is an informational question (Similarly, when an autonomous agent receives a request from a person to share knowledge about a particular item, the search result should contain an intent to receive a recommendation) (col. 15, lines 14-17); generating, using a transaction generative AI model (A chatbot (which may also be called intelligent bots or virtual assistant, etc.) is an “intelligent” machine that, for example, replaces human B and to various degrees mimics the conversation between two humans) (col. 13, lines 62-65) that interfaces with a decisioning system based on contracts of the decisioning system, a second response to the natural language question when the natural language question is a transactional question (For example, when an autonomous agent receives an indication from a person that the person desires to sell an item with certain features, the autonomous agent should provide a search result that not only contains the features but also indicates an intent to buy) (col. 
15, lines 9-13), wherein the decisioning system contracts define business logic, rules, and decision-making processes for handling transactional requests (Response: “The property tax is assessed on property that you own. Just because you chose to not register it does not mean that you don't own it, so the tax is not refundable. Even if you have not titled the vehicle yet, you still own it within the boundaries of the tax district, so the tax is payable. Note that all states give you a limited amount of time to transfer title and pay the use tax. If you apply late, there will be penalties on top of the normal taxes and fees. You don't need to register it at the same time, but you absolutely need to title it within the period of time stipulated in state law.”) (col. 15, lines 38-48); generating, using a sentiment-based response generator, a third response based on one of the first response or the second response and a sentiment of the natural language question (A chatbot can use the information in Table 4 to personalize responses or tailor search results or opinionated data to user expectations. For example, a chatbot can consider political viewpoint when providing news to a user) (col. 49, lines 7-18); and presenting the third response to the user (A chatbot can use the information in Table 4 to personalize responses or tailor search results or opinionated data to user expectations. For example, a chatbot can consider political viewpoint when providing news to a user) (col. 49, lines 7-18). Galitsky fails to teach a method, using a large language model. Kanagovi et al teaches a method, using a large language model (an insights model (e.g., an LLM that includes a neural network with various parameters, trained on large quantities of unlabeled text using self-supervised learning or semi-supervised learning)) (pages 13-14, paragraph [0125]). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Galitsky with the teachings of Kanagovi to help improve natural language processing by using a large language model to determine the type of question being asked.

Regarding Claim 2, Galitsky teaches the method, wherein generating the third response using the sentiment-based response generator comprises: receiving the third response and a query sentiment of the natural language question (A chatbot can use the information in Table 4 to personalize responses or tailor search results or opinionated data to user expectations. For example, a chatbot can consider political viewpoint when providing news to a user) (col. 49, lines 7-18); generating a new response based on the third response and the query sentiment using an empathy-driven natural language generation model (In this manner, the aspects described herein enable a chatbot to behave like a companion, by showing empathy and ensuring that the user does not feel irritated by the lack of common ground with the chatbot) (col. 49, lines 7-18); validating a syntactic correctness of the new response using a syntactic parser (In one aspect of the present invention, two Rhetorical Structure Theory (RST) discourse parsers are used: CoreNLPProcessor which relies on constituent syntax, and FastNLPProcessor which uses dependency syntax) (col. 12, lines 64-67) and (In addition, the above two discourse parsers, i.e., CoreNLPProcessor and FastNLPProcessor use Natural Language Processing (NLP) for syntactic parsing) (col. 13, lines 4-6); validating a semantic coherence of the new response using a semantic parser (By using both rhetoric relations and communicative actions, aspects described herein can correctly recognize valid request-response pairs) (col. 
15, lines 6-7); and using the new response as the third response if the syntactic correctness and semantic coherence are valid (To do so, aspects correlate the syntactic structure of a question with that of an answer. By using the structure, a better answer can be determined) (col. 15, lines 7-8).

Regarding Claim 3, Galitsky teaches the method, wherein generating the second response using the transaction generative AI model comprises: processing the natural language question using a generative AI model (A chatbot (which may also be called intelligent bots or virtual assistant, etc.) is an “intelligent” machine that, for example, replaces human B and to various degrees mimics the conversation between two humans) (col. 13, lines 62-65); extracting entities from the natural language question using natural language processing (NLP) entity extraction (Natural language processing (NLP) and machine learning (ML) algorithms combined with other approaches can be used to classify end user intent. An intent at a high level is what the end user would like to accomplish (e.g., get account balance, make a purchase)) (col. 14, lines 20-24) and (Request: “My husbands' grandmother gave him his grandfather's truck. She signed the title over but due to my husband having unpaid fines on his license, he was not able to get the truck put in his name. I wanted to put in my name and paid the property tax and got insurance for the truck. By the time it came to sending off the title and getting the tag, I didn't have the money to do so. Now, due to circumstances, I am not going to be able to afford the truck. I went to the insurance place and was refused a refund. I am just wondering that since I am not going to have a tag on this truck, is it possible to get the property tax refunded?”) (col. 15, lines 27-37); generating the second response using an output of the generative AI model and the entities (Response: “The property tax is assessed on property that you own. Just because you chose to not register it does not mean that you don't own it, so the tax is not refundable. Even if you have not titled the vehicle yet, you still own it within the boundaries of the tax district, so the tax is payable. Note that all states give you a limited amount of time to transfer title and pay the use tax. If you apply late, there will be penalties on top of the normal taxes and fees. You don't need to register it at the same time, but you absolutely need to title it within the period of time stipulated in state law.”) (col. 15, lines 38-48); incorporating flow-specific prompts and persona instructions into the second response (For example, users can interact with the Intelligent Bots Platform through a conversational interaction. This interaction, also called the conversational user interface (UI), is a dialog between the end user and the chatbot, just as between two human beings) (col. 14, lines 9-13); and validating the second response (By using both rhetoric relations and communicative actions, aspects described herein can correctly recognize valid request-response pairs. To do so, aspects correlate the syntactic structure of a question with that of an answer. By using the structure, a better answer can be determined) (col. 15, lines 3-8) using a semantic and syntactic parser (In one aspect of the present invention, two Rhetorical Structure Theory (RST) discourse parsers are used: CoreNLPProcessor which relies on constituent syntax, and FastNLPProcessor which uses dependency syntax) (col. 12, lines 64-67) and (In addition, the above two discourse parsers, i.e., CoreNLPProcessor and FastNLPProcessor use Natural Language Processing (NLP) for syntactic parsing) (col. 13, lines 4-6). 
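For orientation, the claim 1 pipeline mapped above (classify an incoming question as transactional or informational, generate a branch-specific response, then adjust it for the question's sentiment) can be sketched as follows. This is purely an illustration of the claimed flow, not the applicant's implementation; every function name is hypothetical, and trivial keyword checks stand in for the LLM classifier, the two generative models, and the sentiment analysis.

```python
# Hypothetical sketch of the claim 1 flow. Keyword stubs stand in for the
# LLM classifier, the generative AI models, and the sentiment analyzer.

TRANSACTIONAL_HINTS = ("transfer", "pay", "buy", "cancel", "refund")

def classify(question: str) -> str:
    """Stand-in for the LLM that labels a question transactional/informational."""
    q = question.lower()
    return "transactional" if any(h in q for h in TRANSACTIONAL_HINTS) else "informational"

def informational_response(question: str) -> str:
    """Stand-in for the first generative AI model (the 'first response')."""
    return f"Here is some information about: {question}"

def transactional_response(question: str, contracts: dict) -> str:
    """Stand-in for the transaction generative AI model; `contracts` represents
    the decisioning-system contracts (business logic, rules, decision processes)."""
    rule = contracts.get("default", "standard handling")
    return f"Transaction handled per rule '{rule}' for: {question}"

def sentiment_adjust(response: str, question: str) -> str:
    """Stand-in for the sentiment-based response generator (the 'third response')."""
    upset = any(w in question.lower() for w in ("angry", "frustrated", "upset"))
    prefix = "I'm sorry for the trouble. " if upset else ""
    return prefix + response

def answer(question: str, contracts: dict) -> str:
    kind = classify(question)
    base = (transactional_response(question, contracts)
            if kind == "transactional"
            else informational_response(question))
    return sentiment_adjust(base, question)  # presented to the user
```

For example, `answer("I'm frustrated, please refund my order", {"default": "refund-policy-v2"})` routes to the transactional branch and prepends an empathetic prefix, mirroring the sentiment-based third response recited in the claim.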
Regarding Claim 5, Galitsky teaches the method, wherein generating the first response using the first generative AI model comprises: retrieving relevant information from a document data source and a web data source based on the natural language question using a retrieval step (to facilitate data collection, we designed a crawler which searched a specific set of sites, downloaded web pages, extracted candidate text and verified that it adhered to a question-or-request vs response format) (col. 33, lines 15-20); and synthesizing the first response using a retrieval-augmented generative AI model based on the retrieved information and customer dynamic data (Therefore, based on the phrases uttered by the user in the chatbot, these are mapped to a specific and discrete use case or unit of work, e.g., check balance, transfer money and track spending are all “use cases” that the chatbot should support and be able to work out which unit of work should be triggered from the free text entry that the end user types in a natural language) (col. 14, lines 26-33).

Regarding Claim 6, Galitsky teaches the method, further comprising: accessing a source of knowledge containing chat logs, transcripts, and transaction records (The answer could take the form of, for example, in some aspects, the AI constructing an answer from its extensive knowledge base(s) or from matching the best existing answer from searching the internet or intranet or other publically/privately available data sources) (col. 14, lines 47-52). Galitsky fails to teach the method, training the transaction generative AI model using a supervised learning approach and a reinforcement learning approach based on the source of knowledge; and updating the transaction generative AI model based on the training. 
Kanagovi et al. teaches the method, training the transaction generative AI model using a supervised learning approach ((i) generate, train, update, and implement an insights model (e.g., an LLM that includes a neural network with various parameters, trained on large quantities of unlabeled text using self-supervised learning or semi-supervised learning)) (pages 13-14, paragraph [0125]) and a reinforcement learning approach based on the source of knowledge ((iv) obtain (from the database (e.g., 102, FIG. 1)) and use historical CTAs (and their texts, see e.g., FIG. 2.2, that indicate historical customer engagements) to fine-tune the insights model) (pages 13-14, paragraph [0125]); and updating the transaction generative AI model based on the training ((i) generate, train, update, and implement an insights model (e.g., an LLM that includes a neural network with various parameters, trained on large quantities of unlabeled text using self-supervised learning or semi-supervised learning)) (pages 13-14, paragraph [0125]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Galitsky with the teachings of Kanagovi to help improve natural language processing by training and updating the chatbot with the latest knowledge.

Regarding Claim 8, Galitsky teaches a non-transitory computer-readable storage medium for tangibly storing computer program instructions (Storage subsystem 4618 may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some aspects) (col. 79, lines 9-19) capable of being executed by a computer processor (Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem 4618) (col. 
17, lines 9-19), the computer program instructions defining steps of: receiving a natural language question from a user (For example, users can interact with the Intelligent Bots Platform through a conversational interaction. This interaction, also called the conversational user interface (UI), is a dialog between the end user and the chatbot, just as between two human beings) (col. 14, lines 9-13); determining, whether the natural language question is a transactional question or an informational question (It could be as simple as the end user saying “Hello” to the chatbot and the chatbot responding with a “Hi” and asking the user how it can help, or it could be a transactional interaction in a banking chatbot, such as transferring money from one account to the other, or an informational interaction in a HR chatbot, such as checking for vacation balance, or asking an FAQ in a retail chatbot, such as how to handle returns) (col. 14, lines 13-20); generating, using a first generative artificial intelligence (AI) model (A chatbot (which may also be called intelligent bots or virtual assistant, etc.) is an “intelligent” machine that, for example, replaces human B and to various degrees mimics the conversation between two humans) (col. 13, lines 62-65), a first response to the natural language question when the natural language question is an informational question (Similarly, when an autonomous agent receives a request from a person to share knowledge about a particular item, the search result should contain an intent to receive a recommendation) (col. 15, lines 14-17); generating, using a transaction generative AI model (A chatbot (which may also be called intelligent bots or virtual assistant, etc.) is an “intelligent” machine that, for example, replaces human B and to various degrees mimics the conversation between two humans) (col. 
13, lines 62-65) that interfaces with a decisioning system based on contracts of the decisioning system, a second response to the natural language question when the natural language question is a transactional question (For example, when an autonomous agent receives an indication from a person that the person desires to sell an item with certain features, the autonomous agent should provide a search result that not only contains the features but also indicates an intent to buy) (col. 15, lines 9-13), wherein the decisioning system contracts define business logic, rules, and decision-making processes for handling transactional requests (Response: “The property tax is assessed on property that you own. Just because you chose to not register it does not mean that you don't own it, so the tax is not refundable. Even if you have not titled the vehicle yet, you still own it within the boundaries of the tax district, so the tax is payable. Note that all states give you a limited amount of time to transfer title and pay the use tax. If you apply late, there will be penalties on top of the normal taxes and fees. You don't need to register it at the same time, but you absolutely need to title it within the period of time stipulated in state law.”) (col. 15, lines 38-48); generating, using a sentiment-based response generator, a third response based on one of the first response or the second response and a sentiment of the natural language question (A chatbot can use the information in Table 4 to personalize responses or tailor search results or opinionated data to user expectations. For example, a chatbot can consider political viewpoint when providing news to a user) (col. 49, lines 7-18); and presenting the third response to the user (A chatbot can use the information in Table 4 to personalize responses or tailor search results or opinionated data to user expectations. For example, a chatbot can consider political viewpoint when providing news to a user) (col. 
49, lines 7-18).

Galitsky fails to teach a non-transitory computer-readable storage medium, using a large language model. Kanagovi et al. teaches a non-transitory computer-readable storage medium, using a large language model (an insights model (e.g., an LLM that includes a neural network with various parameters, trained on large quantities of unlabeled text using self-supervised learning or semi-supervised learning)) (pages 13-14, paragraph [0125]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Galitsky with the teachings of Kanagovi to help improve natural language processing by using a large language model to determine the type of question being asked.

Claims 9 and 16 are rejected for the same reason as claim 2. Claims 10 and 17 are rejected for the same reason as claim 3. Claims 12 and 19 are rejected for the same reason as claim 5. Claims 13 and 20 are rejected for the same reason as claim 6.

Regarding Claim 15, Galitsky teaches a device comprising: a processor (Software (programs, code modules, instructions) that when executed by a processor provide the functionality described above may be stored in storage subsystem 4618) (col. 17, lines 9-19); and a storage medium for tangibly storing thereon program logic for execution by the processor (Storage subsystem 4618 may also provide a tangible computer-readable storage medium for storing the basic programming and data constructs that provide the functionality of some aspects) (col. 79, lines 9-19), the program logic comprising: logic, executed by the processor, for receiving a natural language question from a user (For example, users can interact with the Intelligent Bots Platform through a conversational interaction. This interaction, also called the conversational user interface (UI), is a dialog between the end user and the chatbot, just as between two human beings) (col. 
14, lines 9-13); logic, executed by the processor, for determining, whether the natural language question is a transactional question or an informational question (It could be as simple as the end user saying “Hello” to the chatbot and the chatbot responding with a “Hi” and asking the user how it can help, or it could be a transactional interaction in a banking chatbot, such as transferring money from one account to the other, or an informational interaction in a HR chatbot, such as checking for vacation balance, or asking an FAQ in a retail chatbot, such as how to handle returns) (col. 14, lines 13-20); logic, executed by the processor, for generating, using a first generative artificial intelligence (AI) model (A chatbot (which may also be called intelligent bots or virtual assistant, etc.) is an “intelligent” machine that, for example, replaces human B and to various degrees mimics the conversation between two humans) (col. 13, lines 62-65), a first response to the natural language question when the natural language question is an informational question (Similarly, when an autonomous agent receives a request from a person to share knowledge about a particular item, the search result should contain an intent to receive a recommendation) (col. 15, lines 14-17); logic, executed by the processor, for generating, using a transaction generative AI model (A chatbot (which may also be called intelligent bots or virtual assistant, etc.) is an “intelligent” machine that, for example, replaces human B and to various degrees mimics the conversation between two humans) (col. 
13, lines 62-65) that interfaces with a decisioning system based on contracts of the decisioning system, a second response to the natural language question when the natural language question is a transactional question (For example, when an autonomous agent receives an indication from a person that the person desires to sell an item with certain features, the autonomous agent should provide a search result that not only contains the features but also indicates an intent to buy) (col. 15, lines 9-13), wherein the decisioning system contracts define business logic, rules, and decision-making processes for handling transactional requests (Response: “The property tax is assessed on property that you own. Just because you chose to not register it does not mean that you don't own it, so the tax is not refundable. Even if you have not titled the vehicle yet, you still own it within the boundaries of the tax district, so the tax is payable. Note that all states give you a limited amount of time to transfer title and pay the use tax. If you apply late, there will be penalties on top of the normal taxes and fees. You don't need to register it at the same time, but you absolutely need to title it within the period of time stipulated in state law.”) (col. 15, lines 38-48); logic, executed by the processor, for generating, using a sentiment-based response generator, a third response based on one of the first response or the second response and a sentiment of the natural language question (A chatbot can use the information in Table 4 to personalize responses or tailor search results or opinionated data to user expectations. For example, a chatbot can consider political viewpoint when providing news to a user) (col. 49, lines 7-18); and logic, executed by the processor, for presenting the third response to the user (A chatbot can use the information in Table 4 to personalize responses or tailor search results or opinionated data to user expectations. 
For example, a chatbot can consider political viewpoint when providing news to a user) (col. 49, lines 7-18). Galitsky fails to teach a device, using a large language model. Kanagovi et al teaches a device, using a large language model (an insights model (e.g., an LLM that includes a neural network with various parameters, trained on large quantities of unlabeled text using self-supervised learning or semi-supervised learning)) (pages 13-14, paragraph [0125]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Galitsky with the teachings of Kanagovi to help improve natural language processing by using a large language model to determine the type of question being asked. Claims 4, 11 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Galitsky and Kanagovi et al as applied to claims 3, 10, and 17 above, and further in view of Ibrahim et al. (US 2025/0238968). Regarding Claim 4, Galitsky and Kanagovi et al fail to teach the method, wherein the generative AI model comprises: an embedding layer that converts input text into dense vector representations; one or more transformer encoders, each including a multi-head attention mechanism and a feed-forward network; and an output embedding layer. 
Ibrahim et al. teaches the method, wherein the generative AI model comprises: an embedding layer that converts input text into dense vector representations (the parameters of the generative AI model include an embedding matrix (for determining decoder input embedding vectors from first input tokens, for determining encoder input embedding vectors from second input tokens)) (page 23, paragraph [0180]); one or more transformer encoders, each including a multi-head attention mechanism and a feed-forward network (weights and offsets (for feed-forward neural networks of the generative AI model)) (page 23, paragraph [0180]); and an output embedding layer (and/or for determining predicted tokens from output embedding vectors) (page 23, paragraph [0180]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Galitsky and Kanagovi with the teachings of Ibrahim to improve natural language processing by using partially decompressed data so the volume of data can be significantly reduced. Claims 11 and 18 are rejected for the same reason as claim 4.

Allowable Subject Matter

Claims 7 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Cited Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Garg et al. (US 2024/0070434) discloses an information system providing a conversational knowledge base for responding to user queries. Nair et al. (US 2024/0303496) discloses exploiting domain-specific language characteristics for language model pretraining. Wright et al. (US 12,125,059) discloses training an artificial intelligence engine for most appropriate actions.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SATWANT K SINGH whose telephone number is (571)272-7468. The examiner can normally be reached Monday through Friday, 9:00 AM to 6:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D Shah, can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SATWANT K SINGH/
Primary Examiner, Art Unit 2653
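The independent-claim flow the rejection maps (classify a natural language question as transactional or informational with an LLM, route it to the matching generator, then rewrite the chosen response based on the question's sentiment) reduces to simple routing logic. In this sketch every classifier and generator is a keyword stub standing in for the models and decisioning-system contracts the claims recite; none of the names or rules come from the application or cited art:

```python
# Stand-in stubs: a real system would back these with an LLM, two
# generative AI models, and a decisioning system's business-rule contracts.
def classify_question(question: str) -> str:
    """LLM stand-in: label the question transactional or informational."""
    cues = ("buy", "sell", "pay", "order", "refund")
    return "transactional" if any(c in question.lower() for c in cues) else "informational"

def informational_generator(question: str) -> str:
    return f"Here is some background on: {question}"

def transactional_generator(question: str, contracts: dict) -> str:
    # `contracts` stands in for the decisioning system's rules
    rule = contracts.get("default", "route to a human agent")
    return f"Per policy ({rule}): handling '{question}'"

def sentiment_rewrite(response: str, question: str) -> str:
    """Sentiment-based generator stand-in: soften the tone if the user is upset."""
    upset = any(w in question.lower() for w in ("angry", "frustrated", "terrible"))
    return ("I'm sorry for the trouble. " if upset else "") + response

def answer(question: str, contracts: dict) -> str:
    kind = classify_question(question)
    draft = (transactional_generator(question, contracts)
             if kind == "transactional"
             else informational_generator(question))
    return sentiment_rewrite(draft, question)  # third response presented to the user

print(answer("I'm frustrated, I want a refund", {"default": "refunds within 30 days"}))
```

The point of the sketch is the routing structure, not the stubs: the classification step gates which generator runs, and the sentiment step always post-processes whichever response was produced.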

Prosecution Timeline

Apr 24, 2024
Application Filed
Mar 15, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602550
NATURAL LANGUAGE DRIVEN PLANNING WITH MACHINE LEARNING MODELS
2y 5m to grant · Granted Apr 14, 2026
Patent 12602411
Method for Collaborative Knowledge Base Development
2y 5m to grant · Granted Apr 14, 2026
Patent 12585881
NATURAL LANGUAGE PROCESSING SYSTEM, NATURAL LANGUAGE PROCESSING METHOD, AND NATURAL LANGUAGE PROCESSING PROGRAM
2y 5m to grant · Granted Mar 24, 2026
Patent 12587274
SATELLITE OPTIMIZATION MANAGEMENT SYSTEM BASED ON NATURAL LANGUAGE INPUT AND ARTIFICIAL INTELLIGENCE
2y 5m to grant · Granted Mar 24, 2026
Patent 12579368
System, device, and method to provide generalized knowledge routing utilizing machine learning to a user within the system
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+9.7%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 788 resolved cases by this examiner. Grant probability derived from career allow rate.
