Prosecution Insights
Last updated: April 19, 2026
Application No. 18/656,113

COMPUTING SYSTEM, METHOD, AND MEDIUM FOR PROCESSING CUSTOMER INQUIRIES USING SPEECH-TO-TEXT, LANGUAGE MODEL ANALYSIS, AND TEXT-TO-SPEECH SERVICES

Final Rejection — §103
Filed: May 06, 2024
Examiner: ZHANG, LESHUI
Art Unit: 2695
Tech Center: 2600 — Communications
Assignee: CDW LLC
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (719 granted / 928 resolved; +15.5% vs Tech Center average) — above average
Interview Lift: +36.0% (allowance rate with vs. without an interview, among resolved cases with an interview)
Typical Timeline: 2y 10m average prosecution; 47 applications currently pending
Career History: 975 total applications across all art units

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§103: 42.5% (+2.5% vs TC avg)
§112: 28.7% (-11.3% vs TC avg)
Based on career data from 928 resolved cases; TC averages are estimates.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This Office Action is in response to the claim amendment filed on February 20, 2026, in which claims 1-2, 6-7, and 8-21 were amended. Claims 1-21 are currently pending.

With respect to the objection to the drawings for formality issues related to claimed terms, as set forth in the previous Office Action, applicant's argument (see paragraphs 2-4, paragraphs 1-4 of page 8, and paragraphs 1-2 of page 9 of the Remarks filed on February 20, 2026) has been fully considered and found persuasive. The drawing objections are therefore withdrawn.

With respect to the objection to the specification for formality issues concerning claimed terms, applicant's argument (see paragraph 3 of page 9 of the Remarks) has been fully considered. Applicant pointed out that support can be found in specification paragraphs 60-67, where the claimed "semantic caching mechanism configured to summarize … key aspects of interactions for future …" reads on "summarizing and storing key aspects of interactions using a semantic caching mechanism … efficiently organizes and stores interaction summaries," etc. The argument is persuasive, and the specification objection is therefore withdrawn.

With respect to the interpretation of claim terms such as "a retrieval augmented generation RAG module configured to …," "a semantic caching mechanism configured to summarize …," and "an autonomous agent configured to …" as recited in claim 1 under 35 U.S.C. 112(f), applicant's argument (see paragraph 5 of page 9 and paragraphs 1-3 of page 10 of the Remarks) has been fully considered and found persuasive. As requested by applicant, those terms are interpreted according to their plain meaning without invoking 35 U.S.C. 112(f), and the claim interpretation under 35 U.S.C. 112(f) set forth in the previous Office Action is withdrawn.

With respect to the rejection of claims 1-21 under 35 U.S.C. 112(b), applicant's argument (see paragraphs 2-4 of page 11 and paragraphs 1-2 of page 12 of the Remarks) has been fully considered and is persuasive, and the rejection is withdrawn. As requested by applicant, "a human in the loop interface" is understood with reference to "the autonomous agent's processing of user interactions" (paragraph 2 of page 11).

With respect to the rejection of claims 1-21 under 35 U.S.C. 101, applicant's amendment and argument (see paragraph 4 of page 14, paragraphs 1-3 of page 15, and paragraphs 1-4 of page 16 of the Remarks) have been fully considered and are persuasive, and the rejection is withdrawn. The Office appreciates the explanation of the amendment and the analysis of the prior art; however, although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993) and MPEP 2145.
For example, the specification's statement that the human in the loop refers to an "interface/module" that specifically "facilitates escalations/assistance by a human agent as part of the overall autonomous contact center workflow …" would not be read from the specification into the claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-21 are rejected under 35 U.S.C. 103 as being unpatentable over Beaver (US 20220141335 A1) in view of Auffarth, Ben, et al. ("Generative AI with LangChain: Build large language model (LLM) apps with Python, ChatGPT, and other LLMs," December 1, 2023, pp. 1-360, https://sciendo.com/2/v2/download/book/9781835088364.pdf, cited in the IDS submitted on August 15, 2025; hereinafter Auffarth).

Claim 1: Beaver teaches an autonomous communication system (title and abstract, ln 1-11; the system of fig. 1, implemented by a computing device in fig. 6, including the method of automating text chat conversations in fig. 2, para 33, and, as part of that automation, training a language model in fig. 5, para 48) comprising: a language model (language model 120 in fig. 1, described as a collection of syntax, semantics, and grammar rules, para 26) configured to process and generate responses to user inputs (generating the response by consulting an external table of response rules, para 27, and based on analysis of chats and text conversations in real time, para 30); a module (part of active learning engine 130 comprising a training engine 135, as part of the automation of the text chat conversations, fig. 1) configured to enhance the language model's response generation (by retrieving a labeled training set from pool 470 to modify and update the language model so that the accuracy and efficiency of the NLU 115 comprising language model 120 are increased, para 28) by retrieving relevant information from a knowledge base (as discussed above, retrieving the training set from pool 470, with knowledge added through selected samples 440 and intervention by humans 450 in fig. 4, paras 44-46, and retrieving from knowledge bases of past input-response pairs, para 53); a semantic caching mechanism (a group of machine learning models forming another part of active learning engine 130 and training engines 135, para 44) configured to process (e.g., collecting and clustering, para 70) and store key aspects of interactions (the most informative samples, i.e., input-response pairs of the text chat conversations, added into labeled training set 470 through step 460 by the oracle or human 450 in fig. 4, para 46) for future reference by the language model (used for training models 480 in fig. 4, paras 45-46; input-response pairs occurring in interactions of user 102 with IVA 158 and agent 152 are also added into the unlabeled pool of text data at 510 in fig. 5, para 48); an autonomous agent (part of agent computing device 155 with engagement management engine 320 in fig. 1, including IVA 158) configured to manage and direct user interactions based on processed inputs and generated responses (performing a switch between agent 152 and IVA 158 among the conversations in queue 310 based on the intents and tasks of each conversation, para 41; intents are derived through the semantic processing of the language model applied to user inputs and based on the occurring input-response pairs used for training the language model, as part of the automation of text chat conversations discussed above); and a human in the loop interface (agent 152, comprising a representative, employee, associate, etc., para 19, or oracles 450, paras 52-53) configured to allow human intervention in the autonomous agent's processing of user interactions when necessary (more complex rules and actions implemented, as a necessary condition, based on texting from at least agent 152, para 23, e.g., handling particular intents by the human agent 152, para 37, and IVA 158 transferring the question to agent 152 if the derived intention does not match the user's request, steps 240-260 in fig. 2, para 47).

However, Beaver does not explicitly teach that the module is a retrieval augmented generation (RAG) module, that the semantic caching mechanism's processing includes summarizing key aspects of interactions, or that the language model is trained.

Auffarth teaches an analogous field of endeavor by disclosing an autonomous communication system (title and abstract, ln 1-* and fig. *) in which a retrieval augmented generation (RAG) module is disclosed (RAG with Tree-of-Thought, etc., section "The Future of Generative Models," p. 299) to enhance the language model's response generation by retrieving relevant information from a knowledge base (enhancing text generation by retrieving relevant information from sources, making generative AI more accessible and effective, section "Trends in Model Development," p. 306, and enhancing chatbots by grounding their responses with the aid of external evidence sources). Auffarth further teaches a large language model (LLM) that is trained (fine-tuning, i.e., further training a pre-trained LLM, para 2, p. 101) for the benefit of improving the performance of conversation automation (leading to more accurate and informative answers by retrieving relevant passages from corpora to condition the language model's generation process, section "Building a Chatbot like ChatGPT," p. 131), and a semantic caching mechanism (LangChain's ConversationSummaryMemory, section "Remembering conversation summaries," p. 164) disclosed to summarize and store key aspects of interactions (using the save_context method to save the interaction context with LangChain and semantic reasoning, section "Storing knowledge graphs," p. 164, and retrieving the summarized conversation history using the load_memory_variables method, p. 164) for the benefit of saving memory space (for extended conversations whose previous messages might exceed token limits, p. 164).
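For orientation, the RAG-and-summary-memory pattern that the action attributes to Auffarth can be sketched in a few lines of plain Python. This is an illustrative toy, not code from either cited reference: the keyword-overlap scorer, the KnowledgeBase and SummaryCache names, and the template "generation" step are all our own stand-ins (a real system would use embeddings and an actual LLM).

```python
# Toy sketch: retrieval-augmented generation with a summary cache.
# Retrieval is naive keyword overlap; "generation" is template filling.

class KnowledgeBase:
    def __init__(self, passages):
        self.passages = passages

    def retrieve(self, query, k=1):
        # Score each stored passage by word overlap with the query.
        q = set(query.lower().split())
        scored = sorted(self.passages,
                        key=lambda p: len(q & set(p.lower().split())),
                        reverse=True)
        return scored[:k]

class SummaryCache:
    """Stand-in for a 'semantic caching mechanism': stores short
    summaries of past interactions for future reference."""
    def __init__(self):
        self.summaries = []

    def save_context(self, user_input, answer):
        # Crude summarization: keep only the first few words of the turn.
        self.summaries.append(
            "user asked about " + " ".join(user_input.split()[:4]))

    def load(self):
        return "; ".join(self.summaries)

def generate(query, kb, cache):
    # Retrieved text and cached history condition the response.
    context = kb.retrieve(query)[0]
    history = cache.load()
    answer = f"[history: {history}] Based on: {context}"
    cache.save_context(query, answer)
    return answer

kb = KnowledgeBase(["Returns are accepted within 30 days.",
                    "Support is available 24/7 by phone."])
cache = SummaryCache()
print(generate("how do returns work", kb, cache))
```

The point of the sketch is the claim mapping: the retrieve step corresponds to the claimed RAG module, and SummaryCache to the claimed semantic caching mechanism that summarizes and stores key aspects of interactions.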
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied Auffarth's retrieval augmented generation (RAG) module, semantic caching mechanism, and pretrained language model to Beaver's module, semantic caching mechanism, and language model, respectively, in Beaver's autonomous communication system, wherein the RAG module is configured to enhance the language model's response generation by retrieving relevant information from the knowledge base and the semantic caching mechanism is configured to summarize and store key aspects of interactions for future reference by the language model, for the benefits discussed above.

Claim 8 recites a method associated with the autonomous communication system of claim 1, implemented as steps performed by that system, and is thus rejected for the reasons given for claim 1 above.

Claim 15 has been analyzed and is rejected for the reasons given for claims 1 and 8 above. The combination of Beaver and Auffarth further teaches a computer-readable medium (Beaver, memory 604) having stored thereon instructions (Beaver, storing computer-readable instructions and program modules, para 65) that when executed cause a computer to process user inputs using a language model as discussed for claims 1 and 8 above (Beaver, multiprocessor execution on laptops or personal computers (PCs), para 60; Auffarth, implemented with model compression or other architectural optimization for more efficient deployment in latency and speed, chapter "Generative AI in Production," p. 260).
Claim 2: The combination of Beaver and Auffarth further teaches, per claim 1 above, wherein the pretrained language model is further configured to utilize chain of thought reasoning to improve the processing of complex user inputs (Beaver, the language model discussed for claim 1 above; Auffarth, creating an LLMChain class representing the language model using Chain of Thought (CoT) by thinking step by step, Chapter 5, p. 169).

Claim 3: The combination of Beaver and Auffarth further teaches, per claim 1 above, wherein the retrieval augmented generation RAG module is further configured to utilize external APIs or tools as part of its information retrieval process to enhance response accuracy (Beaver, the module of claim 1 above; Auffarth, RAG implemented by retrieving relevant information from sources, chapter "The Future of Generative Models," p. 306, including access to external knowledge to improve accuracy and domain-specific proficiency, or via knowledge bases, chapter "LangChain for LLM Apps," and performing reasoning algorithms such as chain-of-thought, fig. 2.7, p. 44).

Claim 4: The combination of Beaver and Auffarth further teaches, per claim 1 above, wherein the semantic caching mechanism is further configured to employ a database for efficient storage and retrieval of interaction summaries (Beaver, the semantic caching mechanism storing data; Auffarth, section "Remembering conversation summaries," p. 164, as discussed for claim 1 above), except for explicitly teaching that the database is a vector database.
Auffarth teaches another embodiment (building a chatbot like ChatGPT, Chapter 5, p. 131) in which a vector database is disclosed (a vector database for managing embeddings, Chapter 5, p. 139) for efficient storage and retrieval of interaction data (efficient search and vector storage mechanisms, section "Vector storage," p. 139, specifically for efficiently storing, managing, and retrieving large sets of vectors, section "Building a Chatbot like ChatGPT," p. 140). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have applied the vector database, as taught by Auffarth in this other embodiment, to the database in the autonomous communication system taught by the combination of Beaver and Auffarth, for the benefits discussed above.

Claim 5: The combination of Beaver and Auffarth further teaches, per claim 1 above, wherein the autonomous agent is further configured to selectively engage additional specialized agents based on the context of the user interaction (Beaver, assigning an instance of the IVA to a plurality of human agents based on the monitored intent of the text conversation, para 69, and, for special cases or topics requiring sensitivity and compassion, selecting specific human agents, para 47), each specialized agent being trained for specific interaction types (Beaver, the agents assigned to specific topics as interaction types, para 47).

Claim 6: The combination of Beaver and Auffarth further teaches, per claim 1 above, wherein the human in the loop interface is further configured to provide feedback mechanisms for human operators to refine the responses generated by the trained language model (Beaver, the human in the loop of fig. 4; Auffarth, reinforcement learning with human feedback (RLHF) within fine-tuning processes, para 3, chapter "Customizing LLMs and Their Output," p. 228) and to update the knowledge base used by the retrieval augmented generation RAG module (Beaver, the labeled training set updated with labeled samples from humans 450 in fig. 4; Auffarth, edited logs based on reviewer-annotated feedback added to a dataset for fine-tuning the model, para 3 of p. 272 and paras 2-3 of p. 303, Chapter 10).

Claim 7: The combination of Beaver and Auffarth further teaches, per claim 1 above, wherein the autonomous agent is further configured to utilize a framework for combining external knowledge with the trained language model to enhance reasoning capabilities during user interactions (Beaver, the language model modified or updated through the learning cycle of fig. 4; Auffarth, combining external services such as databases and APIs with models to extend their capability, para 2 of p. 55, and effectively combining generative AI models with other tools to create LLM applications, para 1 of section "Comparing LangChain with other frameworks," p. 60).

Claims 9-14 have been analyzed and are rejected for the reasons given for claim 8 together with claims 2-7, respectively. Claims 16-20 have been analyzed and are rejected for the reasons given for claim 15 together with claims 2-6, respectively.
Claim 21 has been analyzed and is rejected for the reasons given for claims 15 and 7 above.

Response to Arguments

Applicant's arguments filed on February 20, 2026 have been fully considered but are moot in view of the new ground(s) of rejection necessitated by applicant's amendment, e.g., "trained language model," and "a human in the loop interface" being equated to the "autonomous agent," etc., as discussed above. Although a new ground of rejection has been used to address the additional limitations and new interpretations of claim terms added to claims 1-2, 6-9, 13-16, and 20-21, a response to several of applicant's arguments is warranted because Beaver and Auffarth continue to be relied upon to meet several claimed limitations.

With respect to the prior art rejection of independent claim 1 (and similarly claims 8 and 15) under 35 U.S.C. 103, applicant argued that Beaver does not teach "retrieving relevant information from a knowledge base to provide additional context for generating a response to current user inputs, as required by the claimed RAG element," because Beaver "is materially different from RAG-style retrieval-augmented generation" and because Beaver is "not retrieving relevant information from a knowledge base" but rather its "retrieving" is directed to selecting and labeling training samples and updating models/templates, etc., as asserted in paragraph 4 of page 12 of the Remarks filed on February 20, 2026.

In response, the Office respectfully disagrees, because (1) the claim does not recite or require "provide additional context for generating a response to current user inputs" for the claimed "retrieving," but merely "retrieving relevant information from a knowledge base"; and (2) the claim broadly recites "retrieving relevant information from a knowledge base," and Beaver teaches this feature by disclosing "retrieving" (retrieving the added input-response pair in a pool as a training sample, para 48) "relevant information" (the added input-response pair as history data with respect to later training, para 48) from "a knowledge base" (the pool accepting the added input-response pair, para 48), specifically for the claimed "enhance the (trained) language model" (by increasing the accuracy and efficiency of natural language understanding component 115, including language model 120, in fig. 1). This disclosure essentially anticipates the broadly recited "retrieving" of "information" from a "knowledge base" to "enhance the (trained) language model." Applicant is advised that a difference between the claimed feature and the prior art's disclosure does not by itself avoid anticipation: Beaver's "input-answer pair" is a type of knowledge used for training and for increasing the accuracy of the language model, and the claim does not recite any exclusion of Beaver's "selecting," "labeling," etc., as listed in the argument. That is, Beaver discloses a narrower feature that anticipates the broadly claimed "retrieving," and the argument is therefore moot.

Applicant further challenged the combination of Beaver and Auffarth, arguing that the "rationale is generic and did not explain … why a person of ordinary skill in the art would have modified Beaver's disclosed intent/rule/templated-response and active-learning retraining architecture into a materially different runtime pipeline that retrieves external evidence to condition response generation and that stores/reuses summarized interaction memory; the Office action did not identify how the proposed combination would be implemented in Beaver without substantial redesign of Beaver's response-generation approach, nor did it address whether such redesign would alter Beaver's intended operation," as asserted in paragraph 2 of page 13 of the Remarks filed on February 20, 2026.

In response, the Office further disagrees, because the test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference, nor that the claimed invention must be expressly suggested in any one or all of the references. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). In the instant application, the Office Action clearly indicated applying Auffarth's "RAG module" to Beaver's "module" as a feature enhancement consistent with the intention of both references to improve accuracy. The rationale for the combination, as discussed above, rests neither on the alleged "modification of Beaver's disclosed intent/rule/templated-response and active-learning retraining architecture" nor on any redesign or "runtime pipeline," and applicant is silent regarding applying Auffarth's module to Beaver's module for this common enhancement as indicated in the Office Action above. The argument is therefore also moot.
For at least similar reasons, the prior art rejection of the other independent claims 8 and 15 and of dependent claims 2-7, 9-14, and 16-21 is maintained.

In response to this Office Action, the Office respectfully requests that support be shown for language added to any original claims by amendment and for any new claims; that is, indicate support for newly added claim language by specifically pointing to the page(s) and line numbers of the specification and/or drawing figure(s). This will assist the Office in prosecuting this application.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LESHUI ZHANG, whose telephone number is (571) 270-5589. The examiner can normally be reached Monday-Friday, 6:30 am-4:00 pm EST. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vivian Chin, can be reached at 571-272-7848. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LESHUI ZHANG/
Primary Examiner, Art Unit 2695
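As technical background for the vector-database limitation addressed at claim 4 in the action above, the core mechanism — storing embedding vectors alongside texts and retrieving the nearest stored items by similarity — can be sketched minimally. The VectorStore class and hand-written vectors below are our own illustrative stand-ins, not an implementation from either cited reference:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory vector store: maps embedding vectors to texts
    and returns the stored texts nearest to a query vector."""
    def __init__(self):
        self.items = []  # list of (vector, text) pairs

    def add(self, vector, text):
        self.items.append((vector, text))

    def search(self, query_vec, k=1):
        ranked = sorted(self.items,
                        key=lambda it: cosine(it[0], query_vec),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = VectorStore()
store.add([1.0, 0.0, 0.2], "summary: user asked about refunds")
store.add([0.0, 1.0, 0.1], "summary: user asked about shipping")
print(store.search([0.9, 0.1, 0.2]))
```

Production vector databases add persistence and approximate nearest-neighbor indexing, but the store-and-rank-by-similarity pattern above is the piece the claim 4 mapping turns on.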

Prosecution Timeline

May 06, 2024: Application Filed
Nov 17, 2025: Non-Final Rejection (§103)
Feb 18, 2026: Applicant Interview (Telephonic)
Feb 20, 2026: Response Filed
Feb 23, 2026: Examiner Interview Summary
Mar 07, 2026: Final Rejection (§103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585677: AUTOMATED GENERATION OF IMPROVED LIST-TYPE ANSWERS IN QUESTION ANSWERING SYSTEMS (2y 5m to grant; granted Mar 24, 2026)
Patent 12572757: VIDEO PROCESSING METHOD, VIDEO PROCESSING APPARATUS, AND COMPUTER-READABLE STORAGE MEDIUM (2y 5m to grant; granted Mar 10, 2026)
Patent 12567423: SYSTEM AND METHODS FOR UPSAMPLING OF DECOMPRESSED SPEECH DATA USING A NEURAL NETWORK (2y 5m to grant; granted Mar 03, 2026)
Patent 12567424: METHOD AND DEVICE FOR MULTI-CHANNEL COMFORT NOISE INJECTION IN A DECODED SOUND SIGNAL (2y 5m to grant; granted Mar 03, 2026)
Patent 12561354: SYSTEMS AND METHODS FOR ITEM-SPECIFIC KEYWORD RECOMMENDATION (2y 5m to grant; granted Feb 24, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+36.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate

Based on 928 resolved cases by this examiner. Grant probability derived from career allow rate.
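The headline probability follows from the examiner's career counts, and the arithmetic can be checked directly. The implied Tech Center average below is our inference from the stated +15.5% delta, not a figure reported anywhere in the data:

```python
granted, resolved = 719, 928  # examiner career counts from the report

allow_rate = granted / resolved       # ~0.775, shown in the report as 78%
implied_tc_avg = allow_rate - 0.155   # career rate minus the stated +15.5% delta

print(f"allow rate: {allow_rate:.1%}")            # ~77.5%
print(f"implied TC average: {implied_tc_avg:.1%}")  # ~62.0%
```

How the 78% base rate combines with the +36.0% interview lift to reach the capped 99% with-interview figure is not stated in the report, so we make no assumption about that formula here.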
