Prosecution Insights
Last updated: April 19, 2026
Application No. 18/459,352

TECHNOLOGIES FOR LEVERAGING IMPROVED CONVERSATIONAL BOTS IN A CONTACT CENTER SYSTEM

Status: Final Rejection — §103
Filed: Aug 31, 2023
Examiner: GAY, SONIA L
Art Unit: 2657
Tech Center: 2600 — Communications
Assignee: Genesys Cloud Services Inc.
OA Round: 2 (Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% — above average (701 granted / 855 resolved; +20.0% vs Tech Center average)
Interview Lift: +11.4% — moderate (resolved cases with an interview vs. without)
Typical Timeline: 3y 0m average prosecution; 33 applications currently pending
Career History: 888 total applications across all art units

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 855 resolved cases

Office Action

§103
DETAILED ACTION

This action is in response to the amendment filed on 12/08/2025.

Response to Amendment

Applicant's amendment filed on 12/08/2025 has been entered. No claims have been amended. No claims have been canceled. No claims have been added. Claims 1-20 are still pending in this application, with claims 1 and 11 being independent.

Allowable Subject Matter

Claims 6 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. After search and consideration of the prior art, it has been determined that the prior art fails to teach or suggest the following: removing tokens from the initial multilingual vocabulary based on the grouping of languages with linguistic similarities comprises: determining one or more languages relevant to a locale handled by the contact center system; and removing tokens from the initial multilingual vocabulary associated with languages not within a group of languages relevant to the locale handled by the contact center system.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7, 8, 11-13, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pabolu et al. (US 2024/0127008) ("Pabolu") in view of Wang et al. (US 2025/0006196) ("Wang"), and further in view of Wada et al. ("Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models").

For claim 1, Pabolu discloses a method of leveraging improved conversational bots in a contact center system (Abstract), the method comprising: performing, by a computing system ([0350-0354] [0357-0368]), knowledge distillation through machine learning ([0002] [0008]) to teach a student artificial intelligence model (base model, Fig. 1A, 148; Fig. 1C, 148; Fig. 4B; [0079] [0099] [0100] [0110] [0115]) based on a teacher artificial intelligence model (input model, e.g., mT5 model, Fig. 1A, 113; Fig. 1C, 113; Fig. 4A; [0100] [0101-0110]) (the student (base) model is a modified architecture of the teacher (input) model; knowledge from the teacher model is distilled to the student model by maintaining the already learned sentence embeddings of the teacher model in the student model, [0115-0118]) and reduce a size of an initial multilingual vocabulary, wherein the student artificial intelligence model includes fewer machine learning embedding layers than the teacher artificial intelligence model (the constrained vocabulary embedding layer of the student model is smaller than the input embedding layer of the teacher model, [0111-0120]); removing, by the computing system, tokens from the initial multilingual vocabulary based on a grouping of languages to reduce the size of the initial multilingual vocabulary ("The base model training subsystem 156 uses the indexes of the base vocabulary 123 generated by the base vocabulary generation subsystem 122 and prunes all the remaining input embeddings that the mT5 model has been trained on." The input embeddings are correlated to tokens in a vocabulary, [0083] [0084] [0089-0097] [0111] [0113-0116]).

Yet, Pabolu fails to teach the following: grouping the languages according to linguistic similarities; parsing, by the computing system, user text from a human user into one or more tokens using natural language processing; identifying, by the computing system, token indexes associated with the respective one or more tokens in a reduced multilingual vocabulary, wherein the reduced multilingual vocabulary is generated from performing the knowledge distillation and removing the tokens from the initial multilingual vocabulary; determining, by the computing system, embedding values associated with the identified token indexes; and generating, by the computing system, a multilingual embedding output for the user text based on the embedding values using machine learning, wherein the multilingual embedding output is indicative of an intent of the user text.

However, Wang discloses a method for generating a prompt for a language model (Abstract), wherein a computing system performs the following: receiving user text (user input) from a human user (Fig. 1, 127; [0025] [0026]); inputting the user text to a language model (first language model, which is a text summarization model, [0029]); and generating an output of the user text, wherein the output is indicative of the intent of the user (the output of the first language model comprises a representation/intent of a user's input, [0026] [0028]). Moreover, Pabolu discloses the following: a user text is parsed into one or more tokens using natural language processing ([0093] [0111]); token indexes associated with the respective one or more tokens are identified in a vocabulary ([0111]); embedding values associated with the one or more identified token indexes are determined ([0111]); and an embedding output (either the output of an encoder or the output of a decoder) for the user text based on the embedding values using the machine learning is generated ([0112]). Furthermore, Wada discloses a method for generating multilingual word embeddings (Abstract), wherein generated groups of source-target languages (German-English, Spanish-English) are linguistically similar or dissimilar (1. Introduction; 3.1 Overview; 4.1.1 Cross-lingual Word Embedding; 4.1.2 Multilingual Word Embedding; 4.2 Evaluation; pp. 3113, 3115-3117).
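For readers less familiar with the vocabulary-pruning step at the heart of this rejection, the following is a minimal sketch of the general idea: keep only the tokens tagged with languages in a chosen group and re-index what survives. It is illustrative only and assumes hypothetical token lists, language tags, and embedding rows; it is not the applicant's claimed method or Pabolu's implementation.

    # Minimal sketch (not Pabolu's or the applicant's code): pruning a multilingual
    # vocabulary to a group of related languages and re-indexing the surviving tokens.
    # The language tags, token list, and embedding matrix here are hypothetical.
    import numpy as np

    def prune_vocabulary(tokens, token_languages, embeddings, keep_languages):
        """Keep only tokens tagged with a language in keep_languages.

        tokens:          list[str], the initial multilingual vocabulary
        token_languages: list[str], a language tag per token (assumed available)
        embeddings:      np.ndarray of shape (len(tokens), dim), teacher embedding rows
        keep_languages:  set[str], e.g. a group of linguistically similar languages
        """
        keep = [i for i, lang in enumerate(token_languages) if lang in keep_languages]
        reduced_tokens = [tokens[i] for i in keep]
        reduced_embeddings = embeddings[keep]                          # copy only surviving rows
        new_index = {tok: j for j, tok in enumerate(reduced_tokens)}   # re-map token indexes
        return reduced_tokens, reduced_embeddings, new_index

    # Example: keep an English/Spanish group out of a toy 5-token vocabulary.
    tokens = ["hello", "hola", "bonjour", "hallo", "ciao"]
    langs  = ["en", "es", "fr", "de", "it"]
    emb    = np.random.rand(5, 4).astype(np.float32)
    reduced, reduced_emb, index = prune_vocabulary(tokens, langs, emb, {"en", "es"})
    print(reduced, reduced_emb.shape)   # ['hello', 'hola'] (2, 4)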
Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve Pabolu's invention in the same way that Wang's invention has been improved to achieve the following predictable results, for the purpose of enhancing a natural language processing system (increasing efficiency, improving resource usage, etc.) by using large language models to determine an action responsive to multilingual user input (Pabolu, [0011]) (Wang, [0013-0018]): the student model (which performs multilingual text summarization, Pabolu, [0011] [0108] [0113-0118] [0129]) is further incorporated into a natural language processing system which receives and processes user input to determine an action and cause performance of the action; user text is received from a human user; the user text is input into the student model; and an output of the user text is generated, wherein the output is indicative of the intent of the user.

Additionally, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Pabolu and Wang in the same way that the additional embodiments of Pabolu have been improved to achieve the following predictable results, for the purpose of enhancing a natural language processing system (increasing efficiency, improving resource usage, etc.) by using large language models to determine an action responsive to multilingual user input (Pabolu, [0011]) (Wang, [0013-0018]): further parsing the user text into one or more tokens using natural language processing; identifying token indexes associated with the respective one or more tokens in the reduced multilingual vocabulary of the student model, wherein the reduced multilingual vocabulary is generated from performing the knowledge distillation and removing the tokens from the initial multilingual vocabulary; and determining embedding values associated with the identified token indexes, wherein the generated embedding output is a multilingual embedding output.

Furthermore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Pabolu and Wang in the same way that Wada's invention has been improved, for the purpose of providing multilingual natural language processing using models with faster inferencing speeds (Pabolu, [0005-0007] [0119] [0120]): the languages are linguistically grouped (source-target) based on similarity/difference, wherein the tokens from the initial multilingual vocabulary are removed based on groups of languages which are linguistically similar (English-Spanish as source/target languages, Pabolu [0113]) or different.

For claims 2 and 12, Pabolu further discloses wherein performing knowledge distillation through machine learning to teach the student artificial intelligence model based on the teacher artificial intelligence model and reduce the size of the initial multilingual vocabulary comprises reducing a memory consumption of the initial multilingual vocabulary (Pabolu: a smaller model with faster inferencing/processing speed implies less memory consumption, [0070]).

For claims 3 and 13, Pabolu and Wada further disclose wherein removing the tokens from the initial multilingual vocabulary based on the grouping of languages with linguistic similarities to reduce the size of the initial multilingual vocabulary comprises reducing a memory consumption of the initial multilingual vocabulary (Pabolu: a smaller model with faster inferencing/processing speed implies less memory consumption, [0070] [0115-0121]) (Wada: 1. Introduction; 3.1 Overview; 4.1.1 Cross-lingual Word Embedding; 4.1.2 Multilingual Word Embedding; 4.2 Evaluation; pp. 3113, 3115-3117).
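As a rough, back-of-the-envelope illustration of why a smaller vocabulary reduces memory consumption (the point relied on for claims 2, 3, 12 and 13), an embedding matrix stores roughly vocab_size × dim floating-point values. The sizes below are assumed for illustration, not figures taken from Pabolu or the application.

    # Illustrative arithmetic only (assumed sizes, not figures from the references):
    # an embedding matrix stores vocab_size * dim floats, so pruning the vocabulary
    # reduces memory roughly in proportion to the tokens removed.
    BYTES_PER_FLOAT32 = 4

    def embedding_matrix_bytes(vocab_size, dim):
        return vocab_size * dim * BYTES_PER_FLOAT32

    full    = embedding_matrix_bytes(250_000, 768)   # hypothetical mT5-scale vocabulary
    reduced = embedding_matrix_bytes(30_000, 768)    # hypothetical two-language subset
    print(f"full:    {full / 1e6:.0f} MB")           # ~768 MB
    print(f"reduced: {reduced / 1e6:.0f} MB")        # ~92 MB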
For claims 7 and 17, Pabolu further discloses wherein generating the multilingual embedding output for the user text (output of an encoder) based on the embedding values comprises generating a fixed-length vector representation of the user text based on the embedding values (Pabolu, [0111-0117]).

For claims 8 and 18, Pabolu further discloses wherein generating the multilingual embedding output for the user text based on the embedding values using machine learning comprises generating the multilingual embedding output for the user text based on the embedding values using a neural network (Pabolu, [0111-0117]).
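A generic way to obtain the fixed-length vector representation discussed for claims 7 and 17 is to pool the per-token embedding values, for example by averaging them. The sketch below shows that idea only, under assumed sizes; the cited references may compute the representation differently.

    # Generic sketch of a fixed-length sentence representation via mean pooling.
    # One common technique; not asserted to be how the cited references compute it.
    import numpy as np

    def fixed_length_embedding(token_indexes, embedding_matrix):
        """Average the embedding rows of the looked-up tokens.

        token_indexes:    list[int] from the (reduced) vocabulary lookup
        embedding_matrix: np.ndarray of shape (vocab_size, dim)
        Returns an array of shape (dim,) regardless of how many tokens came in.
        """
        vectors = embedding_matrix[token_indexes]        # (num_tokens, dim)
        return vectors.mean(axis=0)                      # (dim,) fixed length

    emb = np.random.rand(30_000, 256).astype(np.float32)      # hypothetical reduced vocabulary
    print(fixed_length_embedding([12, 4051, 17], emb).shape)  # (256,)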
For claim 11, Pabolu discloses a computing system for leveraging improved conversational bots in a contact center system (Abstract), the computing system comprising: at least one processor ([0029]); and at least one memory comprising a plurality of instructions stored thereon that, in response to execution by the at least one processor ([0029]), cause the computing system to: perform knowledge distillation through machine learning ([0002] [0008]) to teach a student artificial intelligence model (base model, Fig. 1A, 148; Fig. 1C, 148; Fig. 4B; [0079] [0099] [0100] [0110] [0115]) based on a teacher artificial intelligence model and reduce a size of an initial multilingual vocabulary (input model, e.g., mT5 model, Fig. 1A, 113; Fig. 1C, 113; Fig. 4A; [0100] [0101-0110]) (the student (base) model is a modified architecture of the teacher (input) model; knowledge from the teacher model is distilled to the student model by maintaining the already learned sentence embeddings of the teacher model in the student model, [0115-0118]), wherein the student artificial intelligence model includes fewer machine learning embedding layers than the teacher artificial intelligence model (the constrained vocabulary embedding layer of the student model is smaller than the input embedding layer of the teacher model, [0111-0120]); remove tokens from the initial multilingual vocabulary based on a grouping of languages to reduce the size of the initial multilingual vocabulary ("The base model training subsystem 156 uses the indexes of the base vocabulary 123 generated by the base vocabulary generation subsystem 122 and prunes all the remaining input embeddings that the mT5 model has been trained on." The input embeddings are correlated to tokens in a vocabulary, [0083] [0084] [0089-0097] [0111] [0113-0116]).

Yet, Pabolu fails to teach the following: grouping the languages according to linguistic similarities; parsing, by the computing system, user text from a human user into one or more tokens using natural language processing; identifying, by the computing system, token indexes associated with the respective one or more tokens in a reduced multilingual vocabulary, wherein the reduced multilingual vocabulary is generated from performing the knowledge distillation and removing the tokens from the initial multilingual vocabulary; determining, by the computing system, embedding values associated with the identified token indexes; and generating, by the computing system, a multilingual embedding output for the user text based on the embedding values using machine learning, wherein the multilingual embedding output is indicative of an intent of the user text.

However, Wang discloses a method for generating a prompt for a language model (Abstract), wherein a computing system performs the following: receiving user text (user input) from a human user (Fig. 1, 127; [0025] [0026]); inputting the user text to a language model (first language model, which is a text summarization model, [0029]); and generating an output of the user text, wherein the output is indicative of the intent of the user (the output of the first language model comprises a representation/intent of a user's input, [0026] [0028]). Moreover, Pabolu discloses the following: a user text is parsed into one or more tokens using natural language processing ([0093] [0111]); token indexes associated with the respective one or more tokens are identified in a vocabulary ([0111]); embedding values associated with the one or more identified token indexes are determined ([0111]); and an embedding output (either the output of an encoder or the output of a decoder) for the user text based on the embedding values using the machine learning is generated ([0112]). Furthermore, Wada discloses a method for generating multilingual word embeddings (Abstract), wherein generated groups of source-target languages (German-English, Spanish-English) are linguistically similar or dissimilar (1. Introduction; 3.1 Overview; 4.1.1 Cross-lingual Word Embedding; 4.1.2 Multilingual Word Embedding; 4.2 Evaluation; pp. 3113, 3115-3117).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve Pabolu's invention in the same way that Wang's invention has been improved to achieve the following predictable results, for the purpose of enhancing a natural language processing system (increasing efficiency, improving resource usage, etc.) by using large language models to determine an action responsive to multilingual user input (Pabolu, [0011]) (Wang, [0013-0018]): the student model (which performs multilingual text summarization, Pabolu, [0011] [0108] [0113-0118] [0129]) is further incorporated into a natural language processing system which receives and processes user input to determine an action and cause performance of the action; user text is received from a human user; the user text is input into the student model; and an output of the user text is generated, wherein the output is indicative of the intent of the user.

Additionally, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Pabolu and Wang in the same way that the additional embodiments of Pabolu have been improved to achieve the following predictable results, for the purpose of enhancing a natural language processing system (increasing efficiency, improving resource usage, etc.) by using large language models to determine an action responsive to multilingual user input (Pabolu, [0011]) (Wang, [0013-0018]): further parsing the user text into one or more tokens using natural language processing; identifying token indexes associated with the respective one or more tokens in the reduced multilingual vocabulary of the student model, wherein the reduced multilingual vocabulary is generated from performing the knowledge distillation and removing the tokens from the initial multilingual vocabulary; and determining embedding values associated with the identified token indexes, wherein the generated embedding output is a multilingual embedding output.

Furthermore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Pabolu and Wang in the same way that Wada's invention has been improved, for the purpose of providing multilingual natural language processing using models with faster inferencing speeds (Pabolu, [0005-0007] [0119] [0120]): the languages are linguistically grouped (source-target) based on similarity/difference, wherein the tokens from the initial multilingual vocabulary are removed based on groups of languages which are linguistically similar (English-Spanish as source/target languages, Pabolu [0113]) or different.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Pabolu et al. (US 2024/0127008) ("Pabolu") in view of Wang et al. (US 2025/0006196) ("Wang"), further in view of Wada et al. ("Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models"), and further in view of Lebed ("Customer Service Cheat Sheet for Live Chat").

For claims 4 and 14, the combination of Pabolu, Wang and Wada further discloses wherein performing knowledge distillation through machine learning to teach the student artificial intelligence model based on the teacher artificial intelligence model comprises performing machine learning to train the student artificial intelligence model with respect to the teacher artificial intelligence model's sentence embeddings which were learned from training data scraped from the web ([0110] [0118]). Yet, the combination of Pabolu, Wang and Wada fails to teach that the training data comprised contact center keywords. However, Lebed discloses a web site (Abstract), wherein the web site comprises contact center keywords (1. How to start a chat; 5. How to put on hold). Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to modify the combined teachings of Pabolu, Wang and Wada with Lebed's teachings so that the training data scraped from the web site includes contact center keywords for the purpose of providing multilingual natural language processing across a variety of domains, including a contact center domain (Pabolu, [0005-0007] [0119] [0120]).
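The claims 4 and 14 discussion above turns on training the student model with respect to the teacher model's learned sentence embeddings. One common, generic formulation of such a distillation objective is a mean-squared-error loss between the student's and teacher's sentence embeddings, sketched below under assumed inputs; it is not asserted to be the specific procedure of Pabolu or of the application.

    # Hedged sketch of a distillation objective in the spirit the rejection describes:
    # the student is trained so its sentence embedding stays close to the teacher's.
    # Generic MSE formulation; all names and values are illustrative.
    import numpy as np

    def distillation_loss(student_embedding, teacher_embedding):
        """Mean squared error between student and teacher sentence embeddings."""
        diff = student_embedding - teacher_embedding
        return float(np.mean(diff ** 2))

    teacher_vec = np.random.rand(256).astype(np.float32)   # produced by the teacher model
    student_vec = teacher_vec + 0.01 * np.random.randn(256).astype(np.float32)
    print(distillation_loss(student_vec, teacher_vec))     # small value -> embeddings agree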
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Pabolu et al. (US 2024/0127008) ("Pabolu") in view of Wang et al. (US 2025/0006196) ("Wang"), further in view of Wada et al. ("Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models"), and further in view of Mindsight IT ("Contact Centers have a Case of TMA!").

For claims 5 and 15, the combination of Pabolu, Wang and Wada further discloses wherein performing knowledge distillation through machine learning to teach the student artificial intelligence model based on the teacher artificial intelligence model comprises performing machine learning to train the student artificial intelligence model with respect to the teacher artificial intelligence model's sentence embeddings which were learned from training data scraped from the web ([0110] [0118]). Yet, the combination of Pabolu, Wang and Wada fails to teach that the training data comprised domain-specific keywords. However, Mindsight IT discloses a web site (Abstract), wherein the web site comprises domain-specific keywords (Commonly Used Contact Center Acronyms in Cisco's Unified Contact Center Express (UCCX) Systems). Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to modify the combined teachings of Pabolu, Wang and Wada with Mindsight IT's teachings so that the training data scraped from the web site includes domain-specific keywords for the purpose of providing multilingual natural language processing across a variety of domains (Pabolu, [0005-0007] [0119] [0120]).

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pabolu et al. (US 2024/0127008) ("Pabolu") in view of Wang et al. (US 2025/0006196) ("Wang"), further in view of Wada et al. ("Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models"), and further in view of Mishra (US 2023/0368773).

For claims 9 and 19, the combination of Pabolu, Wang and Wada fails to teach receiving, by the computing system, the user text from an interaction between the human user and a conversational bot of the contact center system. However, Mishra discloses a system and method for facilitating communications with a user (Abstract), wherein user text is obtained from an interaction between a human user (Fig. 1, 104) and a conversational bot (virtual agent/personal virtual agent, wherein the virtual agent/personal virtual agent comprises a large language model, Fig. 1, 128) of a contact center (Fig. 1, 120) ([0021] [0023] [0027-0033] [0059] [0089-0094]). Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to improve the invention disclosed by the combination of Pabolu, Wang and Wada in the same way that Mishra's invention has been improved to achieve the following predictable results, for the purpose of increasing the operating efficiency of a contact center by automating processes, including using large language models to determine an action responsive to multilingual user input (Pabolu, [0011]) (Wang, [0013-0018]) (Mishra, [0003]): the user text is further received from an interaction between the human user and a conversational bot of the contact center system.

Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pabolu et al. (US 2024/0127008) ("Pabolu") in view of Wang et al. (US 2025/0006196) ("Wang"), further in view of Wada et al. ("Unsupervised Multilingual Word Embedding with Limited Resources using Neural Language Models"), and further in view of Sapugay et al. (US 2021/0004441) ("Sapugay").

For claims 10 and 20, the combination of Pabolu, Wang and Wada fails to teach wherein each token of the initial multilingual vocabulary is represented by a multi-dimensional vector of floating point values. However, Sapugay discloses a system and method for processing natural language utterances (Abstract), wherein a token (word) is represented by a multi-dimensional vector of floating point values ([0029] [0030]). Therefore, it would have been obvious to one of ordinary skill in the art at the time of applicant's filing to modify the combined teachings of Pabolu, Wang and Wada with Sapugay's teachings so that the token (word) is further represented by a multi-dimensional vector of floating point values for the purpose of providing multilingual natural language processing using machine learning which improves operating efficiency (Pabolu, [0005-0007] [0119] [0120]).
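Claims 9, 19 and the independent claims tie the multilingual embedding output to an intent of the user text in a contact-center bot. One illustrative (and purely hypothetical) way an embedding can be "indicative of an intent" is by comparing it against per-intent prototype vectors; the intents and vectors below are invented for the example and do not come from the application or the cited references.

    # Illustrative only: comparing a fixed-length multilingual embedding against
    # hypothetical per-intent prototype vectors to pick the closest intent.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def nearest_intent(text_embedding, intent_prototypes):
        """Return the intent whose prototype embedding is most similar."""
        return max(intent_prototypes, key=lambda name: cosine(text_embedding, intent_prototypes[name]))

    rng = np.random.default_rng(0)
    prototypes = {"billing_question": rng.random(128), "reset_password": rng.random(128)}
    user_vec = prototypes["reset_password"] + 0.05 * rng.random(128)
    print(nearest_intent(user_vec, prototypes))   # "reset_password"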
Response to Arguments

Applicant's arguments filed 12/08/2025 have been fully considered but they are not persuasive.

On page 11 of the remarks, applicant argues that Pabolu fails to teach "removing, by the computing system, tokens from the initial multilingual vocabulary based on a grouping of languages to reduce the size of the initial multilingual vocabulary." The broadest reasonable interpretation of the term "a grouping of languages" is that it is a noun phrase which refers to a group comprising languages. Pabolu teaches decreasing the size of an initial model by decreasing the built-in vocabulary ([0070]). Pabolu further teaches that the initial model, an mT5 model, is trained on a multilingual vocabulary of over 100 languages ([0110]). Additionally, Pabolu teaches removing tokens from the initial multilingual vocabulary comprising over 100 languages. These removed tokens include tokens which are unrelated to English and another language (the base vocabulary includes words/tokens selected from English and Spanish; "The base model training subsystem 156 uses the indexes of the base vocabulary 123 generated by the base vocabulary generation subsystem 122 and prunes all the remaining input embeddings that the mT5 model has been trained on."; [0083] [0092] [0094] [0095] [0116]). By combining English with a second language, e.g., Spanish ([0113] [0124]), Pabolu teaches a group of languages. Furthermore, Wada teaches that English and Spanish are linguistically similar languages (4.1.2 Multilingual Word Embedding). In combination, Pabolu and Wada teach the following limitation: "removing, by the computing system, tokens from the initial multilingual vocabulary based on a grouping of languages with linguistic similarities to reduce the size of the initial multilingual vocabulary."

On page 12 of the remarks, applicant argues that the combination of Pabolu, Wang, Wada and Lebed fails to teach the following limitation recited in claim 4: "wherein performing the knowledge distillation through machine learning to teach the student artificial intelligence model based on the teacher artificial intelligence model comprises performing machine learning to train the student artificial intelligence model with respect to the teacher artificial intelligence model's representation of contact center keywords." Pabolu teaches that the teacher artificial intelligence model, mT5, is trained on vocabulary scraped from the web (C4) ([0109] [0110]). Pabolu further teaches that the mT5 model is designed to follow the T5 model as closely as possible. The Google Machine Learning glossary defines T5 as follows: "A text-to-text transfer learning model introduced by Google AI in 2020. T5 is an encoder-decoder model, based on the Transformer architecture, trained on an extremely large dataset. It is effective at a variety of natural language processing tasks, such as generating text, translating languages, and answering questions in a conversational manner." (https://developers.google.com/machine-learning/glossary/#t) Lebed discloses a web site which comprises contact center keywords as discussed above. The latest publication date shown for the website/keywords is August 22, 2018. Since the mT5 model is defined as a version of a model which was introduced in 2020, the vocabulary scraped from the web and used for training could comprise the contact center keywords.

On page 12 of the remarks, applicant argues that the combination of Pabolu, Wang, Wada and Mindsight IT fails to teach the following limitation recited in claim 5: "wherein performing the knowledge distillation through machine learning to teach the student artificial intelligence model based on the teacher artificial intelligence model comprises performing machine learning to train the student artificial intelligence model with respect to the teacher artificial intelligence model's representation of domain-specific keywords." Pabolu teaches that the teacher artificial intelligence model, mT5, is trained on vocabulary scraped from the web (C4) ([0109] [0110]). Pabolu further teaches that the mT5 model is designed to follow the T5 model as closely as possible. The Google Machine Learning glossary defines T5 as follows: "A text-to-text transfer learning model introduced by Google AI in 2020. T5 is an encoder-decoder model, based on the Transformer architecture, trained on an extremely large dataset. It is effective at a variety of natural language processing tasks, such as generating text, translating languages, and answering questions in a conversational manner." (https://developers.google.com/machine-learning/glossary/#t) Mindsight IT discloses a web site which comprises domain-specific keywords as discussed above. The latest publication date shown for the website/keywords is January 12, 2015. Since the mT5 model is defined as a version of a model which was introduced in 2020, the vocabulary scraped from the web and used for training could comprise the domain-specific keywords.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SONIA L GAY, whose telephone number is (571) 270-1951. The examiner can normally be reached Monday-Friday, 9-5 ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SONIA L GAY/
Primary Examiner, Art Unit 2657

Prosecution Timeline

Aug 31, 2023
Application Filed
Sep 06, 2025
Non-Final Rejection — §103
Dec 08, 2025
Response Filed
Mar 21, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602617
DATA MANUFACTURING FRAMEWORKS FOR SYNTHESIZING SYNTHETIC TRAINING DATA TO FACILITATE TRAINING A NATURAL LANGUAGE TO LOGICAL FORM MODEL
2y 5m to grant · Granted Apr 14, 2026
Patent 12602408
STREAMING OF NATURAL LANGUAGE (NL) BASED OUTPUT GENERATED USING A LARGE LANGUAGE MODEL (LLM) TO REDUCE LATENCY IN RENDERING THEREOF
2y 5m to grant · Granted Apr 14, 2026
Patent 12602539
PROACTIVE ASSISTANCE VIA A CASCADE OF LLMS
2y 5m to grant · Granted Apr 14, 2026
Patent 12596708
SYSTEMS AND METHODS FOR AUTOMATED CODE GENERATION FOR CALCULATION BASED ON ASSOCIATED FORMAL SPECIFICATIONS
2y 5m to grant · Granted Apr 07, 2026
Patent 12591604
INTELLIGENT ASSISTANT
2y 5m to grant · Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 93% (+11.4%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 855 resolved cases by this examiner. Grant probability derived from career allow rate.
