Prosecution Insights
Last updated: April 19, 2026
Application No. 18/212,224

METHOD AND SYSTEM FOR PROCESSING MULTILINGUAL USER INPUTS VIA APPLICATION PROGRAMMING INTERFACE

Final Rejection: §103; Double Patenting (§DP)
Filed: Jun 21, 2023
Examiner: SIRJANI, FARIBA
Art Unit: 2659
Tech Center: 2600 (Communications)
Assignee: Rajiv Trehan
OA Round: 2 (Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allowance Rate: 76% (above average; 414 granted / 547 resolved; +13.7% vs Tech Center average)
Interview Lift: +31.0% (allowance among resolved cases with an interview vs. without)
Typical Timeline: 2y 10m average prosecution; 31 applications currently pending
Career History: 578 total applications across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 547 resolved cases.

Office Action

Grounds: §103; Double Patenting (§DP)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-20 are pending. Claims 1, 9, and 17 are independent. Only Claim 1 is amended, to overcome an objection. This application was published as U.S. 2023/0334265. Apparent priority: 25 May 2021. Please note the Examiner's comments on pp. 12-13 of the previous Office action (repeated below). Applicant's amendments and arguments have been considered but are either unpersuasive or moot in view of the new grounds of rejection, which were necessitated by the amendments to the Claims. This action is Final.

Response to Amendments

The objection to Claim 1 is withdrawn in view of the amendments to this Claim. The obviousness-type double patenting rejection over U.S. 11,741,317 is withdrawn in view of the filing of the Terminal Disclaimer.

Response to Arguments

Applicant's arguments are not persuasive because they are not supported by the language of the Claim or properly mapped to Zhang. Applicant's main argument, which is the foundation of most of its other arguments, characterizes Zhang as being directed to extracting and translating keywords, whereas the invention translates whole sentences:

[Applicant's argument reproduced as images.] Response, pp. 13-14.

In Reply, the Claim does not refer to a sentence and does not include the word “entire,” and even if it did, an “entire sentence” could be a single word. Dependent Claim 8 refutes Applicant's arguments by stating that the input could be a word; see below for elaboration. Not only does the Claim not state that the input is an “entire sentence” or that the entire sentence is being translated into the intermediate language, but dependent Claim 8 removes any ambiguity in this respect and expressly states that the input can be as short as a phoneme: “8.
The method of claim 1, wherein the at least one verbal input from the user is in form of a sentence, a phrase, a word, or a phoneme in context.”

Accordingly, Claim 1 covers the situation where the input is a single “word,” which would be taught by the “keywords” of the rebutted reference Zhang. When the input is a single word and Zhang extracts a keyword, it will extract the single word, which is the entirety of the input, i.e., the “entire sentence” as argued. Additionally, there are single-word sentences that fall under the teachings of Zhang. These situations must be excluded from the Claim before the arguments of the Response can succeed. For the Applicant to rely on the currently submitted arguments, Claim 8 and its counterparts need to be canceled or amended, and Claim 1 and its counterparts need to be amended to expressly state the following or some equivalent of it: “wherein the textual input or the verbal input comprises an input sentence including more than one word, and wherein translating the user input generates translations of the input sentence as a whole as the translated user inputs.”

The other part of Applicant's first argument, regarding “confidence,” is ancillary to and based on the “entire sentence” argument:

[Applicant's argument reproduced as images.] Response, p. 14.

In Reply, considering that the “whole translation level” can be degraded to the translation of a single word, the “confidence” argument, which rests on Applicant's primary “entire sentence” premise, is also unpersuasive. Further note that the Claim merely asks for a confidence score, “wherein a confidence score is associated with each of the plurality of translated user inputs,” without specifying any particular method of calculating that score.

Applicant's second argument (p. 16) is again based on the “complete input text” presumption that was replied to and refuted above:

[Applicant's argument reproduced as image.] Response, p. 16.

Further in Reply, there is no indication of using multiple models in Zhang. Zhang mentions Google Translate ([0007]) and suggests an improvement to it by presenting its own approach. The above argument again emphasizes, and is founded upon, the refuted “complete input text” premise.

As to the “intent mapping,” while Zhang effectively and impliedly teaches intent mapping by teaching intent determination, another reference was added to include the word “map.”

[Citation reproduced as image.] Office action of 5/29/2025, p. 17.

See also the mappings on pp. 15-16, which are repeated below as the rejection has not been modified. Moreover, as provided in the previous Office action: “However, a key part of the definition relies on ‘input intent maps’ that also have a specific meaning ([0051]) that is not currently claimed inside the Claim language. Note that the last two limitations of the independent Claims that perform the translation and rendering have no dependence on the identification of the intent map. Two separate and unrelated processes are occurring in the Claim. Please address.” Office action of 5/29/2025, pp. 12-13. In other words, please (1) define the “intent map” based on the Specification, and (2) connect the limitations so they rely on the “intent map.”

As to Applicant's third argument (p. 17):

[Applicant's argument reproduced as image.] Response, p. 18.

In Reply, the keywords of Zhang are extracted to indicate intent, and, as the mapping to Zhang provided, the context sensitivity in Zhang teaches the intent of the Claim. The mapping of the Claim with respect to the intent feature is elaborate and detailed because the Examiner noted that the “intent map” is a key feature.
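As an illustrative aside (not part of the record), the requested connection between the intent-map limitations and the final matching/retrieval steps can be sketched in code. This is a minimal hypothetical sketch, assuming intent maps modeled as sets of intermediate-language keyword units and a Jaccard distance; every name, data value, and the distance choice are assumptions for illustration, not the Applicant's defined “intent map” ([0051]).

```python
# Purely illustrative sketch: match an input intent map against pre-stored
# intent maps and retrieve the response mapped to the closest one.
# All identifiers and data below are hypothetical.

def jaccard_distance(a, b):
    """Distance between two intent maps modeled as sets of keyword units."""
    a, b = set(a), set(b)
    return 1.0 - len(a & b) / len(a | b)

# Pre-stored intent maps in the intermediate language, each mapped to a
# predefined intent and a predetermined response (hypothetical examples).
PRE_STORED = {
    ("weather", "today"): ("get_weather", "Sunny, 22 C"),
    ("book", "flight"):   ("book_flight", "Flight search opened"),
}

def closest_response(input_intent_map):
    """Identify the pre-stored intent map closest to the input map and
    retrieve the predetermined response mapped to it."""
    best = min(PRE_STORED, key=lambda m: jaccard_distance(m, input_intent_map))
    intent, response = PRE_STORED[best]
    return intent, response

print(closest_response(("weather", "now")))  # matches the weather intent map
```

Under this sketch, the translating and rendering limitations would operate on the retrieved response, which is how the two processes could be made to depend on the identified intent map.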
However, as provided in the Office action of 5/29/2025, pages 12-13, immediately before the presentation of the mapping, the Examiner requested (1) more particularity in the definition of the intent map and (2) that the last two limitations be connected to the intent map to show their reliance on it.

With respect to the fourth argument (p. 19), Applicant argues that “determining the distance” is not taught:

[Applicant's argument reproduced as image.] Response, p. 20.

In Reply, Word2Vec, similarity, and clustering are all based on “distances.” Additionally, the cited portions of Zhang expressly include the word “distance”:

“[0048] … For example, fuzzy matching (e.g., based on edit distance) may be used by the system to detect misspelling or spelling variants which may be clustered together by the system. Keyword units that are determined by the system to belong to the same synset according to an English-language lexical database may be considered by the system to be synonyms of each other and thus clustered together by the system. One example of a suitable English-language lexical database that may be used by the system for this purpose is the WordNet database available on the internet in the domain wordnet.princeton.edu, the entire contents of which is hereby incorporated by reference. Keyword units may also be converted by the system to word vectors in an embedding space and word vectors that are close in distance in the embedding space may be clustered together by the system. Distance between word vectors in the embedding space may be measured according to a distance measure such as, for example, cosine similarity, or the like. …”

This concept is well known and mundane in the art.

Applicant's fifth and final argument (p. 22) returns to distinguishing Zhang as directed to translating keywords, which was refuted above.

[Applicant's argument reproduced as image.] Response, p. 22.
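As an illustrative aside (not part of the record), the embedding-space distance and clustering that Zhang [0048] describes can be sketched briefly. This is an illustrative sketch only, assuming hypothetical three-dimensional vectors in place of real Word2Vec embeddings and a simple greedy threshold clustering in place of whatever clustering Zhang's system actually uses.

```python
# Illustrative sketch of embedding-space distance and clustering of the kind
# described in Zhang [0048]. The vectors below are hypothetical; a real system
# would obtain them from a trained embedding model such as Word2Vec.
import math

def cosine_similarity(u, v):
    """Cosine similarity between two word vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cluster_by_distance(word_vectors, threshold=0.8):
    """Greedy clustering: a word joins the first cluster whose seed word is
    within the similarity threshold; otherwise it seeds a new cluster."""
    clusters = []  # list of (seed_word, [member words])
    for word, vec in word_vectors.items():
        for seed_word, members in clusters:
            if cosine_similarity(word_vectors[seed_word], vec) >= threshold:
                members.append(word)
                break
        else:
            clusters.append((word, [word]))
    return [members for _, members in clusters]

# Hypothetical embeddings: "career" and "job" point in similar directions,
# so they cluster together; "lunch" does not.
vectors = {
    "career": [0.9, 0.1, 0.0],
    "job":    [0.8, 0.2, 0.1],
    "lunch":  [0.0, 0.1, 0.9],
}
print(cluster_by_distance(vectors))  # [['career', 'job'], ['lunch']]
```

Words close in the embedding space (high cosine similarity, i.e., small distance) end up in the same cluster, which is the sense in which Word2Vec-style similarity and clustering are “based on distances.”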
In Reply, as provided above, an entire sentence is not claimed; further, Claim 8 equates the input to a single word, and an entire sentence could itself be a single word unless the Claim specifies that multi-word sentences are being translated as a whole. This will impact the method of arriving at the intent map, which also needs to be claimed with particularity.

With respect to the portion of the last argument stating that Zhang does not provide the translated responses back to the user (p. 23), see Figure 4, 408: “present set of target key phrases in computer graphical interface.” In translation terminology the first and second languages are called source and target; the target is the translated language. The previous Office action included the citation:

[Citation reproduced as image.] Office action of 5/29/2025, p. 17.

Claim 1 provides:

1. A method for processing user inputs in multiple languages using a Single Natural Language Processing (SNLP) model, the method comprising:
receiving, via a communication device, a user input from a user in a source language, wherein the user input is at least one of a textual input and a verbal input;
translating, using a machine translation model, the user input to generate a plurality of translated user inputs in an intermediate language, wherein a confidence score is associated with each of the plurality of translated user inputs, and wherein each of the plurality of translated user inputs is in text form;
generating for the plurality of translated user inputs, by the SNLP model configured only using the intermediate language, a plurality of sets of intermediate input vectors in the intermediate language;
processing, via an Application Programming Interface (API), the plurality of sets of intermediate input vectors using a predefined mechanism, wherein the API is associated with a domain from a plurality of domains; and
retrieving a predetermined response from the API based on processing the plurality of sets of intermediate input vectors,
wherein the predefined mechanism comprises an elastic stretching mechanism, and wherein the elastic stretching mechanism comprises:
generating for the plurality of sets of intermediate input vectors, a plurality of sets of input intent maps in the intermediate language, wherein each of the plurality of sets of input intent maps is associated with one of the plurality of translated user inputs;
matching each of the plurality of sets of input intent maps in the intermediate language with each of a plurality of pre-stored sets of intent maps in the intermediate language, wherein each of the plurality of pre-stored sets of intent maps is generated from a single predefined training input in the intermediate language and is mapped to a predefined intent and the predetermined response retrieved from the API in the intermediate language;
determining a distance of each of the plurality of sets of input intent maps relative to each of the plurality of pre-stored sets of intent maps;
identifying a pre-stored intent map from the plurality of pre-stored sets of intent maps closest to the plurality of sets of input intent maps;
translating the predetermined response mapped to the pre-stored intent map into the source language to generate a translated response; and
rendering, to the user, the translated response.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (U.S. 20200311203) in view of Anand (U.S. 20190213284) and further in view of Chappidi (U.S. 20200380963).

The instant Application is directed to a chatbot that receives commands or queries for performance of a task and first translates the input command/query into an intermediate language, then determines intent from the translated command/query, then according to intent routes the command/query to the appropriate API, receives a response from the API, translates it back to the user language, and outputs the response.

“Elastic stretching mechanism” is not a term of art in NLP, and while the inventor may be his own lexicographer, as recognized by the Applicant, the term needs definition inside the Claim. However, a key part of the definition relies on “input intent maps” that also have a specific meaning ([0051]) that is not currently claimed inside the Claim language. Note that the last two limitations of the independent Claims that perform the translation and rendering have no dependence on the identification of the intent map. Two separate and unrelated processes are occurring in the Claim. Please address.

Regarding Claim 1, Zhang teaches:

1. A method for processing user inputs in multiple languages using a Single Natural Language Processing (SNLP) model, the method comprising:

receiving, via a communication device, a user input from a user in a source language, wherein the user input is at least one of a textual input and a verbal input; [Zhang, Figure 4, 402. “[0063] Process 400 includes the operations of the system receiving a selection of a set of source multi-language survey comments 402, …” The comments are written text.]
translating, using a machine translation model, the user input to generate a plurality of translated user inputs in an intermediate language, wherein a confidence score is associated with each of the plurality of translated user inputs, and wherein each of the plurality of translated user inputs is in text form; [Zhang, Figure 4, 404, 406. “[0063] … the system determining a set of source keyword units in the intermediate language that are most relevant to the selected set of source multi-language survey comments 404, the system mapping the set of source keyword units in the intermediate language to the set of target keyword units in the target language 406 …” The comments are first translated into an intermediate language. A confidence score is associated with the translation and indicates its accuracy: “[0022] … The more often the same translation pair occurs in the sets, the higher the confidence assigned by the system that the particular intermediate language keyword unit of the translation pair is an accurate translation in context of the particular target language keyword unit of the translation pair. …”]

generating for the plurality of translated user inputs, by the SNLP model configured only using the intermediate language, a plurality of sets of intermediate input vectors in the intermediate language; [Zhang, Figure 2, 206; the input corpus is in vector form: “[0048] Given this linguistic and semantic redundancy in the salient keyword units included in the global keyword unit dictionary, salient keyword units included in the global keyword unit dictionary may be clustered 206 by the system based on linguistic and/or semantic similarity. …
Keyword units may also be converted by the system to word vectors in an embedding space and word vectors that are close in distance in the embedding space may be clustered together by the system. …”]

processing, via an Application Programming Interface (API), the plurality of sets of intermediate input vectors using a predefined mechanism, wherein the API is associated with a domain from a plurality of domains; and [Zhang, “[0038] While the Google Cloud Translation API is used in an implementation, another natural language machine translator may be used that offers a programmatic API to automatically convert one natural language text body into another, while aiming to preserve the meaning of the input text and produce fluent text in the output language. …”]

retrieving a predetermined response from the API based on processing the plurality of sets of intermediate input vectors, wherein the predefined mechanism comprises an elastic stretching mechanism, and [Zhang: the formation of the “intermediate language to target language dictionary 306” in Figure 3 means that the API generates the “response” / translation by using the dictionary of previously processed intermediate language vectors. “[0037] To facilitate construction 204 of the global keyword unit dictionary, those survey comments of the set of global multi-language survey comments that are not already in the intermediate language may be translated by the system to the intermediate language using a natural language machine translator. Various different natural language machine translators may be used, and no particular natural language machine translator is required. One example of a suitable natural language machine translator is the Cloud Translation API offered by Google, Inc. of Mountain View, Calif.
More information on the Google Cloud Translation API is available on the internet at /translate in the cloud.google.com domain, the entire contents of which is hereby incorporated by reference.”]

wherein the elastic stretching mechanism comprises: generating for the plurality of sets of intermediate input vectors, a plurality of sets of input intent maps in the intermediate language, wherein each of the plurality of sets of input intent maps is associated with one of the plurality of translated user inputs; [Zhang, Figure 3 shows the construction/generation of an intermediate-language-to-target-language dictionary, which teaches establishing a correspondence/map between the translated/target language and the intermediate language. See [0024] and [0025]. The “context sensitivity” of Zhang teaches the “intent” domains of this Claim. “[0051] Turning next to FIG. 3, it is process 300 for constructing an intermediate language-to-target language dictionary, according to an implementation of the present invention. The dictionary may be used to map keyword units in the intermediate language to keyword units in the target language in a context-sensitive manner. For example, the dictionary may be used to map keyword units of an intermediate language tag cloud to keyword units that can be included in a corresponding target language tag cloud.
Because of the context-sensitive mapping by the dictionary, the keyword units included in the corresponding target language tag cloud better preserve the meaning of the corresponding keyword units of the intermediate language tag cloud in the context of the survey comments from which the corresponding keyword units are derived.”]

matching each of the plurality of sets of input intent maps in the intermediate language with each of a plurality of pre-stored sets of intent maps in the intermediate language, wherein each of the plurality of pre-stored sets of intent maps is generated from a single predefined training input in the intermediate language and is mapped to a predefined intent and the predetermined response retrieved from the API in the intermediate language; [Zhang, Figure 3: once the dictionary of intermediate to target is generated at 306 and is context/intent dependent, a subsequent input may be mapped to the “prestored sets” in the dictionary. Figure 4, 406: “[0063] … the system mapping the set of source keyword units in the intermediate language to the set of target keyword units in the target language 406 …”]

determining a distance of each of the plurality of sets of input intent maps relative to each of the plurality of pre-stored sets of intent maps; [Zhang, Figure 2, “[0048] … Keyword units may also be converted by the system to word vectors in an embedding space and word vectors that are close in distance in the embedding space may be clustered together by the system. Distance between word vectors in the embedding space may be measured according to a distance measure such as, for example, cosine similarity, or the like. …”]

identifying a pre-stored intent map from the plurality of pre-stored sets of intent maps closest to the plurality of sets of input intent maps; [Zhang, Figure 2: the clusters generated at 206 indicate a particular context/intent.
“[0049] As a result of clustering 206 the salient keyword units of the constructed 204 global keyword unit dictionary, there may be a number of resulting clusters. Each represents a salient concept expressed in the set of global intermediate language survey comments. …”]

translating the predetermined response mapped to the pre-stored intent map into the source language to generate a translated response; and [Zhang, Figure 4, 406: “[0063] … the system mapping the set of source keyword units in the intermediate language to the set of target keyword units in the target language 406 …”]

rendering, to the user, the translated response. [Zhang, Figure 4, 408: “[0063] … the system causing the set of target keyword units to be presented in the target language in a computer graphical user interface 408.”]

Zhang derives the meanings of the keywords based on their context/intent, and the word “map” is not defined in the Claim. However, a reference is added that expressly uses a semantic intent graph to teach the “intent map” of the Claim. Anand teaches:

generating for the plurality of sets of intermediate input vectors, a plurality of sets of input intent maps in the intermediate language, wherein each of the plurality of sets of input intent maps is associated with one of the plurality of translated user inputs; [Anand, Figure 3: the “semantic parser 305” operates on the user “utterance 305” and, based on the “semantic interpretation 308” of the input utterance, context from the “context resolver 311,” and a “graph integrator 307,” generates a “meaning representation graph 312 (MR graph),” which teaches the “intent map” of the Claim. See [0043] and “[0046] The semantic meaning representation graph 312 is incorporated into the united semantic graph 321 which is a contextual graph of the conversational content. In preferred embodiments, the graphs 312 and 321 are merged as described below in the section entitled “Semantic Integration for Conversational Content”.
Relevant information (given a user intent) is integrated by any or several of known types of integration process such as cross-sentence, cross-turn, cross-interlocutor and cross-knowledge-base. …”]

Zhang and Anand both pertain to NLP and the incorporation of context to determine intent, and it would have been obvious to use the semantic graphs of Anand with the system of Zhang, which uses context to determine meaning but does not show a graph of intent. This combination falls under combining prior art elements according to known methods to yield predictable results, or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Zhang and Anand do not expressly teach the use of different APIs associated with different domains. Chappidi teaches:

processing, via an Application Programming Interface (API), the plurality of sets of intermediate input vectors using a predefined mechanism, wherein the API is associated with a domain from a plurality of domains; [Chappidi teaches different APIs for different intent domains: “[0031] FIG. 1 illustrates a block diagram of system 100 according to various examples. In some examples, system 100 implements a digital assistant. The terms “digital assistant,” “virtual assistant,” “intelligent automated assistant,” or “automatic digital assistant” refer to any information processing system that interprets natural language input in spoken and/or textual form to infer user intent, and performs actions based on the inferred user intent.
For example, to act on an inferred user intent, the system performs one or more of the following: identifying a task flow with steps and parameters designed to accomplish the inferred user intent, inputting specific requirements from the inferred user intent into the task flow; executing the task flow by invoking programs, methods, services, APIs, or the like; and generating output responses to the user in an audible (e.g., speech) and/or visual form.”]

Zhang/Anand and Chappidi pertain to NLP and the determination of the intent of a command, and it would have been obvious to use the various APIs of Chappidi, which pertain to different intent domains, with the system of the combination, for an express statement that different domains may have their own API. This combination falls under combining prior art elements according to known methods to yield predictable results, or simple substitution of one known element for another to obtain predictable results. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Regarding Claim 2, Zhang teaches:

2. The method of claim 1, wherein the predefined response comprises at least one of providing a service, or performing an action. [Zhang, Figure 4, 408, generates and presents a translation, which is providing a translation service.]

Regarding Claim 3, Zhang teaches:

3. The method of claim 1, wherein the predefined mechanism comprises at least one of a statistical mechanism, an artificial intelligence (AI) mechanism, and a machine learning (ML) mechanism. [Zhang, “[0038] While the Google Cloud Translation API is used in an implementation, another natural language machine translator may be used that offers a programmatic API to automatically convert one natural language text body into another, while aiming to preserve the meaning of the input text and produce fluent text in the output language.
The natural language machine translator may use a combination of techniques including statistical techniques, deep linguistic analyses, and/or large-scale empirical techniques.”]

Regarding Claim 4, Zhang extracts and translates the keywords in context/intent such that intent/context is included in the translated keyword in the dictionary that is constructed in Figure 3, 306. Anand teaches:

4. The method of claim 1, further comprising: generating the plurality of sets of input intent maps based on the plurality of sets of intermediate input vectors, wherein generating the plurality of sets of input intent maps comprises processing the plurality of sets of intermediate input vectors through at least one of a plurality of intent map transforming algorithms. [Anand, Figure 3: “meaning representation processor 309” to “graph integrator 307” to “united semantic graph 321.” Table 1 in [0072] presents an algorithm for semantic graph integration. “[0072] The integration process for generating a contextual graph for embodiments of the invention is described in Table 1. Let K be the core domain concepts, S be the domain propositions (triples). … Table 1 describes that given a sequence of sentences S and an empty or existing unified semantic graph G, the sentences in S are integrated with G. First, the system identifies the directly overlapped nodes between gi and G(b), updates G accordingly; and then semantically matches gi with the domain knowledge K and updates the unified semantic graph G accordingly.”]

The rationale for combination is as provided for Claim 1. This Claim expands on the concept of semantic/contextual mapping, which is more express in Anand.

Regarding Claim 5, Zhang starts from written comments and does not teach speech recognition. Anand teaches:

5. The method of claim 1, further comprising: converting the verbal input in the source language into a plurality of source textual inputs in the source language using a Speech-to-Text (STT) mechanism.
[Anand, “[0052] User utterances include important contextual information that usually determines the course of conversations between the system and the user. For the purposes of the description, “user utterances” include both spoken utterances interpreted by a speech recognition system and written responses and queries to a conversational system. …”]

The rationale for combination is as provided for Claim 1. In the context of NLP, voice and text are often interchangeable in that voice/speech can be converted to text.

Regarding Claim 6, Zhang teaches:

6. The method of claim 5, wherein each of the plurality of source textual inputs in the source language is translated to the intermediate language to generate the plurality of translated user inputs. [Zhang, Figure 4, “[0063] Process 400 includes the operations of the system receiving a selection of a set of source multi-language survey comments 402, the system determining a set of source keyword units in the intermediate language that are most relevant to the selected set of source multi-language survey comments 404 ….”]

Regarding Claim 7, Zhang teaches:

7. The method of claim 5, wherein the confidence score associated with a translated user input from the plurality of translated user inputs corresponds to at least one of: accuracy of conversion of the verbal input in the source language into a source textual input associated with the translated user input; and accuracy of the translation of the translated user input in the intermediate language. [Zhang addresses the translation confidence: “[0022] … The more often the same translation pair occurs in the sets, the higher the confidence assigned by the system that the particular intermediate language keyword unit of the translation pair is an accurate translation in context of the particular target language keyword unit of the translation pair.
… If the translation pair (“custom-character”, “career”) occurs more than a threshold number of times in the sets, then the system may assign a higher confidence that the accurate translation of the English-language keyword unit “career” is the Chinese-language keyword unit “custom-character.””]

Regarding Claim 8, Zhang teaches:

8. The method of claim 1, wherein the at least one verbal input from the user is in form of a sentence, a phrase, a word, or a phoneme in context. [Zhang, Figure 4, translates “survey comments,” which may be phrases or sentences. “[0035] It should be understood that reference herein to “keyword unit” is intended to encompass a single word as well as word phrases. A word phrase is a group of words that express a concept and that may be used as a unit within a sentence. …” “[0002] … To this end, web-based computing platforms exist to solicit and obtain text comments from employees. These platforms allow the company to present prompts for comments to employees in a web-based user interface. Using the web-based user interface, the employees can provide comments about the company and the employment experience in a free-form text format.”]

Claim 9 is a system claim with limitations corresponding to those of Claim 1 and is rejected under a similar rationale. Additionally, hardware features such as processors, memory, and computer program products are taught by Figure 6 of Zhang.

Claims 10-13 are system claims with limitations corresponding to those of Claims 2-5, respectively, and are rejected under a similar rationale.
Claim 14 is a system claim with limitations corresponding to the limitations of Claim 6 and is rejected under similar rationale. Claim 15 is a system claim with limitations corresponding to the limitations of Claim 7 and is rejected under similar rationale. Claim 16 is a system claim with limitations corresponding to the limitations of Claim 8 and is rejected under similar rationale.

Claim 17 is a computer program product claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Additionally, the hardware features such as processors, memory, and computer program products are taught by Figure 6 of Zhang.

Claim 18 is a computer program product claim with limitations corresponding to the limitations of Claim 2 and is rejected under similar rationale. Claim 19 is a computer program product claim with limitations corresponding to the limitations of Claim 3 and is rejected under similar rationale. Claim 20 is a computer program product claim with limitations corresponding to the limitations of Claim 4 and is rejected under similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARIBA SIRJANI, whose telephone number is (571) 270-1499. The examiner can normally be reached 9 to 5, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Fariba Sirjani/
Primary Examiner, Art Unit 2659

Prosecution Timeline

Jun 21, 2023
Application Filed
May 28, 2025
Non-Final Rejection — §103, §DP
Sep 24, 2025
Response Filed
Oct 24, 2025
Final Rejection — §103, §DP (current)
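As a quick check on the pacing above, the elapsed time at each docket event can be derived from the listed dates. A minimal sketch; the month arithmetic deliberately ignores day-of-month, so figures are approximate:

```python
from datetime import date

# Dates from the prosecution timeline above
filed = date(2023, 6, 21)
events = {
    "Non-Final Rejection": date(2025, 5, 28),
    "Response Filed": date(2025, 9, 24),
    "Final Rejection": date(2025, 10, 24),
}

def months_since(start: date, end: date) -> int:
    """Whole months between two dates, ignoring day-of-month."""
    return (end.year - start.year) * 12 + (end.month - start.month)

for name, d in events.items():
    print(f"{name}: ~{months_since(filed, d)} months from filing")
```

The final rejection lands roughly 28 months into prosecution, a few months short of this examiner's 2y 10m (about 34-month) median time to grant.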

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603099
SELF-ADJUSTING ASSISTANT LLMS ENABLING ROBUST INTERACTION WITH BUSINESS LLMS
2y 5m to grant Granted Apr 14, 2026
Patent 12579482
Schema-Guided Response Generation
2y 5m to grant Granted Mar 17, 2026
Patent 12572737
GENERATIVE THOUGHT STARTERS
2y 5m to grant Granted Mar 10, 2026
Patent 12537013
AUDIO-VISUAL SPEECH RECOGNITION CONTROL FOR WEARABLE DEVICES
2y 5m to grant Granted Jan 27, 2026
Patent 12492008
Cockpit Voice Recorder Decoder
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+31.0%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 547 resolved cases by this examiner. Grant probability derived from career allow rate.
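The headline figures can be reproduced from the raw counts shown above. A minimal sketch; the assumption that the with-interview figure is the base rate plus the reported lift, capped at a 99% display maximum, is mine and is not stated on the page:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

base = allow_rate(414, 547)              # ~75.7%, displayed as 76%
lift = 31.0                              # reported interview lift, in percentage points
with_interview = min(base + lift, 99.0)  # hypothetical 99% display cap

print(round(base), round(with_interview))  # 76 99
```

The uncapped sum (about 107%) exceeds 100%, which is why some cap or diminishing-returns adjustment must be at work in the displayed 99% figure.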
