Prosecution Insights
Last updated: April 19, 2026
Application No. 18/528,091

GRAMMATICAL ERROR DETECTION UTILIZING LARGE LANGUAGE MODELS

Final Rejection under §103

Filed: Dec 04, 2023
Examiner: ADESANYA, OLUJIMI A
Art Unit: 2658
Tech Center: 2600 (Communications)
Assignee: Google LLC
OA Round: 2 (Final)

Grant Probability: 66% (Favorable)
Predicted OA Rounds: 3-4
Estimated Time to Grant: 3y 6m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 66% (above average; 430 granted / 655 resolved; +3.6% vs TC avg)
Interview Lift: +25.5% (strong; based on resolved cases with interview vs. without)
Avg Prosecution: 3y 6m; 35 applications currently pending
Career History: 690 total applications across all art units

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 40.6% (+0.6% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 655 resolved cases.
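As a sanity check, the headline allowance figures can be reproduced from the raw counts above. Treating the Tech Center baseline as the career rate minus the stated +3.6% delta is an assumption in this sketch:

```python
# Cross-check the dashboard's career allowance figures from the raw counts.
granted, resolved = 430, 655

career_allow_rate = granted / resolved   # ~65.6%, displayed as 66%
tc_average = career_allow_rate - 0.036   # "+3.6% vs TC avg" implies a TC average near 62%

print(f"career allow rate:  {career_allow_rate:.1%}")
print(f"implied TC average: {tc_average:.1%}")
```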

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Regarding non-provisional reference Shevchenko (US 2025/0053735 A1), Applicant argues that Shevchenko’s provisional application fails to provide support for para. [0316]; [0349]; [0353] and [0374] of the non-provisional reference Shevchenko and, as such, argues that a prima facie case is not established in the rejection of the claims (Arguments, pg. 10-11). Examiner respectfully disagrees, as there is no requirement that a provisional application be completely identical to its corresponding non-provisional application – “exact terms need not be used in haec verba to satisfy the written description requirement of 35 U.S.C. 112(a)”, see Eiselstein v. Frank, 52 F.3d 1035, 1038, 34 USPQ2d 1467, 1470 (Fed. Cir. 1995); In re Wertheim, 541 F.2d 257, 265, 191 USPQ 90, 98 (CCPA 1976), and “it is unnecessary to spell out every detail of the invention in the specification; only enough must be included to convince a person of skill in the art that the inventor possessed the invention.” LizardTech, Inc. v. Earth Res. Mapping, Inc., 424 F.3d 1336, 1345 (Fed. Cir. 2005).

The argued portions of non-provisional Shevchenko are as follows:

[0316] Various aspects of the disclosure describe computer-implemented methods that can be used to obtain results of a prompt or a query from a client device to a large language model (LLM). For generality, the prompt of a query from the client device is referred to here as request data. Furthermore, the input for an LLM is generally referred to herein as a prompt, while the context data related to such a prompt is referred to as prompt-associated context.

[0349] If the request data is associated with a query in the form of a template with placeholders, generating the prompt can include filling up the template with the prompt-associated context.
This can be achieved, for example, by using a suitable large language model (LLM) to identify which placeholder needs to be replaced with which context from the prompt-associated context. This process might involve using embeddings for placeholders and comparing these embeddings with those associated with different text sections of the prompt-associated context. [0353] At step 3320, method 3300 includes programmatically communicating the prompt to an LLM. The LLM can be any suitable model, like ChatGPT. In some cases, the LLM may be a sequential transformer model, which uses a series of transformer layers to process input text. Hierarchical transformer models can also be used. Furthermore, bidirectional contextual models can be used. These models can enhance the capability of LLMs by processing text in both forward and backward directions. Further attention-based encoder-decoder models can be used. Such LLMs include an encoder that processes the input text and a decoder that generates the output text. The encoder uses attention mechanisms to capture important features from the input, while the decoder constructs the output word by word. In some cases, sparse transformer models, which use sparse attention mechanisms to focus on the most relevant parts of the input text, rather than attending to every word equally can be used. Additionally, a mixture of experts' models can be used. These models employ a combination of specialized sub-models, each focusing on different aspects of language understanding and generation. [0374] FIG. 34A shows a workflow process 3401. Process 3401 begins with Client 3411 initiating a session and sending a request containing request data to CAPI 3413. CAPI 3413 retrieves context information related to the request from KE 3415, which manages and stores contextual data. This context information, referred to as ctx_0 and accompanied by info_0, is then utilized to perform specific actions. 
The information info_0 can be used for actions such as rewriting a selected text or retrieving additional context or related information from various sources. The ctx_0 is stored for further use when generating a final prompt for the LLM. Based on the specific actions (for example, rewriting selected text, where a portion of text containing request data is selected to be rewritten), additional context such as ctx_1 and associated information info_1 can be retrieved. Multiple iterations of retrieving content may be performed in response to multiple actions. For example, FIG. 34A shows that ctx_0 and ctx_1 can be retrieved due to various actions and combined into ctx_2. Subsequently, ctx_2 and the request data are communicated to Composer 3417, which organizes the received data into a prompt. This prompt is then provided to LLM 3419, and the results from LLM 3419 are delivered back to Client 3411. As provided above by the citations, 1. Para. [0316] refers to receiving a result from a LLM as a result of input of a query/prompt/request data into the LLM. Para. [0277]-[0278] of the provisional application describes providing input to the LLM and receiving responses from the LLM. 2. Para. [0349] refers to generating a prompt using templates and associated context information. Para. [0277] and para. [0197] of the provisional application describes retrieving templates used in generating a prompt for an LLM where the templates include text and placeholders 3. Para. [0353] refers to communicating an input/prompt into a LLM that could be a ChatGPT model. Para. [0277] of the provisional application describes providing a user query and context to an LLM and para. [0276] of the provisional application describes the LLM as including ChatGPT 4. Para. [0374] refers to communicating a prompt/query/request to a LLM and receiving results from the LLM that are delivered to a client device. Para. 
[0278] of the provisional application describes transmitting a final response of an LLM to a client device. Therefore, the portions are adequately supported by the provisional application.

Regarding the 35 U.S.C. 102 rejection of the claims with reference Shevchenko, Applicant argues that para. [0264] of Shevchenko describes identifying a grammatical error and issuing a warning and displaying the corrected version of a provided phrase so the user can correct themselves, but does not disclose the progressive steps of rendering the "feedback output" as "an audible beep or a visual flash" responsive to receiving the "NL based input" when "the NL based input is grammatically incorrect" and then subsequently rendering the "additional feedback output" responsive to receiving the "additional NL based input" that is received "as an attempt by the user to correct grammar of the NL based input" when "the additional NL based output is" also "grammatically incorrect", and, as such, argues that Shevchenko fails to disclose limitations “the feedback output comprising one or more of: an audible beep rendered via one or more speakers of the client device or the additional client device, or a visual flash rendered via one or more LEDs of the client device or the additional client device; and subsequent to causing the feedback output to be rendered: receiving an additional NL based input as an attempt by the user to correct grammar of the NL based input; determining whether the additional NL based input is grammatically incorrect; and responsive to determining that the additional NL based input is grammatically incorrect, causing an additional feedback output to be rendered at the client device or the additional client device, the additional feedback output to correct grammar of the additional NL based input that is grammatically incorrect” (Arguments, pg. 11-12). Examiner respectfully disagrees.
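The template-filling mechanism quoted above from para. [0349] (matching placeholders to sections of prompt-associated context by comparing embeddings) can be sketched roughly as follows. The character-frequency embedding, the cosine similarity, and the fill_template helper are illustrative assumptions, not Shevchenko's actual implementation:

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: a character-frequency vector. A real system would use
    # a learned text-embedding model; this one is purely illustrative.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def fill_template(template: str, placeholders: list[str], sections: list[str]) -> str:
    # For each placeholder, pick the context section with the most similar
    # embedding and substitute it into the template.
    prompt = template
    for ph in placeholders:
        best = max(sections, key=lambda s: cosine(embed(ph), embed(s)))
        prompt = prompt.replace("{" + ph + "}", best)
    return prompt

prompt = fill_template(
    "Rewrite the following text: {selected_text}",
    ["selected_text"],
    ["The quick brown fox jump over the lazy dog.", "Session id: 12345."],
)
```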
Shevchenko discloses its AR/VR facility intercepting a voice communication of a user and analyzing the communication for correctness, clarity, and effectiveness, while alerting the user in case of any significant issues, and suggesting improvements (para. [0261]), where analyzing involves identifying a grammatical error in the user's speech, and alerting the user involves issuing a warning and displaying a suggested corrected version of the phrase so that the user could repeat and correct themselves, and where the warnings may be a visual or an audio alert with a short communication from AIA (para. [0264]), and where responsive to the warning/alerting and suggestions, the user decides to re-record the communication, the AR/VR facility again intercepts the new recording, and re-analyzes (i.e., analyzing the communication for correctness, clarity, effectiveness and grammatical error) the communication while providing feedback on the new version of the communication (para. [0261]), where for repeated takes of the communication, the system may compare the analysis of the previous take and providing incremental feedback and providing general feedback while suggesting specific modifications to the communication (para. 
[0266], implying additional incorrect grammar input in response to initial warnings/alerts, and, as such, Shevchenko discloses limitations “responsive to determining that the NL based input is grammatically incorrect, causing a feedback output to be rendered at the client device or an additional client device, the feedback output to indicate to a user that the NL based input is grammatically incorrect, and the feedback output comprising one or more of: an audible alert rendered via one or more speakers of the client device or the additional client device, or a visual alert rendered via the client device or the additional client device, subsequent to causing the feedback output to be rendered: receiving an additional NL based input as an attempt by the user to correct grammar of the NL based input, determining whether the additional NL based input is grammatically incorrect, responsive to determining that the additional NL based input is grammatically incorrect, causing an additional feedback output to be rendered at the client device or the additional client device, the additional feedback output to correct grammar of the additional NL based input that is grammatically incorrect”. What Shevchenko does not explicitly disclose is its visual or audible alert/warning feedback as including an audible beep or a visual flash rendered via one or more LEDs.

Applicant’s arguments with respect to claims 1, 18 and 21 and Shevchenko not disclosing limitation “the feedback output comprising one or more of: an audible beep rendered via one or more speakers of the client device or the additional client device, or a visual flash rendered via one or more LEDs of the client device or the additional client device” (Arguments, pg. 11-12) have been considered but are moot in light of new grounds of rejection with reference Cyr as provided in the rejection below.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C.
102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 1, 3, 4, 9, 10, 14, 15, 17-19, 21, 23-26 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Shevchenko et al. US 2025/0053735 A1 (“Shevchenko”) in view of Cyr et al. US 2022/0036878 A1 (“Cyr”).

Per claim 1, Shevchenko discloses a method implemented by one or more processors, the method comprising: receiving natural language (NL) based input associated with a client device (FIG. 2 illustrates an AIA communication system model, where a user generates a communication as an input to the AIA, such as a written electronic text (for example, email, text message, document, and the like), voice communication (for example, voice input to a telecommunications system) …, para. [0178]); generating, based on the NL based input, a structured large language model (LLM) query, wherein the structured LLM query comprises the NL based input and an LLM prompt to cause an LLM to generate an LLM response that includes an indication of whether the NL based input is grammatically incorrect (para.
[0042]; the AIA 400 may provide the ability to augment real-time conversations being carried out through augmented reality (AR) or virtual reality (VR) platforms through an AIA AR/VR communication facility …, para. [0260]; The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words that may lead to a misunderstanding, and suggest the correct pronunciation (for example, with generated voice or text notation); identify a grammatical error in the user's speech and issue a warning …, para. [0264]; para. [0277]-[0278]; para. [0303]; The composer instructions 2806 are further programmed to call the LLM/GPT API 2810 and send an engineered prompt, user query, and context, [0304]; para. [0430]); generating the LLM response based on causing the structured LLM query to be processed using the LLM, the LLM response including the indication of whether the NL based input is grammatically incorrect (The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words that may lead to a misunderstanding, and suggest the correct pronunciation (for example, with generated voice or text notation); identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase …, para. [0264]; para. [0303]-[0304]; In response to receiving a response from the LLM, the composer instructions 2806 are programmed to format, filter, and/or return the response to the CAPI 2804, which can further format or filter the response and transmit a final response to the client process …, para. [0305]; para. 
[0430]); determining, based on processing the indication included in the LLM response, whether the NL based input is grammatically incorrect (identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase …, para. [0264] …, para. [0264]; para. [0430]); responsive to determining that the NL based input is grammatically incorrect, causing a feedback output to be rendered at the client device or an additional client device, the feedback output to indicate to a user that the NL based input is grammatically incorrect, and the feedback output comprising one or more of: an audible alert rendered via one or more speakers of the client device or the additional client device, or a visual alert rendered via the client device or the additional client device (identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase …. The warnings may be a visual or an audio alert with a short communication from AIA …, para. [0264]), subsequent to causing the feedback output to be rendered: receiving an additional NL based input as an attempt by the user to correct grammar of the NL based input (the AR/VR communication facility may intercept a voice or a video communication the user has recorded … extract the content (for example, text and non-verbal signals from the voice tone, pauses, body language, and the like), analyze correctness, clarity, and effectiveness, alert the user in case of any significant issues, and suggest improvements. 
The AR/VR communication facility may then provide general feedback on the communication, such as suggesting modifications or rewrites to the communication … At this point the user may decide to re-record the communication … If the user re-records the communication … If the user re-records the communication, the AR/VR communication facility may intercept the new recording, re-analyze, compare to the previous communication, and provide feedback on the new version …, para. [0261]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. [0264], receiving re-recording in response to alerting and feedback/suggestion as receiving additional NL input); determining whether the additional NL based input is grammatically incorrect (the AR/VR communication facility may intercept a voice or a video communication the user has recorded … extract the content (for example, text and non-verbal signals from the voice tone, pauses, body language, and the like), analyze correctness, clarity, and effectiveness, alert the user in case of any significant issues, and suggest improvements. The AR/VR communication facility may then provide general feedback on the communication, such as suggesting modifications or rewrites to the communication … At this point the user may decide to re-record the communication … If the user re-records the communication … If the user re-records the communication, the AR/VR communication facility may intercept the new recording, re-analyze, compare to the previous communication, and provide feedback on the new version …, para. [0261]; para. [0261]; para. [0264]; para. 
[0266], re-analyzing re-recording after alert and feedback/suggestions as subsequently analyzing additional NL input for correctness, clarity, and effectiveness (i.e., grammar)); and responsive to determining that the additional NL based input is grammatically incorrect, causing an additional feedback output to be rendered at the client device or the additional client device, the additional feedback output to correct grammar of the additional NL based input that is grammatically incorrect (At this point the user may decide to re-record the communication or discard the feedback/suggestions. For instance, if the user discards the feedback (such as repeatedly) … If the user re-records the communication, the AR/VR communication facility may intercept the new recording, re-analyze, compare to the previous communication, and provide feedback on the new version …, para. [0261]; The AR/VR communication facility may analyze the user's pronunciation … identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. [0264]; For repeated takes on the message, the system may compare the analysis of the previous take and providing incremental feedback 1310, providing general feedback 1312, and suggesting specific modifications to the message …, para. 
[0266], analyzing the user’s communication as involving issuing a warning in case of incorrectly pronounced words, re-analyzing repeated takes of the communication and suggesting specific modifications to the repeated takes on the message/communication as implying identifying grammatically incorrect additional input) Shevchenko does not explicitly disclose the feedback output comprising one or more of: an audible beep rendered via one or more speakers of the client device or the additional client device, or a visual flash rendered via one or more LEDs of the client device or the additional client device However, this feature is taught by Cyr (para. [0048]; user 104 may be provided with audible feedback, such as a tone, beep or other sound indicating that the individual has made an error in pronunciation, grammar, etc., …, para. [0103]) It would have been obvious to one of ordinary skill in the art before the effective filing of the invention to combine the teachings of Cyr with the method of Shevchenko in arriving at the missing features of Shevchenko, because such combination would have resulted in helping a user improve his or her communication (Cyr, para. [0134]). Per claim 3, Shevchenko in view of Cyr discloses the method of claim 1, Shevchenko discloses determining whether to generate the structured LLM query, and wherein generating the structured LLM query is performed in response to determining to generate the structured LLM query (para. [0202]; The user 404 may provide an initial communication input 402 (for example, draft a message, upload a document, or just start typing or dictating)…., para. [0209]; para. [0245]; para. [0254]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception …, para. [0264]; The user may also have the option of requesting the AIA 400 to evaluate the text at any point in time…., para. [0268]; para. [0303]-[0305]; para. [0345]; para. [0430]). 
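The claim 1 flow mapped above (structured LLM query, grammar determination, feedback output, then re-checking the user's correction attempt) can be paraphrased as a small control loop. The function names, the dictionary shape of the query, and the stub grammar check are illustrative assumptions, not language from the claims or the references:

```python
def llm_says_incorrect(nl_input: str) -> bool:
    # Stub for the claimed "structured LLM query": wrap the NL input with an
    # instruction prompt and ask an LLM whether it is grammatically incorrect.
    structured_query = {
        "prompt": "Reply 'incorrect' if the following input has a grammar error.",
        "input": nl_input,
    }
    # A real implementation would send structured_query to an LLM and parse the
    # response; a trivial heuristic stands in here.
    return " is be " in f" {structured_query['input']} "

def feedback_loop(attempts):
    # Claim 1's flow: on an incorrect input, render feedback (e.g. an audible
    # beep or an LED flash), then re-check each subsequent correction attempt.
    events = []
    for i, text in enumerate(attempts):
        if not llm_says_incorrect(text):
            events.append(("accepted", text))
            break
        # First failure: alert only; later failures: corrective feedback.
        events.append(("beep_or_flash" if i == 0 else "corrective_feedback", text))
    return events

events = feedback_loop(["this is be wrong", "this still is be wrong", "this is right"])
# events: beep_or_flash, then corrective_feedback, then accepted
```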
Per claim 4, Shevchenko in view of Cyr discloses the method of claim 3, Shevchenko discloses wherein determining whether to generate the structured LLM query is based on receiving an indication of a user input to initiate incorrect grammar detection (para. [0202]; The user 404 may provide an initial communication input 402 (for example, draft a message, upload a document, or just start typing or dictating)…., para. [0209]; para. [0245]; para. [0254]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception …, para. [0264]; The user may also have the option of requesting the AIA 400 to evaluate the text at any point in time…., para. [0268]; para. [0303]-[0305]; para. [0345]; para. [0430]). Per claim 9, Shevchenko in view of Cyr discloses the method of claim 1, Shevchenko discloses wherein determining whether the additional NL based input is grammatically incorrect comprises: generating, based on the additional NL based input, an additional structured LLM query (At this point the user may decide to re-record the communication … If the user re-records the communication …, para. [0261]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. [0264]; para. [0303]-[0305]; para. [0316]; para. [0349]; para. [0353]; para. 
[0430]) generating an additional LLM response based on causing the additional structured LLM query to be processed using the LLM or a different LLM, wherein the additional LLM response includes an indication of whether the additional NL based input is grammatically incorrect (The AR/VR communication facility may then capture the context of the communication … and analyze verbal and non-verbal content for correctness … The AR/VR communication facility may then provide general feedback on the communication, such as suggesting modifications or rewrites … At this point the user may decide to re-record the communication … If the user re-records the communication …, para. [0261]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. [0264]; para. [0303]-[0305]; para. [0316]; para. [0349]; para. [0353]; para. [0430]); and determining, based on processing the additional LLM response, whether the additional NL based input is grammatically incorrect (para. [0261]; para. [0264]; para. [0303]-[0305]). Per claim 10, Shevchenko in view of Cyr discloses the method of claim 1, Shevchenko discloses wherein determining whether the additional NL based input is grammatically incorrect comprises performing a comparison between at least a portion of the additional NL based input and at least a portion of the grammatically correct version of the NL based input (The AR/VR communication facility may then capture the context of the communication … and analyze verbal and non-verbal content for correctness … The AR/VR communication facility may then provide general feedback on the communication, such as suggesting modifications or rewrites to the communication (or parts of the communication)…. 
At this point the user may decide to re-record the communication … If the user re-records the communication, the AR/VR communication facility may intercept the new recording, re-analyze, compare to the previous communication …, para. [0261]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. [0264]; para. [0266], re-recording as additional input, identifying error as based on comparison); wherein the additional NL based input is determined to be grammatically incorrect based on a result of the comparison (para. [0261]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. [0264]; para. [0266]). Per claim 14, Shevchenko in view of Cyr discloses the method of claim 1, Shevchenko discloses: wherein: the NL based input comprises a query for information (para. [0178]; para. [0259]-[0260]; para. [0316]); the LLM response comprises content responsive to the query for information (para. [0178]; para. [0259]-[0260]; para. [0316]-[0322]; para. [0430]); and the feedback output is caused to be rendered in lieu of rendering the content responsive to the query for information (para. [0259]-[0260]). 
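Claim 10's comparison between the additional NL based input and the grammatically correct version of the original input might be sketched as a similarity check. The difflib ratio and the 0.95 threshold are illustrative assumptions, not anything disclosed by Shevchenko:

```python
import difflib

def retry_matches_correction(additional_input: str, corrected: str,
                             threshold: float = 0.95) -> bool:
    # Claim 10's comparison step: judge the user's retry against the previously
    # suggested corrected version; similarity below the threshold is treated as
    # still grammatically incorrect.
    ratio = difflib.SequenceMatcher(None, additional_input.lower(),
                                    corrected.lower()).ratio()
    return ratio >= threshold

corrected = "She has seen the results."
ok = retry_matches_correction("She has seen the results.", corrected)    # exact retry
bad = retry_matches_correction("She have seen the results.", corrected)  # error remains
```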
Per claim 15, Shevchenko in view of Cyr discloses the method of claim 1, Shevchenko discloses: wherein the indication in the LLM response is indicative of at least one of: a relative importance of a grammatical error identified in the NL based input; a location of a grammatical error identified in the NL based input; or a type of grammatical error identified in the NL based input (grammatical errors (for example, sentence fragments, run-on sentences, subject-verb mismatch, repeated words, missing word errors, all-caps words, abbreviated words, letter inversions …, para. [0189]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception …, para. [0264]) Per claim 17, Shevchenko in view of Cyr discloses the method of claim 1, Shevchenko discloses wherein causing the feedback output to be rendered at the client device or the additional client device comprises transmitting data to the client device or the additional device that is operable for causing the client device or the additional device to render the feedback output (para.[0061]; The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words…. The warnings may be a visual or an audio alert with a short communication from AIA …, para. [0264]; para. [0305]). Per claim 18, Shevchenko discloses method implemented by one or more processors, the method comprising: receiving natural language (NL) based input associated with a client device (FIG. 2 illustrates an AIA communication system model, where a user generates a communication as an input to the AIA , such as a written electronic text (for example, email, text message, document, and the like), voice communication (for example, voice input to a telecommunications system) …, para. 
[0178]); determining whether to generate a structured large language model (LLM) query, the structured LLM query comprising the NL based input and an LLM prompt to cause an LLM to generate an LLM response that includes an indication of whether the NL based input is grammatically incorrect (para. [0042]; the AIA 400 may provide the ability to augment real-time conversations being carried out through augmented reality (AR) or virtual reality (VR) platforms through an AIA AR/VR communication facility …, para. [0260]; The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words that may lead to a misunderstanding, and suggest the correct pronunciation (for example, with generated voice or text notation); identify a grammatical error in the user's speech and issue a warning …, para. [0264]; para. [0277]-[0278]; para. [0303]; The composer instructions 2806 are further programmed to call the LLM/GPT API 2810 and send an engineered prompt, user query, and context, [0304]; In response to receiving a response from the LLM, the composer instructions 2806 are programmed to format, filter, and/or return the response to the CAPI 2804, which can further format or filter the response and transmit a final response to the client process …, para. [0305]; para. [0430]); in response to determining to generate the structured LLM query: generating the structured large language model (LLM) query based on the NL based input and the LLM prompt (para. [0042]; the AIA 400 may provide the ability to augment real-time conversations being carried out through augmented reality (AR) or virtual reality (VR) platforms through an AIA AR/VR communication facility …, para. 
[0260]; The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words that may lead to a misunderstanding, and suggest the correct pronunciation (for example, with generated voice or text notation); identify a grammatical error in the user's speech and issue a warning …, para. [0264]; para. [0277]-[0278]; para. [0303]-[0305]; para. [0430]); and generating the LLM response based on processing the structured LLM query using the LLM, wherein the LLM response includes the indication of whether the NL based input is grammatically incorrect (The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words that may lead to a misunderstanding, and suggest the correct pronunciation (for example, with generated voice or text notation); identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase …, para. [0264]; para. [0303]-[0304]; In response to receiving a response from the LLM, the composer instructions 2806 are programmed to format, filter, and/or return the response to the CAPI 2804, which can further format or filter the response and transmit a final response to the client process …, para. [0305]; para. [0430]); determining, based on processing the LLM response, whether the NL based input is grammatically incorrect (identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase …, para. [0264]; para. 
[0430]); responsive to determining that the NL based input is grammatically incorrect, causing a feedback output to be rendered at the client device or an additional client device, the feedback output to indicate to a user that the NL based input is grammatically incorrect and the feedback output comprising one or more of: an audible alert rendered via one or more speakers of the client device or the additional client device, or a visual alert rendered via the client device or the additional client device (identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase …. The warnings may be a visual or an audio alert with a short communication from AIA …, para. [0264]), subsequent to causing the feedback output to be rendered: receiving an additional NL based input as an attempt by the user to correct grammar of the NL based input (The AR/VR communication facility may then capture the context of the communication … and analyze verbal and non-verbal content for correctness … The AR/VR communication facility may then provide general feedback on the communication, such as suggesting modifications or rewrites … At this point the user may decide to re-record the communication … If the user re-records the communication …, para. [0261]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. 
[0264]); determining whether the additional NL based input is grammatically incorrect (The AR/VR communication facility may then capture the context of the communication … and analyze verbal and non-verbal content for correctness … The AR/VR communication facility may then provide general feedback on the communication, such as suggesting modifications or rewrites to the communication (or parts of the communication)…. At this point the user may decide to re-record the communication … If the user re-records the communication, the AR/VR communication facility may intercept the new recording, re-analyze, compare to the previous communication …, para. [0261]; para. [0264]; para. [0266]); and responsive to determining that the additional NL based input is grammatically incorrect, cause an additional feedback output to be rendered at the client device or the additional client device, the additional feedback output to correct grammar of the additional NL based input that is grammatically incorrect (At this point the user may decide to re-record the communication or discard the feedback/suggestions. For instance, if the user discards the feedback (such as repeatedly) … If the user re-records the communication, the AR/VR communication facility may intercept the new recording, re-analyze, compare to the previous communication, and provide feedback on the new version …, para. [0261]; The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words that may lead to a misunderstanding, and suggest the correct pronunciation (for example, with generated voice or text notation); identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. 
[0264]; For repeated takes on the message, the system may compare the analysis of the previous take and providing incremental feedback 1310, providing general feedback 1312, and suggesting specific modifications to the message …, para. [0266], analyzing the user’s communication as involving issuing a warning in case of incorrectly pronounced words, re-analyzing re-recorded communication and suggesting specific modifications to repeated takes on the message/communication as suggesting the limitation). Shevchenko does not explicitly disclose the feedback output comprising one or more of: an audible beep rendered via one or more speakers of the client device or the additional client device, or a visual flash rendered via one or more LEDs of the client device or the additional client device. However, this feature is taught by Cyr (para. [0048]; user 104 may be provided with audible feedback, such as a tone, beep or other sound indicating that the individual has made an error in pronunciation, grammar, etc., …, para. [0103]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cyr with the method of Shevchenko in arriving at the missing features of Shevchenko, because such a combination would have resulted in helping a user improve his or her communication (Cyr, para. [0134]).

Per claim 19, Shevchenko in view of Cyr discloses the method of claim 18. Shevchenko discloses wherein determining whether to generate a structured large language model (LLM) query is based on receiving an indication of a user input to initiate incorrect grammar detection (para. [0202]; The user 404 may provide an initial communication input 402 (for example, draft a message, upload a document, or just start typing or dictating)…., para. [0209]; para. [0245]; para. [0254]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception …, para.
[0264]; The user may also have the option of requesting the AIA 400 to evaluate the text at any point in time…., para. [0268]; para. [0345]).

Per claim 21, Shevchenko discloses a system comprising: at least one processor (para. [0283]); and memory storing instructions that, when executed by the at least one processor, cause the at least one processor (para. [0283]) to be operable to: receive natural language (NL) based input associated with a client device (FIG. 2 illustrates an AIA communication system model, where a user generates a communication as an input to the AIA, such as a written electronic text (for example, email, text message, document, and the like), voice communication (for example, voice input to a telecommunications system) …, para. [0178]); generate, based on the NL based input, a structured large language model (LLM) query, wherein the structured LLM query comprises the NL based input and an LLM prompt to cause an LLM to generate an LLM response that includes an indication of whether the NL based input is grammatically incorrect (para. [0042]; the AIA 400 may provide the ability to augment real-time conversations being carried out through augmented reality (AR) or virtual reality (VR) platforms through an AIA AR/VR communication facility …, para. [0260]; The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words that may lead to a misunderstanding, and suggest the correct pronunciation (for example, with generated voice or text notation); identify a grammatical error in the user's speech and issue a warning …, para. [0264]; para. [0277]-[0278]; para. [0303]-[0304]; para.
[0430]); generate the LLM response based on causing the structured LLM query to be processed using the LLM, the LLM response including the indication of whether the NL based input is grammatically incorrect (The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words that may lead to a misunderstanding, and suggest the correct pronunciation (for example, with generated voice or text notation); identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase …, para. [0264]; para. [0303]-[0304]; In response to receiving a response from the LLM, the composer instructions 2806 are programmed to format, filter, and/or return the response to the CAPI 2804, which can further format or filter the response and transmit a final response to the client process …, para. [0305]; para. [0430]); determine, based on processing the indication included in the LLM response, whether the NL based input is grammatically incorrect (identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase …, para. [0264]); and responsive to determining that the NL based input is grammatically incorrect, cause a feedback output to be rendered at the client device or an additional client device, the feedback output to indicate to a user that the NL based input is grammatically incorrect, and the feedback output comprising one or more of: an audible alert rendered via one or more speakers of the client device or the additional client device, or a visual alert rendered via the client device or the additional client device (identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase …. 
The warnings may be a visual or an audio alert with a short communication from AIA …, para. [0264]), subsequent to causing the feedback output to be rendered: receive an additional NL based input as an attempt by the user to correct grammar of the NL based input (The AR/VR communication facility may then capture the context of the communication … and analyze verbal and non-verbal content for correctness … The AR/VR communication facility may then provide general feedback on the communication, such as suggesting modifications or rewrites … At this point the user may decide to re-record the communication … If the user re-records the communication …, para. [0261]; identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. [0264]); determine whether the additional NL based input is grammatically incorrect (The AR/VR communication facility may then capture the context of the communication … and analyze verbal and non-verbal content for correctness … The AR/VR communication facility may then provide general feedback on the communication, such as suggesting modifications or rewrites to the communication (or parts of the communication)…. At this point the user may decide to re-record the communication … If the user re-records the communication, the AR/VR communication facility may intercept the new recording, re-analyze, compare to the previous communication …, para. [0261]; para. [0264]; para. 
[0266]); and responsive to determining that the additional NL based input is grammatically incorrect, cause an additional feedback output to be rendered at the client device or the additional client device, the additional feedback output to correct grammar of the additional NL based input that is grammatically incorrect (At this point the user may decide to re-record the communication or discard the feedback/suggestions. For instance, if the user discards the feedback (such as repeatedly) … If the user re-records the communication, the AR/VR communication facility may intercept the new recording, re-analyze, compare to the previous communication, and provide feedback on the new version …, para. [0261]; The AR/VR communication facility may analyze the user's pronunciation and issue a warning in case of incorrectly pronounced words that may lead to a misunderstanding, and suggest the correct pronunciation (for example, with generated voice or text notation); identify a grammatical error in the user's speech and issue a warning when it may lead to miscommunication or bad perception, and display a corrected version of the phrase so that the user could repeat and correct themselves …, para. [0264]; For repeated takes on the message, the system may compare the analysis of the previous take and providing incremental feedback 1310, providing general feedback 1312, and suggesting specific modifications to the message …, para. 
[0266], analyzing the user’s communication as involving issuing a warning in case of incorrectly pronounced words, re-analyzing re-recorded communication and suggesting specific modifications to repeated takes on the message/communication as suggesting the limitation). Shevchenko does not explicitly disclose the feedback output comprising one or more of: an audible beep rendered via one or more speakers of the client device or the additional client device, or a visual flash rendered via one or more LEDs of the client device or the additional client device. However, this feature is taught by Cyr (para. [0048]; user 104 may be provided with audible feedback, such as a tone, beep or other sound indicating that the individual has made an error in pronunciation, grammar, etc., …, para. [0103]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Cyr with the system of Shevchenko in arriving at the missing features of Shevchenko, because such a combination would have resulted in helping a user improve his or her communication (Cyr, para. [0134]).

Per claim 23, Shevchenko in view of Cyr discloses the system of claim 21. System claim 23 and method claim 3 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 23 is similarly rejected under the same rationale as applied above with respect to claim 3.

Per claim 24, Shevchenko in view of Cyr discloses the system of claim 23. System claim 24 and method claim 4 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 24 is similarly rejected under the same rationale as applied above with respect to claim 4.
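The claimed flow mapped above, in which a structured LLM query is built from the NL based input plus an LLM prompt, the LLM response carries an indication of grammatical incorrectness, and an audible or visual alert is rendered on a positive determination, can be sketched in Python. This is an illustrative sketch only, not the applicant's or either reference's implementation; every name here (`GRAMMAR_PROMPT`, `build_structured_query`, the `GRAMMAR_ERROR:` response convention) is a hypothetical stand-in.

```python
# Hypothetical sketch of the claimed grammar-detection flow; not drawn from
# Shevchenko, Cyr, or the application itself. All names and the response
# format are assumptions for illustration.

# The "LLM prompt" component of the structured LLM query.
GRAMMAR_PROMPT = (
    "If the following input contains a grammatical error, respond with "
    "'GRAMMAR_ERROR: <corrected text>'; otherwise respond with 'OK'.\n"
    "Input: "
)

def build_structured_query(nl_input: str) -> str:
    """Combine the LLM prompt and the NL based input into one query."""
    return GRAMMAR_PROMPT + nl_input

def process_llm_response(response: str):
    """Extract the incorrectness indication (and any correction) from the response."""
    if response.startswith("GRAMMAR_ERROR:"):
        return True, response[len("GRAMMAR_ERROR:"):].strip()
    return False, None

def render_feedback(incorrect: bool, correction):
    """Stand-in for the claimed feedback output (audible or visual alert)."""
    if incorrect:
        return f"ALERT: grammar issue; suggested fix: {correction}"
    return None

# Usage with a stubbed LLM response (no model is actually called here):
query = build_structured_query("He go to school yesterday.")
incorrect, fix = process_llm_response("GRAMMAR_ERROR: He went to school yesterday.")
print(render_feedback(incorrect, fix))
```

The same loop would then re-run on any additional NL based input the user supplies after the alert, matching the re-record/re-analyze cycle cited from Shevchenko para. [0261].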
Per claim 25, Shevchenko in view of Cyr discloses the system of claim 21. System claim 25 and method claim 9 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 25 is similarly rejected under the same rationale as applied above with respect to claim 9.

Per claim 26, Shevchenko in view of Cyr discloses the system of claim 21. System claim 26 and method claim 10 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 26 is similarly rejected under the same rationale as applied above with respect to claim 10.

Per claim 28, Shevchenko in view of Cyr discloses the system of claim 21. System claim 28 and method claim 17 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 28 is similarly rejected under the same rationale as applied above with respect to claim 17.

2. Claims 2 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Shevchenko in view of Cyr as applied to claims 1 and 21 above, and further in view of Vleugels et al. US 2024/0274025 A1 (“Vleugels”).

Per claim 2, Shevchenko in view of Cyr discloses the method of claim 1. Shevchenko does not explicitly disclose wherein the LLM prompt is a predetermined prompt that has been stored prior to receiving the NL based input. However, this feature is taught by Vleugels (fig. 1C; fig. 1G; para. [0063]; The prompts generator 123 may use a general-purpose processor to execute program instructions to generate or select one or more prompts 125 used to query content generator 122. A list, array, or database of prompts may be stored in memory …, [0064]; para. [0113]; para.
[0150]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Vleugels with the method of Shevchenko in arriving at the missing features of Shevchenko, because such a combination would have resulted in obtaining relevant prompts used in querying an AI model (Vleugels, para. [0113]; para. [0150]).

Per claim 22, Shevchenko in view of Cyr discloses the system of claim 21. System claim 22 and method claim 2 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 22 is similarly rejected under the same rationale as applied above with respect to claim 2.

3. Claims 16 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Shevchenko in view of Cyr as applied to claims 1 and 21 above, and further in view of Benkreira et al. US 2022/137785 A1 (“Benkreira”).

Per claim 16, Shevchenko in view of Cyr discloses the method of claim 1. Shevchenko does not explicitly disclose wherein the indication in the LLM response comprises an error value, the error value being indicative of a relative importance of at least one grammatical error contained in the NL based input, wherein causing the feedback output to be rendered is based on a magnitude of the error value. However, this feature is suggested by Benkreira (Abstract; para. [0047]; the error characterization model engine 130 may include an error characterization machine learning model to determine an error type classification and an error severity score for each potential error detected by the context identification model engine 120 …, para. [0059]; para.
[0081]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Benkreira with the method of Shevchenko in view of Cyr in arriving at the missing features of Shevchenko in view of Cyr, because such a combination would have resulted in determining the type of error that warrants interrupting a user (Benkreira, para. [0058]).

Per claim 27, Shevchenko in view of Cyr discloses the system of claim 21. System claim 27 and method claim 16 are related as a system and the method of using the same, with each claimed element's function corresponding to the claimed method step. Accordingly, claim 27 is similarly rejected under the same rationale as applied above with respect to claim 16.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892 form. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLUJIMI A ADESANYA, whose telephone number is (571) 270-3307.
The examiner can normally be reached Monday-Friday, 8:30 am-5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Richemond Dorvil, can be reached at 571-272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUJIMI A ADESANYA/
Primary Examiner, Art Unit 2658
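The claim 16 limitation rejected above, an error value whose magnitude gates whether feedback is rendered (mapped to Benkreira's error severity score), reduces to a simple threshold check. A minimal sketch, with the cutoff value and every name assumed for illustration rather than taken from the record:

```python
# Hypothetical sketch of the claim 16 limitation: feedback is rendered only
# when the error value (the relative importance of a grammatical error) is
# large enough in magnitude. The 0.5 cutoff and all names are assumptions.

FEEDBACK_THRESHOLD = 0.5

def should_render_feedback(error_value: float,
                           threshold: float = FEEDBACK_THRESHOLD) -> bool:
    """Interrupt the user only for sufficiently important grammatical errors."""
    return abs(error_value) >= threshold  # decision based on magnitude

# A minor error is suppressed; a severe one would trigger the alert.
print(should_render_feedback(0.2))  # -> False
print(should_render_feedback(0.9))  # -> True
```

This mirrors the rationale the action cites from Benkreira para. [0058]: determining which errors warrant interrupting the user.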

Prosecution Timeline

Dec 04, 2023
Application Filed
Dec 10, 2025
Non-Final Rejection — §103
Mar 05, 2026
Interview Requested
Mar 17, 2026
Examiner Interview Summary
Mar 17, 2026
Applicant Interview (Telephonic)
Mar 17, 2026
Response Filed
Mar 30, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591739
METHOD AND SYSTEM FOR DIACRITIZING ARABIC TEXT
2y 5m to grant Granted Mar 31, 2026
Patent 12585686
EVENT DETECTION AND CLASSIFICATION METHOD, APPARATUS, AND DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12585481
METHOD AND ELECTRONIC DEVICE FOR PERFORMING TRANSLATION
2y 5m to grant Granted Mar 24, 2026
Patent 12578779
Multiple Stage Network Microphone Device with Reduced Power Consumption and Processing Load
2y 5m to grant Granted Mar 17, 2026
Patent 12579181
Synchronization of Sensor Network with Organization Ontology Hierarchy
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
66%
Grant Probability
91%
With Interview (+25.5%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
