Prosecution Insights
Last updated: April 19, 2026
Application No. 17/982,511

NETWORK-BASED CONVERSATION CONTENT MODIFICATION

Final Rejection: §101, §103
Filed: Nov 07, 2022
Examiner: WEAVER, ADAM MICHAEL
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 2 (Final)
Grant Probability: 92% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (above average; 11 granted / 12 resolved; +29.7% vs TC avg)
Interview Lift: +20.0% (strong; resolved cases with vs. without interview)
Avg Prosecution: 2y 9m (typical timeline; 27 currently pending)
Total Applications: 39 (career history, across all art units)

Statute-Specific Performance

§101: 33.2% (-6.8% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 2.1% (-37.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 12 resolved cases

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed on 11/21/2025 has been entered. Claims 8-9 and 14 have been cancelled. New claims 21-23 have been added. Claims 1-7, 10-13, and 15-23 are therefore pending in this application.

Response to Arguments

Applicant’s arguments filed 11/21/2025 have been fully considered but are not persuasive.

With respect to the 35 U.S.C. 101 rejection, on pages 8-10, the Applicant asserts that the claims, as amended, are not directed towards a mental process or an abstract idea. The Applicant asserts that the claims, as amended, integrate any alleged judicial exceptions into a practical application. They argue that the addition and application of a machine learning model, or in this case an encoder-decoder neural network, is not something that can be performed in the human mind. The Applicant also asserts that the claims, as amended, include additional elements that improve the technical field of communication network operations and network-based communication applications. They state that the amended claims further demonstrate that the claimed embodiments may provide enhanced and improved network-based communications and enhanced and improved network-based communication systems.

The Examiner respectfully disagrees. It appears that the Applicant is merely restating the claim language without specifically identifying which elements amount to significantly more, or how. The amended claim, taken as a whole, is simply the altering or changing of an emotion to better convey information. This can easily be performed by a human, as humans constantly change their emotions and demeanors when conversing with each other. 
The usage and application of a machine learning model or an encoder-decoder neural network is purely a recitation of generic computer components. Choosing to use an encoder-decoder neural network to convey a specific emotion in a video call, an audio call, or a text chat does not improve the technical field of network-based communications in a practical manner. The Applicant has not provided any reasoning or evidence as to why the noted individual limitations are not mental activities. The Examiner has considered all of the limitations as noted by the Applicant as part of the abstract idea as mental activities. The Examiner also notes in the rejection below that the claims recite only a few additional limitations: “a processing system including at least one processor”, “at least one machine learning model”, and “an encoder-decoder neural network”. These elements, as stated below, are general purpose computing elements. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Hence, the Applicant’s arguments are not persuasive.

With respect to the 35 U.S.C. 102 rejection, pages 10-12, of claims 1-7 and 18-20 under Reece et al. (US Patent No. 11,417,330), hereinafter referred to as Reece, the Applicant asserts that the cited art fails to disclose the claims as amended. This argument has been considered but is moot, as the new ground of rejection applied below does not rely on Reece to teach these amended limitations. The amended limitations are taught by Shah et al. (US Patent Application Publication No. 2021/0334472), hereinafter referred to as Shah, in the Non-Final Office Action mailed 08/21/2025.

With respect to the 35 U.S.C. 103 rejection, pages 12-18, of claims 8, 9, 12, and 14-17 under Reece, in view of Shah, and claims 10, 11, and 13 under Reece, in view of Shah, and further in view of Yang et al. (US Patent Application Publication No. 2021/0074261), the Applicant asserts that Reece and Shah, alone or in any permissible combination, fail to describe or suggest a method comprising “obtaining, by a processing system including at least one processor, at least a first objective associated with a demeanor of at least a first participant to convey a selected demeanor to at least a second participant for a conversation; activating, by the processing system, at least one machine learning model associated with the at least the first objective; applying, by the processing system, a conversation content of the at least the first participant as at least a first input to the at least one machine learning model; and performing, by the processing system, at least one action in accordance with an output of the at least one machine learning model, wherein the at least one action comprises altering the conversation content of the at least the first participant to align to the selected demeanor, wherein the altering comprises applying the conversation content of the at least the first participant as an input to an encoder-decoder neural network, and wherein an output of the encoder-decoder neural network comprises an altered conversation content of the at least the first participant” with respect to claim 1. The Applicant also asserts that neither Reece nor Shah describes or suggests a machine learning model (MLM) that has (a) an input of a conversation content and (b) an output of an altered conversation content of the at least the first participant. 
In response to the argument that Reece and Shah do not disclose or suggest “obtaining, by a processing system including at least one processor, at least a first objective associated with a demeanor of at least a first participant to convey a selected demeanor to at least a second participant for a conversation; activating, by the processing system, at least one machine learning model associated with the at least the first objective; applying, by the processing system, a conversation content of the at least the first participant as at least a first input to the at least one machine learning model; and performing, by the processing system, at least one action in accordance with an output of the at least one machine learning model, wherein the at least one action comprises altering the conversation content of the at least the first participant to align to the selected demeanor, wherein the altering comprises applying the conversation content of the at least the first participant as an input to an encoder-decoder neural network, and wherein an output of the encoder-decoder neural network comprises an altered conversation content of the at least the first participant”, Shah para [0142] states "Next, in step 910 a determination of a target emotion is made, based on the communication content." Reece Fig. 4 reference character 406 shows conversation features, i.e. emotional information, being input into a machine learning model, which is also taught by Shah para [0027]: “In another embodiment, some or all of the above factors may be input into a machine learning algorithms seeded to produce an accurate, and continually improving, outcome describing the “how” a particular type of sentence should be delivered for a particular type of customer given the context at hand.” Reece Fig. 4 reference characters 404 and 409 show conversation features, i.e. emotional information, being input into a machine learning model. Reece Fig. 4 reference character 413 shows action results being performed per the output of the machine learning model. Shah Fig. 9 reference characters 910-916 show changing the emotional context of a message or conversation to align with a preferred emotional context. Both Reece and Shah, as shown above, disclose the usage of machine learning models to perform this action. Reece, in particular, discloses the usage of multiple LSTM models to construct the neural network used. An encoder-decoder architecture can simply be dual LSTM networks that are used to transform input sequences into outputs. Given Reece’s disclosure, it would have been obvious to one of ordinary skill in the art to use multiple networks of a multitude of LSTM models in tandem for better performance, especially because encoder-decoder architectures are well known for sequence-to-sequence tasks, such as translation, summarization, and, in this case, conversational contexts.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claims 1, 19, and 20 recite “obtaining at least a first objective of at least a first participant to convey a selected demeanor”, “activating at least one machine learning model”, “applying a conversation content of the at least the first participant as at least a first input”, and “performing at least one action”. 
These limitations, as drafted, recite a process that, under a broadest reasonable interpretation, covers the abstract idea of “mental processes” because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2). That is, other than reciting “a processing system including at least one processor”, “at least one machine learning model”, and “an encoder-decoder neural network”, nothing in the claimed elements precludes the steps from being practically performed by a person communicating with another party, transcribing their emotional state, and transcribing the content of the conversation to use in accomplishing a task.

This judicial exception is not integrated into a practical application because the additional elements “a processing system including at least one processor”, “at least one machine learning model”, and “an encoder-decoder neural network” are generic computer components recited at a high level of generality. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claims as a whole are directed to an abstract idea (Step 2A, prong two).

Claims 1, 19, and 20 do not include any additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “a processing system including at least one processor”, “at least one machine learning model”, and “an encoder-decoder neural network” amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (Step 2B). 
Dependent claims 2-7, 10-13, 15-18, and 21-23 are directed to describing the aspects of obtaining the objective associated with the demeanor, activating the machine learning model, applying the conversation content, and performing an action in accordance with the output. These limitations are also related to the abstract idea of “mental processes”. That is, nothing in the claimed elements precludes the steps from being practically performed by a person communicating with another party, transcribing their emotional state, and transcribing the content of the conversation to use in accomplishing a task. No additional elements are present. The added limitation of “performing a speech-to-text conversion to obtain a generated text” is not recited with sufficient specificity to provide any details about how the neural network operates or how the speech-to-text conversion is performed. Thus, the claims as a whole are directed to an abstract idea (Step 2A, prong two).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-7, 12, and 15-23 are rejected under 35 U.S.C. 103 as being unpatentable over Reece, in view of Shah.

Regarding claim 1, Reece discloses activating, by the processing system, at least one machine learning model associated with the at least the first objective (Reece Fig. 4 reference character 406); applying, by the processing system, a conversation content of the at least the first participant as at least a first input to the at least one machine learning model (Reece Fig. 4 reference characters 404 and 409); and performing, by the processing system, at least one action in accordance with an output of the at least one machine learning model (Reece Fig. 4 reference character 413), applying the conversation content of the at least the first participant as an input to an encoder-decoder neural network (Reece col. 13 line 57 through col. 14 line 3; Reece, in particular, discloses the usage of multiple LSTM models to construct the neural network used. An encoder-decoder architecture can simply be dual LSTM networks that are used to transform input sequences into outputs. Given Reece’s disclosure, it would have been obvious to one of ordinary skill in the art to use multiple networks of a multitude of LSTM models in tandem for better performance, especially because encoder-decoder architectures are well known for sequence-to-sequence tasks, such as translation, summarization, and, in this case, conversational contexts).

However, Reece fails to disclose a method comprising: obtaining, by a processing system including at least one processor, at least a first objective to convey a selected demeanor to at least a second participant for a conversation; wherein the at least one action comprises altering the conversation content of the at least the first participant to align to the selected demeanor, and wherein an output of the encoder-decoder neural network comprises an altered conversation content of the at least the first participant.

Shah teaches a method for adaptive emotion based electronic communications. 
Shah teaches obtaining, by a processing system including at least one processor, at least a first objective to convey a selected demeanor to at least a second participant for a conversation ("Next, in step 910 a determination of a target emotion is made, based on the communication content,” Shah para [0142]); wherein the at least one action comprises altering the conversation content of the at least the first participant to align to the selected demeanor (Shah Fig. 9 reference characters 910-916), and wherein an output of the encoder-decoder neural network comprises an altered conversation content of the at least the first participant ("wherein the processor performs selecting the at least one alternate message from a machine learned model of a pool of prior communications wherein the presentation emotion was modified from the default emotion," Shah para [0036]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Reece’s method of determining conversation analysis indicators for multiple parties in a conversation by including Shah’s method of adaptive emotion based replies to electronic communications. The ability to alter text or audio to convey a selected emotion helps to improve and facilitate communication between the connected parties. Emotion can be difficult to interpret solely from textual communications, so including the ability to more directly align the textual communications with a preferred emotion helps to maintain positive relations and feelings with the party you are communicating with. This combination would have been obvious to one of ordinary skill in the art.

Regarding claim 2, Reece, in view of Shah, discloses all of the limitations of claim 1. Reece further discloses wherein the conversation comprises at least one of: a text-based conversation; a speech-based conversation; or a video-based conversation (Reece Fig. 5 reference characters 506, 508, and 510). 
Regarding claim 3, Reece, in view of Shah, discloses all of the limitations of claim 1. Reece further discloses wherein the selected demeanor comprises a measured demeanor of the at least the first participant during the conversation ("The sequential machine learning system sequentially processes utterances to generate conversation analysis indicators (e.g., coaching statistics, emotional labels)," Reece col. 5 lines 1-3).

Regarding claim 4, Reece, in view of Shah, discloses all of the limitations of claim 3. Reece further discloses wherein the applying further comprises applying at least a second input to the at least one machine learning model (Reece Fig. 5 reference characters 506, 508, and 510), and wherein the at least the second input comprises at least one of: biometric data of the at least the first participant; or image data of the at least the first participant ("For example, the conversation features can include…, participant biometrics, etc.," Reece col. 10 lines 28-41).

Regarding claim 5, Reece, in view of Shah, discloses all of the limitations of claim 4. Reece further discloses wherein the at least one machine learning model comprises at least two machine learning models ("As further used herein, a sequential machine learning system can be one or more models trained to receive input," Reece col. 3 lines 49-51), and wherein the at least two machine learning models comprise at least: a first demeanor detection model that is configured to detect a first demeanor from the at least the first input (Reece Fig. 3 reference characters 344 and 351); and a second demeanor detection model that is configured to detect a second demeanor from the at least the second input, wherein the second demeanor comprises the measured demeanor (Reece Fig. 3 reference characters 344 and 351).

Regarding claim 6, Reece, in view of Shah, discloses all of the limitations of claim 5. 
Reece further discloses wherein the output of the at least one machine learning model comprises an indicator of a discrepancy between the first demeanor and the second demeanor ("In some cases, the conversation analysis indicators can include other values, such as values indicating the emotional content, engagement, genuineness, intensity, etc., for all or parts of the conversation," Reece col. 10 lines 60-64 and "wherein the conversation analysis indicators include a series of instant scores, comparison scores can be determined at multiple points throughout the duration of the conversation. The mapping can include a rule to identify when there is a threshold difference between instant scores, which can correspond to various inferences to label the changes, e.g., as a change in emotional levels," Reece col. 12 lines 16-23).

Regarding claim 7, Reece, in view of Shah, discloses all of the limitations of claim 6. Reece further discloses wherein the at least one action further comprises: presenting the indicator to the at least the first participant of the discrepancy ("The system can map a change in instant scores above a threshold to an action to provide an alert to one or both users, e.g., using notifications 410," Reece col. 12 lines 40-42).

Regarding claim 12, Reece, in view of Shah, discloses all of the limitations of claim 1. Reece fails to disclose wherein the at least one action further comprises: presenting the altered conversation content of the at least the first participant to the at least [[ a ]] the second participant.

Shah teaches wherein the at least one action further comprises: presenting the altered conversation content of the at least the first participant to the at least [[ a ]] the second participant (Shah Fig. 9 reference character 920). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Reece’s method of determining conversation analysis indicators for multiple parties in a conversation by including Shah’s method of adaptive emotion based replies to electronic communications. The ability to alter text or audio to convey a selected emotion and then present that content to the participating party helps to improve and facilitate communication between the connected parties. Emotion can be difficult to interpret solely from textual communications, so including the ability to more directly align the textual communications with a preferred emotion helps to maintain positive relations and feelings with the party you are communicating with. This combination would have been obvious to one of ordinary skill in the art.

Regarding claim 15, Reece, in view of Shah, discloses all of the limitations of claim 1. Reece further discloses wherein the at least one machine learning model comprises a demeanor detection model that is configured to detect at least a first demeanor from the at least the first input ("In some cases, the conversation analysis indicators can include other values, such as values indicating the emotional content, engagement, genuineness, intensity, etc., for all or parts of the conversation," Reece col. 10 lines 60-64).

Regarding claim 16, Reece, in view of Shah, discloses all of the limitations of claim 15. Reece fails to disclose wherein the output of the at least one machine learning model comprises an indicator of a discrepancy between the at least the first demeanor and the selected demeanor.

Shah teaches wherein the output of the at least one machine learning model comprises an indicator of a discrepancy between the at least the first demeanor and the selected demeanor (Shah Fig. 9 reference character 912). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Reece’s method of determining conversation analysis indicators for multiple parties in a conversation by including Shah’s method of adaptive emotion based replies to electronic communications. The ability to indicate a difference between the emotions the first communicating party is showing and the selected emotion they have chosen is important because it prevents inadvertently communicating with the second party using an erroneous emotion. Including this ability helps to maintain positive relations and feelings with the party you are communicating with. This combination would have been obvious to one of ordinary skill in the art.

Regarding claim 17, Reece, in view of Shah, discloses all of the limitations of claim 16. Reece further discloses wherein the at least one action further comprises: presenting the indicator of the discrepancy to the at least the first participant ("The system can map a change in instant scores above a threshold to an action to provide an alert to one or both users, e.g., using notifications 410," Reece col. 12 lines 40-42).

Regarding claim 18, Reece, in view of Shah, discloses all of the limitations of claim 1. 
Reece further discloses wherein the at least the first objective is at least one of: obtained in accordance with at least one input of the at least the first participant; or determined in accordance with one or more factors, wherein the one or more factors include: a user profile of the at least the first participant; a user profile of the at least [[ a ]] the second participant; a relationship between the at least the first participant and the at least the second participant; at least one communication modality of the conversation; at least one location of at least one of: the at least the first participant or the at least the second participant; or at least one topic of the conversation ("In one implementation, interface and mapping system 2008 queries a user profile database to retrieve conversation analysis indicators (e.g., emotional labels, higher order conversation features)," Reece col. 30 lines 56-59).

As to claim 19, computer-readable medium (CRM) claim 19 and method claim 1 are related as method and CRM of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to the method claim.

As to claim 20, system claim 20 and method claim 1 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to the method claim.

As to claim 21, system claim 21 and method claim 2 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 21 is similarly rejected under the same rationale as applied above with respect to the method claim.

Regarding claim 22, Reece, in view of Shah, discloses all of the limitations of claim 20. 
Reece further discloses wherein the selected demeanor comprises a measured demeanor of the at least the first participant during the conversation ("The sequential machine learning system sequentially processes utterances to generate conversation analysis indicators (e.g., coaching statistics, emotional labels)," Reece col. 5 lines 1-3).

As to claim 23, system claim 23 and method claim 4 are related as method and system of using same, with each claimed element’s function corresponding to the method step. Accordingly, claim 23 is similarly rejected under the same rationale as applied above with respect to the method claim.

Claim(s) 10, 11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Reece, in view of Shah, and further in view of Yang et al. (US Patent Application Publication No. 2021/0074261), hereinafter referred to as Yang.

Regarding claim 10, Reece, in view of Shah, discloses all of the limitations of claim 1. Reece further discloses wherein the conversation content of the at least the first participant comprises recorded speech, and wherein the altering further comprises: performing a speech-to-text conversion to obtain a generated text ("Acoustic processing component 348 can extract the audio data from a conversation or utterance from a conversation and can encapsulate it as a conversation feature for use by a machine learning system," Reece col. 8 lines 37-40), wherein the generated text comprises the input to the encoder-decoder neural network (Reece Fig. 4 reference characters 404 and 406 and Reece col. 13 line 57 through col. 14 line 3).

However, Reece does not disclose applying the altered conversation content of the at least the first participant to a text-to-speech module that is configured to output generated speech.

Yang teaches a method for synthesized speech generation using emotional information. 
Yang teaches applying the altered conversation content of the at least the first participant to a text-to-speech module that is configured to output generated speech ("the apparatus comprises an input unit receiving text and a first emotion information vector configured for the text; an output unit outputting synthesized speech," Yang para [0015]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Reece’s method of determining conversation analysis indicators for multiple parties in a conversation by including Yang’s method of synthesized speech generation using emotional information. Utilizing text-to-speech for the altered conversation content helps to improve accessibility of the method/device that uses this method; it also helps to more directly convey emotion as opposed to solely using textual communications. Emotional context is more present in audio and particularly in speech. Including this ability to more readily convey emotion with the use of text-to-speech helps to maintain positive relations and feelings with the party you are communicating with. This combination would have been obvious to one of ordinary skill in the art.

Regarding claim 11, Reece, in view of Shah, and further in view of Yang, discloses all of the limitations of claim 10. Reece fails to disclose wherein the text-to-speech module is configured to output the generated speech that is representative of a voice of the at least the first participant.

Yang teaches wherein the text-to-speech module is configured to output the generated speech that is representative of a voice of the at least the first participant ("the apparatus comprises an input unit receiving text and a first emotion information vector configured for the text; an output unit outputting synthesized speech," Yang para [0015]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Reece’s method of determining conversation analysis indicators for multiple parties in a conversation by including Yang’s method of synthesized speech generation using emotional information. Utilizing text-to-speech for the altered conversation content created by the first participant in the communication helps to improve accessibility of the method/device that uses this method; it also helps to more directly convey emotion to the second participant as opposed to solely using textual communications. Emotional context is more present in audio and particularly in speech. Including this ability to more readily convey emotion with the use of text-to-speech helps to maintain positive relations and feelings with the second participant you are communicating with. This combination would have been obvious to one of ordinary skill in the art.

Regarding claim 13, Reece, in view of Shah, discloses all of the limitations of claim 1. Reece fails to disclose wherein the altered conversation content is of a different language than the conversation content of the at least the first participant.

Yang teaches wherein the altered conversation content is of a different language than the conversation content of the at least the first participant ("In particular, the AI agent module 62 may perform various natural language processes, including machine translation," Yang para [0149]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Reece’s method of determining conversation analysis indicators for multiple parties in a conversation by including Yang’s method of synthesized speech generation using emotional information. Including this ability of machine translation would help to facilitate communication between parties that do not share a common, understood language. 
This ability would strengthen positive relations between the parties to the communication, allowing them to maintain a positive impression of each other. This combination would have been obvious to one of ordinary skill in the art.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM MICHAEL WEAVER, whose telephone number is (571) 272-7062. The examiner can normally be reached Monday-Friday, 8AM-5PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADAM MICHAEL WEAVER/
Examiner, Art Unit 2658

/RICHEMOND DORVIL/
Supervisory Patent Examiner, Art Unit 2658

Prosecution Timeline

Nov 07, 2022
Application Filed
Aug 14, 2025
Non-Final Rejection — §101, §103
Nov 21, 2025
Response Filed
Mar 05, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591752
ZERO-SHOT DOMAIN TRANSFER WITH A TEXT-TO-TEXT MODEL
2y 5m to grant Granted Mar 31, 2026
Patent 12585765
SYSTEM AND METHOD FOR ROBUST NATURAL LANGUAGE CLASSIFICATION UNDER CHARACTER ENCODING
2y 5m to grant Granted Mar 24, 2026
Patent 12579375
IMPLEMENTING ACTIVE LEARNING IN NATURAL LANGUAGE GENERATION TASKS
2y 5m to grant Granted Mar 17, 2026
Patent 12562077
METHOD, COMPUTING DEVICE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM TO TRANSLATE AUDIO OF VIDEO INTO SIGN LANGUAGE THROUGH AVATAR
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
92%
Grant Probability
99%
With Interview (+20.0%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 12 resolved cases by this examiner; grant probability is derived from the examiner's career allowance rate.
