Prosecution Insights
Last updated: April 19, 2026
Application No. 18/118,561

Near Real-Time Natural Language Sequence Generation

Non-Final OA — §101, §103, §112
Filed
Mar 07, 2023
Examiner
LELAND III, EDWIN S
Art Unit
2654
Tech Center
2600 — Communications
Assignee
T-Mobile Innovations LLC
OA Round
1 (Non-Final)
75%
Grant Probability
Favorable
1-2
OA Rounds
2y 5m
To Grant
74%
With Interview

Examiner Intelligence

Grants 75% — above average
75%
Career Allow Rate
338 granted / 452 resolved
+12.8% vs TC avg
-0.3%
Interview Lift
Minimal lift; based on resolved cases with an interview
Typical timeline
2y 5m
Avg Prosecution
18 currently pending
Career history
470
Total Applications
across all art units

Statute-Specific Performance

§101
15.3%
-24.7% vs TC avg
§103
45.4%
+5.4% vs TC avg
§102
16.8%
-23.2% vs TC avg
§112
14.0%
-26.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 452 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending in this application.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The term “near real-time” in claim 11 is a relative term which renders the claim indefinite. The term “near real-time” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claims do not fall within at least one of the four categories of patent eligible subject matter because the claims cover both statutory and non-statutory embodiments (under the broadest reasonable interpretation of the claim when read in light of the specification and in view of one skilled in the art) and embrace subject matter that is not eligible for patent protection, and therefore are directed to non-statutory subject matter. As per claims 19-20, a “computer storage media” may be interpreted as a transitory signal, which is non-statutory subject matter, if not modified by a limitation rendering it non-transitory. See paragraph [0023] of the Specification as filed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 10-14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Can (U.S. Patent Application Publication 2024/0073321) in view of Gorodetski et al. (U.S. Patent Application Publication 2016/0217793).

As per claims 1, 10 and 19, Can discloses: A system (Figure 14 and Paragraphs [0157-0166]) comprising: at least one computer processor (Figure 14, item 1404 and Paragraphs [0158-0159]); and one or more computer storage media storing computer-useable instructions (Figure 14, items 1408 & 1410 and Paragraphs [0161-0164]) that, when used by the at least one computer processor, cause the at least one computer processor to perform operations comprising: detecting a first natural language utterance (Paragraph [0032] – calls are received and transcribed to text); based on training a first model and parsing the first natural language utterance, generating a first score, the first score indicates whether the first natural language utterance was uttered by a customer service agent or a customer (Figure 4, item 412 and Paragraph [0153] – a model recognizes previous customers and routes calls according to that specific customer's characteristics, which inherently involves a score); based on: fine-tuning a second model or the first model, the parsing, and the first score indicating that the first natural language utterance was uttered by the customer, generating a second score, the second score indicates a first level of satisfaction of the customer (Figure 4, item 414, Figure 3, items 324 & 326 and Paragraphs [0026], [0048], [0086], [0105], [0129-0130] & [0154] – both customer satisfaction and sentiment are determined via model); based on the first score and the second score, generating a first natural language sequence that is a candidate for the customer service agent to utter or not utter at least partially responsive to the first natural language utterance (Figure 4, item 416, Figure 3, items 324 & 326 and Paragraphs [0026], [0048], [0086], [0105], [0129-0130] & [0155-0156] – based on customer identity, customer satisfaction and sentiment, an alert is generated and suggested responses are provided to the customer service agent); and causing presentation, at a user device associated with the customer service agent, of the first natural language sequence (Figure 4, item 416, Figure 3, items 324 & 326 and Paragraphs [0026], [0048], [0086], [0105], [0129-0130] & [0155-0156] – based on customer identity, customer satisfaction and sentiment, an alert is generated and suggested responses are provided to the customer service agent).

Can fails to disclose, but Gorodetski et al. in the same field of endeavor discloses: the first model determines whether the utterance was from a customer service agent or a particular customer (Figure 1, item 114 and Paragraphs [0043-0046] – conversation participants are identified as customer service agents or customers).

It would have been obvious for a person having ordinary skill in the art at the effective filing date of the invention to modify the system, method and computer storage media of Can with the speaker diarization capabilities of Gorodetski et al. because it is a case of simple substitution of one known element for another to obtain predictable results. Can uses a machine learning model to identify specific customers but is silent on the ability to identify customer service agents, while Gorodetski et al. uses a similar model to identify both specific customers and specific customer service agents. The simple substitution of the model in Gorodetski et al. for the model in Can would provide predictable results.

Claim 10 is directed to the method for using the system of claim 1, so is rejected for similar reasons. Claim 19 is directed to a computer storage media containing instructions to cause a processor to act as the system of claim 1, so is rejected for similar reasons.

As per claim 2, the combination of Can and Gorodetski et al. discloses all of the limitations of claim 1 above.
Can in the combination further discloses: causing presentation, at the user device, of an indicator representing the first level of satisfaction of the customer (Paragraphs [0040], [0047-0048] & [0105] – the customer satisfaction level may be presented to the agent).

As per claim 3, the combination of Can and Gorodetski et al. discloses all of the limitations of claim 1 above. The combination further discloses: subsequent to the detecting of the first natural language utterance, detecting a second natural language utterance; based on parsing the second natural language utterance, generating a third score, the third score indicates whether the second natural language utterance was uttered by the customer service agent or the customer (Gorodetski et al. – Figure 1, item 114 and Paragraphs [0043-0046] – conversation participants are identified as customer service agents or customers for each utterance); and based on the parsing of the second natural language utterance and the third score indicating that the second natural language utterance was uttered by the customer, changing the second score to a fourth score, the changing of the second score indicates that the first level of satisfaction of the customer has changed to a second level of satisfaction for the customer (Can – Paragraphs [0040], [0047-0048] & [0105] – the customer satisfaction level is determined on an utterance-by-utterance basis).

As per claim 4, the combination of Can and Gorodetski et al. discloses all of the limitations of claim 3 above.
Can in the combination further discloses: based on the changing of the second score to the fourth score, generating a second natural language sequence that is another candidate for the customer service agent to utter responsive to the second natural language utterance; and causing presentation, at the user device, of the second natural language sequence and an indicator of the second level of satisfaction for the customer (Figure 4, item 416, Figure 3, items 324 & 326 and Paragraphs [0026], [0040], [0047-0048], [0086], [0105], [0129-0130] & [0155-0156] – based on customer identity, customer satisfaction and sentiment, an alert is generated and suggested responses are provided to the customer service agent, which can include the satisfaction level).

As per claim 5, the combination of Can and Gorodetski et al. discloses all of the limitations of claim 1 above. Can in the combination further discloses: detecting of the first natural language utterance includes encoding audio speech to first text data at a transcript document and performing natural language processing of the first text data to determine the first natural language utterance (Paragraphs [0051-0052], [0057-0058], [0067], [0074] & [0084]).

As per claim 11, the combination of Can and Gorodetski et al. discloses all of the limitations of claim 10 above. Can in the combination further discloses: causing presentation, at the user device, of the first natural language sequence and the indicator in near real-time relative to the receiving of the first natural language utterance (Paragraphs [0040], [0047-0048], [0105] & [0155-0156] – the customer satisfaction level may be presented to the agent).

As per claim 12, the combination of Can and Gorodetski et al. discloses all of the limitations of claim 10 above. Gorodetski et al. in the combination further discloses: generating a second score, the second score indicates whether the first natural language utterance was uttered by a customer service agent or a customer (Figure 1, item 114 and Paragraphs [0043-0046] – conversation participants are identified as customer service agents or customers).

As per claim 13, the combination of Can and Gorodetski et al. discloses all of the limitations of claim 12 above. Can in the combination further discloses: generating of the first natural language sequence is further based on the generating of the second score (Figure 4, item 416, Figure 3, items 324 & 326 and Paragraphs [0026], [0048], [0086], [0105], [0129-0130] & [0155-0156] – based on customer identity, customer satisfaction and sentiment, an alert is generated and suggested responses are provided to the customer service agent).

As per claim 14, the combination of Can and Gorodetski et al. discloses all of the limitations of claim 10 above. Can in the combination further discloses: receiving of the first natural language utterance includes encoding audio speech to first text data at a transcript document and performing natural language processing of the first text data to determine the first natural language utterance (Paragraphs [0051-0052], [0057-0058], [0067], [0074] & [0084]).

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Can (U.S. Patent Application Publication 2024/0073321) and Gorodetski et al. (U.S. Patent Application Publication 2016/0217793) in view of Zeng et al. (Chinese Patent Application 10884612).

As per claims 6 and 15, the combination of Can and Gorodetski et al. discloses all of the limitations of claims 5 and 14 above. The combination fails to disclose, but Zeng et al. in the same field of endeavor teaches: pre-processing the transcript document by applying a Term Frequency-Inverse Document Frequency (TF-IDF) algorithm at the transcript document and performing sparse normalization in preparation for the first model to generate the first score (Claims 1-3 – “extracting said text from said text word features, and by the maximum likelihood method to determine the highest probability of text features, the text feature comprises: a word weight, term frequency and inverse document frequency. wherein the generating of the neural network training model comprising: obtaining the target training text, and performing normalization processing to the target training text; based on the random number and connecting value and the preset threshold value, the target training text after normalization processing for sparse logistic regression, obtaining target training set;”).

It would have been obvious for a person having ordinary skill in the art at the effective filing date of the invention to modify the system, method and computer storage media of Can & Gorodetski et al. with the TF-IDF & sparse normalization capabilities of Zeng et al. because it is a case of combining prior art elements according to known methods to yield predictable results. Can uses non-specified pre-processing for its NLP capabilities, while Zeng et al. specifically uses TF-IDF and sparse normalization. The TF-IDF and sparse normalization would provide predictable results.

Claims 7, 16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Can (U.S. Patent Application Publication 2024/0073321) and Gorodetski et al. (U.S. Patent Application Publication 2016/0217793) in view of Saillet et al. (U.S. Patent Application Publication 2022/0100899).

As per claims 7, 16 and 20, the combination of Can and Gorodetski et al. discloses all of the limitations of claims 5, 14 and 19 above. The combination fails to disclose, but Saillet et al. in the same field of endeavor teaches: prior to the generating of the second score, removing sensitive data or biased data by scrubbing the transcript document according to one or more policies (Paragraph [0003]). It would have been obvious for a person having ordinary skill in the art at the effective filing date of the invention to modify the system, method and computer storage media of Can & Gorodetski et al. with the data privacy capabilities of Saillet et al. because it is very important for companies and their customers that sensitive data is kept private (Paragraph [0002]).

Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Can (U.S. Patent Application Publication 2024/0073321) and Gorodetski et al. (U.S. Patent Application Publication 2016/0217793) in view of Churgin et al. (U.S. Patent 12,159,647).

As per claims 8 and 17, the combination of Can and Gorodetski et al. discloses all of the limitations of claims 1 and 10 above. The combination fails to disclose, but Churgin et al. in the same field of endeavor teaches: the first model includes a Gradient Boosting machine learning model (Col. 13, lines 3-20). It would have been obvious for a person having ordinary skill in the art at the effective filing date of the invention to modify the system, method and computer storage media of Can & Gorodetski et al. with the Gradient Boosting capabilities of Churgin et al. because it is a case of simple substitution of one known element for another to obtain predictable results. Can & Gorodetski et al. use a machine learning model to identify specific customers and customer service agents but are silent on the method, while Churgin et al. uses a gradient boosting model for multi-class classification and speaker identification. The simple substitution of the model in Churgin et al. for the model in Gorodetski et al. would provide predictable results.

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Can (U.S. Patent Application Publication 2024/0073321) and Gorodetski et al. (U.S. Patent Application Publication 2016/0217793) in view of Mohanty et al. (U.S. Patent Application Publication 2020/0159778).

As per claims 9 and 18, the combination of Can and Gorodetski et al. discloses all of the limitations of claims 1 and 10 above. Can in the combination further discloses: wherein the second model includes a Natural Language Processing (NLP) model (Paragraphs [0051-0052], [0057-0058], [0067], [0074] & [0084]). The combination fails to disclose, but Mohanty et al. in the same field of endeavor teaches: wherein the generating of the second score is further based on using at least one of: a Gradient Boosting machine learning model and a Recurrent Neural Network (RNN) (Paragraphs [0023], [0035], [0041] & [0064]). It would have been obvious for a person having ordinary skill in the art at the effective filing date of the invention to modify the system, method and computer storage media of Can & Gorodetski et al. with the recurrent network capabilities of Mohanty et al. because it is a case of simple substitution of one known element for another to obtain predictable results. Can & Gorodetski et al. use a machine learning model to calculate satisfaction and provide a response but are silent on the method, while Mohanty et al. uses a Recurrent Neural Network model for customer satisfaction identification. The simple substitution of the model in Mohanty et al. for the models in Can & Gorodetski et al. would provide predictable results.

Examiner Notes

The Examiner cites particular columns and line numbers in the references as applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well.
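For orientation: the pre-processing recited in claims 6 and 15 (TF-IDF weighting of the transcript followed by sparse normalization) is a standard text-vectorization step. The following is a generic textbook sketch of that step, not the applicant's or Zeng's specific method; the helper name `tfidf_l2`, the smoothing choices, and the toy transcript are all invented for illustration:

```python
import math
from collections import Counter

def tfidf_l2(docs):
    """Smoothed TF-IDF weights per document, with each sparse vector
    L2-normalized afterward (one common reading of 'sparse normalization')."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # term frequency * smoothed inverse document frequency
        vec = {t: (c / len(doc)) * (math.log((1 + n) / (1 + df[t])) + 1)
               for t, c in tf.items()}
        norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
        vectors.append({t: w / norm for t, w in vec.items()})
    return vectors

# Toy transcript: one utterance per "document"
transcript = [
    ["hello", "thanks", "for", "calling"],
    ["my", "phone", "keeps", "dropping", "calls"],
]
weights = tfidf_l2(transcript)
```

Terms that concentrate in one utterance receive the largest weights, which is what makes the representation useful as input to a downstream speaker or sentiment classifier.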
It is respectfully requested that, in preparing responses, the Applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or as disclosed by the Examiner.

Communications via Internet e-mail are at the discretion of the applicant and require written authorization. Should the Applicant wish to communicate via e-mail, including the following paragraph in the response will allow the Examiner to do so: “Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with me concerning any subject matter of this application by electronic mail. I understand that a copy of these communications will be made of record in the application file.” Should e-mail communication be desired, the Examiner can be reached at Edwin.Leland@USPTO.gov.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWIN S LELAND III, whose telephone number is (571) 270-5678. The examiner can normally be reached 8:00-5:00 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hai Phan, can be reached at 571-272-6338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EDWIN S LELAND III/
Primary Examiner, Art Unit 2654
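Stepping back from the claim mappings above: the rejected independent claims describe a two-score pipeline (a speaker-attribution score, then a satisfaction score, then a suggested agent response). A minimal sketch of that control flow follows; the keyword heuristics are purely hypothetical stand-ins for the trained and fine-tuned models the claims recite, and none of these names come from the application or the cited art:

```python
from typing import Optional

def score_speaker(utterance: str) -> float:
    """First score: likelihood the utterance came from the customer (vs. agent).
    Placeholder heuristic standing in for a trained diarization model."""
    agent_phrases = ("how can i help", "thank you for calling", "is there anything else")
    return 0.1 if any(p in utterance.lower() for p in agent_phrases) else 0.9

def score_satisfaction(utterance: str) -> float:
    """Second score: customer satisfaction, 0 (unhappy) to 1 (happy).
    Placeholder keyword rule standing in for a fine-tuned sentiment model."""
    negative = ("cancel", "frustrated", "terrible", "refund")
    return 0.2 if any(w in utterance.lower() for w in negative) else 0.7

def suggest_response(speaker_score: float, satisfaction: float) -> Optional[str]:
    """Generate a candidate sequence for the agent, only for customer utterances."""
    if speaker_score < 0.5:
        return None  # agent utterance: nothing to suggest
    if satisfaction < 0.5:
        return "I'm sorry for the trouble -- let me fix this right away."
    return "Glad to hear it. Is there anything else I can help with?"

utterance = "I'm frustrated, I want to cancel my plan."
response = suggest_response(score_speaker(utterance), score_satisfaction(utterance))
```

In a deployed system each step would run per utterance as the transcript streams in, which is where the contested "near real-time" limitation of claim 11 enters the picture.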

Prosecution Timeline

Mar 07, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §103, §112
Apr 09, 2026
Applicant Interview (Telephonic)
Apr 14, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596869
DETECTING ARTIFICIAL INTELLIGENCE GENERATED TEXT
2y 5m to grant Granted Apr 07, 2026
Patent 12591602
TRAINING MACHINE LEARNING BASED NATURAL LANGUAGE PROCESSING FOR SPECIALTY JARGON
2y 5m to grant Granted Mar 31, 2026
Patent 12579370
MULTILINGUAL CHATBOT
2y 5m to grant Granted Mar 17, 2026
Patent 12579986
Systems and Methods for Distinguishing Between Human Speech and Machine Generated Speech
2y 5m to grant Granted Mar 17, 2026
Patent 12536385
SYSTEMS AND METHODS FOR A READING AND COMPREHENSION ASSISTANCE TOOL
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
74%
With Interview (-0.3%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 452 resolved cases by this examiner. Grant probability derived from career allow rate.
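The projection figures follow directly from the career counts quoted above; a quick check of the arithmetic (rounding to the nearest percent is assumed):

```python
granted, resolved = 338, 452
allow_rate = granted / resolved          # 0.7478...
print(f"{allow_rate:.1%}")               # prints 74.8%, displayed as 75%
interview_adjusted = allow_rate - 0.003  # applying the -0.3% interview lift
print(f"{interview_adjusted:.1%}")       # prints 74.5%
```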
