Prosecution Insights
Last updated: April 19, 2026
Application No. 18/784,206

TOOL FOR CATEGORIZING AND EXTRACTING DATA FROM AUDIO CONVERSATIONS

Non-Final OA: §DP (nonstatutory double patenting)
Filed
Jul 25, 2024
Examiner
SIDDO, IBRAHIM
Art Unit
2681
Tech Center
2600 — Communications
Assignee
Twilio Inc.
OA Round
1 (Non-Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
Grant Probability with Interview: 97%

Examiner Intelligence

Career Allow Rate: 84%, above average (397 granted / 474 resolved; +21.8% vs TC avg)
Interview Lift: +13.3%, moderate (grant rate with vs. without interview, across resolved cases with interview)
Typical Timeline: 2y 3m avg prosecution (17 currently pending)
Career History: 491 total applications across all art units
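The tiles above reduce to simple arithmetic on the examiner's resolved-case counts. A minimal sketch of that arithmetic, with illustrative function names (not from any real analytics API):

```python
# Recompute the dashboard's headline examiner metrics from the raw counts
# shown above. Function names are illustrative stand-ins.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved applications."""
    return round(100 * granted / resolved, 1)

def with_interview(base_rate: float, lift: float) -> float:
    """Grant probability after adding the observed interview lift, capped at 100%."""
    return round(min(base_rate + lift, 100.0), 1)

base = allow_rate(397, 474)           # 397 granted of 474 resolved -> 83.8, shown as 84%
boosted = with_interview(base, 13.3)  # 83.8 + 13.3 -> 97.1, shown as 97%
```

This assumes the "with interview" figure is the base rate plus the reported lift; the page does not state how the composite is actually derived.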

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 61.8% (+21.8% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 474 resolved cases.
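Each statute row pairs the examiner's rate with a delta against the Tech Center baseline, so the baseline can be recovered as rate minus delta. A quick check with the figures above (the dict layout is illustrative):

```python
# Recover the Tech Center baseline implied by each statute row:
# examiner rate minus the reported delta, using the table's figures.

rows = {
    "101": (7.0, -33.0),
    "103": (61.8, +21.8),
    "102": (17.2, -22.8),
    "112": (7.6, -32.4),
}

tc_avg = {statute: round(rate - delta, 1) for statute, (rate, delta) in rows.items()}
# Every row implies the same 40.0% Tech Center baseline in this data set.
```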

Office Action

§DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,079,573. Although the claims at issue are not identical, they are not patentably distinct from each other because they are obvious variants of each other.

Instant Application 18/784,206, Claim 1: A method comprising: accessing a transcript that includes a plurality of sentences; by one or more machine-learning models, for each sentence in the plurality of sentences, classifying that sentence into a corresponding predefined state among a plurality of predefined states; by the one or more machine-learning models, for each sentence classified into one or more of the plurality of predefined states, determining a corresponding parameter value within the predefined state into which that sentence is classified; storing predefined states of classified sentences among the plurality of sentences and determined corresponding parameter values of the classified sentences; and causing presentation of a user interface operable to search transcripts for sentences based on at least one of a stored predefined state among the stored predefined states or a determined parameter value among the determined parameter values.

U.S. Patent No. 12,079,573, Claim 1: A computer-implemented method comprising: accessing, by one or more processors, a transcript of a conversation, the transcript including text for a plurality of sentences; separately for each sentence in the plurality of sentences and by a first machine-learning (ML) model, classifying the sentence to determine if that sentence is associated with a predefined state among a plurality of predefined states; separately for each sentence associated with the predefined state, extracting, by a second ML model, a parameter value associated with the predefined state for that sentence; storing, by the one or more processors, classifications of predefined states for the plurality of sentences of the transcript and extracted parameter values for the plurality of sentences; and causing presentation, by the one or more processors, of a user interface (UI) with an option to search transcripts for sentences based on at least one of a predefined state among the plurality of predefined states or an extracted parameter value among the extracted parameter values.

Instant Claim 2: The method of claim 1, further comprising: by the one or more machine-learning models, for each sentence classified into one or more of the plurality of predefined states, normalizing the determined corresponding parameter value based on a predetermined format.

Patent Claim 2: The method as recited in claim 1, further comprising: normalizing, by a third ML model, the extracted parameter value to convert the extracted parameter value to a predefined format.

Instant Claim 3: The method of claim 1, further comprising: by the one or more machine-learning models, for each sentence classified into one or more of the plurality of predefined states, converting the determined corresponding parameter value into a predetermined format.

Patent Claim 2: The method as recited in claim 1, further comprising: normalizing, by a third ML model, the extracted parameter value to convert the extracted parameter value to a predefined format.

Instant Claim 4: The method of claim 1, further comprising: selecting the plurality of sentences from the transcript based on a filter that specifies a party that spoke the plurality of sentences.

Patent Claim 7: The method as recited in claim 1, further comprising: before classifying each sentence, applying a filter to select sentences from the transcript for the classifying, the filter comprising at least one of selecting sentences spoken by one party or selecting a period of time within the conversation.

Instant Claim 5: The method of claim 1, further comprising: selecting the plurality of sentences from the transcript based on a filter that specifies a time period in which the plurality of sentences was spoken.

Patent Claim 7: The method as recited in claim 1, further comprising: before classifying each sentence, applying a filter to select sentences from the transcript for the classifying, the filter comprising at least one of selecting sentences spoken by one party or selecting a period of time within the conversation.

Instant Claim 6: The method of claim 1, wherein: the one or more machine-learning models are trained based on training data that includes transcripts of conversations in which multiple turns are identified by the training data and predefined states of the multiple turns are identified by the training data.

Patent Claim 6: The method as recited in claim 1, wherein the second ML model is obtained by training a second ML program with training data, the training data comprising a set of transcripts from conversations, turns identified within the conversations, and parameter values extracted from the conversations.

Instant Claim 8 (similarly Claim 15): A system comprising: one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the system to perform operations comprising: accessing a transcript that includes a plurality of sentences; by one or more machine-learning models, for each sentence in the plurality of sentences, classifying that sentence into a corresponding predefined state among a plurality of predefined states; by the one or more machine-learning models, for each sentence classified into one or more of the plurality of predefined states, determining a corresponding parameter value within the predefined state into which that sentence is classified; storing predefined states of classified sentences among the plurality of sentences and determined corresponding parameter values of the classified sentences; and causing presentation of a user interface operable to search transcripts for sentences based on at least one of a stored predefined state among the stored predefined states or a determined parameter value among the determined parameter values.

Patent Claim 11 (similarly Claim 16): A system comprising: a memory comprising instructions; and one or more computer processors, wherein the instructions, when executed by the one or more computer processors, cause the system to perform operations comprising: accessing a transcript of a conversation, the transcript including text for a plurality of sentences; separately for each sentence in the plurality of sentences and by a first machine-learning (ML) model, classifying the sentence to determine if that sentence is associated with a predefined state among a plurality of predefined states; separately for each sentence associated with the predefined state, extracting, by a second ML model, a parameter value associated with the predefined state for that sentence; storing classifications of predefined states for the plurality of sentences of the transcript and extracted parameter values for the plurality of sentences; and causing presentation of a user interface (UI) with an option to search transcripts for sentences based on at least one of a predefined state among the plurality of predefined states or an extracted parameter value among the extracted parameter values.

Instant Claim 9 (similarly Claim 16): The system of claim 8, wherein the operations further comprise: by the one or more machine-learning models, for each sentence classified into one or more of the plurality of predefined states, normalizing the determined corresponding parameter value based on a predetermined format.

Patent Claim 12 (similarly Claim 17): The system as recited in claim 11, wherein the instructions further cause the one or more computer processors to perform operations comprising: normalizing, by a third ML model, the extracted parameter value to convert the extracted parameter value to a predefined format.

Instant Claim 10 (similarly Claim 17): The system of claim 8, wherein the operations further comprise: by the one or more machine-learning models, for each sentence classified into one or more of the plurality of predefined states, converting the determined corresponding parameter value into a predetermined format.

Patent Claim 12 (similarly Claim 17): The system as recited in claim 11, wherein the instructions further cause the one or more computer processors to perform operations comprising: normalizing, by a third ML model, the extracted parameter value to convert the extracted parameter value to a predefined format.

Instant Claim 11 (similarly Claim 18): The system of claim 8, wherein the operations further comprise: selecting the plurality of sentences from the transcript based on a filter that specifies a party that spoke the plurality of sentences.

Patent Claim 7: The method as recited in claim 1, further comprising: before classifying each sentence, applying a filter to select sentences from the transcript for the classifying, the filter comprising at least one of selecting sentences spoken by one party or selecting a period of time within the conversation.

Instant Claim 12 (similarly Claim 19): The system of claim 8, wherein the operations further comprise: selecting the plurality of sentences from the transcript based on a filter that specifies a time period in which the plurality of sentences was spoken.

Patent Claim 7: The method as recited in claim 1, further comprising: before classifying each sentence, applying a filter to select sentences from the transcript for the classifying, the filter comprising at least one of selecting sentences spoken by one party or selecting a period of time within the conversation.

Instant Claim 13 (similarly Claim 20): The system of claim 8, wherein: the one or more machine-learning models are trained based on training data that includes transcripts of conversations in which multiple turns are identified by the training data and predefined states of the multiple turns are identified by the training data.

Patent Claim 6: The method as recited in claim 1, wherein the second ML model is obtained by training a second ML program with training data, the training data comprising a set of transcripts from conversations, turns identified within the conversations, and parameter values extracted from the conversations.
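The chart's independent claims describe a two-stage flow: a first model classifies each sentence into a predefined state, and a second model extracts a parameter value for sentences in that state. Purely as a hypothetical toy sketch of that flow, with keyword rules standing in for the ML models (which neither the claims nor this page specify):

```python
import re
from dataclasses import dataclass
from typing import Optional

# Toy stand-ins for the claimed two-stage pipeline: a "first ML model" that
# classifies each sentence into a predefined state, and a "second ML model"
# that extracts a parameter value for that state. Keyword rules replace the
# real models; states and keywords here are invented for illustration.

STATES = {"greeting": ["hello", "hi "], "payment": ["pay", "charge", "$"]}

@dataclass
class Classified:
    sentence: str
    state: Optional[str]       # predefined state, or None if unclassified
    parameter: Optional[str]   # extracted parameter value, if any

def classify(sentence: str) -> Optional[str]:
    """Stand-in for the first ML model: map a sentence to a predefined state."""
    low = sentence.lower()
    for state, keys in STATES.items():
        if any(k in low for k in keys):
            return state
    return None

def extract(sentence: str, state: str) -> Optional[str]:
    """Stand-in for the second ML model: pull a parameter value for the state."""
    if state == "payment":
        m = re.search(r"\$\s*([\d.]+)", sentence)
        return m.group(1) if m else None
    return None

def process(transcript: list[str]) -> list[Classified]:
    """Classify each sentence, then extract a parameter for classified ones."""
    results = []
    for s in transcript:
        state = classify(s)
        param = extract(s, state) if state else None
        results.append(Classified(s, state, param))
    return results

rows = process(["Hello, thanks for calling.", "I want to pay $42.50 today."])
```

A third stage, normalizing the extracted value into a predefined format (the dependent claims' "third ML model"), would slot in after extract() in the same way.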
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM SIDDO, whose telephone number is (571) 272-4508. The examiner can normally be reached 9:00 AM-5:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IBRAHIM SIDDO/
Primary Examiner, Art Unit 2681

Prosecution Timeline

Jul 25, 2024
Application Filed
Feb 27, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592233
GAZE-BASED COMMAND DISAMBIGUATION
2y 5m to grant Granted Mar 31, 2026
Patent 12587606
METHOD FOR MANUFACTURING A DECORATIVE SHEET AND A METHOD FOR MANUFACTURING A DECORATIVE PANEL COMPRISING A DECORATIVE SHEET
2y 5m to grant Granted Mar 24, 2026
Patent 12572092
OPTICAL DEVICE, IMAGE READING DEVICE, AND ASSEMBLING METHOD
2y 5m to grant Granted Mar 10, 2026
Patent 12572573
SESSION-BASED USER AWARENESS IN LARGE LANGUAGE MODELS
2y 5m to grant Granted Mar 10, 2026
Patent 12574465
ELECTRONIC DEVICE
2y 5m to grant Granted Mar 10, 2026
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84% (97% with interview, +13.3%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 474 resolved cases by this examiner. Grant probability derived from career allow rate.
