Prosecution Insights
Last updated: April 19, 2026
Application No. 18/766,142

System and Method for Secure Data Augmentation for Speech Processing Systems

Non-Final OA: §112, DP
Filed
Jul 08, 2024
Examiner
GAY, SONIA L
Art Unit
2657
Tech Center
2600 — Communications
Assignee
Microsoft Technology Licensing, LLC
OA Round
1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 3y 0m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 82% (701 granted / 855 resolved; +20.0% vs TC avg; above average)
Interview Lift: +11.4% (moderate) among resolved cases with interview
Typical Timeline: 3y 0m avg prosecution; 33 currently pending
Career History: 888 total applications across all art units

Statute-Specific Performance

§101: 10.2% (-29.8% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§103: 50.6% (+10.6% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 855 resolved cases

Office Action

§112, DP
DETAILED ACTION

This action is in response to the initial filing of application no. 18/766,142 on 07/08/2024. Claims 1-17 are still pending in this application, with claims 1, 7 and 13 being independent.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Aside from the non-prior art rejections, the prior art fails to teach or suggest in reasonable combination the limitations recited in independent claims 1, 7 and 13. For example, Gkoulalas-Divanis et al. (US 11,217,223) ("GK") discloses the following limitations recited by each of the independent claims: receiving an input speech signal and extracting a speaker embedding from the input speech signal (Fig. 4, 400 and 410 and Fig. 7, 702; column 10 line 54 - column 11 line 46, column 28 line 63 - column 29 line 4); receiving a transcription of the input speech signal and generating an obscured transcription for the transcription, wherein the obscured transcription includes representations of sensitive content from the transcript (Fig. 4, 420, 430, 440, 450, 460 and Fig. 7, 703, 704, 705; column 11 line 47 - column 16 line 45, column 29 lines 5-14); extracting acoustic properties from the input speech signal (Fig. 4, 410; column 11 lines 3-30); and generating a synthetic speech signal using a synthetic speaker embedding (Fig. 4, 470, 480 and 490 and Fig. 7, 706-708; column 16 line 46 - column 18 line 3, column 29 lines 15-27).
Yet, GK fails to teach or suggest in reasonable combination the following limitations recited by each of the independent claims: generating an obscured speech signal based upon, at least in part, the extracted speaker embedding, wherein the obscured speech signal includes obscured representations of sensitive content from the input speech signal; generating the synthetic speech signal from the obscured speech signal using the synthetic speaker embedding; and augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-6, 8, 9, 12, 14, 15 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 1-6 recite the limitation "the obscured synthetic speech signal." There is insufficient antecedent basis for this limitation in the claim.

Claims 2, 8 and 14 recite the following limitation: "wherein the sensitive content from the input speech signal and the transcription includes personally identifiable information (PII) and/or protected health information (PHI)." It is unclear if the limitations are to be interpreted as conjunctive (and) or disjunctive (or). For definiteness, the language could be amended as follows: "wherein the sensitive content from the input speech signal and the transcription includes at least one of personally identifiable information (PII) or protected health information (PHI)."
Claims 3, 9 and 15 (and dependent claims 6, 12 and 17) recite the following limitation: "wherein extracting the acoustic properties from the input speech signal includes: extracting an acoustic embedding from the input speech signal; estimating acoustic metrics within the input speech signal; extracting a noise spectrum from the input speech signal; measuring spectral balance within the input speech signal; and/or measuring a speaking rate within the input speech signal." It is unclear if the limitations are to be interpreted as conjunctive (and) or disjunctive (or). For definiteness, the language could be amended as follows: "wherein extracting the acoustic properties from the input speech signal includes at least one of: extracting an acoustic embedding from the input speech signal; estimating acoustic metrics within the input speech signal; extracting a noise spectrum from the input speech signal; measuring spectral balance within the input speech signal; or measuring a speaking rate within the input speech signal."

Claims 6, 12 and 17 recite the following limitation: "wherein augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties includes: applying a speaking rate augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties; applying a room impulse response to the obscured speech signal based upon, at least in part, the extracted acoustic properties; adding a noise signal to the obscured speech signal based upon, at least in part, the extracted acoustic properties; and/or applying a spectral balance augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties." It is unclear if the limitations are to be interpreted as conjunctive (and) or disjunctive (or).
For definiteness, the language could be amended as follows: "wherein augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties includes at least one of: applying a speaking rate augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties; applying a room impulse response to the obscured speech signal based upon, at least in part, the extracted acoustic properties; adding a noise signal to the obscured speech signal based upon, at least in part, the extracted acoustic properties; or applying a spectral balance augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties."

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 2-7 and 9-20 of U.S. Patent No. 12,033,614. Although the claims at issue are not identical, they are not patentably distinct from each other. The claim mapping is as follows.

Current Application

1. A computer-implemented method, executed on a computing device, comprising: receiving an input speech signal; receiving a transcription of the input speech signal; extracting a speaker embedding from the input speech signal; extracting acoustic properties from the input speech signal; generating an obscured transcription from the transcription, wherein the obscured transcription includes obscured representations of sensitive content from the transcription; generating an obscured speech signal based upon, at least in part, the extracted speaker embedding and the obscured transcription, wherein the obscured speech signal includes obscured representations of sensitive content from the input speech signal; generating a synthetic speech signal from the obscured speech signal using a synthetic speaker embedding; and augmenting the obscured synthetic speech signal based upon, at least in part, the extracted acoustic properties.

2. The computer-implemented method of claim 1, wherein the sensitive content from the input speech signal and the transcription includes personally identifiable information (PII) and/or protected health information (PHI).

3. The computer-implemented method of claim 1, wherein extracting the acoustic properties from the input speech signal includes: extracting an acoustic embedding from the input speech signal; estimating acoustic metrics within the input speech signal; extracting a noise spectrum from the input speech signal; measuring spectral balance within the input speech signal; and/or measuring a speaking rate within the input speech signal.

4. The computer-implemented method of claim 1, further comprising: discarding the input speech signal in response to generating the obscured speech signal.

5. The computer-implemented method of claim 1, wherein generating the obscured speech signal includes: modifying the extracted speaker embedding, thus defining a synthetic speaker embedding.

6. The computer-implemented method of claim 3, wherein augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties includes: applying a speaking rate augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties; applying a room impulse response to the obscured speech signal based upon, at least in part, the extracted acoustic properties; adding a noise signal to the obscured speech signal based upon, at least in part, the extracted acoustic properties; and/or applying a spectral balance augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties.

7. A computing system comprising: a memory; and a processor configured to receiving an input speech signal, wherein the processor is further configured to receive a transcription of the input speech signal, wherein the processor is further configured to extract a speaker embedding from the input speech signal, wherein the processor is further configured to extract acoustic properties from the input speech signal, wherein the processor is further configured to generate an obscured transcription from the transcription, wherein the obscured transcription includes obscured representations of sensitive content from the transcription, wherein the processor is further configured to generate an obscured speech signal based upon, at least in part, the extracted speaker embedding and the obscured transcription, wherein the obscured speech signal includes obscured representations of sensitive content from the input speech signal, wherein the processor is further configured to generate a synthetic speech signal from the obscured speech signal using a synthetic speaker embedding, and wherein the processor is further configured to augment the obscured speech signal based upon, at least in part, the extracted acoustic properties.

8. The computing system of claim 7, wherein the sensitive content from the input speech signal and the transcription includes personally identifiable information (PII) and/or protected health information (PHI).

9. The computing system of claim 7, wherein extracting the acoustic properties from the input speech signal includes: extracting an acoustic embedding from the input speech signal; estimating acoustic metrics within the input speech signal; extracting a noise spectrum from the input speech signal; measuring spectral balance within the input speech signal; and/or measuring a speaking rate within the input speech signal.

10. The computing system of claim 7, wherein the processor is further configured to: discard the input speech signal in response to generating the obscured speech signal.

11. The computing system of claim 7, wherein generating the obscured speech signal includes: modifying the extracted speaker embedding, thus defining a synthetic speaker embedding.

12. The computing system of claim 9, wherein augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties includes: applying a speaking rate augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties; applying a room impulse response to the obscured speech signal based upon, at least in part, the extracted acoustic properties; adding a noise signal to the obscured speech signal based upon, at least in part, the extracted acoustic properties; and/or applying a spectral balance augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties.

13. A computer program product residing on a non-transitory computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations comprising: receiving an input speech signal; receiving a transcription of the input speech signal; extracting a speaker embedding from the input speech signal; extracting acoustic properties from the input speech signal; generating an obscured transcription from the transcription, wherein the obscured transcription includes obscured representations of sensitive content from the transcription; generating an obscured speech signal based upon, at least in part, the extracted speaker embedding and the obscured transcription, wherein the obscured speech signal includes obscured representations of sensitive content from the input speech signal; generating a synthetic speech signal from the obscured speech signal using a synthetic speaker embedding; discarding the input speech signal in response to generating the obscured speech signal; and augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties.

14. The computer program product of claim 13, wherein the one or more sensitive content portions include: personally identifiable information (PII); and/or protected health information (PHI).

15. The computer program product of claim 13, wherein extracting the acoustic properties from the input speech signal includes: extracting an acoustic embedding from the input speech signal; estimating acoustic metrics within the input speech signal; extracting a noise spectrum from the input speech signal; measuring spectral balance within the input speech signal; and/or measuring a speaking rate within the input speech signal.

16. The computer program product of claim 13, wherein generating the extracted speaker embedding using a speaker embedding modification system includes modifying the extracted speaker embedding until a speaker verification system is unable to verify a speaker's identity using the synthetic speaker embedding.

17. The computer program product of claim 15, wherein augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties includes: applying a speaking rate augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties; applying a room impulse response to the obscured speech signal based upon, at least in part, the extracted acoustic properties; adding a noise signal to the obscured speech signal based upon, at least in part, the extracted acoustic properties; and/or applying a spectral balance augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties.

US 12,033,614
1. A computer-implemented method, executed on a computing device, comprising: receiving an input speech signal; receiving a transcription of the input speech signal; extracting a speaker embedding from the input speech signal; extracting acoustic properties from the input speech signal; generating an obscured transcription from the transcription, wherein the obscured transcription includes obscured representations of sensitive content from the transcription; generating an obscured speech signal based upon, at least in part, the extracted speaker embedding and the obscured transcription, wherein generating the obscured speech signal includes defining a synthetic speaker embedding by modifying the extracted speaker embedding using a speaker embedding modification model, wherein the obscured speech signal includes obscured representations of sensitive content from the input speech signal; and augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties.

2. The computer-implemented method of claim 1, wherein the sensitive content from the input speech signal and the transcription includes personally identifiable information (PII) and/or protected health information (PHI).

3. The computer-implemented method of claim 1, wherein extracting the acoustic properties from the input speech signal includes: extracting an acoustic embedding from the input speech signal; estimating acoustic metrics within the input speech signal; extracting a noise spectrum from the input speech signal; measuring spectral balance within the input speech signal; and/or measuring a speaking rate within the input speech signal.

4. The computer-implemented method of claim 1, further comprising: discarding the input speech signal in response to generating the obscured speech signal.

5. The computer-implemented method of claim 1, wherein generating the obscured speech signal includes: modifying the extracted speaker embedding, thus defining a synthetic speaker embedding.

6. The computer-implemented method of claim 1, wherein generating the obscured speech signal includes: generating a synthetic speech signal from the obscured speech signal using a synthetic speaker embedding.

7. The computer-implemented method of claim 3, wherein augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties includes: applying a speaking rate augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties; applying a room impulse response to the obscured speech signal based upon, at least in part, the extracted acoustic properties; adding a noise signal to the obscured speech signal based upon, at least in part, the extracted acoustic properties; and/or applying a spectral balance augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties.

8. A computing system comprising: a memory; and a processor configured to receiving an input speech signal, wherein the processor is further configured to receive a transcription of the input speech signal, wherein the processor is further configured to extract a speaker embedding from the input speech signal, wherein the processor is further configured to extract acoustic properties from the input speech signal, wherein the processor is further configured to generate an obscured transcription from the transcription, wherein the obscured transcription includes obscured representations of sensitive content from the transcription, wherein the processor is further configured to generate an obscured speech signal based upon, at least in part, the extracted speaker embedding and the obscured transcription, wherein generating the obscured speech signal includes defining a synthetic speaker embedding by modifying the extracted speaker embedding using a speaker embedding modification model, wherein the obscured speech signal includes obscured representations of sensitive content from the input speech signal, and wherein the processor is further configured to augment the obscured speech signal based upon, at least in part, the extracted acoustic properties.

9. The computing system of claim 8, wherein the sensitive content from the input speech signal and the transcription includes personally identifiable information (PII) and/or protected health information (PHI).

10. The computing system of claim 8, wherein extracting the acoustic properties from the input speech signal includes: extracting an acoustic embedding from the input speech signal; estimating acoustic metrics within the input speech signal; extracting a noise spectrum from the input speech signal; measuring spectral balance within the input speech signal; and/or measuring a speaking rate within the input speech signal.

11. The computing system of claim 8, wherein the processor is further configured to: discard the input speech signal in response to generating the obscured speech signal.

12. The computing system of claim 8, wherein generating the obscured speech signal includes: modifying the extracted speaker embedding, thus defining a synthetic speaker embedding.

13. The computing system of claim 8, wherein generating the obscured speech signal includes: generating a synthetic speech signal from the obscured speech signal using a synthetic speaker embedding.

14. The computing system of claim 10, wherein augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties includes: applying a speaking rate augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties; applying a room impulse response to the obscured speech signal based upon, at least in part, the extracted acoustic properties; adding a noise signal to the obscured speech signal based upon, at least in part, the extracted acoustic properties; and/or applying a spectral balance augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties.

15. A computer program product residing on a non-transitory computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations comprising: receiving an input speech signal; receiving a transcription of the input speech signal; extracting a speaker embedding from the input speech signal; extracting acoustic properties from the input speech signal; generating an obscured transcription from the transcription, wherein the obscured transcription includes obscured representations of sensitive content from the transcription; generating an obscured speech signal based upon, at least in part, the extracted speaker embedding and the obscured transcription, wherein generating the obscured speech signal includes defining a synthetic speaker embedding by modifying the extracted speaker embedding using a speaker embedding modification model, wherein the obscured speech signal includes obscured representations of sensitive content from the input speech signal; discarding the input speech signal in response to generating the obscured speech signal; and augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties.

16. The computer program product of claim 15, wherein the one or more sensitive content portions include: personally identifiable information (PII); and/or protected health information (PHI).

17. The computer program product of claim 15, wherein extracting the acoustic properties from the input speech signal includes: extracting an acoustic embedding from the input speech signal; estimating acoustic metrics within the input speech signal; extracting a noise spectrum from the input speech signal; measuring spectral balance within the input speech signal; and/or measuring a speaking rate within the input speech signal.

18. The computer program product of claim 15, wherein generating the obscured speech signal includes: modifying the extracted speaker embedding, thus defining a synthetic speaker embedding.

19. The computer program product of claim 15, wherein modifying the extracted speaker embedding using a speaker embedding modification system includes modifying the extracted speaker embedding until a speaker verification system is unable to verify a speaker's identity using the synthetic speaker embedding.

20. The computer program product of claim 17, wherein augmenting the obscured speech signal based upon, at least in part, the extracted acoustic properties includes: applying a speaking rate augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties; applying a room impulse response to the obscured speech signal based upon, at least in part, the extracted acoustic properties; adding a noise signal to the obscured speech signal based upon, at least in part, the extracted acoustic properties; and/or applying a spectral balance augmentation to the obscured speech signal based upon, at least in part, the extracted acoustic properties.

As shown above, the limitations of claim 6 of US 12,033,614 anticipate the limitations of claim 1 of the currently pending application. Thus, claim 1 of the currently pending application is an obvious variant of claim 6 of US 12,033,614.

As shown above, the limitations of claims 2-5 and 7 of US 12,033,614 recite the limitations of claims 2-6 of the currently pending application, respectively, except for generating a synthetic speech signal from the obscured speech signal using a synthetic speaker embedding. However, claims 2-5 and 7 of US 12,033,614 further recite the following: defining a synthetic speaker embedding by modifying the extracted speaker embedding using a speaker embedding modification model.
Using the synthetic speaker embedding to generate a synthetic speech signal is considered an obvious variant. Thus, claims 2-6 of the currently pending application are obvious variants of claims 2-5 and 7 of US 12,033,614.

As shown above, the limitations of claim 13 of US 12,033,614 anticipate the limitations of claim 7 of the currently pending application. Thus, claim 7 of the currently pending application is an obvious variant of claim 13 of US 12,033,614.

As shown above, the limitations of claims 9-12 and 14 of US 12,033,614 recite the limitations of claims 8-12 of the currently pending application, respectively, except for generating a synthetic speech signal from the obscured speech signal using a synthetic speaker embedding. However, claims 9-12 and 14 of US 12,033,614 further recite the following: defining a synthetic speaker embedding by modifying the extracted speaker embedding using a speaker embedding modification model. Using the synthetic speaker embedding to generate a synthetic speech signal is considered an obvious variant. Thus, claims 8-12 of the currently pending application are obvious variants of claims 9-12 and 14 of US 12,033,614.

As shown above, the limitations of claims 15-20 of US 12,033,614 recite the limitations of claims 13-17 of the currently pending application, respectively, except for generating a synthetic speech signal from the obscured speech signal using a synthetic speaker embedding. However, claims 15-20 of US 12,033,614 further recite the following: defining a synthetic speaker embedding by modifying the extracted speaker embedding using a speaker embedding modification model. Using the synthetic speaker embedding to generate a synthetic speech signal is considered an obvious variant. Thus, claims 13-17 of the currently pending application are obvious variants of claims 15-20 of US 12,033,614.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SONIA L GAY whose telephone number is (571) 270-1951. The examiner can normally be reached Monday-Friday, 9-5 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SONIA L GAY/
Primary Examiner, Art Unit 2657
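For orientation, the pipeline the independent claims recite (extract a speaker embedding, obscure sensitive content in the transcription, synthesize speech with a modified speaker embedding, then augment using extracted acoustic properties) can be sketched in outline. Everything below is an illustrative stand-in under stated assumptions: the function names, the regex-based PII masking, and the pure-NumPy signal operations are hypothetical simplifications, not the applicant's or GK's actual models, and only two of the four disjunctive augmentations (speaking rate and additive noise) are shown.

```python
# Hypothetical sketch of the claimed anonymization pipeline. All helpers are
# illustrative stand-ins (random perturbation, regex masking, simple DSP).
import re
import numpy as np

def extract_speaker_embedding(signal: np.ndarray, dim: int = 8) -> np.ndarray:
    """Stand-in for a speaker encoder: per-band means of the waveform."""
    frames = signal[: len(signal) // dim * dim].reshape(dim, -1)
    return frames.mean(axis=1)

def obscure_transcription(text: str) -> str:
    """Mask sensitive content (here, an SSN pattern) with a placeholder token."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED-SSN]", text)

def synthesize_speaker_embedding(embedding: np.ndarray, rng) -> np.ndarray:
    """Perturb the extracted embedding so it no longer identifies the speaker."""
    scale = np.abs(embedding).mean() + 1e-8
    return embedding + rng.normal(scale=scale, size=embedding.shape)

def augment(signal: np.ndarray, noise_level: float, rate: float, rng) -> np.ndarray:
    """Two of the claimed augmentations: speaking-rate change, additive noise."""
    idx = np.clip((np.arange(int(len(signal) / rate)) * rate).astype(int),
                  0, len(signal) - 1)
    resampled = signal[idx]  # crude nearest-sample speaking-rate augmentation
    return resampled + rng.normal(scale=noise_level, size=resampled.shape)

rng = np.random.default_rng(0)
speech = rng.normal(size=16000)          # 1 s of stand-in audio at 16 kHz
emb = extract_speaker_embedding(speech)
text = obscure_transcription("My SSN is 123-45-6789.")
synthetic_emb = synthesize_speaker_embedding(emb, rng)
out = augment(speech, noise_level=0.01, rate=1.1, rng=rng)
print(text)
```

The sketch also makes the §112 ambiguity concrete: a conjunctive reading of claims 6/12/17 would require all four augmentations to run, while a disjunctive reading (as in `augment` above) permits any subset.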

Prosecution Timeline

Jul 08, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602617: DATA MANUFACTURING FRAMEWORKS FOR SYNTHESIZING SYNTHETIC TRAINING DATA TO FACILITATE TRAINING A NATURAL LANGUAGE TO LOGICAL FORM MODEL (2y 5m to grant; granted Apr 14, 2026)
Patent 12602408: STREAMING OF NATURAL LANGUAGE (NL) BASED OUTPUT GENERATED USING A LARGE LANGUAGE MODEL (LLM) TO REDUCE LATENCY IN RENDERING THEREOF (2y 5m to grant; granted Apr 14, 2026)
Patent 12602539: PROACTIVE ASSISTANCE VIA A CASCADE OF LLMS (2y 5m to grant; granted Apr 14, 2026)
Patent 12596708: SYSTEMS AND METHODS FOR AUTOMATED CODE GENERATION FOR CALCULATION BASED ON ASSOCIATED FORMAL SPECIFICATIONS (2y 5m to grant; granted Apr 07, 2026)
Patent 12591604: INTELLIGENT ASSISTANT (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 93% (+11.4%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 855 resolved cases by this examiner. Grant probability derived from career allow rate.
