Prosecution Insights
Last updated: April 19, 2026
Application No. 18/662,533

METHOD AND APPARATUS FOR SELF-CONSISTENCY BOOSTS CALIBRATION FOR MATH REASONING

Final Rejection — §101, §103
Filed: May 13, 2024
Examiner: PATEL, SHREYANS A
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Tencent America LLC
OA Round: 2 (Final)
Grant Probability: 89% (Favorable)
OA Rounds: 3-4
To Grant: 2y 3m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 89%, above average (359 granted / 403 resolved; +27.1% vs TC avg)
Interview Lift: +7.4%, a moderate lift (resolved cases with interview)
Typical Timeline: 2y 3m average prosecution (46 currently pending)
Career History: 449 total applications across all art units

Statute-Specific Performance

§101: 21.3% (-18.7% vs TC avg)
§103: 36.0% (-4.0% vs TC avg)
§102: 22.6% (-17.4% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 403 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to the 35 U.S.C. 101 abstract-idea rejection of claims 1-20 have been considered but are not found persuasive for the reasons below. See the detailed rejection below. Applicant's arguments with respect to the 35 U.S.C. 112 rejection of claims 11 and 12 have been considered and found persuasive, and that rejection has been withdrawn. Applicant's arguments with respect to 35 U.S.C. 102 for claims 1, 11, and 20 have been considered but are moot in view of the new grounds of rejection necessitated by the amendments. See the detailed rejection below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-9, 11-12, and 14-20 are rejected under 35 U.S.C. 101. Claims 1, 11, and 20 are directed to an abstract idea because they essentially recite: (1) collecting information (receiving a query and generating multiple outputs), (2) analyzing/organizing the information (clustering the outputs), (3) computing a score (the calibration score), and (4) making a decision using a rule/threshold (answer vs. ask the user to rephrase). These are information-processing steps that fall within the USPTO's recognized groupings of abstract ideas, such as mental processes and mathematical concepts (e.g., scoring, comparing, and selecting).

The claims do not tie the abstract idea to a practical technological application and do not claim a specific improvement to computer functionality. They broadly use an LLM as a tool and then apply a generic decision rule to decide whether to output a response or ask the user to rephrase, which amounts to managing the content of information presented to a user rather than improving a technical system. The claims do not include additional elements sufficient to amount to significantly more than the judicial exception, because they recite (i) mere instructions to implement the idea on a computer and/or (ii) generic computer structure performing generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry. Viewed as a whole, these additional elements do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself. There is further no improvement to the computing device. Therefore, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Dependent claims 2, 4-9, 12, and 14-19 further recite an abstract idea performable by a human and do not amount to significantly more than the abstract idea, as they provide no steps beyond what is conventionally known in information processing:

Claims 2 and 12: organizing information (a mental sorting process) with no technical improvement.
Claims 4 and 14: a mathematical/statistical evaluation of grouped data.
Claims 5 and 15: a straightforward mathematical calculation (normalization) applied to the abstract scoring idea.
Claims 6 and 16: measuring and evaluating data group sizes, an abstract information-analysis step.
Claims 7 and 17: a basic math operation applied to the abstract scoring concept.
Claims 8 and 18: a mathematical formula applied to data group counts, without a concrete technological improvement.
Claims 9 and 19: a field-of-use limitation only.
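The four steps the rejection enumerates (collect, cluster, score, threshold) can be sketched end to end. This is an illustrative reconstruction from the claim language only, not the applicant's implementation: `sample_llm` is a hypothetical stand-in for the model call, `n=16` mirrors the N=16 setting the rejection cites from Wang24, and the majority-cluster score and 0.5 threshold are assumptions.

```python
from collections import Counter

def calibrated_answer(query, sample_llm, n=16, threshold=0.5):
    """Sketch of the claimed method. `sample_llm` is a hypothetical
    callable(query) -> answer string; the threshold is an assumed value."""
    # (1) collect: query the LLM N times
    responses = [sample_llm(query) for _ in range(n)]
    # (2) cluster: group responses sharing the same final answer
    clusters = Counter(responses)
    # (3) score: size of the largest cluster, normalized by N
    #     (one of the strategies the rejection attributes to Wang24)
    top_answer, top_size = clusters.most_common(1)[0]
    score = top_size / n
    # (4) decide: answer when confident, else ask the user to rephrase
    if score >= threshold:
        return top_answer
    return "Could you rephrase the question?"
```

When the sampler always returns the same answer, the score is 1.0 and that answer is returned; when the samples scatter across many answers, the score falls below the threshold and the rephrase prompt is returned instead.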
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-9, 11-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. ("Self-Consistency Boosts Calibration for Math Reasoning," Mar. 14, 2024, hereinafter Wang24) in view of Mirkovic et al. (US 2006/0271364).

Claims 1, 11, and 20: Wang24 teaches generating N sample responses based on the natural language input query by inputting the natural language input query into a large language model (LLM) N times, N being an integer greater than zero ([eq. 1] [2] [Settings] sample various reasoning paths r1…rN from the LLM given input x; use nucleus sampling to obtain N=16 samples (a positive integer)); organizing the N sample responses into one or more clusters ([3] obtain a set of clusters C = {c1…cC}, with each cluster ci comprising ni sampled responses with the same answer); performing a calibration process on the one or more clusters in which the calibration process determines a calibration score ([1] [3] strategies to estimate the confidence: cluster size (see eq. 3), cluster number (see eq. 2), pairwise comparison (see eq. 4)); and outputting a response to the natural language input query based on the calibration process ([1] picking one from the largest cluster as the response to each input query).

The difference between the prior art and the claimed invention is that Wang24 does not explicitly teach a processor; receiving a natural language input query; wherein, based on a determination that the calibration score is greater than or equal to a threshold, the outputted response is one of the N sample responses; and wherein, based on a determination that the calibration score is less than the threshold, the outputted response is an output requesting a user to rephrase the natural language input query.

Mirkovic teaches a method performed by at least one processor ([Fig. 1] processor), the method comprising: receiving a natural language input query ([0004] input utterance); wherein, based on a determination that the calibration score is greater than or equal to a threshold, the outputted response is one of the N sample responses ([0117-0119] [0142] [claim 7] defining a first confidence threshold to specify a level at which a highest scoring dialogue move candidate is accepted); and wherein, based on a determination that the calibration score is less than the threshold, the outputted response is an output requesting a user to rephrase the natural language input query ([0121-0122] [0142] if the confidence is low, give the user a specific hint how to rephrase his request).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the self-consistency calibration for math reasoning taught by Wang24 with the teachings of Mirkovic, namely a processor; receiving a natural language input query; outputting one of the N sample responses when the calibration score is greater than or equal to a threshold; and outputting a request that the user rephrase the query when the calibration score is less than the threshold, for the benefit of preventing noun phrases from being resolved until the appropriate device has been determined ([0006] Mirkovic).

Claims 2 and 12: Wang24 further teaches the method according to claim 1, wherein the organizing the N sample responses into the one or more clusters comprises organizing each sample response having a same answer into a same cluster ([1] [3] obtaining a set of clusters, with each ci comprising ni sampled responses with the same answer).
Claims 4 and 14: Wang24 further teaches the method according to claim 1, wherein the calibration process comprises determining a calibration score based on a number of clusters ([3] obtain a set of clusters c1…cC, with each cluster ci comprising ni sampled responses with the same answer; the characteristics of these clusters are used to estimate the confidence of the LLM).

Claims 5 and 15: Wang24 further teaches the method according to claim 4, wherein the calibration score is normalized based on dividing the number of clusters by N ([3] divide the cluster number by the sample size N to normalize the score into the range [0,1]).

Claims 6 and 16: Wang24 further teaches the method according to claim 1, wherein the calibration process comprises determining a calibration score based on a cluster size of each of the one or more clusters ([3] three different ways to calibrate a set of clusters: cluster number; cluster size (the number of samples within a specific cluster, computed as a proportion of the total sample size to normalize the score range); pairwise comparison).

Claims 7 and 17: Wang24 further teaches the method according to claim 6, wherein the cluster size of each of the one or more clusters is normalized by dividing each of the one or more clusters by N ([3] divide the cluster size by the sample size N to normalize the score into the range [0,1]).

Claims 8 and 18: Wang24 further teaches the method according to claim 1, wherein the calibration process comprises, for each cluster: determining a cluster size of each cluster from the one or more clusters ([3] cluster size), and determining, for each cluster, the calibration score based on a product of (i) the cluster size of a respective cluster divided by a sum of the cluster size of the respective cluster and the cluster size of a first cluster other than the respective cluster with (ii) the cluster size of the respective cluster divided by a sum of the cluster size of the respective cluster and the cluster size of a second cluster other than the respective cluster ([3] [eq. 3] ni is the number of samples; N is the normalizing sample size; Fcs(x, ci) = ni/N).

Claim 9: Wang24 further teaches the method of claim 1, wherein the input query is a word math problem ([Abstract] math reasoning tasks).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
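The three confidence strategies the claim chart maps from Wang24 (cluster number, cluster size, pairwise comparison) can be written down directly from the rejection's descriptions of claims 5/15, 6/16, and 8/18. A minimal sketch, assuming cluster sizes n_1…n_C over N total samples; the formulas follow the rejection's wording and may differ in detail from Wang24's actual equations.

```python
from math import prod

def cluster_number_score(cluster_sizes):
    # Claims 5/15 as described: divide the cluster number by the
    # sample size N to normalize the score into [0, 1].
    return len(cluster_sizes) / sum(cluster_sizes)

def cluster_size_score(cluster_sizes, i):
    # Claims 6/16 as described: the size of cluster i as a
    # proportion of the total sample size N.
    return cluster_sizes[i] / sum(cluster_sizes)

def pairwise_score(cluster_sizes, i):
    # Claims 8/18 as described: the product, over every other
    # cluster j, of n_i / (n_i + n_j).
    ni = cluster_sizes[i]
    return prod(ni / (ni + nj) for j, nj in enumerate(cluster_sizes) if j != i)
```

For 16 samples split into clusters of sizes [8, 4, 4], the largest cluster scores 8/16 = 0.5 under the size strategy and (8/12)·(8/12) ≈ 0.44 under the pairwise strategy.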
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHREYANS A PATEL, whose telephone number is (571) 270-0689. The examiner can normally be reached Monday-Friday, 8am-5pm PST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Pierre Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

SHREYANS A. PATEL
Primary Examiner, Art Unit 2659
/SHREYANS A PATEL/
Examiner, Art Unit 2659

Prosecution Timeline

May 13, 2024
Application Filed
Nov 20, 2025
Non-Final Rejection — §101, §103
Jan 20, 2026
Examiner Interview Summary
Jan 20, 2026
Applicant Interview (Telephonic)
Feb 13, 2026
Response Filed
Feb 24, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586597: ENHANCED AUDIO FILE GENERATOR
Granted Mar 24, 2026; 2y 5m to grant

Patent 12586561: TEXT-TO-SPEECH SYNTHESIS METHOD AND SYSTEM, A METHOD OF TRAINING A TEXT-TO-SPEECH SYNTHESIS SYSTEM, AND A METHOD OF CALCULATING AN EXPRESSIVITY SCORE
Granted Mar 24, 2026; 2y 5m to grant

Patent 12548549: ON-DEVICE PERSONALIZATION OF SPEECH SYNTHESIS FOR TRAINING OF SPEECH RECOGNITION MODEL(S)
Granted Feb 10, 2026; 2y 5m to grant

Patent 12548583: ACOUSTIC CONTROL APPARATUS, STORAGE MEDIUM AND ACOUSTIC CONTROL METHOD
Granted Feb 10, 2026; 2y 5m to grant

Patent 12536988: SPEECH SYNTHESIS METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Granted Jan 27, 2026; 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 89%
With Interview: 96% (+7.4%)
Median Time to Grant: 2y 3m
PTA Risk: Moderate
Based on 403 resolved cases by this examiner. Grant probability derived from career allow rate.
