Prosecution Insights
Last updated: April 19, 2026
Application No. 18/397,510

Text error correction method, system, device, and storage medium

Non-Final OA: §101, §103, §112
Filed: Dec 27, 2023
Examiner: TSUI, WILSON W
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: China Unicom (Guangdong) Industrial Internet Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 62% (Moderate)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Grants 62% of resolved cases.

Career Allow Rate: 62% (365 granted / 593 resolved; +6.6% vs TC avg)
Interview Lift: strong, +58.1% (resolved cases with interview vs. without)
Typical Timeline: 4y 0m avg prosecution; 44 currently pending
Career History: 637 total applications across all art units

Statute-Specific Performance

§101: 15.5% (-24.5% vs TC avg)
§103: 52.5% (+12.5% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
TC figures are Tech Center average estimates. Based on career data from 593 resolved cases.

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings filed on 12/27/2023 are accepted.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/27/2023 is being considered by the examiner.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

With regards to claim 7: "… text preprocessing module is configured to split …", "… the phoneme extractor is configured to acquire …", "the phoneme feature encoder is configured to encode …", "the language feature encoder is configured to encode …", "… the feature combination module is configured to combine …", "the decoder is configured to decode …", "the determination model is configured to determine …", and "the text combination module is configured to combine …".

With regards to claim 8: "language model is configured to determine …", "the first perplexity determination module is configured to obtain …", "the second perplexity determination module is configured to obtain …", and "… the correct text determination module is configured to compare …".

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

With regards to claim 1: The claim recites "… which are synchronously updated respective parameters thereof during training of the error correction model by inputting a text sample into the error correction …". It is unclear/indefinite what the sources of data for the synchronously updated parameters are. For example, it is unclear whether each member of the following set [a phoneme extractor, a phoneme feature encoder, a language feature encoder, a feature combination module and a decoder] synchronously provides the parameter data, whether the decoder provides data to synchronously update parameters, or whether the one short sentence is the source of the parameters.
The examiner will assume, for purposes of examination, that the members of the set referenced above are the sources of the data used to update the respective parameters.

Additionally, claim 1 also recites "… obtaining a short sentence after error correction …". It is unclear/indefinite whether the 'short sentence after error correction' is necessarily a result of error correction performed upon the short sentence, due to the wording "after error correction". In other words, any short sentence independent of error correction can be encompassed by the claim language, which appears to be inconsistent with how the instant application's specification describes the error correction. For purposes of examination, the examiner will assume that the applicant intended 'an error-corrected short sentence resulting from the error correction applied upon the short sentence'.

Additionally, claim 1 also recites "determining a text perplexity after error correction as first perplexity" and "determining a text perplexity of the short sentence before error correction as second perplexity". According to the instant application's specification, 'perplexity' is a 'smoothness' and 'reasonableness' of processed text. Yet 'smoothness' and 'reasonableness' are subjective and/or relative terms, which renders the claim indefinite. Also, according to the instant application as filed, perplexity can be calculated based on criteria (however, the term 'perplexity' is not defined as such to encompass a calculation). The examiner will assume for purposes of examination that the applicant intended a more objective way to derive perplexity rather than the relative nature of assessing 'smoothness' or 'reasonableness' (such as perplexity represented as a value through a type of objective calculation).
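For context on the "objective calculation" the examiner assumes: perplexity, as conventionally defined for language models, is the exponential of the negative mean log-probability that the model assigns to a sentence's tokens. A minimal sketch of that convention (the function name and input format are illustrative, not taken from the application):

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(-mean log-probability) over a sentence's tokens.

    token_log_probs: natural-log probabilities a language model assigns
    to each token of the sentence (hypothetical input format).
    """
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# A fluent sentence gets higher token probabilities, hence lower perplexity.
fluent = [math.log(0.5), math.log(0.4), math.log(0.6)]
garbled = [math.log(0.05), math.log(0.1), math.log(0.02)]
print(perplexity(fluent) < perplexity(garbled))  # True
```

Under this convention, perplexity is a single numeric value, which sidesteps the subjectivity concern raised for 'smoothness' and 'reasonableness'.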
Additionally, claim 1 also recites "… comparing the first perplexity and the second perplexity of the short sentence to determine whether the short sentence before error correction or the short sentence after error correction as a correct short sentence …". This limitation can be interpreted such that merely performing the comparison (independent of its result) is sufficient to determine whether the short sentence before error correction or the short sentence after error correction is the correct short sentence. This appears to differ from the instant application's specification, which discloses assessing the result of the comparison against a specific criterion to make the assessment of a 'correct short sentence'. For purposes of examination, the examiner will assume the applicant intended determining the result of a comparison between the first perplexity and the second perplexity and objectively (rather than relatively) evaluating that result to make the assessment of correctness.

Additionally, claim 1 also recites "combining the correct short sentence of each of the short sentences …". This limitation appears to reference a plurality of short sentences that have undergone the SAME error correction operations as the 'short sentence' initially introduced earlier. However, the claim does not recite repeating THE error correction operations for subsequent short sentences of the short sentences. Furthermore, the claim lacks an antecedent reference to THE error correction operations, as it uses a phrase that can encompass different/other 'error correction operations' (in contrast, the instant application's specification appears to support repeating the same error correction operations). For purposes of examination, the examiner will assume the applicant intended to reference the same error correction operations, and also intended to invoke and repeat the same set of error correction operations for each subsequent short sentence (of the short sentences) after each 'correct short sentence' assessment step.

With regards to claim 2, the claim recites (with respect to first perplexity) "… inputting the short sentence after error correction into two language models …". This limitation recites the phrase "the short sentence"; however, there are multiple instances of the short sentence earlier in the claim (one being the original first instance of the short sentence, and another being the corrected version of the short sentence). Since this limitation occurs after error correction, the examiner will interpret (for purposes of examination) 'the short sentence' to be the corrected version of the short sentence.

Likewise, claim 2 recites (with respect to second perplexity) "… inputting the short sentence after error correction into the two language models …". This limitation again recites the phrase "the short sentence", which has multiple antecedents earlier in the claim. Since this limitation concerns the state before error correction, the examiner will interpret (for purposes of examination) 'the short sentence' to be the original acquired version of the short sentence. The examiner suggests the applicant consider uniquely labelling the different versions of the short sentence, such as 'the acquired short sentence' and 'the corrected short sentence', for purposes of clarity and definiteness.

With regards to claim 3, it does not resolve the deficiencies explained for the rejections of claims 1 and 2, and it is thus rejected under similar rationale by virtue of the subject matter upon which it depends.

With regards to claim 5, it does not resolve the deficiencies explained for claim 1, and it is thus rejected under similar rationale (as explained for the rejection of claim 1) by virtue of the subject matter upon which it depends.

With regards to claim 6, it does not resolve the deficiencies explained for claim 1, and it is thus rejected under similar rationale by virtue of the subject matter upon which it depends.

With regards to claim 7, it is rejected under similar rationale as claim 1. With regards to claim 8, it is rejected under similar rationale as claim 2. With regards to claim 9, it is rejected under similar rationale as claim 1. With regards to claim 10, it is rejected under similar rationale as claim 1.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claim does not fall within at least one of the four categories of patent eligible subject matter because: with regards to claim 10, the claim recites "a computer readable storage medium", which encompasses signals per se, since the instant application's specification does not define the term; the term can thus be openly interpreted to encompass different forms of mediums, such as signal media. See MPEP 2106:

"Non-limiting examples of claims that are not directed to any of the statutory categories include: … Transitory forms of signal transmission (often referred to as "signals per se"), such as a propagating electrical or electromagnetic signal or carrier wave … Even when a product has a physical or tangible form, it may not fall within a statutory category. For instance, a transitory signal, while physical and real, does not possess concrete structure that would qualify as a device or part under the definition of a machine, is not a tangible article or commodity under the definition of a manufacture (even though it is man-made and physical in that it exists in the real world and has tangible causes and effects), and is not composed of matter such that it would qualify as a composition of matter. Nuijten, 500 F.3d at 1356-1357, 84 USPQ2d at 1501-03. As such, a transitory, propagating signal does not fall within any statutory category. Mentor Graphics Corp. v. EVE-USA, Inc., 851 F.3d 1275, 1294, 112 USPQ2d 1120, 1133 (Fed. Cir. 2017); Nuijten, 500 F.3d at 1356-1357, 84 USPQ2d at 1501-03".

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 7, 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Baker (US Application: US 20040148169, published: Jul. 29, 2004, filed: Jan. 23, 2003) in view of Itsui (US Application: US 20190080688, published: Mar. 14, 2019, filed: Oct. 8, 2015).

With regards to claim 1, Baker teaches a text error correction method (paragraph 0073: a computer-implemented method is implemented using at least a storage medium and processor), comprising: splitting a text obtained by automatic speech recognition into a plurality of short sentences (paragraphs 0029, 0056: speech/text are broken into sequences of units of speech); and executing error correction operations on each of the short sentences (paragraphs 0007 and 0008: correction can be performed), comprising inputting one short sentence of the short sentences into a trained error correction model (paragraph 0057: models used for correction can be trained), wherein the error correction model includes a phoneme extractor and a phoneme feature encoder (paragraphs 0058, 0082: phonemes can be extracted and encoded as a sequence of acoustic observations and labels), a language feature encoder (paragraph 0049: a language model is implemented to generate an encoding of a sequence of linguistic elements subject to a grammar or statistical model), a feature combination module (paragraph 0087: phoneme observation data is combined with language/grammar for matching, which can include concatenating acoustic data and grammar models), and a decoder (paragraph 0087: the data is 'decoded'/correlated to a match determination), which are synchronously updated respective parameters thereof during training of the error correction model by inputting a text sample into the error correction model (paragraphs 0088 and 0089: model data/metadata is updated); acquiring, by the phoneme extractor, phoneme information of the short sentence (paragraphs 0058, 0082: phonemes can be extracted and encoded as a sequence of acoustic observations and labels); converting, by the phoneme feature encoder, the phoneme information into a phoneme feature through encoding the phoneme information (paragraphs 0058, 0082: phonemes can be extracted and encoded as a sequence of acoustic observations and labels); obtaining, by the language feature encoder, a language feature of the short sentence through encoding the short sentence (paragraph 0049: a language model is implemented to generate an encoding of a sequence of linguistic elements subject to a grammar or statistical model); and combining, by the feature combination module, the phoneme feature and the language feature to obtain a combined feature of the short sentence (paragraph 0087: phoneme observation data is combined with language/grammar for matching, which can include concatenating acoustic data and grammar models).
However, Baker does not expressly teach decoding, by the decoder, the combined feature to conduct error correction on the short sentence, and obtaining a short sentence after error correction; determining text perplexity of the short sentence after error correction as first perplexity; determining text perplexity of the short sentence before error correction as second perplexity; comparing the first perplexity and the second perplexity of the short sentence to determine whether the short sentence before error correction or the short sentence after error correction as a correct short sentence; and combining the correct short sentence of each of the short sentences in order into a correct text, after all of the short sentences executed error correction operations.

Yet Itsui teaches decoding, by the decoder, the combined feature to conduct error correction on the short sentence, and obtaining a short sentence after error correction; determining text perplexity of the short sentence after error correction as first perplexity; determining text perplexity of the short sentence before error correction as second perplexity; comparing the first perplexity and the second perplexity of the short sentence to determine whether the short sentence before error correction or the short sentence after error correction as a correct short sentence; and combining the correct short sentence of each of the short sentences in order into a correct text, after all of the short sentences executed error correction operations (Fig. 8, paragraphs 0068-0073, 0080-0088: perplexity is calculated for different phrases of the short sentence, including an original representation and other variant/synonym (interpreted as 'corrected') phrases, and a combination of phrases is selected based upon comparison of perplexity values (likelihood is based on perplexity); depending on the values associated with perplexity and a threshold, particular variants are selected with respect to being higher or lower than the threshold).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have modified Baker's ability to acquire and extract phoneme and language feature data to derive sentence/phrase data such that the sentence/phrase data is further broken down and combined through perplexity-value-based comparisons, as taught by Itsui. The combination would have allowed Baker to have improved speech recognition results and improved search accuracy by reducing misrecognition (Itsui, paragraph 0003).

With regards to claim 4 (the text error correction method according to claim 1), the combination of Baker and Itsui teaches wherein the method of comparing the first perplexity and the second perplexity to determine whether the short sentence after error correction or the short sentence before error correction as the correct short sentence includes: determining whether the first perplexity is equal to or less than the second perplexity, setting the short sentence after error correction as the correct short sentence if YES, and setting the short sentence before error correction as the correct short sentence if NO (as similarly explained in the rejection of claim 1, Itsui was explained to teach that a correct short sentence is selected compared to a variant of the short sentence based upon value comparison and a threshold), and it is rejected under similar rationale.
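The claim 4 selection rule described in the rejection reduces to a single threshold comparison: keep the corrected sentence when its perplexity (first perplexity) is equal to or less than the original's (second perplexity), otherwise keep the original. A sketch under that reading (names and example values are hypothetical):

```python
def select_correct(original, corrected, ppl_before, ppl_after):
    """Claim 4's rule as the rejection summarizes it: the corrected
    sentence wins ties and lower-perplexity outcomes; otherwise the
    original sentence is kept unchanged."""
    return corrected if ppl_after <= ppl_before else original

# Correction lowered perplexity, so the corrected sentence is selected.
print(select_correct("recognise speach", "recognize speech", 120.0, 35.0))
```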
With regards to claim 7, the combination of Baker and Itsui teaches a text error correction system, comprising: a text preprocessing module; an error correction model; a determination model; and a text combination module, wherein the text preprocessing module is configured to split a text obtained by automatic speech recognition into short sentences and input each of the short sentences to a trained error correction model; the error correction model includes a phoneme extractor, a phoneme feature encoder, a language feature encoder, a feature combination module, and a decoder, which are synchronously updated respective parameters thereof during training of the error correction model by inputting a text sample into the error correction model; the phoneme extractor is configured to acquire phoneme information of each of the short sentences, input the phoneme information of each short sentence into the phoneme feature encoder, and is further configured to directly input each short sentence into the language feature encoder and the determination model; the phoneme feature encoder is configured to encode the phoneme information of each of the short sentences to convert the phoneme information of each short sentence into a phoneme feature thereof; the language feature encoder is configured to encode each of the short sentences to obtain a language feature thereof; the feature combination module is configured to combine the phoneme feature and the language feature of each of the short sentences to obtain a combined feature thereof, and input the combined feature of each of the short sentences into the decoder; the decoder is configured to decode the combined feature of each of the short sentences to conduct an error correction to obtain a short sentence after error correction, and is further configured to input the short sentence after error correction into the determination model; the determination model is configured to determine text perplexity of the short sentence after error correction of each short sentence as first perplexity thereof, determine text perplexity of the short sentence before error correction of each short sentence as second perplexity thereof, and is further configured to compare the first perplexity and the second perplexity of each of the short sentences to determine whether the short sentence before error correction or the short sentence after error correction as a correct short sentence; and the text combination module is configured to combine the correct short sentence of each of the short sentences in order into a correct text, as similarly explained in the rejection of claim 1, and it is rejected under similar rationale.

With regards to claim 9, the combination of Baker and Itsui teaches a computer device, comprising a storage and a processor, wherein the storage stores a computer program executable by the processor to perform the text error correction method according to claim 1, as similarly explained in the rejection of claim 1, and it is rejected under similar rationale.
With regards to claim 10, the combination of Baker and Itsui teaches a computer-readable storage medium, which stores a computer program executable by a processor to perform the text error correction method according to claim 1, as similarly explained in the rejection of claim 1, and it is rejected under similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Xu et al. (US Application: US 20060206313): teaches a dictionary learning method and word encoding into a small-size dictionary.

Yoon et al. (US Application: US 2021/0097242): teaches sentence correction and comparing perplexity values for a particular language model.

Gao et al. (US Application: US 20020188446): teaches a modified language model that is used to generate probabilities used in a perplexity calculation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILSON W TSUI, whose telephone number is (571) 272-7596. The examiner can normally be reached Monday - Friday, 9 am - 6 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILSON W TSUI/ Primary Examiner, Art Unit 2172
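To make the data flow the rejections discuss concrete, the module chain recited in claims 1 and 7 (phoneme extractor, phoneme feature encoder, language feature encoder, feature combination module, decoder) can be sketched with stub functions. Every function body below is a placeholder, not the application's or the cited references' implementation; only the wiring between modules reflects the claims:

```python
def phoneme_extractor(sentence):
    # Acquire phoneme information of the short sentence (stub).
    return ["ph1", "ph2"]

def phoneme_feature_encoder(phonemes):
    # Encode phoneme information into a phoneme feature (stub).
    return ("phoneme-feature", tuple(phonemes))

def language_feature_encoder(sentence):
    # Encode the short sentence into a language feature (stub).
    return ("language-feature", sentence)

def feature_combination(phoneme_feature, language_feature):
    # Combine the two features into a combined feature (stub).
    return (phoneme_feature, language_feature)

def decoder(combined_feature):
    # Decode the combined feature to conduct error correction (stub).
    return "corrected sentence"

def correct_sentence(sentence):
    """Wire the modules together in the order the claims recite."""
    pf = phoneme_feature_encoder(phoneme_extractor(sentence))
    lf = language_feature_encoder(sentence)
    return decoder(feature_combination(pf, lf))

print(correct_sentence("one short sentence"))  # prints: corrected sentence
```

In the claimed method, the decoder's output would then feed the perplexity comparison (first vs. second perplexity) before the per-sentence results are recombined into the full corrected text.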

Prosecution Timeline

Dec 27, 2023
Application Filed
Dec 14, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602535: COMMENT DISPLAY METHOD AND APPARATUS OF A DOCUMENT, AND DEVICE AND MEDIUM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12589766: AUTONOMOUS DRIVING SYSTEM AND METHOD OF CONTROLLING SAME (granted Mar 31, 2026; 2y 5m to grant)
Patent 12570284: AUTONOMOUS DRIVING METHOD AND DEVICE FOR A MOTORIZED LAND VEHICLE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12552376: VEHICLE CONTROL APPARATUS (granted Feb 17, 2026; 2y 5m to grant)
Patent 12511993: SYSTEMS AND METHODS FOR CONFIGURING A HIERARCHICAL TRAFFIC MANAGEMENT SYSTEM (granted Dec 30, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 62% (99% with interview, a +58.1% lift)
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 593 resolved cases by this examiner. Grant probability derived from career allow rate.
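The headline grant probability can be reproduced from the career counts shown in the Examiner Intelligence section (365 granted of 593 resolved); a quick check of the arithmetic:

```python
# Career allow rate from the counts reported above.
granted, resolved = 365, 593
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # prints: 61.6%
```

Rounded to the nearest whole percent, this matches the displayed 62% figure. (The interview-adjusted 99% and +58.1% lift come from the dashboard's own interview statistics, whose underlying counts are not shown here.)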
