Prosecution Insights
Last updated: April 19, 2026
Application No. 18/418,137

LANGUAGE CORRECTION SYSTEM, METHOD THEREFOR, AND LANGUAGE CORRECTION MODEL LEARNING METHOD OF SYSTEM

Non-Final OA: §103, §112
Filed
Jan 19, 2024
Examiner
SPOONER, LAMONT M
Art Unit
2657
Tech Center
2600 — Communications
Assignee
Llsollu Co. Ltd.
OA Round
1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 74% (445 granted / 603 resolved; +11.8% vs TC avg, above average)
Interview Lift: +11.8% (moderate lift for resolved cases with interview vs. without)
Typical Timeline: 3y 4m avg prosecution, 22 currently pending
Career History: 625 total applications across all art units
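The headline figures above reduce to simple arithmetic. A minimal sketch, using the counts shown on the dashboard; treating the +11.8% delta as additive is an assumption about how the comparison is computed:

```python
# Career allow rate and the implied Tech Center average, from the
# counts shown above (445 granted out of 603 resolved).
granted, resolved = 445, 603

allow_rate = granted / resolved        # 0.738 -> displayed as 74%
tc_delta = 0.118                       # stated "+11.8% vs TC avg"
implied_tc_avg = allow_rate - tc_delta

print(f"Career allow rate: {allow_rate:.0%}")      # 74%
print(f"Implied TC average: {implied_tc_avg:.1%}") # 62.0%
```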

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 19.7% (-20.3% vs TC avg)
§112: 12.6% (-27.4% vs TC avg)
Deltas shown vs. the estimated Tech Center average • Based on career data from 603 resolved cases
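Each per-statute delta implies a Tech Center baseline (rate minus delta). A short sketch with the values as displayed, again assuming the deltas are additive:

```python
# Per-statute examiner rates and their stated deltas "vs TC avg".
# The implied Tech Center average falls out as rate - delta.
rates = {
    "§101": (9.8, -30.2),
    "§103": (50.1, +10.1),
    "§102": (19.7, -20.3),
    "§112": (12.6, -27.4),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate}% vs ~{tc_avg:.1f}% TC average")
```

Every statute implies the same ~40% baseline, which would suggest the dashboard compares against a single TC-wide figure rather than per-statute averages.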

Office Action

Grounds: §103, §112
DETAILED ACTION

Introduction

This office action is in response to applicant’s claims filed 1/19/2024. Claims 1-19 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the enablement requirement. The claims contain subject matter which was not described in the specification in such a way as to enable one skilled in the art to which it pertains, or with which it is most nearly connected, to make and/or use the invention. 
More specifically, in claim 1, lines 15-20, “a pre-processing part that performs a filtering task into a monolingual sentence and a data purification and normalization task by performing language detection on the ungrammatical sentence data” is deemed to lack explicitly teaching by which method the filtering is performed, such that in paragraphs [0067, 0068] of applicant’s published specification, US 2024/0160839, lack teaching a filtering task into a monolingual sentence… applying a language identification technology filters the ungrammatical and/or grammatical sentence data. The Examiner notes, the applicant only states there is a large amount of data which is filtered into a monolingual sentence. It is further unclear, based on the claim language, wherein the applicant teaches in the cited section, filtering task into monolingual sentences, based on the explanation of the filtering. Yet, language detection is not deemed to teach an act of filtering data. The applicant is not specific to how the filtering is taking place, such that there is a language translation from all multiple languages into a single language, such that the multi-lingual text is filtered into a single language via text translation, or in a multilingual data environment, after the language of each text element is identified, only a single language of the data is selected and from the multilingual text data, thereby a same language text is filtered. Or some combination of the above. Thus, based on the assertion that in any reasonable interpretation of “language detection technology” and any and all statistical and/or rule-based steps in determining or detecting a language, there isn’t a filtering process, such that through identifying a language, text is filtered, and thus the enablement issue is manifested. Each claim that depends from claim 1 is deemed to inherit this issue, and are deemed non-enabled. The following is a quotation of 35 U.S.C. 
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

More specifically, in claim 1, lines 15 and 16, “a pre-processing part that performs a filtering task into a monolingual sentence” is deemed confusing. The Examiner notes the applicant does not teach or make clear how a “filtering task” is itself filtered into a monolingual sentence (the applicant provides no object being filtered within the clause other than the “task”). The applicant provides no claim limitation that clarifies what is being filtered.

In claim 1, lines 17 and 18, “performing language detection on the ungrammatical sentence data” is ambiguous, in that there are two sets of “ungrammatical sentence data”: the first being part of a plurality of data sets, and the second being the “ungrammatical sentence data” to be corrected.

Each claim that depends from claim 1 is deemed to inherit the above issues; such claims are deemed indefinite and require clarity. 
In claim 3, lines 4-5, “the parallel data construction task for machine learning in the learning processing part includes a task of constructing parallel data using a parallel corpus formed by pairing ungrammatical sentences unnecessary for correction and corresponding grammatical sentences.” is deemed unclear and confusing. The Examiner notes, as discussed in the paragraphs [0018, 0019, 0097, 0098], “ungrammatical sentence data” and “error-free grammatical sentence data” appear to be different, in that one is error free, and is unnecessary for correction (See applicant’s specification, US 2024/0160839), the applicant does not teach how an ungrammatical sentence is also unnecessary for correction. If the ungrammatical sentence is ungrammatical, and in the augmented state is further processed to include “noise” (also defined as “words/spelling omissions, substitutions, additions, spacing errors, and foreign language additions”), how this ungrammatical sentence is any different from a grammatical sentence, if they both do not require any corrections, and thus, they are paired. It appears the grammatical sentence and the ungrammatical sentence (which also has no errors necessary for correction), such that it is in a state of being unnecessary for correction, is in direct opposition to the teachings in the applicant’s specification, and claims, such that in claim 1, lines 4 and 5, appear to include a data set of ungrammatical sentence data and error-free grammatical sentence data. The Examiner notes, the prior art listed, does not teach pairing sentences, to generate machine learning, using a pair of data that is unnecessary for correction, paired with data unnecessary for correction, wherein both are unnecessary for correction, which is interpreted as due to the absence of errors, noise, and the like. 
In claim 12, lines 19-20, “the parallel data construction task for machine learning includes a task of constructing parallel data using a parallel corpus formed by pairing ungrammatical sentences unnecessary for correction and corresponding grammatical sentences.” based on similar reasons and rationale discussed above, with respect to claim 3, lines 4 and 5. The Examiner notes, the standard definition and also appears to be a distinction in the applicant’s claim 1, lines 4, 5, is that “ungrammatical” (as defined by multiple dictionaries) is defined as “grammatically incorrect” and/or “violates the rules of syntax and morphology, leading to confusion or ambiguity” and thus, the Examiner is unable to determine how a ungrammatical sentence would be unnecessary for correction, and paired with a grammatical sentence (the grammatical sentence, appears by definition, and as cited by the applicant, to be error-free, and thus unnecessary for correction). Dependent claims 2-11 and 13-15, inherit the above issues, regarding claim 1 and 12, respectively and are thus rejected according to similar reasons and rationale. The Examiner notes, with respect to the ambiguity found in claim 1, the prior art does not read on the claims as they currently stand, wherein the Examiner notes the closest prior art does not perform filtering task into a monolingual sentence… and by performing language detection on the ungrammatical sentence data, based on the ambiguity of the multiple ungrammatical sentence data found in the claim. Thus, the prior art rejection is held in abeyance, pending clarification. 
The Examiner further notes the closest and relevant prior art does not teach that which is claimed in claim 12, and found as lacking clarity, with respect to ungrammatical sentence data unnecessary for correction, while simultaneously having ungrammatical sentence data and error-free grammatical sentence data, and pairing the ungrammatical sentence data unnecessary for correction with corresponding grammatical sentences. Thus, the prior art rejection is held in abeyance, pending clarification.

The following claims lack antecedent basis:
- Claim 17 recites the limitation "the sentence" in line 11.
- Claim 17 recites the limitation “the error sentence” in line 12.
- Claim 18 recites “the non-error sentence” in line 3.
There is insufficient antecedent basis for these limitations in the claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 16-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Chiba et al. 
(Chiba, US 2017/0220360) in view of Tetreault et al. (Tetreault, US 2019/0158439), and further in view of Pak et al. (Pak, US 2014/0278361). As per claim 16, Chiba teaches a method for enabling a language correction system to perform a language correction [based on machine learning], the method comprising: performing spelling error correction on sentences to be corrected (paragraph [0024], his spell checker, grammar checker and style checker, corresponding sequence of spelling, grammar to style corrections, see Figs. 1, 5-7, 9 and 10, which illustrate the spelling, and corresponding selection of grammar, based on corrected spelling, and corresponding language modeling/style correction based on the corrected grammatical sentences); and generating a corrected sentence by performing grammar correction by using a correction model on the spelling error-corrected sentence (ibid), performing language modeling on the corrected sentence [by using a preset recommendation sentence] (ibid); and [performing post-processing of indicating a corrected portion during the generating of the corrected sentence to output the corrected portion together with the corrected sentence, wherein: the correction model is generated by performing supervised learning-based machine learning on a plurality of data sets consisting of ungrammatical sentence data and error- free grammatical sentence data corresponding to the ungrammatical sentence data, respectively]. 
Chiba lacks explicitly teaching that which Tetreault teaches performing language modeling on the corrected sentence by using a preset recommendation sentence (paragraphs [0095-0097]-his trained formality set of sentences, used to generate a level of formality to a sentence); wherein: the correction model is generated by performing supervised learning-based machine learning on a plurality of data sets consisting of ungrammatical sentence data and error-free grammatical sentence data corresponding to the ungrammatical sentence data, respectively (paragraphs [0056-0060, 0086, 0087, 0026, 0036]-his “supervised” learning, corresponding training pairs, as error and error free sentence data). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Chiba and Tetreault to combine the prior art element of spelling, grammar and language modeling of input data as taught by Chiba with using supervised machine learning, based on example data sets, for training as taught by Tetreault as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be having a grammar and language model checking system which utilizes machine learning to natural language processing techniques to correct a message (ibid, Tetreault, see also his abstract). 
The above combination lacks explicitly teaching that which Pak teaches performing post-processing of indicating a corrected portion during the generating of the corrected sentence to output the corrected portion together with the corrected sentence (paragraphs [0014, 0036]-his highlighting corresponding corrected portion and corrected sentence displayed to the user). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Chiba and Tetreault and Pak to combine the prior art element of spelling, grammar and language modeling of input data as taught by Chiba with using supervised machine learning, based on example data sets, for training as taught by Tetreault with indicating corrections to a user along with a corrected output as taught by Pak as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be having a grammar and language model checking system which utilizes machine learning to natural language processing techniques to correct a message, allowing a user, after correction, to visualize by indication, any corrected portions of text (ibid, Tetreault, see also his abstract, ibid-Pak). 
As per claim 17, Chiba with Tetreault with Pak make obvious the method of claim 16, further comprising: as lacking by Chiba, and taught by Tetreault, before the performing the spelling error correction, performing a sentence segmentation, for the sentences to be corrected, in a unit of sentence and performing pre-process of tokenizing the segmented sentences (ibid-Tetreault, paragraph [0043-0046, 0061, 0097, 0098]-his message content and corresponding sentence parts, as identified and tagging/annotations thereof, before subsequent spelling error correction); and as taught by Chiba, classifying the sentences to be corrected that has been pre-processed into error sentences and non-error sentences by using a binary classifier (ibid-Chiba, Fig. 9-his binary, Yes, No, spelling error, see also Figs. 3, 5 and 10-see his sentences and corrections), wherein: the classifying of the error sentences and the non-error sentences includes performing the spelling error correction when the sentence to be corrected is classified as the error sentence (ibid-see claim 1, Chiba, spelling error correction discussion). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Chiba and Tetreault and Pak to combine the prior art element of spelling, grammar and language modeling of input sentences as taught by Chiba with segmenting and annotating the sentences for correction as taught by Tetreault as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. 
-- 82 USPQ2nd 1385 (2007), wherein the predictable result would be having a grammar and language model checking system which utilizes machine learning to natural language processing techniques to correct a message, allowing a user, after correction, to visualize by indication, any corrected portions of text (ibid, Tetreault, see also his abstract, ibid-Pak). As per claim 18, Chiba with Tetreault with Pak make obvious the method of claim 17, wherein classifying of the error sentences and the non-error sentences further includes classifying the error sentence and the non- error sentence according to reliability information recognized when the sentence to be corrected is classified (ibid-Tetreault, paragraphs [0045, 0046]-his highlighted error, as classified and determined to contain error, and corresponding discrepancy determination as reliability information recognized). Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Chiba et al. (Chiba, US 2017/0220360) in view of Tetreault et al. (Tetreault, US 2019/0158439) in view of Pak, as applied to claim 17 above, and further in view of Green et al. (Green, US 2005/0288920). 
As per claim 19, Chiba with Tetreault with Pak make obvious the method of claim 17, but lack teaching that which Green teaches wherein: the language correction system includes a user dictionary including a source word registered by a user and a target word corresponding thereto, in which each of the source word and the target word is at least one word (paragraph [0018, 0093, 0094, 0135, 0455]-as his user created dictionary for source word and target word, “custom transformation” dictionary), and the method further comprises, before the performing the spelling error correction: determining whether the word included in the user dictionary is included in the sentence to be corrected (paragraphs [0078, 0084, 0188, 0311]-his identification of a word that the rule applies to, his rule set as original word found in the source sentence to be corrected, before spell checking); and replacing a word commonly included in the user dictionary and the sentence to be corrected with a preset user dictionary marker when the word included in the user dictionary is included in the sentence to be corrected (paragraphs [0078, 0084, 0188, 0311]-his identification of a word that the rule applies to, his rule set as original word found in the source sentence to be corrected, and his replacement via insertions of labels/tags, in the text to be corrected), and the method further comprises after the generating of the corrected sentence (paragraphs [0349, 0350]-his correct misspellings and corresponding subsequent translation of specialty terms as the final generated corrected sentence): checking whether the user dictionary marker is included in the generated corrected sentence (ibid-see above final corrected sentence discussion); and generating a final corrected sentence (ibid-see above final corrected sentence discussion), when the user dictionary marker is included in the generated corrected sentence, by replacing the word in the user dictionary corresponding to the word in the sentence to 
be corrected corresponding to a position of the included user dictionary marker (ibid, paragraphs [0167, 0100-0188]-see his position indicated in his rule set, for word replacement, see Fig. 5, replacement text, based on normalized content using the user created dictionary, and position indicators for the content to be normalized). Thus, it would have been obvious to one of ordinary skill in the linguistics art, before the effective filing date of the invention, as all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (computer implemented techniques and algorithms combining processes and steps in natural language processing), in view of the teachings of Chiba and Tetreault and Pak and Green to combine the prior art element of spelling, grammar and language modeling of input sentences as taught by Chiba with generating a final spelling correction as taught by Green as each element performs the same function as it does separately, as the combination would yield predictable results, KSR International Co. v. Teleflex Inc., 550 US. -- 82 USPQ2nd 1385 (2007), wherein the predictable result would be producing a final output based on normalization rules, relating to terminology, replacement, joining, ordering, etc. (ibid, Green, paragraph [0100]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (See PTO-892). Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAMONT M SPOONER whose telephone number is (571)272-7613. The examiner can normally be reached 8:00 AM -5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn can be reached on (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /LAMONT M SPOONER/Primary Examiner, Art Unit 2657 lms 11/4/2023
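The user-dictionary handling recited in claim 19 (replace registered source words with a preset marker before correction, then swap each surviving marker for the registered target word afterward) can be sketched generically. This is not code from the application: the marker format, function names, example words, and the stubbed-out correction step are all illustrative assumptions.

```python
MARKER = "[[UDICT{}]]"  # hypothetical "preset user dictionary marker"

def protect(sentence: str, user_dict: dict[str, str]) -> tuple[str, list[str]]:
    """Swap each registered source word found in the sentence for a
    numbered marker, so the correction model cannot alter it."""
    targets = []
    for source, target in user_dict.items():
        if source in sentence:
            sentence = sentence.replace(source, MARKER.format(len(targets)))
            targets.append(target)
    return sentence, targets

def restore(sentence: str, targets: list[str]) -> str:
    """Replace each marker that survived correction with the registered
    target word, yielding the final corrected sentence."""
    for i, target in enumerate(targets):
        sentence = sentence.replace(MARKER.format(i), target)
    return sentence

user_dict = {"LLsoLLu": "Llsollu"}  # source word -> target word
s, targets = protect("teh LLsoLLu systm", user_dict)
# stand-in for the spelling/grammar correction model:
corrected = s.replace("teh", "the").replace("systm", "system")
print(restore(corrected, targets))  # the Llsollu system
```

The point of the marker scheme, as the claim describes it, is that user-registered terminology passes through the correction model untouched and is substituted back by position afterward.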

Prosecution Timeline

Jan 19, 2024
Application Filed
Nov 05, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602542
Text Analysis System, and Characteristic Evaluation System for Message Exchange Using the Same
2y 5m to grant · Granted Apr 14, 2026
Patent 12596881
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant · Granted Apr 07, 2026
Patent 12591737
Systems and Methods for Word Offensiveness Detection and Processing Using Weighted Dictionaries and Normalization
2y 5m to grant · Granted Mar 31, 2026
Patent 12572744
Generative Systems and Methods of Feature Extraction for Enhancing Entity Resolution for Watchlist Screening
2y 5m to grant · Granted Mar 10, 2026
Patent 12518107
COMPUTER IMPLEMENTED METHOD FOR THE AUTOMATED ANALYSIS OR USE OF DATA
2y 5m to grant · Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 86% (+11.8%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 603 resolved cases by this examiner. Grant probability derived from career allow rate.
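The projection tiles are consistent with a simple additive model over the career allow rate. A sketch under that assumption (the dashboard's actual model may differ):

```python
base = 0.74             # grant probability, taken as the career allow rate
interview_lift = 0.118  # stated lift for cases with an examiner interview

print(f"Without interview: {base:.0%}")                   # 74%
print(f"With interview:    {base + interview_lift:.0%}")  # 86%
```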
