Prosecution Insights
Last updated: April 19, 2026
Application No. 18/681,251

TEXT GENERATION SUPPORT APPARATUS, TEXT GENERATION SUPPORT METHOD, AND PROGRAM

Non-Final OA: §101, §103
Filed: Feb 05, 2024
Examiner: DUGDA, MULUGETA TUJI
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: Nippon Telegraph and Telephone Corporation
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (40 granted / 49 resolved); +19.6% vs TC avg (above average)
Interview Lift: +18.8% allow rate in resolved cases with interview
Typical Timeline: 3y 1m avg prosecution; 19 currently pending
Career History: 68 total applications across all art units

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 57.6% (+17.6% vs TC avg)
§102: 19.4% (-20.6% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 49 resolved cases
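As a sanity check on the figures above (a hypothetical reading of how the dashboard derives them, not its documented method), the career allow rate is the rounded grant fraction, and subtracting each statute's stated delta from its rate should recover the same Tech Center average:

```python
# Career allow rate: 40 granted of 49 resolved, rounded to a whole percent.
granted, resolved = 40, 49
allow_rate = round(100 * granted / resolved)
print(allow_rate)  # -> 82

# Each statute's allow rate and its stated delta vs the Tech Center average.
statutes = {
    "§101": (18.0, -22.0),
    "§103": (57.6, +17.6),
    "§102": (19.4, -20.6),
    "§112": (5.0, -35.0),
}
# rate - delta recovers the implied TC average for each statute.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in statutes.items()}
print(implied_tc_avg)  # every statute implies 40.0
```

All four statutes imply the same 40.0% baseline, which suggests (but does not prove) that the chart compares each rate against a single TC-wide average rather than per-statute baselines.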

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The information disclosure statement (IDS) submitted on 02/05/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The independent claims 1 and 7 recite “perform morpheme analysis on data of a document and dividing the document into words; set a mask word by masking a predetermined word among the divided words; and use … to search for a likelihood in consideration of a word possibility and a context based on the mask word, and determine a substitute possibility for the mask word in accordance with the likelihood to complete a document possibility…” Morpheme analysis is essentially a linguistic analysis of word structure, similar to compositional analysis, in which a word is broken down into its smallest meaningful parts, called morphemes. It involves identifying the grammatical category of each word in a sentence, and individuals such as linguists or educators can mentally break words down into their smallest meaningful units (morphemes). To set a mask word, one can define a specific pattern using a predetermined mask in a text field. The process of finding the “likelihood” or “possibility” of a word within a specific context would lead to a “mask word.” Thus, these claims, even including the “likelihood in consideration of a word possibility,” require only a mental process that can be performed with pen and paper.
The “trained natural language processing model” is just an additional element. These claims are mental processes that might at most be implemented using a conventional/generic (general-purpose) computer (Spec., para 0012). Moreover, these claims are not tied to a practical application, as there are no clearly stated applications or uses, no clearly stated or specific improvements, and no clearly stated step-by-step training procedures. The claimed invention is therefore directed to an abstract idea and a mental process without significantly more, and thus claims 1 and 7 are rejected under 35 U.S.C. 101.

Claim 2 recites “… generate a word sequence by setting the mask word, and … search for the likelihood based on the generated word sequence using the ...”, which requires a mental process of searching for the likelihood based on the generated word sequence. Generating a word sequence by setting the mask words is possible by providing an incomplete sentence with placeholders (masks). For instance, a human can create an input with masked words, “The quick brown [MASK] jumps over the [MASK] dog”, and a trained person would likely predict mentally “fox” for the first mask and “lazy” or another suitable adjective for the second, based on the surrounding context. This can be performed mentally, or at most using a conventional/generic (general-purpose) computer. These claimed limitations as drafted cover a mental process and an abstract idea of data processing (e.g., “generate”, “search for the likelihood”, etc.) and data analysis using a conventional/generic (general-purpose) computer, and thus claim 2 is rejected under 35 U.S.C. 101.
Claim 3 recites “… determine the substitute possibility at a probability proportional to the likelihood, or select a predetermined number of words having a high likelihood and randomly determine ....” This claim as drafted covers a mental process and an abstract idea of data gathering and processing (e.g., “determine”, “possibility at a probability proportional to the likelihood,” “select,” and “randomly determine”). For instance, a human presented with the masked input “The quick brown [MASK] jumps over the [MASK] dog” could mentally predict “fox” for the first mask and “lazy” or another suitable adjective for the second, based on the surrounding context, thereby determining the substitute possibility at a probability proportional to the likelihood, or selecting a predetermined number of words having a high likelihood. Thus, these are mental processes directed to an abstract idea of data processing and data analysis using a conventional/generic (general-purpose) computer, and thus claim 3 is rejected under 35 U.S.C. 101.

Claim 4 recites “… determine the substitute possibility word by word for a plurality of the mask words....” One might mentally determine the substitute possibility word by word for “several words for ‘mask’” or for “different words related to ‘mask’.” Therefore, this claim as drafted covers a mental process and an abstract idea of data analysis/processing (i.e., “determine the substitute possibility word”), performed at most using a conventional/generic (general-purpose) computer, and thus claim 4 is rejected under 35 U.S.C. 101.
Claim 6 recites “a document output means for sorting and outputting the plurality of document possibilities completed by the word search means based on an evaluation index...” This claim as drafted covers a mental process and an abstract idea of data gathering/processing and data evaluation/analysis: the document output means for sorting and outputting would mentally organize and present various potential results related to document processing. This is a mental process directed to an abstract idea of data processing and data analysis using a conventional/generic (general-purpose) computer, and thus claim 6 is rejected under 35 U.S.C. 101.

This judicial exception is not integrated into a practical application. In particular, claims 1 and 8 recite the additional elements of “processor,” “memory” and “non-transitory computer-readable storage medium” as per the independent and dependent claims. Claims 2 and 5 also recite “trained natural language processing model,” “bidirectional encoder” and “transformers”, which are other additional elements being used as a tool. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. For instance, these claims might at most be implemented using a conventional/generic (general-purpose) computer (Spec., para 0012), and the bidirectional encoder and transformer are well known in the art (Spec., para 0003). The claim is directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration into a practical application, the additional element of using a computer amounts to no more than a general-purpose computer. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept.
Further, the additional limitations in the claims noted above are directed towards insignificant extra-solution activity. The claims are not patent eligible. Dependent claims 2-6 and 8 are also directed toward an abstract idea and do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because the additional elements, considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. Therefore, claims 1-8 do not contain patent eligible subject matter as identified by the courts. Taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea), and looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology; their collective functions merely provide a conventional general-purpose computer implementation. Claims 1-8 are therefore not drawn to patent eligible subject matter, as they are directed to an abstract idea without significantly more, and thus claims 1-8 are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over Pawar et al., Pat. App. No. US 20220182253 A1 (Pawar), in view of Qiang et al., “A simple framework for lexical simplification,” arXiv preprint arXiv:2006.14939, 2020 Jun 25 (Qiang).

Regarding Claim 1, Pawar discloses a document generation supporting device (Pawar, para 0023, the present specification describes a method of electronically supporting a meeting… Examples of this method may include, during the meeting… processing the audio of the meeting to produce a record of the meeting) comprising: a processor (Pawar, para 0130, Among these hardware components may be a number of processors 1559); and a memory storing program instructions that cause the processor (Pawar, para 0132-0133, The data storage device 1569 may specifically store computer code representing a number of applications that the processor 1559 executes to implement at least the functionality described herein. The data storage device 1569 may include various types of memory modules, including volatile and nonvolatile memory. 
For example, the data storage device 1569 of the present example includes Random Access Memory (RAM) 1571…) to: perform morpheme analysis on data of [[the]] a document and dividing the document into words (Pawar, para 0109, split the sentences in text and build a document term matrix); set a mask word by masking a predetermined word among the divided words (Pawar, para 0101, mask or remove words per a list of words or keywords to mask; [“per a list of words or keywords” as “predetermined word among the divided words”]); and Pawar does not specifically disclose use a trained natural language processing model to search for a likelihood in consideration of a word possibility and a context based on the mask word, and determine a substitute possibility for the mask word in accordance with the likelihood to complete a document possibility. However, Qiang, in the same field of endeavor, discloses use a trained natural language processing model to search for a likelihood in consideration of a word possibility and a context based on the mask word (Qiang, 4th page, 1st col, 5th para – 2nd col, 1st para, Bert prediction order. On this step of substitute generation, we obtain the probability distribution of the vocabulary corresponding to the mask word. Because LSBert already incorporates the context information on the step of substitution generation, the word order of Bert prediction is a crucial feature which includes the information of both the context and the complex word itself), and determine a substitute possibility for the mask word in accordance with the likelihood to complete a document possibility (Qiang, 3rd page, 2nd col, 4th – 7th para, Given a sentence S and the complex word w, the aim of substitution generation (SG) is to produce the substitute candidates for the complex word w. LSBert produces the substitute candidates for the complex word using pretrained language model Bert. 
we briefly summarize the Bert model, and then describe how we extend it to do lexical simplification…Bert to obtain the probability distribution of the vocabulary p(·|S, S0\{w}) corresponding to the mask word). Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of Qiang in the method of Pawar, because this would make it possible to replace complex words in a given sentence with their simpler alternatives of equivalent meaning, to simplify the sentence (Qiang, Abstract).

Regarding Claim 2, Pawar in view of Qiang disclose the document generation supporting device (Pawar, para 0023, the present specification describes a method of electronically supporting a meeting… Examples of this method may include, during the meeting… processing the audio of the meeting to produce a record of the meeting) according to claim 1. Furthermore, Qiang discloses: wherein the program instructions cause the processor to generate a word sequence by setting the mask word (Qiang, 3rd page, Figure 2, Step 1: Complex word identification, Step 2: Substitute Generation; We adopt a new strategy to compute the likelihood of W. 
We first replace the original word w with the substitution candidate; Qiang, 2nd col, 4th para, produce the substitute candidates for the complex word w; [i.e., “Complex word identification” as “setting the mask word”; “Substitute Generation” as “generate a word sequence”; “compute the likelihood …” implies “program instructions cause the processor to …”; “produce the substitute candidates for the complex word w” as “generate a word sequence”]; Alternatively, Qiang, 1st page, 2nd col, Figure 1, substitute candidates of complex words and simplified sentence), and wherein the program instructions cause the processor to search for the likelihood based on the generated word sequence using the trained natural language processing model (Qiang, 2nd page, 1st col, 2nd para, we mask the complex word w of the original sentence S as a new sentence S0, and concatenate the original sequence S and S0 for feeding into the Bert to obtain the probability distribution of the vocabulary corresponding to the masked word. Then we choose the top probability words as substitute candidates).

Regarding Claim 3, Pawar in view of Qiang disclose the document generation supporting device according to claim 1. Furthermore, Qiang discloses: wherein the program instructions cause the processor to determine the substitute possibility at a probability proportional to the likelihood (Qiang, 3rd page, 2nd col, 4th – 7th para, Given a sentence S and the complex word w, the aim of substitution generation (SG) is to produce the substitute candidates for the complex word w. LSBert produces the substitute candidates for the complex word using pretrained language model Bert. 
we briefly summarize the Bert model, and then describe how we extend it to do lexical simplification…Bert to obtain the probability distribution of the vocabulary p(·|S, S0\{w}) corresponding to the mask word), or select a predetermined number of words having a high likelihood and randomly determine the substitute possibility from the selected words.

Regarding Claim 4, Pawar in view of Qiang disclose the document generation supporting device according to claim 1. Furthermore, Qiang discloses: wherein the program instructions cause the processor to determine the substitute possibility word by word for a plurality of the mask words (Qiang, 2nd page, 2nd col., 2nd para, Lexical simplification (LS) needs to identify complex words and find the best candidate substitution for these complex words; [i.e., “complex words” as “a plurality of the mask words”]).

Regarding Claim 5, Pawar in view of Qiang disclose the document generation supporting device according to claim 1. Furthermore, Qiang discloses: wherein the trained natural language processing model is bidirectional encoder representations from transformers (Qiang, 2nd page, 1st col., 1st para, this paper has the following two contributions: LSBert is a novel Bert-based method for LS, which can take full advantages of Bert to generate and rank substitute candidates. To our best knowledge, this is the first attempt to apply pretrained transformer language models for LS).

Regarding Claim 6, Pawar in view of Qiang disclose the document generation supporting device according to claim 1. Furthermore, Qiang discloses: a document output means for sorting and outputting the plurality of document possibilities completed by the word search means based on an evaluation index (TABLE 1 - Evaluation results of substitute generation on three datasets; Qiang, 6th page, 1st col., 4th para – 8th para, (1) Evaluation of Substitute Candidates - Precision (PRE): The proportion of generated candidates that are in the gold standard. 
Recall (RE): The proportion of gold-standard substitutions that are included in the generated substitutions. F1: The harmonic mean between Precision and Recall. The results are shown in Table 1. As can be seen, our model LSBert obtains the highest Recall and F1 scores on three datasets, largely outperforming the previous best baseline Paetzold-NE, increasing 37.4%, 19.1% and 41.4% using F1 metric. The baseline Paetzold-NE by combining the Newsela parallel corpus and context-aware word embeddings obtains better results on PRE than LSBert, because it uses a different calculation method. If one candidate exists in the gold standard, different morphological derivations of the candidate in substitute candidates are all counted into the PRE metric).

Regarding Claim 7, Pawar discloses a document generation supporting method (Pawar, para 0023, the present specification describes a method of electronically supporting a meeting… Examples of this method may include, during the meeting… processing the audio of the meeting to produce a record of the meeting) comprising: performing morpheme analysis on data of a document and dividing the document into words (Pawar, para 0109, split the sentences in text and build a document term matrix); setting a mask word by masking a predetermined word among the divided words (Pawar, para 0101, mask or remove words per a list of words or keywords to mask; [“per a list of words or keywords” as “predetermined word among the divided words”]); and Pawar does not specifically disclose using a trained natural language processing model to search for a likelihood in consideration of a word possibility and a context based on the mask word, and determining a substitute possibility for the mask word in accordance with the likelihood to complete a document possibility. 
However, Qiang, in the same field of endeavor, discloses using a trained natural language processing model to search for a likelihood in consideration of a word possibility and a context based on the mask word (Qiang, 4th page, 1st col, 5th para – 2nd col, 1st para, Bert prediction order. On this step of substitute generation, we obtain the probability distribution of the vocabulary corresponding to the mask word. Because LSBert already incorporates the context information on the step of substitution generation, the word order of Bert prediction is a crucial feature which includes the information of both the context and the complex word itself), and determining a substitute possibility for the mask word in accordance with the likelihood to complete a document possibility (Qiang, 3rd page, 2nd col, 4th – 7th para, Given a sentence S and the complex word w, the aim of substitution generation (SG) is to produce the substitute candidates for the complex word w. LSBert produces the substitute candidates for the complex word using pretrained language model Bert. we briefly summarize the Bert model, and then describe how we extend it to do lexical simplification…Bert to obtain the probability distribution of the vocabulary p(·|S, S0\{w}) corresponding to the mask word). Therefore, it would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to incorporate the method of Qiang in the method of Pawar, because this would make it possible to replace complex words in a given sentence with their simpler alternatives of equivalent meaning, to simplify the sentence (Qiang, Abstract).

Regarding Claim 8, Pawar in view of Qiang disclose the non-transitory computer-readable recording medium having stored therein a program causing a computer to execute the method according to claim 7 (Pawar, para 0062, FIG. 
6 is a diagram of a non-transient, computer-readable storage medium containing an agent, consistent with the disclosed implementations. As shown in FIG. 6, a non-transitory computer-readable medium 600 supports a computerized agent 650, as described herein, for supporting a planned meeting. The agent 650 includes: instructions 652 for receiving meeting information about the planned meeting, including meeting participants and a meeting agenda; and instructions 654 for, in response to receiving the meeting information, communicating electronically with the meeting participants to request information related to the meeting agenda).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MULUGETA T. DUGDA whose telephone number is (703)756-1106. The examiner can normally be reached Mon - Fri, 4:30am - 7:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D. Shah, can be reached at 571-270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MULUGETA TUJI DUGDA/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

11/29/2025
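The §103 combination above turns on masked-word substitute generation: mask a word, score vocabulary candidates by their likelihood in context, and pick a substitute from the top candidates. A minimal stand-in sketch of that idea follows, using plain co-occurrence counts in place of a trained model such as BERT; the corpus, function name, and sentences are illustrative only, not taken from the application or the cited art:

```python
from collections import Counter

# Toy corpus standing in for a trained language model's knowledge.
corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over the sleepy dog",
    "the quick red fox jumps over the lazy dog",
]

def substitute_candidates(tokens, mask_index, k=2):
    """Score vocabulary words by how often they occur in the masked slot
    with the same immediate left/right context, then return the top-k
    candidates by that likelihood."""
    left = tokens[mask_index - 1] if mask_index > 0 else None
    right = tokens[mask_index + 1] if mask_index + 1 < len(tokens) else None
    scores = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i, w in enumerate(words):
            l = words[i - 1] if i > 0 else None
            r = words[i + 1] if i + 1 < len(words) else None
            if l == left and r == right:
                scores[w] += 1  # candidate seen in this context
    return scores.most_common(k)

sentence = "the quick brown [MASK] jumps over the lazy dog".split()
print(substitute_candidates(sentence, sentence.index("[MASK]")))  # -> [('fox', 2)]
```

Claim 3's two alternatives would then correspond to sampling from `scores` in proportion to the counts, or choosing at random among the top-k candidates.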

Prosecution Timeline

Feb 05, 2024
Application Filed
Nov 29, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597424
METHOD AND APPARATUS FOR DETERMINING SKILL FIELD OF DIALOGUE TEXT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592244
REDUCED-BANDWIDTH SPEECH ENHANCEMENT WITH BANDWIDTH EXTENSION
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579366
DEVELOPMENT PLATFORM FOR FACILITATING THE OPTIMIZATION OF NATURAL-LANGUAGE-UNDERSTANDING SYSTEMS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573417
A COMPUTER-IMPLEMENTED METHOD OF PROVIDING DATA FOR AN AUTOMATED BABY CRY ASSESSMENT
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12567419
VOICEPRINT DRIFT DETECTION AND UPDATE
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+18.8%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 49 resolved cases by this examiner. Grant probability derived from career allow rate.
