Prosecution Insights
Last updated: April 19, 2026
Application No. 19/234,201

TEXT PROCESSING METHOD, TEXT PROCESSING APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM

Non-Final OA (§101, §102, §103)

Filed: Jun 10, 2025
Examiner: WALDRON, SCOTT A
Art Unit: 2152
Tech Center: 2100 — Computer Architecture & Software
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (387 granted / 474 resolved), above average, +26.6% vs TC avg
Interview Lift: +31.2% allow rate with vs. without an interview (resolved cases with an interview)
Typical Timeline: 2y 11m average prosecution; 17 applications currently pending
Career History: 491 total applications across all art units
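The headline allow rate above is a simple ratio of the two career counts. A quick sketch of the arithmetic (Python; the Tech Center average is only implied by the +26.6% delta shown on this page, not an official USPTO statistic):

```python
granted, resolved = 387, 474      # career counts shown above
allow_rate = granted / resolved   # 0.8165 -> displayed as 82%

# Implied Tech Center average, backed out from the +26.6% delta
tc_avg = allow_rate - 0.266       # roughly 55%

print(f"allow rate: {allow_rate:.1%}, implied TC average: {tc_avg:.1%}")
```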

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§102: 22.4% (-17.6% vs TC avg)
§103: 32.8% (-7.2% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 474 resolved cases.
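A consistency check on the per-statute figures (their exact definition is not stated on this page): each statute's rate minus its delta implies the same Tech Center baseline, suggesting the comparison uses a single estimated TC average of about 40% across all four statutes. A sketch, with the numbers copied from above:

```python
# Per-statute figures and their deltas vs the TC average, as shown above
rates  = {"101": 18.4, "102": 22.4, "103": 32.8, "112": 18.2}
deltas = {"101": -21.6, "102": -17.6, "103": -7.2, "112": -21.8}

# Implied Tech Center baseline per statute: rate - delta
implied = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied)  # every statute backs out the same 40.0 baseline
```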

Office Action

Grounds of rejection: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented for examination.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 06/10/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the judicial exception of an abstract idea without significantly more.

Step 1: The claims recite a method, electronic device, and non-transitory computer-readable storage medium (claims 1, 16 & 20). These claims fall within at least one of the four categories of patentable subject matter.

Step 2A Prong One: Independent claim 1 recites “invoking a search engine interface based on the query text, to obtain a plurality of text search results corresponding to the query text; obtaining, from the plurality of text search results, a plurality of answer text segments matching the query text; and determining a relevance between the query text and each of the plurality of answer text segments, and determining one of the plurality of answer text segments that corresponds to a maximum relevance as a reference text of the query text”. These steps perform analysis on information which has been received, which are acts of evaluating information that can be practically performed in the human mind. Thus, these steps are an abstract idea in the “mental process” grouping.

Claims 2-15 recite limitations that are further extensions of the identified grouping. Claims 16-19 recite limitations which correspond to claims 1-4, respectively. Claim 20 recites limitations which correspond to claim 1.

Step 2A Prong Two: This judicial exception is not integrated into a practical application because the combination of additional elements includes only generic computer elements which do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. These additional elements include: electronic device, search engine interface, language model, memory, processor, and non-transitory computer-readable storage medium. Independent claim 1 recites “obtaining a query text; and invoking a language model based on the query text and the reference text, to obtain a reply text of the query text”. The claim recites limitations which amount to insignificant extra-solution activity of data gathering, such as receiving input, transmitting output, and updating/modifying data. Claims 16 & 20 recite limitations which correspond to claim 1.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the recitations of generic computer components performing generic computer functions at a high level of generality do not meaningfully limit the claim. Further, the insignificant extra-solution activities of data gathering and presentation do not meaningfully limit the claim.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
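For orientation, the claim 1 steps quoted in the §101 analysis describe a retrieve-then-generate flow: search on the query, extract matching answer segments, keep the most relevant segment as reference text, then invoke the language model on the query plus reference. A minimal sketch of that flow, with every function name hypothetical (none comes from the application or the cited art):

```python
def text_processing(query, search, match_segments, relevance, language_model):
    """Sketch of the claim 1 flow: search, extract matching segments,
    keep the max-relevance segment as reference text, invoke the model.
    All callables are illustrative placeholders."""
    results = search(query)                    # invoke search engine interface
    segments = match_segments(query, results)  # answer text segments matching the query
    reference = max(segments, key=lambda seg: relevance(query, seg))
    return language_model(query, reference)    # reply text
```

With stub callables, the flow reduces to picking the maximum-relevance segment and handing it, with the query, to the model.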
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 16, 17 & 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Manavoglu et al. (US 2024/0256757 A1, hereinafter “Manavoglu”).

Manavoglu teaches:

1. A text processing method, performed by an electronic device, comprising: obtaining a query text [Manavoglu, ¶ 0027]; invoking a search engine interface based on the query text, to obtain a plurality of text search results corresponding to the query text [Manavoglu, ¶ 0027]; obtaining, from the plurality of text search results, a plurality of answer text segments matching the query text [Manavoglu, ¶ 0027]; determining a relevance between the query text and each of the plurality of answer text segments, and determining one of the plurality of answer text segments that corresponds to a maximum relevance as a reference text of the query text [Manavoglu, ¶ 0039]; and invoking a language model based on the query text and the reference text, to obtain a reply text of the query text [Manavoglu, ¶ 0039].

2. The method according to claim 1, wherein invoking the search engine interface based on the query text, to obtain the plurality of text search results includes: invoking the search engine interface based on the query text, to cause the search engine interface searches for the plurality of text search results related to the query text in a sorting manner based on generation time [Manavoglu, ¶ 0030]; and obtaining the plurality of text search results related to the query text from the search engine interface [Manavoglu, ¶ 0039].

Claims 16 & 17 recite limitations corresponding to claims 1 & 2, respectively, and are rejected for the same reasons discussed above. Claim 20 recites limitations corresponding to claim 1 and is rejected for the same reasons discussed above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-8, 18 & 19 are rejected under 35 U.S.C. 103 as being unpatentable over: (i) Manavoglu et al. (US 2024/0256757 A1, hereinafter “Manavoglu”) in view of (ii) “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” by Jacob Devlin, et al. (published in 2019, hereinafter “Devlin”).

Manavoglu does not explicitly teach, but Devlin teaches:

3. The method according to claim 1, wherein obtaining the plurality of answer text segments includes, for each text search result: segmenting the text search result into a plurality of candidate citation text segments of a fixed length [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; for each of the plurality of candidate citation text segments, obtaining a matching score between the query text and the candidate citation text segment, a start position probability of each element in the candidate citation text segment being a start position of one answer text segment of the plurality of answer text segments, and an end position probability of each element in the candidate citation text segment as an end position of the one answer text segment [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; determining an optimal matching segment, the optimal matching segment being the candidate citation text segment corresponding to a maximum matching score [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; determining a start element and an end element in the optimal matching segment, the start element being an element corresponding to a maximum start position probability, and the end element being an element corresponding to a maximum end position probability [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; and determining a part in the optimal matching segment that is located between the start element and the end element as the one answer text segment [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2].

Manavoglu and Devlin are analogous art because they are in the same field of endeavor, language model processing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Manavoglu with the model training and processing techniques of Devlin to improve text embedding and scoring beyond conventional information retrieval and transformer techniques.

The combination of Manavoglu and Devlin teaches:

4.
The method according to claim 3, wherein: the language model is a first language model [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; and obtaining the matching score between the query text and the candidate citation text segment, and the start position probability and the end position probability of each element in the candidate citation text segment includes [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]: combining the candidate citation text segment and the query text into a text pair [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; and invoking a second language model based on the text pair, to obtain the matching score between the query text and the candidate citation text segment, and the start position probability and the end position probability of each element in the candidate citation text segment [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2].

5. The method according to claim 4, wherein invoking the second language model to obtain the matching score between the query text and the candidate citation text segment, and the start position probability and the end position probability of each element in the candidate citation text segment includes: concatenating elements in the query text and the candidate citation text segment into a token sequence with each element in the query text and the candidate citation text segment being one regular token in the token sequence, the token sequence further including a start token in a head of the token sequence [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; performing embedding on the token sequence, to obtain an embedding feature vector of the token sequence [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; invoking the second language model to encode the embedding feature vector of the token sequence, to obtain a semantic feature vector of the token sequence [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; mapping a semantic feature vector of the start token in the semantic feature vector of the token sequence, to obtain the matching score between the query text and the candidate citation text segment [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; and mapping a semantic feature vector corresponding to each regular token in the semantic feature vector of the token sequence, to obtain the start position probability and the end position probability of each element in the candidate citation text segment [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2].

6. The method according to claim 1, wherein: each of the plurality of answer text segments includes a title text [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; the language model is a first language model [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; and determining the relevance between the query text and each of the plurality of answer text segments includes, for each answer text segment [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]: concatenating elements in the query text, the answer text segment, and the title text into a token sequence with each element in the query text, the answer text segment, and the title text being one regular token in the token sequence, the token sequence further including a start token in a head of the token sequence, and one or more separator token each connecting the regular tokens between the query text and the answer text segment, and between the answer text segment and the title text [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; performing embedding on the token sequence, to obtain an embedding feature vector of the token sequence [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; invoking a second language model to encode the embedding feature vector of the token sequence, to obtain a semantic feature vector of the token sequence [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; and mapping the semantic feature vector of the token sequence, to obtain the relevance between the query text and the answer text segment [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2].

7. The method according to claim 1, wherein invoking the language model to obtain the reply text includes: invoking the language model based on the query text and the reference text to perform prediction processing on the query text, to determine a start element and an end element of the reply text that are in the reference text [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; and determining a text between the start element and the end element of the reference text as the reply text of the query text [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2].

8. The method according to claim 7, wherein invoking the language model to perform prediction processing on the query text and the reference text includes: concatenating elements in the query text and the reference text into a token sequence with each element in the query text and the reference text being one regular token in the token sequence, the token sequence further including a start token in a head of the token sequence and one or more separator tokens each connecting the regular tokens between the query text and the reference text [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; performing embedding on the token sequence, to obtain an embedding feature vector of the token sequence [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; invoking the language model to encode the embedding feature vector of the token sequence, to obtain a semantic feature vector of each regular token in the token sequence [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; mapping the semantic feature vector corresponding to each regular token, to obtain a start probability of each element of the reference text being the start element of the reply text and an end probability of each element of the reference text being the end element of the reply text [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2]; and determining the element corresponding to a maximum start probability as the start element of the reply text, and determining the element corresponding to a maximum end probability as the end element of the reply text [Devlin, page 4174, § 3, “Input/Output Representations” and Figure 2].

Claims 18 & 19 recite limitations corresponding to claims 3 & 4, respectively, and are rejected for the same reasons discussed above.

Claims 9, 10 & 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over: (i) Manavoglu et al. (US 2024/0256757 A1, hereinafter “Manavoglu”) in view of (ii) Osuala et al. (US 2024/0111794 A1, hereinafter “Osuala”).

Manavoglu does not explicitly teach, but Osuala teaches:

9. The method according to claim 1, further comprising, after invoking the language model to obtain the reply text: obtaining a candidate citation text including a material configured for being cited in the reply text [Osuala, ¶¶ 0029 & 0030]; segmenting the reply text into a plurality of answer text segments, and segmenting the candidate citation text into a plurality of citation text segments [Osuala, ¶¶ 0029 & 0030]; and determining at least one matching citation text segment each matching one of at least one answer text segment, and inserting the at least one matching citation text segment into the reply text [Osuala, ¶¶ 0029 & 0030].

Manavoglu and Osuala are analogous art because they are in the same field of endeavor, language model processing.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Manavoglu with the model training and vector embedding techniques of Osuala to improve text embedding and scoring beyond conventional information retrieval techniques.

The combination of Manavoglu and Osuala teaches:

10. The method according to claim 9, wherein determining the at least one matching citation text segment includes: combining the plurality of answer text segments and the plurality of citation text segments in pairs, to form a plurality of candidate text pairs [Osuala, ¶¶ 0029 & 0030]; and identifying at least one matching text pair from the plurality of candidate text pairs, the citation text segment and the answer text segment in each of the at least one matching text pair matching each other [Osuala, ¶¶ 0029 & 0030].

12. The method according to claim 10, further comprising, before identifying the at least one matching text pair, for each candidate text pair: performing keyword identification on the answer text segment and the citation text segment in the candidate text pair [Osuala, ¶ 0028]; and deleting the candidate text pair in response to failing to identify keyword in at least one of the answer text segment or the citation text segment [Osuala, ¶ 0028].

13. The method according to claim 10, further comprising, before identifying the at least one matching text pair, for each candidate text pair: encoding the answer text segment and the citation text segment in the candidate text pair, to obtain an embedding feature vector of the answer text segment and an embedding feature vector of the citation text segment [Osuala, ¶ 0028]; determining a similarity between the embedding feature vector of the answer text segment and the embedding feature vector of the citation text segment [Osuala, ¶ 0028]; and deleting the candidate text pair in response to the similarity being less than a similarity threshold [Osuala, ¶ 0028].

14. The method according to claim 9, wherein inserting the at least one matching citation text segment into the reply text includes: inserting, in response to a quantity of the at least one answer text segment being less than or equal to a quantity threshold, each of the at least one matching citation text segment at a position after a corresponding one of the at least one answer text segment [Osuala, ¶¶ 0106 & 0107]; and inserting, in response to a quantity of the at least one answer text segment being greater than the quantity threshold, the at least one matching citation text segment to an end of the reply text [Osuala, ¶¶ 0106 & 0107].

15. The method according to claim 9, further comprising, before inserting the at least one matching citation text segment into the reply text: in response to a quantity of the at least one matching citation text segment being greater than a quantity threshold, performing descending sorting based on a similarity between each of the at least one matching citation text segment and a corresponding one of the at least one answer text segment, and determining each of a set quantity or a set proportion of the at least one matching citation text segment starting from a head of the sorted at least one matching citation text segment in a descending sorting result as a candidate text segment for insertion into the reply [Osuala, ¶¶ 0106 & 0107].

Allowable Subject Matter

Dependent claim 11 would be allowable if rewritten to overcome the rejection under 35 U.S.C. 101 set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott A. Waldron whose telephone number is (571)272-5898. The examiner can normally be reached Monday - Friday 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Neveen Abel-Jalil, can be reached at (571)270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Scott A. Waldron/
Primary Examiner, Art Unit 2152
03/30/2026
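Several of the Devlin-mapped limitations (claims 7 and 8) recite extractive span selection: a start probability and an end probability per token, with the reply taken between the argmax positions. A minimal sketch of that selection step (simplified; practical extractive QA usually also constrains start <= end, and all names here are illustrative, not from the application or Devlin):

```python
def select_span(tokens, start_probs, end_probs):
    """Pick the token with maximum start probability and the token with
    maximum end probability, then return the span between them, inclusive."""
    start = max(range(len(tokens)), key=lambda i: start_probs[i])
    end = max(range(len(tokens)), key=lambda i: end_probs[i])
    return tokens[start:end + 1]
```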

Prosecution Timeline

Jun 10, 2025: Application Filed
Mar 30, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596704: Error Prediction Using Database Validation Rules and Machine Learning (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591784: Decision-Making Method for Agent Action and Related Device (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585667: Providing Information Associated with a Content Based on Context (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585701: Ideation Platform Device and Method Using Diagram (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579155: Systems and Methods for Interface Generation Using Explore and Exploit Strategies (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 99% (+31.2%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 474 resolved cases by this examiner. Grant probability derived from career allow rate.
