Prosecution Insights
Last updated: April 19, 2026
Application No. 18/667,504

METHOD AND APPARATUS FOR PROCESSING MODEL GENERATION RESULT, ELECTRONIC DEVICE AND STORAGE MEDIUM

Non-Final OA: §102, §103
Filed: May 17, 2024
Examiner: COLUCCI, MICHAEL C
Art Unit: 2655
Tech Center: 2600 — Communications
Assignee: BEIJING BAIDU NETCOM SCIENCE TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 76% — above average (749 granted / 990 resolved; +13.7% vs TC avg)
Interview Lift: +15.3% (resolved cases with interview)
Avg Prosecution: 3y 1m (41 currently pending)
Total Applications: 1031 (across all art units)
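The headline allow rate can be reproduced from the raw counts shown above; a quick sanity check, assuming the displayed percentage is the simple ratio of granted to resolved cases:

```python
granted = 749
resolved = 990

allow_rate = granted / resolved          # career allow rate as a fraction
print(f"{allow_rate:.1%}")               # 75.7%, displayed rounded as 76%

# The "+13.7% vs TC avg" delta implies a Tech Center average near 62%,
# assuming the delta is additive percentage points.
tc_avg = allow_rate - 0.137
print(f"{tc_avg:.1%}")
```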

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 6.0% (-34.0% vs TC avg)
Based on career data from 990 resolved cases; Tech Center averages are estimates.

Office Action

Rejections under §102 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Note: The claims are not directed towards patent-ineligible subject matter under 35 U.S.C. 101.

Step 1: Is the claim directed to a process, machine, manufacture, or composition of matter? Yes.

Step 2A.1: Is the claim directed to a law of nature, a natural phenomenon (product of nature), or an abstract idea? Yes.

Step 2A.2: Does the claim recite additional elements that integrate the judicial exception into a practical application? Yes. The claims seek to improve upon traditional models, which cannot judge whether strings are matched, by improving model evaluation efficiency through checking for correctness, in addition to the parsing or disassembling of text supported by the specification and reflected by the claims (see, e.g., spec at page 6, paragraph 2). In other words, the claims enable fully automatic running, avoid manual processing, greatly reduce labor cost, increase evaluation speed, and effectively improve the evaluation efficiency of the generative large model.

Supported by the following: In Finjan Inc. v. Blue Coat Systems, Inc., 879 F.3d 1299, 125 USPQ2d 1282 (Fed. Cir. 2018), the claimed invention was a method of virus scanning that scans an application program, generates a security profile identifying any potentially suspicious code in the program, and links the security profile to the application program. 879 F.3d at 1303-04, 125 USPQ2d at 1285-86. The Federal Circuit noted that the recited virus screening was an abstract idea, and that merely performing virus screening on a computer does not render the claim eligible. 879 F.3d at 1304, 125 USPQ2d at 1286.
The court then continued with its analysis under part one of the Alice/Mayo test by reviewing the patent's specification, which described the claimed security profile as identifying both hostile and potentially hostile operations. The court noted that the security profile thus enables the invention to protect the user against both previously unknown viruses and "obfuscated code," as compared to traditional virus scanning, which only recognized the presence of previously-identified viruses. The security profile also enables more flexible virus filtering and greater user customization. 879 F.3d at 1304, 125 USPQ2d at 1286. The court identified these benefits as improving computer functionality, and verified that the claims recite additional elements (e.g., specific steps of using the security profile in a particular way) that reflect this improvement. Accordingly, the court held the claims eligible as not being directed to the recited abstract idea. 879 F.3d at 1304-05, 125 USPQ2d at 1286-87. This analysis is equivalent to the Office's analysis of determining that the additional elements integrate the judicial exception into a practical application at Step 2A Prong Two, and thus that the claims were not directed to the judicial exception (Step 2A: NO).

Examples of claims that improve technology and are not directed to a judicial exception include: Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1339, 118 USPQ2d 1684, 1691-92 (Fed. Cir. 2016) (claims to a self-referential table for a computer database were directed to an improvement in computer capabilities and not directed to an abstract idea); McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1315, 120 USPQ2d 1091, 1102-03 (Fed. Cir. 2016) (claims to automatic lip synchronization and facial expression animation were directed to an improvement in computer-related technology and not directed to an abstract idea); Visual Memory LLC v. NVIDIA Corp., 867 F.3d 1253, 1259-60, 123 USPQ2d 1712, 1717 (Fed. Cir.
2017) (claims to an enhanced computer memory system were directed to an improvement in computer capabilities and not an abstract idea); Finjan Inc. v. Blue Coat Systems, Inc., 879 F.3d 1299, 125 USPQ2d 1282 (Fed. Cir. 2018) (claims to virus scanning were found to be an improvement in computer technology and not directed to an abstract idea); SRI Int'l, Inc. v. Cisco Systems, Inc., 930 F.3d 1295, 1303 (Fed. Cir. 2019) (claims to detecting suspicious activity by using network monitors and analyzing network packets were found to be an improvement in computer network technology and not directed to an abstract idea). Additional examples are provided in MPEP § 2106.05(a).

Regarding the December 5, 2025 Memo in light of the September 26, 2025 Appeals Review Panel Decision in Ex parte Desjardins, Appeal 2024-000567 for Application 16/319,040, in deciding if a recited abstract idea does or does not direct the entire claim to an abstract idea, when a claim is considered as a whole:

Paragraph 21 of the Specification, which the Appellant cites, identifies improvements in training the machine learning model itself. Of course, such an assertion in the Specification alone is insufficient to support a patent eligibility determination, absent a subsequent determination that the claim itself reflects the disclosed improvement. See MPEP § 2106.05(a) (citing Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016)). Here, however, we are persuaded that the claims reflect such an improvement. For example, one improvement identified in the Specification is to "effectively learn new tasks in succession whilst protecting knowledge about previous tasks." Spec. ¶ 21. The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to "us[e] less of their storage capacity" and enables "reduced system complexity." Id.
When evaluating the claim as a whole, we discern at least the following limitation of independent claim 1 that reflects the improvement: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." We are persuaded that this constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation.

Under a charitable view, the overbroad reasoning of the original panel below is perhaps understandable given the confusing nature of existing § 101 jurisprudence, but troubling, because this case highlights what is at stake. Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology. Yet, under the panel's reasoning, many AI innovations are potentially unpatentable, even if they are adequately described and nonobvious, because the panel essentially equated any machine learning with an unpatentable "algorithm" and the remaining additional elements with "generic computer components," without adequate explanation. Dec. 24. Examiners and panels should not evaluate claims at such a high level of generality.

Specifically, Ex parte Desjardins explained the following: Enfish ranks among the Federal Circuit's leading cases on the eligibility of technological improvements. In particular, Enfish recognized that "[m]uch of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes." 822 F.3d at 1339.
Moreover, because "[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can," the Federal Circuit held that the eligibility determinations should turn on whether "the claims are directed to an improvement to computer functionality versus being directed to an abstract idea." Id. at 1336. (Desjardins, page 8).

Further, in Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) overall credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements that were disclosed in the patent application specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept). In Step 2A Prong Two, the ARP then determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting" encountered in continual learning systems. Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task" reflected the improvement disclosed in the specification.
Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and therefore the claims were not directed to the judicial exception. The claim itself does not need to explicitly recite the improvement described in the specification (e.g., "thereby increasing the bandwidth of the channel"). See, e.g., Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), in which the specification identified the improvement to machine learning technology by explaining how the machine learning model is trained to learn new tasks while protecting knowledge about previous tasks to overcome the problem of "catastrophic forgetting," and the claims reflected the improvement identified in the specification. Indeed, enumerated improvements identified in the Desjardins specification included disclosures of the effective learning of new tasks in succession in connection with specifically protecting knowledge concerning previously accomplished tasks; allowing the system to reduce use of storage capacity; and the enablement of reduced complexity in the system. Such improvements were tantamount to how the machine learning model itself would function in operation and therefore not subsumed in the identified mathematical calculation.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8-13, and 15-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 11847575 B2, Ferrucci; David et al. (hereinafter Ferrucci).

Re claim 1, Ferrucci teaches:
A method for processing a model generation result applied to a text processing field, comprising: (generative model, fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

disassembling a text generation result of a generative large model to obtain a plurality of result logic units; wherein each result logic unit comprises a segment in the text generation result; each segment is capable of independently identifying one premise or conclusion in a logical inference relationship of the text generation result; and the text generation result is a response result generated by the generative large model based on text input information; (disassembling text, such as parsing initially with a generative model and a model/parser, in which one or more groups of words are broken down and rebuilt into contextual hypotheses or premises per se, col 4 lines 8-25, such as in fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

generating a logical inference graph capable of characterizing a logical inference relationship among the plurality of result logic units based on the plurality of result logic units; and (producing an inference graph, e.g. element 308 in fig. 3, based on example data graph element 404 in fig. 4 and example data such as reading, questions, and answers thereof, utilizing a rule generating model to produce an inference graph, col 14 lines 47-58, following the disassembling of text with a model/parser, col 4 lines 8-25, fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

determining whether logical inference of generation of the text generation result by the generative large model is correct or not based on the logical inference graph.
(using proofs or correctness based on confidence or ranking metrics, col 14 line 64 to col 15 line 27, to produce a valid model iteratively through example sets and output sets to update the generative model, following the production of the inference graph, e.g. element 308 in fig. 3, col 14 lines 47-58, and the disassembling of text with a model/parser, col 4 lines 8-25, fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

Re claim 8, this claim has been rejected as reciting a broader or narrower claim based on the general inclusion of hardware alone (e.g. processor, memory, instructions), a representation of claim 1 omitting/including hardware, for instance, otherwise amounting to a virtually identical scope. For instance, see fig. 2 of Ferrucci showing at least a memory and a processor with various components.

Re claim 15, this claim has been rejected as reciting a broader or narrower claim based on the general inclusion of hardware alone (e.g. processor, memory, instructions), a representation of claim 1 omitting/including hardware, for instance, otherwise amounting to a virtually identical scope.

Re claims 2, 9, and 16, Ferrucci teaches: The method according to claim 1, wherein disassembling the text generation result of the generative large model to obtain the plurality of result logic units comprises: disassembling the text generation result of the generative large model using a pre-trained logic disassembly model to obtain the plurality of result logic units.
(disassembling text, such as parsing initially with a model/parser, in which one or more groups of words are broken down and rebuilt into contextual hypotheses or premises per se, col 4 lines 8-25, such as in fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

Re claims 3, 10, and 17, Ferrucci teaches: The method according to claim 1, wherein generating the logical inference graph capable of characterizing the logical inference relationship among the plurality of result logic units based on the plurality of result logic units comprises: generating the logical inference graph capable of characterizing the logical inference relationship among the plurality of result logic units based on the plurality of result logic units using a pre-trained logical inference graph generation model. (producing an inference graph, e.g. element 308 in fig. 3, based on example data graph element 404 in fig. 4 and example data such as reading, questions, and answers thereof, utilizing a rule generating model, col 14 lines 47-58, following the disassembling of text with a model/parser, col 4 lines 8-25, fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

Re claims 4, 11, and 18, Ferrucci teaches: The method according to claim 1, further comprising: before generating the logical inference graph capable of characterizing the logical inference relationship among the plurality of result logic units based on the plurality of result logic units, disassembling the text input information of the generative large model to obtain a plurality of input logic units; wherein each input logic unit comprises a segment in the text input information, and each segment is capable of independently identifying one premise or conclusion in the logical inference relationship.
(prior to creating proofs or any graphs with valid data, disassembling text such as parsing initially with a model/parser, in which one or more groups of words are broken down and rebuilt into contextual hypotheses or premises per se, col 4 lines 8-25, such as in fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

Re claims 5, 12, and 19, Ferrucci teaches: The method according to claim 4, wherein generating the logical inference graph capable of characterizing the logical inference relationship among the plurality of result logic units based on the plurality of result logic units comprises: generating the logical inference graph capable of characterizing the logical inference relationship among the plurality of result logic units based on the plurality of result logic units by referring to the plurality of input logic units. (producing an inference graph, e.g. element 308 in fig. 3, based on example data graph element 404 in fig. 4 and example data such as reading, questions, and answers thereof, utilizing a rule generating model, col 14 lines 47-58, following the disassembling of text with a model/parser, col 4 lines 8-25, fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

Re claims 6, 13, and 20, Ferrucci teaches:
The method according to claim 5, wherein generating the logical inference graph capable of characterizing the logical inference relationship among the plurality of result logic units based on the plurality of result logic units by referring to the plurality of input logic units comprises: retrieving a most relevant example in a preset example database based on the plurality of input logic units and the plurality of result logic units; the example database comprising a plurality of groups of examples, and each example comprising a plurality of input example logic units corresponding to input example information, a plurality of result example logic units corresponding to result example information, and an example logical inference graph corresponding to the plurality of result example logic units; and (using express example sets and graphs to generate a new model by iteratively updating all models as the system learns, thereby using previous data to create and use new models, col 11 line 55 to col 12 line 7 with col 14 line 14 to col 15 line 41, then producing the inference graph, e.g. element 308 in fig. 3, based on example data graph element 404 in fig. 4, col 14 lines 47-58, following the disassembling of text with a model/parser, col 4 lines 8-25, fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

generating the logical inference graph corresponding to the plurality of result logic units by another pre-trained generative large model based on the plurality of input logic units, the plurality of result logic units, the plurality of input example logic units, the plurality of result example logic units and the corresponding example logical inference graph.
(another new model is used to handle new inference rules for improvement for context shifts, using express example sets and graphs to generate a new model by iteratively updating all models as the system learns, thereby using previous data to create and use new models, col 11 line 55 to col 12 line 7 with col 14 line 14 to col 15 line 41, then producing the inference graph, e.g. element 308 in fig. 3, based on example data graph element 404 in fig. 4, col 14 lines 47-58, following the disassembling of text with a model/parser, col 4 lines 8-25, fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over US 11847575 B2, Ferrucci; David et al. (hereinafter Ferrucci) in view of US 20250209383 A1, Hu; Luhui et al. (hereinafter Hu).

Re claims 7 and 14, Ferrucci teaches:
The method according to claim 1, wherein determining whether the logical inference of generation of the text generation result by the generative large model is correct or not based on the logical inference graph comprises: (using proofs or correctness based on confidence or ranking metrics, col 14 line 64 to col 15 line 27, to produce a valid model iteratively through example sets and output sets to update the generative model, following the production of the inference graph, e.g. element 308 in fig. 3, col 14 lines 47-58, and the disassembling of text with a model/parser, col 4 lines 8-25, fig. 3 and fig. 4 with col 11 line 25 to col 12 line 5)

However, while the concepts of disassembly, inference graphs, and correctness via proofs and rules are established in Ferrucci, the concept of using sub-layers and sub-graphs is not:

disassembling the logical inference graph into a plurality of two-layer subgraphs; wherein each two-layer subgraph identifies one logical inference step; (Hu, using inference graphs with sub-graphs with multiple layers, ¶¶ 0019, 0031, 0037, 0043, with sub-nodes as well representing context as in ¶¶ 0004 and 0029, with fig. 3 expressly and fig. 7)

judging whether logical inference of each two-layer subgraph is correct or not; and (Hu, using inference graphs with sub-graphs with multiple layers, ¶¶ 0019, 0031, 0037, 0043, with sub-nodes as well representing context as in ¶¶ 0004 and 0029, with fig. 3 expressly and fig.
7) determining that the logical inference of the generation of the text generation result by the generative large model is correct (Ferrucci, using proofs or correctness based on confidence or ranking metrics, col 14 line 64 to col 15 line 27, to produce a valid model iteratively through example sets and output sets to update the generative model) in response to determining that the logical inference of each of the plurality of two-layer subgraphs is correct. (Hu, using inference graphs with sub-graphs with multiple layers, ¶¶ 0019, 0031, 0037, 0043, with sub-nodes as well representing context as in ¶¶ 0004 and 0029, with fig. 3 expressly and fig. 7)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ferrucci to incorporate the above claim limitations as taught by Hu, combining prior art elements according to known methods to yield predictable results, such as using multiple layers, sub-nodes, and sub-graphs thereof in an analogous knowledge inference graph to extract context candidates, wherein using sub-graph concepts improves performance, scalability, and interpretability by focusing computation on relevant data subsets rather than the entire graph, and wherein the combination is adapted to manage complex data structures, allowing models to operate on unseen portions of a graph as shown in Hu's complex sub-graph component, e.g. in at least fig. 3 and fig. 7.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: US 20250148769 A1, BEYKUN; Alexander et al. (inference sub-graphs).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C COLUCCI, whose telephone number is (571) 270-1847. The examiner can normally be reached M-F, 9 AM - 5 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew Flanders, can be reached at (571) 272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL COLUCCI/
Primary Examiner, Art Unit 2655
(571) 270-1847
Examiner FAX: (571) 270-2847
Michael.Colucci@uspto.gov
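The claimed method at issue (claim 1's disassemble-then-graph pipeline, plus claim 7's two-layer-subgraph check) can be sketched in miniature. This is a hypothetical illustration only: the function names, the sentence-splitting heuristic, and the trivial step-judging logic are assumptions for readability, not taken from the application or the cited Ferrucci/Hu references, which contemplate model-based disassembly and judging.

```python
from dataclasses import dataclass, field

@dataclass
class LogicUnit:
    text: str   # one segment of the text generation result
    role: str   # "premise" or "conclusion" in the inference relationship

@dataclass
class InferenceGraph:
    units: list
    # Each edge (i, j) links a premise unit i to a conclusion unit j;
    # a single edge corresponds to one "two-layer subgraph" (one inference step).
    edges: list = field(default_factory=list)

def disassemble(generation_result: str) -> list:
    """Stand-in for the claimed pre-trained logic disassembly model."""
    segments = [s.strip() for s in generation_result.split(".") if s.strip()]
    return [
        LogicUnit(s, "conclusion" if s.lower().startswith("therefore") else "premise")
        for s in segments
    ]

def build_graph(units: list) -> InferenceGraph:
    """Stand-in for the claimed inference-graph generation model:
    naively connect every premise to every conclusion."""
    edges = [
        (i, j)
        for i, u in enumerate(units) if u.role == "premise"
        for j, v in enumerate(units) if v.role == "conclusion"
    ]
    return InferenceGraph(units, edges)

def judge_step(premise: LogicUnit, conclusion: LogicUnit) -> bool:
    """Stand-in judge for one two-layer subgraph; the application
    contemplates a model-based correctness check here."""
    return bool(premise.text) and bool(conclusion.text)

def check_inference(graph: InferenceGraph) -> bool:
    """Claim 7's rule: the overall inference is correct only if
    every two-layer subgraph (edge) is judged correct."""
    return all(judge_step(graph.units[i], graph.units[j]) for i, j in graph.edges)

units = disassemble("Socrates is a man. All men are mortal. Therefore Socrates is mortal")
graph = build_graph(units)
print(check_inference(graph))
```

Even at this toy scale, the structure mirrors the claim language: disassembly yields premise/conclusion units, the graph encodes the inference relationship, and correctness is the conjunction of per-step judgments.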

Prosecution Timeline

May 17, 2024
Application Filed
Feb 10, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592240: ENCODING AND DECODING OF ACOUSTIC ENVIRONMENT
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586570: CHUNK-WISE ATTENTION FOR LONGFORM ASR
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12573405: WORD CORRECTION USING AUTOMATIC SPEECH RECOGNITION (ASR) INCREMENTAL RESPONSE
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12573380: MANAGING AMBIGUOUS DATE MENTIONS IN TRANSFORMING NATURAL LANGUAGE TO A LOGICAL FORM
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567414: SYSTEM AND METHOD FOR DETECTING A WAKEUP COMMAND FOR A VOICE ASSISTANT
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 91% (+15.3%)
Median Time to Grant: 3y 1m
PTA Risk: Low

Based on 990 resolved cases by this examiner. Grant probability derived from career allow rate.
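The "with interview" projection is consistent with treating the interview lift as additive percentage points on top of the base grant probability. A sketch of that reading (the additivity is an assumption about how the dashboard combines the two figures):

```python
base_grant_prob = 76.0   # career allow rate, in percent
interview_lift = 15.3    # interview lift, in percentage points

with_interview = base_grant_prob + interview_lift
print(f"{with_interview:.0f}%")   # 91%
```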
