Prosecution Insights
Last updated: April 19, 2026
Application No. 18/471,317

METHOD AND SYSTEM FOR RECOMMENDING REPORT MATERIAL

Final Rejection: §101, §103, §112
Filed: Sep 21, 2023
Examiner: PINSKY, DOUGLAS W
Art Unit: 3626
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Wistron Corporation
OA Round: 2 (Final)

Grant Probability: 26% (At Risk)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 2y 12m
Grant Probability With Interview: 41%

Examiner Intelligence

- Career Allow Rate: 26% (29 granted / 112 resolved; -26.1% vs TC avg)
- Interview Lift: +15.5% (allow rate with vs. without interview, among resolved cases with interview)
- Avg Prosecution: 2y 12m typical timeline; 39 applications currently pending
- Career History: 151 total applications across all art units

Statute-Specific Performance

- §101: 27.9% (-12.1% vs TC avg)
- §103: 31.2% (-8.8% vs TC avg)
- §102: 9.5% (-30.5% vs TC avg)
- §112: 26.8% (-13.2% vs TC avg)

Deltas are vs. estimated Tech Center averages • Based on career data from 112 resolved cases
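The headline examiner statistics above can be reproduced from the raw counts. A quick sketch, assuming the 41% with-interview figure is the examiner's allow rate among interviewed cases; the implied Tech Center average is an inference from the stated delta, not a number shown on the page:

```python
# Reproduce the dashboard's headline examiner statistics from the raw
# counts shown above. Only the arithmetic is new; small residuals vs.
# the displayed figures come from rounding.

granted, resolved = 29, 112

career_allow_rate = granted / resolved            # displayed as "26%"
print(f"Career allow rate: {career_allow_rate:.1%}")

# "-26.1% vs TC avg" implies a Tech Center average allow rate of
# roughly 26% + 26.1% = 52.1% (an inference, not a stated figure).
implied_tc_avg = career_allow_rate + 0.261
print(f"Implied TC 3600 average: {implied_tc_avg:.1%}")

# "+15.5% interview lift": with-interview allow rate (41%) minus the
# base rate, in percentage points.
with_interview = 0.41
lift = with_interview - career_allow_rate
print(f"Interview lift: {lift:+.1%}")
```
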

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Acknowledgments

The submission filed on 10/22/25 is acknowledged.

Status of Claims

Claims 1-4, 6-14 and 16-20 are pending. In the Amendment filed on 10/22/25, claims 1, 8, 11 and 18 were amended, claims 5 and 15 were cancelled, and no claims were added. Claims 1-4, 6-14 and 16-20 are rejected.

Response to Arguments

Regarding the rejection under 35 U.S.C. 112: In view of Applicant's remarks, the outstanding rejection is overcome. However, the amendments give rise to a new rejection.

Regarding the rejection under 35 U.S.C. 101: Applicant's arguments have been fully considered but are not persuasive. The Office responds to Applicant's arguments below.

Regarding step 2A, prong 1, Applicant cites certain limitations of claim 1 (e.g., converting … into a feature vector, inputting feature vectors … to generate a model prediction result, adjusting a model parameter …, determining predicted level information …) and asserts that these limitations cannot be performed in the human mind and are necessarily rooted in computer technology. Response, pp. 12-13. First, the rejection did not assert that the claim limitations were a mental process, so in this respect, the argument is moot. Second, the computer/technology elements in these claim limitations are merely generic computer elements, used in their ordinary capacity, recited at a high level of generality. Third, Applicant has cited only a part of claim 1. Even assuming arguendo that the cited part of claim 1 is not an abstract idea, this does not demonstrate that the remainder of claim 1 fails to include an abstract idea.
Indeed, as demonstrated by the rejection, the bolded language of claim 1 shown in the rejection amounts to an abstract idea, namely, generating a report (as per the specification, e.g., 0002-0003, 0024-0025, 0048-0052, a corporate social responsibility report).

Regarding step 2A, prong 2, Applicant cites filtering out of certain text materials from the materials to be evaluated for generating a report (referencing specification 0062), and adjusting model parameters, in order to minimize the loss function of the model (referencing specification 0034). Response, p. 13.

The filtering out of text materials is not recited as such and in any event would be part of the abstract idea. As such, the filtering cannot integrate the abstract idea into a practical application.

The adjusting of model parameters to minimize a loss function of the model is merely a standard/basic aspect of training a machine learning model, as reflected in 0034, which reads as follows:

    [0034] Then, the processing device 120 may adjust a model parameter of the classification model according to the actual rating level of each evaluated report and the corresponding model prediction result. In detail, the processing device 120 may compare a difference between the model prediction result and the actual rating level to generate a loss value, and update the model parameters of the feature extraction model M11 and the classification model M12 according to a direction of minimizing the loss value. For example, the processing device 120 may input the model prediction result and the actual rating level into a loss function L1 to generate the loss value. In addition, the processing device 120 may select a pre-trained language model as a basic model of the feature extraction model M11 to perform training.
(emphasis added)

As seen clearly by the underlined/bolded portion of 0034 above, the claimed "adjusting of model parameters" is merely a matter of comparing the predicted result outputted by the model (model performance) to the actual result and attempting to minimize a loss function of the model (minimize the difference between the model-predicted result and the actual result), which is merely a standard/basic part of training a machine learning model. As such, the recited "adjusting of model parameters" does not amount to an improvement in computer functioning/other technology or to a practical application in any other way.

Regarding the rejections under 35 U.S.C. 102 and 103: Applicant's arguments have been fully considered but are not persuasive. The Office responds to Applicant's arguments below. Applicant argues:

    FIG. 6 of Chan also shows that the estimation of the ESG score is based on the input of issue date 660, market capital 670, figure 690, and percentage change in figures. That is, the estimation of the ESG score is not based on the features (i.e., elements 430-460). Therefore, Chan does not disclose the feature of "inputting the feature vectors of each of the plurality of reference text materials into the classification model to generate a model prediction result" in claim 1. (Response, p. 16; emphasis in original)

The Examiner responds that Applicant's arguments above do not consider/address the entirety of the prior art subject matter that was cited as teaching the claim limitation in question. In particular, in the rejection of claims 5 and 15, elements 430-460 were not the sole entities of Chan cited as teaching the recited "features." Rather, the rejection also cited Chan's teachings of "industry sector" and "ESG topic" as features. See the "converting" step in the rejection of claims 5 and 15.
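The loss-minimization training quoted above from specification 0034 — compare the model prediction with the actual rating level, compute a loss value, and update the parameters of both the feature extraction model and the classification model in the direction that minimizes the loss — can be sketched generically. This is an illustration of the "standard/basic" training step the Examiner describes, not code from the application; all shapes, data, and the learning rate are invented:

```python
import numpy as np

# Generic sketch of one training step: prediction vs. actual level ->
# loss value -> adjust the parameters of both a feature-extraction
# stand-in (M11) and a classification stand-in (M12).

rng = np.random.default_rng(0)
W11 = 0.1 * rng.normal(size=(8, 4))   # feature-extraction parameters (stand-in for M11)
W12 = 0.1 * rng.normal(size=(4, 3))   # classification parameters (stand-in for M12)

def forward(x):
    feat = x @ W11                     # convert input into a feature vector
    logits = feat @ W12                # input the feature vector into the classifier
    e = np.exp(logits - logits.max())
    return feat, e / e.sum()           # model prediction result (rating-level probabilities)

def train_step(x, actual_level, lr=0.1):
    """Compare prediction with the actual rating level and update both models."""
    global W11, W12
    feat, p = forward(x)
    loss = -np.log(p[actual_level])    # loss value from a loss function (cross-entropy)
    g = p.copy()
    g[actual_level] -= 1.0             # gradient of the loss w.r.t. the logits
    grad_W12 = np.outer(feat, g)
    grad_W11 = np.outer(x, W12 @ g)
    W12 -= lr * grad_W12               # adjust classification-model parameter
    W11 -= lr * grad_W11               # adjust feature-extraction-model parameter
    return loss

x = rng.normal(size=8)                 # one vectorized "reference text material"
losses = [train_step(x, actual_level=2) for _ in range(200)]
assert losses[-1] < losses[0]          # loss shrinks as parameters are adjusted
```
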
As for the "inputting" step of claims 5 and 15 here argued by Applicant, the Examiner directs Applicant's attention to the following additional teachings of Chan set forth in the portions of Chan that were cited in the rejection of claims 5 and 15:

- as per 12:1-4, the figure (690) is annotated with, inter alia, the industry sector and the ESG topic (see 11:27) (features);
- as per Fig. 6, the figure (690) is inputted into the model to generate the ESG score (generate model prediction result);
- since the figure is annotated with the industry sector and the ESG topic (features) and inputted into the model to generate the ESG score, under broadest reasonable interpretation the features have been inputted into the model to generate the ESG score (generate model prediction result);
- again, as per 7:11-17, 7:22-23 and 7:28-8:7, the input data are also associated with these same features, namely, the industry sector and the ESG topic, and accordingly, these sections of Chan also teach that these features (together with the input data) are inputted into the model to generate the ESG score (generate model prediction result); for example, as per 7:28-8:7 the industry sector "determines the weighting of each piece of specific data" used by the model to calculate the ESG score, and therefore it is clear that the industry sector/weighting of the data (feature) is inputted into the model together with the data itself in order to generate the ESG score (generate model prediction result).

Note for the sake of completeness: in respect of the "inputting" step argued by Applicant and discussed above, as per the rejection Chan teaches "feature" but does not teach "vector." Rather, Madisetti teaches "vector" ("feature vector"). However, Applicant's substantive argument is directed solely against Chan, and Applicant does not present substantive argument against Madisetti.

Claim Rejections - 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. 112(b):

    (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

    The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-4, 6-14 and 16-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Lack of Antecedent Basis/Unclear Antecedent Basis

Claims 1 and 11 recite:

    converting each of the plurality of reference text materials into a feature vector by using the feature extraction model;

The underlined language lacks antecedent basis. Prior to the instant Amendment, the underlined language was in claims 5 and 15 (now cancelled), which depended from claims 4 and 14, which provided antecedent basis for the underlined language. Claims 2-4, 6-10, 12-14 and 16-20 are rejected by virtue of their dependency from a rejected claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

    Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claims 1-20 are directed to a method or system, which are/is one of the statutory categories of invention. (Step 1: YES)

Claims 1 and 11 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a method and system for generating a report (as per the specification, e.g., 0002-0003, 0024-0025, 0048-0052, a corporate social responsibility report).

For claims 1 and 11 (claim 1 being deemed representative), the limitations (indicated below in bold) of:

    obtaining a plurality of evaluated reports and an actual rating level of each of the evaluated reports;
    extracting a plurality of reference text materials related to a rating topic from the evaluated reports;
    performing a classification model training based on the reference text materials and the actual rating levels of the evaluated reports to establish a text level classification model, comprising:
        converting each of the plurality of reference text materials into a feature vector by using the feature extraction model;
        inputting the feature vectors of each of the plurality of reference text materials into the classification model to generate a model prediction result; and
        adjusting a model parameter of the feature extraction model and a model parameter of the classification model according to the actual rating levels of each of the plurality of evaluated reports and the corresponding model prediction result;
    determining predicted level information for each of text materials to be evaluated by using the classification model of the established text level classification model, to obtain a recommended order for each of the text materials to be evaluated; and
    generating a report based on the recommended order of each of the text materials to be evaluated.

as drafted, constitute a process that, under the broadest reasonable interpretation, covers "certain methods of organizing human activity," specifically, "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components.
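Stripped of the legal characterization, the limitations quoted above describe a train-then-rank pipeline. A runnable toy walk-through under stated assumptions — every model here is a deliberately trivial stand-in (keyword counts and a linear scorer) invented for illustration; the claims recite the steps, not any particular implementation:

```python
# Toy end-to-end sketch of the quoted limitations; all data invented.

def feature_extraction_model(text):
    # "converting each of the plurality of reference text materials
    #  into a feature vector" (trivial stand-in: keyword counts + bias)
    return [text.count("energy"), text.count("emission"), 1.0]

def classification_model(vec, weights):
    # "inputting the feature vectors ... to generate a model prediction result"
    return sum(v * w for v, w in zip(vec, weights))

def train(texts, actual_levels, lr=0.1, epochs=200):
    # "adjusting a model parameter ... according to the actual rating
    #  levels ... and the corresponding model prediction result"
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for text, level in zip(texts, actual_levels):
            vec = feature_extraction_model(text)
            error = classification_model(vec, weights) - level
            weights = [w - lr * error * v for w, v in zip(weights, vec)]
    return weights

# "obtaining a plurality of evaluated reports and an actual rating level"
reference_texts = ["emission cut plan", "energy saving energy audit", "misc note"]
actual_levels = [2.0, 3.0, 1.0]
weights = train(reference_texts, actual_levels)

# "determining predicted level information ... to obtain a recommended order"
to_evaluate = ["energy roadmap", "emission emission report", "general remarks"]
scores = {t: classification_model(feature_extraction_model(t), weights)
          for t in to_evaluate}
recommended_order = sorted(to_evaluate, key=scores.get, reverse=True)

# "generating a report based on the recommended order"
report = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(recommended_order))
```
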
The Examiner notes that "fundamental economic practices" or "fundamental economic principles" describe concepts relating to the economy and commerce, including hedging, insurance, and mitigating risks, and "commercial interactions" or "legal interactions" include agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. MPEP 2106.04(a)(2)II.A.,B. If a claim limitation, under its broadest reasonable interpretation, covers "fundamental economic practices or principles" and/or "commercial or legal interactions," but for recitation of generic computer components, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, claims 1 and 11 recite an abstract idea. (Step 2A - Prong 1: YES. The claims recite an abstract idea.)

This judicial exception is not integrated into a practical application. Claims 1 and 11 recite the additional elements of a processing device (the foregoing recited by claim 1), and a storage device, storing a plurality of instructions, and a processing device, coupled to the storage device, and accessing the instructions to execute (operations) (the foregoing recited by claim 11), that implement the abstract idea. These additional elements are not described by the applicant and they are recited at a high level of generality (i.e., one or more generic computer elements performing generic computer functions), such that they amount to no more than mere instructions to apply the exception using generic computer elements. Accordingly, even in combination these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (Step 2A - Prong 2: NO. The additional elements do not integrate the abstract idea into a practical application.)
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception itself. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a processing device (the foregoing recited by claim 1), and a storage device, storing a plurality of instructions, and a processing device, coupled to the storage device, and accessing the instructions to execute (operations) (the foregoing recited by claim 11), to perform the noted steps amount to no more than mere instructions to apply the exception using generic computer elements. Mere instructions to apply an exception using generic computer elements cannot provide an inventive concept ("significantly more"). Accordingly, even in combination, these additional elements do not provide significantly more. As such, claims 1 and 11 are not patent eligible. (Step 2B: NO. The claims do not provide significantly more.)

Dependent claims 2-10 and 12-20 are similarly rejected because they further define/narrow the abstract idea of independent claims 1 and 11 as discussed above, and/or do not integrate the abstract idea into a practical application or provide an inventive concept such as would render the claims eligible, whether each is considered individually or as an ordered combination.

As for further defining/narrowing the abstract idea, the dependent claims merely describe:

- receiving a question instruction related to the rating topic, and generating the plurality of reference text materials of the plurality of evaluated reports according to the question instruction (claims 2 and 12);
- a classification model in the text level classification model (claims 3 and 13);
- wherein the text level classification model comprises a feature extraction model and a classification model, the plurality of evaluated reports comprise a plurality of corporate social responsibility reports (claims 4 and 14);
- wherein the step of performing the classification model training based on the reference text materials and the actual rating levels of the evaluated reports to establish the text level classification model comprises (claim 5): converting each of the plurality of reference text materials into a feature vector by using the feature extraction model; inputting the feature vectors of each of the plurality of reference text materials into the classification model to generate a model prediction result; and adjusting a model parameter of the feature extraction model and a model parameter of the classification model according to the actual rating levels of each of the plurality of evaluated reports and the corresponding model prediction result (claims 5 and 15);
- wherein the step of determining the predicted level information for each of the text materials to be evaluated by using the text level classification model, to obtain the recommended order for each of the text materials to be evaluated comprises (claim 6): converting each of the plurality of text materials to be evaluated into a feature vector by using the feature extraction model; inputting the feature vectors of each of the plurality of text materials to be evaluated into the classification model to generate a classification level of each of the plurality of text materials to be evaluated; and obtaining a recommended order of each of the plurality of text materials to be evaluated by sorting the classification levels of each of the plurality of text materials to be evaluated (claims 6 and 16);
- wherein the step of determining the predicted level information for each of the text materials to be evaluated by using the text level classification model, to obtain the recommended order for each of the text materials to be evaluated comprises (claim 7): using a feature extraction model to convert each of the plurality of text materials to be evaluated into a feature vector, wherein the text materials to be evaluated comprise a first text material to be evaluated and a second text material to be evaluated; inputting the feature vector of the first text material to be evaluated and the feature vector of the second text material to be evaluated into a classification model to generate a level comparison result between the first text material to be evaluated and the second text material to be evaluated; and obtaining the recommended order of each of the plurality of text materials to be evaluated according to the level comparison result of the first text material to be evaluated and the second text material to be evaluated (claims 7 and 17);
- wherein before the step of determining the predicted level information for each of the text materials to be evaluated by using the text level classification model, to obtain the recommended order for each of the text materials to be evaluated, the method further comprises (claim 8): replacing a numerical value in each of the plurality of text materials to be evaluated with a preset value (claims 8 and 18);
- wherein the step of generating the report based on the recommended order of each of the text materials to be evaluated comprises (claim 9): filtering out at least one target text material from the plurality of text materials to be evaluated according to a material limit quantity and the recommended order of each of the plurality of text materials to be evaluated (claims 9 and 19);
- wherein the step of generating the report based on the recommended order of each of the text materials to be evaluated comprises (claim 10): to generate report content about the rating topic in the report according to the at least one target text material in the plurality of text materials to be evaluated and a style parameter (claims 10 and 20).
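Three of the dependent-claim refinements described above are concrete enough to sketch: replacing numerical values with a preset value (claims 8/18), sorting classification levels into a recommended order (claims 6/16), and filtering by a material limit quantity (claims 9/19). The sketch below reuses the text-material names from the application's own example in specification 0041; the level numbers and the numeric sentence are invented for illustration:

```python
import re

# Claims 8/18: replace a numerical value in each text material with a
# preset value before evaluation.
PRESET = "<NUM>"
def normalize(text):
    return re.sub(r"\d+(?:\.\d+)?", PRESET, text)

# Claims 6/16: classification levels, sorted, yield the recommended order.
# (Material names from spec 0041; level numbers invented.)
levels = {
    "air conditioner group control": 3,
    "air compressor maintenance adjustment": 2,
    "green power transfer": 1,
}
recommended_order = sorted(levels, key=levels.get, reverse=True)

# Claims 9/19: filter out target materials per a material limit quantity.
material_limit_quantity = 2
target_materials = recommended_order[:material_limit_quantity]
```
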
As for additional elements:

Claims 2 and 12 recite "the processing device" (claim 12), and "performing fine-tune training on a pre-trained language model to establish a generative language model" and (generating content) "through the generative language model" (claims 2 and 12). This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.

Claims 3 and 13 recite "wherein the processing device comprises a central processing unit and a graphics processing unit, and the method comprises: running the generative language model through the graphics processing unit; and running [a classification model in the text level classification model] through the central processing unit" (claim 3); and "wherein the processing device comprises a central processing unit and a graphics processing unit, the graphics processing unit runs the generative language model, and the central processing unit runs [a classification model in the text level classification model]" (claim 13). This recitation is at a high level of generality such that it amounts to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.

Claims 10 and 20 recite "using a generative language model." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.

Claims 15-20 recite "the processing device." This recitation is at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer element. Even in combination these additional elements do not integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea itself.

Claims 4-9 and 14 do not recite any additional elements, and accordingly, for the reasons provided above with respect to the independent claims, are not patent eligible.

Therefore, dependent claims 2-10 and 12-20 are not patent eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9, 11-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (HK 30089811 A) (filed on 03 August 2023), hereafter Chan, in view of Madisetti et al. (U.S. Patent No. 12,001,462), hereafter Madisetti.

Regarding Claims 1 and 11

Chan teaches:

(claim 1) A method for recommending report material, adapted to a report material recommending system comprising a processing device, the method for recommending report material comprising: (3:11-12 "the present invention provides 1. A process (method) operable using a computerized system (adapted to a … system comprising a processing device) for providing an Environmental, Social and Governance (ESG) Report" (report material); 4:5 "The ESG Report may include recommendations to the company based on the ESG reporting data." (recommending))

(claim 11) a storage device, storing a plurality of instructions; a processing device, coupled to the storage device, and accessing the instructions to execute: (3:11-12 "the present invention provides 1.
A process operable using a computerized system for providing an Environmental, Social and Governance (ESG) Report" -- one of ordinary skill in the art understands that a computerized system includes a storage device, storing a plurality of instructions; a processing device, coupled to the storage device; 11:1 Natural Language Processing Unit teaches processing device; one of ordinary skill in the art understands that a process operable using a computerized system includes accessing the instructions to execute)

obtaining a plurality of evaluated reports and an actual rating level of each of the evaluated reports; (regarding obtaining a plurality of evaluated reports: 7:18-19 "Such datasets of companies may be derived from the ESG reports of big enterprises wherein their ESG reports are openly available to the public"; regarding an actual rating level of each of the evaluated reports: 8:8-9 "The pre-trained neural network 120 has been pre-trained utilising a plurality of datasets that are associated with ESG topics with an associated ESG Report including ESG Ratings corresponding")

extracting a plurality of reference text materials (input dataset) related to a rating topic from the evaluated reports; (regarding extracting: 7:18-19 "Such datasets of companies may be derived [extracted] from the ESG reports"; 12:21-22 "A flow chart 500 for building ESG evaluation using extracting data from existing ESG reports is illustrated by Figure 5."; Fig. 4, e.g., "Content extraction"; also 10:25-30; regarding a plurality of reference text materials: 11:1-5 to obtain the dataset "a Natural Language Processing Unit is utilised to extract text related information" from the ESG report; Fig. 4, e.g., "Content extraction"; see also 10:25-30, 7:11-23; regarding related to a rating topic: 7:11-13 "an input dataset of the company such as an SME, wherein the input dataset includes information of the company that is associated with ESG topics [related to a rating topic] of said company including environment, social and governance"; Fig. 4, 420 "ESG topic")

performing a classification model training based on the reference text materials and the actual rating levels of the evaluated reports to establish a text level classification model; (Teaching #1: 8:8-9 "The pre-trained neural network 120 has been pre-trained [performing a classification model training] utilising a plurality of datasets (reference text materials) that are associated with ESG topics with an associated ESG Report including ESG Ratings corresponding [and the actual rating levels of the evaluated reports]"; per 8:6-7 the "pre-trained neural network 120 is utilised for generating ESG reporting data based on said input data." The generating of ESG reporting data is described by Fig. 4 ("the dataset building process 400 for providing a ESG report," 9:22-23). As part of 400, "a Natural Language Processing Unit is utilised to extract text related information. This is used to specifically extract key sentence 430, key phrase 440 and key words 450" (11:1-3). Further, "[t]he key sentences can be further classified … as [] new action item[s] 460" (11:16-17). Thus, 120 classifies text portions of previous ESG reports (see extracting step above), specifically, as key sentences, key phrases, and key words, and also classifies key sentences as action items. This classifying is both (i) classifying at the text level, and (ii) classifying into/as specific kinds and levels of text. Therefore, by virtue of performing this classifying, 120 is a text level classification model. The training (see above) serves to establish 120 as what it is.

Teaching #2: ESG evaluation model (5.2, Fig. 5) first classifies data into subcategories according to industry sector and ESG topic, e.g., emission data pertaining to environment topic (12:23-26), and then estimates an ESG score (605) for that particular subset of data (13:14-18, 5.3, 13:23-25, Fig. 6, 600, 14:12-20). This classification and this scoring (evaluation/rating) constitute (i) classifying text (the aforementioned data), in other words, classifying at the text level, and (ii) classifying text into levels (scores, ratings), and as such the ESG evaluation model (including any submodels thereof) is a text level classification model. Preliminary to performing the classification and scoring, the model (/submodels) performing them are trained (12:24-26, 13:5, 13:11, 13:14-18, 13:23, 14:16-23), so that they can figure out the hidden rules behind ESG evaluation (scoring) and thus perform the scoring (12:21, 13:12-13). Thus, the models are trained (performing a classification model training), and the training is based on both the data (reference text materials), as per above 12:23-26, and the "last ESG score" (actual rating levels), as per above 14:12-20. The training serves to establish the model as what it is.)

converting each of the plurality of reference text materials into a feature … by using the feature extraction model; (Under broadest reasonable interpretation, the following extraction/identification/classification teaches "converting into features": Fig. 4, 400, 410-460, "Content extraction," 11:1-17, 7:11-23 the data is classified (converted) into features (each of 410-460 indicates such a feature; the various types of data described at 7:11-23 amount to such features); 5.2, 12:23-26 the data is classified (converted) into industry sector and ESG topic, which are features; therefore, the hardware and/or software that performs this extraction/identification/classification (conversion), e.g., Natural Language Processing Unit, ESG evaluation model, constitute a feature extraction model)

inputting the feature … of each of the plurality of reference text materials into the classification model to generate a model prediction result; and (Fig. 4, 400, 460, 12:1-19 action items, frequencies, and scores for action items and their corresponding topics are generated (generate a model prediction result) by the model that operates Fig. 4 (e.g., NLP unit, 11:1) (classification model), from inputted feature data (key sentences, key phrases, key words, other data as given in Fig. 4, 12:1-13 and 7:11-23); 12:23-26, 13:12-21, Fig. 5, 560, 5.3, 13:23-25, Fig. 6, 600, 605, 14:12-20, 7:11-17, 7:22-23, 7:28-8:7, 8:20-21 ESG score is estimated (generate a model prediction result) from data, including features extracted from data such as industry sector and ESG topic (as explained in the "converting" step immediately above, feature data), which are inputted into the ESG evaluation model (classification model) (NOTE: a more detailed explanation has been given above in the section entitled "Response to Arguments - Regarding the rejections under 35 U.S.C. 102 and 103"))

adjusting a model parameter of the feature extraction model and a model parameter of the classification model according to the actual rating levels of each of the plurality of evaluated reports and the corresponding model prediction result.
(13:14-17; note the "neural network" (12:24) / "network" (13:15) encompasses both the model that extracts data from ESG reports (12:23-26) (feature extraction model) and the model that determines/predicts ESG rating (13:14-17) (classification model), so the adjustment taught at 13:14-17 in connection with the classification model also applies to the feature extraction model) determining predicted level information for each of text materials to be evaluated by using the classification model of the established text level classification model, to obtain a recommended order for each of the text materials to be evaluated; and (Teaching #1: Fig. 4, 400, 430-460, 11:1-17, model 120 classifies text portions of previous ESG reports, specifically, as key sentences, key phrases, and key words (which represent different levels in a text document, and which per 11:7-15 are also scored as to frequency, representing different levels), and also classifies key sentences as action items (which are assigned a rating/score, i.e., a level, see Teaching #2 below) (determining predicted level information for each of text materials to be evaluated by using the text level classification model); the levels of the key sentences, key phrases, and key words, by virtue of the rules of grammar, define a recommended order in which they must be put together in the report, and the levels of the action items serve to establish a recommended order (see Teaching #2 below) (to obtain a recommended order for each of the text materials to be evaluated); Teaching #2: 5.3, 13:23-25, Fig. 6, 600, 605, 14:12-20, 8:20-26 ESG score is estimated (determining predicted level information) for particular data subset (e.g., emissions data for environment topic) (12:23-26) (for each of text materials to be evaluated); 12:8-19 note action item is tagged with ESG rating, thus action item is also effectively scored. 
The generated ESG ratings define an order of the particular datasets/ESG topics (since each dataset corresponds to an ESG topic) and of the action items. The order of the action items is described in detail at 15:24-25, 16:6-9: the higher the rating (and/or the less outdated the action item), the more likely the action item is to be recommended (to obtain a recommended order for each of the text materials to be evaluated)) (Note re claim interpretation: the limitation "to obtain a recommended order for each of the text materials to be evaluated" is interpreted in light of specification 0041-0042: "obtaining the recommended order" amounts to merely inferring the order from the obtained ratings/scores (0041 "Based on table 4, it may be learned that the recommended order of the text material to be evaluated "air conditioner group control" is the first; the recommended order of the text material to be evaluated "air compressor maintenance adjustment" is the second; the recommended order of the text material to be evaluated "green power transfer" is the third.")) generating a report based on the recommended order of each of the text materials to be evaluated. (regarding generating a report: Fig. 6, 675, 5.4, 15:10-14, 8:18-26; regarding based on the recommended order of each of the text materials to be evaluated: the levels of the key sentences, key phrases, and key words, by virtue of the rules of grammar, define a recommended order in which they must be put together in the report, and the levels of the action items serve to establish a recommended order, see 12:8-19, 5.5, Fig. 9, 930-940, 15:24-16:12 higher rated (and/or less outdated) action items are more likely to be recommended for inclusion in the report, hence highest rated (and/or least outdated) action item is first in order of being recommended for inclusion.) Chan teaches features and vectors but does not explicitly disclose "feature vectors" in the claimed context. 
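For orientation, the claim 1 training sequence mapped above — converting each reference text material into a feature vector, inputting it into the classification model, and adjusting a model parameter according to the actual rating levels and the corresponding model prediction result — can be sketched as follows. The toy models, feature hashing, and training data are invented for illustration only and are not code from the application, Chan, or Madisetti:

```python
def extract_features(text, dim=8):
    """Toy feature extraction model: deterministic hashed bag-of-words vector."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[sum(ord(c) for c in token) % dim] += 1.0
    return vec

class ToyClassifier:
    """Toy classification model predicting a rating level from a feature vector."""
    def __init__(self, dim=8):
        self.weights = [0.0] * dim  # the model parameter being adjusted

    def predict(self, features):
        return sum(w * f for w, f in zip(self.weights, features))

    def adjust(self, features, actual_level, lr=0.01):
        # Adjust the model parameter according to the actual rating level
        # and the corresponding model prediction result (a gradient step).
        error = actual_level - self.predict(features)
        self.weights = [w + lr * error * f
                        for w, f in zip(self.weights, features)]

# Reference text materials paired with actual rating levels (invented data).
training_data = [("reduce carbon emissions", 3.0),
                 ("annual staff picnic", 1.0)]

model = ToyClassifier()
for _ in range(200):
    for text, level in training_data:
        model.adjust(extract_features(text), level)
```

In the claim, the feature extraction model's parameter is adjusted jointly with the classifier's; the sketch adjusts only the classifier weights to keep the illustration short.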
However, Madisetti teaches: … feature vector ..; (1:64-2:11) … feature vector ..; (1:64-2:11) It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified Chan's systems and methods for generating an ESG report, by incorporating therein these teachings of Madisetti regarding feature vectors, because Chan uses feature vectors in similar contexts (13:19, 15:21-22, 16:1) and thus endorses and accommodates their use in general, and because using feature vectors to analyze text with LLMs is a known "important component of LLM," which facilitates computerized analysis, see Madisetti, 2:5-13, MPEP 2143.I.A.,C. Regarding Claims 2 and 12 Chan in view of Madisetti teaches the limitations of base claims 1 and 11 as set forth above. Chan further teaches: wherein the step of extracting the plurality of reference text materials related to the rating topic from the evaluated reports comprises: (7:18-19, 12:21-22, Fig. 4, 11:1-5, 7:11-13, Fig. 4, 420; the reader is referred to the explanation provided at the "extracting" step of claims 1 and 11, above) performing fine-tune training on a pre-trained language model to establish a generative language model; (8:8-10, 12:23-24) … rating …; and (7:11-13, Fig. 4, 420; the reader is referred to the explanation provided at the "extracting" step of claims 1 and 11, above) generating the plurality of reference text materials of the plurality of evaluated reports … through the generative language model. (7:18-19, 12:21-22, Fig. 4, 11:1-5; the reader is referred to the explanation provided at the "extracting" step of claims 1 and 11, above) Madisetti further teaches: wherein the step of extracting the plurality of reference text materials … comprises: (5:47-49 the LLM/generative AI model may "perform … tasks such as … Information Extraction ..." 
Thus, although the teachings set forth below are primarily set in the context of a representative task (e.g., answering a question, etc.) other than extracting text, nonetheless the teachings set forth below may indeed be part of a process of extracting texts) receiving a question instruction (prompt) related to the … topic; and (2:29-35 "an input is given to the model in the form of a prompt"; claim 27 "receiving a user prompt") … according to the question instruction …. (2:10-13 "The Decoder, within the Transformer architecture, generates the output," 2:29-35 "where an input is given to the model in the form of a prompt and the model is able to generate coherent and contextually relevant responses based on the query in the prompt.") Alternatively, Madisetti teaches: performing fine-tune training on a pre-trained language model to establish a generative language model; (2:19-35, 5:37-52) It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Chan's systems and methods for generating an ESG report, as modified by Madisetti's teachings regarding feature vectors, by incorporating therein these further teachings of Madisetti regarding generating text by a fine-tuned, pre-trained LLM (generative AI model) in response to a prompt, in order to perform a task of extracting text, because these teachings are known/common ways of performing a task of extracting text, which ways would work in the same manner / yield predictable results in Chan as in Madisetti, see Madisetti, citations given above, MPEP 2143.I.A. Regarding Claims 3 and 13 Chan in view of Madisetti teaches the limitations of base claims 1 and 11 and intervening claims 2 and 12 as set forth above. Chan further teaches: wherein the processing device comprises a central processing unit and a graphics processing unit, and (3:11-12 "the present invention provides 1.
A process operable using a computerized system for providing an Environmental, Social and Governance (ESG) Report" -- under the broadest reasonable interpretation, a computerized system teaches a central processing unit; 3:26-27 "The input module may include a text recognition system or a graphics recognition system (graphics processing unit) for converting documents into the dataset of the company.") … running a classification model in the text level classification model through the central processing unit. (models of Figs. 4-6, e.g., neural network 120, NLP unit (11:1), ESG evaluation model (5.2), all of which are classification models (as explained at the "performing" step of claims 1 and 11, above), are run by computer, hence by central processing unit, see 3:11-13, 9:21, claims 1, 9-11) Chan teaches running the generative language model but not running it through (by) a GPU. However, Madisetti further teaches: the method comprises: running the generative language model through the graphics processing unit; and (3:25-27, 9:16-17; note per 1:31-36 an LLM is a generative AI model that generates text (generative language model)) It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Chan's systems and methods for generating an ESG report, as modified by Madisetti's teachings regarding feature vectors, by incorporating therein these further teachings of Madisetti regarding running an LLM (generative AI model) by a GPU, because it is a known/common practice, which would work in the same way / yield predictable results in Chan as in Madisetti, see Madisetti, citations given above, MPEP 2143.I.A. Regarding Claims 4 and 14 Chan in view of Madisetti teaches the limitations of base claims 1 and 11 as set forth above.
Chan further teaches: wherein the text level classification model comprises a feature extraction model and a classification model; (regarding feature extraction model: Fig. 4, "Content extraction," where 430, 440 and 450 are features that are extracted by the process/system, which as such constitutes a feature extraction model; 13:19 "The weight in the last layer is the weighting vector of the features extracted in its previous layer." -- indicating that the process/system constitutes a feature extraction model; regarding classification model: Fig. 4, 400, 430-460, 11:1-17 the pre-trained neural network 120 classifies (extracted) text portions of previous ESG reports as key sentences, key phrases, key words, and action items, and accordingly 120 constitutes a classification model (for reference, this is explained in greater detail at claim 1, "performing" step, Teaching #1, and "determining" step, Teaching #1, above); the ESG evaluation model (5.2, Fig. 5) classifies data into subcategories according to industry sector and ESG topic, e.g., emission data pertaining to environment topic (12:23-26), and also estimates an ESG score (605) for that particular subset of data (13:14-18, 5.3, 13:23-25, Fig. 6, 600, 14:12-20), which estimating of ESG score also constitutes classifying, and accordingly the ESG evaluation model constitutes a classification model (for reference, this is explained in greater detail at claim 1, "performing" step, Teaching #2, and "determining" step, Teaching #2, above)) wherein the plurality of evaluated reports comprise a plurality of corporate social responsibility reports. (7:18-19 "Such datasets of companies may be derived from the ESG reports (corporate social responsibility reports) of big enterprises wherein their ESG reports are openly available to the public") Regarding Claims 6 and 16 Chan in view of Madisetti teaches the limitations of base claims 1 and 11 and intervening claims 4 and 14 as set forth above. 
Chan further teaches: wherein the step of determining the predicted level information for each of the text materials to be evaluated by using the text level classification model, to obtain the recommended order for each of the text materials to be evaluated comprises: converting each of the plurality of reference text materials into a feature … by using the feature extraction model; (Under broadest reasonable interpretation, the following extraction/identification/classification teaches "converting into features": Fig. 4, 400, 410-460, "Content extraction," 11:1-17, 7:11-23 the data is classified (converted) into features (each of 410-460 indicates such a feature; the various types of data described at 7:11-23 amount to such features); 5.2, 12:23-26 the data is classified (converted) into industry sector and ESG topic, which are features; therefore, the hardware and/or software that performs this extraction/identification/ classification (conversion), e.g., Natural Language Processing Unit, ESG evaluation model, constitute a feature extraction model) inputting the feature … of each of the plurality of text materials to be evaluated into the classification model to generate a classification level of each of the plurality of text materials to be evaluated; and (Fig. 4, 400, 460, 12:1-19 action items, frequencies, and scores for action items and their corresponding topics are generated (generate a classification level) by the model that operates Fig. 4 (e.g., NLP unit, 11:1) (classification model), from inputted feature data (key sentences, key phrases, key words, other data as given in Fig. 4, 12:1-13 and 7:11-23); 12:23-26, 13:12-21, Fig. 5, 560, 5.3, 13:23-25, Fig. 
6, 600, 605, 14:12-20, 7:11-17, 7:22-23, 7:28-8:7, 8:20-21 ESG score is estimated (generate a classification level) from data (as explained in "converting" step immediately above, feature data) inputted into ESG evaluation model (classification model)) obtaining a recommended order of each of the plurality of text materials to be evaluated by sorting the classification levels of each of the plurality of text materials to be evaluated. (5.5, 15:24-16:12, 12:14-19 action items are sorted to obtain recommended order) (Note re claim interpretation: the limitation is interpreted in light of specification 0041-0042: "obtaining the recommended order … by sorting the classification levels …" amounts to merely inferring the order from the obtained ratings/scores (0041 "Based on table 4, it may be learned that the recommended order of the text material to be evaluated "air conditioner group control" is the first; the recommended order of the text material to be evaluated "air compressor maintenance adjustment" is the second; the recommended order of the text material to be evaluated "green power transfer" is the third.")) Chan teaches features and vectors but does not explicitly disclose "feature vectors" in the claimed context. 
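The "sorting" step of claims 6 and 16, as interpreted above, reduces to ordering the text materials by their classification levels. A minimal sketch, reusing the example materials from specification 0041 with invented scores:

```python
# Classification levels for the text materials to be evaluated; the
# material names come from specification 0041, the scores are invented.
classification_levels = {
    "air conditioner group control": 0.92,
    "air compressor maintenance adjustment": 0.85,
    "green power transfer": 0.61,
}

# Sorting the classification levels yields the recommended order
# (highest level first, matching the order stated in the specification).
recommended_order = sorted(classification_levels,
                           key=classification_levels.get,
                           reverse=True)
```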
However, Madisetti further teaches: … feature vector ..; (1:64-2:11) … feature vector ..; (1:64-2:11) It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Chan's systems and methods for generating an ESG report, as modified by Madisetti's teachings regarding feature vectors, by incorporating therein these further teachings of Madisetti regarding feature vectors, because Chan uses feature vectors in similar contexts (13:19, 15:21-22, 16:1) and thus endorses and accommodates their use in general, and because using feature vectors to analyze text with LLMs is a known "important component of LLM," which facilitates computerized analysis, see Madisetti, 2:5-13, MPEP 2143.I.A.,C. Regarding Claims 7 and 17 Chan in view of Madisetti teaches the limitations of base claims 1 and 11 as set forth above. Chan further teaches: wherein the step of determining the predicted level information for each of the text materials to be evaluated by using the text level classification model, to obtain the recommended order for each of the text materials to be evaluated comprises: using a feature extraction model to convert each of the plurality of text materials to be evaluated into a feature …, wherein the text materials to be evaluated comprise a first text material to be evaluated and a second text material to be evaluated; (Under broadest reasonable interpretation, the following extraction/identification/ classification teaches "converting into features": Fig. 
4, 400, 410-460, "Content extraction," 11:1-17, 7:11-23 the data is classified (converted) into features (each of 410-460 indicates such a feature; the various types of data described at 7:11-23 amount to such features); 5.2, 12:23-26 the data is classified (converted) into industry sector and ESG topic, which are features; therefore, the hardware and/or software that performs this extraction/identification/ classification (conversion), e.g., Natural Language Processing Unit, ESG evaluation model, constitute a feature extraction model); note since Chan teaches using multiple text data from ESG reports, this teaches a first text material and a second text material) inputting the feature … of the first text material to be evaluated and the feature … of the second text material to be evaluated into a classification model to generate a level comparison result between the first text material to be evaluated and the second text material to be evaluated; and (Fig. 4, 400, 460, 12:1-19 action items, frequencies, and scores for action items and their corresponding topics are generated (generate a classification level result) by the model that operates Fig. 4 (e.g., NLP unit, 11:1) (classification model), from inputted feature data (key sentences, key phrases, key words, other data as given in Fig. 4, 12:1-13 and 7:11-23); 12:23-26, 13:12-21, Fig. 5, 560, 5.3, 13:23-25, Fig. 6, 600, 605, 14:12-20, 7:11-17, 7:22-23, 7:28-8:7, 8:20-21 ESG score is estimated (generate a classification level result) from data (as explained in "converting" step immediately above, feature data) inputted into ESG evaluation model (classification model); regarding generate a level comparison result between the first text material to be evaluated and the second text material to be evaluated: 15:24-25, 16:6-7, 12:14-17 an action item having a higher rating is more likely to be recommended. Note that "higher" is a relative term. 
An action item has a higher rating not in an absolute sense, but rather only relative to another action item. Therefore, the teaching that an action item having a higher rating is more likely to be recommended is really (a shorthand way of saying that) an action item having a higher rating is more likely to be recommended, as compared to another action item (with a lower rating)) obtaining the recommended order of each of the plurality of text materials to be evaluated according to the level comparison result of the first text material to be evaluated and the second text material to be evaluated. (15:24-25, 16:6-7, 12:14-17 action items are ordered in recommended order, where action item having higher rating than other action item (level comparison result of the first text material to be evaluated and the second text material to be evaluated) is ranked higher (is more likely to be recommended) than other action items) (Note re claim interpretation: the limitation is interpreted in light of specification 0041-0042: "obtaining the recommended order … according to the level comparison result …" amounts to merely inferring the order from the obtained relative/comparative ratings/scores (0041 "Based on table 4, it may be learned that the recommended order of the text material to be evaluated "air conditioner group control" is the first; the recommended order of the text material to be evaluated "air compressor maintenance adjustment" is the second; the recommended order of the text material to be evaluated "green power transfer" is the third.")) Chan teaches features and vectors but does not explicitly disclose "feature vectors" in the claimed context. 
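Claims 7 and 17 instead derive the recommended order from pairwise level comparison results between text materials. A sketch of that pattern, in which the comparison rule and feature vectors are invented stand-ins for the trained classification model:

```python
from functools import cmp_to_key

def compare_levels(feat_a, feat_b):
    """Toy level-comparison model: positive if the first material outranks
    the second. A trained pairwise classifier would go here."""
    return sum(feat_a) - sum(feat_b)

# Feature vectors of the text materials to be evaluated (invented values).
features = {
    "first text material": [0.4, 0.9],
    "second text material": [0.2, 0.3],
    "third text material": [0.8, 0.7],
}

# The recommended order is obtained purely from the pairwise comparison
# results: a material that outranks its peers is placed earlier.
recommended_order = sorted(
    features,
    key=cmp_to_key(lambda a, b: compare_levels(features[b], features[a])),
)
```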
However, Madisetti further teaches: … feature vector ..; (1:64-2:11) … feature vector … feature vector …; (1:64-2:11) It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Chan's systems and methods for generating an ESG report, as modified by Madisetti's teachings regarding feature vectors, by incorporating therein these further teachings of Madisetti regarding feature vectors, because Chan uses feature vectors in similar contexts (13:19, 15:21-22, 16:1) and thus endorses and accommodates their use in general, and because using feature vectors to analyze text with LLMs is a known "important component of LLM," which facilitates computerized analysis, see Madisetti, 2:5-13, MPEP 2143.I.A.,C. Regarding Claims 9 and 19 Chan in view of Madisetti teaches the limitations of base claims 1 and 11 as set forth above. Chan further teaches: wherein the step of generating the report based on the recommended order of each of the text materials to be evaluated comprises: filtering out at least one target text material from the plurality of text materials to be evaluated according to a material limit quantity and the recommended order of each of the plurality of text materials to be evaluated. (16:10 "Action with the highest similarity value may be selected as the only recommended action." 
-- the highest rated action item (one target text material) is exclusively selected (filtered out) for inclusion in the report; inasmuch as the highest rated action item is selected, the filtering out is according to the recommended order; inasmuch as only a single action item is selected, the filtering out is according to a material limit quantity) (Note re claim interpretation: the entire "filtering out" step is interpreted in light of specification 0045-0047; in particular, the term "filter out" is interpreted as meaning 'select exclusively for inclusion' and the phrase "according to a material limit quantity" is interpreted in context of the limitation as 'so as not to exceed a limit or cap'; while this interpretation may appear at variance with the ordinary meaning of the terms, the Examiner deems Applicant to be de facto acting as its own lexicographer (in view of the apparent English-language translation of the foreign priority document) even though the specification does not provide explicit definitions of these terms) Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (HK 30089811 A), hereafter Chan, in view of Madisetti et al. (U.S. Patent No. 12,001,462), hereafter Madisetti, and further in view of Ebrahimi et al. (U.S. Patent Application Publication No. 2010/0205189 A1), hereafter Ebrahimi. Regarding Claims 8 and 18 Chan in view of Madisetti teaches the limitations of base claims 1 and 11 as set forth above. Chan further teaches: … the step of determining the predicted level information for each of the text materials to be evaluated by using the text level classification model, to obtain the recommended order for each of the text materials to be evaluated, …. (Fig. 4, 400, 430-460, 11:1-17, 5.3, 13:23-25, Fig. 
6, 600, 605, 14:12-20, 8:20-26, 12:23-26, 12:8-19, 15:24-25, 16:6-9; the reader is referred to the explanation provided at the "determining" step of claims 1 and 11, above) Chan in view of Madisetti does not explicitly disclose but Ebrahimi teaches: wherein before …, the method further comprises: replacing a numerical value in each of the plurality of text materials to be evaluated with a preset value. (regarding the "replacing" step: 0057-0058, 0067, 0073, claim 6, Fig. 8, swap function performed on confidential data, see also Figs. 4, 9 conversion of unmasked data to masked data; regarding wherein before …,: note that the swap function / conversion of unmasked data to masked data is performed in a staging table, i.e., is performed on data being staged for subsequent processing/evaluation/etc. of the data, as per 0002-0003 ("A goal of data masking is to obscure sensitive data, so that the sensitive data is not available outside of the authorized environment. Data masking might be done while provisioning non-production environments, so that data used to support test and development processes are not exposing sensitive data. … Unlike encryption, data masking may help the data maintain its usability for activities, like software development, research, testing, etc."), thus teaching before the step of determining …,) (Note re claim interpretation: the "replacing" step is interpreted in light of specification 0043, where the "influence value" refers to the actual numerical value of a KPI (e.g., # of kilotons reduction in carbon emissions) and the "preset value" refers to "X" or "Y," and the purpose of the replacement is to "prevent confidential information of the enterprise from being leaked due to uploading to an external server running the text level classification model".)
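The "replacing" step as interpreted above (specification 0043: each numerical KPI value swapped for a preset placeholder such as "X" before upload to an external server) can be sketched as follows; the example sentence is invented:

```python
import re

PRESET_VALUE = "X"  # placeholder standing in for the confidential figure

def mask_numbers(text):
    """Replace every numerical value (integers and decimals) in a text
    material with the preset value before external evaluation."""
    return re.sub(r"\d+(?:\.\d+)?", PRESET_VALUE, text)

masked = mask_numbers("Reduced carbon emissions by 12.5 kilotons in 2023.")
# masked == "Reduced carbon emissions by X kilotons in X."
```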
It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Chan's systems and methods for generating an ESG report, as modified by Madisetti's teachings regarding feature vectors, by incorporating therein these teachings of Ebrahimi regarding masking sensitive/confidential data prior to processing/evaluation/ etc. of the data, in order to prevent disclosure of confidential data to unauthorized external parties while maintaining usability of the data by such external parties, see Ebrahimi, 0002-0003. Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chan et al. (HK 30089811 A), hereafter Chan, in view of Madisetti et al. (U.S. Patent No. 12,001,462), hereafter Madisetti, and further in view of Krause et al. (U.S. Patent Application Publication No. 2021/0374341 A1), hereafter Krause. Regarding Claims 10 and 20 Chan in view of Madisetti teaches the limitations of base claims 1 and 11 and intervening claims 9 and 19 as set forth above. Chan further teaches: wherein the step of generating the report based on the recommended order of each of the text materials to be evaluated comprises: using a generative language model to generate report content about the rating topic in the report according to the at least one target text material in the plurality of text materials to be evaluated …. (Fig. 6, 675, 5.4, 15:10-14, 8:18-26; 12:8-19, 5.5, Fig. 9, 930-940, 15:24-16:12; the reader is referred to the explanation provided at the "generating" step of claims 1 and 11, above) Chan in view of Madisetti does not explicitly disclose but Krause teaches: … and a style parameter. (0011 "The embodiments disclose a generative-discriminative (GeDi) language modeling technique …. 
In the GeDi language modeling, class conditional language models (LMs) generate natural language with specific attributes, such as style or sentiment, by conditioning on an attribute label, or “control code”. The class conditional LMs are effective for generating text that resembles the training domains corresponding to the control codes, ….") It would have been obvious to one of ordinary skill in the art not later than the effective filing date of the claimed invention to have modified the combination of Chan's systems and methods for generating an ESG report, as modified by Madisetti's teachings regarding feature vectors, by incorporating therein these teachings of Krause regarding generating text according to a particular style, because it would yield improved output, more suited to the user and/or to the requested output, MPEP 2143.I.C.,D. Conclusion The prior art made of record and not relied upon, as set forth in the accompanying Notice of References Cited (PTO-892), is considered pertinent to applicant's disclosure. Among the cited references: Zhou (Shanghai Zhizhi Intelligent Technology Co. Ltd.) (CN 116432625 A) teaches obtaining annual reports of companies, extracting and classifying text information from the reports, and generating new reports, using a trained LLM such as BERT or GPT-2. Kim (US-20250217381-A1) teaches parsing an ESG document, e.g., a previous ESG document of a target company, screening whether the parsed ESG document includes items corresponding to specified ESG items, and generating new data including the corresponding items, including an ESG auto-completion model that tokenizes text, extracts features in the form of vectors, predicts/recommends text for inclusion in the new document, and adjusts model parameters to minimize the error between the calculated prediction value and the actual value (value of labeled data in training sample). 
Parham (US-20240248963-A1) teaches generating an ESG report including receiving ESG data from a report, dividing it into portions, analyzing the portions, classifying them into topics, determining the degree of materiality (relevance) of a data portion with respect to a given topic, and scoring the portion accordingly, using general and domain-specific LLMs and text embeddings. Nugent (US-20200387675-A1) teaches pre-training a general LM on a general corpus and then fine-tuning (further training) the general LM on a domain-specific corpus to generate a domain-specific LM, and classifying text using the LM, in the context of CSR/ESG, e.g., the domain of the domain-specific corpus may be ESG. Kao (US-20170300563-A1) teaches extracting a first set of features from text snippets of a plurality of labeled training documents, passing the extracted features and the plurality of labeled training documents to a supervised machine learning algorithm to train a potential snippet relevance score model, extracting a second set of features from a plurality of candidate text snippets in a candidate document, calculating a relevancy score for each of the plurality of candidate text snippets using the potential snippet relevance score model, and selecting one of the plurality of candidate text snippets for display based on the calculated relevancy scores. Yoon (US-20210365486-A1) teaches calculating an ESG score for a company by classifying pertinent news articles and calculating an ESG risk for clusters of articles. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOUGLAS W PINSKY whose telephone number is (571)272-4131. The examiner can normally be reached on 8:30 am - 5:30 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jessica Lemieux can be reached on 571-270-3445. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. 
/DWP/
Examiner, Art Unit 3626

/JESSICA LEMIEUX/
Supervisory Patent Examiner, Art Unit 3626

¹ Note the instant claim amendments incorporate the subject matter of former dependent claims 5 and 15 into independent claims 1 and 11, respectively. Thus, the subject matter argued by Applicant, now appearing in claims 1 and 11, was previously in claims 5 and 15, which have now been cancelled. Accordingly, the discussion below draws on the rejection of claims 5 and 15 in the previous Office Action (issued 07/30/25), it being understood that the limitations of claims 5 and 15 discussed here are now limitations of claims 1 and 11 and include the limitation (namely, the "inputting" step) on the basis of which Applicant argues that claims 1 and 11 differ from the cited prior art.

Prosecution Timeline

Sep 21, 2023
Application Filed
Jul 25, 2025
Non-Final Rejection — §101, §103, §112
Oct 22, 2025
Response Filed
Jan 08, 2026
Final Rejection — §101, §103, §112
Feb 12, 2026
Interview Requested

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12481976
ENCODED TRANSFER INSTRUMENTS
2y 5m to grant Granted Nov 25, 2025
Patent 12450588
METHOD FOR PROCESSING A SECURE FINANCIAL TRANSACTION USING A COMMERCIAL OFF-THE-SHELF OR AN INTERNET OF THINGS DEVICE
2y 5m to grant Granted Oct 21, 2025
Patent 12450591
SYSTEMS AND METHODS FOR CONTACTLESS CARD ACTIVATION VIA UNIQUE ACTIVATION CODES
2y 5m to grant Granted Oct 21, 2025
Patent 12406309
Auto Filing of Insurance Claim Via Connected Car
2y 5m to grant Granted Sep 02, 2025
Patent 12254516
NETWORK-BASED JOINT INVESTMENT PLATFORM
2y 5m to grant Granted Mar 18, 2025
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
26%
Grant Probability
41%
With Interview (+15.5%)
2y 12m
Median Time to Grant
Moderate
PTA Risk
Based on 112 resolved cases by this examiner. Grant probability derived from career allow rate.
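The projections above follow directly from the examiner statistics: the 26% grant probability is the career allow rate (29 granted / 112 resolved), and the 41% figure assumes the +15.5-point interview lift is added to that base rate, which is what the displayed numbers suggest. A minimal sketch of that arithmetic, under those assumptions:

```python
# Sketch: reproducing the dashboard's grant-probability figures from the
# examiner statistics shown above. Treating the interview lift as additive
# in percentage points is an assumption inferred from the displayed values.

granted = 29           # career grants
resolved = 112         # career resolved cases
interview_lift = 15.5  # displayed interview lift, percentage points

base_rate = granted / resolved * 100         # career allow rate, in %
with_interview = base_rate + interview_lift  # assumes additive lift

print(f"Grant probability: {base_rate:.0f}%")       # → 26%
print(f"With interview:    {with_interview:.0f}%")  # → 41%
```

Note the rounding: 29/112 is 25.9%, so the "+15.5%" lift lands at 41.4%, displayed as 41%.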
