Prosecution Insights
Last updated: April 19, 2026
Application No. 18/174,973

EVALUATION METHOD, EVALUATION APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING EVALUATION PROGRAM

Non-Final OA: rejections under §101, §102, and §112
Filed: Feb 27, 2023
Examiner: CHEN, KUANG FU
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Predicted OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 203 granted / 252 resolved; +25.6% vs TC avg)
Interview Lift: +67.0% (based on resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline; 37 applications currently pending)
Total Applications: 289 (career history, across all art units)

Statute-Specific Performance

§101: 18.4% (-21.6% vs TC avg)
§103: 47.4% (+7.4% vs TC avg)
§102: 11.5% (-28.5% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 252 resolved cases.

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This action is responsive to the application filed 2/27/2023. Claims 1-9 are presented for examination.

Priority

Applicant's claim for the benefit of continuation of prior-filed International Application PCT/JP2020/038178, filed on 10/8/2020, is acknowledged.

Information Disclosure Statement

The information disclosure statements (IDS) submitted 2/27/2023, 11/21/2023, and 1/19/2024 have been considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: A Method for Evaluating Machine Learning Robustness Using Gradient Ascent to Generate Adversarial Training Data.

Claim Rejections - 35 USC § 112(b)

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Regarding dependent claim 2 (lines 5-6), reciting in part "assigning one or a plurality of labels different from an original label to each piece of the selected data": it is unclear what the scope of "each piece of the selected data" is. It is unclear whether "the selected data" refers to the singular initial point randomly selected earlier in claim 2 (and, if so, what constitutes a plurality of pieces of a singular initial point to which labels are assigned), to each piece of some other selected data, or to something else. Thus, claim 2 is indefinite. For purposes of examination, the unclear limitation is interpreted as "assigning one or a plurality of labels different from an original label to each piece of pieces of selected data".

Regarding dependent claims 3-5: these claims do not cure the 112(b) deficiencies of claim 2, from which they variously depend; thus, claims 3-5 are also rejected under 35 U.S.C. 112(b).

Additionally, regarding dependent claim 3 (line 13), reciting in part "a plurality of the initial points": it is unclear what the scope of "the initial points" is, as the phrase lacks antecedent basis and can be interpreted as a plurality of initial points unrelated to the randomly selected initial point of claim 2, a plurality of initial points consisting of a set of the same randomly selected initial point of claim 2, a plurality of initial points consisting of a set of different randomly selected initial points, or something else. Thus, claim 3 is indefinite. For purposes of examination, the unclear limitation is interpreted as "a plurality of initial points".
Additionally, regarding dependent claim 3 (line 5 and lines 7-8), reciting in part "using each piece of the plurality of second training data": it is unclear what the scope of "the plurality of second training data" is, as the phrase lacks antecedent basis and can be interpreted as the plurality of pieces of the second training data, a plurality of copies of the second training data, or something else. Thus, claim 3 is indefinite. For purposes of examination, the unclear limitation is interpreted as "using each piece of the plurality of pieces of the second training data".

Additionally, regarding dependent claim 3 (line 7), reciting in part "each of a plurality of the trained machine learning models trained": it is unclear what the scope of "the trained machine learning models trained" is, as the phrase lacks antecedent basis and can be interpreted as a plurality of trained machine learning models trained, or something else. Thus, claim 3 is indefinite. For purposes of examination, the unclear limitation is interpreted as "each of a plurality of trained machine learning models trained".

Regarding dependent claim 6 (line 7), reciting in part "evaluating the trained machine learning models": it is unclear which particular plurality of trained machine learning models is being referred to, and whether that plurality is limited to the trained machine learning model instances recited in claim 6, or is something else. Thus, claim 6 is indefinite. For purposes of examination, the unclear limitation is interpreted as "evaluating trained machine learning models".

Regarding dependent claim 7: claim 7 does not cure the 112(b) deficiencies of claim 6, from which it depends; thus, claim 7 is also rejected under 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 ("2019 PEG").

Claim 1

Step 1: The claim recites "An evaluation method…the evaluation method comprising processing of:"; therefore, it is directed to the statutory category of a process.

Step 2A Prong 1: The claim recites, inter alia:

"the evaluation method comprising processing of: generating, based on information that indicates a degree of reduction of inference accuracy of a machine learning model to a change in first training data, second training data that reduces the inference accuracy": These limitations recite an evaluation process performable mentally, with the aid of pen and paper, of using judgment to notate second training data that is expected to reduce the inference accuracy of a machine learning model, based on observing information indicating a degree of reduction of the model's inference accuracy to a change in first training data.

"evaluating the trained machine learning model": These limitations recite a mentally performable process of observing the operation of the trained machine learning model.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows:

"An evaluation method executed by a computer": These additional elements are recited at a high level of generality and merely represent generic computer machinery performing in its ordinary capacity to implement the underlying judicial exception. See MPEP 2106.05(f).
"training the machine learning model by using the second training data": These additional elements are mere instructions to implement the judicial exception, because they recite only the idea of a solution of training the machine learning model by using the second training data, without reciting any details of how the training is accomplished (e.g., whether it is supervised, unsupervised, or hybrid training). See MPEP 2106.05(f). Thus, the way in which the additional elements use or interact with the judicial exception does not integrate the judicial exception into a practical application.

Step 2B: The additional elements from Step 2A Prong 2 amount to invoking computers or other machinery to apply the underlying judicial exception, i.e., adding the words "apply it" (or their equivalent) to the judicial exception. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 2

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

"wherein the generating of the second training data includes: randomly selecting data as an initial point from clusters of all labels of the first training data; adding, to the initial point, data obtained by assigning one or a plurality of labels different from an original label to each piece of the selected data" (interpreted as "an original label to each piece of the pieces of selected data" per the 35 U.S.C.
112(b) rejection above); "adding, to the initial point, data obtained by pairing data with different labels with each other; and generating the second training data based on the initial point": These limitations further the mentally performable process of generating the second training data by additionally including: using judgment to randomly select data as an initial point from clusters of all labels of the first training data; adding to the selected initial point, with the aid of pen and paper, data obtained by assigning one or a plurality of labels observed to be different from an original label to each piece of the pieces of selected random data; adding data to the randomly selected initial point by pairing, with the aid of pen and paper, data with different labels with each other; and generating, with the aid of pen and paper, the second training data based on the randomly selected initial point.

Step 2A Prong 2 and Step 2B: No additional elements are recited, so the claim does not provide a practical application and is not considered to be significantly more than the judicial exception. As such, the claim is patent ineligible.

Claim 3

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

"wherein the generating of the second training data includes generating a plurality of pieces of the second training data based on a plurality of the initial points" (interpreted as "initial points" per the 35 U.S.C. 112(b) rejection above): These limitations further the mentally performable process of generating the second training data by additionally including generating, with the aid of pen and paper, a plurality of pieces of the second training data based on observation of a plurality of initial points.

"and the evaluating of the trained machine learning model includes evaluating each of a plurality of the trained machine learning models trained" (interpreted as "a plurality of trained machine learning models trained" per the 35 U.S.C.
112(b) rejection above) "by using each piece of the plurality of second training data" (interpreted as "using each piece of the plurality of pieces of the second training data" per the 35 U.S.C. 112(b) rejection above): These limitations further the mentally performable process of evaluating the trained machine learning model by additionally including observing and evaluating each of a plurality of trained machine learning models, each trained by using a piece of the plurality of generated pieces of the second training data.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows:

"the training of the machine learning model includes training the machine learning model trained" (interpreted as "training machine learning models trained" per the 35 U.S.C. 112(b) rejection above) "by using each piece of the plurality of second training data" (interpreted as "using each piece of the plurality of pieces of the second training data" per the 35 U.S.C. 112(b) rejection above): These additional elements are mere instructions to implement the judicial exception, because they recite only the idea of a solution of training the machine learning model by using each piece of the plurality of pieces of the second training data, without reciting any details of how that training is accomplished (e.g., whether the data is used in supervised, unsupervised, or hybrid training). See MPEP 2106.05(f). Thus, the way in which the additional elements use or interact with the judicial exception does not integrate the judicial exception into a practical application.

Step 2B: The additional elements from Step 2A Prong 2 amount to invoking computers or other machinery to apply the underlying judicial exception, i.e., adding the words "apply it" (or their equivalent) to the judicial exception.
Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 4

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

"wherein the generating of the second training data based on the initial point includes: updating the initial point by a gradient ascent method": These limitations recite mathematical calculations, namely updating the initial point by a gradient ascent method as part of generating the second training data based on the initial point.

"and generating the second training data based on the updated initial point": These limitations recite a process performable mentally, with the aid of pen and paper, of generating the second training data based on the observed updated initial point.

Step 2A Prong 2 and Step 2B: No additional elements are recited, so the claim does not provide a practical application and is not considered to be significantly more than the judicial exception. As such, the claim is patent ineligible.

Claim 5

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

"wherein the generating of the second training data based on the initial point includes: updating a label assigned to the initial point by the gradient ascent method": These limitations recite mathematical calculations, namely updating a label assigned to the initial point by the gradient ascent method as part of generating the second training data based on the initial point.

"and generating the second training data based on the updated initial point": These limitations recite a process performable mentally, with the aid of pen and paper, of generating the second training data based on the observed updated initial point and label.
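For context on the gradient-ascent limitation discussed in claims 4 and 5, the following is a minimal illustrative sketch of updating a candidate data point (and its label) by gradient ascent on a loss function. The linear model, squared-error loss, learning rate, and step count are all assumptions chosen for illustration; they are not details from the application or the claims.

```python
import numpy as np

def gradient_ascent_poison(x, y, w, lr=0.1, steps=25):
    # Push a candidate point (x, y) uphill on a *fixed* linear model's
    # squared-error loss L = (w.x - y)^2.
    # Updating x mirrors claim 4; updating the label y mirrors claim 5.
    x, y = x.astype(float).copy(), float(y)
    for _ in range(steps):
        r = w @ x - y            # residual of the fixed model at (x, y)
        x += lr * 2 * r * w      # dL/dx = 2*(w.x - y)*w
        y += lr * (-2 * r)       # dL/dy = -2*(w.x - y)
    return x, y

w = np.array([1.0, -2.0])                    # hypothetical fixed model
x0, y0 = np.array([0.5, 0.5]), 1.0           # randomly selected initial point
x_p, y_p = gradient_ascent_poison(x0, y0, w)

loss0 = (w @ x0 - y0) ** 2
loss_p = (w @ x_p - y_p) ** 2
print(loss_p > loss0)   # True: the updated point attains a strictly higher loss
```

This sketch covers only the gradient-ascent step that the examiner characterizes as a mathematical calculation; the selection and pairing steps of claim 2 are omitted.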
Step 2A Prong 2 and Step 2B: No additional elements are recited, so the claim does not provide a practical application and is not considered to be significantly more than the judicial exception. As such, the claim is patent ineligible.

Claim 6

Step 1: A process, as above.

Step 2A Prong 1: The claim recites, inter alia:

"wherein the evaluating of the trained machine learning model includes: calculating, by using a function that calculates a change amount of a loss function, a first accuracy difference of the inference accuracy between the machine learning model trained by using the second training data and the machine learning model trained by using the first training data": These limitations recite mathematical calculations, namely using a function that calculates a change amount of a loss function to calculate a first accuracy difference of the inference accuracy between the machine learning model trained by using the second training data and the machine learning model trained by using the first training data, as part of evaluating the trained machine learning model.

"and evaluating the trained machine learning models" (interpreted as "evaluating trained machine learning models" per the 35 U.S.C. 112(b) rejection above) "based on the first accuracy difference": These limitations recite a mentally performable process of using judgment to evaluate trained machine learning models based on the calculated first accuracy difference.

Step 2A Prong 2 and Step 2B: No additional elements are recited, so the claim does not provide a practical application and is not considered to be significantly more than the judicial exception. As such, the claim is patent ineligible.

Claim 7

Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia:

"the evaluation method further comprising: calculating, by using the loss function, a second accuracy difference of the inference accuracy between the machine learning model trained by using the first training data and the machine learning model trained by using the second training data": These limitations recite mathematical calculations, namely using the loss function to calculate a second accuracy difference of the inference accuracy between the machine learning model trained by using the first training data and the machine learning model trained by using the second training data, as part of the evaluation method.

"replacing, in a case where a difference between the first accuracy difference and the second accuracy difference is a predetermined threshold or more, the first training data with the second training data to generate fourth training data that reduces the inference accuracy": These limitations recite a mathematical relationship of organizing and manipulating information: replacing the first training data with the second training data to generate fourth training data that reduces the inference accuracy, by mathematically determining whether the difference between the first accuracy difference and the second accuracy difference is a predetermined threshold or more.

"and evaluating the trained machine learning model trained by using the fourth training data": These limitations recite a mentally performable process of using judgment to evaluate the trained machine learning model observed as trained by using the fourth training data.

Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The additional elements of the claim are as follows:

"training the machine learning model by using the fourth training data": These additional elements are mere instructions to implement the judicial exception, because they recite only the idea of a solution of training the machine learning model by using the fourth training data, without reciting any details of how that training is accomplished (e.g., whether the data is used in supervised, unsupervised, or hybrid training). See MPEP 2106.05(f). Thus, the way in which the additional elements use or interact with the judicial exception does not integrate the judicial exception into a practical application.

Step 2B: The additional elements from Step 2A Prong 2 amount to invoking computers or other machinery to apply the underlying judicial exception, i.e., adding the words "apply it" (or their equivalent) to the judicial exception. Thus, the additional elements, viewed individually or in combination, do not provide an inventive concept or otherwise amount to significantly more than the abstract idea itself. See MPEP 2106.05.

Claim 8

Step 1: The claim recites "An evaluation apparatus comprising:"; therefore, it is directed to the statutory category of a machine.

Step 2A Prong 1: Claim 8 recites the same abstract ideas comprising the judicial exception as claim 1. The analysis at this step mirrors that of claim 1.

Step 2A Prong 2: The judicial exception recited in claim 8 is not integrated into a practical application. The only substantive difference between claim 8 and claim 1 is that claim 8 includes the additional elements "An evaluation apparatus comprising: a memory; and a processor coupled to the memory, the processor being configured to perform processing including:". However, mere recitation that a judicial exception is to be performed using generic computer machinery in its ordinary capacity, e.g.
an evaluation apparatus comprising a memory and a processor coupled to the memory, the processor being configured to perform processing, cannot meaningfully integrate the judicial exception into a practical application. See MPEP 2106.05(f). With that exception, the analysis at this step mirrors that of claim 1.

Step 2B: Claim 8 does not contain significantly more than the judicial exception. The only substantive difference between claim 8 and claim 1 is that claim 8 includes the additional elements "An evaluation apparatus comprising: a memory; and a processor coupled to the memory, the processor being configured to perform processing including:". However, mere recitation that a judicial exception is to be performed using generic computer machinery in its ordinary capacity, e.g. an evaluation apparatus comprising a memory and a processor configured to perform processing, cannot amount to significantly more than the judicial exception. See MPEP 2106.05(f). With that exception, the analysis at this step mirrors that of claim 1.

Claim 9

Step 1: The claim recites "A non-transitory computer-readable recording medium storing an evaluation program for causing a computer to perform processing including:"; therefore, it is directed to the statutory category of an article of manufacture.

Step 2A Prong 1: Claim 9 recites the same abstract ideas comprising the judicial exception as claim 1. The analysis at this step mirrors that of claim 1.

Step 2A Prong 2: The judicial exception recited in claim 9 is not integrated into a practical application. The only substantive difference between claim 9 and claim 1 is that claim 9 includes the additional elements "A non-transitory computer-readable recording medium storing an evaluation program for causing a computer to perform processing including:". However, mere recitation that a judicial exception is to be performed using generic computer machinery in its ordinary capacity, e.g.
a non-transitory computer-readable recording medium storing an evaluation program for causing a computer to perform processing, cannot meaningfully integrate the judicial exception into a practical application. See MPEP 2106.05(f). With that exception, the analysis at this step mirrors that of claim 1.

Step 2B: Claim 9 does not contain significantly more than the judicial exception. The only substantive difference between claim 9 and claim 1 is that claim 9 includes the additional elements "A non-transitory computer-readable recording medium storing an evaluation program for causing a computer to perform processing including:". However, mere recitation that a judicial exception is to be performed using generic computer machinery in its ordinary capacity, e.g. a non-transitory computer-readable recording medium storing an evaluation program for causing a computer to perform processing, cannot amount to significantly more than the judicial exception. See MPEP 2106.05(f). With that exception, the analysis at this step mirrors that of claim 1.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 6 and 8-9 are rejected under 35 U.S.C.
102(a)(1) as being anticipated by Jagielski et al. (hereinafter "Jagielski"), "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning" (2018). Jagielski was disclosed in an IDS dated 11/21/2023.

Regarding independent claim 1, Jagielski discloses an evaluation method executed by a computer (p. 26, Section V, Experimental Evaluation, discloses an implementation on a computer with four 32-core CPUs), the evaluation method comprising processing of:

generating, based on information that indicates a degree of reduction of inference accuracy of a machine learning model to a change in first training data (p. 21, Section II, Poisoning Attack Strategy, discloses the use of a bi-level optimization problem that necessarily evaluates an iterative process to (the evaluation method comprising processing of) update a data set (generating) based on a poisoned training dataset, which necessarily indicates (based on information that indicates) a degree of reduction of inference accuracy of a machine learning model to a change in an untainted data set (a change in first training data)), second training data that reduces the inference accuracy (p. 21, Section II, Poisoning Attack Strategy, discloses the outer optimization amounting to selecting the poisoning points (second training data) to maximize a loss function W on the untainted data set, which necessarily reduces the inference accuracy);

training the machine learning model by using the second training data (p. 21, Section II, Poisoning Attack Strategy, discloses that the inner optimization corresponds to retraining the regression algorithm (training the machine learning model) on a poisoned training set (by using the second training data)); and

evaluating the trained machine learning model (p. 21, Section II, left column, discloses measuring the success rate of the poisoning attack by the difference in testing-set MSE of the corrupted model, e.g.
trained with the poisoned points of the poisoned data set (and evaluating the trained machine learning model), compared to the legitimate model).

Regarding dependent claim 6, Jagielski discloses the evaluation method according to claim 1, wherein the evaluating of the trained machine learning model includes: calculating, by using a function that calculates a change amount of a loss function, a first accuracy difference of the inference accuracy between the machine learning model trained by using the second training data and the machine learning model trained by using the first training data; and evaluating the trained machine learning models (interpreted as "and evaluating trained machine learning models" per the 35 U.S.C. 112(b) rejection set forth above) based on the first accuracy difference (pp. 21-22, Section II, left column, and the Poisoning Attack Strategy subsection, disclose measuring the success rate of the poisoning attack by the difference in testing-set MSE of the corrupted model, e.g. trained with the poisoned points of the poisoned data set (evaluating the trained machine learning model), compared to the legitimate model. Measuring with the difference necessarily includes calculating, with the MSE function, a change amount of the MSE function; the success rate is construed as a first accuracy difference of the inference accuracy between the trained corrupted model (the machine learning model trained by using the second training data) and the legitimate model trained with the untainted dataset (and the machine learning model trained by using the first training data); and measuring the success rate of the poisoning attack necessarily evaluates the trained poisoned model and the trained legitimate model (and evaluating trained machine learning models) based on the calculated MSE success-rate difference (based on the first accuracy difference)).
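The loss-difference evaluation that the rejection maps onto claim 6 can be illustrated numerically: train one model on clean data and one on poisoned data, then compare their held-out MSE. The OLS model, synthetic data, and poisoning points below are illustrative assumptions; they are not taken from the application or from Jagielski's actual experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y):
    # Ordinary least squares as a stand-in for "training" a model.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# "First training data": clean samples from y = 2x plus small noise.
X1 = rng.uniform(-1, 1, size=(50, 1))
y1 = 2 * X1[:, 0] + rng.normal(0, 0.05, size=50)

# "Second training data": clean data plus a few deliberately mislabeled points.
X2 = np.vstack([X1, [[1.0], [1.0], [1.0]]])
y2 = np.concatenate([y1, [-5.0, -5.0, -5.0]])

# Held-out test set for measuring inference accuracy.
X_test = rng.uniform(-1, 1, size=(20, 1))
y_test = 2 * X_test[:, 0]

w_clean = fit(X1, y1)    # model trained on first training data
w_poison = fit(X2, y2)   # model trained on second training data

# "First accuracy difference": change in held-out loss between the two models,
# analogous to Jagielski's corrupted-vs-legitimate testing-set MSE difference.
delta = mse(w_poison, X_test, y_test) - mse(w_clean, X_test, y_test)
print(delta > 0)  # True: the poisoned model is measurably worse on held-out data
```

A larger delta indicates a more fragile model, which is the evaluation signal the claimed method and the cited reference both compute.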
Regarding independent claim 8, claim 8 is an evaluation apparatus claim that is substantially the same as the evaluation method of claim 1. Thus, claim 8 is rejected for the same reasons as claim 1. In addition, Jagielski discloses an evaluation apparatus comprising: a memory; and a processor coupled to the memory, the processor being configured to perform processing (p. 26, Section V, Experimental Evaluation, discloses implementing algorithm code, which necessarily would need to be stored in memory coupled to the CPU, on a computer with four 32-core CPUs (an evaluation apparatus)).

Regarding independent claim 9, claim 9 is a non-transitory computer-readable recording medium claim that is substantially the same as the evaluation method of claim 1. Thus, claim 9 is rejected for the same reasons as claim 1. In addition, Jagielski discloses a non-transitory computer-readable recording medium storing an evaluation program for causing a computer to perform processing (p. 26, Section V, Experimental Evaluation, discloses implementing algorithm code, which necessarily would need to be stored in memory (a non-transitory computer-readable recording medium) coupled to the CPU, on a computer with four 32-core CPUs (for causing a computer to perform processing)).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Andreou et al., US 2020/0160046 A1 (May 21, 2020) ([0097]: Let X = x_1, …, x_N be a set of unlabeled training data, where each x_i ∈ R^D. Define a set of j = {1, …, K} cluster prototypes μ_j ∈ R^D.
The prototypes may be initialized using the K-means++ algorithm, which randomly selects one of the data points from X to be the first cluster prototype and selects subsequent points, one at a time, from X to be initial cluster prototypes with probability inversely proportional to their distance from the nearest existing selected prototype. [0098]: Once all K of the clusters have been initialized, the training data points may all be assigned to the nearest cluster. In an exemplary embodiment, the distance between any data point in the training set and any cluster mean).

Chen et al., US 2020/0151578 A1 (May 14, 2020) (Abstract: Disclosed are a data sample label processing method and apparatus. The data sample label processing method comprises: obtaining a first set of data samples without determined labels and a second set of data samples with determined labels; performing an iteration with the following steps until an accuracy rate meets a preset requirement: training a prediction model based on a combination of the first set of data samples and the second set of data samples; inputting data samples from the first set of data samples into the prediction model to obtain prediction values as learning labels for each data sample, and associating the learning labels with the data samples respectively; obtaining a subset from the first set of data samples, wherein the subset comprises data samples associated with learning labels; obtaining determined labels for the data samples in the subset; obtaining the accuracy rate based at least on the learning labels of the data samples in the subset and the determined labels of the data samples in the subset; and, if the accuracy rate does not meet the preset requirement, labeling the data samples in the subset with the determined labels for the data samples in the subset and moving the subset from the first set of data samples to the second set of data samples; and, after the iteration ends, labeling the remaining data samples in the first
set with the associated learning labels). Basel et al., US 2020/0134510 A1 (Apr. 30, 2020) (ABSTRACT A method includes performing a first clustering operation to group members of a first data set into a first group of clusters and associating each cluster of the first group of clusters with a corresponding label of a first group of labels. The method includes performing a second clustering operation to group members of a combined data set into a second group of clusters. The combined data set includes a second data set and at least a portion of the first data set. The method includes associating one or more clusters of the second group of clusters with a corresponding label of the first group of labels and generating training data based on a second group of labels and the combined data set. The method includes training a machine learning classifier based on the training data to provide labels to a third data set). Any inquiry concerning this communication or earlier communications from the examiner should be directed to KUANG FU CHEN whose telephone number is (571)272-1393. The examiner can normally be reached M-F 9:00-5:30pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached on (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/KC CHEN/
Primary Patent Examiner, Art Unit 2143
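For reference, the K-means++ seeding described in the quoted specification can be sketched as below. Note that the canonical K-means++ rule (Arthur and Vassilvitskii, 2007) picks each subsequent prototype with probability proportional to the squared distance to the nearest already-selected prototype, which is what this sketch implements; the function names and the NumPy formulation are illustrative, not taken from the cited application.

```python
import numpy as np

def kmeanspp_init(X, k, rng=None):
    """K-means++ seeding: the first prototype is a uniformly random data
    point; each subsequent prototype is drawn with probability proportional
    to the squared distance to its nearest already-chosen prototype."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # squared distance from each point to its nearest chosen prototype
        d2 = np.min(((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1),
                    axis=1)
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)

def assign_clusters(X, centers):
    """Assign every training point to its nearest prototype (Euclidean)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```

Because co-located points have zero distance to an already-chosen prototype, the second prototype is always drawn from the other cluster when the data are well separated.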
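The iterative labeling scheme in the Chen abstract above (train on all currently labeled samples, predict "learning labels" for the unlabeled pool, spot-check a subset against determined labels, and either accept the learning labels or promote the subset and iterate) can be sketched roughly as follows. `train`, `oracle`, `batch`, and `target_acc` are illustrative placeholders, and taking the subset as a fixed prefix of the pool is a simplification; the abstract speaks only of a prediction model, determined labels, and a preset accuracy requirement.

```python
import numpy as np

def self_label(unlabeled, labeled, labels, train, oracle,
               batch=10, target_acc=0.9, max_rounds=20):
    """Simplified sketch of the Chen abstract's iteration."""
    unlabeled = list(unlabeled)
    labeled, labels = list(labeled), list(labels)
    for _ in range(max_rounds):
        if not unlabeled:
            break
        model = train(np.array(labeled), np.array(labels))
        learned = [model(x) for x in unlabeled]          # learning labels
        subset = list(range(min(batch, len(unlabeled)))) # spot-check subset
        truth = [oracle(unlabeled[i]) for i in subset]   # determined labels
        acc = sum(int(learned[i] == t) for i, t in zip(subset, truth)) / len(subset)
        if acc >= target_acc:
            break  # accuracy requirement met: stop refining
        for i, t in zip(subset, truth):  # promote subset with determined labels
            labeled.append(unlabeled[i])
            labels.append(t)
        unlabeled = [x for j, x in enumerate(unlabeled) if j not in set(subset)]
    # after the iteration ends, label whatever remains with learning labels
    model = train(np.array(labeled), np.array(labels))
    return labeled, labels, [(x, model(x)) for x in unlabeled]
```

Any classifier factory works for `train`; a nearest-neighbor stub is enough to exercise the loop.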

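The label carry-over step in the Basel abstract above (associating clusters of the re-clustered combined data set with labels from the first clustering) can be sketched as a majority vote over the previously labeled points that each new cluster absorbed. The voting policy and the function name are assumptions for illustration; the abstract does not specify how a new cluster is matched to a label.

```python
from collections import Counter

def label_new_clusters(old_point_labels, combined_assign_old, n_clusters):
    """For each cluster of the combined (second) clustering, adopt the
    majority label among the previously labeled points that landed in it;
    clusters containing no previously labeled points stay None."""
    result = [None] * n_clusters
    for c in range(n_clusters):
        votes = [lbl for lbl, a in zip(old_point_labels, combined_assign_old)
                 if a == c]
        if votes:
            result[c] = Counter(votes).most_common(1)[0][0]
    return result
```

The resulting per-cluster labels, paired with the combined data set, would then form the training data for the downstream classifier described in the abstract.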
Prosecution Timeline

Feb 27, 2023
Application Filed
Nov 13, 2025
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579425
PARAMETERIZED ACTIVATION FUNCTIONS TO ADJUST MODEL LINEARITY
2y 5m to grant Granted Mar 17, 2026
Patent 12566994
SYSTEMS AND METHODS TO CONFIGURE DEFAULTS BASED ON A MODEL
2y 5m to grant Granted Mar 03, 2026
Patent 12561593
METHOD FOR DETERMINING PRESENCE OF A SIGNATURE CONSISTENT WITH A PAIR OF MAJORANA ZERO MODES AND A QUANTUM COMPUTER
2y 5m to grant Granted Feb 24, 2026
Patent 12561561
MAPPING USER VECTORS BETWEEN EMBEDDINGS FOR A MACHINE LEARNING MODEL FOR AUTHORIZING ACCESS TO RESOURCE
2y 5m to grant Granted Feb 24, 2026
Patent 12561497
AUTOMATED OPERATING MODE DETECTION FOR A MULTI-MODAL SYSTEM WITH MULTIVARIATE TIME-SERIES DATA
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+67.0%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 252 resolved cases by this examiner. Grant probability derived from career allow rate.
