Prosecution Insights
Last updated: April 19, 2026
Application No. 18/328,514

SEMI-SUPERVISED MACHINE LEARNING MODEL FRAMEWORK FOR UNLABELED LEARNING

Non-Final OA §101, §103
Filed
Jun 02, 2023
Examiner
SINGH, AMRESH
Art Unit
2159
Tech Center
2100 — Computer Architecture & Software
Assignee
PayPal, Inc.
OA Round
1 (Non-Final)
76%
Grant Probability
Favorable
1-2
OA Rounds
3y 9m
To Grant
98%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
463 granted / 610 resolved
+20.9% vs TC avg
Strong +22% interview lift
+22.0%
Interview Lift
resolved cases with vs. without interview
Typical timeline
3y 9m
Avg Prosecution
32 currently pending
Career history
642
Total Applications
across all art units
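The headline figures above are simple ratios over the examiner's resolved cases. As a quick sanity check using the counts shown on this page (treating the "With Interview" number as a naive additive lift, which is an assumption — the dashboard does not state how it combines the two):

```python
# Reproduce the dashboard's headline rates from the raw counts above:
# 463 granted of 610 resolved, +22.0% interview lift.
granted, resolved = 463, 610
interview_lift = 0.22

allow_rate = granted / resolved                # career allow rate
with_interview = allow_rate + interview_lift   # naive additive lift

print(f"{allow_rate:.0%}")       # 76%
print(f"{with_interview:.0%}")   # 98%
```

Both round to the 76% and 98% figures displayed in the Prosecution Projections section.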

Statute-Specific Performance

§101
18.8%
-21.2% vs TC avg
§103
46.0%
+6.0% vs TC avg
§102
15.3%
-24.7% vs TC avg
§112
6.3%
-33.7% vs TC avg
Black line = Tech Center average estimate • Based on career data from 610 resolved cases

Office Action

§101 §103
DETAILED ACTION

Claims 1-20 are presented for examination. This is a Non-Final Action.

Claim Rejections - 35 U.S.C. §101

35 U.S.C. §101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. §101 as directed to an abstract idea without significantly more. With respect to independent claims 1, 8, and the combination of 15 and 16: claim 1 recites “…classify data sets into a first classification or a second classification, wherein the training data comprises a first group of data sets and a second group of data sets, wherein each data set in the first group of data sets is labeled with the first classification, and wherein each data set in the second group of data sets is labeled with the second classification; modifying the first group of data sets and the second group of data sets based on the first plurality of classification scores and the second plurality of classification scores, wherein the modifying comprises relabeling at least one data set in the second group of data sets from the second classification to the first classification.” Claim 8 recites “dividing training data into a first set of training data associated with a first classification and a second set of training data associated with a second classification” and “modifying the training data based on the plurality of classification scores”. These limitations could be reasonably and practically performed in the human mind: a person can mentally classify documents/data into multiple classifications based on classes (labels) and then mentally modify the data/documents into a new classification based on re-classification (re-labeling); these are observation/evaluation steps.
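For orientation, the claim-1 steps quoted above (train on two labeled groups, score both groups, relabel across groups, re-train) describe a loop that can be sketched as follows. The stand-in centroid "model" and all names here are illustrative only; they are not taken from the application or the cited references.

```python
# Illustrative sketch of the claimed loop: train on two labeled groups,
# score both groups, relabel samples whose scores contradict their group,
# and re-train on the modified groups. The centroid "model" is a stand-in.

def train(group_a, group_b):
    # "Training" here just computes each group's centroid (mean).
    mean = lambda xs: sum(xs) / len(xs)
    return mean(group_a), mean(group_b)

def score(model, x):
    # Positive score: x lies closer to group A's centroid than to B's.
    c_a, c_b = model
    return abs(x - c_b) - abs(x - c_a)

def relabel_and_retrain(group_a, group_b):
    model = train(group_a, group_b)
    # Relabel: move any group-B sample that scores as group A.
    moved = [x for x in group_b if score(model, x) > 0]
    group_b = [x for x in group_b if score(model, x) <= 0]
    group_a = group_a + moved
    # Re-train the model on the modified groups.
    return train(group_a, group_b), group_a, group_b

model, a, b = relabel_and_retrain([1.0, 1.2, 0.9], [5.0, 5.2, 1.1])
print(a, b)  # the mislabeled 1.1 moves from the second group to the first
```

Whether such a loop is "practically performed in the human mind" is, of course, exactly what the §101 rejection and any response will argue about; the sketch only fixes the mechanics being discussed.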
Accordingly, the claim recites a mental process and mathematical relationships, which can be performed with pen and paper; the claim therefore recites an abstract idea. This judicial exception is not integrated into a practical application. At step 2A, prong two, claims 1, 8, and the combination of 15 and 16 recite the additional elements of “a non-transitory memory; and one or more hardware processors coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations”; “non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations”; and “obtaining training data for training a machine learning model”, which are elements merely invoking a generic computer environment (processor, database, memory); “obtaining, a plurality of classification scores based on the first group of data sets and a second plurality of classification scores based on the second group of data sets” recites basic data-gathering or outputting functions (MPEP 2106.05(f)); and the recitation of “training a machine learning model” and “re-training the machine learning model based on the modified first group of data sets and the modified second group of data sets” amounts to merely applying machine learning to a “field of use” or technological environment of AI. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. At step 2B, claims 1, 8, and the combination of 15 and 16 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained with respect to Step 2A Prong Two, the additional elements recite a conventional computer executing routine data-storage and retrieval operations.
The “obtaining, a plurality of classification scores based on the first group of data sets and a second plurality of classification scores based on the second group of data sets” and the recitation of “training a machine learning model” and “re-training the machine learning model based on the modified first group of data sets and the modified second group of data sets”, individually or in combination, do not add “significantly more” than the abstract idea; they are no more than well-understood, routine, and conventional computer functions that merely apply the abstract idea in an AI field of use. When viewed as an ordered combination, these additional elements do not integrate the abstract idea into a practical application and do not add significantly more than the abstract idea itself. Accordingly, claim 1 is ineligible under §101.

Claims 2-7, 10 and 11 are dependent claims and do not recite any additional elements that would amount to significantly more than the abstract idea. Specifically:

Claim 2. With respect to step 2A prong 2, “wherein the training the machine learning model is based on an objective function that minimizes a within-group output variance and/or maximizes a between-group output variance of the machine learning model.” recites additional elements of insignificant extra-solution activity. With respect to step 2B, the recited insignificant extra-solution activity is recited at a high level of generality and is well-understood, routine, and conventional as taught by the prior art of record.

Claim 3. With respect to step 2A prong 1, “detecting that the at least one data set has been mislabeled based on the first plurality of classification scores and the second plurality of classification scores, wherein the modifying is based further on the detecting.
” recites an abstract idea of mental steps (observation & evaluation); a person can detect datasets/documents which have been mislabeled and can modify and relabel the documents mentally based on that determination.

Claim 4. With respect to step 2A prong 1, “calculating a first value based on the first plurality of classification scores and the second plurality of classification scores, wherein the first value indicates an efficacy of the machine learning model in detecting mislabeled data sets; and determining whether the first value is larger than a second value calculated during a previous training iteration of the machine learning model, and determining that the first value is larger than the second value by a threshold.” recites an abstract idea of mental steps (observation & evaluation); a person can calculate and determine values based on previously calculated values. With respect to step 2A prong 2, “re-training the machine learning model…” recites additional elements of insignificant extra-solution activity in a field of use of applying machine learning in an AI environment. With respect to step 2B, the recited insignificant extra-solution activity is recited at a high level of generality and is well-understood, routine, and conventional as taught by the prior art of record.

Claim 5. With respect to step 2A prong 1, “prior to the training the machine learning model, relabeling a first subset of data sets in the first group from the first classification to the second classification and moving the first subset of data sets from the first group to the second group.” recites an abstract idea of mental steps (observation & evaluation); a person can relabel a document or data into a second classification mentally based on the observation/evaluation step.

Claim 6.
With respect to step 2A prong 1, “randomly selecting the first subset of data sets from the first group of data sets” recites an abstract idea of mental steps (observation & evaluation); a person can randomly select a subset of documents from a set.

Claim 7. With respect to step 2A prong 2, “wherein each data set in the first group of data sets and the second group of data sets corresponds to a transaction, wherein the first classification corresponds to a fraudulent classification, and wherein the second classification corresponds to a non-fraudulent classification.” recites additional elements of insignificant extra-solution activity that merely characterize the type of data being analyzed and the field of use in which it is applied. With respect to step 2B, the recited insignificant extra-solution activity is recited at a high level of generality and is well-understood, routine, and conventional as taught by the prior art of record.

Claim 10. With respect to step 2A prong 1, “determining a threshold based on the first set of classification scores, wherein the determining that the portion of the training data has been mislabeled is based on the threshold” recites an abstract idea of mental steps (observation & evaluation); a person can determine the threshold by which data is deemed mislabeled. With respect to step 2A prong 2, “wherein the plurality of classification scores comprises a first set of classification scores obtained from the machine learning model based on the first set of training data” recites additional elements of insignificant extra-solution activity of mere data outputting, wherein data is gathered from a machine learning model. With respect to step 2B, the recited insignificant extra-solution activity is recited at a high level of generality and is well-understood, routine, and conventional as taught by the prior art of record.

Claim 11.
With respect to step 2A prong 2, “wherein the threshold corresponds to at least one of a highest classification score or a lowest classification score in the first set of classification scores.” recites additional elements of insignificant extra-solution activity of merely specifying a particular value selection within the classification scores. It does not impose any meaningful limitation on the abstract idea or improve the functioning of a computer or machine learning model. With respect to step 2B, the recited insignificant extra-solution activity is recited at a high level of generality and is well-understood, routine, and conventional as taught by the prior art of record.

Claims 8-9 and 12-20 are similar to claims 1-7, 10 and 11 and are hence rejected similarly.

Claim Rejections - 35 U.S.C. §103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 8, and 9 are rejected under 35 U.S.C. §103 as being unpatentable over Zhou et al. (US 12,482,242) further in view of Forman et al.
(US 7,792,353) 1. Zhou teaches, A system, comprising: a non-transitory memory; and one or more hardware processors coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations (Claim 19 – teaches computer system with one or more processors and a non-transitory computer-readable medium having instructions encoded thereon, Zhou) comprising: obtaining training data for training a machine learning model configured to classify data sets into a first classification or a second classification (Claim 1 - teaches obtaining a plurality of labeled samples… each labeled with a ground-truth label – thus teaching labeled training data for a classifier having multiple classifications, Zhou), wherein the training data comprises a first group of data sets and a second group of data sets (Claim 1 – dividing the plurality of labeled samples into plurality of training subsets and hold-out test subsets, Zhou); training the machine learning model using the training data (Claim 1 – teaches training a machine learning model using a corresponding training subset, Zhou); obtaining, from the trained machine learning model, a first plurality of classification scores based on the first group of data sets (Claim 1 – teaches each prediction label has a confidence score indicating a likelihood that the prediction label is correct – thus teaching obtaining confidence scores from the trained model, Zhou) and a second plurality of classification scores based on the second group of data sets (Claim 1 – teaches prediction label has a confidence score – thus teaching confidence scores are obtained for all labeled samples, Zhou). 
Zhou does not explicitly teach, wherein each data set in the first group of data sets is labeled with the first classification, and wherein each data set in the second group of data sets is labeled with the second classification; and modifying the first group of data sets and the second group of data sets based on the first plurality of classification scores and the second plurality of classification scores, wherein the modifying comprises relabeling at least one data set in the second group of data sets from the second classification to the first classification; and re-training the machine learning model based on the modified first group of data sets and the modified second group of data sets. However, Forman teaches, wherein each data set in the first group of data sets is labeled with the first classification and wherein each data set in the second group of data sets is labeled with the second classification (Col 10: lines 25-33 – teaches positive samples will be mostly gathered together, and group-select them and label them positive (first group / first classification), and the same for the negative samples, thus teaching group selection and labeling as negative (second group / second classification), Forman); and modifying the first group of data sets and the second group of data sets based on the first plurality of classification scores and the second plurality of classification scores (Fig 8, Col 10: lines 25-33 - teaches samples are selected for labeling, sorted by their prediction strength (e.g., probability of belonging to the positive class according to the current classifiers)… the positive samples will be mostly gathered together… making it easier for the user to group-select them and label them positive (same for the negative samples) – thus disclosing modifying groups of data samples, where the grouping and modification are based on classification scores produced by the classifier, Forman), wherein the modifying comprises relabeling at least one data set in
the second group of data sets from the second classification to the first classification (Col 10: lines 2-34 and Fig 8:138 - teaches relabeling samples by assigning a positive label to group samples and assigning negative labels to other grouped samples, thereby changing the classification of at least some samples accordingly – thus teaching relabeling samples from one classification to another, Forman); and re-training the machine learning model based on the modified first group of data sets and the modified second group of data sets (Fig 8:138 & Col 10: lines 16-21 – teaches retraining the classifier based on the modified training set and looping back to reprocess samples as explicitly cross-referenced in Fig 2, Forman). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to modify Zhou’s system to relabel identified mislabeled samples and retrain the classifier as taught by Forman because both references are in the same field of endeavor of supervised ML classification with label-quality issues, and doing so would improve classification accuracy and robustness to mislabeled data. 3.
The combination of Zhou and Forman teaches, The system of claim 1, wherein the operations further comprise: detecting that the at least one data set has been mislabeled (Claim 1 and Col 8: lines 43-46 – teaches determining whether the candidate mislabel is a mislabel, Zhou) based on the first plurality of classification scores and the second plurality of classification scores (Claim 1 and Col 6: lines 15-18– teaches each prediction label has a confidence score indicating a likelihood that the prediction label is correct, and determining whether the candidate mislabel is a mislabel based in part on the confidence score, Zhou), wherein the modifying is based further on the detecting (Col 6: lines 11-15 and Claim 1 – identifying candidate mislabeled samples for further processing – thus disclosing that detected mislabeled samples are used as a basis for subsequent modification or handling, Zhou). Claim 8 is similar to claim 1 hence rejected similarly. Claim 9 is similar to the combination of claim 1 and 3 hence rejected similarly. 10. 
The combination of Zhou and Forman teaches, The method of claim 9, wherein the plurality of classification scores comprises a first set of classification scores obtained from the machine learning model based on the first set of training data (Claim 1 - teaches each prediction label has a confidence score indicating a likelihood that the prediction label is correct – discloses classification scores (confidence scores) output by a trained machine learning model for labeled training samples, Zhou), and wherein the method further comprises: determining a threshold based on the first set of classification scores (Claim 1 - teaches determining whether the candidate mislabel is a mislabel based in part on the confidence score – discloses using confidence scores to establish decision criteria for identifying candidate mislabels; such criteria inherently operate as a threshold on the confidence scores, Zhou), wherein the determining that the portion of the training data has been mislabeled is based on the threshold (Claim 1 – identifying candidate mislabeled samples and determining whether the candidate mislabel is a mislabel based on confidence scores – discloses that mislabel determination is made by comparing confidence scores against a decision criterion (threshold), Zhou). 11. The combination of Zhou and Forman teaches, The method of claim 10, wherein the threshold corresponds to at least one of a highest classification score (Col 10: lines 25-29 - teaches samples are sorted by their prediction strength (e.g. 
probability of belonging to the positive class according to the current classifier – discloses sorting by prediction strength explicitly identifies samples with the highest classification scores, which operates as an upper-end threshold, Forman) or a lowest classification score (Col 10: lines 29-33 – teaches the positive samples will be mostly gathered together … (same for the negative samples), with a few individual clicks to treat the exceptions – discloses sorting necessarily places lowest-confidence / lowest-score samples at the opposite end, which operates as a lower-end threshold, Forman) in the first set of classification scores (Claim 1 – each prediction label has a confidence score indicating a likelihood that the prediction label is correct – discloses that classification scores are generated for the training data, Zhou; Forman operates on such scores). The combination of claims 15 and 16 is similar to claim 1 hence rejected similarly.

Claims 2, 13, and 18 are rejected under 35 U.S.C. §103 as being unpatentable over Zhou et al. (US 12,482,242) further in view of Forman et al. (US 7,792,353) further in view of Tickoo et al. (US 2015/0117766). All the limitations of claim 1 are taught above. 2. The combination of Zhou and Forman does not explicitly teach, wherein the training the machine learning model is based on an objective function that minimizes a within-group output variance and/or maximizes a between-group output variance of the machine learning model.
However, Tickoo teaches, wherein the training the machine learning model is based on an objective function that minimizes a within-group output variance and/or maximizes a between-group output variance of the machine learning model (Abstract - teaches “using the direction optimization set to calculate an optimum transformation vector that maximizes inter-class separability and minimizes intra-class variance of the feature samples with respect to corresponding class labels” thus disclosing training based on an optimization criterion that maximizes inter-class separability and minimizes intra-class variance, as the direction optimization set is used to calculate an optimum transformation vector satisfying those criteria, which constitutes an objective function, Tickoo). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to modify the combined system of Zhou and Forman because they are in the same field of endeavor, as directed to training supervised machine learning classifiers using labeled data and improving classification accuracy through training methodologies. Thus, the motivation to combine would be to train the machine learning model of Zhou using the objective function taught by Tickoo in order to improve class separation and classification accuracy. Claim 13 is similar to claim 2 hence rejected similarly. Claim 18 is similar to claim 2 hence rejected similarly.

Claims 4, 12, and 17 are rejected under 35 U.S.C. §103 as being unpatentable over Zhou et al. (US 12,482,242) further in view of Forman et al. (US 7,792,353) further in view of Anwar et al. (US 2022/0156574) 4.
The combination of Zhou and Forman teach, The system of claim 1, wherein the operations further comprise: …based on the first plurality of classification scores (Claim 1 - each prediction label has a confidence score indicating a likelihood that the prediction label is correct – discloses plural confidence scores output by the classifier, Zhou) and the second plurality of classification scores (Claim 1 – teaches each prediction label has a confidence score applied across labeled samples of different classifications – disclosing confidence score for multiple labeled data sets, Zhou); … detecting mislabeled data sets (Claim 1 and training-set correction logic – teaches identifying mislabeled samples based on classifier outputs and retraining impacts thus supplies the mislabeled-data context, Zhou). The combination of Zhou and Forman do not explicitly teach or suggest, calculating a first value… wherein the first value indicates an efficacy of the machine learning model; and determining whether the first value is larger than a second value calculated during a previous training iteration of the machine learning model, and wherein the re-training the machine learning model is responsive to determining that the first value is larger than the second value by a threshold. 
However, Anwar teaches, calculating a first value (Paragraph 56 - observing the recorded loss for consecutive computed models enables the worker to define a loss difference Δθ = θ_i − θ_(i-1), Anwar), wherein the first value indicates an efficacy of the machine learning model (Paragraph 56 – teaches calculated value Δθ, which summarizes model behavior across training iterations, Anwar; when this value is applied within Zhou’s mislabel-detection framework, it serves as a value indicating the efficacy of the model in detecting mislabeled data sets); and determining whether the first value is larger than a second value (Paragraph 56 - defines a loss difference Δθ = θ_i − θ_(i-1) – discloses comparing a current value to a prior value, Anwar) calculated during a previous training iteration of the machine learning model (Paragraph 56 – teaches θ_i is the loss … for the current training iteration and θ_(i-1) is the loss… for the preceding training iteration – thus disclosing values calculated during previous training iterations, Anwar), and wherein the re-training the machine learning model is responsive to determining that the first value is larger than the second value by a threshold (Paragraph 59 – teaches the decision … can be decided through a statically defined threshold between loss differences and Paragraph 57 – teaches if the loss difference Δθ is large … continue to train the model – thus disclosing threshold-based training control, Anwar). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to incorporate Anwar’s iteration-based, threshold-controlled training metric into the mislabel-detection and retraining system of Zhou, as implemented using Forman’s iterative retraining framework, in order to quantitatively determine when retraining is warranted and to avoid unnecessary retraining cycles.
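The Anwar mapping above turns on gating further training with an iteration-to-iteration loss difference. A minimal sketch of that control logic, with hypothetical function names (only the Δθ definition follows the cited paragraph 56):

```python
# Sketch of threshold-gated retraining: compare the loss difference
# between consecutive training iterations against a static threshold
# (per the cited Anwar paragraphs 56-59; names are hypothetical).

def loss_difference(theta_curr, theta_prev):
    # Delta(theta) = theta_i - theta_(i-1)
    return theta_curr - theta_prev

def should_keep_training(theta_curr, theta_prev, threshold):
    # Keep training only while the loss is still changing by more
    # than the statically defined threshold.
    return abs(loss_difference(theta_curr, theta_prev)) > threshold

print(should_keep_training(0.90, 0.40, 0.10))  # loss still moving: keep going
print(should_keep_training(0.41, 0.40, 0.10))  # change below threshold: stop
```

This is the sense in which the Office Action reads the "first value larger than a second value by a threshold" limitation onto Anwar's loss-difference control.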
Claim 12 is similar to claim 4 hence rejected similarly. Claim 17 is similar to claim 4 hence rejected similarly.

Claims 5, 6, 14, 19, and 20 are rejected under 35 U.S.C. §103 as being unpatentable over Zhou et al. (US 12,482,242) further in view of Forman et al. (US 7,792,353) further in view of Vahdat (US 11,531,852). All the limitations of claim 1 are taught above. 5. The combination of Zhou and Forman does not explicitly teach or suggest, prior to the training the machine learning model, relabeling a first subset of data sets in the first group from the first classification to the second classification and moving the first subset of data sets from the first group to the second group. However, Vahdat teaches, prior to the training the machine learning model (Fig 530 (521, 524, 526) – teaches 521 (label noise estimation), 524 (label correction) and 526 (dataset preparation), which produce a trained joint distribution later used for inference at block 540 – thus disclosing placing label correction before completion of training of the machine learning system represented by block 530, Vahdat), relabeling a first subset of data sets in the first group (Abstract - teaches incorrect labels may be corrected by flipping the labels to their correct state – thus disclosing relabeling a subset of training samples, Vahdat) from the first classification to the second classification (Claim 1 - teaches label noise in the form of stochastic label flips on binary labels – thus disclosing that label flips explicitly teach changing a data set from one classification to another, Vahdat) and moving the first subset of data sets from the first group to the second group (Col 5: lines 20-30 – teaches reassigning a data set from one class group to another, Vahdat).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to combine Vahdat with Zhou and Forman because all the references are in the same field of supervised machine learning model training and address the common problem of incorrect labels in training data; a person of ordinary skill in the art would have been motivated to incorporate Vahdat’s pre-training label correction techniques into the systems of Zhou and Forman in order to further improve classifier performance. 6. The combination of Zhou, Forman and Vahdat teaches, The system of claim 5, wherein the operations further comprise randomly selecting (Claim 1 – teaches selecting candidate data sets for further processing based on classifier outputs and confidence scores – this reads on randomly selecting because the claim does not define how the data sets are random, what probability distribution is used, or how the selection differs from other non-deterministic selection, and does not specify a deterministic selection rule, under BRI, Zhou) the first subset of data sets (Abstract - teaches incorrect labels may be corrected by flipping the labels to their correct state – discloses operating on a subset of the data sets, satisfying the “first subset” requirement, Vahdat) from the first group of data sets (Claim 1 – teaches training data sets are grouped according to classification labels and candidate data sets are selected from a labeled group – explicitly discloses classification-based grouping of data sets and selecting subsets from a group, Zhou). Claim 14 is similar to claim 5 hence rejected similarly. Claim 19 is similar to claim 5 hence rejected similarly. Claim 20 is similar to claim 6 hence rejected similarly.

Claim 7 is rejected under 35 U.S.C. §103 as being unpatentable over Zhou et al. (US 12,482,242) further in view of Forman et al. (US 7,792,353) further in view of Zoldi et al.
(US 11,367,074) All the limitations of claim 1 are taught above. 7. The combination of Zhou and Forman does not explicitly teach, wherein each data set in the first group of data sets and the second group of data sets corresponds to a transaction, wherein the first classification corresponds to a fraudulent classification, and wherein the second classification corresponds to a non-fraudulent classification. However, Zoldi teaches, wherein each data set in the first group of data sets and the second group of data sets corresponds to a transaction (Col 1: lines 7-10 - teaches the subject matter described herein relates to fraud detection, and more particularly to high resolution transaction-level fraud detection and Col 4: lines 28-29 – teaches the pinpoint model is trained on transactions within the fraud window – discloses that the data processed and modeled are transactions, satisfying the requirement that each data set corresponds to a transaction, Zoldi), wherein the first classification corresponds to a fraudulent classification (Abstract - teaches distinguish fraudulent transactions from a legitimate transaction, and Claim 1 – teaches trained … to distinguish between fraudulent and legitimate transactions – thus disclosing a fraudulent classification applied to transactions, Zoldi), and wherein the second classification corresponds to a non-fraudulent classification (Abstract - distinguish fraudulent transactions from a legitimate transaction – fraud / non-fraud transactions – thus disclosing a non-fraudulent (legitimate) classification distinct from fraudulent transactions, Zoldi). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to combine Zoldi with Zhou and Forman because they are in the same field of endeavor, namely training supervised machine learning classifiers on labeled datasets, differing only in the application domain (fraud). 
Because Zoldi employs supervised learning on labeled transaction data and addresses classification accuracy, a person of ordinary skill in the art would have been motivated to apply the mislabel-detection and retraining techniques of Zhou and Forman to the transaction-level fraud detection system of Zoldi in order to improve fraud classification performance.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMRESH SINGH whose telephone number is (571) 270-3560. The examiner can normally be reached Monday-Friday 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann J. Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMRESH SINGH/ Primary Examiner, Art Unit 2159

Prosecution Timeline

Jun 02, 2023
Application Filed
Jan 30, 2026
Non-Final Rejection — §101, §103
Mar 26, 2026
Interview Requested
Apr 14, 2026
Examiner Interview Summary
Apr 14, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591804
SYSTEMS AND METHODS FOR DISTRIBUTED LEARNING FOR WIRELESS EDGE DYNAMICS
2y 5m to grant Granted Mar 31, 2026
Patent 12585549
BACKING UP DATABASE FILES IN A DISTRIBUTED SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12585715
SYSTEMS AND METHODS FOR INDEPENDENT AUDIT AND ASSESSMENT FRAMEWORK FOR AI SYSTEMS
2y 5m to grant Granted Mar 24, 2026
Patent 12561572
METHOD FOR CALIBRATING PARAMETERS OF HYDROLOGY FORECASTING MODEL BASED ON DEEP REINFORCEMENT LEARNING
2y 5m to grant Granted Feb 24, 2026
Patent 12554774
GRAPH DATA LOADING
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
98%
With Interview (+22.0%)
3y 9m
Median Time to Grant
Low
PTA Risk
Based on 610 resolved cases by this examiner. Grant probability derived from career allow rate.
