Prosecution Insights
Last updated: April 19, 2026
Application No. 18/208,470

GENERATING PARALLEL SYNTHETIC TRAINING DATA FOR A MACHINE LEARNING MODEL TO PREDICT COMPLIANCE WITH RULESETS

Non-Final OA §101
Filed: Jun 12, 2023
Examiner: ANDREI, RADU
Art Unit: 3697
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: FMR LLC
OA Round: 1 (Non-Final)
Grant Probability: 36% (At Risk)
OA Rounds: 1-2
To Grant: 3y 6m
With Interview: 58%

Examiner Intelligence

Career Allow Rate: 36% (201 granted / 564 resolved; -16.4% vs TC avg)
Interview Lift: strong, +21.9% across resolved cases with interview
Typical Timeline: 3y 6m average prosecution; 65 applications currently pending
Career History: 629 total applications across all art units
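The headline figures above can be recomputed from the raw counts. A minimal sketch, assuming the interview lift is defined as the with-interview allowance rate minus the career baseline (the dashboard's exact formula is not stated):

```python
# Recompute the examiner's headline statistics from the raw counts above.
granted = 201
resolved = 564

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # ~35.6%, shown rounded to 36%

# Assumed definition: lift = allowance rate with interview minus career baseline.
with_interview_rate = 0.58
lift = with_interview_rate - career_allow_rate
print(f"Interview lift: {lift:+.1%}")  # in the neighborhood of the +22% shown above
```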

Statute-Specific Performance

§101: 41.9% (+1.9% vs TC avg)
§103: 37.8% (-2.2% vs TC avg)
§102: 2.1% (-37.9% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 564 resolved cases

Office Action

§101
DETAILED ACTION

The present application, filed on 6/12/2023, is being examined under the AIA first-inventor-to-file provisions. The following is a non-final First Office Action on the Merits. Claims 1-26 are pending and have been considered below.

Information Disclosure Statement (IDS)

The information disclosure statement (IDS) submitted on 7/3/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS has been considered by the Examiner.

Claim Rejections - 35 USC § 101

35 USC 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-26 are rejected under 35 USC 101 because the claimed invention is not directed to patent-eligible subject matter. The claimed matter is directed to a judicial exception, i.e., an abstract idea, that is not integrated into a practical application and does not amount to significantly more.

Per Step 1 of the multi-step eligibility analysis, claims 1-13 are directed to a system and claims 14-26 are directed to a computer-implemented method. Thus, on its face, each independent claim and its associated dependent claims are directed to a statutory category of invention.

[INDEPENDENT CLAIMS] Per Step 2A.1. Independent claim 1 (which is representative of independent claim 14) is rejected under 35 USC 101 because it is directed to an abstract idea, a judicial exception, without reciting additional elements that integrate the judicial exception into a practical application.
The limitations of independent claim 1 (which is representative of independent claim 14) recite an abstract idea, shown in bold below:

[A] A computer system for generating parallel synthetic training data for a machine learning model
[B] generate a model training dataset from a baseline dataset comprising a plurality of sentences labeled as noncompliant with one or more rulesets;
[C] train a conditional autoregressive language model using the model training dataset as input to generate synthetic sentences predicted to be noncompliant with the one or more rulesets;
[D] generate a corpus of synthetic sentences using the trained conditional autoregressive language model; for each synthetic sentence in the corpus of synthetic sentences,
[E] execute a compliance classification model using the synthetic sentence as input to generate a label for the synthetic sentence, the label indicating whether the synthetic sentence is compliant or noncompliant with one or more rulesets;
[F] identify a plurality of the synthetic sentences labeled as noncompliant by the compliance classification model that are semantically similar to one or more sentences from the baseline dataset and
[G] generate a first parallel corpus of synthetic training data comprising the identified synthetic sentences; and
[H] execute a language suggestion model using the identified synthetic sentences as input to generate a second parallel corpus of synthetic training data comprising a plurality of synthetic sentences predicted to comply with the one or more rulesets.
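For orientation, the [B]-[H] pipeline can be sketched with toy stand-ins. This is an illustrative sketch only: the generator, classifier, similarity test, and suggestion model below are trivial placeholders invented for this example, not the applicant's claimed models.

```python
import random

# [B] Baseline dataset: sentences labeled noncompliant with a ruleset.
baseline = ["guaranteed returns on this fund", "this product is risk free"]

# Forbidden phrases stand in for the "one or more rulesets".
RULESET = {"guaranteed", "risk free"}

def is_noncompliant(sentence):
    # [E] Toy compliance classification model: flag ruleset phrases.
    return any(term in sentence for term in RULESET)

def generate_synthetic(baseline, n=4):
    # [C]/[D] Toy generator: recombine baseline fragments. A real system would
    # sample from a trained conditional autoregressive language model instead.
    rng = random.Random(0)
    corpus = []
    for _ in range(n):
        a, b = rng.choice(baseline), rng.choice(baseline)
        corpus.append(a.split()[0] + " " + " ".join(b.split()[1:]))
    return corpus

def similar(s, t):
    # [F] Toy semantic similarity: word-overlap (Jaccard) instead of embeddings.
    ws, wt = set(s.split()), set(t.split())
    return len(ws & wt) / len(ws | wt) > 0.3

def suggest_compliant(sentence):
    # [H] Toy language suggestion model: soften the flagged phrases.
    return sentence.replace("guaranteed", "potential").replace("risk free", "lower-risk")

synthetic = generate_synthetic(baseline)
# [E]+[F]: keep synthetic sentences labeled noncompliant AND similar to baseline.
first_corpus = [s for s in synthetic
                if is_noncompliant(s) and any(similar(s, b) for b in baseline)]
# [G]/[H]: parallel corpus of suggested compliant counterparts.
second_corpus = [suggest_compliant(s) for s in first_corpus]
```

The two resulting lists are parallel: each noncompliant sentence in the first corpus has a compliant counterpart at the same index in the second.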
Independent claim 1 (which is representative of independent claim 14) recites: generate a corpus of synthetic sentences and execute a compliance classification ([D], [E]); identify a plurality of non-compliant sentences with the classification model ([F]); and generate a first and a second corpus of synthetic training data, the first corpus comprising training data, the second corpus containing trained sentences predicted to comply with the ruleset ([G], [H]). Based on the claim language and in view of the application disclosure, this represents a process aimed at "generating synthetic training data for machine learning models". This is a combination that, under its broadest reasonable interpretation, covers performance of limitations expressing observation, evaluation, and judgment in the human mind. Nothing in the claim elements precludes the steps from being practically performed in the human mind.

For example, the step "generate a corpus of synthetic sentences and execute a compliance classification", as drafted in the context of this claim, encompasses the user manually or mentally creating a list of sentences and assessing whether they are compliant with the classification rules. Further, the step "identify a plurality of non-compliant sentences with the classification model", as drafted in the context of this claim, encompasses the user manually or mentally identifying non-compliant sentences from the corpus of sentences. Finally, the step "generate a first and a second corpus of synthetic training data, the second corpus containing trained sentences predicted to comply with the ruleset", as drafted in the context of this claim, encompasses the user manually or mentally separating the sentences that are likely to comply with the ruleset.

These limitations fall under the Mental Processes grouping of abstract ideas, i.e., concepts performed in the human mind (see MPEP 2106.04(a)(2)).
Accordingly, it is concluded that independent claim 1 (which is representative of independent claim 14) recites an abstract idea that corresponds to a judicial exception.

[INDEPENDENT CLAIMS – QUALIFIERS] Per Step 2A.2. The identified abstract idea is not integrated into a practical application because the additional elements in the independent claims amount only to instructions to apply the judicial exception on a computer, or to a general link to a technological environment (see MPEP 2106.05(f); MPEP 2106.05(h)). For example, the added element "by a server computing system" recites computing elements at a high level of generality, generally linking the use of a judicial exception to a particular technological environment (see MPEP 2106.05(h)), or merely using a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)). These qualifiers of the independent claims do not preclude the human mind from carrying out the identified abstract idea ("generating synthetic training data for machine learning models") and do not serve to integrate it into a practical application.

[INDEPENDENT CLAIMS – ADDITIONAL STEPS] The additional steps in the independent claims, shown not bolded above, recite: generating a synthetic training dataset ([B]); and training a language model with the synthetic training dataset ([C]). When considered individually, they amount to nothing more than receiving data, processing data, storing results, or transmitting data, which serves merely to implement the abstract idea using computing components for performing computer functions (corresponding to the words "apply it" or an equivalent), or merely uses a computer as a tool to perform the identified abstract idea. Thus, it is concluded that these claim elements do not integrate the identified abstract idea ("generating synthetic training data for machine learning models") into a practical application (see MPEP 2106.05(f)(2)).
Therefore, the additional claim elements of independent claim 1 (which is representative of independent claim 14) do not integrate the identified abstract idea into a practical application, and the claims remain a judicial exception.

Per Step 2B. Independent claim 1 (which is representative of independent claim 14) does not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when the independent claim is reevaluated as a whole, as an ordered combination under the considerations of Step 2B, the outcome is the same as under Step 2A.2. Overall, it is concluded that independent claims 1 and 14 are deemed ineligible.

[DEPENDENT CLAIMS] Dependent claim 2, which is representative of dependent claim 15, recites: filtering out one or more sentences from the baseline dataset using a fluency scoring model. When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: "generating synthetic training data for machine learning models". The elements in this dependent claim are comparable to receiving/transmitting data, processing data, or storing results, which serves merely to implement the abstract idea using computing components for performing computer functions (corresponding to the words "apply it" or an equivalent), or merely uses a computer as a tool to perform the identified abstract idea. Thus, it is concluded that these claim elements do not integrate the identified abstract idea into a practical application (see MPEP 2106.05(f)(2)). The dependent claim elements have the same relationship to the underlying abstract idea as outlined in the independent claims analysis above.
Thus, it is readily apparent that the dependent claim elements are not directed to any specific improvements over the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea. Therefore, dependent claim 2 (which is representative of dependent claim 15) is deemed ineligible.

Dependent claim 4, which is representative of dependent claim 17, recites: converting each sentence in the model training dataset into a contextual embedding; generating a plurality of probability values each corresponding to a predicted next word in the sentence based upon the contextual embedding; and determining a prediction error based upon a comparison of each predicted next word in the sentence to an actual next word in the sentence. When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: "generating synthetic training data for machine learning models". The elements in this dependent claim are comparable to performance of limitations expressing observation, evaluation, and judgment in the human mind. Nothing in the claim elements precludes the steps from being practically performed in the human mind. These limitations fall under the Mental Processes grouping of abstract ideas (see MPEP 2106.04(a)(2)). The dependent claim elements have the same relationship to the underlying abstract idea as outlined in the independent claims analysis above.
Thus, it is readily apparent that the dependent claim elements are not directed to any specific improvements over the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea. Therefore, dependent claim 4 (which is representative of dependent claim 17) is deemed ineligible.

Dependent claim 7, which is representative of dependent claim 20, recites: executing the trained conditional autoregressive language model using one or more configuration parameters to generate the corpus of synthetic sentences. When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: "generating synthetic training data for machine learning models". The elements in this dependent claim are comparable to receiving/transmitting data, processing data, or storing results, which serves merely to implement the abstract idea using computing components for performing computer functions (corresponding to the words "apply it" or an equivalent), or merely uses a computer as a tool to perform the identified abstract idea. Thus, it is concluded that these claim elements do not integrate the identified abstract idea into a practical application (see MPEP 2106.05(f)(2)). The dependent claim elements have the same relationship to the underlying abstract idea as outlined in the independent claims analysis above.
Thus, it is readily apparent that the dependent claim elements are not directed to any specific improvements over the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea. Therefore, dependent claim 7 (which is representative of dependent claim 20) is deemed ineligible.

Dependent claim 11, which is representative of dependent claim 24, recites: comparing the synthetic sentence to one or more sentences from the baseline dataset; determining a cosine similarity between the synthetic sentence and each of the one or more sentences from the baseline dataset; and selecting one of the one or more sentences from the baseline dataset as a semantically similar sentence based upon the cosine similarity. When considered individually, these added claim elements further elaborate on the abstract idea identified in the independent claims, because the dependent claim continues to recite the identified abstract idea: "generating synthetic training data for machine learning models". The elements in this dependent claim are comparable to performance of limitations expressing observation, evaluation, and judgment in the human mind. Nothing in the claim elements precludes the steps from being practically performed in the human mind. These limitations fall under the Mental Processes grouping of abstract ideas (see MPEP 2106.04(a)(2)). The dependent claim elements have the same relationship to the underlying abstract idea as outlined in the independent claims analysis above.
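The cosine-similarity selection recited in dependent claim 11 can be sketched in a few lines, assuming each sentence has already been encoded as an embedding vector. The toy vectors and sentence names below are placeholders, not the application's actual encodings:

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings standing in for real sentence encodings.
synthetic_vec = (0.9, 0.1, 0.0)
baseline_vecs = {
    "baseline sentence A": (1.0, 0.0, 0.0),
    "baseline sentence B": (0.0, 1.0, 0.0),
}

# Select the baseline sentence most similar to the synthetic sentence.
best = max(baseline_vecs, key=lambda k: cosine(synthetic_vec, baseline_vecs[k]))
print(best)  # baseline sentence A
```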
Thus, it is readily apparent that the dependent claim elements are not directed to any specific improvements over the independent claims and do not practically or significantly alter how the identified abstract idea would be performed. When considered as a whole, as an ordered combination, the dependent claim further elaborates on the previously identified abstract idea. Therefore, dependent claim 11 (which is representative of dependent claim 24) is deemed ineligible.

Dependent claims 3, 5-6, 8-10, and 12-13, which are representative of dependent claims 16, 18-19, 21-23, and 25-26, respectively, recite:

wherein the conditional autoregressive language model comprises a multi-layer transformer decoder architecture with a plurality of attention heads;
wherein the server computing device determines the prediction error using a cross entropy loss function;
wherein the server computing device backpropagates the prediction error to adjust one or more weights of the conditional autoregressive language model during training;
wherein the one or more configuration parameters comprise greedy sampling, top-k sampling, top-p sampling, and temperature hyperparameters;
wherein the server computing device removes one or more duplicate sentences from the corpus of synthetic sentences before executing the compliance classification model;
wherein the compliance classification model comprises a Multilingual Autoencoder that Retrieves and Generates (MARGE) model architecture;
wherein the language suggestion model converts one or more of the identified synthetic sentences into a corresponding synthetic sentence predicted to comply with the one or more rulesets; and
wherein the server computing device executes the compliance classification model on each synthetic sentence in the second parallel corpus of synthetic training data to confirm whether the synthetic sentence is compliant or noncompliant with the one or more rulesets.
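The configuration parameters recited above (greedy sampling, top-k, top-p, and temperature) can be illustrated on a toy next-word distribution. This is a sketch under the usual textbook definitions of these decoding strategies, not the applicant's decoder:

```python
import math

# Toy next-word distribution from a language model (word -> logit).
logits = {"the": 2.0, "a": 1.0, "fund": 0.5, "xyzzy": -3.0}

def softmax(logits, temperature=1.0):
    # Temperature hyperparameter: <1 sharpens the distribution, >1 flattens it.
    exps = {w: math.exp(l / temperature) for w, l in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def greedy(probs):
    # Greedy sampling: always take the single most probable next word.
    return max(probs, key=probs.get)

def top_k(probs, k):
    # Top-k sampling: keep only the k most probable words, renormalized.
    kept = dict(sorted(probs.items(), key=lambda kv: -kv[1])[:k])
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

def top_p(probs, p):
    # Top-p (nucleus) sampling: keep the smallest set of words whose
    # cumulative probability reaches p, renormalized.
    kept, mass = {}, 0.0
    for w, pr in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[w] = pr
        mass += pr
        if mass >= p:
            break
    total = sum(kept.values())
    return {w: pr / total for w, pr in kept.items()}

probs = softmax(logits, temperature=1.0)
print(greedy(probs))            # the
print(sorted(top_k(probs, 2)))  # ['a', 'the']
```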
These further elements in the dependent claims do not perform any claimed method steps. They describe the nature, structure, and/or content of other claim elements (the language model; the server; the configuration parameters; the classification model; the language suggestion model) and, as such, cannot change the nature of the identified abstract idea ("generating synthetic training data for machine learning models") from a judicial exception into eligible subject matter, because they do not represent significantly more (see MPEP 2106.07). The nature, form, or structure of the other claim elements does not practically or significantly alter how the identified abstract idea would be performed and does not provide more than a general link to a technological environment. Therefore, dependent claims 3, 5-6, 8-10, and 12-13, which are representative of dependent claims 16, 18-19, 21-23, and 25-26, are deemed ineligible.

When the dependent claims are considered as a whole, as an ordered combination, the claim elements noted above appear to merely apply the abstract concept to a technical environment in a very general sense. The most significant elements, which form the abstract concept, are set forth in the independent claims. The fact that the computing devices and the dependent claims facilitate the abstract concept is not enough to confer statutory subject matter eligibility, since their individual and combined significance does not transform the identified abstract concept at the core of the claimed invention into eligible subject matter. Therefore, it is concluded that the dependent claims of the instant application, considered individually or as a whole, as an ordered combination, do not amount to significantly more (see MPEP 2106.07(a)(II)).

In sum, claims 1-26 are rejected under 35 USC 101 as being directed to non-statutory subject matter.

Examiner Remarks

No art rejection has been applied to the instant set of claims.
The identified most relevant prior art does not disclose the following limitations: identify a plurality of the synthetic sentences labeled as noncompliant by the compliance classification model that are semantically similar to one or more sentences from the baseline dataset and generate a first parallel corpus of synthetic training data comprising the identified synthetic sentences; and execute a language suggestion model using the identified synthetic sentences as input to generate a second parallel corpus of synthetic training data comprising a plurality of synthetic sentences predicted to comply with the one or more rulesets.

The identified most relevant prior art references are listed below. However, none of the most relevant prior art references discloses: repeatedly selecting a random subset; training a predictive model comprising a plurality of random forest machine-learning decision trees; repeatedly and iteratively selecting a random starting factor. Furthermore, the identified most relevant prior art references do not disclose: constructing each decision tree by starting with the random starting factor; applying a predictive model to the generated feature sets to determine a confidence score; executing an action relative to each field report based on the determined confidence score.

The identified pertinent prior art discloses elements of the claimed invention. However, Examiner has determined that it would require impermissible hindsight reasoning for a person of ordinary skill in the art to combine the individual elements disclosed in the prior art in order to achieve Applicant's claimed invention.

The following prior art, made of record but not relied upon, is considered pertinent to applicant's disclosure:

US 20200081982 A1, TU; Zhaopeng et al., TRANSLATION MODEL BASED TRAINING METHOD AND TRANSLATION METHOD, COMPUTER DEVICE, AND STORAGE MEDIUM. A translation model based training method is provided for a computer device.
The method includes inputting a source sentence to a translation model, to obtain a target sentence outputted by the translation model; determining a fidelity of the target sentence to the source sentence; using the target sentence and a reference sentence as input of a discriminator model, using the fidelity as output of the discriminator model, and training the discriminator model on a performance of calculating a similarity between the target sentence and the reference sentence; outputting the similarity by using the discriminator model; and using the source sentence as input of the translation model, using the target sentence as output of the translation model, and using the similarity as a weight coefficient, and training the translation model on a performance of outputting the corresponding target sentence according to the input source sentence.

US 20240207645 A1, Hibbard; Lyndon Stanley, RADIOTHERAPY OPTIMIZATION FOR ARC SEQUENCING AND APERTURE REFINEMENT. Systems and methods are disclosed for generating radiotherapy machine parameters used in a radiotherapy treatment plan, based on machine learning prediction. The systems and methods include: obtaining three-dimensional image data which indicates target dose areas and organs-at-risk areas of a subject; generating anatomy projection images from the image data, each anatomy projection image providing a view from a respective beam angle of the radiotherapy treatment; using a trained neural network model (trained with corresponding pairs of anatomy projection images and control point images) to generate control point images, each control point image indicating an intensity and aperture(s) of a control point of the radiotherapy treatment to apply at a respective beam angle; and generating a set of final control points for use in the radiotherapy treatment to control a radiotherapy treatment machine, based on optimization of the control points indicated by the generated control point images.
US 20210201887 A1, CHEN; Zhijie et al., Method and Apparatus For Training Speech Spectrum Generation Model, and Electronic Device. The present application discloses a method and an apparatus for training a speech spectrum generation model, as well as an electronic device, and relates to the technical field of speech synthesis and deep learning. A specific implementation is as follows: inputting a first text sequence into the speech spectrum generation model to generate an analog spectrum sequence corresponding to the first text sequence, and obtain a first loss value of the analog spectrum sequence according to a preset loss function; inputting the analog spectrum sequence corresponding to the first text sequence into an adversarial loss function model, which is a generative adversarial network model, to obtain a second loss value of the analog spectrum sequence; and training the speech spectrum generation model based on the first loss value and the second loss value.

US 20230253076 A1, Safiulin; Iskander et al., LOCAL STEPS IN LATENT SPACE AND DESCRIPTORS-BASED MOLECULES FILTERING FOR CONDITIONAL MOLECULAR GENERATION. A method of generating molecular structures includes: providing an ABGM; inputting into the ABGM scored molecules having an objective function value; selecting scored molecules with large objective function values; processing the selected scored molecules through an encoder to obtain latent points; selecting a latent point; sampling neighbor latent points that are within a distance from the selected latent point; processing the sampled neighbor latent points with a decoder to generate generated molecules; and providing a report having at least one generated molecule. The scored molecules can have at least one desired property.
The method can include: comparing the generated molecules with selected scored molecules; selecting molecules from the generated molecules that are closest to the selected scored molecules; and providing the selected molecules as candidates for having the at least one property.

US 20210209388 A1, Ciftci; Umur Aybars et al., FAKECATCHER: DETECTION OF SYNTHETIC PORTRAIT VIDEOS USING BIOLOGICAL SIGNALS. Detection of synthetic content in portrait videos, e.g., deep fakes, is achieved. Detectors blindly utilizing deep learning are not effective in catching fake content, as generative models produce realistic results. However, biological signals hidden in portrait videos, which are neither spatially nor temporally preserved in fake content, can be used as implicit descriptors of authenticity. 99.39% accuracy in pairwise separation is achieved. A generalized classifier for fake content is formulated by analyzing signal transformations and corresponding feature sets. Signal maps are generated, and a CNN employed to improve the classifier for detecting synthetic content. Evaluation on several datasets produced superior detection rates against baselines, independent of the source generator, or properties of available fake content. Experiments and evaluations include signals from various facial regions, under image distortions, with varying segment durations, from different generators, against unseen datasets, and under several dimensionality reduction techniques.

US 20220058273 A1, RATHORE; Pradeep et al., METHOD AND SYSTEM FOR DEFENDING UNIVERSAL ADVERSARIAL ATTACKS ON TIME-SERIES DATA. Data is prone to various attacks, such as cyber-security attacks, in any industry. State of the art systems in the domain of data security fail to identify adversarial attacks in real-time, and this leads to security issues, as well as results in the process/system providing unintended results.
The disclosure herein generally relates to data security analysis, and, more particularly, to a method and system for assessing the impact of adversarial attacks on time series data and providing defenses against such attacks. The system performs adversarial attacks on a selected data-driven model to determine the impact of the adversarial attacks on the selected data model, and if the impact is such that performance of the selected data model is less than a threshold, then the selected data model is retrained.

US 20210241099 A1, LI; Dingcheng et al., META COOPERATIVE TRAINING PARADIGMS. Generative adversarial models have several benefits; however, due to mode collapse, these generators face a quality-diversity trade-off (i.e., the generator models sacrifice generation diversity for increased generation quality). Presented herein are embodiments that improve the performance of adversarial content generation by decelerating mode collapse. In one or more embodiments, a cooperative training paradigm is employed where a second model is cooperatively trained with the generator and helps efficiently shape the data distribution of the generator against mode collapse. Moreover, embodiments of a meta learning mechanism may be used, where the cooperative update to the generator serves as a high-level meta task and helps ensure the generator parameters after the adversarial update stay resistant against mode collapse. In experiments, tested employments demonstrated efficient slowdown of mode collapse for the adversarial text generators. Overall, embodiments outperformed the baseline approaches with significant margins in terms of both generation quality and diversity.

US 20230259658 A1, Munoz Delgado; Andres Mauricio et al., DEVICE AND METHOD FOR DETERMINING ADVERSARIAL PATCHES FOR A MACHINE LEARNING SYSTEM. A computer-implemented method for determining an adversarial patch for a machine learning system.
The machine learning system is configured for image analysis and determines an output signal based on an input image. The output signal is determined based on an output of an attention layer of the machine learning system. The adversarial patch is determined by optimizing the adversarial patch with respect to a loss function, wherein the loss function comprises a term that characterizes a sum of attention weights of the attention layer with respect to a position of the adversarial patch in the input image, and the method comprises a step of maximizing the term.

US 12482482 B2, Jin; Zeyu et al., Studio quality audio enhancement. Embodiments are disclosed for converting audio data to studio quality audio data. The method includes obtaining audio data having a first quality for conversion to studio quality audio. A first machine learning model predicts a set of acoustic features. A spectral mask is applied to the audio data during the prediction of the set of acoustic features. A second machine learning model generates studio quality audio from the set of acoustic features and the audio data.

US 11847245 B2, Truong; Anh et al., Privacy preserving data labeling. Systems as described herein may label data to preserve privacy. An annotation server may receive a document comprising a collection of text representing a plurality of confidential data from a first computing device. The annotation server may convert the document to a plurality of text embeddings. The annotation server may input the text embeddings into a machine learning model to generate a plurality of synthetic images, and receive a label for each of the plurality of synthetic images from a third-party labeler. Accordingly, the annotation server may send the confidential data and the corresponding labels to a second computing device.

US 12248796 B2, Xu; Ning et al.,
Modifying digital images utilizing a language guided image editing model. This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that perform language guided digital image editing utilizing a cycle-augmentation generative-adversarial neural network (CAGAN) that is augmented using a cross-modal cyclic mechanism. For example, the disclosed systems generate an editing description network that generates language embeddings which represent image transformations applied between a digital image and a modified digital image. The disclosed systems can further train a GAN to generate modified images by providing an input image and natural language embeddings generated by the editing description network (representing various modifications to the digital image from a ground truth modified image). In some instances, the disclosed systems also utilize an image request attention approach with the GAN to generate images that include adaptive edits in different spatial locations of the image.

US 10592386 B2, Walters; Austin et al., Fully automated machine learning system which generates and optimizes solutions given a dataset and a desired outcome. Automated systems and methods for optimizing a model are disclosed. For example, in an embodiment, a method for optimizing a model may comprise receiving a data input that includes a desired outcome and an input dataset identifier. The method may include retrieving an input dataset based on the identifier and receiving an input model based on the desired outcome. The method may also comprise using a data synthesis model to create a synthetic dataset based on the input dataset and a similarity metric. The method may also comprise debugging the input model using the synthetic dataset to create a debugged model. The method may also comprise selecting an actual dataset based on the input dataset and the desired outcome.
In some aspects, the method may comprise optimizing the debugged model using the actual dataset and storing the optimized model.

Inquiries

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Radu Andrei, whose telephone number is 313-446-4948. The examiner can normally be reached Monday through Friday, 8:30am to 5pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patrick McAtee, can be reached at 571-272-7575. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

As noted in MPEP 502.03, communications via Internet e-mail are at the discretion of the applicant. Without a written authorization by applicant in place, the USPTO will not respond via Internet e-mail to any Internet correspondence which contains information subject to the confidentiality requirement as set forth in 35 U.S.C. 122. A paper copy of such correspondence will be placed in the appropriate patent application. The following is a sample authorization form which may be used by applicant: "Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with me concerning any subject matter of this application by electronic mail. I understand that a copy of these communications will be made of record in the application file."

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Status information for published applications may be obtained from the Patent Center information webpage. Status information for unpublished applications is available to registered users through the Patent Center information webpage only. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center, and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or CANADA) or 571-272-1000.

Any response to this action should be mailed to: Commissioner of Patents and Trademarks, P.O. Box 1450, Alexandria, VA 22313-1450, or faxed to 571-273-8300.

/Radu Andrei/
Primary Examiner, AU 3698

Prosecution Timeline

Jun 12, 2023
Application Filed
Jan 25, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602685
SYSTEMS AND METHODS FOR TOKEN-BASED DEVICE BINDING DURING MERCHANT CHECKOUT
2y 5m to grant Granted Apr 14, 2026
Patent 12579542
SYSTEMS AND METHODS FOR MANAGING CRYPTOCURRENCY
2y 5m to grant Granted Mar 17, 2026
Patent 12579434
TRAINING A NEURAL NETWORK USING AN ACCELERATED GRADIENT WITH SHUFFLING
2y 5m to grant Granted Mar 17, 2026
Patent 12579226
Platform for Digitally Twinning Subjects into AI Agents
2y 5m to grant Granted Mar 17, 2026
Patent 12562927
SECURELY PROCESSING A CONTINGENT ACTION TOKEN
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
36%
Grant Probability
58%
With Interview (+21.9%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 564 resolved cases by this examiner. Grant probability derived from career allow rate.
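The projection figures above can be reproduced from the examiner's career statistics shown on this page. A minimal sketch follows; the raw counts (201 granted of 564 resolved) come from this report, while the additive interview-lift model is an assumption about how the dashboard combines the numbers, and the variable names are illustrative only:

```python
# Reproduce the dashboard's headline projections from career counts.
# Figures are taken from this page; the additive lift model is assumed.
granted = 201    # applications allowed by this examiner
resolved = 564   # total resolved cases (allowed + abandoned)

grant_probability = granted / resolved            # ~0.356, shown as 36%
interview_lift = 0.219                            # +21.9% reported lift
with_interview = grant_probability + interview_lift  # ~0.575, shown as 58%

print(f"{grant_probability:.0%}")  # 36%
print(f"{with_interview:.0%}")     # 58%
```

Under this assumption the two headline percentages round exactly to the values displayed, which suggests the "With Interview" figure is a simple additive adjustment rather than a conditional rate computed only over interviewed cases.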
