Prosecution Insights
Last updated: April 19, 2026
Application No. 17/719,617

SUPER-FEATURES FOR EXPLAINABILITY WITH PERTURBATION-BASED APPROACHES

Non-Final OA — §101, §103
Filed
Apr 13, 2022
Examiner
BEAN, GRIFFIN TANNER
Art Unit
2121
Tech Center
2100 — Computer Architecture & Software
Assignee
Oracle International Corporation
OA Round
3 (Non-Final)
Grant Probability: 21% (At Risk)
OA Rounds: 3-4
To Grant: 4y 4m
With Interview: 50%

Examiner Intelligence

Grants only 21% of cases
Career Allow Rate: 21% (4 granted / 19 resolved; -33.9% vs TC avg)
Strong +28% interview lift
Interview Lift: +28.4% (with vs. without an interview, among resolved cases)
Typical timeline
Avg Prosecution: 4y 4m (45 currently pending)
Career history
Total Applications: 64 (across all art units)

Statute-Specific Performance

§101: 37.7% (-2.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 9.7% (-30.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 19 resolved cases

Office Action

§101, §103
DETAILED ACTION

This Action is responsive to the Claims filed 02/05/2026.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/05/2026 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-2, 7, 9-12, 17, and 19-20 have been amended. Claims 1-3, 6-13, and 16-20 are currently pending.

Response to Arguments

The amendments to Claims 2 and 12 have overcome the objections to informalities. Applicant's arguments, see Pages 8-10, filed 02/05/2026, regarding the prior art rejection(s) of the claims have been fully considered, but they are not persuasive.

Regarding the Applicant's arguments toward the cited references: the cited reference Ghai was brought in to teach the generation of local explanation(s) of the model, not labeled or unlabeled data. Ghai demonstrates that generating such explanations would have been known in the art before the Applicant's effective filing date. The modification of the primary reference Au with such XAI features as taught in Ghai would not render Au unsatisfactory. Au refers to labels only once, in the context of Equation 41 (Page 42), regarding kernel choice, and does not actually utilize said choice of kernel; Au merely recites it for reference. The Examiner fails to see how the Applicant has drawn a connection between the variable Y and labels. Labels are not mentioned in the context of this variable; Y is not used in Equation 9 as alleged; and Y is referenced as a random variable (Page 6) and a target vector (Page 15), but never in the sense that it is, in and of itself, a set of labels.

Per the cited section of Au, the Examiner contends that Claim 2 pertains to accessing data regarding original tuples and super-features. The recitation of an “array” is highly generic, and the Examiner contends that Au's manipulation of permutations of vectors would reasonably read on the use of similar data structures.

The Examiner clarifies that in the Rejection of Claim 8, the claim language “a result of a database statement” (misquoted on page 16 of the Office Action) is being interpreted broadly. Its broadest reasonable interpretation would reasonably be merely data received from a data source in some way, which the Examiner contends Au continues to read on, by virtue of utilizing data from data sources. Retrieving data from a data source, even one manually sorted, reads on the highly generic “a result of a database statement”.

The listing of elements recited in claim 9 is highly general. Whether the data contains a Boolean is entirely dependent on the data being manipulated, as is whether all of the data entries are integers. The Examiner contends the inclusion of a unique identifier would have been reasonable and well known in the art, particularly when manipulating tuples of data from one or more specific medical devices, as cited from Au.
As cited in the Rejection of Claim 10, Au makes no reference to labels being assigned to their specific algorithm's data; therefore, the amended recitation of unlabeled data is treated the same as previously filed, with the rationale used for the previous iteration of Claim 10. See the updated prior art rejections below.

Applicant's arguments, see Pages 11-16, filed 02/05/2026, regarding the 35 U.S.C. 101 rejection of claims 1-3, 6-13, and 16-20 have been fully considered, but they are not persuasive. As presently drafted, the Examiner acknowledges that the “inferring…” and “training…” steps and their placement within the independent claims are essential to the claims as a whole, as the Applicant asserts. This is tangential to the interpretation the Examiner maintains regarding the independent claims.

The “defining…” step is recited highly generally and is practically performed within the human mind or with the aid of pen and paper. The “inferring…” step is recited highly generally and recites no technical structure or implementation differentiating it from being broadly read as “inputting the aforementioned data into a model and the model producing an output” (the Examiner notes here that a “model” also recites no technical structure; a “model” could be a set of equations, for example, and is not necessarily limited to computer implementation). This limitation therefore amounts to instructions to apply the data defined in the “defining…” step, in the absence of any more specifics or detail regarding the model implementation or inferencing implementation.

The “performing…” step, as presently drafted, seems unrelated to the preceding “inferring…” step, as it pertains to the previously defined super-features rather than to the inference made by the model. Within this step, the “randomly selecting…” and “generating…” steps are practically performed within the human mind or with the aid of pen and paper. The second “inferring…” step amounts to instructions to apply the data manipulated in the previous two steps as, similarly, the model, its inferencing, and the data input/output are recited highly generically and without any recitation of specific structure or implementation. A second generic model is introduced in the “training…” step, now instructions to apply the outputs generated as a result of the data manipulated and the inferences made thereon by the first generic model.

The “calculating…” step, again, is practically performed within the human mind or with the aid of pen and paper, and does not recite that the previously generated output of the surrogate model is necessary to the calculation. One could perform such an importance calculation “based on” the output of the surrogate model without using the actual value of the output/results of the trained surrogate model. No technical or specific structure or implementation is recited tying these steps intrinsically to a computing environment or to each other. Finally, the “generating…” step is practically performed within the human mind or with the aid of pen and paper. The “displaying…” limitation, while the only tangible connection to a computing environment, serves as well-understood, routine, or conventional activity, as the limitation is recited highly generally and only serves to recite the display of a result of the execution of the model in the context of the “calculating…” step, again with no specific structure or implementation.
The Examiner contends that, as presently drafted, the proposed improvement to the functioning of a computer is a direct result of the data manipulation steps, applied on generic models, particularly in the “performing…” and “calculating…” steps of the independent claims. The display of the most or least important super-features is a direct result of the “calculating…” step; the “calculating…” step is formulated based on a generic importance value tied to a generic surrogate model; and the generic surrogate model is trained off of generically permuted, manipulated data inferred on by an original generic model. Nowhere in the independent claims is sufficient, specific, technical structure or implementation recited to amount to more than a series of data manipulation steps applied on generic models so that the output of the models may also be manipulated. The Examiner reiterates MPEP 2106.05(a) in that a specific improvement must come from an additional element, rather than from interpretable abstract-idea mental process steps. See the updated 35 U.S.C. 101 rejection below.

Claim Rejections - 35 USC § 101

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-3, 6-13, and 16-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea. See Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019).

Step 1: Claims 1-3 and 6-10 recite a method for defining a plurality of super-features, which falls under the statutory category of a process. Claims 11-13 and 16-20 recite one or more non-transitory computer-readable storage media storing instructions, which falls under the statutory category of a manufacture.

Step 2A – Prong 1: Claim 1 recites an abstract idea, law of nature, or natural phenomenon.
The limitations of “defining a plurality of super-features that each contain a respective disjoint subset of features of a plurality of features;”, “performing for each super-feature of the plurality of super-features: randomly selecting a plurality of permuted values from original values of the super-feature in a plurality of unlabeled original tuples that are based on the plurality of features,”, “generating a plurality of permuted tuples, wherein each permuted tuple of the plurality of permuted tuples is based on said particular tuple and a respective permuted value of the plurality of permuted values,”, “calculating, for each super-feature of the plurality of super-features, an importance of the super-feature based on the surrogate model;”, and “generating, based on said calculating…a local explanation of the ML model that is at least one explanation selected from a group consisting of: a local explanation that indicates a most important super-feature and a local explanation that excludes a super-feature that has an importance that is below a threshold;”, under the broadest reasonable interpretation, cover a mental process including an observation, evaluation, judgment, or opinion that could be performed in the human mind or with the aid of pencil and paper. Defining tuples of features, randomly selecting and generating permutations of those tuples, and generating or selecting the “best” one and one that is below a threshold are practically performed within the human mind or with the aid of pencil and paper.

Step 2A – Prong 2: The additional elements of claim 1 do not integrate the abstract idea into a practical application. The claim recites the additional elements “a method”, “tuple”, and “permuted tuples”, which are recognized as generic computer components recited at a high level of generality. Although they have and execute instructions to perform the abstract idea itself, this also does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to “apply it” (see MPEP 2106.04(d)(2), indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application). The additional elements of “a machine learning model”, “features”, “super-feature”, “inference”, “a surrogate model”, and “an importance” are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (see MPEP 2106.05(h)). The additional elements recited in the limitations “a machine learning (ML) model inferring a particular inference for a particular tuple that is based on the plurality of features;”, “the ML model inferring a respective permuted inference for each permuted tuple of the plurality of permuted tuples;”, “training, based on the permuted inferences, a surrogate model;”, and “wherein the method is performed by one or more computers.” are found to be mere instructions to apply the abstract idea of defining feature tuples and generating permutations of those feature tuples (see MPEP 2106.05(f), indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application). The additional elements recited in the limitation “displaying a local explanation of the ML model that is at least one explanation selected from a group consisting of: a local explanation that indicates a most important super-feature and a local explanation that excludes a super-feature that has an importance that is below a threshold;” merely amount to insignificant post-solution activity (see MPEP 2106.05(g)).
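As a concrete illustration of the “randomly selecting…” and “generating…” steps at issue (a hypothetical example for orientation, not taken from the record): suppose a super-feature G spans features (x1, x2), and, writing each tuple as (x1, x2 | x3), the unlabeled original tuples are

    t1 = (1, 2 | 9),  t2 = (3, 4 | 8),  t3 = (5, 6 | 7)

so the original values of G are {(1, 2), (3, 4), (5, 6)}. Randomly selecting the permuted values (3, 4) and (5, 6) for the particular tuple t1 yields the permuted tuples (3, 4 | 9) and (5, 6 | 9), each based on t1 and one respective permuted value, with the group's columns always moved together.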
Step 2B: The only limitation on the performance of the described method is a limitation reciting “a method”, “tuple”, and “permuted tuples”. These elements are insufficient to transform a judicial exception into a patent-eligible invention because the recited elements are considered insignificant extra-solution activity (a generic computer system and processing resources that link the judicial exception to a particular, respective, technological environment). The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (see MPEP 2106.05(f)). The additional elements of “a machine learning model”, “features”, “super-feature”, “inference”, “a surrogate model”, and “an importance” are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (see MPEP 2106.05(h)). The additional elements recited in the limitations “a machine learning (ML) model inferring a particular inference for a particular tuple that is based on the plurality of features;”, “the ML model inferring a respective permuted inference for each permuted tuple of the plurality of permuted tuples;”, “training, based on the permuted inferences, a surrogate model;”, and “wherein the method is performed by one or more computers.” are found to be mere instructions to apply the abstract idea (see MPEP 2106.05(f), indicating that mere instructions to apply an abstract idea do not recite significantly more). The additional elements recited in the limitation “displaying a local explanation of the ML model that is at least one explanation selected from a group consisting of: a local explanation that indicates a most important super-feature and a local explanation that excludes a super-feature that has an importance that is below a threshold;” merely amount to well-understood, routine, or conventional activity (see MPEP 2106.05(d)(II)(iv), third list).

Taken alone or as an ordered combination, these additional elements do not amount to significantly more than the above-identified abstract idea. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology; their collective functions merely provide conventional computer implementation. For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claim 11. Claim 11 recites limitations similar to those of claim 1, with the exception of “One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause:” (generic computer components).
Dependent Claims: Claim 2 (claim 12) recites additional elements found to be generic computer components and a data-gathering or transmittal step (“accessing the value of a super-feature of an unlabeled original tuple of the plurality of unlabeled original tuples based on…”). This limitation is found to be a mere data-gathering or data-transmittal extra-solution activity step (see MPEP 2106.05(g)) and is acknowledged to be well-understood, routine, conventional activity (see, e.g., the court-recognized WURC examples in MPEP 2106.05(d)(II)). Claim 3 (claim 13) recites refinements to the additional elements of claim 1. Claim 6 (claim 16) recites an abstract-idea mental process step (“…ranking of at least two super-features of the plurality of super-features based on the importances of the at least two super-features.”). Claim 7 (claim 17) recites refinements to the additional elements of claim 1. Claim 8 (claim 18) recites additional elements found to generally link the abstract idea to a particular technological environment or field of use (see MPEP 2106.05(h)). Claim 9 (claim 19) recites an abstract-idea mental process step (“populating…a feature vector…”). Claim 10 (claim 20) recites refinements to the additional elements of claim 1.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 6-13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Au et al. (Grouped Feature Importance and Combined Features Effect Plot, 2021), hereinafter Au, and Ghai et al. (Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers, 2020), hereinafter Ghai.

In regards to claim 1: The present invention claims: “A method comprising: defining a plurality of super-features that each contain a respective disjoint subset of features of a plurality of features;” Au teaches “…we extend this existing definition of permutation importance to groups of features and introduce the GPFI (Grouped Permutation Feature Importance)” (Page 8, Section 2.2). Section 2.2.1, first paragraph, also teaches groups of features.
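For orientation, grouped permutation importance of this kind has a standard form; the display below is a hedged reconstruction from Au's surrounding description, and its notation may differ from Au's actual Equations 8-9. For a feature group (“super-feature”) G, with groups pairwise disjoint, a model f, loss L, and target Y:

    \[
      \mathrm{GPFI}_G \;=\;
        \mathbb{E}\left[ L\bigl(Y,\, f(\tilde{X}_G,\, X_{-G})\bigr) \right]
        \;-\;
        \mathbb{E}\left[ L\bigl(Y,\, f(X_G,\, X_{-G})\bigr) \right]
    \]

where $\tilde{X}_G$ is an independent replication of $X_G$ (in practice, a permutation of the group's data columns) that is independent of both $Y$ and the remaining features $X_{-G}$: a group's importance is the increase in expected loss when its association with the target is broken.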
“a machine learning (ML) model inferring a particular inference for a particular tuple that is based on the plurality of features;” Au teaches “assume that there is an unknown functional relation f : X -> Y. ML algorithms try to learn this functional relationship using n ∈ N i.i.d. observations drawn from the joint space X x Y with unknown probability distribution P.” (Page 6, Section 2.1).

“performing for each super-feature of the plurality of super-features: randomly selecting a plurality of permuted values from original values of the super-feature in a plurality of unlabeled original tuples that are based on the plurality of features,” Au teaches “Consider a p-dimensional feature space X = (X1 x … x Xp) and a one dimensional target space Y. The corresponding random variables, which are generated from these spaces are denoted by X = (X1, …, Xp) and Y.” (Page 6, Section 2.1). Au makes no reference to labeling the feature groups in their algorithm(s).

“generating a plurality of permuted tuples, wherein each permuted tuple of the plurality of permuted tuples is based on said particular tuple and a respective permuted value of the plurality of permuted values,” Au teaches “Here…is a |G|-dimensional random vector of features, which is an independent replication of X-G := (Xj)…. Also this random vector is independent of both the target variable and the random vector of remaining features…” (Page 8, Section 2.2.1).

“and the ML model inferring a respective permuted inference for each permuted tuple of the plurality of permuted tuples;” Au teaches calculating a feature importance (inference) based on the groups (tuples) of features (Page 8, Section 2.2.1, Equation 8).

“training, based on the permuted inferences, a surrogate model;” While a different algorithm, Au teaches generating another model (a surrogate model), learned on the output from a single group (Page 10, Section 2.3.2). Given that Au indicates using such a separate model can be useful when computational costs are high or resources are low (Section 2.3.2, first paragraph), it would have been obvious to one of ordinary skill in the art to incorporate various aspects of the known algorithms taught by Au.

“calculating, for each super-feature of the plurality of super-features, an importance of the super-feature based on the surrogate model;” Au teaches methods of calculating the importance of a feature group directly (Page 8, Section 2.2.1, Equation 9), as well as inferring an importance from all features other than the selected group (Page 9, Section 2.2.2, Equation 10). The algorithm of Section 2.3.2 also teaches inferring a group’s importance (Section 2.3.2, Equation 13).

“wherein the method is performed by one or more computers.” See above for Au’s method, and Section 5, where the method is tested on computing devices.

While Au teaches the CFEP, Au fails to explicitly teach: “and generating, based on said calculating, and displaying a local explanation of the ML model that is at least one explanation selected from a group consisting of: a local explanation that indicates a most important super-feature and a local explanation that excludes a super-feature that has an importance that is below a threshold;” However, Ghai, in a similar field of endeavor of XAI, teaches “Our work leverages local explanations to accompany AL algorithms’ instance queries. Compared to other approaches including example based and rule based explanations [35], Feature importance [35, 72] is the most popular form of local explanations. It justifies the model’s decision for an instance by the instance’s important features indicative of the decision (e.g., “because the patient shows symptoms of sneezing, the model diagnosed him having a cold”). Local feature importance can be generated by different XAI algorithms depending on the underlying model and data. Some algorithms are model-agnostic [62, 72], making them highly desirable and popular techniques. Local importance can be presented to users in different formats [59], such as described in texts [27], or by visualizing the importance values [19, 69].” (Page 6). Ghai teaches “We suspect that, besides algorithmic interest, the reason is that it is much easier for lay people to consider keywords as top features for text classifiers compared to other types of data. For example, one may come up with keywords that are likely indicators for the topic of “baseball”, but it is challenging to rank the importance of attributes in a tabular database of job candidates. One possible solution is to allow people to access the model’s own reasoning with features and then make incremental adjustments. This idea underlies recent research into visual analytical tools that support debugging or feature engineering work [40, 47, 96].” (Page 5). It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to combine the feature importance system of Au with the local explanation knowledge and benefits expressed in Ghai in order to realize benefits to AI result explanation and visualization.
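Read end to end, the claim describes a recognizable perturbation-plus-surrogate explanation pipeline. The following is a minimal, hypothetical sketch of such a pipeline (assuming NumPy and scikit-learn; the grouping, models, and importance measure are illustrative assumptions, not the application's or the cited references' actual implementations):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Unlabeled original tuples (rows) over a plurality of features (columns).
    X = rng.normal(size=(200, 6))

    # A black-box stand-in for the claimed "ML model"; fit on synthetic
    # targets only so that it has something to infer.
    model = RandomForestRegressor(random_state=0)
    model.fit(X, X @ np.array([3.0, 3.0, 0.0, 0.0, 1.0, 1.0]))

    # Super-features: disjoint subsets of feature indices (hypothetical grouping).
    super_features = {"A": [0, 1], "B": [2, 3], "C": [4, 5]}

    def explain_locally(x, n_perm=50, threshold=0.5):
        permuted_tuples = []
        for cols in super_features.values():
            # Randomly select permuted values of this super-feature from the
            # original tuples, keeping the group's columns together.
            rows = rng.choice(len(X), size=n_perm, replace=True)
            block = np.tile(x, (n_perm, 1))          # copies of the particular tuple
            block[:, cols] = X[np.ix_(rows, cols)]   # generate permuted tuples
            permuted_tuples.append(block)
        P = np.vstack(permuted_tuples)
        permuted_inferences = model.predict(P)       # ML model infers per permuted tuple
        # Train a surrogate model on the permuted tuples and permuted inferences.
        surrogate = LinearRegression().fit(P, permuted_inferences)
        # Importance of each super-feature based on the surrogate model: here,
        # the total coefficient magnitude over the group's columns.
        importance = {name: float(np.abs(surrogate.coef_[cols]).sum())
                      for name, cols in super_features.items()}
        most_important = max(importance, key=importance.get)
        kept = {k: v for k, v in importance.items() if v >= threshold}
        return most_important, kept                  # local explanation

    most_important, kept = explain_locally(X[0])
    print("most important super-feature:", most_important)
    print("local explanation (below-threshold groups excluded):", kept)

The coefficient-magnitude importance is only one plausible reading of “an importance … based on the surrogate model”; a distance-weighted surrogate in the style of LIME, or the GPFI loss difference sketched above, would slot into the same skeleton.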
In regards to claim 2: The present invention claims: “accessing the value of a super-feature of an unlabeled original tuple of the plurality of unlabeled original tuples based on an offset into an array that consists of values of the subset of features of the super-feature of the plurality of unlabeled original tuples.” Au teaches “Here, X[j]…is the p dimensional random variable vector of features, where ~Xj is an independent replication of Xj. The random variable ~Xj has the same distribution as Xj, but is independent of all other features and the target variable. In practice, this is done by permuting the values in the data column of the jth feature. The idea behind this method is to break the association between the jth feature and the target variable by permuting the feature values.” (Page 7, Section 2.2. The Examiner interprets this limitation broadly based on the generic use of “offset.” In the absence of more technical detail, the Examiner interprets this merely as a data-structure access that a person of ordinary skill in the art at the time of Au and Ghai’s writing would have been reasonably aware of, and aware of how to iterate through or access a tabular dataset such as a “jth feature”.)
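For the “offset into an array” language, a minimal sketch of the kind of access the Examiner appears to be reading on (a hypothetical column-block layout, assuming NumPy; not a layout Au describes):

    import numpy as np

    # Values of one super-feature (two features wide) for every unlabeled
    # original tuple, stored contiguously in a single flat array.
    n_tuples, width = 4, 2
    super_feature_values = np.arange(n_tuples * width, dtype=float)

    def super_feature_of(tuple_index):
        # An offset into the array locates this tuple's values for the super-feature.
        offset = tuple_index * width
        return super_feature_values[offset : offset + width]

    print(super_feature_of(2))  # -> [4. 5.]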
In regards to claim 3: The present invention claims: “wherein a first super-feature of the plurality of super-features contains more features than a second super-feature of the plurality of super-features.” Au teaches “In this illustrative example, we have two predefined groups of features where the first group contains x1, x2 and x3 and features x4 and x5 belong to the second group.” (Page 23, Section 4.3).

In regards to claim 6: The present invention claims: “wherein the local explanation comprises a ranking of at least two super-features of the plurality of super-features based on the importances of the at least two super-features.” Ghai teaches “For example, one may come up with keywords that are likely indicators for the topic of “baseball”, but it is challenging to rank the importance of attributes in a tabular database of job candidates. One possible solution is to allow people to access the model’s own reasoning with features and then make incremental adjustments” (Page 5; one of ordinary skill in the art combining Au and Ghai would reasonably rank the features based on importance).

In regards to claim 7: The present invention claims: “wherein at least one selected from the group consisting of: said plurality of unlabeled original tuples does not include said particular tuple, the values of a particular super-feature of the plurality of super-features of the plurality of unlabeled original tuples do not contain a value of the particular super-feature in the particular tuple, and the values of the plurality of features in the plurality of unlabeled original tuples do not contain the value of a particular feature of the plurality of features in the particular tuple.” Au teaches “Also this random vector is independent of both the target variable and the random vector of remaining features,” (Page 8, Section 2.2.1, mapping to the particular group not being included in, or being distinct from, the other feature groups).

In regards to claim 8: The present invention claims: “wherein a particular super-feature of the plurality of super-features represents one selected from the group consisting of: a database connection, a database table, query criteria, a result of a database statement, and a kind of database statement.” See above where Au teaches what kinds of data may be used; see also where Au teaches testing their algorithm(s) on a dataset of mobile data (Section 5; the Examiner interprets “a result of a database statement” broadly as merely a dataset resulting from a query. A person of ordinary skill in the art at the time of Au’s writing would be aware of how to obtain such a tabular dataset from a database or other storage medium, regardless of whether Au explicitly teaches such a medium).

In regards to claim 9: The present invention claims: “wherein said training the surrogate model comprises populating at least one selected from the group consisting of: a feature vector that identifies at least one unlabeled original tuple of the plurality of unlabeled original tuples, a feature vector that contains an identifier of the particular tuple, a feature vector that does not contain a Boolean, a feature vector that contains at least one array offset, and a feature vector that contains only integers.” Au teaches “While it may be too limiting to estimate the performance of a model based on one feature only, it can be informative to see how much a group of features (e.g., all measurements from a specific medical device) can reduce the expected loss in contrast to a null model. The Leave-One-Group-In (LOGI) method could be particularly helpful in settings where information on additional groups of measures will inflict significant costs (e.g., adding functional imaging data for a diagnosis)” (Page 10, Section 2.3.2; a person of ordinary skill in the art at the time of Au’s writing would know to include, obtain, or generate a unique identifier when predicting data from unique or specific machines or for unique or specific patients).

In regards to claim 10: The present invention claims: “wherein the ML model is unsupervised.” Au teaches their method with both supervised and unsupervised dimension reduction (Sections 4.1-4.2). The Examiner interprets this claim limitation broadly in the absence of more technical or structural detail.
In regards to claims 11-13 and 16-20: Claims 11-13 and 16-20 recite limitations similar to those of claims 1-3 and 6-10, with the exception of “One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause…”; therefore, both sets of claims are similarly rejected.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN, whose telephone number is (703) 756-1473. The examiner can normally be reached M - F, 7:30 - 4:30.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GRIFFIN TANNER BEAN/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Apr 13, 2022
Application Filed
May 02, 2025
Non-Final Rejection — §101, §103
Jun 12, 2025
Examiner Interview Summary
Jun 12, 2025
Applicant Interview (Telephonic)
Aug 13, 2025
Response Filed
Oct 31, 2025
Final Rejection — §101, §103
Jan 05, 2026
Response after Non-Final Action
Feb 05, 2026
Request for Continued Examination
Feb 16, 2026
Response after Non-Final Action
Feb 24, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12424302
ACCELERATED MOLECULAR DYNAMICS SIMULATION METHOD ON A QUANTUM-CLASSICAL HYBRID COMPUTING SYSTEM
2y 5m to grant — Granted Sep 23, 2025
Patent 12314861
SYSTEMS AND METHODS FOR SEMI-SUPERVISED LEARNING WITH CONTRASTIVE GRAPH REGULARIZATION
2y 5m to grant — Granted May 27, 2025
Patent 12261947
LEARNING SYSTEM, LEARNING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant — Granted Mar 25, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 21%
With Interview: 50% (≈ 21% baseline + 28.4% interview lift)
Median Time to Grant: 4y 4m
PTA Risk: High
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
