Prosecution Insights
Last updated: April 19, 2026
Application No. 17/232,671

DATASET-FREE, APPROXIMATE MARGINAL PERTURBATION-BASED FEATURE ATTRIBUTIONS

Non-Final OA: §101, §103
Filed
Apr 16, 2021
Examiner
SHALU, ZELALEM W
Art Unit
2145
Tech Center
2100 — Computer Architecture & Software
Assignee
Oracle International Corporation
OA Round
3 (Non-Final)
29%
Grant Probability
At Risk
3-4
OA Rounds
3y 2m
To Grant
48%
With Interview

Examiner Intelligence

Grants only 29% of cases
29%
Career Allow Rate
31 granted / 108 resolved
-26.3% vs TC avg
Strong +19% interview lift
+19.0%
Interview Lift
based on resolved cases with an interview
Typical timeline
3y 2m
Avg Prosecution
34 currently pending
Career history
142
Total Applications
across all art units

Statute-Specific Performance

§101
14.3%
-25.7% vs TC avg
§103
63.4%
+23.4% vs TC avg
§102
8.1%
-31.9% vs TC avg
§112
10.8%
-29.2% vs TC avg
Tech Center average estimate shown for comparison • Based on career data from 108 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This action is responsive to the Amendment filed on 12/05/2025. Claims 1-11, 13-21, and 23-24 are pending in the case.

Applicant Response

3. In Applicant’s response dated 12/05/2025, Applicant amended Claims 1-3, 6, 8, 14-17, and 21, cancelled Claims 12 and 22, added new Claims 23 and 24, and argued against all objections and rejections previously set forth in the Office Action dated 09/08/2025.

Information Disclosure Statement

4. As required by MPEP 609(c), the Applicant’s submission of the Information Disclosure Statements filed on 11/04/2025 and 01/08/2026 is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending.

Continued Examination under 37 CFR 1.114

5. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/05/2025 has been entered.

Claim Rejections - 35 USC § 101

6. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

7. Claims 1-11, 13-21, and 23-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: According to the first part of the analysis, in the instant case, Claims 1-11 and 13 are directed to a computer-implemented method, and Claims 14-21, 23, and 24 are directed to non-transitory computer-readable media. Thus, each of the claims falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Regarding Claim 1, at Step 2A, Prong One: does the claim recite a judicial exception?
Claim 1 further recites the steps of: assigning a respective probability distribution to each feature of a plurality of features that include a first feature and a second feature that are not categorical, wherein the first feature and the second feature have different probability distributions (This step relies on assigning probability distributions and statistical analysis which is mathematical concepts grouping of abstract ideas.); for each unlabeled original tuple in one or more original tuples that are based on the plurality of features, unsupervised machine learning (ML) model inferring a respective original inference of one or more original inferences (This step relies on performing model inference which is mathematical operation on collected data and is mathematical concepts grouping of abstract ideas.); for each feature of the plurality of features, for each unlabeled original tuple in the one or more unlabeled original tuples, generating a plurality of perturbed values based on the probability distribution of the feature (This step relies on generating random or sampled value from probability distribution model which is mathematical concept grouping of abstract ideas.); for each feature of the plurality of features, for each unlabeled original tuple in the one or more unlabeled original tuples, generating a plurality of perturbed tuples, wherein each perturbed tuple of the plurality of perturbed tuples is based on the original tuple and a respective perturbed value of the plurality of perturbed values (This step relies on data manipulation using mathematical methods which is mathematical concept grouping of abstract ideas.); for each feature of the plurality of features, for each unlabeled original tuple in the one or more unlabeled original tuples , the unsupervised ML model inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples (This step relies on data manipulation using mathematical methods which is mathematical concept grouping of abstract ideas.); for each feature of the plurality of features, for each unlabeled original tuple in the one or more unlabeled original tuples, measuring a respective difference between each perturbed inference of the plurality of perturbed tuples and the original inference (This step relies on mathematical methods or calculation to compare between values which is mathematical concept grouping of abstract ideas.); for each feature of the plurality of features, calculating a respective importance of the feature based on the differences measured for the feature (This step relies on mathematical methods or calculation to compare between values which is mathematical concept grouping of abstract ideas.); The claim recites a process that can be carried out mentally which falls within the “Mental Processes” groupings of abstract ideas. Accordingly, the claims recite an abstract idea. Step 2A prong 2: Does the claim recite additional elements? Do those additional elements, individually and in combination, integrate the judicial exception into a practical application? 
Further, the claim does not recite any additional element which could integrate this abstract idea into a practical application, because the additional elements recited consist of: one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the recited steps, that is, generic computer components on which to implement the abstract idea (see MPEP 2106.05(f)); measuring a respective difference between each perturbed inference of the plurality of perturbed tuples and the original inference, that is, insignificant extra-solution activity of data analysis (see MPEP 2106.05(g)); an unsupervised machine learning (ML) model inferring a respective original inference of one or more original inferences and the ML model inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples, which is “using a computer or other machinery” as a tool to perform the abstract idea step of generating an output (see MPEP 2106.05(f)); generating and displaying an explanation of the ML model based on the importance of at least one feature of the plurality of features, where generating and displaying information on a generic computer does not convert the abstract idea into a practical application; and wherein the method does not entail training the unsupervised ML model and the method is performed by one or more computers, which is “using a computer or other machinery” as a tool to present information, where the model is not trained and the step is post hoc explanation only, which does not integrate the abstract idea into a practical application and is considered a step of generating an output using a generic computer (see MPEP 2106.05(f)). The additional elements are recited at a high level of generality and do not amount to significantly more than the abstract idea (MPEP 2106.05(f)). Thus, the claim is directed to the abstract idea. The claim does not improve computer functionality, improve the structure of the ML model, or provide a technological improvement. Thus, the abstract idea is not integrated into a practical application.

Step 2B: Do the additional elements, considered individually and in combination, amount to significantly more than the judicial exception?
No. As shown above with respect to integration of the abstract idea into a practical application, the additional element of one or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the recited steps is a generic computer component on which to implement the abstract idea (see MPEP 2106.05(f)); measuring a respective difference between each perturbed inference of the plurality of perturbed tuples and the original inference is insignificant extra-solution activity of data analysis (see MPEP 2106.05(g)); the unsupervised machine learning (ML) model inferring a respective original inference of one or more original inferences and the ML model inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples is “using a computer or other machinery” as a tool to perform the abstract idea step of generating an output (see MPEP 2106.05(f)); generating and displaying an explanation of the ML model based on the importance of at least one feature of the plurality of features is generating and displaying information on a generic computer, which does not convert the abstract idea into a practical application; and the limitations that the method does not entail training the unsupervised ML model and that the method is performed by one or more computers amount to “using a computer or other machinery” as a tool to present information, where the model is not trained and the step is post hoc explanation only, considered a step of generating an output using a generic computer (see MPEP 2106.05(f)). The additional elements, alone and in combination, fail to integrate the abstract idea into a practical application. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept, and neither can insignificant extra-solution activity. All of these additional elements as generically claimed are thus considered well-understood, routine, and conventional. Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, these independent claims are not patent eligible.
The dependent claims respectively recite a judicial exception in limitations of: “generating at least one value of the plurality of perturbed values of a particular feature that does not occur for the particular feature in said unlabeled original tuples” (Claim 2); “wherein the explanation comprises at least one selected from the group consisting of: a global explanation that is based on the one or more unlabeled original tuples, wherein the one or more unlabeled original tuples are at least two original tuples, a local explanation that is based on the one or more unlabeled original tuples, wherein the one or more unlabeled original tuples is a particular tuple, a ranking of at least two features of the plurality of features based on the importance of the at least two features” (Claims 3, 14); “wherein said assigning probability distributions is based on a plurality of original tuples that does not include said particular tuple” (Claim 4); “wherein said generating said local explanation occurs without access to said plurality of original tuples” (Claims 5, 18); “wherein said assigning probability distributions is based on the one or more unlabeled original tuples” (Claims 6, 19); “wherein said assigning the probability distribution to each feature of a plurality of features comprises selecting the probability distribution from a plurality of probability distributions” (Claims 7, 20); “wherein said selecting the probability distribution comprises measuring fitness of each probability distribution of the plurality of probability distributions to the one or more original tuples” (Claim 8); “wherein said selecting the probability distribution comprises at least one selected from the group consisting of: measuring Kolmogorov-Smirnov fitness based on the one or more original tuples, and selecting a default probability distribution when a threshold exceeds all said fitnesses of the plurality of probability distributions” (Claim 9); “wherein the default probability distribution is a uniform probability distribution” (Claim 10); “wherein said measuring the respective difference comprises measuring a respective difference between a respective loss of each perturbed inference of the plurality of perturbed tuples and a loss of the original inference” (Claim 11); “further comprising two execution contexts concurrently performing said generating two respective perturbed tuples of the plurality of perturbed tuples” (Claim 13); “wherein said selecting the probability distribution comprises selecting a default probability distribution when a threshold exceeds all said fitnesses of the plurality of probability distributions” (Claim 21); “wherein said selecting the probability distribution comprises measuring fitness of each probability distribution of the plurality of probability distributions to the one or more unlabeled original tuples” (Claim 23); and “wherein the default probability distribution is a uniform probability distribution” (Claim 24). These additional limitations (in Claims 2-11, 13, 15-21, and 23-24) also constitute concepts performed in the human mind, which fall within the “Mental Processes” grouping of abstract ideas. This judicial exception is not integrated into a practical application. The additional elements of a computer-readable medium comprising computer program code (in Claims 2-11, 13, 15-21, and 23-24) all amount to no more than adding insignificant extra-solution activity/specifications related to data gathering, data input, or data transmittal.
These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a non-transitory computer-readable medium comprising computer program code are again insignificant extra-solution activity steps that cannot provide an inventive concept. All of these additional elements as generically claimed are considered well-understood, routine, and conventional. Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, all of the dependent claims are also not patent eligible.

Examiner Comments

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11, 13-21, and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Mathew (Pat. No. US 9524469 B1, Pat. Date: 2016-12-20) in view of Givental (Pub. No. US 20210281592 A1, Pub. Date: 2021-09-09), in further view of Nourian (Pub. No. US A1, Pub. Date: 2021-02-18).

Regarding independent Claim 1, Mathew teaches a method comprising: the method is performed by one or more computers (see Mathew: Fig. 15, Col.19, Line 12-15, “an example computer system 1500 in which embodiments of the present invention, or portions thereof, may be implemented as computer-readable code.”); assigning a respective probability distribution to each feature of a plurality of features that include a first feature and a second feature that are not categorical (see Mathew: Fig. 35A, Col.32, Line 36-48, “for each first feature among the plurality of first features, a respective first probability distribution indicating, for each respective second feature among a plurality of second features, a probability that a person having the respective second feature has the respective first feature, is determined, thereby generating a plurality of first probability distributions.”), wherein the first feature and the second feature have different probability distributions (see Mathew: Fig.
35A, Col.34, Line 18-21, “a probabilistic classifier is used to generate a merged probability distribution based on the plurality of first probability distributions and the second probability distribution.”) As shown above, Matthew teaches a feature level probabilistic modeling for machine learning systems and a feature level perturbation that assign and generate probability distributions for individual features in a dataset. Matthew further discloses processing unlabeled data and generating outputs based on learned feature interactions. Mathew does not teach the remaining limitations. However, Givental teaches the computer system wherein: for each unlabeled original tuple in one or more unlabeled original tuples that are based on the plurality of features (see Givental: Fig.1, [0059], “the labeled log data 150 and unlabeled data from the data cleaning and feature engineering engine 110, i.e. the input data to the ensemble 120. ”), an unsupervised machine learning (ML) model inferring a respective original inference of one or more original inferences (see Givental: Fig.1, [0053], “the outputs of the various unsupervised machine learning models 122-128 may be output to a dynamic weights generator 130 which applies weights 132-136 to the outputs prior to combining the results of the unsupervised machine learning models 122-128 to generate the anomaly score 140 (i.e. inference)”) for each feature of the plurality of features, for each unlabeled original tuple in the one or more unlabeled original tuples (see Givental: Fig.1, [0045], “The data cleaning and feature engineering engine 110 parses the log data and converts the log data into a commonly structured data frame. Feature engineering is performed on the converted log data to drop useless features, e.g., implementing a “dropna” algorithm or the like, and extract new features to supplement the existing feature set present in the converted log data so as to represent the individual event logs.”), Because both Matthew and Givental are in the same/similar field of endeavor of machine learning model and feature analysis, accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify Matthew’s machine learning system to include Givental’s unsupervised machine learning model training and feature importance framework that interpret model behavior when trained on unlabeled data. One would have been motivated to make such a combination in order to improve interpretability of ML model without requiring modification of the underlying ML architecture. As shown above, Matthew teaches a feature level probabilistic modeling for machine learning systems and a feature level perturbation that assign and generate probability distributions for individual features in a dataset. 
Givental teaches training an unsupervised machine learning model using unlabeled data and determining feature importance within unsupervised machine learning model to generate explanation of the model based on the importance of the features.(see Fig.2, [0052]-[0058]) Matthew and Givental does not teach the method wherein: for each feature of the plurality of features, for each unlabeled original tuple in the one or more unlabeled original tuples : generating a plurality of perturbed values based on the probability distribution of the feature, generating a plurality of perturbed tuples, wherein each perturbed tuple of the plurality of perturbed tuples is based on the unlabeled original tuple and a respective perturbed value of the plurality of perturbed values, the unsupervised ML model inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples, and measuring a respective difference between each perturbed inference of the plurality of perturbed tuples and the original inference; for each feature of the plurality of features, calculating a respective importance of the feature based on the differences measured for the feature; and generating and displaying an explanation of the unsupervised ML model based on the importance of at least one feature of the plurality of features; wherein the method does not entail training the unsupervised ML model. However, Nourian teaches the method comprising: for each feature of the plurality of features, for each unlabeled original tuple in the one or more unlabeled original tuples (see Nourian: Fig.14, [0061], “a local explanation for a model, which in addition to explaining the model the interface also provides explanations for prediction instances. The prediction instances may include instance level explanations, such that given a single prediction instance, sensitivity analysis, feature contribution analysis and correction search may be performed.”) generating a plurality of perturbed values based on the probability distribution of the feature (see Nourian: Fig.15, [0062], “a sensitivity analysis may be configured by perturbing one input variable at a single prediction instance, while keeping all other input variables constant. As shown, sensitivity curves may be represented in a chart format for an instance given either a tree ensemble model (the top figure), or a neural network model (the bottom figure” generating a plurality of perturbed tuples, wherein each perturbed tuple of the plurality of perturbed tuples is based on the unlabeled original tuple and a respective perturbed value of the plurality of perturbed values (see Nourian: Fig.15, [0015], “depending on implementation, a sensitivity analysis may be configured by perturbing one input variable at a single prediction instance, while keeping all other input variables constant. As shown, sensitivity curves may be represented in a chart format for an instance given either a tree ensemble model (the top figure), or a neural network model (the bottom figure).”, the unsupervised ML model inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples (see Nourian: Fig.1, [0074], “model-independent approaches may not be specific for a model or process family and may be used to assess a black-box model (i.e., a model in which the internal functionality or implementations of the model is unknown or obfuscated). 
For example, the permutation importance method may be used to randomly permute a feature to determine how the model performs in the presence of perturbed data. This approach may be implemented based on brute force, and thereby may involve a substantial level of resources for complex models. However, such an approach may perform better than model-dependent counterparts for specific methods, such as the Gini index in Random Forest”), and measuring a respective difference between each perturbed inference of the plurality of perturbed tuples and the original inference (see Nourian: Fig.15, “instances of prediction changes for one or more variables in a model may be determined. Such changes may also be better understood in terms of the sensitivity of the model to the features. In an example sensitivity analysis, a feature for a record may be arbitrarily varied across the feature's natural domain. Accordingly, depending on implementation, a sensitivity analysis may be configured by perturbing one input variable at a single prediction instance, while keeping all other input variables constant. As shown, sensitivity curves may be represented in a chart format for an instance given either a tree ensemble model (the top figure), or a neural network model (the bottom figure).”); for each feature of the plurality of features, calculating a respective importance of the feature based on the differences measured for the feature (see Nourian: Fig.6, [0045], “Important features corresponding to predicted outcomes of interest and degrees of importance may be illustrated. Algorithms or processes that determine feature importance may vary based on the model type. Importance by weight, gain, cover, and permutation may be provided for gradient boosted trees and random forests, for example.”)… (see Nourian: Fig.8, [0073], “a model dependent or model independent approach depending on implementation, where a plurality of methods may be utilized for explaining a model. One method may include calculating feature importance categorized at a high level as either model-dependent or model-independent. A model-dependent approach may take into account the unique properties of a given machine learning system (e.g., a feature's importance) when explaining the model.”); and generating and displaying an explanation of the unsupervised ML model based on the importance of at least one feature of the plurality of features (see Nourian: Fig.2B, [0026], “provided explanations may be generated in the form of visually displayable indicators such as diagrams, charts, textual definitions or code to allow for a better understanding of the components of a target model. Moreover, the manner certain components correspond to each other or how certain components are associated with the results generated by the model may be also analyzed and disclosed. For example, features that are most important to the model's functionality or factors that most contribute to certain interesting outcomes generated by the model may be determined and graphically displayed.”); wherein the method does not entail training the unsupervised ML model (see Nourian: Fig.2B, [0025], “a ML model may be loaded into a computing environment.”, i.e., the ML model explanation method operates on already-trained models). As shown above, Givental teaches an unsupervised learning system that receives unlabeled data tuples and generates inferences (anomaly scores) and applies unlabeled data to generate original inferences. Nourian teaches feature and tuple perturbation to explain inferences in the model.
Nourian discloses an explainability technique that operates on feature-based tuples and determines the effect of varying each feature on the model output. Because Matthew, Givental, and Nourian are in the same/similar field of endeavor of machine learning models and feature analysis, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Matthew to include a system that generates and displays an explanation of the unsupervised ML model based on the importance of at least one feature of the plurality of features by calculating a respective importance of the feature based on the differences measured for the feature, as taught by Nourian. One would have been motivated to make such a combination in order to provide machine learning models predictive advantages to enhance the functionality of a system or a computing model when complex relationships or constraints are at play and to increase explanation accuracy. (see Nourian [0004]) Regarding Claim 2, Matthew, Givental, and Nourian teaches all the limitations of Claim 1. Nourian further teaches the method further comprising: at least one selected from the group consisting of generating at least one value of the perturbed values of a particular feature that does not occur for the particular feature in said plurality of unlabeled original tuples (see Nourian: Fig.6, [0044], “feature importance may be listed by permutation, which does not take into account a particular model or algorithm type when calculating the feature importance. In another embodiment, feature importance may be listed by gain, which takes into account specific structure of the analyzed model (e.g., a decision tree) and how much information was lost or gained when following a certain path. Such a tool may be equipped with both model agnostic and model specific techniques to provide a comprehensive view of a feature's importance.”) It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Matthew to include generating an explanation of the ML model based on the importance of at least one feature of the plurality of features as taught by Nourian. One would have been motivated to make such a combination in order to provide machine learning models predictive advantages to enhance the functionality of a system or a computing model when complex relationships or constraints are at play. (see Nourian [0004]) Regarding Claim 3, Matthew, Givental, and Nourian teaches all the limitations of Claim 1. Nourian further teaches the method wherein the explanation comprises at least one selected from the group consisting of a global explanation that is based on the one or more unlabeled original tuples, wherein the one or more original tuples are at least two original tuples, a local explanation that is based on the one or more unlabeled original tuples, wherein the one or more unlabeled original tuples is a particular tuple, a ranking of at least two features of the plurality of features based on the importance of the at least two features (see Nourian: Fig. 7, [0046], “average importance rankings for different features in the target model may be listed across one or more importance measures.”) See the motivation to combine Matthew, Givental, and Nourian above. Regarding Claim 4, Matthew, Givental, and Nourian teaches all the limitations of Claim 3.
Matthew further teaches the method further comprising wherein said assigning probability distributions is based on a plurality of original tuples that does not include said particular tuple (see Mathew: Fig. 35A, Col.32, Line 36-48, “for each first feature among the plurality of first features, a respective first probability distribution indicating, for each respective second feature among a plurality of second features, a probability that a person having the respective second feature has the respective first feature, is determined, thereby generating a plurality of first probability distributions.”) Regarding Claim 5, Matthew, Givental, and Nourian teaches all the limitations of Claim 1. Nourian further teaches the method further comprising wherein said generating said local explanation occurs without access to said plurality of original tuples (see Nourian: Fig. 1, [0010], “a first threshold may be determined and the local explanation provides an understanding of how possible changes to the instance's feature values adjust or shift an expected result or projected outcome beyond the first threshold. In response to understanding how the machine learning model behaves in the first instance, the machine learning model may be tuned to select outcomes that best suit an expected result in a first set of instances.”) It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Matthew to generating said local explanation occurs without access to said plurality of original tuples as taught by Nourian. One would have been motivated to make such a combination in order to provide machine learning models predictive advantages to enhance the functionality of a system or a computing model when complex relationships or constraints are at play. (see Nourian [0004]) Regarding Claim 6, Matthew, Givental, and Nourian teaches all the limitations of Claim 1. Matthew further teaches the method further comprising wherein said assigning probability distributions is based on the one or more unlabeled original tuples (see Mathew: Fig. 35A, Col.32, Line 36-48, “for each first feature among the plurality of first features, a respective first probability distribution indicating, for each respective second feature among a plurality of second features, a probability that a person having the respective second feature has the respective first feature, is determined, thereby generating a plurality of first probability distributions.”) Regarding Claim 7, Matthew, Givental, and Nourian teaches all the limitations of Claim 6. Matthew further teaches the method further comprising wherein said assigning the probability distribution to each feature of a plurality of features comprises selecting the probability distribution from a plurality of probability distributions (see Mathew: Fig. 35A, Col.32, Line 36-48, “for each first feature among the plurality of first features, a respective first probability distribution indicating, for each respective second feature among a plurality of second features, a probability that a person having the respective second feature has the respective first feature, is determined, thereby generating a plurality of first probability distributions.”) Regarding Claim 8, Matthew, Givental, and Nourian teaches all the limitations of Claim 7. 
Nourian further teaches the method further comprising wherein said selecting the probability distribution comprises measuring fitness of each probability distribution of the plurality of probability distributions to the one or more unlabeled original tuples (see Nourian: Fig. 5, [0034], “a receiver operating characteristic (ROC) curve, an area under the curve (AUC), a confusion matrix, or Kolmogorov-Smirnov test results. Kolmogorov-Smirnov test results provide a nonparametric test of the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution, or to compare two samples.”) See motivation to combine Matthew, Givental, and Nourian claim 1 above. Regarding Claim 9, Matthew, Givental, and Nourian teaches all the limitations of Claim 8. Nourian further teaches the method further comprising wherein said selecting the probability distribution comprises at least one selected from the group consisting of: measuring Kolmogorov-Smirnov fitness based on the one or more original tuples, and selecting a default probability distribution when a threshold exceeds all said fitness’s of the plurality of probability distributions (see Nourian: Fig. 5, [0034], “illustrates examples of visual charts that may be generated to help a user better understand a model's behavior. The charts may provide a receiver operating characteristic (ROC) curve, an area under the curve (AUC), a confusion matrix, or Kolmogorov-Smirnov test results. Kolmogorov-Smirnov test results provide a nonparametric test of the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution, or to compare two sample.’) It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Matthew to measuring Kolmogorov-Smirnov fitness based on the one or more original tuples, and selecting a default probability distribution when a threshold exceeds all said fitness’s of the plurality of probability distributions as taught by Nourian. One would have been motivated to make such a combination in order to provide machine learning models predictive advantages to enhance the functionality of a system or a computing model when complex relationships or constraints are at play. (see Nourian [0004]) Regarding Claim 10, Matthew, Givental, and Nourian teaches all the limitations of Claim 9. Matthew further teaches the method further comprising wherein the default probability distribution is a uniform probability distribution (see Mathew: Fig. 35A, Col.32, Line 36-48, “for each first feature among the plurality of first features, a respective first probability distribution indicating, for each respective second feature among a plurality of second features, a probability that a person having the respective second feature has the respective first feature, is determined, thereby generating a plurality of first probability distributions.”). Regarding Claim 11, Matthew, Givental, and Nourian teaches all the limitations of Claim 1. Nourian further teaches the method further comprising wherein said measuring the respective difference comprises measuring a respective difference between a respective loss of each perturbed inference of the plurality of perturbed tuples and a loss of the original inference (see Nourian: Fig. 
1, [0022], “learning software 112 may be trained based on certain incentives or disincentives (e.g., a calculated loss function) to adjust the manner in which the provided input is classified. The adjustment may be implemented by way of updating weights and biases over and over again. Through multiple iterations and adjustments, the internal state of learning software 112 may be continually updated to a point where a satisfactory predictive state is reached (i.e., until learning software 112 starts to more accurately classify the training data).”) See the motivation to combine Matthew, Givental, and Nourian in Claim 1 above. Regarding Claim 12, Matthew, Givental, and Nourian teaches all the limitations of Claim 1. Nourian further teaches the method further comprising wherein at least one selected from the group consisting of: the ML model is unsupervised, and the one or more original tuples are unlabeled (see Nourian: Fig. 2B, [0025], “ML model may be loaded into a computing environment and training data for the model may be imported (S310). In one aspect, relationships between the model's features and one or more constraints and values used to define the model may be analyzed and initial indicators of the model's efficacy may be displayed (S320). As shown in FIGS. 3 through 21, depending on user instructions received (S330), analysis results may be used to provide an explanation of the model's behavior and functionality globally (S340) or desirably across selected local features or instances (S350), or both. The explanations may be generated during the training of the model as well as when the model is deployed. In some implementations, a certificate of explainability may be also generated (S360).”) It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Matthew such that the ML model is unsupervised and the one or more original tuples are unlabeled, as taught by Nourian. One would have been motivated to make such a combination in order to provide machine learning models predictive advantages to enhance the functionality of a system or a computing model when complex relationships or constraints are at play. (see Nourian [0004]) Regarding Claim 13, Matthew, Givental, and Nourian teaches all the limitations of Claim 1. Nourian further teaches the method further comprising two execution contexts concurrently performing said generating two respective perturbed tuples of the plurality of perturbed tuples (see Nourian: Fig.6, [0045], “Important features corresponding to predicted outcomes of interest and degrees of importance may be illustrated. Algorithms or processes that determine feature importance may vary based on the model type. Importance by weight, gain, cover, and permutation may be provided for gradient boosted trees and random forests, for example.”)… (see Nourian: Fig.8, [0073], “a model dependent or model independent approach depending on implementation, where a plurality of methods may be utilized for explaining a model. One method may include calculating feature importance categorized at a high level as either model-dependent or model-independent. A model-dependent approach may take into account the unique properties of a given machine learning system (e.g., a feature's importance) when explaining the model.”). See the motivation to combine Matthew, Givental, and Nourian in Claim 1 above.
Regarding independent Claim 14, Claim 14 is directed to non-transitory computer-readable media and has similar limitations as Claim 1 and is rejected with the same rationale. Regarding Claim 15, Claim 15 is directed to a non-transitory computer-readable media claim and has similar/same claim limitations as Claim 2 and is rejected under the same rationale. Regarding Claim 16, Claim 16 is directed to a non-transitory computer-readable media claim and has similar/same claim limitations as Claim 3 and is rejected under the same rationale. Regarding Claim 17, Claim 17 is directed to a non-transitory computer-readable media claim and has similar/same claim limitations as Claim 6 and is rejected under the same rationale. Regarding Claim 18, Claim 18 is directed to a non-transitory computer-readable media claim and has similar/same claim limitations as Claim 7 and is rejected under the same rationale. Regarding Claim 19, Claim 19 is directed to a non-transitory computer-readable media claim and has similar/same claim limitations as Claim 11 and is rejected under the same rationale. Regarding Claim 20, Claim 20 is directed to a non-transitory computer-readable media claim and has similar/same claim limitations as Claim 13 and is rejected under the same rationale. Regarding Claim 21, Matthew, Givental, and Nourian teaches all the limitations of claim 23. Nourian further teaches the method further comprising wherein: said selecting the probability distribution comprises selecting a default probability distribution when a threshold exceeds all said fitnesses of the plurality of probability distributions (see Nourian: Fig.20, [0069], “a graphical interface may be provided for the minimum required features subset found for a local prediction instance. The visualization together with the textual explanations show the important features to keep in the subset to achieve the same desired decision threshold even when other features are masked. The height of each bar indicates the prediction outcome when all features to the right of this bar are masked while keeping the features to the left of this bar unchanged.”) It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Matthew such that said selecting the probability distribution comprises selecting a default probability distribution when a threshold exceeds all said fitnesses of the plurality of probability distributions, as taught by Nourian. One would have been motivated to make such a combination in order to provide machine learning models predictive advantages to enhance the functionality of a system or a computing model when complex relationships or constraints are at play. (see Nourian [0004]) Regarding Claim 23, Matthew, Givental, and Nourian teaches all the limitations of claim 18. Matthew further teaches the method further comprising wherein: said selecting the probability distribution comprises measuring fitness of each probability distribution of the plurality of probability distributions to the one or more unlabeled original tuples (see Mathew: Fig.
35A, Col.32, Line 36-48, “for each first feature among the plurality of first features, a respective first probability distribution indicating, for each respective second feature among a plurality of second features, a probability that a person having the respective second feature has the respective first feature, is determined, thereby generating a plurality of first probability distributions.”) Regarding Claim 24, Matthew, Givental, and Nourian teaches all the limitations of claim 21. Matthew further teaches the method wherein the default probability distribution is a uniform probability distribution (see Mathew: Fig. 35A, Col.32, Line 36-48, “for each first feature among the plurality of first features, a respective first probability distribution indicating, for each respective second feature among a plurality of second features, a probability that a person having the respective second feature has the respective first feature, is determined, thereby generating a plurality of first probability distributions.”)

Response to Arguments

Claim Rejections - 35 U.S.C. § 101: The 35 U.S.C. 101 rejection for being directed to non-statutory subject matter has been updated based on Applicant's amendments. The Examiner notes that the ML model is used as a tool to perform mathematical analysis and that the computer performs routine numerical operations and data manipulation. The amendment to the claims reciting “wherein each sample is a fabric sample and each sample score is a fabric sample score” is merely a field-of-use limitation, and the amended limitation does not provide any machine control or transformation of fabric or any improvement to computer functionality. Therefore, the 35 U.S.C. 101 rejection has been sustained.

Claim Rejections - 35 U.S.C. § 103: Applicant's arguments with respect to the claim amendments have been considered but are moot in view of the new combination of references used in the current rejection. The new combination of references was necessitated by Applicant's claim amendments. Therefore, the claims are rejected under the new combination of references as indicated above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20190122135 A1 to Parker, Charles, titled PREDICTION CHARACTERIZATION FOR BLACK BOX MACHINE LEARNING MODELS: The present disclosure pertains to data processing, and in particular to systems and methods for characterizing black box machine learning models.
US 20190197411 A1 to Di, Wei, titled CHARACTERIZING MODEL PERFORMANCE USING GLOBAL AND LOCAL FEATURE CONTRIBUTIONS: The disclosed embodiments relate to statistical model performance. More specifically, the disclosed embodiments relate to techniques for performing hybrid characterization of model performance using global and local feature contributions.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZELALEM W SHALU whose telephone number is (571) 272-3003. The examiner can normally be reached M-F, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached on (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Zelalem Shalu/Examiner, Art Unit 2145 /CESAR B PAULA/Supervisory Patent Examiner, Art Unit 2145
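The Step 2A analysis above characterizes Claim 1 as a loop of perturbation steps: assign a probability distribution to each non-categorical feature, infer on each unlabeled original tuple, replace one feature at a time with values drawn from its distribution, re-infer, measure the differences from the original inference, and aggregate those differences into per-feature importances. A minimal sketch of a loop with that shape is shown below. It is an illustration only, not the applicant's implementation: the helper names (attribute_features, score_fn), the per-feature normal fits, and the use of NumPy/SciPy are assumptions made for readability.

```python
# Illustrative sketch only (not the applicant's implementation): a generic
# perturbation-based feature attribution loop of the shape recited in Claim 1.
import numpy as np
from scipy import stats

def attribute_features(score_fn, X, n_perturbations=30, seed=0):
    """score_fn: scoring callable of an already-trained model, (n, d) array -> (n,) inferences.
    X: (n, d) array of unlabeled, non-categorical original tuples."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n, d = X.shape

    # Assign a probability distribution to each feature (here a normal fit per feature;
    # the claims allow different distributions for different features).
    dists = [stats.norm(loc=X[:, j].mean(), scale=X[:, j].std() + 1e-12) for j in range(d)]

    # Original inference for every unlabeled original tuple (no training occurs here).
    original = score_fn(X)

    importances = np.zeros(d)
    for j in range(d):
        diffs = []
        for i in range(n):
            # Generate perturbed values from feature j's assigned distribution.
            vals = dists[j].rvs(size=n_perturbations, random_state=rng)
            # Build perturbed tuples: the original tuple with only feature j replaced.
            perturbed = np.tile(X[i], (n_perturbations, 1))
            perturbed[:, j] = vals
            # Infer on the perturbed tuples with the same model.
            perturbed_scores = score_fn(perturbed)
            # Measure the difference between each perturbed inference and the original.
            diffs.append(np.abs(perturbed_scores - original[i]).mean())
        # Calculate the feature's importance from the differences measured for it.
        importances[j] = float(np.mean(diffs))
    return importances
```

As one possible usage, score_fn could be the score_samples method of an already-fitted scikit-learn IsolationForest; nothing in the loop trains the model, which mirrors the "does not entail training" limitation, and ranking the returned importances gives the kind of global explanation recited in Claims 3 and 14.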
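Dependent Claims 7-10, 21, 23, and 24 add the distribution-selection detail discussed in the §103 analysis above: the fitness of each candidate distribution to the original tuples is measured (Kolmogorov-Smirnov fitness in Claim 9), and a uniform default is selected when a threshold exceeds all of the measured fitnesses. A rough sketch of that selection rule follows, assuming "fitness" is a KS p-value; the candidate families, threshold value, and the select_distribution helper are illustrative assumptions, not the claimed implementation.

```python
# Illustrative sketch only (not the applicant's implementation): choose a
# per-feature distribution by Kolmogorov-Smirnov fitness, defaulting to a
# uniform distribution when the threshold exceeds every candidate's fitness.
import numpy as np
from scipy import stats

def select_distribution(values, threshold=0.05):
    """Return (name, frozen scipy distribution) for one feature's observed values."""
    values = np.asarray(values, dtype=float)
    # Candidate families are an assumption; other parametric families could be tried.
    candidates = {"normal": stats.norm, "laplace": stats.laplace, "exponential": stats.expon}
    best_name, best_dist, best_fitness = None, None, -np.inf
    for name, family in candidates.items():
        params = family.fit(values)          # fit the candidate to the feature's values
        frozen = family(*params)
        # Use the KS test p-value as the candidate's "fitness" to the data.
        _, p_value = stats.kstest(values, frozen.cdf)
        if p_value > best_fitness:
            best_name, best_dist, best_fitness = name, frozen, p_value
    if threshold > best_fitness:
        # The threshold exceeds all measured fitnesses: fall back to the uniform
        # default over the feature's observed range.
        return "uniform", stats.uniform(loc=values.min(), scale=np.ptp(values))
    return best_name, best_dist
```

Under this reading, "a threshold exceeds all said fitnesses" simply means no candidate met the cutoff, so the feature falls back to a uniform distribution over its observed range, which matches the uniform default of Claims 10 and 24.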

Prosecution Timeline

Apr 16, 2021
Application Filed
Feb 02, 2022
Response after Non-Final Action
Apr 29, 2025
Non-Final Rejection — §101, §103
Jun 04, 2025
Applicant Interview (Telephonic)
Jun 04, 2025
Examiner Interview Summary
Jun 04, 2025
Response Filed
Sep 03, 2025
Final Rejection — §101, §103
Oct 30, 2025
Examiner Interview Summary
Oct 30, 2025
Applicant Interview (Telephonic)
Nov 10, 2025
Response after Non-Final Action
Dec 05, 2025
Request for Continued Examination
Dec 12, 2025
Response after Non-Final Action
Mar 02, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12477016
AUTOMATION OF VISUAL INDICATORS FOR DISTINGUISHING ACTIVE SPEAKERS OF USERS DISPLAYED AS THREE-DIMENSIONAL REPRESENTATIONS
2y 5m to grant Granted Nov 18, 2025
Patent 12468969
METHODS FOR CORRELATED HISTOGRAM CLUSTERING FOR MACHINE LEARNING
2y 5m to grant Granted Nov 11, 2025
Patent 12419611
PATIENT MONITOR, PHYSIOLOGICAL INFORMATION MEASUREMENT SYSTEM, PROGRAM TO BE USED IN PATIENT MONITOR, AND NON-TRANSITORY COMPUTER READABLE MEDIUM IN WHICH PROGRAM TO BE USED IN PATIENT MONITOR IS STORED
2y 5m to grant Granted Sep 23, 2025
Patent 12153783
User Interfaces and Methods for Generating a New Artifact Based on Existing Artifacts
2y 5m to grant Granted Nov 26, 2024
Patent 12120422
SYSTEMS AND METHODS FOR CAPTURING AND DISPLAYING MEDIA DURING AN EVENT
2y 5m to grant Granted Oct 15, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

3-4
Expected OA Rounds
29%
Grant Probability
48%
With Interview (+19.0%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
