DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/04/2025 has been entered.
Claims 2-3, 5-7, 12-13 and 15-17 have been cancelled, and claims 1, 9-11, 14 and 19 have been amended. Claims 1, 4, 8-11, 14, and 18-20 have been examined and are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4, 8-11, 14, and 18-20 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea.
Regarding claim 1:
Subject Matter Eligibility Analysis Step 2A Prong 1:
The claim recites inferring … a particular inference for a new tuple to explain that is based on the plurality of features which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user making a judgment, i.e., a determination and inference regarding a tuple that was generated with a different value in one of the dimensions than other tuples. See 2106.04.(a)(2).III.C.
The claim recites generating a plurality of random integers in a range that is based on a count of the plurality of training tuples which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user creating a certain number of random integers. See 2106.04.(a)(2).III.C.
The claim recites randomly selecting, based on the plurality of random integers, a plurality of perturbed values from values of the feature in the plurality of training tuples which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user selecting perturbed values at random. See 2106.04.(a)(2).III.C.
The claim recites generating a plurality of perturbed tuples…wherein each perturbed tuple of the plurality of perturbed tuples is based on same said new tuple to explain and a respective perturbed value of the plurality of perturbed values which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses the user generating, with the aid of pen and paper, a plurality of modified or changed tuples that are based on the original tuple data and the respective perturbed value for data interpretation or comparison. See 2106.04.(a)(2).III.C.
The claim recites …inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses the user determining an outcome based on a tuple. See 2106.04.(a)(2).III.C.
The claim recites measuring a respective difference between each perturbed inference of the plurality of perturbed tuples and same said particular inference which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses the user determining the difference between two outcomes. See 2106.04.(a)(2).III.C.
The claim recites for each feature of the plurality of features, calculating a respective importance of the feature based on the differences measured for the feature which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses the user determining the relevance of each dimension of two tuples based on comparing the two respective outcomes of the two tuples. See 2106.04.(a)(2).III.C.
The claim recites generating, based on the importance of at least one numeric feature of the plurality of features…a local explanation of the particular inference for the new tuple to explain which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user evaluating and generating an explanation (mental process of evaluate and/or explain) of an outcome where the explanation is how relevant each dimension of a tuple was to the outcome. See 2106.04.(a)(2).III.C.
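Solely to illustrate the sequence of limitations characterized above (random index generation, perturbed-value selection, perturbed-tuple construction, inference, difference measurement, importance calculation, and explanation), the following is a minimal Python sketch. The model, data values, and all names are hypothetical and are not drawn from the claims or the cited references; a simple summation stands in for the opaque model.

```python
import random

def infer(tuple_):
    # Hypothetical stand-in for the opaque model's inference.
    return sum(tuple_)

random.seed(42)
training_tuples = [(1, 10), (2, 20), (3, 30), (4, 40)]  # plurality of training tuples
new_tuple = (2.5, 25.0)                                 # new tuple to explain
particular_inference = infer(new_tuple)                 # particular inference

importances = []
for f in range(len(new_tuple)):                         # for each feature
    diffs = []
    for _ in range(3):
        # random integer in a range based on the count of training tuples
        idx = random.randrange(len(training_tuples))
        # perturbed value selected from values of this feature in the training tuples
        perturbed_value = training_tuples[idx][f]
        # perturbed tuple based on the same new tuple and the perturbed value
        perturbed = list(new_tuple)
        perturbed[f] = perturbed_value
        # respective perturbed inference and its difference from the particular inference
        diffs.append(abs(infer(perturbed) - particular_inference))
    # respective importance of the feature based on the measured differences
    importances.append(sum(diffs) / len(diffs))

# local explanation: features ordered by importance
ranking = sorted(range(len(importances)), key=lambda f: -importances[f])
print(importances, ranking)
```

Any trained model could be substituted at the `infer` stand-in; the control flow is what mirrors the recited limitations.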
Subject Matter Eligibility Analysis Step 2A Prong 2:
(a) Training an opaque model (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f)))
(b) with a plurality of training tuples that are based on a plurality of features (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)))
(c) the opaque model (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f)))
(d) performed by one or more computers (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f)))
(e) including concurrently generating two perturbed tuples of the plurality of perturbed tuples (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)))
(f) and displaying (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f)))
Subject Matter Eligibility Analysis Step 2B:
Additional elements (a), (c), (d), and (f) do not integrate the abstract idea into a practical application, nor do these additional limitations provide significantly more than the abstract idea, because the limitations amount to no more than mere instructions to apply the exception using a generic computer component. Please see MPEP §2106.05(f).
Additional elements (b) and (e) do not integrate the abstract idea into a practical application, nor do these additional limitations provide significantly more than the abstract idea, because the limitations merely specify a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)).
Additional elements (a), (b), (c), (d), (e), and (f) in claim 1, when considered separately and in combination, do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.
Regarding claim 4:
Subject Matter Eligibility Analysis Step 2A Prong 1:
The claim recites wherein the local explanation comprises a ranking of at least two features of the plurality of features based on the importances of the at least two features which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user evaluating the dimensional relevance of tuples to their outputs and deciding the most important features via ranking said evaluated dimensional relevance. See 2106.04.(a)(2).III.C.
Subject Matter Eligibility Analysis Step 2A Prong 2:
The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.
Subject Matter Eligibility Analysis Step 2B:
Claim 4 does not include any additional element, when considered separately and in combination, that amounts to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.
Regarding claim 8:
Subject Matter Eligibility Analysis Step 2A Prong 1:
The claim recites measuring the respective difference comprises measuring a respective difference between a respective loss of each perturbed inference of the plurality of perturbed tuples and a loss of the particular inference which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user evaluating (mental process of performing an evaluation) by comparing the outcome of each perturbed tuple with the outcome of the tuple to explain. See 2106.04.(a)(2).III.C.
Subject Matter Eligibility Analysis Step 2A Prong 2:
The claim does not contain elements that would warrant a Step 2A Prong 2 analysis.
Subject Matter Eligibility Analysis Step 2B:
Claim 8 does not include any additional element, when considered separately and in combination, that amounts to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception. The claim is not patent eligible.
Regarding claim 9:
Subject Matter Eligibility Analysis Step 2A Prong 1:
The claim does not contain elements that would warrant a Step 2A Prong 1 analysis.
Subject Matter Eligibility Analysis Step 2A Prong 2:
the opaque model is unsupervised, and the plurality of training tuples are unlabeled (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)))
Subject Matter Eligibility Analysis Step 2B:
Additional element (a) does not integrate the abstract idea into a practical application, nor does the additional limitation provide significantly more than the abstract idea, because the limitation merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)).
Regarding claim 10:
Subject Matter Eligibility Analysis Step 2A Prong 1:
The claim does not contain elements that would warrant a Step 2A Prong 1 analysis.
Subject Matter Eligibility Analysis Step 2A Prong 2:
two execution contexts performing said concurrently generating (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)))
Subject Matter Eligibility Analysis Step 2B:
Additional element (a) does not integrate the abstract idea into a practical application, nor does the additional limitation provide significantly more than the abstract idea, because the limitation merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)).
Regarding claim 11:
Subject Matter Eligibility Analysis Step 2A Prong 1:
The claim recites inferring … a particular inference for a new tuple to explain that is based on the plurality of features which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user making a judgment, i.e., a determination and inference regarding a tuple that was generated with a different value in one of the dimensions than other tuples. See 2106.04.(a)(2).III.C.
The claim recites generating a plurality of random integers in a range that is based on a count of the plurality of training tuples which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user creating a certain number of random integers. See 2106.04.(a)(2).III.C.
The claim recites randomly selecting, based on the plurality of random integers, a plurality of perturbed values from values of the feature in the plurality of training tuples which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitations encompass a user selecting perturbed values at random. See 2106.04.(a)(2).III.C.
The claim recites generating a plurality of perturbed tuples…wherein each perturbed tuple of the plurality of perturbed tuples is based on same said new tuple to explain and a respective perturbed value of the plurality of perturbed values which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses the user generating, with the aid of pen and paper, a plurality of modified or changed tuples that are based on the original tuple data and the respective perturbed value for data interpretation or comparison. See 2106.04.(a)(2).III.C.
The claim recites …inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses the user determining an outcome based on a tuple. See 2106.04.(a)(2).III.C.
The claim recites measuring a respective difference between each perturbed inference of the plurality of perturbed tuples and same said particular inference which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses the user determining the difference between two outcomes. See 2106.04.(a)(2).III.C.
The claim recites for each feature of the plurality of features, calculating a respective importance of the feature based on the differences measured for the feature which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses the user determining the relevance of each dimension of two tuples based on comparing the two respective outcomes of the two tuples. See 2106.04.(a)(2).III.C.
The claim recites generating, based on the importance of at least one numeric feature of the plurality of features…a local explanation of the particular inference for the new tuple to explain which, under the broadest reasonable interpretation, covers performance of the limitation in the mind with or without a physical aid. The limitation encompasses a user evaluating and generating an explanation (mental process of evaluate and/or explain) of an outcome where the explanation is how relevant each dimension of a tuple was to the outcome. See 2106.04.(a)(2).III.C.
Subject Matter Eligibility Analysis Step 2A Prong 2:
(a) Training an opaque model (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f)))
(b) with a plurality of training tuples that are based on a plurality of features (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)))
(c) the opaque model (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f)))
(d) performed by one or more computers (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f)))
(e) including concurrently generating two perturbed tuples of the plurality of perturbed tuples (merely specifies a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)))
(f) and displaying (merely recites a generic computer on which to perform the abstract idea, e.g. "apply it on a computer" (see MPEP 2106.05(f)))
Subject Matter Eligibility Analysis Step 2B:
Additional elements (a), (c), (d), and (f) do not integrate the abstract idea into a practical application, nor do these additional limitations provide significantly more than the abstract idea, because the limitations amount to no more than mere instructions to apply the exception using a generic computer component. Please see MPEP §2106.05(f).
Additional elements (b) and (e) do not integrate the abstract idea into a practical application, nor do these additional limitations provide significantly more than the abstract idea, because the limitations merely specify a particular technological environment in which the abstract idea is to take place, i.e. a field of use (see MPEP 2106.05(h)).
Additional elements (a), (b), (c), (d), (e), and (f) in claim 11, when considered separately and in combination, do not amount to an integration of the judicial exception into a practical application, nor to significantly more than the judicial exception, for the reasons set forth in the Step 2A Prong 2 analysis above. The claim is not patent eligible.
Claim 14 is rejected under the same 35 U.S.C. 101 analysis due to the substantial similarity of the limitations and additional elements of claim 4 found in claim 14.
Claims 18-20 are rejected under the same 35 U.S.C. 101 analysis due to the substantial similarity of the limitations and additional elements of claims 8-10 found in claims 18-20, respectively.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4, 8-11, 14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Parker et al. (US 20190122135 A1), henceforth referred to as Parker, in view of Santa (StackOverflow answer, https://stackoverflow.com/questions/9470104/select-random-java-variable-is-this-possible), and further in view of Dalli et al. (US 20210133630 A1), henceforth referred to as Dalli.
Regarding claim 1, Parker teaches training an opaque model(Parker, [0032], “Process 100 begins at block 110, where a processor, at a user's direction, selects a learned ML model 110. As shown, the model may be opaque”) with a plurality of training tuples(Parker, [0143], “Each of data source 602 and input data 604 supplies data to dataset generator 610, which generates a training set 611 with which to train the model” where the training set generated is considered training tuples) that are based on a plurality of features(Parker, [0091], “In some embodiments, detailed summary statistics for each feature in the data set used to train the model m are available. Specifically, given a set of training data, histograms may be constructed for each feature of the training data representing a discrete approximation of the distribution of each feature” where the training data with features is considered training tuples with features (See Also: Parker, Figure 6, 610 DATASET GENERATOR and 611 TRAINING DATASET))
Parker teaches inferring, by the opaque model after said training, a particular inference for a new tuple to explain(Parker, [0018], “…a prediction m(P) of m for P” where inference is understood to refer to the prediction or classification a machine learning model gives given an input and a prediction m(P) is considered an inference of a tuple. See [0104] for an example “Thus, the input data point P for this applicant is P=[100,000, 40,000, 5]” and Figure 6, 604 INPUT DATA) that is based on the plurality of features, (Parker, [0018],“ input data point P to m, P including one or more features,” where input data point P is a tuple of features which is considered a new tuple to explain that is based on the plurality of features)
Parker teaches for each feature of the plurality of features, for each training tuple in the plurality of training tuples (Parker, Figure 3 operational flow of a process for generating synthetic datapoints as well as figure 6 with 626 to optionally iterate.)
Parker teaches randomly selecting… a plurality of perturbed values(Parker, [0092], “With these histograms, in embodiments, a function draw(h[i]) may be specified, which draws a histogram bin with a probability corresponding to its frequency, and outputs a value in that bin (..for example…a random value within the bin)” where the randomly selecting from a value from a histogram bin to generate a set of perturbed input data points corresponds to randomly selecting…a plurality of perturbed values(See Also: Parker, Figure 6, 610 DATASET GENERATOR and 611 TRAINING DATASET as the input to 613 HISTOGRAMS and Parker, Figure 3, 308, 310 and 312 of selecting an input datapoint)) from values of the feature in the plurality of training tuples (Parker, [0046], “…to generate these modifications (synthetic datapoints), a system may, for example, leverage histogram-style summaries of the distributions for each of the features in the point (such as, for example, those shown in FIG. 4). Thus, to select a “different” value for a given feature, in embodiments, the system selects a value from the histogram bin other than the one containing the current value” where the selection of the perturbed values being chosen from a summary of distributions of each feature created from training data(See: [0091] “… given a set of training data, histograms may be constructed for each feature of the training data representing a discrete approximation of the distribution of each feature”) and selecting a value from the histogram bin randomly for one that is NOT the current value is considered randomly selecting perturbed values of features of the training tuples)
Parker teaches generating a plurality of perturbed tuples…wherein each perturbed tuple of the plurality of perturbed tuples (Parker, [0018], “create a set of perturbed input data points (Pk) from P by changing the value of at least one feature of P for each perturbed input data point” where creating a set of perturbed input data points (Pk) from P is considered generating a plurality of perturbed tuples based on a new tuple to explain; See also: Figure 3, illustrates an overview of the operational flow of a process for generating one or more synthetic datapoints, based on a single input datapoint) is based on same said new tuple to explain and a respective perturbed value of the plurality of perturbed values(Parker Figure 3, where the loop of 312 to 324 shows the generation of synthetic data based on a single input dataset (of tuples) P where the new/perturbed value is based on summaries of features of the input)
Parker teaches the opaque model inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples, (Parker, [0018], “obtain a prediction m(Pk) for each of the perturbed input data points” where each of the predictions of (Pk) is considered an inference and finding a prediction m(Pk) for each of the perturbed input data points is considered inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples)
Parker teaches measuring a respective difference between each perturbed inference of the plurality of perturbed tuples and same said particular inference(Parker, [0089], “Measure the difference between the predictions m(p) and m(p_i) for each p_i” where p_i is the set of perturbed inputs and measuring the differences between m(p) and m(p_i) is considered measuring the differences of inferences between each perturbed inference and same said particular inference)
Parker teaches for each feature of the plurality of features, calculating a respective importance of the feature based on the differences measured for the feature; (Parker, [0035], “based on the analysis, it is determined which feature(s) is or are important to, or influential of, the model's prediction outcome”)
Parker teaches and generating, based on the importance of at least one numeric feature of the plurality of features, and displaying(Parker, Fig. 5 and [0099], “FIG. 5 illustrates an image of an example output prediction explanation for a sample prediction made by an example credit risk model, in accordance with various embodiments….The next column to the right of column 510 is “Importance” 505. This column depicts a bar that graphically demonstrates the importance of that feature to the model in making a credit risk prediction. At the far right of the bar, in column 507, a numerical percentage is provided that corresponds to the length of the bar” where Fig. 5 corresponds to a GUI interface displaying a numeric and graphical representation of the importance of features, including at least one numeric feature) a local explanation of the particular inference for the new tuple to explain(Parker, [0035], “…where a report is generated and output to a user indicating which features were most influential in the model's prediction results” where a report that indicates which features were most influential to the model’s prediction is considered a local explanation)
Parker teaches wherein the method is performed by one or more computers(Parker, Abstract, “A non-transitory computer-readable medium including instructions, which when executed by one or more processors of a computing system”)
Parker does not teach, however Dalli discloses including concurrently generating two perturbed tuples of the plurality of perturbed tuples(Dalli, [0051], “In an alternative embodiment, the fit 511 may be reconfigured to implement the inner functions in parallel or as an atomic…multiple local models which represent the individual partitions may be fitted concurrently” where 511 has the perturbation node, 514, and Dalli implementing 511 with parallelism and concurrency corresponds with concurrently generating perturbed tuples (See also: Dalli, [0028], “Still referring to exemplary FIG. 4, the illustrated system may take into account a variety of factors. For example, in the illustrated system, these factors may include a…number of concurrent applications”))
References Parker and Dalli are analogous art because they are from the field of endeavor directed towards explainable artificial intelligence techniques and explaining black-box machine learning models.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Parker and Dalli before him or her, to modify the perturbation from input data as disclosed by Parker to include the concurrency and parallelism of Dalli for the enhanced benefit of concurrency and parallelism, as it allows multiple executions or simultaneous runs. The suggestion/motivation for doing so would have been, as Dalli states: Dalli, [0009], “A combination of multiple local models may be used in a global model creation process” and Dalli, [0051], “…multiple local models which represent the individual partitions may be fitted concurrently”
Parker does not teach, however Santa discloses generating a plurality of random integers in a range that is based on a count of the plurality of training tuples and randomly selecting based on the plurality of random integers (Santa, code for function getRandomString(), where r.nextInt()%3 is generating a random integer based on the count of the list and switch(i) is selecting a case based on the random integer generated)
References Parker and Santa are analogous art because they are from the field of endeavor of using technology to create a list and make random selections from it.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Parker and Santa before him or her, to modify the selection of the input data point of Parker to include the random selection with a list of Santa as it is a simple implementation with low complexity.
The suggestion/motivation for doing so would have been, as Santa states, that it is a simple method of random selection to implement.
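As a minimal sketch of how Santa's random-index approach (the cited answer uses Java's nextInt) could combine with selection of values from Parker's training data, the following Python fragment generates random integers in a range based on the count of training tuples and uses them to select perturbed feature values. All names and data here are hypothetical illustrations, not content from the references.

```python
import random

random.seed(0)
training_tuples = [(1, 10), (2, 20), (3, 30), (4, 40)]  # plurality of training tuples
n = len(training_tuples)                                # count of the training tuples
feature_index = 1                                       # the feature being perturbed

# generate a plurality of random integers in a range based on that count
# (analogous to the cited answer's use of a bounded nextInt)
random_ints = [random.randrange(n) for _ in range(3)]

# randomly select perturbed values from values of the feature in the training tuples
perturbed_values = [training_tuples[i][feature_index] for i in random_ints]
print(random_ints, perturbed_values)
```

Indexing by random integer, rather than drawing a fresh random value, guarantees every perturbed value actually occurs in the training data for that feature.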
Regarding claim 4, Parker-Santa-Dalli teaches the method of claim 1 (and thus the rejection of claim 1 is incorporated).
Parker further teaches wherein the local explanation comprises a ranking of at least two features of the plurality of features based on the importances of the at least two features(Parker, [0018], “analyze the predictions m(Pk) for the perturbed input data points to determine which features are most influential to the prediction; and output the analysis results to a user” See also: Parker, [0099], where [0099] explains Figure 5’s breakdown of an example with 505 and 507 showing how important each of the four features 501 are.)
Regarding claim 8, Parker-Santa-Dalli teaches the method of claim 1 (and thus the rejection of claim 1 is incorporated).
Parker further teaches wherein said measuring the respective difference comprises measuring a respective difference between a respective loss of each perturbed inference of the plurality of perturbed tuples and a loss of the particular inference(Parker, [0063], “where, for each synthetic prediction, a net decrease in the predicted probability for a given class, or result, is determined” where determining each synthetic prediction’s net decrease in relation to a given original prediction result is considered measuring a respective difference between a respective loss of each perturbed inference of the plurality of perturbed tuples and a loss of the particular inference)
Regarding claim 9, Parker-Santa-Dalli teaches the method of claim 1 (and thus the rejection of claim 1 is incorporated).
Parker further teaches wherein at least one selected from the group consisting of: the opaque model is unsupervised, and the plurality of training tuples are unlabeled(Parker, [0084], “… may even be an unsupervised model, such as a latent topic model”)
Regarding claim 10, Parker-Santa-Dalli teaches the method of claim 1 (and thus the rejection of claim 1 is incorporated).
Dalli further discloses further comprising two execution contexts performing said concurrently generating(Dalli, [0051], “In an alternative embodiment, the fit 511 may be reconfigured to implement the inner functions in parallel or as an atomic…multiple local models which represent the individual partitions may be fitted concurrently” where 511 has the perturbation node, 514, and Dalli implementing 511 with parallelism and concurrently corresponds with concurrently generating perturbed tuples (See also: Dalli, [0028], “Still referring to exemplary FIG. 4, the illustrated system may take into account a variety of factors. For example, in the illustrated system, these factors may include a…number of concurrent applications”))
References Parker and Dalli are analogous art because they are from the field of endeavor directed towards explainable artificial intelligence techniques and explaining black-box machine learning models.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Parker and Dalli before him or her, to modify the perturbation of input data as disclosed by Parker to include the concurrency and parallelism of Dalli, for the enhanced benefit of concurrency and parallelism, as it allows multiple simultaneous executions. The suggestion/motivation for doing so would have been, as Dalli states: Dalli, [0009], “A combination of multiple local models may be used in a global model creation process” and Dalli, [0051], “…multiple local models which represent the individual partitions may be fitted concurrently”
Regarding claim 11, Parker teaches One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors (Parker, Abstract, “A non-transitory computer-readable medium including instructions, which when executed by one or more processors of a computing system”)
Parker teaches training an opaque model (Parker, [0032], “Process 100 begins at block 110, where a processor, at a user's direction, selects a learned ML model 110. As shown, the model may be opaque”) with a plurality of training tuples (Parker, [0143], “Each of data source 602 and input data 604 supplies data to dataset generator 610, which generates a training set 611 with which to train the model” where the training set generated is considered training tuples) that are based on a plurality of features (Parker, [0091], “In some embodiments, detailed summary statistics for each feature in the data set used to train the model m are available. Specifically, given a set of training data, histograms may be constructed for each feature of the training data representing a discrete approximation of the distribution of each feature” where the training data with features is considered training tuples with features (See also: Parker, Figure 6, 610 DATASET GENERATOR and 611 TRAINING DATASET))
Parker teaches inferring, by the opaque model after said training, a particular inference for a new tuple to explain (Parker, [0018], “…a prediction m(P) of m for P” where inference is understood to refer to the prediction or classification a machine learning model gives given an input, and a prediction m(P) is considered an inference of a tuple. See [0104] for an example, “Thus, the input data point P for this applicant is P=[100,000, 40,000, 5]” and Figure 6, 604 INPUT DATA) that is based on the plurality of features (Parker, [0018], “input data point P to m, P including one or more features,” where input data point P is a tuple of features which is considered a new tuple to explain that is based on the plurality of features)
Parker teaches for each feature of the plurality of features, for each training tuple in the plurality of training tuples (Parker, Figure 3, operational flow of a process for generating synthetic datapoints, as well as Figure 6, with 626 to optionally iterate.)
Parker teaches randomly selecting… a plurality of perturbed values (Parker, [0092], “With these histograms, in embodiments, a function draw(h[i]) may be specified, which draws a histogram bin with a probability corresponding to its frequency, and outputs a value in that bin (..for example…a random value within the bin)” where the random selection of a value from a histogram bin to generate a set of perturbed input data points corresponds to randomly selecting… a plurality of perturbed values (See also: Parker, Figure 6, 610 DATASET GENERATOR and 611 TRAINING DATASET as the input to 613 HISTOGRAMS, and Parker, Figure 3, 308, 310 and 312, selecting an input datapoint)) from values of the feature in the plurality of training tuples (Parker, [0046], “…to generate these modifications (synthetic datapoints), a system may, for example, leverage histogram-style summaries of the distributions for each of the features in the point (such as, for example, those shown in FIG. 4). Thus, to select a “different” value for a given feature, in embodiments, the system selects a value from the histogram bin other than the one containing the current value” where the perturbed values are chosen from a summary of the distributions of each feature created from training data (See: [0091], “… given a set of training data, histograms may be constructed for each feature of the training data representing a discrete approximation of the distribution of each feature”), and selecting a value at random from a histogram bin other than the one containing the current value is considered randomly selecting perturbed values of features of the training tuples)
Parker teaches generating a plurality of perturbed tuples…wherein each perturbed tuple of the plurality of perturbed tuples (Parker, [0018], “create a set of perturbed input data points (Pk) from P by changing the value of at least one feature of P for each perturbed input data point” where creating a set of perturbed input data points (Pk) from P is considered generating a plurality of perturbed tuples based on a new tuple to explain; See also: Figure 3, which illustrates an overview of the operational flow of a process for generating one or more synthetic datapoints based on a single input datapoint) is based on same said new tuple to explain and a respective perturbed value of the plurality of perturbed values (Parker, Figure 3, where the loop of 312 to 324 shows the generation of synthetic data based on a single input dataset (of tuples) P, where the new/perturbed value is based on summaries of features of the input)
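Purely as an illustrative sketch (not part of the record; the bin count, feature values, and function names are hypothetical), the histogram-based perturbation mapped above from Parker [0046], [0091], and [0092] — building a per-feature histogram from the training data and drawing a replacement value from a bin other than the one containing the current value — might look like:

```python
import random
from collections import Counter

def build_histogram(values, num_bins=4):
    """Discrete approximation of one feature's distribution (cf. Parker [0091])."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins or 1.0        # guard against a constant feature
    bins = Counter(min(int((v - lo) / width), num_bins - 1) for v in values)
    return lo, width, bins                     # maps bin index -> frequency

def draw_different(hist, current_value, rng):
    """Draw a bin weighted by frequency, excluding the bin holding the current
    value, then return a random value inside it (cf. Parker [0046], [0092])."""
    lo, width, bins = hist
    cur_bin = min(int((current_value - lo) / width), max(bins))
    choices = [(b, f) for b, f in bins.items() if b != cur_bin]
    pick = rng.uniform(0, sum(f for _, f in choices))
    for b, f in choices:
        pick -= f
        if pick <= 0:
            return lo + b * width + rng.uniform(0, width)
    return lo + choices[-1][0] * width + rng.uniform(0, width)

rng = random.Random(0)
feature_values = [1, 2, 2, 3, 8, 9, 9, 10]     # one feature across training tuples
hist = build_histogram(feature_values)
new_value = draw_different(hist, 2, rng)       # lands outside the bin holding 2
```

A perturbed tuple would then be the original input with `new_value` substituted for the current value of that feature.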
Parker teaches the opaque model inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples, (Parker, [0018], “obtain a prediction m(Pk) for each of the perturbed input data points” where each of the predictions of (Pk) is considered an inference and finding a prediction m(Pk) for each of the perturbed input data points is considered inferring a respective perturbed inference for each perturbed tuple of the plurality of perturbed tuples)
Parker teaches measuring a respective difference between each perturbed inference of the plurality of perturbed tuples and same said particular inference (Parker, [0089], “Measure the difference between the predictions m(p) and m(p_i) for each p_i” where p_i is the set of perturbed inputs and measuring the differences between m(p) and m(p_i) is considered measuring the differences of inferences between each perturbed inference and same said particular inference)
Parker teaches for each feature of the plurality of features, calculating a respective importance of the feature based on the differences measured for the feature; (Parker, [0035], “based on the analysis, it is determined which feature(s) is or are important to, or influential of, the model's prediction outcome”)
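As an illustration only — Parker [0035] states that influential features are determined from the analysis, but the specific aggregation rule below (averaging absolute measured differences per feature) is an assumption, not drawn from the reference:

```python
# Hypothetical aggregation of measured differences into per-feature importance;
# the averaging rule is an assumption used only for illustration.

def feature_importances(diffs_by_feature):
    """diffs_by_feature maps a feature name to the prediction differences
    measured for the perturbed tuples that changed that feature."""
    return {
        feature: sum(abs(d) for d in diffs) / len(diffs)
        for feature, diffs in diffs_by_feature.items()
    }

scores = feature_importances({"income": [0.3, 0.5], "age": [0.05, -0.05]})
# "income" scores higher, so it would rank as more influential locally
```

Under this sketch, the feature whose perturbations most changed the prediction receives the highest importance in the local explanation.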
Parker teaches and generating, based on the importance of at least one numeric feature of the plurality of features, and displaying (Parker, Fig. 5 and [0099], “FIG. 5 illustrates an image of an example output prediction explanation for a sample prediction made by an example credit risk model, in accordance with various embodiments….The next column to the right of column 510 is “Importance” 505. This column depicts a bar that graphically demonstrates the importance of that feature to the model in making a credit risk prediction. At the far right of the bar, in column 507, a numerical percentage is provided that corresponds to the length of the bar” where Fig. 5 corresponds to a GUI displaying numeric and graphical representations of the importance of features, including at least one numeric feature) a local explanation of the particular inference for the new tuple to explain (Parker, [0035], “…where a report is generated and output to a user indicating which features were most influential in the model's prediction results” where a report that indicates which features were most influential to the model’s prediction is considered a local explanation)
Parker teaches wherein the method is performed by one or more computers (Parker, Abstract, “A non-transitory computer-readable medium including instructions, which when executed by one or more processors of a computing system”)
Parker does not teach, however Dalli discloses including concurrently generating two perturbed tuples of perturbed tuples (Dalli, [0051], “In an alternative embodiment, the fit 511 may be reconfigured to implement the inner functions in parallel or as an atomic…multiple local models which represent the individual partitions may be fitted concurrently” where 511 has the perturbation node, 514, and Dalli implementing 511 with parallelism and concurrency corresponds with concurrently generating perturbed tuples (See also: Dalli, [0028], “Still referring to exemplary FIG. 4, the illustrated system may take into account a variety of factors. For example, in the illustrated system, these factors may include a…number of concurrent applications”))
References Parker and Dalli are analogous art because they are from the same field of endeavor, directed towards explainable artificial intelligence techniques and explaining black-box machine learning models.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Parker and Dalli before him or her, to modify the perturbation of input data as disclosed by Parker to include the concurrency and parallelism of Dalli, for the enhanced benefit of concurrency and parallelism, as it allows multiple simultaneous executions. The suggestion/motivation for doing so would have been, as Dalli states: Dalli, [0009], “A combination of multiple local models may be used in a global model creation process” and Dalli, [0051], “…multiple local models which represent the individual partitions may be fitted concurrently”
Parker does not teach, however Santa discloses generating a plurality of random integers in a range that is based on a count of the plurality of training tuples and randomly selecting based on the plurality of random integers (Santa, code for function getRandomString(), where r.nextInt()%3 is generating a random integer based on the count of the list and switch(i) is selecting a case based on the random integer generated)
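For illustration only, the mapped mechanism from Santa (the `r.nextInt()%3` and `switch(i)` idiom of `getRandomString()`) can be transliterated to Python as follows; the list contents and function name here are hypothetical, not taken from the reference:

```python
import random

def select_random_tuples(training_tuples, k, rng):
    """Generate k random integers in [0, len(training_tuples)) -- the analog of
    Santa's r.nextInt() % 3 -- and select the tuple at each index, the analog
    of Santa's switch(i) dispatch over the cases."""
    count = len(training_tuples)
    indices = [rng.randrange(count) for _ in range(k)]
    return [training_tuples[i] for i in indices]

rng = random.Random(42)
tuples = [("a", 1), ("b", 2), ("c", 3)]   # hypothetical training tuples
picked = select_random_tuples(tuples, 5, rng)
# every selection comes from the original list of three tuples
```

Note that Python's `randrange(count)` is always non-negative, whereas Java's `r.nextInt() % 3` can be negative for negative `nextInt()` results; the sketch assumes only the index-by-random-integer mechanism matters for the mapping.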
References Parker and Santa are analogous art because they are from the same field of endeavor, directed towards using technology to create and randomly select from a list.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Parker and Santa before him or her, to modify the selection of the input data point of Parker to include the random selection from a list of Santa, as it is a simple implementation with low complexity.
The suggestion/motivation for doing so would have been, as Santa states, that it is a simple, low-complexity method of random selection to implement.
Regarding claim 14, the rejection of claim 11 is incorporated into claim 14; further, claim 14 is rejected under the same rationale as set forth in the rejection of claim 4.
Regarding claim 18, the rejection of claim 11 is incorporated into claim 18; further, claim 18 is rejected under the same rationale as set forth in the rejection of claim 8.
Regarding claim 19, the rejection of claim 11 is incorporated into claim 19; further, claim 19 is rejected under the same rationale as set forth in the rejection of claim 9.
Regarding claim 20, the rejection of claim 11 is incorporated into claim 20; further, claim 20 is rejected under the same rationale as set forth in the rejection of claim 10.
Response to Arguments
Applicant's arguments filed 12/04/2025 have been fully considered but they are not persuasive. A breakdown of arguments can be found below.
103:
A-C:
Applicant appears to argue Wong does not disclose certain aspects of the claims.
Examiner respectfully disagrees. Applicant’s arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
D:
Applicant appears to argue that Santa does not disclose “randomly selecting…from…the feature” because a “numeric feature” is required; that the Santa reference does not have the claimed training tuples, as a String is a datatype that would not fit the required classification of “training tuple”; and that Santa is nonanalogous art, as it does not perturb the claimed feature.
Examiner respectfully disagrees, as the generating, based on the importance of at least one numeric feature of the plurality of features, and displaying of the perturbation is mapped to Parker. Additionally, Strings and tuples are both ordered sequences of data. In response to applicant's argument that Santa is nonanalogous art, it has been held that a prior art reference must either be in the field of the inventor’s endeavor or, if not, then be reasonably pertinent to the particular problem with which the inventor was concerned, in order to be relied upon as a basis for rejection of the claimed invention. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992). In this case, Santa’s generation of a random integer based on the count of the list of tuples (strings) and selection of a case based on the random integer must be viewed in combination with the selection of a datapoint and perturbation of tuple values of Parker, and not by itself.
E-F:
Applicant appears to argue that the motivation to combine Parker and Santa is unclear, that due to the alleged incompatibility there is no motivation to combine, and that no prima facie case of obviousness has been established.
Examiner respectfully disagrees, as the rejection is made on the combination of the arts; in response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). The rejection outlines that Parker discloses “randomly selecting… a plurality of perturbed values from values of the feature in the plurality of training tuples” through the random selection of values as discussed in Parker, [0092], “With these histograms, in embodiments, a function draw(h[i]) may be specified, which draws a histogram bin with a probability corresponding to its frequency, and outputs a value in that bin (..for example…a random value within the bin)”, as well as Figure 3, selecting an input datapoint. Parker in combination with Santa discloses “generating a plurality of random integers in a range that is based on a count of the plurality of training tuples” and “randomly selecting based on the plurality of random integers” through the generation of a random integer (r.nextInt()) based on a count of tuples in a list (int i = r.nextInt()%3, where 3 is the total number of tuples) and selection based on a random integer (switch (i)). The combination of references together teaches generating a list of tuples, generating a random integer in a range that is based on a count of tuples in the list of training tuples, and randomly selecting based on the random integer.
101:
A:
Applicant appears to argue that the addition of the step of displaying information integrates the claims into a practical application, and quotes mapped limitations and elements from the 101 analysis with a statement that a well-known, routine, or conventional explanation was not alleged.
Examiner respectfully disagrees, as the claims appear to recite a generic computer component to perform the abstract ideas (“apply it on a computer”; see MPEP 2106.05(f)) and fail to integrate the abstract idea into a practical application or amount to “significantly more,” as claims that require a computer may still recite a mental process (see MPEP 2106.04(a)(2).III.C), and the training tuples specify a data gathering step that is limited to a particular data source or a particular type of data, i.e., a field of use (see MPEP 2106.05(h)). Further, displaying a generated local explanation is considered reciting a generic computer (displaying on a monitor) on which to perform the abstract idea (generating… a local explanation of the particular inference for the new tuple to explain), e.g., “apply it on a computer” (see MPEP 2106.05(f)), as the focus of the claim is generating… a local explanation of the particular inference for the new tuple to explain.
B:
Applicant appears to argue that the use of Shapley values and concurrent programming provides unconventional performance.
Examiner respectfully disagrees; in response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., the claims do not mention Shapley values by name; however, the arguments cite the specification stating the benefits of Shapley values/techniques) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Further, the use of concurrent programming in the 101 analysis merely specifies a particular technological environment (a processor using concurrent programming) in which the abstract idea (generating a plurality of perturbed tuples) is to take place, i.e., a field of use (see MPEP 2106.05(h)).
C:
Applicant appears to argue that generating and displaying a local explanation of a model, randomly selecting…from…the plurality of training tuples, concurrently generating two perturbed tuples, and inferring respective perturbed inferences provide an inventive concept.
Examiner respectfully disagrees, as the pending claims are directed to a judicial exception due to reciting limitations which fall within the “mental processes” group of abstract ideas, and the claims are not directed to significantly more than the judicial exception because they do not include additional elements that contribute to an “inventive concept.” The claims are directed towards the improvement of an abstract idea, and improvements to an abstract idea are still considered an abstract idea.
D-E:
Applicant’s arguments are directed towards the prior art of Parker rather than the 101 analysis. Arguments concerning the prior art of Parker are not pertinent to the 101 analysis and therefore carry no weight, as prior art is not taken into account in the Subject Matter Eligibility Test for Products and Processes flowchart or the MPEP guidelines.
F-G:
Applicant appears to argue a technological problem and solution through the use of generating and selecting from training tuples, providing increased accuracy, and that the use of concurrent programming provides unconventional speed.
Examiner respectfully disagrees, as Applicant does not cite limitations of claim 1 that are not mental, mere instructions, or insignificant extra-solution activity and that provide the technological improvement creating a solution to a technological problem. As the neural network is cited at a high level, it is considered a generic computer component. An improvement in an abstract idea results in an improved abstract idea, not an improvement in technology or a computer. Examiner notes MPEP 2106.05(a), which provides the requirements for how an improvement to the functioning of a computer or to any other technology or technical field is evaluated. Further, the use of concurrent programming in the 101 analysis merely specifies a particular technological environment in which the abstract idea (generating a plurality of perturbed tuples) is to take place, i.e., a field of use (see MPEP 2106.05(h)).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES JEFFREY JONES JR whose telephone number is (703)756-1414. The examiner can normally be reached Monday - Friday 8:00 - 5:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki can be reached at 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/C.J.J./Examiner, Art Unit 2122
/KAKALI CHAKI/Supervisory Patent Examiner, Art Unit 2122