DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
In response to communications filed on 10 March 2026, claims 1-20 are presently pending in the application, of which claims 1, 8, and 14 are presented in independent form. The Examiner acknowledges amended claims 1, 2, 8, 9, 14, and 15. No claims were cancelled or newly added.
Response to Remarks/Arguments
All objections and/or rejections issued in the previous Office Action, mailed 10 December 2025, have been withdrawn, unless otherwise noted in this Office Action.
Applicant’s arguments, see pages 9-11, filed 10 March 2026, with respect to the rejections of claims 1-20 under 35 U.S.C. 102(a)(1)/(a)(2) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Rawal et al. (U.S. 2024/0311685, filed 16 March 2023; hereinafter Rawal).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yuan et al. (U.S. 2021/0103838; hereinafter Yuan) in view of Rawal et al. (U.S. 2024/0311685, filed 16 March 2023; hereinafter Rawal) (newly presented).
As per claim 1, Yuan teaches a method, comprising:
generating, by a multi-label joint autoencoder, latent embeddings of a plurality of predictions of a machine learning model by positioning datapoints representing the plurality of predictions within an embedding space, wherein the datapoints are positioned within the embedding space based on a semantic labeling of each datapoint (e.g. Yuan, see paragraphs [0041-0045], which disclose a decision-making system that uses machine learning for decision making based on one or more sets of data, where the machine is trained to learn how to perform different tasks, and an interactive computing environment that includes an explainability system providing labeling for each of the data provided.);
generating, based on the latent embeddings, a plurality of computer-searchable data structures (e.g. Yuan, see paragraphs [0053-0056], which disclose a decision-making system that applies feature engineering to decision data, where the feature engineering generates features using domain knowledge to transform raw data in order to facilitate the working of one or more machine learning algorithms, which then allows the user to search through the explainability system.).
Yuan does not explicitly disclose configuring a nearest flipped neighbor determiner based on the plurality of computer-searchable data structures for identifying a nearest flipped neighbor of datapoints within the embedding space; determining a contrastive explanation of a prediction generated by the machine learning model based on the nearest flipped neighbor of datapoints; and outputting the contrastive explanation of the prediction.
Rawal teaches configuring a nearest flipped neighbor determiner based on the plurality of computer-searchable data structures for identifying a nearest flipped neighbor of datapoints within the embedding space (e.g. Rawal, see paragraphs [0019-0021], which disclose that local interpretable model-agnostic explanations (LIME) is a known technique that can sample ‘n’ points in the neighborhood of an explicand and fetch model predictions for the generated samples using model artifacts. The model predictions are then used to train a model to generate explanations, where model predictions are fetched for the nearest neighbor points from the prediction log.);
determining a contrastive explanation of a prediction generated by the machine learning model (e.g. Rawal, see paragraphs [0012-0015], which disclose accessing only the set of inputs issued to a machine learning model and the corresponding prediction outputs, which can be readily obtained from the prediction logs associated with the deployed machine learning model rather than by querying the model directly, whereas kernel SHapley Additive exPlanations (SHAP) attributions computed as post-hoc explanations require multiple ‘predict()’ function calls. See further paragraphs [0035-0036], which disclose that the explanation includes an indication of a contrastive explanation.) based on the nearest flipped neighbor of datapoints (e.g. Rawal, see paragraph [0015], which discloses that k-nearest neighbors regressors can be used to predict feature attributions directly instead of computing feature contributions from example predictions stored in the database.); and
outputting the contrastive explanation of the prediction (e.g. Rawal, see paragraphs [0036-0037], which disclose that a subset of inputs and a subset of outputs can be generated, where each output from the set of outputs was produced by the machine learning model in response to the input received.).
Yuan is directed to an explainability framework for machine learning based decision making. Rawal is directed to providing information security for a machine learning model by generating an explanation for the machine learning model. Both are analogous art because they are directed to enhancing machine learning efficiencies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yuan with the teachings of Rawal to include the claimed features, with the motivation to improve machine learning model predictions.
As per claim 8, Yuan teaches a system, comprising:
one or more processors configured to initiate operations including:
generating, by a multi-label joint autoencoder, latent embeddings of a plurality of predictions of a machine learning model by positioning datapoints representing the plurality of predictions within an embedding space, wherein the datapoints are positioned within the embedding space based on a semantic labeling of each datapoint (e.g. Yuan, see paragraphs [0041-0045], which disclose a decision-making system that uses machine learning for decision making based on one or more sets of data, where the machine is trained to learn how to perform different tasks, and an interactive computing environment that includes an explainability system providing labeling for each of the data provided.);
generating, based on the latent embeddings, a plurality of computer-searchable data structures (e.g. Yuan, see paragraphs [0053-0056], which disclose a decision-making system that applies feature engineering to decision data, where the feature engineering generates features using domain knowledge to transform raw data in order to facilitate the working of one or more machine learning algorithms, which then allows the user to search through the explainability system.).
Yuan does not explicitly disclose configuring a nearest flipped neighbor determiner based on the plurality of computer-searchable data structures for identifying a nearest flipped neighbor of datapoints within the embedding space; determining a contrastive explanation of a prediction generated by the machine learning model based on the nearest flipped neighbor of datapoints; and outputting the contrastive explanation of the prediction.
Rawal teaches configuring a nearest flipped neighbor determiner based on the plurality of computer-searchable data structures for identifying a nearest flipped neighbor of datapoints within the embedding space (e.g. Rawal, see paragraphs [0019-0021], which disclose that local interpretable model-agnostic explanations (LIME) is a known technique that can sample ‘n’ points in the neighborhood of an explicand and fetch model predictions for the generated samples using model artifacts. The model predictions are then used to train a model to generate explanations, where model predictions are fetched for the nearest neighbor points from the prediction log.);
determining a contrastive explanation of a prediction generated by the machine learning model (e.g. Rawal, see paragraphs [0012-0015], which disclose accessing only the set of inputs issued to a machine learning model and the corresponding prediction outputs, which can be readily obtained from the prediction logs associated with the deployed machine learning model rather than by querying the model directly, whereas kernel SHapley Additive exPlanations (SHAP) attributions computed as post-hoc explanations require multiple ‘predict()’ function calls. See further paragraphs [0035-0036], which disclose that the explanation includes an indication of a contrastive explanation.) based on the nearest flipped neighbor of datapoints (e.g. Rawal, see paragraph [0015], which discloses that k-nearest neighbors regressors can be used to predict feature attributions directly instead of computing feature contributions from example predictions stored in the database.); and
outputting the contrastive explanation of the prediction (e.g. Rawal, see paragraphs [0036-0037], which disclose that a subset of inputs and a subset of outputs can be generated, where each output from the set of outputs was produced by the machine learning model in response to the input received.).
Yuan is directed to an explainability framework for machine learning based decision making. Rawal is directed to providing information security for a machine learning model by generating an explanation for the machine learning model. Both are analogous art because they are directed to enhancing machine learning efficiencies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yuan with the teachings of Rawal to include the claimed features, with the motivation to improve machine learning model predictions.
As per claim 14, Yuan teaches a computer program product, the computer program product comprising:
one or more computer-readable storage media and program instructions collectively stored on the one or more computer-readable storage media, the program instructions executable by a processor to cause the processor to initiate operations including:
generating, by a multi-label joint autoencoder, latent embeddings of a plurality of predictions of a machine learning model by positioning datapoints representing the plurality of predictions within an embedding space, wherein the datapoints are positioned within the embedding space based on a semantic labeling of each datapoint (e.g. Yuan, see paragraphs [0041-0045], which disclose a decision-making system that uses machine learning for decision making based on one or more sets of data, where the machine is trained to learn how to perform different tasks, and an interactive computing environment that includes an explainability system providing labeling for each of the data provided.);
generating, based on the latent embeddings, a plurality of computer-searchable data structures (e.g. Yuan, see paragraphs [0053-0056], which disclose a decision-making system that applies feature engineering to decision data, where the feature engineering generates features using domain knowledge to transform raw data in order to facilitate the working of one or more machine learning algorithms, which then allows the user to search through the explainability system.).
Yuan does not explicitly disclose configuring a nearest flipped neighbor determiner based on the plurality of computer-searchable data structures for identifying a nearest flipped neighbor of datapoints within the embedding space; determining a contrastive explanation of a prediction generated by the machine learning model based on the nearest flipped neighbor of datapoints; and outputting the contrastive explanation of the prediction.
Rawal teaches configuring a nearest flipped neighbor determiner based on the plurality of computer-searchable data structures for identifying a nearest flipped neighbor of datapoints within the embedding space (e.g. Rawal, see paragraphs [0019-0021], which disclose that local interpretable model-agnostic explanations (LIME) is a known technique that can sample ‘n’ points in the neighborhood of an explicand and fetch model predictions for the generated samples using model artifacts. The model predictions are then used to train a model to generate explanations, where model predictions are fetched for the nearest neighbor points from the prediction log.);
determining a contrastive explanation of a prediction generated by the machine learning model (e.g. Rawal, see paragraphs [0012-0015], which disclose accessing only the set of inputs issued to a machine learning model and the corresponding prediction outputs, which can be readily obtained from the prediction logs associated with the deployed machine learning model rather than by querying the model directly, whereas kernel SHapley Additive exPlanations (SHAP) attributions computed as post-hoc explanations require multiple ‘predict()’ function calls. See further paragraphs [0035-0036], which disclose that the explanation includes an indication of a contrastive explanation.) based on the nearest flipped neighbor of datapoints (e.g. Rawal, see paragraph [0015], which discloses that k-nearest neighbors regressors can be used to predict feature attributions directly instead of computing feature contributions from example predictions stored in the database.); and
outputting the contrastive explanation of the prediction (e.g. Rawal, see paragraphs [0036-0037], which disclose that a subset of inputs and a subset of outputs can be generated, where each output from the set of outputs was produced by the machine learning model in response to the input received.).
Yuan is directed to an explainability framework for machine learning based decision making. Rawal is directed to providing information security for a machine learning model by generating an explanation for the machine learning model. Both are analogous art because they are directed to enhancing machine learning efficiencies. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Yuan with the teachings of Rawal to include the claimed features, with the motivation to improve machine learning model predictions.
As per claims 2, 9, and 15, the modified teachings of Yuan and Rawal teach the method of claim 1, the system of claim 8, and the computer program product of claim 14, respectively, wherein the contrastive explanation corresponds to a nearest flipped neighbor determined by the nearest flipped neighbor determiner (e.g. Rawal, see paragraphs [0012-0015], which disclose accessing only the set of inputs issued to a machine learning model and the corresponding prediction outputs, which can be readily obtained from the prediction logs associated with the deployed machine learning model rather than by querying the model directly, whereas kernel SHapley Additive exPlanations (SHAP) attributions computed as post-hoc explanations require multiple ‘predict()’ function calls. See further paragraphs [0035-0036], which disclose that the explanation includes an indication of a contrastive explanation.).
As per claims 3, 10, and 16, the modified teachings of Yuan and Rawal teach the method of claim 2, the system of claim 9, and the computer program product of claim 15, respectively, wherein the determining the contrastive explanation includes interpolating a datapoint within the embedding space representing the prediction and a datapoint within the embedding space representing the nearest flipped neighbor (e.g. Rawal, see paragraphs [0019-0021], which disclose that local interpretable model-agnostic explanations (LIME) is a known technique that can sample ‘n’ points in the neighborhood of an explicand and fetch model predictions for the generated samples using model artifacts. The model predictions are then used to train a model to generate explanations, where model predictions are fetched for the nearest neighbor points from the prediction log.).
As per claims 4, 11, and 17, the modified teachings of Yuan and Rawal teach the method of claim 3, the system of claim 10, and the computer program product of claim 16, respectively, wherein the interpolating further includes generating an optimal interpolation parameter using a greedy heuristic (e.g. Yuan, see paragraphs [0055-0059], which disclose features including historic alerts, the type of account, and aggregated transaction amount as illustrative parameters for the decision-making system.).
As per claims 5, 12, and 18, the modified teachings of Yuan and Rawal teach the method of claim 1, the system of claim 8, and the computer program product of claim 14, respectively, wherein the nearest flipped neighbor determiner is configured as a k-d tree that can be searched to identify a nearest neighbor (e.g. Yuan, see paragraphs [0054-0055], which disclose that the machine learning algorithms include tree-based models, feed-forward neural networks, clustering methods, and linear models used for performing explainability of the decision-making system.).
As per claims 6, 13, and 19, the modified teachings of Yuan and Rawal teach the method of claim 1, the system of claim 8, and the computer program product of claim 14, respectively, wherein the machine learning model is a machine learning classifier trained to generate predictions by assigning an input to one of multiple classes (e.g. Yuan, see paragraphs [0056-0057], which disclose that the algorithm traverses the tree-based model and generates the explanation for each prediction given by the decision-making system.).
As per claims 7, 13, and 20, the modified teachings of Yuan and Rawal teach the method of claim 6, the system of claim 8, and the computer program product of claim 19, respectively, wherein the nearest flipped neighbor determiner comprises multiple k-d trees, each of the k-d trees uniquely corresponding to one of the multiple classes (e.g. Yuan, see paragraphs [0054-0055], which disclose that the machine learning algorithms include tree-based models, feed-forward neural networks, clustering methods, and linear models used for performing explainability of the decision-making system.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See the attached PTO-892, which includes additional prior art of record describing the general state of the art to which the invention is directed.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARHAN M SYED whose telephone number is (571)272-7191. The examiner can normally be reached M-F 8:30AM-5:30PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Apu Mofiz can be reached at 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/FARHAN M SYED/Primary Examiner, Art Unit 2161 March 22, 2026