DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 5-6, 8, 10, 12, 15-16, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. [US 2021/0374132 A1] in view of Friedman et al. [US 2022/0165007 A1].

Claim 1 is rejected over Yang and Friedman.
Yang teaches “A system for using explainability vectors to rank user interface elements, the system comprising:” as “The system provides the recommendation for one or more items and corresponding explanation narratives based on ranking predicted scores and explainability scores for the items.” [Abstract];

“receiving training data for a predictive machine learning model that outputs resource availability scores, wherein the training data comprises values for a first set of features, wherein the first set of features comprises variables that influence resource availability;” as “recommendation explainability scores or metrics can provide explanations to users for why certain items are recommended.” [¶0016];

“training the predictive machine learning model based on the training data;” as “such processors may accelerate various computing tasks associated with evaluating neural network models (e.g., training, prediction, preprocessing, and/or the like) by an order of magnitude or more in comparison to a general-purpose CPU.” [¶0025];

“processing the predictive machine learning model to extract an explainability vector, wherein each entry in the explainability vector corresponds to a feature in the first set of features and is indicative of a correlation between the feature and output of the predictive machine learning model;” as “relevance model 305 may be trained to learn the probability P(y|x) of an action label y given the input features x and generate a predicted score 320.” [¶0041];

“processing the second set of features and the output of the predictive machine learning model to generate an explanative factor;” as “The system obtains first features of at least one user and second features of a set of items. The system provides the first features and the second features to a first machine learning network for determining a predicted score for an item.” [Abstract];

“training the ranking machine learning model, wherein the ranking machine learning model takes the third set of features and the explanative factor as input; and” as “Recommendation module 130 may then rank the combined score from the highest score to the lowest score and select an item that corresponds to the highest score or item(s) that corresponds to the top k scores as the recommended item(s).” [¶0075].

Yang does not explicitly teach: based on the explainability vector, processing the first set of features to generate a second set of features such that each feature in the second set of features has a correlation with the output of the predictive machine learning model that is above a correlation threshold; determining to train a ranking machine learning model which uses a third set of features as input, wherein the third set of features contains variables affecting resource availability; receiving, as output from the ranking machine learning model, a vector indicating display positions and rankings of one or more user interface elements for a software application.

However, Friedman teaches “based on the explainability vector, processing the first set of features to generate a second set of features such that each feature in the second set of features has a correlation with the output of the predictive machine learning model that is above a correlation threshold;” as “The computing machine computes, for each first node from among a plurality of first nodes that are intermediate nodes or end nodes, a provenance value representing dependency of an explainability value vector of the first node on the one or more nodes upstream from the first node. The computing machine computes, for each first node, the explainability value vector. The computing machine provides a graphical output representing at least an explainability value vector of an end node.” [Abstract];

“determining to train a ranking machine learning model which uses a third set of features as input, wherein the third set of features contains variables affecting resource availability;” as “the terms ‘first,’ ‘second,’ and ‘third,’ etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.” [¶0216];

“receiving, as output from the ranking machine learning model, a vector indicating display positions and rankings of one or more user interface elements for a software application.” as “using a rover's GPS sensor to measure its position assumes that the GPS sensor is on the rover. This assumption affects the integrity of all downstream beliefs and planned actions that rely directly or indirectly on positional data.” [¶0170]

Yang and Friedman are analogous arts because they both teach machine learning and adaptive learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yang and Friedman before him/her, to modify the teachings of Yang to include the teachings of Friedman, with the motivation that the provenance-based approach “treats the plan as a tripartite dependency graph that helps explain the foundations, reliability, impact, and sensitivity of the information that comprises the plan's states and actions.” [Friedman, ¶0135]

Claim 2 is rejected over Yang and Friedman under the same rationale of rejection of claim 1.

Claim 5 is rejected over Yang and Friedman.
Yang teaches “wherein processing the second set of features and the output of the predictive machine learning model to generate an explanative factor comprises: generating an encoding map which translates the first set of features to the second set of features;” as “support vector machines, encoders, decoders, auto-encoders, stacked auto-encoders, perceptrons, multi-layer perceptrons, artificial neural networks” [¶0030].

Yang does not explicitly teach: using the output of the predictive machine learning model and the explainability vector, generating an embedding vector; and based on the encoding map and the embedding vector, generating the explanative factor.

However, Friedman teaches “using the output of the predictive machine learning model and the explainability vector, generating an embedding vector; and” as “each beginning node in at least a subset of the beginning nodes having an explainability value vector.” [¶0007];

“based on the encoding map and the embedding vector, generating the explanative factor.” as “The activity record can include an association with the received datum and any input datums used by the agent to generate the received datum.” [¶0052]

Claim 6 is rejected over Yang and Friedman.

Yang teaches “using the vector indicating rankings of one or more user interface elements, determining a display order of the one or more user interface elements; and based on the display order of the one or more user interface elements, causing to be displayed on a user interface on a user device the one or more user interface elements.” as “The data may also be segmented into training and test datasets based on user rating history in a leave-one-out way. For example, for each user, the movies the user rated may be sorted by the timestamp in ascending order.” [¶0078]

Claim 8 is rejected over Yang and Friedman.
Yang teaches “the predictive machine learning model is defined by a set of parameters comprising a matrix of weights for a supervised classifier algorithm; and” as “Further a machine learning process may comprise a trained algorithm that is trained through supervised learning (e.g., various parameters are determined as weights or scaling factors).” [¶0030];

“the explainability vector is extracted from the set of parameters using a Local Interpretable Model-agnostic Explanations method.” as “Note that the LIME method is a model-agnostic method for generating explanations, requiring training of a local linear model for each user and item pair.” [¶0093]

Claim 10 is rejected over Yang and Friedman.

Yang teaches “the predictive machine learning model is defined by a set of parameters comprising a matrix of weights for a convolutional neural network algorithm; and” as “the module may be implemented on one or more neural networks, such as one or more supervised and/or unsupervised neural networks, convolutional neural networks, and/or memory-augmented neural networks, among others.” [¶0023];

“the explainability vector is extracted from the set of parameters using a Gradient Class Activation Mapping method.” as “The machine learning process may comprise one or more of regression analysis, regularization, classification, dimensionality reduction, ensemble learning, meta learning, association rule learning, cluster analysis, anomaly detection, deep learning, or ultra-deep learning. The machine learning process may comprise, but is not limited to: k-means, k-means clustering, k-nearest neighbors, learning vector quantization, linear regression, non-linear regression, least squares regression, partial least squares regression, logistic regression, stepwise regression, multivariate adaptive regression splines, ridge regression, principle component regression, least absolute shrinkage and selection operation, least angle regression, canonical correlation analysis, factor analysis, independent component analysis, linear discriminant analysis, multidimensional scaling, non-negative matrix factorization, principal components analysis, principal coordinates analysis, projection pursuit, Sammon mapping, t-distributed stochastic neighbor embedding, AdaBoosting, boosting, gradient boosting” [¶0030]

Claim 12 is rejected over Yang and Friedman under the same rationale of rejection of claim 1.

Claim 15 is rejected over Yang and Friedman under the same rationale of rejection of claim 5.

Claim 16 is rejected over Yang and Friedman under the same rationale of rejection of claim 6.

Claim 18 is rejected over Yang and Friedman under the same rationale of rejection of claim 8.

Claim 20 is rejected over Yang and Friedman under the same rationale of rejection of claim 10.

Claims 7, 9, 17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. [US 2021/0374132 A1] in view of Friedman et al. [US 2022/0165007 A1] and further in view of Shah [US 2025/0029105].

Claim 7 is rejected over Yang, Friedman and Shah.
Yang teaches “the predictive machine learning model is defined by a set of parameters comprising a matrix of weights for a multivariate regression algorithm; and” as “The machine learning process may comprise one or more of regression analysis, regularization, classification, dimensionality reduction, ensemble learning, meta learning, association rule learning, cluster analysis, anomaly detection, deep learning, or ultra-deep learning.” [¶0030]

The combination of Yang and Friedman does not explicitly teach the explainability vector is extracted from the set of parameters using a Shapley Additive Explanation method.

However, Shah teaches “the explainability vector is extracted from the set of parameters using a Shapley Additive Explanation method.” as “the fraud detection system 102 can determine a Shapley Additive Explanations (SHAP) value for each feature.” [¶0054]

Yang, Friedman and Shah are analogous arts because they all teach machine learning and adaptive learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yang, Friedman and Shah before him/her, to modify the teachings of the combination of Yang and Friedman to include the teachings of Shah, with the motivation that the fraud detection system “identifies features associated with the network transaction and utilizes a trained card-not-present machine learning model to generate accurate fraud predictions in real time.” [Shah, ¶0022]

Claim 9 is rejected over Yang, Friedman and Shah.

The combination of Yang and Friedman does not explicitly teach the predictive machine learning model is defined by a set of parameters comprising a vector of coefficients for a generalized additive model; and the explainability vector is extracted from the vector of coefficients in the generalized additive model.
However, Shah teaches “the predictive machine learning model is defined by a set of parameters comprising a vector of coefficients for a generalized additive model; and” as “the fraud detection system 102 can determine a Shapley Additive Explanations (SHAP) value for each feature.” [¶0054];

“the explainability vector is extracted from the vector of coefficients in the generalized additive model.” as “the card-not-present machine learning model 302 is a different type of machine learning model, such as a neural network, a support vector machine, or a random forest.” [¶0058]

Yang, Friedman and Shah are analogous arts because they all teach machine learning and adaptive learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Yang, Friedman and Shah before him/her, to modify the teachings of the combination of Yang and Friedman to include the teachings of Shah, with the motivation that the fraud detection system “identifies features associated with the network transaction and utilizes a trained card-not-present machine learning model to generate accurate fraud predictions in real time.” [Shah, ¶0022]

Claim 17 is rejected over Yang, Friedman and Shah under the same rationale of rejection of claim 7.

Claim 19 is rejected over Yang, Friedman and Shah under the same rationale of rejection of claim 9.

Allowable Subject Matter

Claims 3-4, 11 and 13-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MASUD K KHAN whose telephone number is (571) 270-0606. The examiner can normally be reached Monday-Friday (8am-5pm).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hosain Alam, can be reached at (571) 272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MASUD K KHAN/
Primary Examiner, Art Unit 2132