Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. Claims 1-9 are pending in this Office action. This action is responsive to Applicant’s application filed 07/19/2023.

Information Disclosure Statement

3. The references listed in the IDS filed 07/19/2023 have been considered. A copy of the signed or initialed IDS is hereby attached.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims under 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of 35 U.S.C. 103(c) and potential 35 U.S.C. 102(e), (f) or (g) prior art under 35 U.S.C. 103(a).

4. Claims 1-2, 4-7, and 9 are rejected under 35 U.S.C. 103(a) as being unpatentable over Sriram (US Patent Publication No. 2022/0366186 A1, hereinafter “Sriram”) in view of Zoldi et al. (US Patent Publication No.
2023/0080851 A1, hereinafter “Zoldi”).

As to Claim 1, Sriram teaches the claimed limitations:

“A computer system comprising: an arithmetic device” as a computing device in a machine or system such as a vehicle, a robot, a manufacturing machine or a medical scanner can be programmed to detect objects and regions based on image data acquired by a sensor included in the system (paragraph 0010). Training a quantile neural network includes error functions that receive a prediction and uncertainties from the quantile neural network and calculate respective error terms. In this example the error terms can be an arithmetic difference between the predicted location of an object and the measured location of the object from the ground truth (paragraph 0042).

“a storage device, wherein the storage device stores a model configured to output an action predicted based on an action value in response to input data, wherein the arithmetic device is configured to:” as a computer, including a processor and a memory (e.g., storage), the memory including instructions to be executed by the processor to train a quantile neural network to input an image and output a lower quantile (LQ) prediction (e.g., an epistemic uncertainty of the action value lower than a threshold value in claim 8), a median quantile (MQ) prediction and an upper quantile (UQ) prediction corresponding to an object in the image, wherein an LQ loss, an MQ loss and a UQ loss are determined for the LQ prediction, the MQ prediction and the UQ prediction respectively, and wherein the LQ loss, the MQ loss and the UQ loss are combined to form a base layer loss and output the quantile neural network (abstract). Deep neural networks (DNNs) can be programmed or trained to output one or more confidence or uncertainty values (e.g., a risk value) that correspond to a probability that the prediction output by the system is correct.
This confidence can be used by a decision controller to determine whether to reject or use the machine-learning model prediction. Such confidence would help in taking safety actions such as switching off industrial robots or autonomous vehicles enabled by AI systems. For example, a vehicle or mobile robot can be operated (e.g., an action) based on predicted locations of objects in an environment around the vehicle. A robot can be directed to move a gripper to a location based on determining that no objects block the predicted motion of the gripper. Both false positive and false negative predictions regarding pathology detection in a medical scan can have adverse effects (paragraphs 0013-0015).

“acquire data to be explained including values of a plurality of components to be explained in order to explain first prediction processing of the model that outputs a first predicted action in response to first input data” as computing devices included in a robot or a vehicle can be equipped with one or more DNNs to acquire and process image data regarding an environment and to make decisions based on DNN predictions such as object detection. Machine learning systems can be programmed or trained to input image data and output predictions regarding object labels and locations. In these examples outlier data generated due to noise factors can cause prediction errors which can be identified in the output data, and using an erroneous prediction from a DNN can lead to degraded system performance (paragraph 0012). DNNs can be programmed or trained to output one or more confidence or uncertainty values that correspond to a probability that the prediction output by the system is correct. This confidence can be used by a decision controller to determine whether to reject or use the machine-learning model prediction (paragraphs 0013-0015).
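For context, the per-quantile losses summarized above follow the standard quantile ("pinball") loss formulation; the sketch below is purely illustrative, with all names, quantile levels and numeric values hypothetical rather than drawn from the Sriram reference:

```python
def pinball_loss(y_true, y_pred, q):
    """Standard quantile (pinball) loss for quantile level q in (0, 1)."""
    err = y_true - y_pred
    return max(q * err, (q - 1) * err)

# Hypothetical example: a measured object location (ground truth) compared
# against lower-, median- and upper-quantile predictions.
y_true = 10.0
lq_loss = pinball_loss(y_true, 8.0, q=0.1)   # lower-quantile prediction
mq_loss = pinball_loss(y_true, 9.5, q=0.5)   # median-quantile prediction
uq_loss = pinball_loss(y_true, 12.0, q=0.9)  # upper-quantile prediction

# The three losses are combined (here, simply summed) into one base loss.
base_loss = lq_loss + mq_loss + uq_loss
```

The asymmetry of the pinball loss is what pushes the three outputs toward distinct quantiles of the target distribution, so the spread between the LQ and UQ predictions can serve as an uncertainty estimate.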
Sriram does not explicitly teach the claimed limitation “determine contributions of each of the plurality of components to be explained to an action value and an uncertainty of the action value in the first prediction processing; detect one or more risk components in the first prediction processing from the plurality of components to be explained based on the contributions; and present information on the risk components”.

Zoldi teaches computer-implemented machines, systems and methods for providing insights about the uncertainty of a machine learning model. A method includes determining an uncertainty value (e.g., a risk value) associated with a first machine learning model output of a first machine learning model. The method further includes switching, responsive to the uncertainty value satisfying a threshold, from the first machine learning model to a second machine learning model, the second machine learning model generating a second machine learning model output. The method further includes providing, responsive to the switching, the machine learning output, the uncertainty value, the confidence interval, and the second machine learning output to a user interface (abstract). Generating the second machine learning model may be based on the first machine learning model. Generating the second machine learning model may include constructing hidden layers of the second machine learning model where hidden nodes of the hidden layers are a sparse sub-network of hidden nodes approximating the first machine learning model. Generating the second machine learning model may further include generating perturbed variations of the sparse networks of high variance hidden nodes. Generating the second machine learning model may further include removing or prohibiting feature interactions contributing to the high variance hidden nodes.
Generating the second machine learning model may further include iterating and training the second machine learning model based on the removed and prohibited feature interactions to minimize model variance of the second machine learning model. Providing the machine learning output, the uncertainty value, the confidence interval, and the second machine learning output comprises transmitting them to a display of the user interface (paragraphs 0008, 0010; claim 1). Uncertainty measures may provide a practitioner evidence of the model's confidence in a given result; high uncertainty measures may indicate that the model result should not be used in subsequent decisions. In many applications of machine learning the outcome of a decision can have significant asymmetric real-world consequences. For autonomous vehicles, a machine learning outcome may determine that there is no obstacle on the road; however, if that outcome has a high uncertainty measurement, it may be prudent to slow down or stop the vehicle until the ML model makes a more confident determination. Additionally, machine learning outcomes may have an impact on personal and/or national security; for example, biometrics may be used to authorize a user for a personal device or a military operation. In such circumstances, machine learning determinations may call for a high level of certainty for authorization (paragraphs 0034-0035, 0054).
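The uncertainty-threshold switching described in Zoldi's abstract can be illustrated by the following minimal sketch; the function names, threshold value and fallback model are hypothetical assumptions for illustration only, not taken from the reference:

```python
def select_output(primary_pred, primary_uncertainty, fallback_model,
                  features, threshold=0.3):
    """Return the primary model's prediction unless its uncertainty
    exceeds the threshold, in which case switch to the fallback model."""
    if primary_uncertainty > threshold:
        # Uncertainty too high: generate the output from the second model.
        return fallback_model(features), "fallback"
    return primary_pred, "primary"

# Hypothetical second (fallback) model and inputs.
fallback = lambda x: sum(x) / len(x)
pred, source = select_output(0.92, 0.45, fallback, [0.2, 0.4], threshold=0.3)
```

In this sketch the primary prediction (0.92) is discarded because its uncertainty (0.45) exceeds the threshold (0.3), so the fallback model's output is used instead.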
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Sriram and Zoldi before him/her, to modify Sriram to determine contributions of each of the plurality of components to be explained to an action value, because doing so would provide predictive advantages that enhance the functionality of a system or a computing model when complex relationships or constraints are at play, as taught by Zoldi (paragraph 0003).

As to Claim 2, Sriram teaches the claimed limitation “wherein the data to be explained is the first predicted action” (paragraphs 0013-0014). Zoldi also teaches this limitation (paragraphs 0035, 0086).

As to Claim 4, Sriram teaches the claimed limitation “wherein the arithmetic device is configured to search for and present a revised action of the first predicted action that improves the action value and the uncertainty of the action value, the revised action being obtained by altering the values of one or more of the components in the first predicted action” (paragraphs 0012-0018, 0034-0040, 0042-0047, 0053-0055, 0076, 0084, 0090). Zoldi also teaches this limitation (paragraphs 0002, 0008, 0036-0037, 0042, 0044-0046, 0065).

As to Claim 5, Sriram teaches the claimed limitation “wherein the one or more of the components are the risk components” (paragraphs 0012-0015). Zoldi also teaches this limitation (paragraphs 0008, 0010, 0034-0035, 0054).

As to Claim 6, Sriram teaches the claimed limitation “wherein the one or more of the components are components designated by a user” (paragraphs 0015, 0020, 0022, 0037). Zoldi also teaches this limitation (paragraph 0034).

As to Claim 7, Sriram teaches the claimed limitation “wherein the arithmetic device is configured to: evaluate the first predicted action and the revised action through a simulation; and present a result of the evaluation” (paragraph 0018). Zoldi also teaches this limitation (paragraphs 0026, 0061, 0082-0083, 0086, 0094-0095).
As to Claim 9, the limitations therein have substantially the same scope as those of claim 1. In addition, Sriram teaches a method including training a quantile neural network to input an image and output a lower quantile (LQ) prediction, a median quantile (MQ) prediction and an upper quantile (UQ) prediction corresponding to an object in the image (paragraph 0019). Therefore, this claim is rejected for at least the same reasons as claim 1.

5. Claim 3 is rejected under 35 U.S.C. 103(a) as being unpatentable over Sriram (US Patent Publication No. 2022/0366186 A1) as applied to claim 1 above, and further in view of Zoldi et al. (US Patent Publication No. 2023/0080851 A1) and Buda et al. (US Patent Publication No. 2021/0407686 A1, hereinafter “Buda”).

As to Claim 3, Sriram does not explicitly teach the claimed limitation “wherein the arithmetic device is configured to detect one or more components to be explained whose contributions worsen both the action value and the uncertainty of the action value as the risk components”. Buda teaches a preventative healthcare system that calibrates a risk model by assigning weights to attributes for the freshness, completeness and uncertainty of a user's medical information. A risk predictive model is implemented based on the medical information. The risk of a specific health outcome of the user is determined using the risk predictive model, which is calibrated by computing attribute scores for freshness, completeness and uncertainty of the medical information and by assigning weights to the attribute scores (abstract; paragraphs 0015-0016, 0018, 0034, 0041, 0050, 0059, 0072, 0081-0084, 0097). Zoldi also teaches this limitation (paragraphs 0008, 0010, 0034-0035, 0054).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Sriram, Zoldi and Buda before him/her, to modify Sriram to determine contributions of each of the plurality of components to be explained to an action value, because doing so would provide predictive advantages that enhance the functionality of a system or a computing model when complex relationships or constraints are at play, as taught by Zoldi (paragraph 0003). Further, detecting components whose contributions worsen both the action value and the uncertainty of the action value as the risk components would provide early symptom detection and preventative healthcare by balancing between data requirements defined as completeness, freshness and uncertainty, such that for a specific healthcare outcome and for a specific patient it provides an accurate continuous risk prediction, as taught by Buda (paragraph 0002).

6. Claim 8 is rejected under 35 U.S.C. 103(a) as being unpatentable over Sriram (US Patent Publication No. 2022/0366186 A1) as applied to claim 1 above, and further in view of Zoldi et al. (US Patent Publication No. 2023/0080851 A1) and Rastogi (US Patent Publication No. 2017/0351966 A1, hereinafter “Rastogi”).

As to Claim 8, Sriram does not explicitly teach the claimed limitation “wherein the uncertainty of the action value is an aleatoric uncertainty of the action value, and wherein the arithmetic device is configured to search for a revised action from actions exhibiting an epistemic uncertainty of the action value lower than a threshold value”. Rastogi teaches this limitation (paragraphs 0027, 0035).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Sriram, Zoldi and Rastogi before him/her, to modify Sriram to determine contributions of each of the plurality of components to be explained to an action value, because doing so would provide predictive advantages that enhance the functionality of a system or a computing model when complex relationships or constraints are at play, as taught by Zoldi (paragraph 0003). Further, using an aleatoric uncertainty of the action value and configuring the arithmetic device to search for a revised action from actions exhibiting an epistemic uncertainty would be advantageous for determining the remaining usage life of a structural component exhibiting a defect, determining one or more confidence limit values associated with the remaining usage life value, and providing the remaining usage life value and the one or more confidence limit values, as taught by Rastogi (paragraphs 0004-0005).

Examiner’s Note

The Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the Examiner. In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution.
MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed not to comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to James Hwa, whose telephone number is 571-270-1285 and whose email address is james.hwa@uspto.gov. The examiner can normally be reached 9:00 am - 5:30 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia, can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

02/25/2026
/SHYUE JIUNN HWA/
Primary Examiner, Art Unit 2156