Prosecution Insights
Last updated: April 19, 2026
Application No. 18/417,447

MANAGING DATA PROCESSING SYSTEM FAILURES USING VISUALIZATIONS OF HIDDEN KNOWLEDGE FROM PREDICTIVE MODELS

Non-Final OA §103
Filed
Jan 19, 2024
Examiner
PATEL, JIGAR P
Art Unit
2114
Tech Center
2100 — Computer Architecture & Software
Assignee
DELL PRODUCTS, L.P.
OA Round
3 (Non-Final)
80%
Grant Probability
Favorable
3-4
OA Rounds
3y 1m
To Grant
97%
With Interview

Examiner Intelligence

Grants 80% — above average
80%
Career Allow Rate
460 granted / 575 resolved
+25.0% vs TC avg
Strong +17% interview lift
+16.9%
Interview Lift
resolved cases with interview
Typical timeline
3y 1m
Avg Prosecution
26 currently pending
Career history
601
Total Applications
across all art units
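The headline numbers above are simple arithmetic on the examiner's career counts. A minimal sketch (function names are our own, not part of the tool) showing how the 80% allow rate and the 97% with-interview figure can be reproduced from 460 grants out of 575 resolved cases plus the +16.9% interview lift:

```python
# Hypothetical helpers reconstructing the dashboard's headline stats.
# Assumption: the "with interview" figure is the base rate plus the
# interview lift, capped at 100%.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def with_interview(base_rate: float, lift: float, cap: float = 100.0) -> float:
    """Grant probability after applying the examiner's interview lift."""
    return min(base_rate + lift, cap)

base = allow_rate(460, 575)                  # 460 granted / 575 resolved
print(round(base))                           # → 80
print(round(with_interview(base, 16.9)))     # → 97
```

This matches the 80% career allow rate and the 97% "With Interview" projection shown elsewhere on the page.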

Statute-Specific Performance

§101
8.8%
-31.2% vs TC avg
§103
62.9%
+22.9% vs TC avg
§102
13.6%
-26.4% vs TC avg
§112
5.2%
-34.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 575 resolved cases
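The "vs TC avg" deltas above are each consistent with a single Tech Center baseline of 40.0% (e.g. 8.8 − 40.0 = −31.2). A small sketch under that assumption; the 40.0% baseline is inferred from the published deltas, not stated by the tool:

```python
# Assumed: every "vs TC avg" delta equals (examiner rate - TC baseline),
# with a common baseline of 40.0% inferred from the four published deltas.
TC_BASELINE = 40.0

rates = {"101": 8.8, "103": 62.9, "102": 13.6, "112": 5.2}
deltas = {statute: round(rate - TC_BASELINE, 1) for statute, rate in rates.items()}
print(deltas)  # {'101': -31.2, '103': 22.9, '102': -26.4, '112': -34.8}
```

Each computed delta reproduces the figure shown in the Statute-Specific Performance panel.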

Office Action

§103
DETAILED ACTION

This communication is responsive to the application filed January 13, 2026. Claims 1-3, 5-8, 11-15, 17-19, 21-24, and 26 are pending in this application. The applicant has canceled claims 4, 9, 10, 16, 20, and 25, and has added new claim 26.

The present application was filed on January 19, 2024, which is on or after March 16, 2013, and thus is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-3, 5-8, 11-15, 17-19, 21-24, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 2024/0168835 A1) in view of Lo et al. (US 12,061,970 B1), further in view of Watts (US 2021/0042590 A1), and further in view of Li et al. (US 2019/0244122 A1).

As per claim 1: A method for managing failures of data processing systems, comprising, by a data processing system manager configured to manage the data processing systems: extracting one or more structured knowledge attributes; and providing the structured knowledge visualization diagram to the user.
Wang discloses [0008-0014] optimizing a decision tree-based hard disk failure prediction model by using the hard disk failure prediction key database to obtain a model, and acquiring SMART attribute values to predict whether a target hard disk is normal, in poor condition, or about to fail. Wang discloses obtaining data stored in a knowledge repository and obtaining a data request using the knowledge repository and user preference [Wang; 0063], but fails to explicitly disclose servicing the data request based on user data and the knowledge repository.

Lo discloses a similar method, which further teaches [col. 1, lines 29-45] a user profile associated with a user, wherein the user profile includes user persona attributes and user parameters, and generating a response based on the user profile. Lo further discloses [col. 11, lines 24-45] that, based on the user and the conversation, the model may load the data sources and/or agents needed to fulfill the response. Lo further discloses [col. 10, lines 57-67] that the context engine may review the conversation occurring via the GUI and determine how much conversation history is required to maintain the conversation. The context engine performs this dynamically and with the injection of context data to enable seamless conversation and user feedback in response to previous messages.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Wang with those of Lo. One would have been motivated to service the data request based on the user and the knowledge repository because it allows loading of the data sources and/or agents needed to fulfill the response [Lo; col. 11, lines 24-45].
generating a structured knowledge visualization diagram using the one or more structured knowledge attributes: Wang and Lo disclose structured knowledge attributes that can predict a failure for a data processing system, but fail to explicitly disclose generating a visualization diagram that interprets the failure prediction. Watts discloses a similar method, which further teaches [0071-0072] generating visualization diagrams that use time series hard drive statistics data to predict hard drive failure, the time series classification differentiated in the diagrams to be ordered. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Wang and Lo with those of Watts. One would have been motivated to generate visualization diagrams to predict failure because it makes the model classification decision clearer [Watts; 0072].

wherein the one or more structured knowledge attributes are extracted from an internal architecture of a trained machine learning model that is inaccessible to a user of the data processing systems, and storing the one or more structured knowledge attributes extracted from the trained machine learning model into a structured knowledge repository, the internal architecture of the trained machine learning model controlling how the trained machine learning model generates inferences; and providing visibility for the user of the data processing system into understanding how and why a failure prediction generated by the trained machine learning model for a data processing system of the data processing systems was generated, the failure prediction being one of the inferences: Wang, Lo, and Watts disclose knowledge attributes, a knowledge repository, and providing a visualization diagram, but fail to explicitly disclose an internal architecture that generates inferences and providing visibility into a failure prediction.
Li discloses a similar method, which further teaches [Fig. 2; 0019, 0037-0053] an explainable AI system that generates knowledge models comprising ontologies and inferencing rules for generating explanations for decisions made by the AI system: a knowledge model constructor creates ontologies from data and constructs inferencing rules that form the basis of the knowledge model used for explainable AI, the basis of the explanation that the system provides to the user for a decision. The system extracts relations and inferencing rules from curated data, which represent internal reasoning structures of the AI model, and an explainer component processes the deconstructed problem and uses the internal reasoning structures derived from the trained model's architecture to provide explanations. Li further discloses [Fig. 10; 0138-0143] that the system stores the extracted inferencing rules and relations (representing the internal architecture) and uses them to generate machine reasoning and explanations that accompany the hypothesis/decision.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Wang, Lo, and Watts with those of Li. One would have been motivated to have the internal architecture control how the learning model generates inferences because doing so enables the system to provide explainable AI, allowing users to understand the reasoning process [Li; 0140-0142].

As per claim 2: The method of claim 1, wherein the structured knowledge visualization diagram is a bar graph comprising one or more bars illustrating one or more events attributed to the failure prediction, an x-axis illustrating an event timeline for the one or more events, a y-axis illustrating an importance of the one or more events, and each of the one or more bars comprises a width representing an event duration of respective ones of the one or more events.
Watts discloses [0071-0072] generating visualization diagrams that use time series hard drive statistics data to predict hard drive failure, the time series classification differentiated in the diagrams to be ordered.

As per claim 3: The method of claim 2, wherein the importance is based on an attribution score of each of the one or more events attributed to the failure prediction, the attribution score of each of the one or more events being obtained by the trained machine learning model. Watts discloses [0071-0072] generating visualization diagrams that use time series hard drive statistics data to predict hard drive failure, the time series classification differentiated in the diagrams to be ordered based on importance.

As per claim 5: The method of claim 2, wherein generating the structured knowledge visualization diagram further comprises: filtering the one or more structured knowledge attributes based on one or more filter parameters to obtain one or more filtered structured knowledge attributes. Wang discloses [0050-0051] that SMART attribute values are filtered using a Relief algorithm to obtain the filtered SMART attribute values to improve the hard disk failure prediction model. Wherein the structured knowledge visualization diagram is generated based on the one or more filtered structured knowledge attributes: Watts discloses [0071-0072] generating visualization diagrams that use time series hard drive statistics data to predict hard drive failure, the time series classification differentiated in the diagrams to be ordered.
As per claim 6: The method of claim 5, wherein the data request further comprises a service request that comprises failure information associated with an indication of the failures, and generating the structured knowledge visualization diagram further comprises: applying the one or more filter parameters on the failure information to obtain filtered failure information. Wang discloses [0050-0051] that SMART attribute values are filtered using a Relief algorithm to obtain the filtered SMART attribute values to improve the hard disk failure prediction model.

Wherein the structured knowledge visualization diagram is generated based on the one or more filtered structured knowledge attributes and the filtered failure information: Watts discloses [0071-0072] generating visualization diagrams that use time series hard drive statistics data to predict hard drive failure, the time series classification differentiated in the diagrams to be ordered.

Wherein the structured knowledge visualization diagram generated using the one or more filtered structured knowledge attributes and the filtered failure information is generated by overlaying the filtered failure information on the bar graph making up the structured knowledge visualization diagram as a reflection of the one or more filtered structured knowledge attributes: Wang discloses filtering the one or more structured knowledge attributes and the failure information, but fails to explicitly disclose overlaying the attributes on a visualization diagram. Watts discloses [0071-0072] generating a visualization diagram to make a graph of the filtered information. It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Wang with those of Watts. One would have been motivated to generate visualization diagrams to predict failure because it makes the model classification decision clearer [Watts; 0072].
As per claim 7: The method of claim 6, wherein the reflection comprises one or more bars illustrating one or more events recorded in the failure information, wherein the one or more bars illustrating one or more events attributed to the failure prediction are illustrated on a positive portion of the y-axis of the structured knowledge visualization diagram, and wherein the one or more bars illustrating one or more events recorded in the failure information are illustrated on a negative portion of the y-axis of the structured knowledge visualization diagram. Watts discloses [0071-0072] generating visualization diagrams that use time series hard drive statistics data to predict hard drive failure, the time series classification differentiated in the diagrams to be ordered. It is clear that the visualization can be illustrated in different ways, as shown by the teachings of Watts.

As per claim 8: The method of claim 5, wherein the one or more filter parameters comprise one or more hardware components associated with the indication of the failures of the data processing system, one or more conditions that comprise a minimum attribution score threshold, and a predetermined value denoting a top N number of events to be selected as the one or more events attributed to the failure prediction. Wang discloses [0008-0014] optimizing a decision tree-based hard disk failure prediction model by using the hard disk failure prediction key database to obtain a model and acquiring SMART attribute values to predict whether a target hard disk is normal, in poor condition, or about to fail.
As per claim 11: The method of claim 1, further comprising: prior to extracting the one or more structured knowledge attributes: identifying an occurrence of an indication of a failure of a data processing system among the data processing systems; and based on the occurrence, using an inference model to obtain an indication of a root cause for the failure, the structured knowledge repository being based, at least in part, on the inference model and logs on which the inference model is based, the inference model being the trained machine learning model that generates the failure prediction, the failure prediction being generated in view of the failure, and the indication of the root cause being specified in the failure prediction.

Li discloses [Fig. 2; 0019, 0037-0053] an explainable AI system that generates knowledge models comprising ontologies and inferencing rules for generating explanations for decisions made by the AI system: a knowledge model constructor creates ontologies from data and constructs inferencing rules that form the basis of the knowledge model used for explainable AI, the basis of the explanation that the system provides to the user for a decision. The system extracts relations and inferencing rules from curated data, which represent internal reasoning structures of the AI model, and an explainer component processes the deconstructed problem and uses the internal reasoning structures derived from the trained model's architecture to provide explanations. Li further discloses [Fig. 10; 0138-0143] that the system stores the extracted inferencing rules and relations (representing the internal architecture) and uses them to generate machine reasoning and explanations that accompany the hypothesis/decision.
As per claim 12: The method of claim 11, further comprising: after providing the structured knowledge visualization diagram: assessing a likelihood of the root cause being accurate using the structured knowledge visualization diagram; and in an instance of the assessing where the likelihood meets a threshold: identifying at least one remediation action based on the root cause; and performing the at least one remediation action to obtain an updated data processing system to attempt to remediate the failure.

Wang discloses [0058] that it is necessary to divide the SMART attribute values by weighting according to the probability level of failure occurrence and the magnitude of damage caused by anomalous attribute values, so that the SMART attributes with a high weight are mainly predicted in the decision tree-based hard disk failure prediction model, thereby improving the failure prediction accuracy. Furthermore, Li discloses [Fig. 2; 0019, 0037-0053] an explainable AI system that generates knowledge models comprising ontologies and inferencing rules for generating explanations for decisions made by the AI system: a knowledge model constructor creates ontologies from data and constructs inferencing rules that form the basis of the knowledge model used for explainable AI, the basis of the explanation that the system provides to the user for a decision. The system extracts relations and inferencing rules from curated data.

As per claims 13-15: Although claims 13-15 are directed towards medium claims, they are rejected under the same rationale as method claims 1-3, 5-8, 11, and 12 above.

As per claims 17-19: Although claims 17-19 are directed towards system claims, they are rejected under the same rationale as method claims 1-3, 5-8, 11, and 12 above.

As per claim 21: The method of claim 1, wherein the one or more structured knowledge attributes comprise parameters on which the trained machine learning model is trained to generate the inferences.
Li discloses [0019-0023] knowledge models with ontologies and inferencing rules that encode the logic by which the AI system generates decisions and explanations. These ontologies and rules function as parameters on which the AI system is trained and by which it generates inferences.

As per claim 22: The method of claim 21, wherein the one or more structured knowledge attributes further comprise relationships between components of the trained machine learning model, the components comprising at least input features of data ingested into the trained machine learning model, the inferences generated by the trained machine learning model, and one or more rules followed by the trained machine learning model to generate the inferences. Li discloses [0023-0053, 0125-0129] the input features (feature vectors, data attributes), the inferences (labels, answers, hypotheses), and the rules and reasoning paths (inferencing rules, chaining paths) linking them, which together instantiate relationships between components of the trained machine learning model.

As per claim 23: The method of claim 1, wherein the structured knowledge visualization diagram provides visibility into a trustworthiness of the failure prediction generated by the trained machine learning model. Watts discloses [0071-0072] generating visualization diagrams that use time series hard drive statistics data to predict hard drive failure, the time series classification differentiated in the diagrams to be ordered based on importance.

As per claim 24: The method of claim 23, wherein the structured knowledge visualization diagram further provides visibility into errors made by the trained machine learning model in generating the failure prediction. Li discloses [0020, 0130, 0139-0143] that the system allows users to identify errors or flaws in reasoning, pinpoint the exact step where the error occurred, and propose fixes.
The visual explanations (knowledge graphs and reasoning paths) show the steps taken, so the user can inspect which rule/fact combination led to an incorrect outcome.

As per claim 26: The method of claim 1, wherein the structured knowledge attributes are extracted from the internal architecture of the trained machine learning model using explainable AI techniques. Li explicitly discloses [Abstract; 0005-0021] an explainable AI system: it constructs data-driven ontologies and inferencing rules, representing the internal reasoning structures of the AI system, from real-world data, and uses these as the basis for explanations. The knowledge model (ontologies/rules) and the provenance of reasoning graphs function as structured knowledge extracted from the internal reasoning framework of the AI system.

Response to Arguments

Applicant's arguments with respect to claims 1, 13, 17, and 26 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant's art and those arts considered reasonably pertinent to applicant's disclosure. See MPEP 707.05(c).

· US 2022/0171991 A1 – Das discloses capturing feature attribution and bias metrics from models in an ML pipeline, generating views for them, and presenting such metrics as visual views, which include feature-importance plots and attribution information that explain model predictions.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIGAR P PATEL, whose telephone number is (571) 270-5067. The examiner can normally be reached Monday to Friday, 10 AM-6 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ashish Thomas, can be reached at 571-272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JIGAR P PATEL/
Primary Examiner, Art Unit 2114

Prosecution Timeline

Jan 19, 2024
Application Filed
Mar 22, 2025
Non-Final Rejection — §103
Jun 24, 2025
Response Filed
Oct 15, 2025
Final Rejection — §103
Jan 13, 2026
Request for Continued Examination
Jan 24, 2026
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §103
Apr 16, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602273
SAFETY MONITORING OF A SYSTEM-ON-A-CHIP
2y 5m to grant Granted Apr 14, 2026
Patent 12602298
Self-Repairable Chip For Silent Data Corruption Issues
2y 5m to grant Granted Apr 14, 2026
Patent 12591480
AUTOMATIC IDENTIFICATION OF ROOT CAUSE AND MITIGATION STEPS FOR INCIDENTS GENERATED IN AN INCIDENT MANAGEMENT SYSTEM
2y 5m to grant Granted Mar 31, 2026
Patent 12585570
Microchip with on-chip debug and trace engine
2y 5m to grant Granted Mar 24, 2026
Patent 12579017
APPARATUS AND METHODS FOR SECURING INTEGRITY AND DATA ENCRYPTION LINK SESSIONS WITHIN DIE INTERCONNECT ARCHITECTURES
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
80%
Grant Probability
97%
With Interview (+16.9%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 575 resolved cases by this examiner. Grant probability derived from career allow rate.
