Prosecution Insights
Last updated: April 19, 2026
Application No. 18/427,290

METHOD AND SYSTEM FOR PREDICTING SERVER HARDWARE AND SERVER HARDWARE COMPONENT FAILURES

Non-Final OA (§101, §102)
Filed: Jan 30, 2024
Examiner: EHNE, CHARLES
Art Unit: 2113
Tech Center: 2100 — Computer Architecture & Software
Assignee: JPMorgan Chase Bank, N.A.
OA Round: 1 (Non-Final)
Grant Probability: 92% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 92% (758 granted / 822 resolved), above average, +37.2% vs TC avg
Interview Lift: +8.6% (moderate) across resolved cases with interview
Typical Timeline: 2y 4m avg prosecution, 15 currently pending
Career History: 837 total applications across all art units

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 10.1% (-29.9% vs TC avg)
§102: 57.4% (+17.4% vs TC avg)
§112: 5.0% (-35.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 822 resolved cases
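The "Tech Center average estimate" black line can be back-computed from the figures shown above, since each delta is expressed in percentage points against that average. A minimal sketch (arithmetic on this page's numbers only, not data from any external source):

```python
# Statute-specific rates shown above: (examiner rate %, delta vs TC avg in points)
stats = {
    "101": (14.0, -26.0),
    "103": (10.1, -29.9),
    "102": (57.4, +17.4),
    "112": (5.0, -35.0),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # subtracting the signed delta recovers the TC average
    print(f"Section {statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```

Every row implies the same estimate, 40.0%, which is consistent with a single black line being drawn across all four statute bars.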

Office Action

Rejections: §101, §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite obtaining performance metrics, generating component failure probabilities, determining that component failure probabilities exceed a threshold, and determining and initiating a remedial action. These limitations describe collecting data, analyzing data, and evaluating data, which fall under the mental process grouping. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. This judicial exception is not integrated into a practical application because the additional elements, including a processor, memory, network interface, and ML model, are described at a high level of generality and perform their well-known and typical functions of receiving, storing, and processing data. The step of executing a remedial action applies to the result of the abstract analysis and represents insignificant post-solution activity that does not integrate the abstract idea into a practical application. The claims do not recite a specific improvement to the computer or technology. Instead, the additional elements merely implement the abstract idea using generic computer components to perform the data analysis.
Therefore, the claims do not integrate the abstract idea into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the elements perform well-understood, routine, and conventional functions such as receiving data, analyzing data, and generating outputs. Claims 2-10, 12-17, 19, and 20 are rejected for the same reasons as their respective independent claims, as the additional limitations merely further describe the abstract idea using well-understood, routine, and conventional data processing techniques and do not amount to significantly more.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gadepalli (US 11,113,144).
As to claim 1, Gadepalli discloses a method for predicting system component failures within a computer system, the method comprising: obtaining a first set of performance metrics by monitoring a network interface of the computer system (column 4, lines 54-56 & columns 6-7, lines 64-12; the telemetry data repository may comprise automatic recording and transmission of data received from remote sources and is used for monitoring and analysis of the data, therefore the network interface is met by the reference's disclosure of receiving data from distributed systems); generating a first set of component failure probabilities by processing the first set of performance metrics (column 8, lines 14-33); determining that at least a first component failure probability from among the first set of component failure probabilities exceeds at least one risk threshold (column 13, lines 44-66); determining a first set of remedial actions that mitigate at least a first component failure probability (column 9, lines 7-22); and mitigating at least the first component failure probability by initiating an execution of the first set of remedial actions (column 6, lines 21-25).

As to claim 2, Gadepalli discloses the method of claim 1, wherein the obtaining comprises: periodically obtaining each of a plurality of sets of performance metrics that include the first set of performance metrics (column 10, lines 22-32 & column 6, lines 101-13).

As to claim 3, Gadepalli discloses the method of claim 1, wherein the network interface comprises a connection to at least one from among a network feed and a network performance metric repository that stores a plurality of sets of historical performance metrics (columns 5-6, lines 58-36).

As to claim 4, Gadepalli discloses the method of claim 1, wherein the first set of performance metrics comprises at least one from among telemetry data, static system component data, network event data, and historical performance metrics (columns 5-6, lines 58-36).
As to claim 5, Gadepalli discloses the method of claim 1, wherein the processing comprises: cleansing the first set of performance metrics to produce a first set of cleansed performance metrics; and generating the first set of component failure probabilities by evaluating the first set of cleansed performance metrics against a training dataset, wherein the training dataset is based on historical performance metrics (column 8, lines 14-33).

As to claim 6, Gadepalli discloses the method of claim 5, wherein the training dataset comprises a plurality of sets of performance metrics values, and wherein each set of performance metrics values from among the plurality of sets of performance metrics values respectively correlates to a cascading failure (column 13, lines 6-41).

As to claim 7, Gadepalli discloses the method of claim 1, wherein the processing comprises: performing the processing by utilizing a first artificial intelligence and machine learning (AI/ML) model that is trained to determine at least one component failure probability that is linked to a cascading failure (column 13, lines 6-41).

As to claim 8, Gadepalli discloses the method of claim 7, wherein the processing further comprises: generating, by the first AI/ML model, the first set of component failure probabilities by evaluating the first set of performance metrics against a trained model, wherein historical performance metrics have been utilized to train the first AI/ML model to generate system component failure probabilities (column 13, lines 6-41).
As to claim 9, Gadepalli discloses the method of claim 7, wherein the first AI/ML model determines the at least one component failure probability based on a respective degree of correspondence between the first set of performance metrics and at least one from among a plurality of sets of computer system performance metrics values, wherein each system component failure from among a first set of system component failures has a respective correspondence that exceeds a correspondence threshold, and wherein each system component failure from among the first set of system component failures respectively corresponds to at least one respective set of computer system performance metrics values from among the at least one of the plurality of sets of computer system performance metrics values (columns 5-6, lines 58-36 & columns 10-11, lines 33-20).

As to claim 10, Gadepalli discloses the method of claim 9, wherein at least one from among the first set of performance metrics comprises first system component failure event data, and wherein at least one system component failure from among a first set of system component failures comprises a cascading failure that results from a first system component failure that corresponds to a first system component failure (columns 5-6, lines 58-3).
As to claim 11, Gadepalli discloses a system for predicting system component failures within a computer system, the system comprising: a processor; a network interface of the computer system; and memory storing instructions that, when executed by the processor, cause the processor to perform operations comprising (column 6, lines 37-50): obtaining a first set of performance metrics by monitoring the network interface (column 4, lines 54-56); generating a first set of component failure probabilities by processing the first set of performance metrics (column 8, lines 14-33); determining that at least a first component failure probability from among the first set of component failure probabilities exceeds at least one risk threshold (column 13, lines 44-66); determining a first set of remedial actions that mitigate at least a first component failure probability (column 9, lines 7-22); and mitigating at least the first component failure probability by initiating an execution of the first set of remedial actions (column 6, lines 21-25).

As to claim 12, Gadepalli discloses the system of claim 11, wherein when the instructions are executed by the processor, the processing comprises: cleansing the first set of performance metrics to produce a first set of cleansed performance metrics; and generating the first set of component failure probabilities by evaluating the first set of cleansed performance metrics against a training dataset, wherein the training dataset is based on historical performance metrics (column 8, lines 14-33).

As to claim 13, Gadepalli discloses the system of claim 12, wherein the training dataset comprises a plurality of sets of performance metrics values, and wherein each set of performance metrics values from among the plurality of sets of performance metrics values respectively correlates to a cascading failure (column 13, lines 6-41).
As to claim 14, Gadepalli discloses the system of claim 11, wherein when the instructions are executed by the processor, the processing comprises: performing the processing by utilizing a first artificial intelligence and machine learning (AI/ML) model that is trained to determine at least one component failure probability that is linked to a cascading failure (column 13, lines 6-41).

As to claim 15, Gadepalli discloses the system of claim 14, wherein when the instructions are executed by the processor, the processing further comprises: generating, by the first AI/ML model, the first set of component failure probabilities by evaluating the first set of performance metrics against a trained model, wherein historical performance metrics have been utilized to train the first AI/ML model to generate system component failure probabilities (column 13, lines 6-41).

As to claim 16, Gadepalli discloses the system of claim 14, wherein when the instructions are executed by the processor, the first AI/ML model determines the at least one component failure probability based on a respective degree of correspondence between the first set of performance metrics and at least one from among a plurality of sets of computer system performance metrics values, wherein each system component failure from among a first set of system component failures has a respective correspondence that exceeds a correspondence threshold, and wherein each system component failure from among the first set of system component failures respectively corresponds to at least one respective set of computer system performance metrics values from among the at least one of the plurality of sets of computer system performance metrics values (columns 5-6, lines 58-36 & columns 10-11, lines 33-20).
As to claim 17, Gadepalli discloses the system of claim 16, wherein at least one from among the first set of performance metrics comprises first system component failure event data, and wherein at least one system component failure from among a first set of system component failures comprises a cascading failure that results from a first system component failure that corresponds to a first system component failure (columns 5-6, lines 58-3).

As to claim 18, Gadepalli discloses a non-transitory computer-readable medium for predicting system component failures within a computer system, the computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform operations comprising: obtaining a first set of performance metrics by monitoring a network interface of the computer system (column 4, lines 54-56); generating a first set of component failure probabilities by processing the first set of performance metrics (column 8, lines 14-33); determining that at least a first component failure probability from among the first set of component failure probabilities exceeds at least one risk threshold (column 13, lines 44-66); determining a first set of remedial actions that mitigate at least a first component failure probability (column 9, lines 7-22); and mitigating at least the first component failure probability by initiating an execution of the first set of remedial actions (column 6, lines 21-25).
As to claim 19, Gadepalli discloses the computer-readable medium of claim 18, wherein when the instructions are executed by the processor, the processing comprises: performing the processing by utilizing a first artificial intelligence and machine learning (AI/ML) model that is trained to generate the first set of component failure probabilities by evaluating the first set of performance metrics against a training dataset, wherein the first AI/ML model determines the first set of component failure probabilities based on a respective set of degrees of correspondence between the first set of performance metrics and at least one from among a plurality of sets of computer system performance metrics values (column 13, lines 6-41).

As to claim 20, Gadepalli discloses the computer-readable medium of claim 19, wherein at least one from among the first set of performance metrics comprises first system component failure event data, and wherein at least one system component failure from among a first set of system component failures comprises a cascading failure that results from a first system component failure that corresponds to a first system component failure (columns 5-6, lines 58-3).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Prior art Chahan (US 2019/0068467) discloses a system for cloud network stability that includes a cloud network, a cloud instrument monitor, and a cloud network stability server. The cloud network includes a plurality of components. The cloud instrument monitor includes one or more instruments. Each of the one or more instruments may monitor the plurality of components. The cloud network stability server may include an interface and a processor operably coupled to the interface. The interface may receive an identification of a performance anomaly in the cloud network.
A predictive analyzer implemented by a processor may identify a plurality of operational parameters associated with the performance anomaly; detect one or more operational issues associated with the plurality of operational parameters; calculate a network component failure using the detected one or more operational issues; and determine a remediation solution to resolve the network component failure (Abstract).

Prior art Souza (US 2023/0273908) discloses obtaining, by a computing device, real-time performance metrics of a mainframe database; automatically generating, by the computing device, a predicted maintenance task as an output of a trained database maintenance task classification machine learning (ML) model based on an input of the real-time performance metrics; automatically generating, by the computing device, a time to execute the predicted maintenance task as an output of a trained database maintenance triggering ML model based on an input of the predicted maintenance task and the real-time performance metrics (Abstract).

Prior art Vijayaraghavan (US 2021/0165708) discloses a system for predicting future system failures. Performance metrics (e.g., key performance indicators (KPIs)) of a system may be monitored and machine learning techniques may utilize a trained model to evaluate the performance metrics and identify trends in the performance metrics indicative of future failures of the monitored system. The predicted future failures may be identified based on combinations of different performance metrics and the impact that the performance metric trends of the group of different performance metrics will have on the system in the future. Upon predicting that a system failure will occur, operations to mitigate the failure may be initiated (Abstract).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLES EHNE whose telephone number is (571)272-2471.
The examiner can normally be reached 8:00-5:00 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bryce Bonzo, can be reached at 571-272-3655. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLES EHNE/
Primary Examiner, Art Unit 2113
/BRYCE P BONZO/
Supervisory Patent Examiner, Art Unit 2113

Prosecution Timeline

Jan 30, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585521
DEGRADED AVAILABILITY ZONE REMEDIATION FOR MULTI-AVAILABILITY ZONE CLUSTERS OF HOST COMPUTERS
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12547513
METHOD AND APPARATUS FOR MONITORING HARDWARE PARTITION OF SERVER HOST SYSTEM
Granted Feb 10, 2026 • 2y 5m to grant
Patent 12511212
DATA TRANSMISSION
Granted Dec 30, 2025 • 2y 5m to grant
Patent 12481563
SITE AND STORAGE TIER AWARE REFERENCE RESOLUTION
Granted Nov 25, 2025 • 2y 5m to grant
Patent 12449959
TECHNIQUES FOR IMPLEMENTING ROLLBACK OF INFRASTRUCTURE CHANGES IN A CLOUD INFRASTRUCTURE ORCHESTRATION SERVICE
Granted Oct 21, 2025 • 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 92%
With Interview (+8.6%): 99%
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 822 resolved cases by this examiner. Grant probability derived from career allow rate.
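The headline figures are internally consistent with the career counts reported in the Examiner Intelligence section. A minimal cross-check (all inputs are numbers shown on this page; the note above states that grant probability is simply the career allow rate):

```python
# Career counts reported for this examiner
granted = 758       # applications granted
resolved = 822      # applications resolved (granted + abandoned)
total_filed = 837   # total applications across all art units

allow_rate = granted / resolved        # career allow rate, shown as 92%
pending = total_filed - resolved       # applications still open

print(f"Career allow rate: {allow_rate:.0%}")   # 92%
print(f"Currently pending: {pending}")          # 15
```

Both outputs match the dashboard: 758/822 rounds to 92%, and 837 total minus 822 resolved leaves the 15 currently pending cases listed under Typical Timeline.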
