Prosecution Insights
Last updated: April 19, 2026
Application No. 18/461,543

PROACTIVE INSIGHTS FOR SYSTEM HEALTH

Final Rejection §103
Filed: Sep 06, 2023
Examiner: TRUONG, LOAN
Art Unit: 2114
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 4 (Final)

Grant Probability: 77% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 4m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 77% — above average (458 granted / 594 resolved; +22.1% vs TC avg)
Interview Lift: +12.8% (moderate), based on resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 32 applications currently pending
Career History: 626 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 25.0% (-15.0% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Tech Center averages are estimates • Based on career data from 594 resolved cases

Office Action

§103
DETAILED ACTION

This office action is in response to applicant's remarks filed on July 16, 2025 in application 18/461,543. Claims 1-20 are presented for examination. Claims 1, 8, and 15 are amended.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wollny et al. (US 2023/0161662) in view of Mandal et al. (US 2023/0105304) and further in view of Cropper et al. (US 2017/0230306).

In regard to claim 1, Wollny et al. teach a method comprising: prior to generating fault predictions (machine learning model may be trained to identify predictive indicators of error conditions, para. 30; it is noted that the machine learning model is created/trained before error conditions can be predicted or identified, hence the modelling is done prior to generating fault predictions), modelling faults (machine learning model, para. 33) using source data from a computing system (collect operating data from a plurality of computing devices, para. 31) comprising server computers running instances of a software program (proactive support method implemented at one or more servers, para. 29) and a remote data storage system that is used by the computing system to store data associated with the software program (added to a knowledge base for further use, para. 30); and generating fault predictions based on current data from the computing system and the remote data storage system and the modelled faults (one or more predictive indicators associated with one or more conditions are identified, para. 34). Wollny et al.
teach of collecting operation data from a plurality of computing devices (abstract) but do not explicitly teach the source data consisting of prior SaaS computing system performance, remote data storage system performance, and source data that is not a measure of performance consisting of SaaS computing system workload, data storage system workload, SaaS computing system configuration data, and remote data storage system configuration data; or the current data consisting of current SaaS computing system performance, remote data storage system performance, and current data that is not a measure of performance comprising SaaS computing system workload, data storage system workload, SaaS computing system configuration data, and remote data storage system configuration data.

Mandal et al. teach of monitoring a computing environment by monitoring each component's associated set of key performance indicators (KPIs) (para. 24). Data types can span a variety of data, for example, performance metrics, transaction metrics, logs, traces, topology, etc. (para. 48-50). It would have been obvious to modify the method of Wollny et al. by adding Mandal et al.'s proactive avoidance of performance issues in a computing environment. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would aid in proactively detecting performance issues (para. 24).

Wollny et al. and Mandal et al.
do not explicitly teach: prior to generating storage capacity exhaustion fault predictions, modelling storage capacity exhaustion faults occurring in a remote storage system using source data from a software as a service (SaaS) computing system; the SaaS computing system generating a storage capacity exhaustion fault prediction based on current data from the SaaS computing system and the remote data storage system; and responsive to the storage capacity exhaustion fault prediction, the SaaS computing system signaling a reclaimable storage remediation message to the remote data storage system, thereby causing the remote data storage system to reclaim storage space as indicated in the reclaimable storage remediation message to avoid the predicted storage capacity exhaustion fault.

Cropper et al. teach of a cloud computing model for on-demand access to a shared pool of configurable computing resources (para. 19). The service models could be a Software as a Service (SaaS) (para. 26-29). Asset management of configurable computing resources uses a set of asset weight values to determine an impact on a resource utilization value or a set of asset priority values to spread/balance assets (para. 51-62, fig. 4). It would have been obvious to modify the method of Wollny et al. and Mandal et al. by adding Cropper et al.'s asset management. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would provide a method to configure or reconfigure assets to spread/balance in regard to a threshold (para. 51).

In regard to claim 2, Wollny et al.
teach the method of claim 1 further comprising generating rules for selecting fault avoidance recommendations based on the source data (one or more corresponding conditional corrective scripts are selected for execution by the computing device to avoid, correct or recover from, or otherwise remediate one or more error conditions associated with each detected trigger condition, para. 26).

In regard to claim 3, Wollny et al. teach the method of claim 2 further comprising using the rules with the current data to select fault avoidance recommendations corresponding to predicted faults (corrective scripts may be stored in the knowledge base associated with one or more error conditions or predictive indicators to facilitate later selection and use… one or more corrective scripts may be added to the knowledge base as a known solution to the error condition, para. 36; the corrective scripts may be selected for the one or more error conditions from corrective scripts previously generated, or the corrective scripts may be generated based upon the error conditions associated with the predictive indicators, para. 38).

In regard to claim 4, Wollny et al. teach the method of claim 3 further comprising proactively implementing at least some of the fault avoidance recommendations (one or more corrective scripts may be selected in order to avoid or remediate one or more error conditions, para. 39).

In regard to claim 5, Wollny et al. teach the method of claim 4 further comprising generating messages indicating that specific fault avoidance recommendations have been implemented (may be configured to provide the dashboards, generate reports, and provide a website that may be accessed by users to obtain additional information regarding known error conditions or solutions, para. 44).

In regard to claim 6, Wollny et al.
teach the method of claim 5 further comprising using the modelled faults and the current data from the computing system and the associated remote data storage system to calculate efficacy of implementation of the fault avoidance recommendations to avoid predicted faults (operating data may be included as separate variables in the operating data message for data efficiency, para. 23).

In regard to claim 7, Wollny et al. teach the method of claim 6 further comprising including performance, workload, and configuration data in the source data and the current data (monitor real-time system performance which may be analyzed to perform aspects of the proactive support techniques, para. 43).

In regard to claim 8, Wollny et al. teach an apparatus comprising: a computing system comprising server computers that run instances of a computer software program (proactive support method implemented at one or more servers, para. 29); a remote data storage system that maintains data used by the instances of the computer software program (data warehouse, fig. 2, 214, para. 43); and a fault prediction system configured to: generate a model of faults (machine learning model, para. 33) using source data from the computing system (collect operating data from a plurality of computing devices, para. 31) and the remote data storage system (added to a knowledge base for further use, para. 30) prior to generating fault predictions (machine learning model may be trained to identify predictive indicators of error conditions, para. 30; it is noted that the machine learning model is created/trained before error conditions can be predicted or identified, hence the modelling is done prior to generating fault predictions); and generate fault predictions using current data from the computing system and the remote data storage system and the model (one or more predictive indicators associated with one or more conditions are identified, para. 34). Wollny et al.
teach of collecting operation data from a plurality of computing devices (abstract) but do not explicitly teach the source data comprising prior computing system performance, remote data storage system performance, computing system workload, data storage system workload, computing system configuration data, and remote data storage system configuration data; or the current data comprising current computing system performance, remote data storage system performance, computing system workload, data storage system workload, computing system configuration data, and remote data storage system configuration data.

Mandal et al. teach of monitoring a computing environment by monitoring each component's associated set of key performance indicators (KPIs) (para. 24). Data types can span a variety of data, for example, performance metrics, transaction metrics, logs, traces, topology, etc. (para. 48-50). Refer to claim 1 for motivational statement.

Wollny et al. and Mandal et al. do not explicitly teach: prior to generating storage capacity exhaustion fault predictions, modelling storage capacity exhaustion faults occurring in a remote storage system using source data from a software as a service (SaaS) computing system; the SaaS computing system generating a storage capacity exhaustion fault prediction based on current data from the SaaS computing system and the remote data storage system; and responsive to the storage capacity exhaustion fault prediction, the SaaS computing system signaling a reclaimable storage remediation message to the remote data storage system, thereby causing the remote data storage system to reclaim storage space as indicated in the reclaimable storage remediation message to avoid the predicted storage capacity exhaustion fault.

Cropper et al. teach of a cloud computing model for on-demand access to a shared pool of configurable computing resources (para. 19). The service models could be a Software as a Service (SaaS) (para. 26-29).
Asset management of configurable computing resources uses a set of asset weight values to determine an impact on a resource utilization value or a set of asset priority values to spread/balance assets (para. 51-62, fig. 4). Refer to claim 1 for motivational statement.

In regard to claim 9, Wollny et al. teach the apparatus of claim 8 further comprising the fault prediction system being configured to generate rules for selecting fault avoidance recommendations based on the source data (one or more corresponding conditional corrective scripts are selected for execution by the computing device to avoid, correct or recover from, or otherwise remediate one or more error conditions associated with each detected trigger condition, para. 26).

In regard to claim 10, Wollny et al. teach the apparatus of claim 9 further comprising the fault prediction system being configured to use the rules with the current data to select fault avoidance recommendations corresponding to predicted faults (corrective scripts may be stored in the knowledge base associated with one or more error conditions or predictive indicators to facilitate later selection and use… one or more corrective scripts may be added to the knowledge base as a known solution to the error condition, para. 36; the corrective scripts may be selected for the one or more error conditions from corrective scripts previously generated, or the corrective scripts may be generated based upon the error conditions associated with the predictive indicators, para. 38).

In regard to claim 11, Wollny et al. teach the apparatus of claim 10 further comprising the fault prediction system being configured to proactively implement at least some of the fault avoidance recommendations (one or more corrective scripts may be selected in order to avoid or remediate one or more error conditions, para. 39).

In regard to claim 12, Wollny et al.
teach the apparatus of claim 11 further comprising the fault prediction system being configured to receive messages indicating that specific fault avoidance recommendations have been implemented (may be configured to provide the dashboards, generate reports, and provide a website that may be accessed by users to obtain additional information regarding known error conditions or solutions, para. 44).

In regard to claim 13, Wollny et al. teach the apparatus of claim 12 further comprising the fault prediction system being configured to use the model and the current data from the computing system and the remote data storage system to calculate efficacy of implementation of the fault avoidance recommendations to avoid predicted faults (operating data may be included as separate variables in the operating data message for data efficiency, para. 23).

In regard to claim 14, Wollny et al. teach the apparatus of claim 13 in which the source data and the current data comprise performance, workload, and configuration data (monitor real-time system performance which may be analyzed to perform aspects of the proactive support techniques, para. 43).

In regard to claim 15, Wollny et al. teach a non-transitory computer-readable storage medium with instructions that when executed by a computer perform a method comprising: prior to generating fault predictions (machine learning model may be trained to identify predictive indicators of error conditions, para. 30; it is noted that the machine learning model is created/trained before error conditions can be predicted or identified, hence the modelling is done prior to generating fault predictions), modelling faults (machine learning model, para. 33) using source data from a computing system (collect operating data from a plurality of computing devices, para. 31) comprising server computers running a software program (proactive support method implemented at one or more servers, para. 29) and an associated remote data storage system (data warehouse, fig. 2, 214, para. 43) that is used by the computing system to store data associated with the software program (added to a knowledge base for further use, para. 30); and generating fault predictions based on current data from the computing system and the remote data storage system and the modelled faults (one or more predictive indicators associated with one or more conditions are identified, para. 34).

Wollny et al. teach of collecting operation data from a plurality of computing devices (abstract) but do not explicitly teach the source data comprising prior computing system performance, remote data storage system performance, computing system workload, data storage system workload, computing system configuration data, and remote data storage system configuration data; or the current data comprising current computing system performance, remote data storage system performance, computing system workload, data storage system workload, computing system configuration data, and remote data storage system configuration data.

Mandal et al. teach of monitoring a computing environment by monitoring each component's associated set of key performance indicators (KPIs) (para. 24). Data types can span a variety of data, for example, performance metrics, transaction metrics, logs, traces, topology, etc. (para. 48-50). Refer to claim 1 for motivational statement.

Wollny et al. and Mandal et al.
do not explicitly teach: prior to generating storage capacity exhaustion fault predictions, modelling storage capacity exhaustion faults occurring in a remote storage system using source data from a software as a service (SaaS) computing system; the SaaS computing system generating a storage capacity exhaustion fault prediction based on current data from the SaaS computing system and the remote data storage system; and responsive to the storage capacity exhaustion fault prediction, the SaaS computing system signaling a reclaimable storage remediation message to the remote data storage system, thereby causing the remote data storage system to reclaim storage space as indicated in the reclaimable storage remediation message to avoid the predicted storage capacity exhaustion fault.

Cropper et al. teach of a cloud computing model for on-demand access to a shared pool of configurable computing resources (para. 19). The service models could be a Software as a Service (SaaS) (para. 26-29). Asset management of configurable computing resources uses a set of asset weight values to determine an impact on a resource utilization value or a set of asset priority values to spread/balance assets (para. 51-62, fig. 4). Refer to claim 1 for motivational statement.

In regard to claim 16, Wollny et al. teach the non-transitory computer-readable storage medium of claim 15 in which the method further comprises generating rules for selecting fault avoidance recommendations based on the source data (one or more corresponding conditional corrective scripts are selected for execution by the computing device to avoid, correct or recover from, or otherwise remediate one or more error conditions associated with each detected trigger condition, para. 26).

In regard to claim 17, Wollny et al.
teach the non-transitory computer-readable storage medium of claim 16 in which the method further comprises using the rules with the current data to select fault avoidance recommendations corresponding to predicted faults (corrective scripts may be stored in the knowledge base associated with one or more error conditions or predictive indicators to facilitate later selection and use… one or more corrective scripts may be added to the knowledge base as a known solution to the error condition, para. 36; the corrective scripts may be selected for the one or more error conditions from corrective scripts previously generated, or the corrective scripts may be generated based upon the error conditions associated with the predictive indicators, para. 38).

In regard to claim 18, Wollny et al. teach the non-transitory computer-readable storage medium of claim 17 in which the method further comprises proactively implementing at least some of the fault avoidance recommendations (one or more corrective scripts may be selected in order to avoid or remediate one or more error conditions, para. 39).

In regard to claim 19, Wollny et al. teach the non-transitory computer-readable storage medium of claim 18 in which the method further comprises generating messages indicating that specific fault avoidance recommendations have been implemented (may be configured to provide the dashboards, generate reports, and provide a website that may be accessed by users to obtain additional information regarding known error conditions or solutions, para. 44).

In regard to claim 20, Wollny et al.
teach the non-transitory computer-readable storage medium of claim 19 in which the method further comprises using the modelled faults and the current data from the computing system and the associated remote data storage system to calculate efficacy of implementation of the fault avoidance recommendations to avoid predicted faults (operating data may be included as separate variables in the operating data message for data efficiency, para. 23).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

Chou et al. (US 2015/0254088): prediction for reclaimed data block
Chang et al. (US 9,460,147): reclaim space of deleted records
Cropper et al. (US 2017/0111289): reclaim resources, SaaS
Ari et al. (US 12,301,423): computing capacity on demand
Zhuravlev et al. (US 2022/0261164): storage configuration utilization patterns
Chandana et al. (US 2024/0364712): ML model and predict
Sethi et al. (US 12,228,999): Dell; logs, performance metrics
Faulhaber, Jr. et al. (US 2019/0156247): ML and model selector
Dayama et al. (US 2021/0326746): ML performance verification
Cui et al. (US 10,853,116): ML predicted to fail
Marakala et al. (US 2024/0419561): predictive failure model
Swidan et al. (US 2025/0036537): various models such as SaaS service model
Karr (US 2023/0385154): recovery for object stores
Darling et al. (US 2015/0019912): predictive model for error prediction
Hu et al. (US 12,051,008): reliability measure of predictive system
Kim et al. (US 11,810,003): learning model tree structure
Ishida (US 2019/0188598): learning error prediction model
Fujimura et al. (US 11,593,817): prediction apparatus
Kakuda et al. (US 2024/0232659): prediction device of learning device
Takada (US 2022/0076161): prediction model
Lin et al. (US 2012/0284212): predictive model
Mann et al. (US 2015/0170049): predictive analytic modeling
Breckenridge et al. (US 8,762,299): predictive analytical model training

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOAN TRUONG, whose telephone number is 408-918-7552. The examiner can normally be reached 10AM-6PM PST, M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thomas Ashish, can be reached at 571-272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Loan L.T. Truong/
Primary Examiner, Art Unit 2114
Loan.truong@uspto.gov

Prosecution Timeline

Sep 06, 2023: Application Filed
Dec 08, 2024: Non-Final Rejection — §103
Dec 17, 2024: Response Filed
Apr 01, 2025: Final Rejection — §103
Apr 09, 2025: Interview Requested
Apr 24, 2025: Request for Continued Examination
May 05, 2025: Response after Non-Final Action
Jun 14, 2025: Non-Final Rejection — §103
Jun 24, 2025: Interview Requested
Jul 01, 2025: Examiner Interview Summary
Jul 01, 2025: Applicant Interview (Telephonic)
Jul 16, 2025: Response Filed
Nov 15, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591485: STORAGE SYSTEM AND MANAGEMENT METHOD FOR STORAGE SYSTEM
2y 5m to grant • Granted Mar 31, 2026

Patent 12585557: SYNCHRONIZATION OF CONTAINER ENVIRONMENTS TO MAINTAIN AVAILABILITY FOR A PREDETERMINED ZONE
2y 5m to grant • Granted Mar 24, 2026

Patent 12579031: Read Data Path for a Memory System
2y 5m to grant • Granted Mar 17, 2026

Patent 12561212: METHOD AND APPARATUS FOR PHASED TRANSITION OF LEGACY SYSTEMS TO A NEXT GENERATION BACKUP INFRASTRUCTURE
2y 5m to grant • Granted Feb 24, 2026

Patent 12554581: A MULTI-PART COMPARE AND EXCHANGE OPERATION
2y 5m to grant • Granted Feb 17, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 77% (90% with interview, +12.8%)
Median Time to Grant: 3y 4m
PTA Risk: High

Based on 594 resolved cases by this examiner. Grant probability derived from career allow rate.
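The headline projections can be reproduced directly from the examiner's career statistics quoted above. A minimal sketch, assuming the dashboard simply rounds these ratios (its actual model may weight additional factors):

```python
# Reproduce the report's projection figures from the raw career stats.
granted, resolved = 458, 594   # career outcomes cited in Examiner Intelligence
interview_lift = 12.8          # percentage-point lift with interview

allow_rate = 100 * granted / resolved        # ~77.1%
with_interview = allow_rate + interview_lift  # ~89.9%

print(f"Career allow rate: {allow_rate:.0f}%")      # 77%
print(f"With interview:    {with_interview:.0f}%")  # 90%
```

The 77% grant probability and the 90% with-interview figure both fall out of this arithmetic after rounding.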
