Prosecution Insights
Last updated: April 19, 2026
Application No. 17/118,081

METHOD FOR EVENT-BASED FAILURE PREDICTION AND REMAINING USEFUL LIFE ESTIMATION

Status: Final Rejection (§103)
Filed: Dec 10, 2020
Examiner: LIANG, LEONARD S
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Hitachi, Ltd.
OA Round: 4 (Final)

Grant probability: 62% (moderate)
Expected OA rounds: 5-6
Expected time to grant: 3y 9m
Grant probability with interview: 65%
Examiner Intelligence

Career allowance rate: 62% (388 granted / 629 resolved; -6.3% vs Tech Center average)
Interview lift: +2.9% for resolved cases with interview (minimal)
Average prosecution length: 3y 9m
Currently pending: 51 applications
Career total: 680 applications across all art units
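As a quick consistency check, the headline figures above can be reproduced from the raw counts; note that the Tech Center average is inferred here from the -6.3% delta rather than reported directly by the dashboard:

```python
# Reproduce the dashboard's headline examiner statistics from the raw
# counts shown above (388 granted out of 629 resolved cases).
granted, resolved = 388, 629
career_allow_rate = granted / resolved   # 0.6168..., shown rounded as 62%
print(f"{career_allow_rate:.1%}")

# The -6.3% "vs TC avg" delta implies a Tech Center average near 68%.
tc_avg = career_allow_rate + 0.063
print(f"{tc_avg:.1%}")
```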

Statute-Specific Performance

§101: 22.2% (-17.8% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 16.4% (-23.6% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)

Based on career data from 629 resolved cases; comparison figures are Tech Center average estimates.

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 10/27/25 have been fully considered but they are not persuasive. The applicant's drawing amendments have overcome the previous objection to the drawings.

With respect to the claim rejection under 35 U.S.C. §103, the applicant first argues:

[Applicant's first argument reproduced as an embedded image in the original action.]

This argument is not persuasive because the applicant is applying an overly narrow interpretation to broad claims, while also narrowing the broad and vast teachings of the cited art, as well as overlooking what would be well-understood and obvious to one of ordinary skill in the art. With respect to the data augmentation limitations that are under debate, the claims merely state:

executing, by the processor, data augmentation on the data, the data augmentation configured to generate additional semantically similar simulating data samples based on the data, by performing:
dropping of subsequences within the generated sequences
randomly injecting subsequences within the generated sequences
randomly variate continuous features of the generated sequences
value swapping of nearby subsequences of the generated sequences

Please note that these limitations do not create any nexus between data augmentation and fault diagnosis. In fact, they do not mention fault diagnosis at all. The claims merely disclose data augmentation on the data, without the data being limited or narrowed to any specific context or application. Contrary to the applicant's assertion that "the Examiner's rejection relies on overly broad … constructions of the claim language," the examiner contends that it is the claims that are broad. The examiner is required to give claims their broadest reasonable interpretation (BRI).
Giving BRI to already broad claims may result in an interpretation that is broader than the applicant's narrowly intended interpretation, while still being reasonable.

The applicant further argues, "The Examiner has failed to establish why a person of ordinary skill in fault diagnosis would look to computer vision data augmentation techniques from Zhang et al NPL 2." A key flaw of this argument is that it narrows the fields of endeavor to computer vision and fault diagnosis. The examiner contends that the more proper field of endeavor, in which to evaluate the claims under BRI, would be "data processing." As discussed above, the claimed limitations around data augmentation are not tied to a specific application or field of use. They are directed to general data processing. One of ordinary skill in the art recognizes that data processing encompasses and intersects with many different fields of endeavor. The examiner considers that narrowing the claims to merely the scope of computer vision or fault diagnosis would be overly narrow and contrary to BRI.

For argument's sake, even if "data processing" is considered too broad a scope, the examiner contends that a narrower field of endeavor that still satisfies BRI would be the field of machine learning / artificial intelligence (ML / AI). In fact, "data augmentation" appears to be a well-known and well-understood term of art to one in the ML / AI field of endeavor.

All that being said, both Zhang et al NPL (abstract) and Zhang et al NPL 2 (abstract) disclose fault / failure diagnosis. So while Zhang et al NPL 2 may discuss computer vision challenges, that is not all that it discusses. It also discusses fault diagnosis. Please note the title of Zhang et al NPL 2, which states, "A New Deep Learning Model for Fault Diagnosis with Good Anti-Noise and Domain Adaptation Ability on Raw Vibration Signals." (emphasis mine).
Next, the applicant argues:

[Applicant's argument reproduced as embedded images in the original action.]

This argument is not persuasive because the examiner gives claims their broadest reasonable interpretation. Here, the applicant appears to be inserting an unclaimed, narrow interpretation of "dropping" to mean "removing". The claims do not state removing. The examiner interpreted "dropping of subsequences within the generated sequences" to be indicative of "dropping in" data, which would be an additive procedure. For the sake of argument, even if the claim was interpreted to be removing data, the examiner contends that the limitation would still be obvious in view of what is well-known and well-understood about data augmentation to one of ordinary skill in the machine learning / artificial intelligence field of endeavor.

Based on the examiner's understanding, data augmentation, in a machine learning / artificial intelligence context, is a well-established term of art that describes a technique that artificially increases the size and diversity of a dataset by applying various transformations to existing data. Also, based on the examiner's understanding, the types of transformations applied to existing data are broad and diverse and can include both adding new data points, as well as "removing data" in certain contexts that allow for new, different training examples that are modified from the original data. Based on the examiner's understanding, the purpose of data augmentation, particularly in a deep learning context (which is disclosed throughout both Ristovski et al and Zhang et al NPL), is to improve the performance of machine learning models, such as through addressing data scarcity, mitigating overfitting, and improving the robustness of models.
The examiner contends that one of ordinary skill in the art would be familiar with the general principles of data augmentation and would recognize the various claimed transformations to be encompassed by and obvious in view of those general principles. Zhang et al NPL (along with the art it incorporates by reference) discloses the principle of data augmentation throughout its disclosure. The examiner considered what would be well-understood to one of ordinary skill in the art regarding the principle of data augmentation.

Next, the applicant argues:

[Applicant's argument reproduced as an embedded image in the original action.]

This argument is not persuasive for similar reasons as those discussed above. The applicant attempts to limit the teachings of the art to computer vision applications. However, as discussed above, the teachings of the art are not limited to computer vision applications. Furthermore, the applicant argues, "the Examiner's analysis fails to establish the required nexus between these disparate teachings and the specific claimed limitation." However, the applicant's own limitations also fail to establish any specific nexus between the claimed operations (i.e., dropping of subsequences, randomly injecting subsequences, randomly variating continuous features, and value swapping of nearby subsequences) and the specific application context that the applicant is arguing. In other words, the applicant is reading a narrow interpretation into unclaimed elements.

Also, as discussed above, "data augmentation" appears to be a well-known and well-established term of art in the machine learning / artificial intelligence field of endeavor. Zhang et al NPL (along with the art that it incorporates by reference) appears to be using this term of art under the understanding of what would be well-understood to one of ordinary skill in the art regarding the principles of data augmentation.
As discussed above, one of ordinary skill in the art understands that data augmentation encompasses a number of different transformations. The claimed limitation, which is broadly and generally recited, would be obvious as one example of a data augmentation transformation. The applicant's argument of impermissible hindsight reasoning is not persuasive because it ignores what would be well-understood to one of ordinary skill in the art for a technique that has achieved the status of a commonly used term of art, as demonstrated by the fact that Zhang et al NPL, Zhang et al NPL 2, and Shao et al NPL all refer to data augmentation. It is not hindsight reasoning when a term / technique is so ubiquitous that a significant number of references all refer to it under a common understanding of what it is.

Next, the applicant argues:

[Applicant's argument reproduced as embedded images in the original action.]

This argument is not persuasive for similar reasons as those discussed above. The applicant attempts to limit the teachings of the art to computer vision applications. However, as discussed above, the teachings of the art are not limited to computer vision applications. Also, as discussed above, the applicant's argument of impermissible hindsight reasoning is not persuasive because it ignores what would be well-understood to one of ordinary skill in the art for a technique that has achieved the status of a commonly used term of art, as demonstrated by the fact that Zhang et al NPL, Zhang et al NPL 2, and Shao et al NPL all refer to data augmentation. It is not hindsight reasoning when a term / technique is so ubiquitous that a significant number of references all refer to it under a common understanding of what it is.
Furthermore, with respect to the applicant's argument that, "while Shao et al NPL describes generating sequences 'from random noise of latent space,' this teaches creating entirely new sequences from noise distributions … not randomly variating continuous features within existing generated sequences of fault events. The claimed approach of randomly variating continuous features within generated sequences may provide distinct technical advantages in maintaining sequence structure while introducing controlled variability that preserves fault diagnosis relevance," the examiner contends that the applicant is reading narrow interpretations into broad claims, while ignoring the breadth of the art and what would be understood to one of ordinary skill in the art. The claimed limitation merely states, "randomly variate continuous features of the generated sequences." The narrowing context given in the applicant's argument is not claimed. The examiner must give claims their broadest reasonable interpretation. Here, the applicant appears to be creating a very narrow interpretation that excludes Shao, but such an interpretation is not claimed.

Furthermore, the applicant appears to focus only on Shao, while ignoring that Zhang et al NPL also discloses many teachings of randomness (both in its base reference and via the art it incorporates by reference). Also, as discussed above, random transformations appear to be well-understood in data augmentation, which would render them obvious to one of ordinary skill in the art.

Next, the applicant argues:

[Applicant's argument reproduced as embedded images in the original action.]

This argument is not persuasive for similar reasons as discussed above. The applicant asserts that the Examiner's construction is overly broad. However, the examiner contends that it is the applicant's claims that are broad.
The claimed limitation in question merely states, "value swapping of nearby subsequences of the generated sequences," without any definition or context of what "value swapping" entails. The applicant appears to assign a very narrow interpretation of what value swapping entails and then disqualifies the art based on that narrow interpretation. As discussed above, the examiner must give claims their broadest reasonable interpretation. For reasons similar to those given above, the examiner maintains that the BRI of the claimed limitation is anticipated by or obvious in view of the cited art, which represents the broad principle of data augmentation.

Also, as discussed above, the examiner maintains that data augmentation is a well-known and well-understood term of art in the machine learning / artificial intelligence field of endeavor that encompasses many different potential transformations. The examiner further maintains that in view of the broadly recited claims, the claimed operations would be obvious to one of ordinary skill in the art, especially in view of the data augmentation teachings of not just Zhang et al NPL but also multiple pieces of art that it incorporates by reference.

With respect to the applicant's argument that, "Even if the individual concepts of data augmentation were known, it would not be obvious to try the specific claimed combination of dropping subsequences, randomly injecting subsequences, randomly variating continuous features, and value swapping of nearby subsequences. The prior art provides no guidance on how these techniques would work together or what parameters would be appropriate for fault diagnosis applications," it should be noted that the applicant's claims also do not disclose how the various limitations would work together, nor do they claim any specific parameters or mention any connection to fault diagnosis applications. The applicant is again reading a narrow interpretation into unclaimed elements.
In addition, the examiner maintains that the claimed limitations merely disclose general and generic transformation techniques of the broader data augmentation principle, which is a well-known and well-established principle. General and generic transformation techniques would be obvious to one of ordinary skill in the art. The examiner maintains that the BRI of the claimed limitations is anticipated by or obvious in view of the cited art.

However, even if the claimed transformations were slightly different from the transformations represented by the art, one of the KSR rationales for obviousness is design incentives or market forces prompting variations. The prior art teaches the same principle as that which is claimed (i.e., data augmentation). One of ordinary skill in the art understands that the way data augmentation works is through a variety of different data transformations. Design incentives or market forces would have prompted variations in these data transformations, such that even if the claimed transformations were not exactly the same as the transformations in the art, they could be considered obvious variants. Known variations or principles would meet the difference between the claimed invention and the prior art, and the implementation would have been predictable.

Next, the applicant argues:

[Applicant's argument reproduced as embedded images in the original action.]

This argument is not persuasive for similar reasons as those discussed above. The applicant argues that "Zhang et al NPL's weighting relates to 'iteratively updating the network connection weights' – general neural network training, not the claimed data-adaptive optimization that specifically weighs original equipment data higher than synthetic data samples." Here, the applicant is overlooking what would be obvious to one of ordinary skill in the machine learning / artificial intelligence field of endeavor.
One of ordinary skill in the art understands that the principles of machine learning / artificial intelligence are broad and vast in their applications and contexts, such that broad teachings of the principles regarding optimization, weighting, prediction, etc. in the general context of remaining useful life would encompass many obvious variations of remaining useful life machine learning / artificial intelligence applications, including the claimed optimization. As discussed above, one of the KSR rationales for obviousness is "Design Incentives or Market Forces Prompting Variations." Also, please note that the language "configured to weigh ones …" is intended use language that is able to be performed by the infrastructure represented by modified Ristovski et al.

As discussed above, the examiner is not relying on hindsight reasoning. Rather, the examiner is considering what would be obvious to one of ordinary skill in the art. Also, here, the applicant continues to limit the teachings of the art to computer vision, while overlooking both the broader teachings of the art and what would be well-understood and obvious to one of ordinary skill in the art, particularly in a field that is as vast and pervasive as machine learning / artificial intelligence, which encompasses techniques and principles that transcend the narrow confines of a single use case or field of endeavor.

Finally, the applicant argues:

[Applicant's argument reproduced as an embedded image in the original action.]

This argument is not persuasive for the same reasons given above. The rejection is maintained.

Drawings

The drawings of 10/27/25 are accepted.

Examiner's Note - 35 USC § 101

For reasons discussed in the previous action, claims 1, 3-5, 7, 9-11, 13, and 15-17 qualify as eligible subject matter under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 7, 9-11, 13, and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ristovski et al (US PgPub 20190235484) (cited in 06/27/22 IDS) in view of Zhang et al NPL (Zhang, Liangwei; Lin, Jing; Liu, Bin; Zhang, Zhicong; Yan, Xiaohui; and Wei, Muheng – "A Review on Deep Learning Applications in Prognostics and Health Management"; Special Section on Data Analytics and Artificial Intelligence for Prognostics and Health Management (PHM) Using Disparate Data Streams, Volume 7, 2019.).

Please note that Zhang et al NPL incorporates by reference two references that will be referred to, not as modifying references, but as illuminating references that illustrate what would be understood to one of ordinary skill in the art when considering Zhang et al NPL. These illuminating references are:

1) Zhang et al NPL 2 (Zhang, Wei; Peng, Gaoliang; Li, Chuanhao; Chen, Yuanhang; and Zhang, Zhujun – "A New Deep Learning Model for Fault Diagnosis with Good Anti-Noise and Domain Adaptation Ability on Raw Vibration Signals"; Sensors, vol. 17, no. 2, p. 425, 2017.)

2) Shao et al NPL (Shao, Siyu; Wang, Pu; and Yan, Ruqiang – "Generative Adversarial Networks for Data Augmentation in Machine Fault Diagnosis"; Comput. Ind., vol. 106, pp. 85-93, Apr. 2019.)
With respect to claim 1, Ristovski et al discloses:

A method for predicting failures and remaining useful life (RUL) for equipment (abstract; paragraph 0011 states, "Aspects of the present disclosure further include a method for managing a single deep learning architecture with three modes including a failure prediction mode, a RUL mode, and a unified mode …")

for data received from the equipment comprising fault events, conducting, by a processor, feature extraction on the data to generate sequences of event features based on the fault events (paragraph 0034 states, "Pre-processing data 110 preparation can include creating a common sequence on a component time-scale, perform feature extraction on the pre-processed data, identify operation status changes for each component."; see also paragraphs 0040, 0051, and 0074-0075 for further feature extraction teachings. Paragraphs 0029-0030 state, "Event data can include sequences of events with different types of events … Failure data can include discrete failure events …")

applying, by the processor, deep learning modeling to the sequences of event features to generate a model configured to predict the failures and the RUL for the equipment based on event features extracted from data of the equipment (paragraph 0040 states, "Labels are created for each element of the created sequence with corresponding RUL value or failure values for the deep learning model parameters in the training and learning phase 100." Deep learning is further taught throughout the disclosure of Ristovski et al, such as in the abstract; figure 1(a), reference 120; figure 2, reference 205; figure 5; and paragraphs 0005 and 0008-0013)

executing, by the processor, optimization on the model (optimization taught in paragraphs 0007, 0045, 0050, and 0052)

executing, by the processor, the model on the data received from the equipment to generate the predicted failures and RUL (figure 7, reference 710; paragraphs 0073-0074 state, "a processor can manage a single deep learning architecture for three modes comprising a failure prediction mode, a remaining useful life (RUL) mode, and a unified mode …")

controlling, by the processor, operation of the equipment through performing at least one of equipment configuration, safe mode reset of the equipment, equipment force shutdown, and activation based on failure type associated with the predicted failures and RUL (paragraph 0010 states, "Aspects of the present disclosure can include a system configured to manage a single deep learning architecture … For subsequently received streaming data associated with the equipment, the system is configured to apply the learned parameters of the single deep learning architecture and associated transformation function to generate a maintenance prediction for the equipment."; paragraphs 0089-0090 state, "it is appreciated that throughout the description, discussions utilizing terms such as 'managing,' 'processing,' 'computing,' 'calculating,' 'determining,' 'adjusting,' or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data … Example implementations may also related to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs." Please also note paragraph 0005, which states, "The single deep learning architecture is used for maintenance recommendation that can be utilized by, but is not limited to, equipment end-users and/or operators, maintenance personnel and management, decision makers, and operation managers." This suggests common maintenance operations, such as the ones disclosed. Furthermore, paragraphs 0053-0056 disclose alerts, which are a type of andon activation.)
With respect to claim 1, Ristovski et al differs from the claimed invention in that it does not explicitly disclose:

executing, by the processor, data augmentation on the data, the data augmentation configured to generate additional semantically similar simulating data samples based on the data, by performing:
dropping of subsequences within the generated sequences
randomly injecting subsequences within the generated sequences
randomly variate continuous features of the generated sequences
value swapping of nearby subsequences of the generated sequences
wherein the optimization is data-adaptive optimization configured to weigh ones derived from the data received from the equipment higher than ones derived from the semantically similar simulating data samples for the prediction of the failures and the RUL for the equipment

With respect to claim 1, Zhang et al NPL discloses:

executing, by the processor, data augmentation on the data, the data augmentation configured to generate additional semantically similar simulating data samples based on the data (page 162426, column 2, paragraph 2 states, "With the aid of data augmentation (including random rotation, translation, zoom, shear and elastic transformation) and a segmentation step prior to classification, they successfully augmented the number of labeled training samples … Similar work used data augmentation …" One of ordinary skill in the art understands that the general purpose of data augmentation is to perform the claimed intended use of generating additional semantically similar simulating data samples based on the data. The structure of Zhang et al NPL is able to perform the claimed intended use. Please note that Zhang et al NPL incorporates by reference many different further references that also teach data augmentation.
Particular attention will be given below to Zhang et al NPL 2 (reference 124 on page 162436) and Shao et al NPL (reference 209 on page 162438).), by performing:

dropping of subsequences within the generated sequences (Zhang et al NPL 2 and Shao et al NPL provide two frameworks for data augmentation. Zhang et al NPL 2, section 3.4, paragraph 1 states, "In computer vision, data augmentation is frequently used to increase a number of training samples to enhance the generalization performance of CNN [31]. Horizontal flips, random crops/scales, and color jitter are widely used to augment training samples in computer vision assignments. In fault diagnosis, data augmentation is also necessary for a convolutional neural network to achieve high classification precision. However, it is much easier to obtain huge amounts of data by slicing the training samples with overlap. This process is shown in Figure 5. The training samples is prepared with overlap." The insertion of overlap is broadly construed to serve as the claimed dropping of subsequences within the overall sequence, as shown in figure 5. Shao et al NPL, section 2.1, paragraph 1 states, "The main thought behind GAN is using adversarial networks to improve the quality of generated data. The generator is trained to produce realistic synthesized data xgenerated = G(z) from a random noise vector z, trying to fool the discriminator so that xgenerated would not be recognized as generated samples." Here, the samples (either generated or real) would be the dropped-in subsequences.)

randomly injecting subsequences within the generated sequences (As seen above, Zhang et al NPL 2 discloses random crops/scales as a technique for data augmentation in computer vision assignments. One of ordinary skill in the art understands that there is also an analogue for randomness in a fault diagnosis context. The claimed limitation would be obvious in view of that randomness analogue.
Also, as seen above, Shao et al NPL discloses a random noise vector. Shao et al NPL, section 3.2, paragraph 2 also states, "The generator produces sequence samples from random noise of latent space with specific labels." The claimed limitation would also be obvious in view of the randomness teachings of Shao et al NPL.)

randomly variate continuous features of the generated sequences (As seen above, Zhang et al NPL 2 discloses random crops/scales as a technique for data augmentation in computer vision assignments. One of ordinary skill in the art understands that there is also an analogue for randomness in a fault diagnosis context. The claimed limitation would be obvious in view of that randomness analogue. Also, as seen above, Shao et al NPL discloses a random noise vector. Shao et al NPL, section 3.2, paragraph 2 also states, "The generator produces sequence samples from random noise of latent space with specific labels." The claimed limitation would also be obvious in view of the randomness teachings of Shao et al NPL.)

value swapping of nearby subsequences of the generated sequences (For Zhang et al NPL 2, the data augmentation technique shown in figure 5 can be broadly construed as "value swapping" the original signal sample with the overlap signal sample. For Shao et al NPL, the choice of a discriminator in a GAN to choose a generated data sample over a real data sample can also be broadly construed as "value swapping.")

wherein the optimization is data-adaptive optimization configured to weigh ones derived from the data received from the equipment higher than ones derived from the semantically similar simulating data samples for the prediction of the failures and the RUL for the equipment (obvious in view of combination; as discussed above, Ristovski et al discloses optimization. Ristovski et al also discloses weighting (paragraphs 0058-0060). Zhang et al NPL, including its many incorporated references, teaches data augmentation.
Zhang et al also weights (page 162418, column 1, paragraph 3 states, "By iteratively updating the network connection weights …"). Zhang et al NPL 2 also discloses weights (page 2, paragraph 3 states, "CNNs have two main features: weights sharing and spatial pooling …").)

With respect to claim 1, it would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to incorporate the teachings of Zhang et al NPL into the invention of Ristovski et al. The motivation for the skilled artisan in doing so is to gain the benefit of improving performance by providing a large and diverse dataset for learning models.

Independent claims 7 and 13 represent the non-transitory computer readable medium and apparatus variations of method claim 1 and are rejected for similar reasons as those given with respect to claim 1 above.

With respect to claims 3, 9, and 15, Ristovski et al, as modified, discloses:

wherein the deep learning modeling comprises learnable neural network-based attention mechanisms configured to determine relevant ones of the event features within the sequences of event features and discarding less relevant ones of the event features (paragraph 0006 of Ristovski et al states, "The base architecture can involve layer types such as Convolutional Neural Network (CNN), Long Short Term Memory (LSTM), and multi-layer fully connected neural network (NN)." The claimed "discarding" is an obvious consequence of the machine learning and artificial intelligence teachings of the art. For example, in the GAN teachings of Shao, the unselected samples between the generative and real data are naturally "discarded." Also, one of ordinary skill in the art recognizes that the very act of classifying (via a classifier model) inherently selects more relevant data and discards (or does not select) less relevant ones.)
With respect to claims 4, 10, and 16, Ristovski et al, as modified, discloses:

wherein the deep learning modeling is one of multi-head attention, Long Short Term Memory (LSTM), and ensemble modeling (Ristovski paragraph 0006 discloses LSTM. Zhang et al NPL page 162429, column 2, paragraph 1 discloses ensemble modeling.)

With respect to claims 5, 11, and 17, Ristovski et al, as modified, discloses:

wherein the optimization of the model is cost sensitive optimization configured to weigh predictions of failures to be higher based on cost (obvious in view of combination; both Ristovski et al (paragraphs 0004 and 0009) and Zhang et al NPL (page 162415, column 1, paragraph 1; page 162416, column 1, paragraph 2; page 162417, column 2, paragraph 4; and page 162422, column 2, paragraph 2) disclose cost considerations.)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Kocberber et al (US PgPub 20200104200) discloses disk drive failure prediction with neural networks.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEONARD S LIANG whose telephone number is (571) 272-2148. The examiner can normally be reached M-F 10:00 AM - 7:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ARLEEN M VAZQUEZ, can be reached at (571) 272-2619. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LEONARD S LIANG/
Examiner, Art Unit 2857
12/04/25

/ARLEEN M VAZQUEZ/
Supervisory Patent Examiner, Art Unit 2857

Prosecution Timeline

Dec 10, 2020
Application Filed
Jun 03, 2022
Non-Final Rejection — §103
Jul 29, 2022
Applicant Interview (Telephonic)
Aug 01, 2022
Examiner Interview Summary
Sep 06, 2022
Response Filed
Sep 10, 2022
Final Rejection — §103
Nov 30, 2022
Applicant Interview (Telephonic)
Nov 30, 2022
Examiner Interview Summary
Dec 15, 2022
Response after Non-Final Action
Jan 10, 2023
Applicant Interview (Telephonic)
Jan 11, 2023
Response after Non-Final Action
Mar 15, 2023
Request for Continued Examination
Mar 20, 2023
Response after Non-Final Action
Aug 23, 2025
Non-Final Rejection — §103
Oct 27, 2025
Response Filed
Dec 04, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554035
CORROSION EVALUATION OF NESTED CASINGS VIA PULSED EDDY CURRENT
2y 5m to grant · Granted Feb 17, 2026

Patent 12517088
METHOD FOR SELECTING MATERIAL FOR ORGANIC LIGHT-EMITTING DEVICE
2y 5m to grant · Granted Jan 06, 2026

Patent 12405606
SYSTEM AND METHOD FOR PERFORMANCE AND HEALTH MONITORING TO OPTIMIZE OPERATION OF A PULVERIZER MILL
2y 5m to grant · Granted Sep 02, 2025

Patent 12385384
RATE OF PENETRATION (ROP) OPTIMIZATION ADVISORY SYSTEM
2y 5m to grant · Granted Aug 12, 2025

Patent 12369823
OPTIONAL SENSOR CALIBRATION IN CONTINUOUS GLUCOSE MONITORING
2y 5m to grant · Granted Jul 29, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
62%
Grant Probability
65%
With Interview (+2.9%)
3y 9m
Median Time to Grant
High
PTA Risk
Based on 629 resolved cases by this examiner. Grant probability derived from career allow rate.
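The footnote above says the grant probability is derived from the examiner's career allow rate. As a minimal sketch of that arithmetic (not the vendor's actual model; the function name `allow_rate` is illustrative), using the figures stated on this page — 388 grants out of 629 resolved cases and a +2.9% interview lift:

```python
# Illustrative derivation of the dashboard's headline projections from
# the examiner's career record shown on this page. Only the two inputs
# (388 granted / 629 resolved) and the +2.9% interview lift come from
# the page; everything else here is a simplifying assumption.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: fraction of resolved cases that granted."""
    return granted / resolved

base = allow_rate(388, 629)       # ~0.617, displayed as 62%
with_interview = base + 0.029     # additive +2.9% lift, ~65%

print(f"Grant probability: {base:.0%}")            # prints "Grant probability: 62%"
print(f"With interview: {with_interview:.0%}")     # prints "With interview: 65%"
```

Treating the interview lift as a simple additive adjustment is an assumption; the tool may condition on more than the career aggregate.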
