Prosecution Insights
Last updated: April 19, 2026
Application No. 17/807,054

FAULT DETECTION IN NEURAL NETWORKS

Non-Final Office Action — §102, §103
Filed: Jun 15, 2022
Examiner: TRUONG, LOAN
Art Unit: 2114
Tech Center: 2100 — Computer Architecture & Software
Assignee: Arm Limited
OA Round: 1 (Non-Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 4m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 77% (above average; 458 granted / 594 resolved; +22.1% vs TC avg)
Interview Lift: +12.8% (moderate; based on resolved cases with an interview)
Typical Timeline: 3y 4m average prosecution
Career History: 626 total applications across all art units; 32 currently pending

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 25.0% (-15.0% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)

Based on career data from 594 resolved cases; Tech Center averages are estimates.

Office Action

Rejections: §102, §103
DETAILED ACTION

This Office action is in response to application 17/807,054, filed June 15, 2022. Claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on June 15, 2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement was considered by the Examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 8-11, 13-16, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Smyth et al. (US 2021/0028558).

In regard to claim 1, Smyth et al. teach a method of performing fault detection during computations relating to a neural network comprising a first neural network layer and a second neural network layer in a data processing system (neural network employed by the system may be represented by a multilayer perceptron (MLP), para. 26, fig. 5), the method comprising: scheduling computations onto data processing resources for the execution of the first neural network layer and the second neural network layer (model employed for performing the feature extraction and clustering may be implemented by a neural network, para. 25), wherein the scheduling includes: for a given one of the first neural network layer and the second neural network layer, scheduling a respective given one of a first computation and a second computation as a non-duplicated computation, in which the given computation is at least initially scheduled to be performed only once during the execution of the given neural network layer (layers 520A and 520L, fig. 5, para. 26); and, for the other of the first and second neural network layers, scheduling the respective other of the first and second computations as a duplicated computation, in which the other computation is at least initially scheduled to be performed at least twice during the execution of the other neural network layer to provide a plurality of outputs (nodes of layers 520A and 520L, 550B-550K and 560B-560M, fig. 5, para. 26); performing computations in the data processing resources in accordance with the scheduling (performing a clustering technique, para. 29, fig. 5); and, comparing the outputs from the duplicated computation to selectively provide a fault detection operation during processing of the other neural network layer (compared and the error is propagated back to the previous layers of the neural network, para. 29).

In regard to claim 2, Smyth et al. teach the method according to claim 1, comprising: for the given neural network layer, scheduling the given computation onto a first of a plurality of computing components, such that the given computation is scheduled as a non-duplicated computation in which the first component performs a computation which is not scheduled to be performed by any other of the plurality of computing components (each artificial neuron processes one or more input signals and transmits the output signal to one or more neighboring artificial neurons, para. 25, fig. 5, nodes 550A-550K); and, for the other neural network layer, scheduling the other computation onto the first component and onto a second, different, one of the plurality of computing components, each of the first and second components providing a respective one of said plurality of outputs (each artificial neuron processes one or more input signals and transmits the output signal to one or more neighboring artificial neurons, para. 25, fig. 5, nodes 560A-560M).

In regard to claim 3, Smyth et al. teach the method of claim 2, wherein the scheduling includes, for the given neural network layer, scheduling a third computation, different to the given, non-duplicated, computation, onto the second component (the model employed for performing the feature extraction and clustering is implemented by a neural network … with a multitude of nodes called artificial neurons, such that each artificial neuron processes one or more input signals and transmits the output signal to one or more neighboring artificial neurons … the output may be applied to a linear combination, or the network may be trained by processing examples (training data sets), para. 25, fig. 5).

In regard to claim 4, Smyth et al. teach the method of claim 2, wherein the scheduling includes, for the given neural network layer, scheduling the second component to be placed in a low-power state in which no computation is performed (other hyperparameters of the model may include the number of nodes in each layer, the activation function types, etc., para. 26; it is noted that nodes exceeding the number required by the model are deemed inactive, which is equated to a low-power state).

In regard to claim 8, Smyth et al. teach the method of claim 1, wherein the first neural network layer and the second neural network layer are executed during the performance of one inference or one training run of a neural network (the model for performing the feature extraction may be implemented by a neural network … a neural network may be trained by processing examples, para. 25), each layer being a different layer of the neural network (an implementation could be operations performed by the same neural network, in which a first subset of layers performs the feature extraction, a second subset of layers performs the clustering, while the remaining layers perform the regression tasks, para. 30).

In regard to claim 9, Smyth et al. teach a data processing system configured to perform fault detection during computations, the data processing system comprising: control circuitry (processing device, fig. 8, para. 41); and, one or more computing components configured to provide data processing resources (the processing device may include one or more application processors, para. 42), wherein the control circuitry is configured to schedule computations onto the plurality of data processing resources for the execution of a first neural network layer and a second neural network layer (the processing device may receive a plurality of values … and may employ a machine learning model to perform a feature extraction, para. 45-46), including: for a given one of the first neural network layer and the second neural network layer, scheduling a respective given one of a first computation and a second computation as a non-duplicated computation, in which the given computation is at least initially scheduled to be performed only once during the execution of the given neural network layer (layers 520A and 520L, fig. 5, para. 26); and, for the other of the first and second neural network layers, scheduling the respective other of the first and second computations as a duplicated computation, in which the other computation is at least initially scheduled to be performed at least twice during the execution of the other neural network layer to provide a plurality of outputs (nodes of layers 520A and 520L, 550B-550K and 560B-560M, fig. 5, para. 26), wherein the data processing system is configured to compare the outputs from the duplicated computation to selectively provide a fault detection operation during processing of the other neural network layer (compared and the error is propagated back to the previous layers of the neural network, para. 29).

In regard to claim 10, Smyth et al. teach a method of generating a hardware configuration addressing an operational performance target for a data processing system that is programmable to execute a first neural network layer and a second neural network layer, the method comprising: determining a first operation for one of the first neural network layer and second neural network layer (model employed for performing the feature extraction and clustering may be implemented by a neural network, para. 25); determining a first fault detection operation for the other of the first neural network layer and the second neural network layer, wherein the first operation and the first fault detection operation may differ from one another and wherein a combination of the first operation and the first fault detection operation can address the operational performance target for the neural network (compared and the error is propagated back to the previous layers of the neural network, para. 29); and, determining a hardware configuration for the data processing system, wherein the hardware configuration is operable to provide the first operation and the first fault detection operation (if the value exceeds the predetermined threshold value, the processing device may identify an antenna array element and/or a design parameter of the antenna array that has caused the angular resolution value to exceed … notify and take remedial action, para. 58-59).

In regard to claim 11, Smyth et al. teach the method of claim 10, further comprising providing a data processing system that comprises the hardware configuration that is operable to provide the first operation and the first fault detection operation (identify one or more sub-optimal sections of the antenna array response, and further identify the antenna array elements corresponding to the identified sub-optimal sections and design parameters that are likely to have caused the sub-optimal antenna response, para. 58).

In regard to claim 13, Smyth et al. teach the method of claim 10, wherein the data processing system comprises a neural processing unit that is programmable to execute the neural network (neural network employed by the system, para. 26, fig. 5).

In regard to claim 14, Smyth et al. teach the method of claim 10, in which determining the first operation and/or the first fault detection operation comprises: determining a property of at least the first neural network layer (predetermined threshold, para. 57-58); and, using the determined property to determine whether fault detection should be enabled, or a suitable fault detection operation that should be used, for the first neural network layer, in order to address the operational performance target for the neural network (responsive to determining that the angular resolution value exceeds the predetermined threshold value, para. 58).

In regard to claim 15, Smyth et al. teach the method of claim 10, comprising: determining a property of at least the first neural network layer (predetermined threshold, para. 57-58); and, configuring the first operation and/or the first fault detection operation in view of the determined property of one or more layers of the neural network, in order to address the operational performance target for the neural network (responsive to determining that the angular resolution value exceeds the predetermined threshold value, para. 58).

In regard to claim 16, Smyth et al. teach the method of claim 10, wherein determining the first operation and/or the first fault detection operation comprises considering at least one of: the susceptibility of and impact of error of a first component; a size of a first component; a number of processing elements within a first component; an intended or potential function of a first component; and, a potential contribution of a first component, to meeting the operational performance target for the data processing system (responsive to determining that the angular resolution value exceeds the predetermined threshold value, para. 58).

In regard to claim 20, Smyth et al. teach a non-transitory computer-readable storage medium comprising computer-executable instructions stored thereon which, when executed by at least one processor, cause the at least one processor to generate a hardware configuration addressing an operational performance target for a data processing system that is programmable to execute a first neural network layer and a second neural network layer (neural network employed by the system may be represented by a multilayer perceptron (MLP), para. 26, fig. 5), the instructions comprising the steps of: determining a first operation for one of the first neural network layer and second neural network layer (model employed for performing the feature extraction and clustering may be implemented by a neural network, para. 25); determining a first fault detection operation for the other of the first neural network layer and the second neural network layer, wherein the first operation and the first fault detection operation may differ from one another and wherein a combination of the first operation and the first fault detection operation can address the operational performance target for the neural network (compared and the error is propagated back to the previous layers of the neural network, para. 29); and, determining a hardware configuration for the data processing system, wherein the hardware configuration is operable to provide the first operation and the first fault detection operation (if the value exceeds the predetermined threshold value, the processing device may identify an antenna array element and/or a design parameter of the antenna array that has caused the angular resolution value to exceed … notify and take remedial action, para. 58-59).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Smyth et al. (US 2021/0028558) in further view of Dikici et al. (US 2023/0195831).

In regard to claim 5, Smyth et al. do not explicitly teach, but Dikici et al. teach, the method of claim 2, wherein the first and second components comprise multiply accumulator engines under the control of central network control circuitry (the system may also comprise one or more accumulators 2204, fig. 22, para. 85, 91). It would have been obvious to modify the method of Smyth et al. by adding the accumulators of Dikici et al. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would aid in performing a convolution transpose operation (abstract).

In regard to claim 6, Smyth et al. do not explicitly teach, but Dikici et al. teach, the method of claim 2, wherein the first and second components comprise programmable compute engines under the control of central network control circuitry (the system 2200 comprises one or more convolution engines and one or more accumulators, fig. 22, para. 85). Refer to claim 5 for the motivational statement.

**************

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Smyth et al. (US 2021/0028558) in view of Dikici et al. (US 2023/0195831), in further view of Denison et al. (US 2007/0142936).

In regard to claim 7, Smyth et al. and Dikici et al. do not explicitly teach, but Denison et al. teach, the method of claim 5, wherein at least part of the central network control circuitry is duplicated in hardware to increase the resilience of the central network control circuitry (the advance control block can be executed by the controller 11A and a copy is located in the redundant controllers 11B, para. 31). It would have been obvious to modify the method of Smyth et al. and Dikici et al. by adding the redundant controllers of Denison et al. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would provide redundancy in case the primary controller 11A fails (para. 31).

**************

Claims 12 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Smyth et al. (US 2021/0028558) in further view of Zhang et al. (US 2019/0122104).

In regard to claim 12, Smyth et al. do not explicitly teach, but Zhang et al. teach, the method of claim 10, wherein the operational performance target relates to a resilience target for the neural network (the system can include a deep neural network builder component that can include a neural network training component and a neural network duplication component, fig. 1, para. 23). It would have been obvious to modify the method of Smyth et al. by adding the neural network duplication component of Zhang et al. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would aid in building a binary neural network architecture (para. 22-23).

In regard to claim 17, Smyth et al. do not explicitly teach, but Zhang et al. teach, the method of claim 10, wherein the step of determining a hardware configuration for the data processing system comprises determining whether to duplicate some or all of the hardware comprised within the first component (the neural network duplication component may train a copy of the first neural network to determine whether a second class exists, para. 27). Refer to claim 12 for the motivational statement.

In regard to claim 18, Smyth et al. do not explicitly teach, but Zhang et al. teach, the method of claim 10, wherein the step of determining the first fault detection operation for the first component comprises determining whether a computation that a first processing element within the first component is operable to make should also be carried out in a first processing element within a different component, which can be configured to make duplicated computations with the first component (can train the copy of the first neural network to form a second neural network, para. 27). Refer to claim 12 for the motivational statement.

**************

Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Smyth et al. (US 2021/0028558) in further view of Denison et al. (US 2007/0142936).

In regard to claim 19, Smyth et al. do not explicitly teach, but Denison et al. teach, the method of claim 10, wherein the step of determining the first fault detection operation for the first component comprises determining whether operation of the first component can be monitored without duplicating all the hardware within the first component, in order to address the operational performance target for the data processing system (the controller 11A may include a number of single-loop SISO control routines, para. 31). It would have been obvious to modify the method of Smyth et al. by adding the redundant controllers of Denison et al. A person of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make the modification because it would provide redundancy in case the primary controller 11A fails (para. 31).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.

Yu et al. (US 12/468,946): layer of neural network and comparing threshold
Semenov (US 11,816,165): neural network output compared with training output
Gao et al. (US 2022/0398456): using a neural network
Chin et al. (US 2021/0141697): mission-critical with multi-layer fault tolerance support
Chen et al. (US 10,901,815): output layer of neural network and standard correction mode
Ting et al. (US 10,867,098): neural network modeling
Yao et al. (US 2020/0117997): neural network training
Guo et al. (US 2020/0026988): using and training deep neural networks

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOAN TRUONG, whose telephone number is 408-918-7552. The examiner can normally be reached 10AM-6PM PST, M-F.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thomas Ashish, can be reached at 571-272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Loan L.T. Truong/
Primary Examiner, Art Unit 2114
Loan.truong@uspto.gov

Prosecution Timeline

Jun 15, 2022
Application Filed
Jan 10, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591485: STORAGE SYSTEM AND MANAGEMENT METHOD FOR STORAGE SYSTEM
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12585557: SYNCHRONIZATION OF CONTAINER ENVIRONMENTS TO MAINTAIN AVAILABILITY FOR A PREDETERMINED ZONE
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579031: Read Data Path for a Memory System
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12561212: METHOD AND APPARATUS FOR PHASED TRANSITION OF LEGACY SYSTEMS TO A NEXT GENERATION BACKUP INFRASTRUCTURE
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12554581: A MULTI-PART COMPARE AND EXCHANGE OPERATION
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 77%
With Interview: 90% (+12.8%)
Median Time to Grant: 3y 4m
PTA Risk: Low

Based on 594 resolved cases by this examiner. Grant probability is derived from the career allow rate.
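As a quick sanity check, the headline figures above follow from the examiner's career statistics with simple arithmetic. The sketch below is illustrative only; the additive interview-lift model is an assumption inferred from how the dashboard presents the +12.8% figure.

```python
# Illustrative arithmetic only; the additive interview-lift model is an
# assumption, not a documented formula from this report.

granted = 458          # from "458 granted / 594 resolved"
resolved = 594
interview_lift = 12.8  # percentage points, from "+12.8% Interview Lift"

allow_rate = 100 * granted / resolved          # career allow rate, in percent
with_interview = allow_rate + interview_lift   # assumed additive model

print(f"Career allow rate: {allow_rate:.0f}%")                     # 77%
print(f"Grant probability with interview: {with_interview:.0f}%")  # 90%
```

The rounded outputs (77% and 90%) match the dashboard's headline numbers, which suggests the "with interview" figure is simply the allow rate plus the interview lift.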
