Prosecution Insights
Last updated: April 19, 2026
Application No. 17/487,497

METHOD AND SYSTEM FOR PROVABLY ROBUST CLASSIFICATION WITH MULTICLASS ENABLED DETECTION OF ADVERSARIAL EXAMPLES

Non-Final OA: §101, §103
Filed: Sep 28, 2021
Examiner: MILLER, ALEXANDRIA JOSEPHINE
Art Unit: 2142
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Robert Bosch GmbH
OA Round: 5 (Non-Final)
Grant Probability: 18% (At Risk)
Projected OA Rounds: 5-6
Estimated Time to Grant: 4y 5m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 18% (5 granted / 27 resolved; -36.5% vs TC avg) - grants only 18% of cases
Interview Lift: +71.4% (strong; resolved cases with interview)
Typical Timeline: 4y 5m average prosecution; 40 applications currently pending
Career History: 67 total applications across all art units

Statute-Specific Performance

§101: 32.6% (-7.4% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 27 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Claims 1, 3-15, 17-20 are presented for examination. This office action is in response to the submission of the application on 30-DECEMBER-2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 07-DECEMBER-2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 30-DECEMBER-2025 has been entered.

Response to Amendment

The amendment filed 30-DECEMBER-2025 in response to the non-final office action mailed 01-OCTOBER-2025 has been entered. Claims 1, 3-15, 17-20 remain pending in the application.

With regard to the non-final office action's rejection under § 101, the amendments to the claims are not sufficient to overcome the original rejection of the claims as directed to an abstract idea. Regarding the applicant's arguments that the amended limitations overcome the current § 101 rejection, the examiner respectfully disagrees. Regarding the augmentation of training data, this is performed by classifying inputs into particular classes. Classification at a high level would be considered an evaluation, as the sorting of various data points and objects is performable by the human mind.
Furthermore, the training of a classifier is well-understood in the art, and a classifier containing multiple classes, including abstain classes, would not overcome this finding (MPEP 2106.05(d)). Finally, the outputting of the classification is seen to be merely a conclusory action that is separate from the claimed invention (MPEP 2106.05(g)) and does not provide any improvement to the technology.

With regard to the non-final office action's rejections under § 103, the amendments to the claims necessitated a new consideration of the art. After this consideration, the examiner respectfully disagrees with the applicant's arguments that the art referenced in the previous office action does not teach the amended claim limitations. A new § 103 rejection over the prior art has been provided.

Regarding the new limitations of claim 1, these limitations have been addressed in the § 103 rejection. Asbag and Metzler disclose generating augmented training data by augmenting a training data set using at least a term promoting classification of adversarial inputs into respective additional abstain classes of at least two additional abstain classes. Metzler recites: "In case of supervised machine learning also labeling information (i.e. assignment of the object classes to the data) is necessary." This teaches the use of supervised machine learning, which involves annotating training data with labels. Here, Metzler uses labeling to assign different object classes to the data. This would be training data augmented by a term promoting classification of inputs into each of the additional classes of the plurality of classes. Furthermore, with Asbag's teachings of abstain classes below, it would have been obvious to combine this augmentation with the use of abstain classes as disclosed in the claim limitation.
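As a rough illustration of the kind of augmentation this limitation describes - appending adversarially perturbed copies of the training inputs labeled with dedicated abstain classes - here is a minimal sketch. Every name here is hypothetical, and uniform noise stands in for a real adversarial attack; none of it is taken from the application or the cited references.

```python
import numpy as np

def augment_with_abstain_labels(X, y, n_correct, n_abstain, eps, rng):
    """Append perturbed copies of X labeled with abstain classes.

    Hypothetical sketch: a real system would generate the perturbed
    inputs with an actual adversarial attack; uniform noise within an
    L-infinity ball of radius eps stands in for that here. Abstain
    classes are indexed n_correct .. n_correct + n_abstain - 1.
    """
    X_adv = X + rng.uniform(-eps, eps, size=X.shape)
    # Assign each perturbed input to one of the abstain classes.
    y_adv = n_correct + rng.integers(0, n_abstain, size=len(y))
    return np.concatenate([X, X_adv]), np.concatenate([y, y_adv])

# Toy data: 5 samples, 3 features, 4 correct classes, 2 abstain classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
y = rng.integers(0, 4, size=5)
X_aug, y_aug = augment_with_abstain_labels(X, y, n_correct=4,
                                           n_abstain=2, eps=0.1, rng=rng)
```

The augmented labels in the abstain range are what a loss term could then use to promote classification of adversarial inputs into those classes.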
Asbag discloses training a classifier using augmented training data, wherein the classifier includes a plurality of classes, including one or more correct classes and at least two additional abstain classes. Asbag teaches training a classifier that has the ability to classify data into rejection bins in response to a low confidence score (Column 2, line 35 - Column 3, line 15). These bins are classes that have been bound together (Column 3, lines 60-65) and act as abstain classes. These bins are also sorted by priority, with Key Defects of Interest being the highest-priority abstain class (Column 2, line 35 - Column 3, line 15). This would be analogous to detecting an additional abstain class in response to obtaining the worst-case bound, and further examples of abstain classes are given to provide at least two additional classes. Metzler's description recites: "Probabilistic classification algorithms further use statistical inference to find the best class for a given instance… consequently, providing an option to abstain a choice when its confidence value is too low." Furthermore, Metzler teaches classification algorithms that sort inputs into multiple correct classes, which would provide one or more correct classes when combined with Asbag.

Asbag discloses wherein each additional abstain class of the at least two additional abstain classes is determined in response to at least bounding the input data. Asbag teaches the bounding of input data by the confidence thresholds that the data correspond to upon processing, which classifies them into rejection bins of multiple abstain classes (Column 2, line 35 - Column 3, line 15). This would be analogous to each additional abstain class of the plurality of additional abstain classes being determined in response to at least bounding the input data.
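The bound-based routing between correct classes and abstain classes can be illustrated with a small decision rule over certified per-class logit bounds. This is an illustrative sketch only: the function name and the specific rule are assumptions, not the claim's or Asbag's actual procedure.

```python
import numpy as np

def classify_or_abstain(lower, upper, n_correct):
    """Pick a class from per-class certified logit bounds.

    lower[c] / upper[c] bound logit c over all allowed perturbations of
    the input; classes 0 .. n_correct-1 are correct classes and the rest
    are abstain classes. Hypothetical rule: commit to a correct class
    only when its lower bound exceeds every rival correct class's upper
    bound; otherwise fall back to the abstain class with the largest
    lower bound.
    """
    best = int(np.argmax(lower[:n_correct]))
    rivals = np.delete(upper[:n_correct], best)
    if rivals.size == 0 or lower[best] > rivals.max():
        return best
    return n_correct + int(np.argmax(lower[n_correct:]))

# Certified win for class 1: its lower bound beats all rival uppers.
confident = classify_or_abstain(np.array([0.1, 2.0, 0.3, 0.2, 0.0]),
                                np.array([0.5, 2.5, 0.6, 0.4, 0.3]),
                                n_correct=3)   # -> 1
# Overlapping bounds: route to the stronger abstain class (index 4).
unsure = classify_or_abstain(np.array([0.1, 0.2, 0.3, 0.6, 0.9]),
                             np.array([0.9, 0.8, 0.6, 0.7, 1.0]),
                             n_correct=3)      # -> 4
```

With two abstain slots (indices 3 and 4 here), the rule mirrors the idea of multiple prioritized rejection bins rather than a single reject option.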
Asbag discloses determining a first lower bound for each of the one or more correct classes of the plurality of classes; determining, using interval bound propagation, a second lower bound for each of the at least two additional abstain classes; and using at least one of the worst-case bound, the first lower bound, and the second lower bound, wherein assignment of the input data to any respective class of the one or more correct classes and the at least two additional abstain classes is a valid assignment. Asbag teaches that when detecting defects, it may determine, for each of the defect classes (e.g., the correct classes), a particular confidence threshold that the generated classification must clear. This confidence threshold would be analogous to a first lower bound for each of the one or more correct classes of the plurality of classes. Likewise, the classes it generates a threshold for include the False class, which would be an abstain class, therefore generating a second lower bound for an additional abstain class. As this confidence threshold is required for classification, Asbag uses the first lower bound and second lower bound in order to output a classification (Column 2, line 35 - Column 3, line 15). Furthermore, all the assignments of Asbag above, whether to one of the correct classes or an abstain class, would be valid, as Asbag is able to classify input data as such, creating a valid output.

Therefore, the examiner believes that Metzler in view of Asbag fully teaches the newly amended limitations. The dependent claims would likewise be obvious over their respective prior art, as claims 1, 8, and 14, upon which they rely, have been fully disclosed by Metzler in view of Asbag.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1, 3-15, 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP 2106.04(a)(2)(I): "The mathematical concepts grouping is defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations." MPEP 2106.04(a)(2)(III): "Accordingly, the 'mental processes' abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions." Further, the MPEP recites: "The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation."

Regarding claim 1: Step 2A, Prong 1 will now be evaluated for this claim. A judicial exception is recited in this claim as it recites a mathematical concept. The following are mathematical calculations:

"obtaining a worst-case bound on a classification error and loss for perturbed versions of the input data, utilizing at least bounding of one or more hidden layer values by an adversarial norm constraint" - Obtaining a worst-case bound on a classification error and loss for perturbed versions of the data would be a mathematical calculation, as these bounds are described by mathematical equations within the specification.

"in response to not exceeding the convergence threshold, continuing to train the classifier" - Not exceeding the convergence threshold would be a numerical inequality, which would be a mathematical calculation. The training of the classifier would be insignificant extra-solution activity, as discussed further below.
"in response to exceeding a convergence threshold" - Exceeding the convergence threshold would be a numerical inequality, which would be a mathematical calculation.

"determining, using interval bound propagation, a second lower bound for each of the at least two abstain classes" - Using interval bound propagation describes the use of a particular mathematical process in order to calculate the second lower bound, which would be a mathematical calculation.

A judicial exception is also recited in this claim as it recites a mental process. The following is an evaluation:

"classifying, using one or more layers of the trained classifier, the input data as an abstain class in response to the input data including at least one of the perturbation and adversarial information" - Classification as described at a high level is an evaluation that can take place within the human mind, which is able to categorize objects. In this particular case, the result of the classification is that the classification is unknown due to perturbations and adversarial information in the data, which is likewise performable by the human mind, as a human would be able to refrain from further classification of an object upon recognizing that there is unusual data regarding it.

"generating augmented training data by augmenting a training data set using at least a term promoting classification of adversarial inputs into respective additional abstain classes of at least two additional abstain classes" - Augmenting training data with a term that indicates a classification of adversarial inputs - for example, a label indicating a greater chance that an adversarial input is present, or the annotation system discussed in the present application's specification - would be a version of labeling training data. Labeling training data would be an evaluation, as it is a determination of the specific traits of training data.
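Interval bound propagation itself is a standard certified-defense technique: push an interval around the input through each layer, using |W| to bound how much an affine layer can stretch the interval. A minimal NumPy sketch under assumed toy weights (the network, weights, and epsilon are illustrative placeholders, not taken from the application):

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    center, radius = (hi + lo) / 2.0, (hi - lo) / 2.0
    c, r = W @ center + b, np.abs(W) @ radius
    return c - r, c + r

def ibp_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical two-layer network and an L-infinity perturbation budget.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
x, eps = np.array([0.5, -0.2, 0.1]), 0.05

lo, hi = ibp_affine(x - eps, x + eps, W1, b1)
lo, hi = ibp_relu(lo, hi)
lo, hi = ibp_affine(lo, hi, W2, b2)

# lo[c] is now a certified lower bound on logit c over every input in
# the eps-ball, and hi[c] an upper bound; a worst-case margin of class
# 0 over class 1 across all perturbations is then lo[0] - hi[1].
worst_case_margin = lo[0] - hi[1]
```

The "second lower bound" recited for the abstain classes would correspond to running the same propagation for abstain-class logits.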
"determining a first lower bound for each of the one or more correct classes of the plurality of classes" - Determining a lower bound could be considered "deciding upon" a lower bound for the correct classes, which would be accomplishable in the human mind, as a human would be able to decide on a threshold value.

Step 2A, Prong 2 will now be evaluated for this claim. MPEP 2106.05(g) (Insignificant Extra-Solution Activity) has found mere data gathering and post-solution activity to be insignificant extra-solution activity.

The following step is mere data gathering: "receiving an input data from a sensor, wherein the input data includes a perturbation, wherein the input data is indicative of image, radar, sonar, or sound information".

The following steps are merely post-solution activity: "using at least one of the worst-case bound, the first lower bound, and the second lower bound, outputting a classification in response to the input data indicating one of the plurality of classes, wherein assignment of the input data to any respective class of the one or more correct classes and the at least two abstain classes is a valid assignment" and "outputting a trained classifier wherein the trained classifier is configured to detect at least one additional abstain class of the at least two additional abstain classes in response to obtaining the worst-case bound". Outputting is a form of post-solution activity.

MPEP 2106.05(d) (Well-Understood, Routine, Conventional Activity) has found certain computer functions to be insignificant extra-solution activity. The following step is well-understood, routine, conventional activity within the field: "training a classifier using the augmented training data, wherein the classifier includes a plurality of classes, including one or more correct classes and the at least two additional abstain classes, and wherein each additional abstain class of the at least two additional abstain classes is determined in response to at least bounding the input data". Training a classifier would be an example of performing repetitive calculations using a computer. Furthermore, the description of the classifier itself does not introduce anything novel about its training, so the additional limitation does not render the training other than well-understood, routine, and conventional.

The additional elements have been considered both individually and as an ordered combination in order to determine whether they integrate the exception into a practical application. No meaningful limits are imposed on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B will now be discussed with regard to this claim. The claim does not provide an inventive concept. There is no additional insignificant extra-solution activity, as identified in Step 2A, Prong Two, that provides an inventive concept. Adding insignificant extra-solution activity to the judicial exception - e.g., mere data gathering in conjunction with a law of nature or abstract idea, such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process - does not provide an inventive concept, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)). The additional elements have been considered both individually and as an ordered combination as to whether they warrant a finding of significantly more. The claim is ineligible.
Regarding claim 3, which is dependent on claim 1: This claim further details the plurality of classes discussed in claim 1. Specifying the plurality of classes does not overcome the parent claim's rejection. This claim is rejected for incorporating the parent claim in full. This claim is ineligible.

Regarding claim 4, which is dependent on claim 1: A judicial exception is recited in this claim as it recites a mental process. The following is an evaluation: "determining a hidden value upper bound and hidden value lower bound associated with a hidden value of a network layer of the machine-learning network". This claim is ineligible.

Regarding claim 5, which is dependent on claim 1: This claim further details the one or more hidden layer values discussed in claim 1. Specifying the one or more hidden layer values does not overcome the parent claim's rejection. This claim is rejected for incorporating the parent claim in full. This claim is ineligible.

Regarding claim 6, which is dependent on claim 1: This claim further details the plurality of classes discussed in claim 1. Specifying the plurality of classes does not overcome the parent claim's rejection. This claim is rejected for incorporating the parent claim in full. This claim is ineligible.

Regarding claim 7, which is dependent on claim 1: A judicial exception is recited in this claim as it recites a mathematical concept. The following is a mathematical calculation: "further comprising bounding a training objective function by a worst-case upper bound utilizing an interval bound propagation (IBP) technique". This claim is ineligible.

Regarding claim 8: Step 2A, Prong 1 will now be evaluated for this claim. A judicial exception is recited in this claim as it recites a mathematical concept. The following is a mathematical calculation: "in response to not exceeding the convergence threshold, continue to train the classifier."
Not exceeding the convergence threshold would be a numerical inequality, which would be a mathematical calculation. The training of the classifier would be insignificant extra-solution activity, as discussed further below.

"in response to exceeding a convergence threshold" - Exceeding the convergence threshold would be a numerical inequality, which would be a mathematical calculation.

"determining, using interval bound propagation, a second lower bound for each of the at least two abstain classes" - Using interval bound propagation describes the use of a particular mathematical process in order to calculate the second lower bound, which would be a mathematical calculation.

A judicial exception is also recited in this claim as it recites a mental process. The following is an evaluation:

"classify, using one or more layers of the trained classifier, the input data as an abstain class in response to the input data including at least one of the perturbation and adversarial information" - Classification as described at a high level is an evaluation that can take place within the human mind, which is able to categorize objects. In this particular case, the result of the classification is that the classification is unknown due to perturbations and adversarial information in the data, which is likewise performable by the human mind, as a human would be able to refrain from further classification of an object upon recognizing that there is unusual data regarding it.

"generating augmented training data by augmenting a training data set using at least a term promoting classification of adversarial inputs into respective additional abstain classes of at least two additional abstain classes" - Augmenting training data with a term that indicates a classification of adversarial inputs - for example, a label indicating a greater chance that an adversarial input is present, or the annotation system discussed in the present application's specification - would be a version of labeling training data. Labeling training data would be an evaluation, as it is a determination of the specific traits of training data.

"determining a first lower bound for each of the one or more correct classes of the plurality of classes" - Determining a lower bound could be considered "deciding upon" a lower bound for the correct classes, which would be accomplishable in the human mind, as a human would be able to decide on a threshold value.

Step 2A, Prong 2 will now be evaluated for this claim. The additional elements "an input interface" and "a processor, in communication with the input interface, wherein the processor is configured to" are interpreted as a general-purpose computer under MPEP 2106.05(f). Furthermore, MPEP 2106.05(g) (Insignificant Extra-Solution Activity) has found mere data gathering and post-solution activity to be insignificant extra-solution activity.

The following steps are mere data gathering: "receive an input data from a sensor via an input interface, wherein the input data is indicative of image, radar, sonar, or sound information" and "configured to receive input data from a sensor, wherein the sensor includes a video, radar, LiDAR, sound, sonar, ultrasonic, motion, or thermal imaging sensor".

The following steps are merely post-solution activity: "using at least one of the worst-case bound, the first lower bound, and the second lower bound, output a classification in response to the input data indicating one of the plurality of classes, wherein assignment of the input data to any respective class of the one or more correct classes and the at least two additional abstain classes is a valid assignment" and "output a trained classifier configured to detect at least one additional abstain class of the at least two additional abstain classes". In both cases, outputting is a form of post-solution activity.

MPEP 2106.05(d) (Well-Understood, Routine, Conventional Activity) has found certain computer functions to be insignificant extra-solution activity.
The following step is well-understood, routine, conventional activity within the field: "train a classifier using the augmented training data, wherein the classifier includes a plurality of classes, including the at least two additional abstain classes, wherein each additional abstain class of the at least two additional abstain classes is determined in response to at least bounding input data including one or more perturbations". Training a classifier would be an example of performing repetitive calculations using a computer. Furthermore, the description of the classifier itself does not introduce anything novel about its training, so the additional limitation does not render the training other than well-understood, routine, and conventional.

The additional elements have been considered both individually and as an ordered combination in order to determine whether they integrate the exception into a practical application. No meaningful limits are imposed on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B will now be discussed with regard to this claim. The claim does not provide an inventive concept. There is no additional insignificant extra-solution activity, as identified in Step 2A, Prong Two, that provides an inventive concept. Adding insignificant extra-solution activity to the judicial exception - e.g., mere data gathering in conjunction with a law of nature or abstract idea, such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process - does not provide an inventive concept, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)). The additional elements have been considered both individually and as an ordered combination as to whether they warrant a finding of significantly more. The claim is ineligible.
Regarding claim 9, which is dependent on claim 8: A judicial exception is recited in this claim as it recites a mathematical concept. The following is a mathematical calculation: "detect the at least one additional abstain class of the plurality of additional abstain classes in response to the input data including one or more perturbations". This claim is ineligible.

Regarding claim 10, which is dependent on claim 8: A judicial exception is recited in this claim as it recites a mathematical concept. The following is a mathematical calculation: "utilize interval bound propagation to compute […] associated with perturbed versions of the input data". This claim is ineligible.

Regarding claim 11, which is dependent on claim 10: A judicial exception is recited in this claim as it recites a mathematical concept. The following is a mathematical calculation: "compute an upper bound associated with training of the machine-learning network". This claim is ineligible.

Regarding claim 12, which is dependent on claim 8: A judicial exception is recited in this claim as it recites a mathematical concept. The following is a mathematical calculation: "compute an upper bound and lower bound of the input data". This claim is ineligible.

Regarding claim 13, which is dependent on claim 12: A judicial exception is recited in this claim as it recites a mathematical concept. The following is a mathematical calculation: "compute a hidden value upper bound and hidden value lower bound associated with the hidden value of a network layer". This claim is ineligible.
Regarding claim 14: Step 2A, Prong 1 will now be evaluated for this claim. A judicial exception is recited in this claim as it recites a mathematical concept. The following are mathematical calculations:

"obtain a worst case bound on a classification error and loss associated with perturbed versions of the input data, utilizing at least bounding of one or more hidden layer values" - Obtaining a worst-case bound on a classification error and loss for perturbed versions of the data would be a mathematical calculation, as these bounds are described by mathematical equations within the specification.

"in response to not exceeding the convergence threshold, continue to train the classifier" - Not exceeding the convergence threshold would be a numerical inequality, which would be a mathematical calculation. The training of the classifier would be insignificant extra-solution activity, as discussed further below.

"in response to exceeding a convergence threshold" - Exceeding the convergence threshold would be a numerical inequality, which would be a mathematical calculation.

"determining, using interval bound propagation, a second lower bound for each of the at least two additional abstain classes" - Using interval bound propagation describes the use of a particular mathematical process in order to calculate the second lower bound, which would be a mathematical calculation.
A judicial exception is also recited in this claim as it recites a mental process. The following is an evaluation:

"generating augmented training data by augmenting a training data set using at least a term promoting classification of adversarial inputs into respective additional abstain classes of at least two additional abstain classes" - Augmenting training data with a term that indicates a classification of adversarial inputs - for example, a label indicating a greater chance that an adversarial input is present, or the annotation system discussed in the present application's specification - would be a version of labeling training data. Labeling training data would be an evaluation, as it is a determination of the specific traits of training data.

"classify, using one or more layers of the trained classifier, the input data as an abstain class in response to the input data including at least one of the perturbation and adversarial information" - Classification as described at a high level is an evaluation that can take place within the human mind, which is able to categorize objects. In this particular case, the result of the classification is that the classification is unknown due to perturbations and adversarial information in the data, which is likewise performable by the human mind, as a human would be able to refrain from further classification of an object upon recognizing that there is unusual data regarding it.

"determining a first lower bound for each of the one or more correct classes of the plurality of classes" - Determining a lower bound could be considered "deciding upon" a lower bound for the correct classes, which would be accomplishable in the human mind, as a human would be able to decide on a threshold value.
Step 2A, Prong 2 will now be evaluated for this claim. The additional elements "a system comprising: a processor; and a memory including instructions that, when executed by the processor, cause the processor to" are interpreted as a general-purpose computer under MPEP 2106.05(f). Furthermore, MPEP 2106.05(g) (Insignificant Extra-Solution Activity) has found mere data gathering and post-solution activity to be insignificant extra-solution activity.

The following step is mere data gathering: "receive input data from a sensor, wherein the sensor includes a video, radar, LiDAR, sound, sonar, ultrasonic, motion, or thermal imaging sensor, wherein the input data is indicative of an image".

The following steps are merely post-solution activity: "using at least one of the worst-case bound, the first lower bound, and the second lower bound, output a classification in response to the input data indicating one of the plurality of classes, wherein assignment of the input data to any respective class of the one or more correct classes and the at least two additional abstain classes is a valid assignment" and "output a trained classifier configured to detect at least one additional abstain class of the at least two additional abstain classes". In both cases, outputting is a form of post-solution activity.

MPEP 2106.05(d) (Well-Understood, Routine, Conventional Activity) has found certain computer functions to be insignificant extra-solution activity.
The following step is well-understood, routine, conventional activity within the field: "train a classifier of a machine-learning network using the augmented training data, wherein the classifier includes a plurality of classes, including one or more correct classes and the at least two additional abstain classes, and wherein each additional abstain class of the at least two additional abstain classes is determined in response to at least bounding input data including one or more perturbations". Training a classifier would be an example of performing repetitive calculations using a computer. Furthermore, the description of the classifier itself does not introduce anything novel about its training, so the additional limitation does not render the training other than well-understood, routine, and conventional.

The additional elements have been considered both individually and as an ordered combination in order to determine whether they integrate the exception into a practical application. No meaningful limits are imposed on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B will now be discussed with regard to this claim. The claim does not provide an inventive concept. There is no additional insignificant extra-solution activity, as identified in Step 2A, Prong Two, that provides an inventive concept. Adding insignificant extra-solution activity to the judicial exception - e.g., mere data gathering in conjunction with a law of nature or abstract idea, such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process - does not provide an inventive concept, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)). The additional elements have been considered both individually and as an ordered combination as to whether they warrant a finding of significantly more. The claim is ineligible.
Regarding claim 15, which is dependent on claim 14: Generally linking a judicial exception to a particular field of use, i.e. Limiting the abstract idea of collecting information, analyzing it, and displaying certain results of the collection and analysis to data related to the electric power grid, because limiting application of the abstract idea to power-grid monitoring is simply an attempt to limit the use of the abstract idea to a particular technological environment, Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016) (see MPEP § 2106.05(g)) does not overcome a rejection. This claim is ineligible. Regarding claim 17, which is dependent on claim 14: This claim further details the plurality of classes discussed in claim 14. Specifying the plurality of classes does not overcome the parent claim’s rejection. This claim is rejected for incorporating the parent claim in full. This claim is ineligible. Regarding claim 18, which is dependent on claim 14: A judicial exception is recited in this claim as it recites a mathematical concept: The following is a mathematical calculation: compute an upper bound associated with training of the machine-learning network This claim is ineligible. Regarding claim 19, which is dependent on claim 14: A judicial exception is recited in this claim as it recites a mental process: The following is an evaluation: classify a non-perturbation class This claim is ineligible. Regarding claim 20, which is dependent on claim 14: This claim further details the machine learning network discussed in claim 14. Specifying the machine learning network does not overcome the parent claim’s rejection. This claim is rejected for incorporating the parent claim in full. This claim is ineligible. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 3, 6, 8, 14-15, 17, 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Metzler et al. (Pub. No. WO 2020088739 A1, filed on October 29th 2018, hereinafter Metzler) in view of Asbag et al. (Pub. No. US 11037286 B2, filed on September 28th 2017, hereinafter Asbag). Regarding claim 1: Claim 1 recites: A method for training a machine-learning network, the method comprising: receiving input data from a sensor, wherein the input data includes a perturbation, wherein the input data is indicative of image, radar, sonar, or sound information; obtaining a worst-case bound on a classification error and loss for perturbed versions of the input data, utilizing at least bounding of one or more hidden layer values by an adversarial norm constraint; generating augmented training data by augmenting a training data set using at least a term promoting classification of adversarial inputs into respective additional abstain classes of at least two additional abstain classes; training a classifier using the augmented training data, wherein the classifier includes a plurality of classes, including one or more correct classes and at least two additional abstain classes, wherein each additional abstain class of the at least two additional abstain classes is determined in response to at least bounding the input data; determining a first lower bound for each of the one 
or more correct classes of the plurality of classes; determining, using interval bound propagation, a second lower bound for each of the at least two additional abstain classes; using at least one of the worst-case bound, the first lower bound, and the second lower bound, outputting a classification in response to the input data indicating one of the plurality of classes, wherein the assignment of the input data to any respective class of the one or more correct classes and the at least two additional abstain classes is a valid assignment; in response to exceeding a convergence threshold: outputting a trained classifier, wherein the trained classifier is configured to detect at least one additional abstain class of the at least two additional abstain classes in response to obtaining the worst-case bound; and classifying, using one or more layers of the trained classifier, the input data as an abstain class in response to the input data including at least one of the perturbation and adversarial information; and in response to not exceeding the convergence threshold, continuing to train the classifier. Metzler discloses receiving input data from a sensor, wherein the input data includes a perturbation: “…The system further comprises communication means for transmitting data from the surveillance sensors to the central computing unit and state derivation means for analyses of the surveillance data and derivation of at least one state…” (Metzler) “…Ambiguous deductions can further be the result of unfavorable conditions for survey data acquisition such as poor light conditions, inauspicious robot position P10 when generating surveillance data or perturbing environmental influence…” (Metzler) Metzler teaches transmitting data from surveillance sensors to the central computing unit, where the input data may include perturbing environmental influence. 
This would be analogous to the central computing unit receiving input data from a sensor, wherein the data includes a perturbation. Metzler discloses wherein the input data is indicative of image, radar, sonar, or sound information: “The plurality of surveillance sensors… comprise for example one or more RGB camera… microphone…” (Metzler) Metzler teaches that the sensors may be, for example, an RGB camera or a microphone, which would be an example of image and sound information. Metzler discloses hidden layers: “The Neural Network 20 comprises an input layer 17, a hidden layer 18 and an output layer 19” (Metzler) Metzler teaches the use of hidden layers. Bounding is taught by Asbag further below. Metzler discloses outputting a classification in response to the input data indicating one of the plurality of classes: “…The outputted classification result 28 of the classification model 25 indicates the overall state of the surveillance object” (Metzler) Metzler teaches the outputting of a classification model which indicates the overall state of the object. This would be analogous to outputting a classification in response to the input data indicating one of the plurality of classes. Metzler discloses in response to exceeding a convergence threshold: outputting a trained classifier: “in case the probability is above a defined threshold such that considering the additional data, subsequent classification results in a probability below the defined threshold. Said otherwise, if there is a too high uncertainty or unreliability of a classification, the system retrieves automatically additional data about a state pattern resp. one or more facility elements, such that additional information (e.g. parameters or features) describing the state is available, allowing for a higher certainty of assignment as "critical" or "non-critical" or possibly as "normal" or "anomalous".” (Metzler) “At step 116, it is checked if the determined uncertainty 115 is above a defined threshold. 
If the result is "no", i.e. there is low uncertainty and the detected state 114 can be seen as correct or unambiguous, than the robot 100 continues its patrol resp. goes on to the next object to be surveyed” (Metzler) Metzler teaches a process that iteratively continues retrieving data if a value fails to meet a particular threshold, which discloses a further process continuing once a classifier exceeds a convergence threshold. Furthermore, Metzler explicitly describes the successful meeting of a convergence threshold as triggering additional actions as well, describing a robot that performs further actions when there is low uncertainty (i.e., the uncertainty has met the threshold). Metzler discloses classifying, using one or more layers of the trained classifier, the input data as an abstain class in response to the input data including at least one of the perturbation and adversarial information: Metzler teaches: “…Ambiguous deductions can further be the result of unfavourable conditions for survey data acquisition such as poor light conditions, inauspicious robot position P10 when generating surveillance data or perturbing environmental influence…” (Metzler) The ambiguous deduction would be analogous to the abstain class, as it is not classified into a particular classification other than unknown, and the perturbing environmental influence would be at least one of the perturbation and adversarial information from the input data. Metzler discloses and in response to not exceeding the convergence threshold, continuing to train the classifier: “In an optional further stage of this aspect of the invention, such a generic classifier and/or detector can additionally be post-trained by real world pictures. […] For example, the real world pictures on which the detector and/or classifier is applied can be used as additional training resource to enhance the detector and/or classifier, e.g. 
to improve its real world success rate” (Metzler) Metzler teaches the process of continuing to train a classifier in order to improve a particular value. This would be analogous to, in response to not exceeding the convergence threshold, continuing to train the classifier. Asbag discloses obtaining a worst-case bound on a classification error and loss for perturbed versions of the input data, utilizing at least bounding of one or more hidden layer values by an adversarial norm constraint: Asbag, in the same field of endeavor of reinforcement learning, teaches setting a confidence value that is balanced between enhancing the quality of classification and loss of defects of interest (Column 2 line 35 – Column 3 line 15). The confidence value would therefore be the worst-case bound, as it describes a performance metric that must be balanced between classification error and loss for perturbed versions of the data. Furthermore, Asbag teaches an adversarial norm constraint, as it teaches the use of a threshold for particular abstain classes (Column 2 line 35 – Column 3 line 15), wherein this would act as an adversarial norm constraint because it bounds the inputs that will be classified into correct classes by the model rather than abstain classes. Metzler and Asbag are analogous art to the present application because they are both in the same field of endeavor of reinforcement learning. Asbag and Metzler disclose generating augmented training data by augmenting a training data set using at least a term promoting classification of adversarial inputs into respective additional abstain classes of at least two additional abstain classes: Metzler recites: “In case of supervised machine learning also labeling information (i.e. assignment of the object classes to the data) is necessary.” This teaches the use of supervised machine learning, which involves annotating training data with labels. Here, Metzler uses labeling to assign different object classes to the data. 
This would be training data augmented by a term promoting classification of inputs into each of the additional classes of the plurality of classes. Furthermore, with Asbag’s teachings of abstain classes below, it would have been obvious to combine this augmentation and the use of abstain classes as disclosed in the claim limitation. Asbag discloses training a classifier using augmented training data, wherein the classifier includes a plurality of classes, including one or more correct classes and at least two additional abstain classes: Asbag teaches training a classifier that has the ability to classify data into rejection bins in response to a low confidence score (Column 2 line 35 – Column 3 line 15). These bins are classes that have been bound together (Column 3, lines 60-65), and act as abstain classes. These bins are also sorted by priority, with Key Defects of Interest being the highest priority abstain class (Column 2 line 35 – Column 3 line 15). This would be analogous to detecting an additional abstain class in response to obtaining the worst-case bound, and further examples of abstain classes are given to provide at least two additional classes. Metzler, description: “Probabilistic classification algorithms further use statistical inference to find the best class for a given instance… consequently, providing an option to abstain a choice when its confidence value is too low.” Furthermore, Metzler teaches classification algorithms that sort inputs into multiple correct classes, which would provide one or more correct classes when combined with Asbag. Asbag discloses wherein each additional abstain class of the at least two additional abstain classes is determined in response to at least bounding the input data: Asbag teaches the bounding of input data by the confidence thresholds that they correspond to upon processing, which classifies them into rejection bins of multiple abstain classes (Column 2 line 35 – Column 3 line 15). 
This would be analogous to each additional abstain class of the plurality of additional abstain classes being determined in response to at least bounding the input data. Asbag discloses determining a first lower bound for each of the one or more correct classes of the plurality of classes; determining, using interval bound propagation, a second lower bound for each of the at least two additional abstain classes; using at least one of the worst-case bound, the first lower bound, and the second lower bound, wherein assignment of the input data to any respective class of the one or more correct classes and the at least two additional abstain classes is a valid assignment: Asbag teaches that when detecting defects, it may determine, for each of the defect classes (i.e., the correct classes), a particular confidence threshold that the generated classification must clear. This confidence threshold would be analogous to a first lower bound for each of the one or more correct classes of the plurality of classes. Likewise, the classes it generates a threshold for include the False class, which would be an abstain class, therefore generating a second lower bound for an additional abstain class. As this confidence threshold is required for classification, Asbag uses the first lower bound and second lower bound in order to output a classification (Column 2 line 35 – Column 3 line 15). Furthermore, all the assignments of Asbag above, whether they be one of the correct classes or an abstain class, would be valid, as Asbag is able to classify input data as such, creating a valid output. Asbag discloses wherein the trained classifier is configured to detect at least one additional abstain class of the at least two additional abstain classes in response to obtaining the worst-case bound: Asbag teaches training a classifier that has the ability to classify data into rejection bins in response to a low confidence score (Column 2 line 35 – Column 3 line 15). 
These bins are classes that have been bound together (Column 3, lines 60-65), and act as abstain classes. These bins are also sorted by priority, with Key Defects of Interest being the highest priority abstain class. (Column 2 line 35 – Column 3 line 15). This would be analogous to detecting an additional abstain class in response to obtaining the worst-case bound. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Metzler and the teachings of Asbag. This would have provided the advantage of improving automated classification (Asbag, Column 4, lines 15-20). Regarding claim 3, which is dependent upon claim 1: Metzler in view of Asbag teaches the method of claim 1 upon which claim 3 depends. However, Metzler does not disclose wherein the plurality of classes includes original classes corresponding to the input data. However, Asbag teaches: General description, paragraph 11, excerpt: “By way of non-limiting example, each class can be assigned to one of the following classification groups: “Key Defects of Interest (KDOI)” being the classification group with the highest priority, “Defects of Interest (DOI)”, and “False” being the classification group with the lowest priority.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Metzler in view of Asbag that disclosed: method of claim 1 And the teachings of Asbag that disclosed: wherein the plurality of classes includes original classes corresponding to the input data It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Metzler and the teachings of Asbag. 
This would have provided the advantage of improving automated classification (Asbag, Column 4, lines 15-20). Regarding claim 6, which is dependent upon claim 1: Metzler in view of Asbag teaches the method of claim 1 upon which claim 6 depends. Furthermore, Metzler teaches: Description, excerpt: “…Ambiguous deductions can further be the result of unfavourable conditions for survey data acquisition such as poor light conditions, inauspicious robot position P10 when generating surveillance data or perturbing environmental influence…” This discloses wherein the classifier does not classify the input data as the [original] classes when the input data includes perturbations, as data with the perturbing influence need not be classified into an original class. Metzler does not disclose wherein the plurality of classes includes original classes corresponding to the input data. However, Asbag teaches: General description, paragraph 11, excerpt: “By way of non-limiting example, each class can be assigned to one of the following classification groups: “Key Defects of Interest (KDOI)” being the classification group with the highest priority, “Defects of Interest (DOI)”, and “False” being the classification group with the lowest priority.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Metzler in view of Asbag that disclosed: method of claim 1 wherein the classifier does not classify the input data as the [original] classes when the input data includes perturbations And the teachings of Asbag that disclosed: wherein the plurality of classes includes original classes corresponding to the input data It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Metzler and the teachings of Asbag. 
This would have provided the advantage of improving automated classification (Asbag, Column 4, lines 15-20). Regarding claim 8: Claim 8 recites: A system including a machine-learning network, comprising: an input interface configured to receive input data from a sensor, wherein the sensor includes a video, radar, LiDAR, sound, sonar, ultrasonic, motion, or thermal imaging sensor; a processor, in communication with the input interface, wherein the processor is configured to: receive input data from a sensor via the input interface, wherein the input data is indicative of image, radar, sonar, or sound information; generate augmented training data by augmenting a training data set using at least a term promoting classification of adversarial inputs into respective additional abstain classes of at least two additional abstain classes; training a classifier using the augmented training data, wherein the classifier includes a plurality of classes, including one or more correct classes and at least two additional abstain classes, wherein each additional abstain class of the at least two additional abstain classes is determined in response to at least bounding the input data including one or more perturbations; determine a first lower bound for each of the one or more correct classes of the plurality of classes; determine, using interval bound propagation, a second lower bound for each of the at least two additional abstain classes; using at least one of the first lower bound and the second lower bound, output a classification in response to the input data indicating one of the plurality of classes, wherein assignment of the input data to any respective class of the one or more correct classes and the at least two additional abstain classes is a valid assignment; in response to the classifier exceeding a convergence threshold: output a trained classifier configured to detect at least one additional abstain class of the at least two additional abstain classes; and classify, 
using one or more layers of the trained classifier, the input data as an abstain class in response to the input data including at least one of the perturbation and adversarial information; and in response to the classifier not exceeding the convergence threshold, continue to train the classifier. Regarding the limitation an input interface configured to receive input data from a sensor, wherein the sensor includes a video, radar, LiDAR, sound, sonar, ultrasonic, motion, or thermal imaging sensor; a processor, in communication with the input interface, wherein the processor is configured to: receive an input data from a sensor, wherein the input data is indicative of image, radar, sonar, or sound information: “The plurality of surveillance sensors… comprise for example one or more RGB camera… microphone…” (Metzler) Metzler teaches that the sensors may be, for example, an RGB camera or a microphone, which would be an example of image and sound information. Regarding the limitation in response to exceeding a convergence threshold: outputting a trained classifier: “in case the probability is above a defined threshold such that considering the additional data, subsequent classification results in a probability below the defined threshold. Said otherwise, if there is a too high uncertainty or unreliability of a classification, the system retrieves automatically additional data about a state pattern resp. one or more facility elements, such that additional information (e.g. parameters or features) describing the state is available, allowing for a higher certainty of assignment as "critical" or "non- critical" or possibly as "normal" or "anomalous".” (Metzler) “At step 116, it is checked if the determined uncertainty 115 is above a defined threshold. If the result is "no", i.e. there is low uncertainty and the detected state 114 can be seen as correct or unambiguous, than the robot 100 continues its patrol resp. 
goes on to the next object to be surveyed” (Metzler) Metzler teaches a process that iteratively continues retrieving data if a value fails to meet a particular threshold, which discloses a further process continuing once a classifier exceeds a convergence threshold. Furthermore, Metzler explicitly describes the successful meeting of a convergence threshold as triggering additional actions as well, describing a robot that performs further actions when there is low uncertainty (i.e., the uncertainty has met the threshold). Regarding the limitation classifying, using one or more layers of the trained classifier, the input data as an abstain class in response to the input data including at least one of the perturbation and adversarial information: Metzler teaches: “…Ambiguous deductions can further be the result of unfavourable conditions for survey data acquisition such as poor light conditions, inauspicious robot position P10 when generating surveillance data or perturbing environmental influence…” (Metzler) The ambiguous deduction would be analogous to the abstain class, as it is not classified into a particular classification other than unknown, and the perturbing environmental influence would be at least one of the perturbation and adversarial information from the input data. Regarding the limitation and in response to not exceeding the convergence threshold, continuing to train the classifier: “In an optional further stage of this aspect of the invention, such a generic classifier and/or detector can additionally be post-trained by real world pictures. […] For example, the real world pictures on which the detector and/or classifier is applied can be used as additional training resource to enhance the detector and/or classifier, e.g. to improve its real world success rate” (Metzler) Metzler teaches the process of continuing to train a classifier in order to improve a particular value. 
This would be analogous to in response to not exceeding the convergence threshold, continuing to train the classifier. However, Metzler does not teach train a classifier, wherein the classifier includes a plurality of classes, including one or more correct classes and at least two abstain classes: Asbag teaches training a classifier that has the ability to classify data into rejection bins in response to a low confidence score (Column 2 line 35 – Column 3 line 15). These bins are classes that have been bound together (Column 3, lines 60-65), and act as abstain classes. These bins are also sorted by priority, with Key Defects of Interest being the highest priority abstain class (Column 2 line 35 – Column 3 line 15). This would be analogous to detecting an additional abstain class in response to obtaining the worst-case bound. Metzler, description: “Probabilistic classification algorithms further use statistical inference to find the best class for a given instance… consequently, providing an option to abstain a choice when its confidence value is too low.” Furthermore, Metzler teaches classification algorithms that sort inputs into multiple correct classes, which would provide one or more correct classes when combined with Asbag. Asbag and Metzler disclose generating augmented training data by augmenting a training data set using at least a term promoting classification of adversarial inputs into respective additional abstain classes of at least two additional abstain classes: Metzler recites: “In case of supervised machine learning also labeling information (i.e. assignment of the object classes to the data) is necessary.” This teaches the use of supervised machine learning, which involves annotating training data with labels. Here, Metzler uses labeling to assign different object classes to the data. This would be training data augmented by a term promoting classification of inputs into each of the additional classes of the plurality of classes. 
Furthermore, with Asbag’s teachings of abstain classes below, it would have been obvious to combine this augmentation and the use of abstain classes as is disclosed in the claim limitation. Asbag discloses training a classifier using augmented training data, wherein the classifier includes a plurality of classes, including one or more correct classes and at least two additional abstain classes: Asbag teaches training a classifier that has the ability to classify data into rejection bins in response to a low confidence score. (Column 2 line 35 – Column 3 line 15). These bins are classes that have been bound together (Column 3, lines 60-65), and act as abstain classes. These bins are also sorted by priority, with Key Defects of Interest being the highest priority abstain class. (Column 2 line 35 – Column 3 line 15). This would be analogous to detecting an additional abstain class in response to obtaining the worst-case bound, and further examples of abstain classes are given to provide at least two additional classes. Metzler, description: “Probabilistic classification algorithms further use statistical inference to find the best class for a given instance… consequently, providing an option to abstain a choice when its confidence value is too low.” Furthermore, Metzler teaches classification algorithms that sort inputs into multiple correct classes, which would provide one or more correct classes when combined with Asbag. Asbag discloses wherein each additional abstain class of the at least two additional abstain classes is determined in response to at least bounding the input data: Asbag teaches the bounding of input data by the confidence thresholds that they correspond to upon processing, which classifies them into rejection bins of multiple abstain classes (Column 2 line 35 – Column 3 line 15). This would be analogous to each additional abstain class of the plurality of additional abstain classes is determined in response to at least bounding the input data. 
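The reading of Asbag above, per-class confidence thresholds acting as lower bounds with prioritized rejection bins for low-confidence results, can be illustrated with a minimal sketch. The group names echo Asbag's example ("KDOI", "DOI", "False"), but the threshold values and the routing function are hypothetical:

```python
# Hypothetical per-class confidence thresholds; each acts as a lower
# bound that a candidate classification must clear.
THRESHOLDS = {"KDOI": 0.9, "DOI": 0.7, "False": 0.5}

def assign(scores):
    # scores: mapping from class name to classifier confidence in [0, 1].
    best = max(scores, key=scores.get)
    if scores[best] >= THRESHOLDS[best]:
        return best  # confident enough: keep the class label
    # Otherwise abstain: route the input to the rejection bin matching
    # the class's priority group, rather than forcing a class label.
    return best + "_rejection_bin"
```

A high-confidence "KDOI" score keeps its label, while the same top class at low confidence is routed to the corresponding rejection bin, which is the behavior the examiner maps onto the claimed abstain classes.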
Asbag discloses determining a first lower bound for each of the one or more correct classes of the plurality of classes; determining, using interval bound propagation, a second lower bound for each of the at least two additional abstain classes; using at least one of the worst-case bound, the first lower bound, and the second lower bound, wherein assignment of the input data to any respective class of the one or more correct classes and the at least two additional abstain classes is a valid assignment: Asbag teaches that when detecting defects, it may determine, for each of the defect classes (i.e., the correct classes), a particular confidence threshold that the generated classification must clear. This confidence threshold would be analogous to a first lower bound for each of the one or more correct classes of the plurality of classes. Likewise, the classes it generates a threshold for include the False class, which would be an abstain class, therefore generating a second lower bound for an additional abstain class. As this confidence threshold is required for classification, Asbag uses the first lower bound and second lower bound in order to output a classification (Column 2 line 35 – Column 3 line 15). Furthermore, all the assignments of Asbag above, whether they be one of the correct classes or an abstain class, would be valid, as Asbag is able to classify input data as such, creating a valid output. Metzler does not disclose output a trained classifier configured to detect at least one additional abstain class of the at least two abstain classes: Asbag teaches training a classifier that has the ability to classify data into rejection bins in response to a low confidence score (Column 2 line 35 – Column 3 line 15). These bins are classes that have been bound together (Column 3, lines 60-65), and act as abstain classes. These bins are also sorted by priority, with Key Defects of Interest being the highest priority abstain class. 
(Column 2 line 35 – Column 3 line 15). This would be analogous to detecting an additional abstain class in response to obtaining the worst-case bound. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Metzler and the teachings of Asbag. This would have provided the advantage of improving automated classification (Asbag, Column 4, lines 15-20). Claim 14 recites a system that parallels the method and system of claims 1 and 8. It contains no limitations that are not found within claims 1 and 8, and as evidenced by the identical amendments is intended as a counterpart to claims 1 and 8. Therefore, the analysis discussed above with respect to claims 1 and 8 also applies to claim 14. Accordingly, claim 14 is rejected based on substantially the same rationale as set forth above with respect to claims 1 and 8. Regarding claim 15, which is dependent upon claim 14: Metzler in view of Asbag teaches the method of claim 14 upon which claim 15 depends. Furthermore, Metzler teaches: Description, excerpt: “…The state detector resp. an underlying computing unit is further configured to trigger an action of the robot by the action controller in case an ambiguity is noticed…” This discloses wherein instructions further cause the processor to operate a physical system based on output data, wherein the physical system is […] a robot […]. Regarding claim 17, which is dependent upon claim 14: Metzler in view of Asbag teaches the method of claim 14 upon which claim 17 depends. However, Metzler does not disclose wherein the plurality of classes includes original classes corresponding to non-perturbation classification associated with the input data. 
However, Asbag teaches: General description, paragraph 11, excerpt: “By way of non-limiting example, each class can be assigned to one of the following classification groups: “Key Defects of Interest (KDOI)” being the classification group with the highest priority, “Defects of Interest (DOI)”, and “False” being the classification group with the lowest priority.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Metzler in view of Asbag that disclosed: method of claim 14 And the teachings of Asbag that disclosed: wherein the plurality of classes includes original classes corresponding to non-perturbation classification associated with the input data It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a method that utilized the teachings of Metzler and the teachings of Asbag. This would have provided the advantage of improving automated classification (Asbag, Column 4, lines 15-20). Regarding claim 19, which is dependent upon claim 14: Metzler in view of Asbag teaches the method of claim 14 upon which claim 19 depends. However, Metzler does not disclose wherein the plurality of classes except the plurality of additional abstain classes are utilized to classify a non-perturbation class. Asbag teaches: General description, paragraph 11, excerpt: “By way of non-limiting example, each class can be assigned to one of the following classification groups: “Key Defects of Interest (KDOI)” being the classification group with the highest priority, “Defects of Interest (DOI)”, and “False” being the classification group with the lowest priority. 
The prioritized rejection bins can consist, accordingly, of “KDOI” CND rejection bin; “DOI” CND rejection bin, “False” CND rejection bin and “unknown (UNK)” rejection bin, and wherein priorities of CND rejection bins correspond to priorities of respective classification groups”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Metzler in view of Asbag that disclosed the method of claim 14 with the teachings of Asbag that disclosed wherein the plurality of classes except the plurality of additional abstain classes are utilized to classify a non-perturbation class. This would have provided the advantage of improving automated classification (Asbag, Column 4, lines 15-20).

Regarding claim 20, which is dependent upon claim 14: Metzler in view of Asbag teaches the method of claim 14 upon which claim 20 depends. Furthermore, Metzler teaches (Description, excerpt): “…the criticality classification and optionally the normality classification is implemented with at least one of a rule-based system, based on expert knowledge, in particular comprising… a neural network …” This discloses wherein the machine-learning network is a neural network.

Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Metzler in view of Asbag, further in view of Guan et al. (Pub. No. CN 108446506 A, published August 24, 2018, hereinafter Guan).

Regarding claim 4, which is dependent upon claim 1: Metzler in view of Asbag teaches the method of claim 1 upon which claim 4 depends.
Metzler in view of Asbag does not disclose determining a hidden value upper bound and hidden value lower bound associated with a hidden value of a network layer of the machine-learning network. However, Guan, in the same field of endeavor of reinforcement learning, teaches (Step 4.3 description, excerpt): “…respectively represent the upper limit and lower limit of k moment j-th supporting layer output. and ? [sic] j respectively represent the upper limit of the j-th hidden node threshold value and the lower limit...” Metzler in view of Asbag and Guan are analogous art because they are in the same field of endeavor.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Metzler in view of Asbag that disclosed the method of claim 1 with the teachings of Guan that disclosed determining a hidden value upper bound and hidden value lower bound associated with a hidden value of a network layer of the machine-learning network. This would have provided to Metzler in view of Asbag the advantage of more efficient system modeling (Guan: “interval feedback neural network due to its own structure with memory, the adaptive time-varying characteristics, can solve the feedforward neural network to problem of order dynamic system modeling, so it can be used as effective means of uncertain system modelling”).

Regarding claim 5, which is dependent upon claim 1: Metzler in view of Asbag teaches the method of claim 1 upon which claim 5 depends. Metzler in view of Asbag does not disclose wherein the one or more hidden layer values is associated with a last layer of the machine-learning network. However, Guan teaches (Step 4.7.1 description, excerpt): “…obtaining the hidden node weight value upper limit and lower limit of the correction value to the output layer node…” This output layer would be the associated last layer.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Metzler in view of Asbag that disclosed the method of claim 1 with the teachings of Guan that disclosed wherein the one or more hidden layer values is associated with a last layer of the machine-learning network. This would have provided to Metzler in view of Asbag the advantage of more efficient system modeling (Guan: “interval feedback neural network due to its own structure with memory, the adaptive time-varying characteristics, can solve the feedforward neural network to problem of order dynamic system modeling, so it can be used as effective means of uncertain system modelling”).

Claims 7, 10-11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Metzler in view of Asbag, further in view of Gowal et al. (“On the effectiveness of interval bound propagation for training verifiably robust models”, published August 29, 2019, hereinafter Gowal).

Regarding claim 7, which is dependent upon claim 1: Metzler in view of Asbag teaches the method of claim 1 upon which claim 7 depends. Metzler in view of Asbag does not disclose bounding a training objective function by a worst-case upper bound utilizing an interval bound propagation (IBP) technique. However, Gowal, in the same field of endeavor of adversarial learning, teaches (Introduction, excerpt): “…IBP allows to define a loss to minimize an upper bound on the maximum difference between any pair of logits when the input can be perturbed…” Metzler in view of Asbag and Gowal are analogous art because they are in the same field of endeavor.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Metzler in view of Asbag that disclosed the method of claim 1 with the teachings of Gowal that disclosed bounding a training objective function by a worst-case upper bound utilizing an interval bound propagation (IBP) technique. This would have provided to Metzler in view of Asbag the advantage of tighter bounds in later stages of training (Gowal: “Perhaps surprisingly, our results show that neural networks can easily adapt to make the rather loose bound provided by IBP much tighter.”).

Regarding claim 10, which is dependent upon claim 8: Metzler in view of Asbag teaches the method of claim 8 upon which claim 10 depends. Metzler in view of Asbag does not disclose wherein the processor is further configured to utilize interval bound propagation […] with perturbed versions of the input data. However, Gowal teaches (Introduction, excerpt): “…IBP allows to define a loss to minimize an upper bound on the maximum difference between any pair of logits when the input can be perturbed…”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Metzler in view of Asbag that disclosed the method of claim 8 with the teachings of Gowal that disclosed wherein the processor is further configured to utilize interval bound propagation […] with perturbed versions of the input data.
This would have provided to Metzler in view of Asbag the advantage of tighter bounds in later stages of training (Gowal: “Perhaps surprisingly, our results show that neural networks can easily adapt to make the rather loose bound provided by IBP much tighter.”).

Regarding claim 11, which is dependent upon claim 10: Metzler in view of Asbag further in view of Gowal teaches the method of claim 10 upon which claim 11 depends. Metzler in view of Asbag does not disclose wherein the processor is further configured to compute an upper bound associated with training of the machine-learning network. However, Gowal teaches (Introduction, excerpt): “…IBP allows to define a loss to minimize an upper bound on the maximum difference between any pair of logits when the input can be perturbed…”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Metzler in view of Asbag further in view of Gowal that disclosed the method of claim 10 with the teachings of Gowal that disclosed wherein the processor is further configured to compute an upper bound associated with training of the machine-learning network. This would have provided to Metzler in view of Asbag the advantage of tighter bounds in later stages of training (Gowal: “Perhaps surprisingly, our results show that neural networks can easily adapt to make the rather loose bound provided by IBP much tighter.”).

Regarding claim 18, which is dependent upon claim 14: Metzler in view of Asbag teaches the method of claim 14 upon which claim 18 depends. Metzler in view of Asbag does not disclose wherein the instructions further cause the processor to compute an upper bound associated with training of the machine-learning network.
However, Gowal teaches (Introduction, excerpt): “…IBP allows to define a loss to minimize an upper bound on the maximum difference between any pair of logits when the input can be perturbed…”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Metzler in view of Asbag that disclosed the method of claim 14 with the teachings of Gowal that disclosed wherein the instructions further cause the processor to compute an upper bound associated with training of the machine-learning network. This would have provided to Metzler in view of Asbag the advantage of tighter bounds in later stages of training (Gowal: “Perhaps surprisingly, our results show that neural networks can easily adapt to make the rather loose bound provided by IBP much tighter.”).

Claims 9 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Metzler in view of Asbag, further in view of Beggel et al. (Pub. No. EP 3477553 A1, published May 1, 2019, hereinafter Beggel).

Regarding claim 9, which is dependent upon claim 8: Metzler in view of Asbag teaches the method of claim 8 upon which claim 9 depends. Metzler in view of Asbag does not disclose wherein the classifier is further configured to detect the at least one additional abstain class […] in response to the input data including one or more perturbations. However, Beggel, in the same field of endeavor of adversarial learning, teaches (Description of embodiments, excerpt): “…The Adversarial Autoencoder induces a prior distribution on the latent low dimensional space.
This prior distribution can be predetermined and can be input into the Adversarial Autoencoder… Alternatively, a mixture of Gaussians distribution with one or more dedicated rejection classes (for anomalies) can be used, especially when the number of different anomaly classes is known...” The dedicated rejection classes for anomalies would be the at least one additional abstain class. Metzler in view of Asbag and Beggel are analogous art because they are in the same field of endeavor.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Metzler in view of Asbag that disclosed the method of claim 8 with the teachings of Beggel that disclosed wherein the classifier is further configured to detect the at least one additional abstain class […] in response to the input data including one or more perturbations. This would have provided to Metzler in view of Asbag the advantage of reliably identifying anomalous data (Beggel: “The method can reliably identify anomalies in images that were not contained in the training set”).

Regarding claim 12, which is dependent upon claim 8: Metzler in view of Asbag teaches the method of claim 8 upon which claim 12 depends. Metzler in view of Asbag does not disclose wherein the processor is further configured to compute an upper bound and lower bound of the input data.
However, Beggel teaches (Figure 2 description, excerpt): “…In this approach, the kernel-transformed normal data can be separated from the origin by a decision boundary whereas the kernel-transformed anomalies lie on the other side of the boundary closer to the origin…”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Metzler in view of Asbag that disclosed the method of claim 8 with the teachings of Beggel that disclosed wherein the processor is further configured to compute an upper bound and lower bound of the input data. This would have provided to Metzler in view of Asbag the advantage of reliably identifying anomalous data (Beggel: “The method can reliably identify anomalies in images that were not contained in the training set”).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Metzler in view of Asbag, further in view of Guan, further in view of Beggel. Claim 13 recites a system that parallels the method of claim 4. Therefore, the analysis discussed above with respect to claim 4 also applies to claim 13. Accordingly, claim 13 is rejected based on substantially the same rationale as set forth above with respect to claim 4.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRIA JOSEPHINE MILLER, whose telephone number is (703) 756-5684. The examiner can normally be reached Monday-Thursday, 7:30 am - 5:00 pm, and every other Friday, 7:30 am - 4:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes, can be reached at (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.J.M./
Examiner, Art Unit 2142

/HAIMEI JIANG/
Primary Examiner, Art Unit 2142
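For context on the techniques cited throughout the §103 rejections, the interval bound propagation (IBP) loss quoted from Gowal and the abstain-class decision discussed for claims 9, 17, and 19 can be sketched together in Python. This is a generic illustration, not the claimed invention or any cited reference's implementation; the network shape, function names, and the abstain rule (abstain unless the predicted class's worst-case lower logit bound beats every rival's upper bound) are illustrative assumptions only.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through an affine layer x -> W @ x + b.

    Standard interval arithmetic: the interval center passes through W,
    while the interval radius is scaled by |W| elementwise.
    """
    center = (hi + lo) / 2.0
    radius = (hi - lo) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def worst_case_logits(x, eps, layers):
    """Push an L-infinity ball of radius eps around x through the network,
    returning elementwise lower/upper bounds on the output logits."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = ibp_affine(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU between hidden layers only
            lo, hi = ibp_relu(lo, hi)
    return lo, hi

def classify_with_abstain(x, eps, layers):
    """Return the predicted class index, or -1 (abstain) unless the
    prediction is certifiably robust: its lower logit bound must exceed
    the upper logit bound of every other class."""
    logits = x
    for i, (W, b) in enumerate(layers):
        logits = W @ logits + b
        if i < len(layers) - 1:
            logits = np.maximum(logits, 0.0)
    pred = int(np.argmax(logits))
    lo, hi = worst_case_logits(x, eps, layers)
    rivals = np.delete(hi, pred)
    return pred if lo[pred] > rivals.max() else -1  # -1 = abstain
```

A training-time counterpart would minimize a loss on these worst-case logits (per the Gowal excerpt, an upper bound on the maximum pairwise logit difference under perturbation); at inference the same bounds drive the abstain decision, which is the flavor of "detecting an additional abstain class in response to obtaining the worst-case bound" referenced above.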

Prosecution Timeline

Sep 28, 2021: Application Filed
Oct 22, 2024: Non-Final Rejection (§101, §103)
Jan 10, 2025: Response Filed
Feb 18, 2025: Final Rejection (§101, §103)
May 05, 2025: Request for Continued Examination
May 08, 2025: Response after Non-Final Action
May 14, 2025: Non-Final Rejection (§101, §103)
Aug 12, 2025: Response Filed
Sep 25, 2025: Final Rejection (§101, §103)
Dec 30, 2025: Request for Continued Examination
Jan 20, 2026: Response after Non-Final Action
Mar 05, 2026: Non-Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566943: METHOD AND APPARATUS WITH NEURAL NETWORK QUANTIZATION. Granted Mar 03, 2026 (2y 5m to grant).
Patent 12481890: SYSTEMS AND METHODS FOR APPLYING SEMI-DISCRETE CALCULUS TO META MACHINE LEARNING. Granted Nov 25, 2025 (2y 5m to grant).
Based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 18%
With Interview: 90% (+71.4%)
Median Time to Grant: 4y 5m
PTA Risk: High

Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.
