Prosecution Insights
Last updated: April 19, 2026
Application No. 17/611,088

OPERATION OF TRAINABLE MODULES, INCLUDING MONITORING AS TO WHETHER THE RANGE OF APPLICATION OF THE TRAINING IS ABANDONED

Final Rejection (§103)
Filed: Nov 12, 2021
Examiner: STORK, KYLE R
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Robert Bosch GmbH
OA Round: 2 (Final)

Grant Probability: 64% (Moderate)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 64% (554 granted / 865 resolved; +9.0% vs TC avg)
Interview Lift: +28.3% (resolved cases with interview vs. without; a strong lift)
Typical Timeline: 4y 0m avg prosecution; 51 currently pending
Career History: 916 total applications across all art units

Statute-Specific Performance

§101: 14.9% (-25.1% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)

TC averages are estimates; based on career data from 865 resolved cases.
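As a quick arithmetic check, the headline figures above can be recomputed from the raw counts stated in this report, and each statute-specific rate minus its stated delta recovers the implied Tech Center baseline (a sketch using only numbers that appear in the report):

```python
# Recompute the career allow rate from the raw counts shown above.
granted, resolved = 554, 865
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # Career allow rate: 64.0%

# Each statute-specific rate minus its stated delta yields the implied
# Tech Center average the report compares against.
rates = {"101": (14.9, -25.1), "103": (58.5, +18.5),
         "102": (12.1, -27.9), "112": (6.1, -33.9)}
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
# Every statute implies the same ~40.0% TC baseline (the "TC avg" estimate).
```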

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This final office action is in response to the amendment filed 12 September 2025. Claims 20 and 22-39 are pending. Claims 20, 32, and 38-39 are independent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 
102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 20, 26-27, 30-35, and 37-39 are rejected under 35 U.S.C. 103 as being unpatentable over Iwamasa et al. (US 2018/0082204, published 22 March 2018, hereafter Iwamasa) and further in view of Andoni et al. (US 2019/0073591, published 7 March 2019, hereafter Andoni) and further in view of Praveen et al. (US 2020/0160185, filed 21 November 2018, hereafter Praveen).

As per independent claim 20, Iwamasa discloses a method for operating a trainable module, which translates one or more input variable values into one or more output variable values, the input variable values including measurement data (paragraph 0020: Here, sensor data, i.e., measurement data, is received for input into a model), which are obtained by a physical measuring operation and/or by a partial or complete simulation of the measuring operation (paragraph 0003: Here, both measurement data and simulation data may be used for training a model) and/or by a partial or complete simulation of a technical system capable of being monitored by the measuring operation, the method comprising the following steps:

supplying at least one input variable value to variations of the trainable module, the variations differing so much from each other, that they may not be converted into each other in a congruent manner, using progressive learning (paragraphs 0019-0029: Here, a first and second calculation model are provided. The first learning model calculates a characteristic value based upon simulated inputs (paragraph 0031), while the second learning model calculates a characteristic value based upon sensor inputs (paragraph 0034));

ascertaining a measure of uncertainty of output variable values from a difference of the output variable values, into which each of the variations translate the input variable value (paragraphs 0019-0029: Here, an uncertainty value is calculated for the models by comparing the characteristic values calculated from the simulated values and the characteristic values calculated from the sensor data);

comparing the uncertainty to a distribution of uncertainties, which is ascertained for input variable learning values used during training of the trainable module and/or for further input variable test values, to which relationships learned during the training of the trainable module are applicable (paragraphs 0019-0029: Here, the characteristic values from the simulated and sensor data are compared to provide a probability distribution);

evaluating the extent to which the relationships learned during the training of the trainable module are applicable to the input variable value, based on a result of the comparison (paragraphs 0019-0029: Here, a distribution and reliability are calculated based upon the relationship of the two data sets).

Iwamasa fails to specifically disclose: in response to a determination that the relationships learned during the training of the trainable module are applicable to the input variable value, ascertaining a control signal from an output variable value supplied for the input variable value, by the trainable module and/or the variations; and controlling, using the control signal, a vehicle and/or a classification system and/or a system for quality control of mass-produced products and/or a system for medical imaging.

However, Andoni, which is analogous to the claimed invention because it is directed toward selective execution of a training algorithm, discloses: in response to a determination that the relationships learned during the training of the trainable module are applicable to the input variable value (paragraph 0025: Here, a fitness function compares the models to the input set. This includes testing the models to determine whether the predicted values and the actual values are within a margin to determine the fitness of the model); ascertaining a control signal from an output variable value supplied for the input variable value, by the trainable module and/or the variations (paragraph 0032: Here, the input data set is provided to a trainable model to output an output data set and determine error values for correcting the trainable model); and controlling, using the control signal, a vehicle and/or a classification system and/or a system for quality control of mass-produced products and/or a system for medical imaging (paragraphs 0033-0034: Here, a classification system (model) is controlled to in some instances perform backpropagation to reinforce certain traits. Further, the backpropagation may be “turned off” or disabled to prevent reinforcing traits that fail to satisfy the fitness threshold).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Andoni with Iwamasa, with a reasonable expectation of success, as it would have allowed for improved reinforcement learning by disabling reinforcement of traits that fail to satisfy a fitness threshold while propagating traits that satisfy the threshold (Andoni: paragraphs 0033-0034). 
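The claimed flow that the examiner maps across Iwamasa and Praveen — forming variations of a trained module by deactivating different neurons, taking the spread of their outputs as a measure of uncertainty, and comparing that uncertainty against the distribution observed in training before gating any control signal — can be sketched in a few lines. This is an illustrative toy with random weights and dropout-style masks, not code from the application or the cited references:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=8)  # stand-in "trained" weights

def variation(x, mask):
    # One variation of the module: same weights, a subset of neurons deactivated.
    return float((np.tanh(x @ W1) * mask) @ W2)

def uncertainty(x, n=32, keep=0.8):
    # Measure of uncertainty: spread of outputs across the variations.
    return float(np.std([variation(x, rng.random(8) < keep) for _ in range(n)]))

# Distribution of uncertainties over (stand-in) training inputs...
train_unc = [uncertainty(rng.normal(size=4)) for _ in range(200)]
threshold = float(np.quantile(train_unc, 0.95))

# ...then evaluate whether the learned relationships apply to a new input.
x_new = rng.normal(size=4)
applicable = uncertainty(x_new) <= threshold  # if False: take countermeasures
```

Random dropout masks (rather than Praveen-style magnitude pruning) are one common way to obtain such variations; the gating on `applicable` stands in for the claimed ascertaining of a control signal only when the learned relationships apply.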
Iwamasa fails to specifically disclose where the variations are formed: by deactivating different neurons in an artificial neural network (ANN) which is contained in the trainable module, and/or by varying parameters which characterize a behavior of the trainable module, and/or by deactivating connections between neurons in the ANN.

However, Praveen, which is analogous to the claimed invention because it is directed toward pruning a neural network, discloses the variations are formed: by deactivating different neurons in an artificial neural network (ANN) which is contained in the trainable module (paragraph 0022: Here, a pruning engine prunes neurons within layers of the neural network to reduce complexity of the neural network) and/or by varying parameters which characterize a behavior of the trainable module and/or by deactivating connections between neurons in the ANN (Figure 2B; paragraph 0031: Here, a pruned neural network deactivates connections between pruned nodes).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Praveen with Iwamasa-Andoni, with a reasonable expectation of success, as it would have provided the advantage of reducing complexity of the network while improving performance and maintaining accuracy (Praveen: paragraph 0001).

As per dependent claim 26, Iwamasa discloses the method, further comprising: in response to a determination that the relationships learned during the training of the trainable module are applicable to the input variable value, updating the distribution using the input variable value (paragraphs 0041-0043: Here, the predicted values from model one and the observed values are compared. This allows the first model to repeatedly perform correction and simulations on the probability distribution to minimize the error until it is below a predetermined threshold value (paragraph 0043)).

As per dependent claim 27, Iwamasa discloses the method, wherein: a set of variables, which are each a function of a sum formed over all input variable values and/or uncertainties contributing to the distribution, is updated by adding a further summand (paragraphs 0039-0045: Here, the space is further divided to create additional divisions. The values of each division, including those that do not contain a sensor, are estimated based on the characteristic values at the location, the error calculated by the predictive value calculation unit, the sensor data of the virtual sensor, and collected sensor data. This includes averaging the results of the predictive value calculation, and thus includes summing each of the values of the divisions (paragraphs 0047-0048)); and the updated distribution and/or a set of parameters which characterizes the updated distribution, is ascertained from the set of variables (paragraphs 0049-0051: Here, the reliability distribution is estimated based upon the repeatedly corrected simulations (paragraph 0043) to improve the prediction data).

As per dependent claim 30, Iwamasa discloses the method, further comprising: in response to a determination that the relationships learned during the training of the trainable module are not applicable to the input variable value, taking countermeasures, in order to prevent a negative effect, on a technical system, of an output variable value supplied for the input variable value by the trainable module and/or by the variations (paragraphs 0041-0043: Here, the predicted values from model one and the observed values are compared. This allows the first model to repeatedly perform correction and simulations on the probability distribution to minimize the error until it is below a predetermined threshold value (paragraph 0043)).

As per dependent claim 31, Iwamasa discloses the method, wherein the countermeasures include: suppressing the output variable value and/or ascertaining a correction and/or a substitute for the output variable value and/or requesting an output variable learning value belonging to the input variable value for a further training of the trainable module and/or requesting an update for the trainable module (paragraphs 0041-0043: Here, the predicted values from model one and the observed values are compared. This allows the first model to repeatedly perform correction and simulations on the probability distribution to minimize the error until it is below a predetermined threshold value (paragraph 0043)) and/or restricting a technical system controlled using the trainable module, in its functionality, or stopping the technical system and/or requesting a further sensor signal from another sensor.

As per independent claim 32, Iwamasa discloses a method for training a trainable module, which translates one or more input variable values into one or more output variable values, using learning data sets which contain input variable learning values and corresponding output variable learning values, at least the input variable learning values including measurement data (paragraph 0020: Here, sensor data, i.e., measurement data, is received for input into a model), which are obtained by a physical measuring operation and/or by a partial or complete simulation of the measuring operation (paragraph 0003: Here, both measurement data and simulation data may be used for training a model) and/or by a partial or complete simulation of a technical system capable of being monitored by the measuring operation, the method comprising the following steps: supplying input variable learning values 
to variations of the trainable module, the variations differing so much from each other that they may not be converted into each other in a congruent manner, using progressive learning (paragraphs 0019-0029: Here, a first and second calculation model are provided. The first learning model calculates a characteristic value based upon simulated inputs (paragraph 0031), while the second learning model calculates a characteristic value based upon sensor inputs (paragraph 0034));

ascertaining a measure of uncertainty of output variable values from a difference of the output variable values, from each other, into which each of the variations translate the input variable value (paragraphs 0019-0029: Here, an uncertainty value is calculated for the models by comparing the characteristic values calculated from the simulated values and the characteristic values calculated from the sensor data);

ascertaining a distribution of the uncertainties (paragraphs 0019-0029: Here, the characteristic values from the simulated and sensor data are compared to provide a probability distribution. Additionally, the distribution and reliability are calculated based upon the relationship of the two data sets).

Iwamasa fails to specifically disclose: in response to a determination that the relationships learned during the training of the trainable module are applicable to the input variable value, ascertaining a control signal from an output variable value supplied for the input variable value, by the trainable module and/or the variations; and controlling, using the control signal, a vehicle and/or a classification system and/or a system for quality control of mass-produced products and/or a system for medical imaging.

However, Andoni, which is analogous to the claimed invention because it is directed toward selective execution of a training algorithm, discloses: in response to a determination that the relationships learned during the training of the trainable module are applicable to the input variable value (paragraph 0025: Here, a fitness function compares the models to the input set. This includes testing the models to determine whether the predicted values and the actual values are within a margin to determine the fitness of the model); ascertaining a control signal from an output variable value supplied for the input variable value, by the trainable module and/or the variations (paragraph 0032: Here, the input data set is provided to a trainable model to output an output data set and determine error values for correcting the trainable model); and controlling, using the control signal, a vehicle and/or a classification system and/or a system for quality control of mass-produced products and/or a system for medical imaging (paragraphs 0033-0034: Here, a classification system (model) is controlled to in some instances perform backpropagation to reinforce certain traits. Further, the backpropagation may be “turned off” or disabled to prevent reinforcing traits that fail to satisfy the fitness threshold).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Andoni with Iwamasa, with a reasonable expectation of success, as it would have allowed for improved reinforcement learning by disabling reinforcement of traits that fail to satisfy a fitness threshold while propagating traits that satisfy the threshold (Andoni: paragraphs 0033-0034).

Iwamasa fails to specifically disclose where the variations are formed: by deactivating different neurons in an artificial neural network (ANN) which is contained in the trainable module, and/or by varying parameters which characterize a behavior of the trainable module, and/or by deactivating connections between neurons in the ANN.

However, Praveen, which is analogous to the claimed invention because it is directed toward pruning a neural network, discloses the variations are formed: by deactivating different neurons in an artificial neural network (ANN) which is contained in the trainable module (paragraph 0022: Here, a pruning engine prunes neurons within layers of the neural network to reduce complexity of the neural network) and/or by varying parameters which characterize a behavior of the trainable module and/or by deactivating connections between neurons in the ANN (Figure 2B; paragraph 0031: Here, a pruned neural network deactivates connections between pruned nodes).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Praveen with Iwamasa-Andoni, with a reasonable expectation of success, as it would have provided the advantage of reducing complexity of the network while improving performance and maintaining accuracy (Praveen: paragraph 0001). 
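The bookkeeping recited in dependent claim 27 — variables that are each a function of a sum formed over all contributing uncertainties, updated by adding a further summand, from which the updated distribution's parameters are ascertained — has a simple running-sum form. A hedged sketch (illustrative only; the class and values here are not taken from the application):

```python
class RunningUncertaintyDistribution:
    """Tracks n, sum(u) and sum(u*u); each update adds one further summand."""

    def __init__(self):
        self.n, self.sum_u, self.sum_u2 = 0, 0.0, 0.0

    def update(self, u):
        # Add a further summand to each running variable.
        self.n += 1
        self.sum_u += u
        self.sum_u2 += u * u

    def parameters(self):
        # Parameters characterising the updated distribution (mean, variance).
        mean = self.sum_u / self.n
        return mean, self.sum_u2 / self.n - mean * mean

dist = RunningUncertaintyDistribution()
for u in (0.10, 0.20, 0.15, 0.40):    # uncertainties deemed "applicable"
    dist.update(u)
mean, var = dist.parameters()         # mean == 0.2125
```

Because only the sums are stored, the distribution can be updated with each newly accepted input without retaining the full history of uncertainty values.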
As per dependent claim 33, Iwamasa discloses the method wherein the distribution is modeled as a statistical distribution, using a parameterized estimate, and parameters of the estimate being expressed by moments of the statistical distribution (Figure 7; paragraphs 0066 and 0083-0084: Here, a probability distribution of the uncertain parameters is modeled and entered into the probability distribution input unit. This includes both values corresponding to sensor data and simulated values. These parameters are expressed as moments in time).

As per dependent claim 34, Iwamasa discloses the method wherein the parameters of the estimate are ascertained according to a likelihood method (paragraph 0066: Here, the “error” value represents a likelihood that the estimate and the sensor values are the same) and/or according to a Bayesian method.

As per dependent claim 35, Iwamasa discloses the method wherein the parameters of the estimate are ascertained using an expectation maximization algorithm, and/or an expectation/conditional maximization algorithm, and/or an expectation-conjugate-gradient algorithm, and/or a Newton-based method, and/or a Markov chain Monte Carlo-based method (paragraph 0072), and/or a stochastic-gradient algorithm.

As per dependent claim 37, Iwamasa discloses the method wherein the distribution is modeled as a normal distribution (paragraph 0084), and/or an exponential distribution, and/or a gamma distribution, and/or a chi-squared distribution, and/or a beta distribution, and/or an exponential Weibull distribution, and/or a Dirichlet distribution.

With respect to claim 38, the claim recites limitations substantially similar to those in claim 20. Claim 38 is similarly rejected. Further, Iwamasa discloses a non-transitory machine-readable storage medium on which is stored a computer program including machine-readable instructions (paragraph 0130). 
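For the options recited in claims 33-35 and 37, modeling the distribution of uncertainties as a normal distribution whose parameters are expressed by its moments reduces to two sums; for a Gaussian, the method-of-moments estimate coincides with the maximum likelihood estimate. A minimal sketch with made-up uncertainty values:

```python
import math

uncertainties = [0.12, 0.08, 0.15, 0.11, 0.09, 0.14]  # hypothetical values

n = len(uncertainties)
m1 = sum(uncertainties) / n                 # first moment  -> mean
m2 = sum(u * u for u in uncertainties) / n  # second moment -> variance via m2 - m1^2
mu, sigma = m1, math.sqrt(m2 - m1 * m1)     # parameters of N(mu, sigma^2)

def pdf(u):
    # Density of the fitted normal distribution.
    return math.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
```

Under this parametric model, comparing a new uncertainty to the distribution becomes a comparison against `mu` and `sigma` rather than against the raw sample.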
With respect to independent claim 39, the claim recites limitations substantially similar to those in claim 20. Claim 39 is similarly rejected. Further, Iwamasa discloses a computer (Figure 15; paragraph 0124).

Claims 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Iwamasa, Andoni, and Praveen, and further in view of Barre et al. (US 2019/0130640, published 2 May 2019, hereafter Barre).

As per dependent claim 22, Iwamasa, Andoni, and Praveen disclose the limitations similar to those in claim 20, and the same rejection is incorporated herein. Iwamasa further discloses determining that relationships learned during the training of the trainable module are applicable to the input value (paragraphs 0019-0029). However, Iwamasa fails to specifically disclose the uncertainty lying within a specified quantile of the distribution. However, Barre, which is analogous to the claimed invention because it is directed toward using uncertainty quantiles, discloses segmenting image data into quantile intervals with different percentages associated with positions in the distribution (Figures 5C-5E; paragraphs 0056-0058: Here, the image is divided into various segments and uncertainty values are associated with each segment). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Barre with Iwamasa, with a reasonable expectation of success, as it would have allowed for associating relationships (Iwamasa) based upon uncertainty values (Barre). This would have allowed a user to associate items based upon the probability (Barre: paragraph 0040).

As per dependent claim 23, Iwamasa, Andoni, and Praveen disclose the limitations similar to those in claim 20, and the same rejection is incorporated herein. Iwamasa further discloses determining that relationships learned during the training of the trainable module are applicable to the input value (paragraphs 0019-0029). 
However, Iwamasa fails to specifically disclose the uncertainty lying outside a specified quantile of the distribution. However, Barre, which is analogous to the claimed invention because it is directed toward using uncertainty quantiles, discloses segmenting image data into quantile intervals with different percentages associated with positions in the distribution (Figures 5C-5E; paragraphs 0056-0058: Here, the image is divided into various segments and uncertainty values are associated with each segment). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Barre with Iwamasa, with a reasonable expectation of success, as it would have allowed for associating relationships (Iwamasa) based upon uncertainty values (Barre). This would have allowed a user to associate items based upon the probability (Barre: paragraph 0040). Further, the examiner takes official notice that it was notoriously well known in the art at the time of the applicant’s effective filing date to have eliminated associations, based upon the associations being below a threshold. It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined the well-known with Iwamasa-Barre, with a reasonable expectation of success, as it would have allowed a user to remove associations that do not meet a threshold probability. This would have provided the advantage of removing low value associations.

As per dependent claim 24, Iwamasa, Andoni, and Praveen disclose the limitations similar to those in claim 20, and the same rejection is incorporated herein. Iwamasa further discloses determining that relationships learned during the training of the trainable module are applicable to the input value (paragraphs 0019-0029). However, Iwamasa fails to specifically disclose the uncertainty lying outside a specified quantile of the distribution. However, Barre, which is analogous to the claimed invention because it is directed toward using uncertainty quantiles, discloses segmenting image data into quantile intervals with different percentages associated with positions in the distribution (Figures 5C-5E; paragraphs 0056-0058: Here, the image is divided into various segments and uncertainty values are associated with each segment). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Barre with Iwamasa, with a reasonable expectation of success, as it would have allowed for associating relationships (Iwamasa) based upon uncertainty values (Barre). This would have allowed a user to associate items based upon the probability (Barre: paragraph 0040). Further, the examiner takes official notice that it was notoriously well known in the art at the time of the applicant’s effective filing date to have eliminated both extremes, those over a first threshold and those under a second threshold. It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined the well-known with Iwamasa-Barre, with a reasonable expectation of success, as it would have allowed a user to remove associations that do not meet a threshold probability. This would have provided the advantage of removing extreme value associations.

Claims 25 and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Iwamasa, Andoni, and Praveen, and further in view of Lingg et al. (US 2019/0163193, published 30 May 2019, hereafter Lingg).

As per dependent claim 25, Iwamasa, Andoni, and Praveen disclose the limitations similar to those in claim 20, and the same rejection is incorporated herein. Iwamasa fails to specifically disclose wherein the trainable module is a classifier and/or a regressor. 
However, Lingg, which is analogous to the claimed invention because it is directed toward classification, discloses wherein the trainable module is a classifier (paragraph 0054) and/or a regressor. It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Lingg with Iwamasa, with a reasonable expectation of success, as it would have allowed for training a classifier of a neural network to improve classification (Lingg: paragraph 0054). This would have allowed for improving the vehicle control system.

As per dependent claim 28, Iwamasa, Andoni, and Praveen disclose the limitations similar to those in claim 27, and the same rejection is incorporated herein. Iwamasa fails to specifically disclose wherein the parameters are estimated using a method of moments and/or using a maximum likelihood method and/or using a Bayesian estimation. However, Lingg, which is analogous to the claimed invention because it is directed toward training an artificial neural network, discloses wherein the parameters are estimated using a method of moments and/or using a maximum likelihood method and/or using a Bayesian estimation (paragraph 0045: Here, a Bayesian network model is used for training the artificial neural network). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Lingg with Iwamasa, with a reasonable expectation of success, as it would have allowed training the model using known training means. This would have allowed for training using predictable results.

As per dependent claim 29, Iwamasa, Andoni, and Praveen disclose the limitations similar to those in claim 20, and the same rejection is incorporated herein. Iwamasa discloses determining that the relationships learned during the training of the trainable module are applicable to the input variable value (paragraphs 0019-0029). However, Iwamasa fails to specifically disclose: ascertaining a control signal for an output variable value supplied for the input variable value, by the trainable module and/or the variations; and controlling, using the control signal, a vehicle and/or a classification system and/or a system for quality control of mass-produced products and/or a system for medical imaging. However, Lingg, which is analogous to the claimed invention because it is directed toward controlling vehicles using control signals, discloses: ascertaining a control signal for an output variable value supplied for the input variable value, by the trainable module (paragraphs 0027 and 0054) and/or the variations; and controlling, using the control signal, a vehicle (paragraphs 0040-0041) and/or a classification system and/or a system for quality control of mass-produced products and/or a system for medical imaging. It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Lingg with Iwamasa, with a reasonable expectation of success, as it would have allowed for training a model to autonomously control a vehicle (Lingg: paragraphs 0040-0041) based upon a trained neural network (Lingg: paragraph 0054).

Claim 36 is rejected under 35 U.S.C. 103 as being unpatentable over Iwamasa, Andoni, and Praveen, and further in view of Christensen (US 2020/0090278, filed 11 May 2015).

As per dependent claim 36, Iwamasa, Andoni, and Praveen disclose the limitations similar to those in claim 32, and the same rejection is incorporated herein. Iwamasa fails to specifically disclose wherein the distribution is modeled as a distribution of an exponential family. However, Christensen, which is analogous to the claimed invention because it is directed toward modeling data, discloses wherein the distribution is a Gaussian distribution modeled as an exponential family (paragraph 0131). 
It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Christensen with Iwamasa, with a reasonable expectation of success, as it would have allowed for modeling times between events. This would have facilitated improved models based upon time data.

Response to Arguments

Applicant’s arguments with respect to the rejection of claims under 35 USC 101 have been fully considered and are persuasive. The rejection has been withdrawn.

Applicant's arguments with respect to the limitations of previous claim 21 (now incorporated into independent claim 20) have been fully considered but they are not persuasive. As an initial matter, it is noted that the applicant’s independent claims include limitations beyond those found in previously presented claim 21. Specifically, the independent claims include the limitations: in response to a determination that the relationships learned during the training of the trainable module are applicable to the input variable value, ascertaining a control signal from an output variable value supplied for the input variable value, by the trainable module and/or the variations; and controlling, using the control signal, a vehicle and/or a classification system and/or a system for quality control of mass-produced products and/or a system for medical imaging. The examiner has added the Andoni reference to address these newly presented limitations.

The applicant further argues Praveen has a different purpose that is incompatible with the claimed invention (page 16). Specifically, the applicant argues that Praveen lacks “network pruning [that] would have been suitable for obtaining variations in a training model that ‘differ[] so much from each other, that they may not be converted into each other in a congruent manner, using progressive learning,’ as recited in the claims” (page 16). However, Praveen is not relied upon for disclosure of this limitation. 
Instead, Iwamasa discloses supplying at least one input variable value to variations of the trainable module, the variations differing so much from each other, that they may not be converted into each other in a congruent manner, using progressive learning (paragraphs 0019-0029: Here, a first and second calculation model are provided. The first learning model calculates a characteristic value based upon simulated inputs (paragraph 0031), while the second learning model calculates a characteristic value based upon sensor inputs (paragraph 0034)). The examiner notes, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). For this reason, this argument is not persuasive. The applicant further argues that there would not have been a reasonable expectation of success in combining Praveen with Iwamasa (page 16). The examiner notes “reasonable expectation of success can be implicitly shown via the prior art teachings or as part of the obviousness analysis. See Elekta Ltd. v. ZAP Surgical Sys., Inc., 81 F.4th 1368, 1376-77, 2023 USPQ2d 1100 (Fed. Cir. 2023) ("[W]e can reasonably discern that the Board considered and implicitly addressed reasonable expectation of success based on the arguments and evidence presented to the Board on motivation to combine.") (MPEP 2143.02(I))”. Additionally, “Conclusive proof of efficacy is not required to show a reasonable expectation of success. OSI Pharm., LLC v. Apotex Inc., 939 F.3d 1375, 1385, 2019 USPQ2d 379681 (Fed. Cir. 2019) ("To be clear, we do not hold today that efficacy data is always required for a reasonable expectation of success. Nor are we requiring ‘absolute predictability of success.’"); Acorda Therapeutics, Inc. v. Roxane Lab., Inc., 903 F.3d 1310, 1333, 128 USPQ2d 1001, 1018 (Fed. Cir. 
2018) ("This court has long rejected a requirement of ‘[c]onclusive proof of efficacy’ for obviousness." (citing to Hoffmann-La Roche Inc. v. Apotex Inc., 748 F.3d 1326, 1331 (Fed. Cir. 2014); PharmaStem Therapeutics, Inc. v. ViaCell, Inc., 491 F.3d 1342, 1364 (Fed. Cir. 2007); Pfizer, Inc. v. Apotex, Inc., 480 F.3d 1348, 1364, 1367–68 (Fed. Cir. 2007) (reasoning that "the expectation of success need only be reasonable, not absolute")) (MPEP 2143.02(I))”.

Further, “obviousness does not require absolute predictability, but at least some degree of predictability is required. Evidence showing there was no reasonable expectation of success may support a conclusion of nonobviousness. In re Rinehart, 531 F.2d 1048, 189 USPQ 143 (CCPA 1976) (Claims directed to a method for the commercial scale production of polyesters in the presence of a solvent at superatmospheric pressure were rejected as obvious over a reference which taught the claimed method at atmospheric pressure in view of a reference which taught the claimed process except for the presence of a solvent. The court reversed, finding there was no reasonable expectation that a process combining the prior art steps could be successfully scaled up in view of unchallenged evidence showing that the prior art processes individually could not be commercially scaled up successfully.). See also OSI Pharm., LLC v. Apotex Inc., 939 F.3d 1375, 1385, 2019 USPQ2d 379681 (Fed. Cir. 2019) ("These references provide no more than hope—and hope that a potentially promising drug will treat a particular cancer is not enough to create a reasonable expectation of success in a highly unpredictable art such as this. Indeed, given a 99.5% failure rate and no efficacy data or any other reliable indicator of success, the only reasonable expectation at the time of the invention was failure, not success."); Amgen, Inc. v. Chugai Pharm. Co., 927 F.2d 1200, 1207-08, 18 USPQ2d 1016, 1022-23 (Fed. Cir. 1991), cert. denied, 502 U.S.
856 (1991) (In the context of a biotechnology case, testimony supported the conclusion that the references did not show that there was a reasonable expectation of success.); In re O’Farrell, 853 F.2d 894, 903, 7 USPQ2d 1673, 1681 (Fed. Cir. 1988) (The court held the claimed method would have been obvious over the prior art relied upon because one reference contained a detailed enabling methodology, a suggestion to modify the prior art to produce the claimed invention, and evidence suggesting the modification would be successful.) (MPEP 2143.02(II)).”

The applicant’s argument that “Praveen performs pruning or connection deactivation in accordance with an equalized metric being less than a threshold pruning weight, the burden was on the Patent Office to establish that pruning done in the particular way disclosed in Praveen, when applied to Iwamasa, would have succeeded in forming variations for a training module, as recited in the claims (page 16)” is insufficient evidence to show that there was no reasonable expectation of success. For this reason, this argument is not persuasive.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Le et al. (Using Synthetic Data to Train Neural Networks is Model-Based Reasoning, 2017): Discloses training and using a neural network model trained using both synthetic and real-world data (Abstract).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK whose telephone number is (571) 272-4130. The examiner can normally be reached 8am-2pm; 4pm-6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R STORK/
Primary Examiner, Art Unit 2128

Prosecution Timeline

Nov 12, 2021
Application Filed
Apr 14, 2025
Non-Final Rejection — §103
Sep 12, 2025
Response Filed
Nov 13, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585935
EXECUTION BEHAVIOR ANALYSIS TEXT-BASED ENSEMBLE MALWARE DETECTOR
2y 5m to grant Granted Mar 24, 2026
Patent 12585937
SYSTEMS AND METHODS FOR DEEP LEARNING ENHANCED GARBAGE COLLECTION
2y 5m to grant Granted Mar 24, 2026
Patent 12585869
RECOMMENDATION PLATFORM FOR SKILL DEVELOPMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12579454
PROVIDING EXPLAINABLE MACHINE LEARNING MODEL RESULTS USING DISTRIBUTED LEDGERS
2y 5m to grant Granted Mar 17, 2026
Patent 12579412
SPIKE NEURAL NETWORK CIRCUIT INCLUDING SELF-CORRECTING CONTROL CIRCUIT AND METHOD OF OPERATION THEREOF
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
64%
Grant Probability
92%
With Interview (+28.3%)
4y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 865 resolved cases by this examiner. Grant probability derived from career allow rate.
