Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Applicant’s submission filed on 01/16/2026 has been entered. The status of the claims is as follows:
Claims 1-12 remain pending in the application.
Claims 1, 3, 11, and 12 are amended.
Response to Arguments
In reference to the 101 Rejection:
Step 2A, Prong One:
Applicant asserts in Remarks filed on 01/16/2026, pg. 6-7, that the Office Action does not explain, through proof or reasonable analysis, how the limitation fits within the judicial exception for mathematical concepts, nor does the Office Action address whether the claim merely involves or is based on a mathematical concept rather than actually reciting one, as required by MPEP 2106.04(a)(2). Applicant further asserts that because a neural network is a computational entity that can only be operationalized in a computing environment, transferring it into a machine is a technical act in the field of computer electronics, and therefore the limitation should be treated as an additional element evaluated under Step 2A, Prong Two.
Applicant’s argument is not persuasive. As amended, claim 1 recites determining a “measure of a susceptibility to error” of an artificial neural network “as a function of the deviation”, and then applying a threshold comparison to decide whether the neural network is transferred into a machine (e.g., transfer if the measure is smaller than a threshold value; otherwise, do not transfer). This limitation, when considered in light of the claim as a whole, recites a mathematical concept because it depends on mathematical relationships and calculations; specifically, computing an error metric from a deviation and comparing that computed value to a threshold to drive an outcome. The additional “transfer/do-not-transfer” language is merely a result of applying the mathematical evaluation and does not meaningfully limit the claim to a particular technological implementation or improve computer functionality; rather, it amounts to insignificant post-solution activity (i.e., taking an action based on the computed result). Accordingly, the claim continues to recite a judicial exception under Step 2A, Prong One. Further, under Step 2A, Prong Two, the claim does not integrate the judicial exception into a practical application because the “transfer into a machine” is recited at a high level of generality without specifying any particular machine, transfer mechanism, technical improvement, or concrete technological implementation beyond generic computer execution. Instead, the claim merely uses the computed error measure to decide whether to deploy the neural network, which is an abstract decision-making process based on mathematical evaluation. Therefore, the amended limitation does not add a meaningful limitation that would amount to significantly more than the judicial exception, and the rejection under 35 U.S.C. 101 is maintained.
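By way of illustration only, the mathematical character of this limitation can be seen by expressing it as a short calculation; the function and variable names below are the examiner's hypothetical sketch, not claim language:

```python
def should_transfer(deviation: float, threshold: float) -> bool:
    """Illustrative sketch: the claimed 'measure of a susceptibility to error'
    is computed as a function of the deviation (here, simply its magnitude),
    and the transfer decision reduces to a bare threshold comparison."""
    measure = abs(deviation)      # error measure determined as a function of the deviation
    return measure < threshold    # transfer only if the measure is below the threshold
```

As the sketch shows, the determination and the transfer condition are nothing more than a calculation followed by a numerical comparison.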
Step 2A, Prong Two:
Applicant asserts in Remarks filed on 01/16/2026, pg. 7-8, that the amended claims satisfy Step 2A, Prong Two because they recite a technological improvement to artificial neural networks and therefore integrate any alleged judicial exception into a practical application. Relying on the PTAB's Ex parte Desjardins decision, Applicant contends that it is error to reject AI claims using “overbroad reasoning” that treats all machine learning as an unpatentable algorithm plus generic computer components, and that the analysis should instead credit specification-described improvements tied to specific claim limitations. Here, Applicant identifies claim language requiring the neural network to be transferred into a machine only when its susceptibility to error is smaller than a threshold value (and otherwise not transferred), which the specification links to increased operational safety and which therefore constitutes a concrete, practical technological improvement rather than merely a mathematical calculation.
Examiner acknowledges Applicant’s arguments under Step 2A, Prong Two; however, the amended claim limitation does not integrate the alleged judicial exception into a practical application. Specifically, the limitation of transferring an artificial neural network into a machine only when “a measure of susceptibility to error” is below a threshold merely applies the abstract idea using a result-based criterion (i.e., determining whether a condition is satisfied) without reciting how the susceptibility to error is technically determined or how the transfer decision improves the functioning of the neural network itself. The claimed steps therefore amount to using a generic computing environment to perform a conditional deployment of a model, which constitutes no more than an instruction to apply the abstract idea with a threshold test. Accordingly, the claim does not recite a meaningful technological improvement and remains directed to a judicial exception under Step 2A.
Step 2B:
Applicant asserts in Remarks filed on 01/16/2026, pg. 8-9, that even if the Office Action concludes that the additional element(s) do not integrate the judicial exception into a practical application under Step 2A, it must still evaluate those elements under Step 2B. Under MPEP 2106.05(d) and 2106.07(a), which implement Berkheimer v. HP, a Step 2B finding that an additional element (or combination of elements) is well-understood, routine, and conventional requires a factual determination and cannot be made in a conclusory manner. Accordingly, an examiner may not simply assert conventionality, but must expressly support the rejection in writing with evidence, such as an express statement in the specification or prosecution record, a citation to relevant court decisions, a citation to a publication demonstrating conventionality, or a statement that the examiner is taking official notice. Therefore, if the Office Action proceeds under Step 2B, it must substantiate any conventionality finding with one or more of these evidentiary bases.
Applicant’s argument regarding the need for evidentiary support under Berkheimer is not persuasive. In the present rejection, the Office’s Step 2B analysis is not based on a factual finding that any additional element is “well-understood, routine, and conventional”. Rather, the only additional elements recited, i.e., that the claim is a “computer-implemented method” and that steps are performed “by the artificial neural network”, amount to no more than using a generic computer/processor as a tool to perform the recited abstract idea, which is expressly identified in the eligibility guidance as failing to add significantly more (see MPEP 2106.05(f)). Because the Step 2B conclusion is grounded in the legal determination that these limitations merely invoke generic computer functionality and do not meaningfully limit the judicial exception, the rejection does not require additional documentary evidence to establish conventionality.
Applicant’s arguments filed on 01/16/2026 have been fully considered but they are not persuasive.
In reference to the 103 Rejection:
Applicant’s arguments, see Remarks pg. 9-10, filed 01/16/2026, with respect to the rejection(s) of claim(s) under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Gurumurthi (US 2020/0151572 A1).
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1:
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
Yes, the claim is a process.
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites an abstract idea.
verifying an artificial neural network that is trained to map an input point from an input space of a function as accurately as possible onto a functional value of the function, - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
the function being a limited or Lipschitz-constant function, the method comprising: - This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.)
specifying a test point, the test point including a pair of a test input point from the input space of the function and a test functional value, - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
the input point being determined from the input space; - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
mapping the input point [by the artificial neural network] onto the functional value; - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
determining a reference for the functional value using the test input point; - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
determining a deviation of the functional value from the reference; and - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
determining a measure of a susceptibility to error of the artificial neural network as a function of the deviation. - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements:
A computer-implemented method – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
by the artificial neural network – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred. – This limitation is directed to a decision of transmittal based on a mathematical value (mathematical relationships), as it merely applies the abstract idea using a result-based criterion (i.e., determining whether a condition is satisfied).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are:
A computer-implemented method – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
by the artificial neural network – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred. – This limitation is directed to a decision of transmittal based on a mathematical value (mathematical relationships), as it merely applies the abstract idea using a result-based criterion (i.e., determining whether a condition is satisfied).
Regarding claim 2,
Claim 2 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1 which includes an abstract idea (see rejection for claim 1). The additional limitations:
wherein the function describes a curve of a physical or chemical variable in a machine, changes of the variable in the curve being limited by: (i) physical and/or chemical properties of the machine, and/or (ii) physical and/or chemical properties of components of the machine – This limitation is directed to mathematical relationships since it recites relationships of a function, and the variables recited here are interpreted as mathematical relationships of an equation or formula of the “function”.
wherein the machine or the component is controlled as a function of the functional value when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the machine or the component not being controlled as a function of the functional value. – This limitation is directed to a decision of control based on a mathematical value (mathematical relationships), as it merely applies the abstract idea using a result-based criterion (i.e., determining whether a condition is satisfied).
Regarding claim 3,
Claim 3 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1 which includes an abstract idea (see rejection for claim 1). The additional limitations:
wherein the function describes a curve of a physical or chemical variable in a machine, changes of the variable in the curve being limited by: (i) physical and/or chemical properties of the machine, and/or by in particular physical, and/or (ii) chemical properties of components of the machine, – This limitation is directed to mathematical relationships since it recites relationships of a function, and the variables recited here are interpreted as mathematical relationships of an equation or formula of the “function”.
Regarding claim 4,
Claim 4 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1 which includes an abstract idea (see rejection for claim 1). The additional limitations:
wherein, as a function of the test input point or in a neighborhood of the test input point, input points are determined from the input space, a probability being determined that among the input points there is an input point that is mapped [by the artificial neural network] onto a functional value whose deviation does not fulfill a condition, – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.)
by the artificial neural network – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
the deviation fulfilling the condition either when it is determined that the deviation is smaller than a threshold value or when it is determined that the deviation is greater than a threshold value or when it is determined that the deviation is within an upper and lower bound. - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
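By way of illustration only (the names below are the examiner's hypothetical sketch, not claim language), the recited probability determination amounts to a routine sampling calculation over a neighborhood, underscoring its mathematical character:

```python
import random

def violation_probability(ann, reference, center, radius, threshold, n=1000):
    """Illustrative sketch: sample input points in a neighborhood of the test
    input point and estimate the probability that a point is mapped onto a
    functional value whose deviation from the reference does not fulfill the
    condition (here: deviation smaller than a threshold value)."""
    violations = 0
    for _ in range(n):
        x = center + random.uniform(-radius, radius)  # input point in the neighborhood
        deviation = abs(ann(x) - reference)           # deviation of the mapped value
        if not (deviation < threshold):               # condition not fulfilled
            violations += 1
    return violations / n
```

The estimate is an ordinary relative frequency; nothing in the computation depends on any particular machine or technical environment.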
Regarding claim 5,
Claim 5 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 4 which includes an abstract idea (see rejection for claim 4). The additional limitations:
wherein the input points are drawn from the input space in the neighborhood according to a probability distribution randomly, and/or are drawn in a manner uniformly distributed over the neighborhood. – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
Regarding claim 6,
Claim 6 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1 which includes an abstract idea (see rejection for claim 1). The additional limitations:
wherein in the input space a distribution that includes the test input point is determined of test input points from various test points that divides the input space into regions – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.) because this step describes a mathematical division of an input space using a distribution of test points.
the regions being adjacent simplexes or adjacent spheres, the regions each including at least one test input point – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.) because this step describes the input space being divided into geometric shapes, each containing at least one point. These are purely geometric/mathematical constructs.
a pair being provided per test point of a test input point from the input space of the function and a test functional value of the function – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.) because this is basic function evaluation, a foundational math concept.
the input point being determined in one of the regions - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
the reference being determined, using the test input point, from at least one test point that is included in the region in which the input point is determined, and - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
the measure being determined for the region – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.), as a “measure” (possibly a statistical or numeric value) is calculated for the region; this is another mathematical computation.
Regarding claim 7,
Claim 7 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 6 which includes an abstract idea (see rejection for claim 6). The additional limitations:
wherein the simplexes include one of the test input points per vertex of a simplex, or the spheres each include one of the test input points in their center – This claim merely recites a further limitation on “the regions being adjacent simplexes or adjacent spheres, the regions each including at least one test input point” from claim 6, which was directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.)
Regarding claim 8,
Claim 8 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 6 which includes an abstract idea (see rejection for claim 6). The additional limitations:
wherein a multiplicity of input points are determined from the input space, the multiplicity of input points lying in one of the regions in the input space, the method including - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) because this step involves selecting or identifying multiple input points from a defined space.
for each respective input point from the multiplicity of input points, mapping the respective input point [by the artificial neural network] onto a respective functional value and – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
by the artificial neural network – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
determining of a deviation of the respective functional value from the reference, and – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.) because this step involves calculating the difference between the ANN output and a reference; a deviation is a numeric difference.
the measure being determined as a function of the deviations thus determined for the multiplicity of input points. – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.)
Regarding claim 9,
Claim 9 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 6 which includes an abstract idea (see rejection for claim 6). The additional limitations:
wherein a multiplicity of input points from the input space are determined that lie in various regions in the input space, the method including, - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.) because this step involves selecting or identifying multiple input points from a defined space.
for each region of the various regions, mapping an input point of the multiplicity of input points from this region [by the artificial neural network] onto a respective functional value – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
by the artificial neural network – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
determining a reference for the functional value, using the test input point, from at least one test point that is included in the region, and – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.) because this step involves computing a reference value from at least one test point in the region.
determining a deviation of the respective functional value from the reference, and – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.) because this step involves calculating the difference between the ANN output and a reference; a deviation is a numeric difference.
wherein the measure is determined as a function of a frequency with which the determined deviations fulfill a condition for their respective region. – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.)
Regarding claim 10,
Claim 10 is rejected under 35 U.S.C 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1 which includes an abstract idea (see rejection for claim 1). The additional limitations:
wherein the reference is determined as a function of a difference between the input point and the test input point, – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.)
the difference being weighted with a Lipschitz constant of the function. – This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.)
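By way of illustration only (hypothetical names; the examiner's sketch, not claim language), the recited Lipschitz weighting is a direct numerical application of the inequality |f(x) - f(x_t)| <= L * |x - x_t|:

```python
def reference_bound(test_value, lipschitz_const, input_point, test_input_point):
    """Illustrative sketch: bound the reference for the functional value using
    the difference between the input point and the test input point, weighted
    by the Lipschitz constant L of the function."""
    weighted_diff = lipschitz_const * abs(input_point - test_input_point)
    # Any admissible functional value must lie within this interval around
    # the test functional value.
    return (test_value - weighted_diff, test_value + weighted_diff)
```

The weighting is a single multiplication and the reference an interval computed from it; this is arithmetic that could be carried out with pen and paper.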
Regarding claim 11,
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
Yes, the claim is directed to a machine (a device).
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites an abstract idea.
verify an artificial neural network that is trained to map an input point from an input space of a function as accurately as possible onto a functional value of the function, - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
the function being a limited or Lipschitz-constant function, the device being configured to - This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.)
specify a test point, the test point including a pair of a test input point from the input space of the function and a test functional value, - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
the input point being determined from the input space; - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
map the input point [by the artificial neural network] onto the functional value; - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
determine a reference for the functional value using the test input point; - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
determine a deviation of the functional value from the reference; and - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
determine a measure of a susceptibility to error of the artificial neural network as a function of the deviation. - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements:
A device – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
by the artificial neural network – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred. – This limitation is directed to a decision of transmittal based on a mathematical value (mathematical relationships), as it merely applies the abstract idea using a result-based criterion (i.e., determining whether a condition is satisfied).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are:
A device – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
by the artificial neural network – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)).
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred. – This limitation is directed to a decision of transmittal based on a mathematical value (mathematical relationships), as it merely applies the abstract idea using a result-based criterion (i.e., determining whether a condition is satisfied).
Regarding claim 12:
Step 1 – Is the claim to a process, machine, manufacture or composition of matter?
Yes, the claim is directed to a manufacture (a non-transitory computer-readable medium).
Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes, the claim recites an abstract idea.
verifying an artificial neural network that is trained to map an input point from an input space of a function as accurately as possible onto a functional value of the function, - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
the function being a limited or Lipschitz-constant function - This limitation is directed to mathematical calculation (see MPEP 2106.04(a)(2) I. C.)
specifying a test point, the test point including a pair of a test input point from the input space of the function and a test functional value, - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
the input point being determined from the input space; - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
mapping the input point [by the artificial neural network] onto the functional value; - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
determining a reference for the functional value using the test input point; - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
determining a deviation of the functional value from the reference; and - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
determining a measure of a susceptibility to error of the artificial neural network as a function of the deviation. - This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, or opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.)
Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application?
No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements:
A non-transitory computer-readable medium on which is stored a computer program including computer-readable instructions – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)).
the instructions, when executed by a computer, causing the computer to perform the following steps: – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)).
by the artificial neural network – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)).
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred. - This limitation is directed to a decision to transfer based on a mathematical value (a mathematical relationship), as it merely applies the abstract idea using a result-based criterion (i.e., determining whether a condition is satisfied).
Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception?
No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are:
A non-transitory computer-readable medium on which is stored a computer program including computer-readable instructions – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)).
the instructions, when executed by a computer, causing the computer to perform the following steps: – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)).
by the artificial neural network – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f) (2)).
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred. - This limitation is directed to a decision to transfer based on a mathematical value (a mathematical relationship), as it merely applies the abstract idea using a result-based criterion (i.e., determining whether a condition is satisfied).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-5 and 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Zakrzewski (US 6,473,746 B1) in view of Zakrzewski (EP) (EP 1 462 989 A2) and further in view of Gurumurthi (US 2020/0151572 A1).
Regarding Claim 1, Zakrzewski explicitly discloses:
verifying an artificial neural network that is trained to map an input point from an
input space of a function as accurately as possible onto a functional value of the function, the function being a limited or Lipschitz-constant function, the method comprising: (Zakrzewski, Col. 1, Lines 53: “In accordance with the present invention, a method of verifying pretrained, static, feedforward neural network mapping software having a neural net mapping function f(x) that is intended to replace look up table mapping software having a look up table mapping function Ф(x) comprises the steps of: (1) establishing a multi-axis rectangular domain including upper and lower limits of each axis to bound all input vectors x of both said look up table mapping function Ф(x) and said neural net mapping function f(x); (2) determining a set of test points within the rectangular domain based on said upper and lower limits of each axis; (3) determining Lipschitz constant Kf for said neural net mapping function f(x);”) [Examiners’ note: “input space” is being interpreted as the multi-axis rectangular domain defined by upper and lower bounds on each axis, “a functional value of the function” is being interpreted as the output of the neural network mapping function f(x)]
specifying a test point, the test point including a pair of a test input point from the input space of the function and a test functional value, the input point being determined from the input space; (Zakrzewski, Col. 5, Lines 6-13: “The actual input-output properties of the neural network mapping function are fully determined by the weight matrices. Accordingly, the construction of a verification procedure that will guarantee full knowledge of the behavior of the trained neural network is based on the assumption that training and testing points can be generated at will by a computer simulation model.”, Col. 5, Lines 16-22: “a method of verifying pretrained neural network mapping software evaluates the output of the neural network mapping function thereof over a large set of uniformly spaced test points and to find upper and lower bounds on the values that can possibly be attained between those test points.”) [Examiner’s note: “a functional value of the function” is being interpreted as the output of the neural network mapping function f(x). The test points are input values from the input space, the output of the function is evaluated at those test points, meaning each test point is specified.]
mapping the input point by the artificial neural network onto the functional value;
(Zakrzewski, Col. 3, Lines 32-37: “Thus, for purposes of describing the exemplary neural network of FIG. 1, let us assume a neural net mapping software having a neural net function with n-dimensional real argument, and one- dimensional real output value:
[equation image omitted]
”) [Examiner’s note: the highlight describes a neural network function f that maps input vectors Rn to output value R]
determining a reference for the functional value using the test input point; (Zakrzewski,
Col. 15, Lines 30-33: “Firstly, the domain of input signals was rectangular. Secondly, for each particular input value, the desired, or "true" output value was known or readily calculable via a look-up table.”) [Examiner’s note: “a reference for the functional value” is being interpreted as the desired, or true output value]
determining a deviation of the functional value from the reference; and (Zakrzewski,
Col. 10, Lines 9-25: “Continuing, the Lipschitz constant K'f is determined for the neural network mapping, so that knowledge of the gradient at the testing points allows inference about gradient values throughout the whole rectangle. Because error bounds are to be determined between f(x) and Ф(x), knowledge of the difference between gradients of the two functions
are determined… Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image omitted]
”) [Examiner’s note: The highlights describe calculating bounds for the error between two functions: f(x) (i.e., the functional value) and Ф(x) (i.e., the reference)]
determining a measure of a susceptibility to error of the artificial neural network as a
function of the deviation. (Zakrzewski, Col. 10, Lines 22-25: “Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image omitted]
”), Col. 10, Lines 32-39: “After lower and upper bounds for approximation error within each rectangular cell have been determined, a lower and an upper error bound is selected therefrom for the desired error bounds of the mapping functions f(x) and Ф(x). In the present embodiment, the minimum and maximum of these error bound quantities over all cells are selected to obtain the desired lower and upper error bound valid for every point within the domain R.”) [Examiner’s note: The error bounds between f(x) and Ф(x) are determined by using the Lipschitz constants Kf and KФ. These constants capture how sensitive the functions are to input changes i.e., how susceptible they are to error due to input deviation.]
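As an illustrative sketch of the Lipschitz-based error-bounding technique cited above (hypothetical code, not taken from the reference; a one-dimensional domain and invented functions are assumed), a Lipschitz constant K for the error e(x) = f(x) - Ф(x) lets a bound observed at uniformly spaced test points be extended to the whole interval:

```python
# Illustrative sketch: bounding |f(x) - phi(x)| over an interval from
# uniformly spaced test points, using a Lipschitz constant for the error.

def worst_case_error_bound(f, phi, lo, hi, n_points, lipschitz_k):
    """Bound |f(x) - phi(x)| over [lo, hi] from uniformly spaced test points.

    Between adjacent test points (spacing h), the error e(x) = f(x) - phi(x)
    can drift by at most lipschitz_k * h / 2 from its value at the nearest
    test point, so the grid maximum plus that slack bounds the error everywhere.
    """
    h = (hi - lo) / (n_points - 1)
    observed = max(abs(f(lo + i * h) - phi(lo + i * h)) for i in range(n_points))
    return observed + lipschitz_k * h / 2

# Hypothetical example: f approximates phi(x) = 2x with a constant 0.01 offset,
# so e(x) is constant and any K >= 0 is a valid Lipschitz constant for e.
phi = lambda x: 2.0 * x
f = lambda x: 2.0 * x + 0.01
bound = worst_case_error_bound(f, phi, 0.0, 1.0, 101, lipschitz_k=0.0)
```

This is why a finite grid of test points suffices in the cited approach: the Lipschitz constant controls how far the error can move between samples.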
Zakrzewski fails to disclose:
a computer-implemented method
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred.
However, Zakrzewski (EP) explicitly discloses:
A computer-implemented method for (Zakrzewski (EP), ¶[0100]: “Use of the
randomized verification technique described herein is based on using test points that are randomly selected. It should be noted that, in practice in an embodiment using a computer processor, a pseudo-random number generator may be used.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Zakrzewski and Zakrzewski (EP). Zakrzewski teaches a method of verifying pretrained neural network mapping using a Lipschitz-constant function. Zakrzewski (EP) teaches using statistical analysis to reduce the number of samples required in accordance with statistical analysis confidence intervals to verify correctness of a component. One of ordinary skill would have been motivated to combine Zakrzewski and Zakrzewski (EP) because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) "obvious to try" - choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.
However, Gurumurthi explicitly discloses:
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred. (Gurumurthi, ¶[0021]: “When one or more error values are lower than a threshold, the controller causes the system to switch from using the processor and memory to using the analog circuit element functional block for performing remaining iterations. As part of the switch, the system copies or transfers neural network data from the memory to the analog circuit element functional block to prepare/configure the analog circuit element functional block for performing the remaining iterations.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Zakrzewski and Gurumurthi. Zakrzewski teaches a method of verifying pretrained neural network mapping using a Lipschitz-constant function. Gurumurthi teaches using multiple functional blocks for training neural networks. One of ordinary skill would have been motivated to combine Zakrzewski and Gurumurthi to increase the operational safety and reduce the computational cost of training the network of the machine, as the processor functional block can use higher precision data values to produce higher precision results, which can result in fewer iterations being required to train the neural network (Gurumurthi, ¶[0044]).
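The transfer criterion as mapped to Gurumurthi's threshold test amounts to a single comparison; a minimal sketch (function and variable names are hypothetical, not drawn from the references):

```python
def should_transfer(error_measure, threshold):
    """Transfer the verified network into the target machine only when the
    measure of susceptibility to error falls below the threshold value."""
    return error_measure < threshold

# Hypothetical example: a network whose bounded error measure is 0.01
# clears a 0.05 threshold; one whose measure is 0.10 does not.
assert should_transfer(0.01, 0.05)
assert not should_transfer(0.10, 0.05)
```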
Regarding Claim 2, the combination of Zakrzewski, Zakrzewski (EP) and Gurumurthi discloses all limitations of Claim 1 (as shown in the rejection above).
Zakrzewski in view of Zakrzewski (EP) and Gurumurthi further discloses:
wherein the function describes a curve of a physical or chemical variable in a machine, (Zakrzewski, Col. 16, Lines 4-10: “the state x of an aircraft fuel tank, by way of example, may consist of two attitude angles, height of the fuel surface over a reference point, fuel temperature, and magnitude of the acceleration vector. In this same example, measurement vector z may include pressure, temperature, acceleration, and ultrasonic time of flight values.”, Lines 13-22: “The rational behind introducing the concept of state x is that its components can change independently of each other, and the set of possible state values can be specified as a rectangle again Thus, we have the following situation: the "true", desired value of the estimator is calculated as a known function of state y(true)=Ф(x). The actual output of the neural net estimator
[equation image omitted]
is calculated as a function of the sensor values
[equation image omitted]
which in turn are functions of state contaminated by sensor noise
[equation image omitted]
This mapping concept is illustrated graphically in FIG. 4.”) [Examiner’s note: The function Ф(x) maps a set of physical variables (state x) of an aircraft fuel tank (a machine) to a physical output (fuel quantity or mass). These variables are continuous and measurable (e.g., temperature, pressure, etc.), so Ф(x) describes a curve of physical variables in a machine]
changes of the variable in the curve being limited by: (i) physical and/or chemical properties of the machine, and/or (ii) physical and/or chemical properties of components of the machine, (Zakrzewski, Col. 2, Lines 25-28: “establishing a multi-axis rectangular domain including upper and lower limits of each axis to bound all state vectors x of both said predetermined mapping functions Ф(x) and
[equation image omitted]
”, Col. 2, Lines 31-32: “determining an upper bound
[equation image omitted]
and a lower bound
[equation image omitted]
of the noise components
[equation image omitted]
”, Col. 16, Lines 4-10: “the state x of an aircraft fuel tank, by way of example, may consist of two attitude angles, height of the fuel surface over a reference point, fuel temperature, and magnitude of the acceleration vector. In this same example, measurement vector z may include pressure, temperature, acceleration, and ultrasonic time of flight values.”,)
wherein the machine or the component is controlled as a function of the functional value when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the machine or the component not being controlled as a function of the functional value. (Zakrzewski (EP), ¶[0108]: “At step 20, the ith data point is evaluated in accordance with the particular function which, in this example, is the error function e(x). At step 22, a determination is made as to whether the error e(x) exceeds the predetermined bounds. If so, control proceeds to step 28 where a conclusion is made that the verification has failed and the neural network is not verified as correct. Otherwise, control proceeds to step 24 where i is incremented and processing proceeds with the next data point.”)
Regarding Claim 3, the combination of Zakrzewski, Zakrzewski (EP) and Gurumurthi discloses all limitations of Claim 1 (as shown in the rejection above).
Zakrzewski in view of Zakrzewski (EP) and Gurumurthi further discloses:
wherein the function describes a curve of a physical or chemical variable in a machine, (Zakrzewski, Col. 16, Lines 4-10: “the state x of an aircraft fuel tank, by way of example, may consist of two attitude angles, height of the fuel surface over a reference point, fuel temperature, and magnitude of the acceleration vector. In this same example, measurement vector z may include pressure, temperature, acceleration, and ultrasonic time of flight values.”, Lines 13-22: “The rational behind introducing the concept of state x is that its components can change independently of each other, and the set of possible state values can be specified as a rectangle again Thus, we have the following situation: the "true", desired value of the estimator is calculated as a known function of state y(true)=Ф(x). The actual output of the neural net estimator
[equation image omitted]
is calculated as a function of the sensor values
[equation image omitted]
which in turn are functions of state contaminated by sensor noise
[equation image omitted]
This mapping concept is illustrated graphically in FIG. 4.”) [Examiner’s note: The function Ф(x) maps a set of physical variables (state x) of an aircraft fuel tank (a machine) to a physical output (fuel quantity or mass). These variables are continuous and measurable (e.g., temperature, pressure, etc.), so Ф(x) describes a curve of physical variables in a machine]
changes of the variable in the curve being limited by: (i) physical and/or chemical properties of the machine, and/or by in particular physical, and/or (ii) chemical properties of components of the machine, (Zakrzewski, Col. 2, Lines 25-28: “establishing a multi-axis rectangular domain including upper and lower limits of each axis to bound all state vectors x of both said predetermined mapping functions Ф(x) and
[equation image omitted]
”, Col. 2, Lines 31-32: “determining an upper bound
[equation image omitted]
and a lower bound
[equation image omitted]
of the noise components
[equation image omitted]
”, Col. 16, Lines 4-10: “the state x of an aircraft fuel tank, by way of example, may consist of two attitude angles, height of the fuel surface over a reference point, fuel temperature, and magnitude of the acceleration vector. In this same example, measurement vector z may include pressure, temperature, acceleration, and ultrasonic time of flight values.”,)
Regarding Claim 4, the combination of Zakrzewski, Zakrzewski (EP) and Gurumurthi discloses all limitations of Claim 1 (as shown in the rejection above).
Zakrzewski in view of Zakrzewski (EP) and Gurumurthi further discloses:
wherein, as a function of the test input point or in a neighborhood of the test input point, input points are determined from the input space, a probability being determined that among the input points there is an input point that is mapped by the artificial neural network onto a functional value whose deviation does not fulfill a condition, (Zakrzewski (EP), ¶[0023]: “For the approximation error-bounding problem, the required error bounds
Mlo_error, Mup_error
may be given. The probability perror of the event that the actual approximation error between f (x) and ϕ (x) exceeds these limit values may be estimated and expressed as:
[equation image omitted]
”, ¶[0028]: “In the exceedance probability estimation approach, the estimated quantity is the probability of the event that the value of the function of interest (the neural net function f (x), or the error function, e(x) = f (x) - ϕ (x) ) does not exceed the predetermined bounds.”) [Examiner’s note: the deviation does not fulfill a condition i.e., error exceeds the predetermined bounds]
the deviation fulfilling the condition either when it is determined that the deviation is smaller than a threshold value or when it is determined that the deviation is greater than a threshold value or when it is determined that the deviation is within an upper and lower bound. (Zakrzewski (EP), ¶[0031]: “In determining the estimation for p in EQUATION 3, the approximation error e(xi) is evaluated at each sample point to determine if it exceeds the bounds Mlo, Mup. In an ideal situation, all test points satisfy the required bounds so that χF(xi) = 0 for all i, and the resulting estimate is that p = 0.”)
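The exceedance-probability estimation cited from Zakrzewski (EP) can be sketched as a Monte Carlo fraction (illustrative only; the error function, sampler, and bounds below are invented for the example):

```python
import random

def estimate_exceedance_probability(err_fn, sampler, m_lo, m_up, n_samples, seed=0):
    """Estimate p = P(e(x) outside [m_lo, m_up]) by random sampling.

    Each sampled input contributes 1 if its error exceeds the bounds and
    0 otherwise; the estimate is the empirical fraction of failures.
    """
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n_samples)
        if not (m_lo <= err_fn(sampler(rng)) <= m_up)
    )
    return failures / n_samples

# Hypothetical example: the error is identically 0.02 and the bounds are
# [-0.05, 0.05], so no sample exceeds the bounds and the estimate is p = 0.
p = estimate_exceedance_probability(
    err_fn=lambda x: 0.02,
    sampler=lambda rng: rng.uniform(0.0, 1.0),
    m_lo=-0.05, m_up=0.05, n_samples=1000,
)
```

A non-uniform sampler could be substituted to reflect the operating distribution, as EP ¶[0101] suggests.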
Regarding Claim 5, the combination of Zakrzewski, Zakrzewski (EP) and Gurumurthi discloses all limitations of Claim 4 (as shown in the rejection above).
Zakrzewski in view of Zakrzewski (EP) and Gurumurthi further discloses:
wherein the input points are drawn from the input space in the neighborhood according to a probability distribution randomly, and/or are drawn in a manner uniformly distributed over the neighborhood. (Zakrzewski (EP), ¶[0011]: “The neural network may be evaluated at a set of input points corresponding to the neural network inputs and outputs. In connection with a deterministic verification approach, an exhaustive search is performed through a hyper-rectangle representing the set of admissible inputs. Each dimension of the search space is discretized with a uniform step.”, ¶[0101]: “This may be done by using an appropriate sampling distribution that reflects the probability of the system visiting different regions of the search space during actual operation. Using the techniques described herein, all analyses presented remain valid - the estimated quantity p is still the measure of the failure set F, except that according to the assumed non-uniform probability measure. In order to determine an appropriate prior distribution, a sampling distribution may be constructed that closely models actual frequencies of different regions of the search space during the system's operation.”)
Regarding Claim 10, the combination of Zakrzewski, Zakrzewski (EP) and Gurumurthi discloses all limitations of Claim 1 (as shown in the rejection above).
Zakrzewski in view of Zakrzewski (EP) and Gurumurthi further discloses:
wherein the reference is determined as a function of a difference between the input point and the test input point, (Zakrzewski, Col. 10, Lines 32-39: “After lower and upper bounds for approximation error within each rectangular cell have been determined, a lower and an upper error bound is selected therefrom for the desired error bounds of the mapping functions f(x) and Ф(x). In the present embodiment, the minimum and maximum of these error bound quantities over all cells are selected to obtain the desired lower and upper error bound valid for every point within the domain R.”)
the difference being weighted with a Lipschitz constant of the function. (Zakrzewski, Col. 13, Lines 35-40: “Then calculate the desired Lipschitz constant valid within the current cell by the expression:
[equation image omitted]
”)
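The claim-10 construction (a reference derived from the difference between the input point and the test input point, weighted with a Lipschitz constant) can be sketched as an interval bound; all names and values below are hypothetical:

```python
def reference_interval(test_value, x, x_test, lipschitz_k):
    """Bound the true functional value at x from the known value at the
    test input point: a K-Lipschitz function can change by at most
    lipschitz_k * |x - x_test| over that distance."""
    slack = lipschitz_k * abs(x - x_test)
    return test_value - slack, test_value + slack

# Hypothetical example: known value 1.0 at x_test = 0.5, Lipschitz
# constant 2.0, queried at x = 0.6, giving the interval [0.8, 1.2].
lo, hi = reference_interval(1.0, x=0.6, x_test=0.5, lipschitz_k=2.0)
```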
Regarding Claim 11, Zakrzewski explicitly discloses:
verify an artificial neural network that is trained to map an input point from an input space of a function as accurately as possible onto a functional value of the function, the function being a limited or Lipschitz-constant function, the device being configured to: (Zakrzewski, Col. 1, Lines 53: “In accordance with the present invention, a method of verifying pretrained, static, feedforward neural network mapping software having a neural net mapping function f(x) that is intended to replace look up table mapping software having a look up table mapping function Ф(x) comprises the steps of: (1) establishing a multi-axis rectangular domain including upper and lower limits of each axis to bound all input vectors x of both said look up table mapping function Ф(x) and said neural net mapping function f(x); (2) determining a set of test points within the rectangular domain based on said upper and lower limits of each axis; (3) determining Lipschitz constant Kf for said neural net mapping function f(x);”) [Examiners’ note: “input space” is being interpreted as the multi-axis rectangular domain defined by upper and lower bounds on each axis, “a functional value of the function” is being interpreted as the output of the neural network mapping function f(x)]
specify a test point, the test point including a pair of a test input point from the input space of the function and a test functional value, the input point being determined from the input space; (Zakrzewski, Col. 5, Lines 6-13: “The actual input-output properties of the neural network mapping function are fully determined by the weight matrices. Accordingly, the construction of a verification procedure that will guarantee full knowledge of the behavior of the trained neural network is based on the assumption that training and testing points can be generated at will by a computer simulation model.”, Col. 5, Lines 16-22: “a method of verifying pretrained neural network mapping software evaluates the output of the neural network mapping function thereof over a large set of uniformly spaced test points and to find upper and lower bounds on the values that can possibly be attained between those test points.”) [Examiner’s note: “a functional value of the function” is being interpreted as the output of the neural network mapping function f(x). The test points are input values from the input space, the output of the function is evaluated at those test points, meaning each test point is specified.]
map the input point by the artificial neural network onto the functional value; (Zakrzewski, Col. 3, Lines 32-37: “Thus, for purposes of describing the exemplary neural network of FIG. 1, let us assume a neural net mapping software having a neural net function with n-dimensional real argument, and one- dimensional real output value:
[equation image omitted]
”) [Examiner’s note: the highlight describes a neural network function f that maps input vectors Rn to output value R]
determine a reference for the functional value using the test input point; (Zakrzewski,
Col. 15, Lines 30-33: “Firstly, the domain of input signals was rectangular. Secondly, for each particular input value, the desired, or "true" output value was known or readily calculable via a look-up table.”) [Examiner’s note: “a reference for the functional value” is being interpreted as the desired, or true output value]
determine a deviation of the functional value from the reference; and (Zakrzewski, Col.
10, Lines 9-25: “Continuing, the Lipschitz constant K'f is determined for the neural network mapping, so that knowledge of the gradient at the testing points allows inference about gradient values throughout the whole rectangle. Because error bounds are to be determined between f(x) and Ф(x), knowledge of the difference between gradients of the two functions
are determined… Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image omitted]
”) [Examiner’s note: The highlights describe calculating bounds for the error between two functions: f(x) (i.e., the functional value) and Ф(x) (i.e., the reference)]
determine a measure of a susceptibility to error of the artificial neural network as a function of the deviation. (Zakrzewski, Col. 10, Lines 22-25: “Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image omitted]
”), Col. 10, Lines 32-39: “After lower and upper bounds for approximation error within each rectangular cell have been determined, a lower and an upper error bound is selected therefrom for the desired error bounds of the mapping functions f(x) and Ф(x). In the present embodiment, the minimum and maximum of these error bound quantities over all cells are selected to obtain the desired lower and upper error bound valid for every point within the domain R.”) [Examiner’s note: The error bounds between f(x) and Ф(x) are determined by using the Lipschitz constants Kf and KФ. These constants capture how sensitive the functions are to input changes i.e., how susceptible they are to error due to input deviation.]
Zakrzewski fails to disclose:
a device
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred.
However, Zakrzewski (EP) discloses:
A device configured to (Zakrzewski (EP), ¶[0114]: “Some or all of the connections by which the hosts, data manager system 156 and data storage system 152 may be connected to the communication medium 158 may pass through other communication devices, such as switching equipment including, for example, a phone line, a repeater, a multiplexer or even a satellite.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Zakrzewski and Zakrzewski (EP). Zakrzewski teaches a method of verifying pretrained neural network mapping using a Lipschitz-constant function. Zakrzewski (EP) teaches using statistical analysis to reduce the number of samples required in accordance with statistical analysis confidence intervals to verify correctness of a component. One of ordinary skill would have been motivated to combine Zakrzewski and Zakrzewski (EP) because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) "obvious to try" - choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.
However, Gurumurthi explicitly discloses:
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred. (Gurumurthi, ¶[0021]: “When one or more error values are lower than a threshold, the controller causes the system to switch from using the processor and memory to using the analog circuit element functional block for performing remaining iterations. As part of the switch, the system copies or transfers neural network data from the memory to the analog circuit element functional block to prepare/configure the analog circuit element functional block for performing the remaining iterations.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Zakrzewski and Gurumurthi. Zakrzewski teaches a method of verifying pretrained neural network mapping using a Lipschitz constant function. Gurumurthi teaches using multiple functional blocks for training neural networks. One of ordinary skill would have been motivated to combine Zakrzewski and Gurumurthi to increase operational safety and reduce the computational cost of training the network of the machine, as the processor functional block can use higher-precision data values to produce higher-precision results, which can result in fewer iterations being required to train the neural network (Gurumurthi, ¶[0044]).
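As an illustrative aside (not part of the record), the threshold-gated transfer that Gurumurthi describes in ¶[0021] can be sketched as follows; the names `maybe_deploy`, `error_measure`, and `deploy` are hypothetical and chosen only for illustration:

```python
# Hypothetical sketch of a threshold-gated transfer: the network is
# deployed into the target machine only when the susceptibility measure
# is below the threshold; otherwise it is not transferred.
def maybe_deploy(error_measure, threshold, deploy):
    if error_measure < threshold:
        deploy()      # e.g., copy network data into the machine
        return True   # network transferred
    return False      # network not transferred
```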
Regarding Claim 12, Zakrzewski explicitly discloses:
verifying an artificial neural network that is trained to map an input point from an input space of a function as accurately as possible onto a functional value of the function, the function being a limited or Lipschitz-constant function, the instructions, when executed by a computer, causing the computer to perform the following steps: (Zakrzewski, Col. 1, Line 53: “In accordance with the present invention, a method of verifying pretrained, static, feedforward neural network mapping software having a neural net mapping function f(x) that is intended to replace look up table mapping software having a look up table mapping function Ф(x) comprises the steps of: (1) establishing a multi-axis rectangular domain including upper and lower limits of each axis to bound all input vectors x of both said look up table mapping function Ф(x) and said neural net mapping function f(x); (2) determining a set of test points within the rectangular domain based on said upper and lower limits of each axis; (3) determining Lipschitz constant Kf for said neural net mapping function f(x);”) [Examiner’s note: “input space” is being interpreted as the multi-axis rectangular domain defined by upper and lower bounds on each axis, “a functional value of the function” is being interpreted as the output of the neural network mapping function f(x)]
specifying a test point, the test point including a pair of a test input point from the input space of the function and a test functional value, the input point being determined from the input space; (Zakrzewski, Col. 5, Lines 6-13: “The actual input-output properties of the neural network mapping function are fully determined by the weight matrices. Accordingly, the construction of a verification procedure that will guarantee full knowledge of the behavior of the trained neural network is based on the assumption that training and testing points can be generated at will by a computer simulation model.”, Col. 5, Lines 16-22: “a method of verifying pretrained neural network mapping software evaluates the output of the neural network mapping function thereof over a large set of uniformly spaced test points and to find upper and lower bounds on the values that can possibly be attained between those test points.”) [Examiner’s note: “a functional value of the function” is being interpreted as the output of the neural network mapping function f(x). The test points are input values from the input space, the output of the function is evaluated at those test points, meaning each test point is specified.]
mapping the input point by the artificial neural network onto the functional value; (Zakrzewski, Col. 3, Lines 32-37: “Thus, for purposes of describing the exemplary neural network of FIG. 1, let us assume a neural net mapping software having a neural net function with n-dimensional real argument, and one- dimensional real output value:
[equation image: media_image1.png]
”) [Examiner’s note: the highlight describes a neural network function f that maps input vectors Rn to output value R]
determining a reference for the functional value using the test input point; (Zakrzewski,
Col. 15, Lines 30-33: “Firstly, the domain of input signals was rectangular. Secondly, for each particular input value, the desired, or "true" output value was known or readily calculable via a look-up table.”) [Examiner’s note: “a reference for the functional value” is being interpreted as the desired, or true output value]
determining a deviation of the functional value from the reference; and (Zakrzewski,
Col. 10, Lines 9-25: “Continuing, the Lipschitz constant K'f is determined for the neural network mapping, so that knowledge of the gradient at the testing points allows inference about gradient values throughout the whole rectangle. Because error bounds are to be determined between f(x) and Ф(x), knowledge of the difference between gradients of the two functions
are determined… Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image: media_image2.png]
”) [Examiner’s note: The highlights describe calculating bounds for the error between two functions: f(x) (i.e., the functional value) and Ф(x) (i.e., the reference)]
determining a measure of a susceptibility to error of the artificial neural network as a function of the deviation. (Zakrzewski, Col. 10, Lines 22-25: “Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image: media_image2.png]
”), Col. 10, Lines 32-39: “After lower and upper bounds for approximation error within each rectangular cell have been determined, a lower and an upper error bound is selected therefrom for the desired error bounds of the mapping functions f(x) and Ф(x). In the present embodiment, the minimum and maximum of these error bound quantities over all cells are selected to obtain the desired lower and upper error bound valid for every point within the domain R.”) [Examiner’s note: The error bounds between f(x) and Ф(x) are determined by using the Lipschitz constants Kf and KФ. These constants capture how sensitive the functions are to input changes i.e., how susceptible they are to error due to input deviation.]
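As an illustrative aside (not part of the record), the Lipschitz-based bounding technique summarized in the note above can be sketched in a simplified, one-dimensional form: the deviation between the network f and the reference Ф is sampled at uniformly spaced test points (cell vertices), then padded by the worst drift the two functions can exhibit between neighboring test points. All names are hypothetical, and this sketch is not Zakrzewski's full per-cell computation:

```python
# Hypothetical 1-D sketch: bound |f(x) - phi(x)| over [lo, hi] from
# values at uniformly spaced test points plus the worst-case drift
# permitted by the two Lipschitz constants K_f and K_phi.
def error_bound(f, phi, lo, hi, n_cells, K_f, K_phi):
    h = (hi - lo) / n_cells                        # cell width
    xs = [lo + i * h for i in range(n_cells + 1)]  # test points (vertices)
    dev = max(abs(f(x) - phi(x)) for x in xs)      # deviation at test points
    # Between neighboring test points, f - phi can move at most
    # (K_f + K_phi) * h / 2 away from the nearer vertex value.
    return dev + (K_f + K_phi) * h / 2
```

For example, with f(x) = 2x and Ф(x) = 2x + 0.1 on [0, 1], 100 cells, and K_f = K_Ф = 2, the bound is 0.1 + 0.02 = 0.12, slightly above the true supremum deviation of 0.1.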
Zakrzewski fails to disclose:
A non-transitory computer-readable medium on which is stored a computer program including computer-readable instructions for
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred.
However, Zakrzewski (EP) explicitly discloses:
A non-transitory computer-readable medium on which is stored a computer program including computer-readable instructions for (Zakrzewski (EP), ¶[0117]: “The instructions that may be executed by processors included in the host computers may be stored in any combination of hardware and/or software including, for example, machine executable instructions input from a read-only-memory (ROM), machine-language instructions stored on a data storage device in which the machine-language instructions have been generated using a language processor, software package, and the like.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Zakrzewski and Zakrzewski (EP). Zakrzewski teaches a method of verifying pretrained neural network mapping using a Lipschitz constant function. Zakrzewski (EP) teaches using statistical analysis to reduce the number of samples required, in accordance with statistical analysis confidence intervals, to verify the correctness of a component. One of ordinary skill would have been motivated to combine Zakrzewski and Zakrzewski (EP) because MPEP 2143 sets forth the Supreme Court rationales for obviousness, including: (D) applying a known technique to a known device (method, or product) ready for improvement to yield predictable results; (E) “obvious to try” – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success; and (F) known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art.
However, Gurumurthi explicitly discloses:
wherein the artificial neural network is transferred into a machine when the measure of the susceptibility to error is smaller than a threshold value, and otherwise the artificial neural network not being transferred. (Gurumurthi, ¶[0021]: “When one or more error values are lower than a threshold, the controller causes the system to switch from using the processor and memory to using the analog circuit element functional block for performing remaining iterations. As part of the switch, the system copies or transfers neural network data from the memory to the analog circuit element functional block to prepare/configure the analog circuit element functional block for performing the remaining iterations.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Zakrzewski and Gurumurthi. Zakrzewski teaches a method of verifying pretrained neural network mapping using a Lipschitz constant function. Gurumurthi teaches using multiple functional blocks for training neural networks. One of ordinary skill would have been motivated to combine Zakrzewski and Gurumurthi to increase operational safety and reduce the computational cost of training the network of the machine, as the processor functional block can use higher-precision data values to produce higher-precision results, which can result in fewer iterations being required to train the neural network (Gurumurthi, ¶[0044]).
Claim(s) 6-9 are rejected under 35 U.S.C. 103 as being unpatentable over Zakrzewski (US 6,473,746 B1) in view of Zakrzewski (EP) (EP 1 462 989 A2) and Gurumurthi (US 2020/0151572 A1), and in further view of Huang et al. (“ReachNN: Reachability Analysis of Neural-Network Controlled Systems”) (hereafter referred to as “Huang”).
Regarding Claim 6, Zakrzewski in view of Zakrzewski (EP) and Gurumurthi discloses all the limitations of Claim 1 (as shown in the rejections above).
Zakrzewski in view of Zakrzewski (EP) and Gurumurthi further discloses:
the regions each including at least one test input point, a pair being provided per test point of a test input point from the input space of the function and a test functional value of the function, (Zakrzewski, Col. 5, Lines 17-22: “In the preferred embodiment, a method of verifying pretrained neural network mapping software evaluates the output of the neural network mapping function thereof over a large set of uniformly spaced test points and to find upper and lower bounds on the values that can possibly be attained between those test points.”)
the input point being determined in one of the regions, the reference being determined, using the test input point, from at least one test point that is included in the region in which the input point is determined, and (Zakrzewski, Col. 15, Lines 30-33: “Firstly, the domain of input signals was rectangular. Secondly, for each particular input value, the desired, or "true" output value was known or readily calculable via a look-up table.”) [Examiner’s note: “a reference for the functional value” is being interpreted as the desired, or true output value]
the measure being determined for the region. (Zakrzewski, Col. 10, Lines 22-25: “Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image: media_image2.png]
”), Col. 10, Lines 32-39: “After lower and upper bounds for approximation error within each rectangular cell have been determined, a lower and an upper error bound is selected therefrom for the desired error bounds of the mapping functions f(x) and Ф(x). In the present embodiment, the minimum and maximum of these error bound quantities over all cells are selected to obtain the desired lower and upper error bound valid for every point within the domain R.”) [Examiner’s note: The error bounds between f(x) and Ф(x) are determined by using the Lipschitz constants Kf and KФ. These constants capture how sensitive the functions are to input changes i.e., how susceptible they are to error due to input deviation.]
Zakrzewski in view of Zakrzewski (EP) and Gurumurthi fails to disclose:
wherein in the input space a distribution that includes the test input point is determined of test input points from various test points that divides the input space into regions, the regions being adjacent simplexes or adjacent spheres
However, Huang explicitly discloses:
wherein in the input space a distribution that includes the test input point is determined of test input points from various test points that divides the input space into regions, the regions being adjacent simplexes or adjacent spheres (Huang, Pg. 6, Col. 1, Theorem 3.8: “For a given box X = [l1, u1] × ・ ・ ・ × [lm, um], we perform a grid-based partition based on an integer vector p = (p1, ・ ・ ・ ,pm). That is, we partition X into a set of boxes”) [Examiner’s note: The input domain X is split into multiple boxes, each box acts like a simplex region. This corresponds to dividing the input domain into local approximation regions (i.e., adjacent simplexes)]
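As an illustrative aside (not part of the record), the grid-based partition described in the cited Huang passage — splitting a box X = [l1, u1] × ・ ・ ・ × [lm, um] into sub-boxes according to an integer vector p = (p1, ・ ・ ・ , pm) — can be sketched as follows; the name `partition_box` is hypothetical:

```python
import itertools

# Hypothetical sketch: partition the box X = [l1,u1] x ... x [lm,um]
# into p1 * ... * pm grid cells, yielding each cell's (lower, upper)
# corner pair. Each cell is one of the adjacent regions.
def partition_box(lower, upper, p):
    steps = [(u - l) / k for l, u, k in zip(lower, upper, p)]
    for idx in itertools.product(*(range(k) for k in p)):
        lo = tuple(l + i * s for l, i, s in zip(lower, idx, steps))
        hi = tuple(l + (i + 1) * s for l, i, s in zip(lower, idx, steps))
        yield lo, hi
```

For example, partitioning [0, 1] × [0, 2] with p = (2, 2) yields four adjacent cells covering the box.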
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Zakrzewski, Zakrzewski (EP), Gurumurthi and Huang. Zakrzewski teaches a method of verifying pretrained neural network mapping using a Lipschitz constant function. Zakrzewski (EP) teaches using statistical analysis to reduce the number of samples required, in accordance with statistical analysis confidence intervals, to verify the correctness of a component. Gurumurthi teaches using multiple functional blocks for training neural networks. Huang teaches a new reachability analysis approach based on Bernstein polynomials that can verify neural-network controlled systems with a more general form of activation functions, i.e., as long as they ensure that the neural networks are Lipschitz continuous. One of ordinary skill would have been motivated to combine Zakrzewski, Zakrzewski (EP), Gurumurthi and Huang to improve the accuracy and interpretability of neural network verification and to enable localized analysis of functional behavior across structured regions.
Regarding Claim 7, Zakrzewski in view of Zakrzewski (EP), Gurumurthi and Huang discloses all the limitations of Claim 6 (as shown in the rejections above).
Zakrzewski in view of Zakrzewski (EP), Gurumurthi and Huang further discloses:
wherein the simplexes include one of the test input points per vertex of a simplex, or the spheres each include one of the test input points in their center. (Zakrzewski, Figure 2 discloses testing points at the vertices of a simplex (i.e., the rectangular regions)
[figure image: media_image12.png]
, Col. 9, Lines 18-20: “The bounds for the mapping function f may be obtained by dividing the rectangle R into a large number of small sub-rectangles, or "cells".”, Col. 9, Lines 27-30: “In FIG. 2 is illustrated a two-dimensional case, with N1=5 and N2 =4, in which the rectangle R is divided into 12 rectangular cells, vertices of which constitute a set of 20 testing points.”)
Regarding Claim 8, Zakrzewski in view of Zakrzewski (EP), Gurumurthi and Huang discloses all the limitations of Claim 6 (as shown in the rejections above).
Zakrzewski in view of Zakrzewski (EP), Gurumurthi and Huang further discloses:
wherein a multiplicity of input points are determined from the input space, the multiplicity of input points lying in one of the regions in the input space, the method including, (Zakrzewski, Col. 18, Lines 26-32: “Referring to FIGS. 5A-5D, in block 130, a rectangular state domain R=(x(lo), x(up)) is determined, and in block 132, a transformation mapping is determined to transform an n-dimensional state vector x into an m-dimensional measurement vector z, preferably in form of a look-up table mapping.”) [Examiner’s note: input points, i.e., state vector x]
for each respective input point from the multiplicity of input points, mapping the
respective input point by the artificial neural network onto a respective functional value and (Zakrzewski, Col. 3, Lines 32-37: “Thus, for purposes of describing the exemplary neural network of FIG. 1, let us assume a neural net mapping software having a neural net function with n-dimensional real argument, and one- dimensional real output value:
[equation image: media_image1.png]
”) [Examiner’s note: the highlight describes a neural network function f that maps input vectors Rn to output value R]
determining of a deviation of the respective functional value from the reference, and
(Zakrzewski, Col. 10, Lines 9-25: “Continuing, the Lipschitz constant K'f is determined for the neural network mapping, so that knowledge of the gradient at the testing points allows inference about gradient values throughout the whole rectangle. Because error bounds are to be determined between f(x) and Ф(x), knowledge of the difference between gradients of the two functions are determined… Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image: media_image2.png]
”) [Examiner’s note: The highlights describe calculating bounds for the error between two functions: f(x) (i.e., the functional value) and Ф(x) (i.e., the reference)]
the measure being determined as a function of the deviations thus determined for the multiplicity of input points. (Zakrzewski, Col. 10, Lines 22-25: “Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image: media_image2.png]
”), Col. 10, Lines 32-39: “After lower and upper bounds for approximation error within each rectangular cell have been determined, a lower and an upper error bound is selected therefrom for the desired error bounds of the mapping functions f(x) and Ф(x). In the present embodiment, the minimum and maximum of these error bound quantities over all cells are selected to obtain the desired lower and upper error bound valid for every point within the domain R.”) [Examiner’s note: The error bounds between f(x) and Ф(x) are determined by using the Lipschitz constants Kf and KФ. These constants capture how sensitive the functions are to input changes i.e., how susceptible they are to error due to input deviation.]
Regarding Claim 9, Zakrzewski in view of Zakrzewski (EP), Gurumurthi and Huang discloses all the limitations of Claim 6 (as shown in the rejections above).
Zakrzewski in view of Zakrzewski (EP), Gurumurthi and Huang further discloses:
wherein a multiplicity of input points from the input space are determined that lie in various regions in the input space, the method including, (Zakrzewski, Col. 10, Lines 66-: “Referring to the flowchart of FIGS. 3A and 3B, a multi-axis rectangular input domain R=(x(lo), x(up)) is determined by block 100 and upper and lower limits, respectively, are determined for each axis i within R in block 102.”)
for each region of the various regions, mapping an input point of the multiplicity of input points from this region by the artificial neural network onto a respective functional value, (Zakrzewski, Col. 3, Lines 32-37: “Thus, for purposes of describing the exemplary neural network of FIG. 1, let us assume a neural net mapping software having a neural net function with n-dimensional real argument, and one- dimensional real output value:
[equation image: media_image1.png]
”) [Examiner’s note: the highlight describes a neural network function f that maps input vectors Rn to output value R]
determining a reference for the functional value, using the test input point, from at least
one test point that is included in the region, and (Zakrzewski, Col. 15, Lines 30-33: “Firstly, the domain of input signals was rectangular. Secondly, for each particular input value, the desired, or "true" output value was known or readily calculable via a look-up table.”) [Examiner’s note: “a reference for the functional value” is being interpreted as the desired, or true output value]
determining a deviation of the respective functional value from the reference, and (Zakrzewski, Col. 10, Lines 9-25: “Continuing, the Lipschitz constant K'f is determined for the neural network mapping, so that knowledge of the gradient at the testing points allows inference about gradient values throughout the whole rectangle. Because error bounds are to be determined between f(x) and Ф(x), knowledge of the difference between gradients of the two functions are determined… Then, for each rectangular cell a local Lipschitz constant K'Ф is determined for the gradient of the look-up table mapping. Calculation of K'Ф is described in detail in appendix A.2 supra. This constant, in conjunction with K'f allows calculating bounds for the error within each rectangular cell
[equation image: media_image2.png]
”) [Examiner’s note: The highlights describe calculating bounds for the error between two functions: f(x) (i.e., the functional value) and Ф(x) (i.e., the reference)]
wherein the measure is determined as a function of a frequency with which the
determined deviations fulfill a condition for their respective region. (Huang, Page 6, Col. 1-2, Theorem 3.8, ¶[2]: “we propose an alternative sampling-based approach to estimate the approximation error. For a given box X = [l1, u1] × ・ ・ ・ × [lm, um], we perform a grid-based partition based on an integer vector p = (p1, ・ ・ ・ , pm). That is, we partition X into a set of boxes
[equation image: media_image13.png]
. It is easy to see that the largest error bound of all the boxes is a valid error bound over X.”, Page 6, Col. 2, Lemma 3.9: “Leveraging the Lipschitz continuity of κ, we can estimate the local error bound εk for box Bk by sampling the value of κ and Pκ,d at the box center.”) [Examiner’s note: The deviation is measured locally per region (box). The final deviation measure depends on how many boxes meet the condition. If one box exceeds the threshold, the overall deviation is enlarged, this indicates the frequency of boxes satisfying the bound. The use of multiple box regions (which serve as simplexes), and per-region deviation εk, together fulfill the condition (i.e., whether deviation is within tolerance). The frequency (number of compliant boxes) affects the final deviation used.]
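As an illustrative aside (not part of the record), the center-sampling estimate described in the cited Huang passage — one sample at each box center, padded by Lipschitz slack over the box's half-diagonal — can be sketched as follows. Here `K` stands for a Lipschitz constant of the difference between the true function and its approximation, and all names are hypothetical:

```python
import math

# Hypothetical sketch of the center-sampling estimate: per-box error
# bound = |kappa - approx| sampled at the box center, plus K times the
# center-to-corner distance (half the box diagonal).
def local_error_bounds(kappa, approx, cells, K):
    bounds = []
    for lo, hi in cells:
        center = tuple((l + u) / 2 for l, u in zip(lo, hi))
        radius = math.dist(lo, hi) / 2  # center-to-corner distance
        bounds.append(abs(kappa(center) - approx(center)) + K * radius)
    return bounds
```

The largest entry of the returned list is then a valid error bound over the whole partitioned box, consistent with the cited passage.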
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMY TRAN whose telephone number is (571)270-0693. The examiner can normally be reached Monday - Friday 7:30 am - 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMY TRAN/Examiner, Art Unit 2126
/LUIS A SITIRICHE/Primary Examiner, Art Unit 2126