Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. Claims 1-12 are presented for examination.
Claim Objections
2. Claims 1, 4, 5, and 7-10 are objected to because of the following informalities:
As per claim 1, it recites the limitation “functions (Jphys, i)”, and it is unclear what the characters in the parentheses refer to. Are they reference characters corresponding to elements in the drawings? What are Jphys and i? Claim 1 further recites: “principles (1, ..., NP)”, “first term (Jdata)”, “second term (Jphys)”, and “at least one artificial neural network (f, g)”. What do the characters Np, Jdata, f, and g represent?
As per Claim 4, it recites the limitation:
[Equation image: media_image1.png]
where the terms [media_image2.png] and [media_image3.png] are used in the formula, but no definitions of these terms are recited. Instead, u(t) and y(t) are erroneously defined in the text.
As per Claim 5, it recites the limitation “said generated surrogate mathematical model”, which would be better as “said generated mathematical model”, as claim 1 recites “generating a mathematical model”. Further, the limitations “said first artificial neural network” and “said second artificial neural network” would be better as “a first artificial neural network” and “a second artificial neural network”.
As per Claim 7, it recites the limitation “said surrogate model” which would be better as “said generated mathematical model” as claim 1 recites “generating a mathematical model”.
As per Claim 8, it recites the limitation “[Equation image: media_image4.png]”, which would be better as “[Equation image: media_image5.png]”, as claim 1 recites “generating a mathematical model”.
As per Claim 9, it recites the limitation “the sum”, which would be better as “a sum”.
As per Claim 10, it recites the limitations “said first artificial neural network” and “said second artificial neural network”, which would be better as “a first artificial neural network” and “a second artificial neural network”. Further, it recites the limitation “[Equation image: media_image6.png]”. Does “said functions” refer to the limitation “a plurality of functions (Jphys, i) relating to principles (1, ... , Np) governing said phenomenon under consideration” in the claim?
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
3. Claims 1-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
As per Claims 1 and 12, they recite limitations whose meaning is unclear:
“determining a first term … starting from said dataset”, “determining a second term … starting from said functions”, and “a loss function obtained starting from said first term … and said second term …”. In particular, it is vague what “starting from” refers to. Further, the claim defines a term based on “a plurality of functions relating to principles … governing … phenomenon”. However, a loss function is generally designed as a discrepancy measure that is minimized in order to return the parameters that best approximate the solution. Thus, simply referring to functions, i.e. “starting from”, renders the claim unclear because it fails to show how a discrepancy measure can be derived/established/obtained.
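For illustration of this point only (the weights and the minimization formulation below are not recited in the claims and are supplied here as an example of ordinary practice in the art), a loss function serving as a discrepancy measure would typically be specified as:

```latex
J(\eta,\gamma) \;=\; w_{\mathrm{data}}\, J_{\mathrm{data}}
\;+\; w_{\mathrm{phys}} \sum_{i=1}^{N_p} J_{\mathrm{phys},i},
\qquad
(\eta^{*},\gamma^{*}) \;=\; \operatorname*{arg\,min}_{\eta,\gamma}\; J(\eta,\gamma)
```

where η and γ denote the trainable parameters of the network(s) and w_data, w_phys are weights. The claims, by contrast, recite only that the loss is “obtained starting from” the two terms, without specifying how the terms combine into a discrepancy measure that is minimized.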
As per Claim 3, it recites the limitation “said existing mathematical model” which lacks an antecedent basis.
As per Claim 5, it recites the limitation:
[Equation image: media_image7.png]
What are the definitions of the sub-elements N, t, and T? The limitation “the state of the system” lacks an antecedent basis, thus rendering the definition of x(t) unclear.
As per Claim 11, it recites the limitations:
[Equation image: media_image8.png]
…
[Equation image: media_image9.png]
where no definitions of the sub-elements are recited. In particular, what are the sub-elements ỹj(t), xj(t), uj(t), ɳ*, ɣ*, Tj, Ns, and t?
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
(Step 1) Claims 1-11 recite steps or acts including generating a mathematical model; thus, the claims are directed to a process, which is one of the statutory categories of invention. Claim 12 is directed to a “non-transitory computer-readable medium”, which is a statutory category of invention.
(Step 2A – Prong One) For the sake of identifying the abstract ideas, a copy of the claim is provided below. Abstract ideas are bolded.
The claims 1 and 12 recite:
- receiving at input a dataset comprising a plurality of input-output pairs relating to a phenomenon under consideration (insignificant extra-solution activity – data gathering and/or field of use);
- receiving at input a plurality of functions (Jphys, i) relating to principles (1, ... , Np) governing said phenomenon under consideration (insignificant extra-solution activity – data gathering and/or field of use);
- determining a first term (Jdata) starting from said dataset comprising a plurality of input-output pairs (under its broadest reasonable interpretation, a mathematical concept and a mental process that covers performance in the human mind or with the aid of pencil and paper, including an observation, evaluation, judgment or opinion);
- determining a second term (Jphys) starting from said functions (Jphys, i) relating to principles (1, ... , Np) (under its broadest reasonable interpretation, a mathematical concept and a mental process that covers performance in the human mind or with the aid of pencil and paper, including an observation, evaluation, judgment or opinion); and
- generating a mathematical model by means of at least one artificial neural network (f, g) trained on the basis of a loss function obtained starting from said first term (Jdata) and said second term (Jphys) (under its broadest reasonable interpretation, a mathematical concept and a mental process that covers performance in the human mind or with the aid of pencil and paper, including an observation, evaluation, judgment or opinion).
Therefore, the limitations, under the broadest reasonable interpretation, have been identified to recite judicial exceptions, an abstract idea.
(Step 2A – Prong Two: integration into a practical application) This judicial exception is not integrated into a practical application. In particular, the claims recite the following additional elements: “computer-implemented” (Claims 1-11) and “non-transitory computer readable medium having instructions stored thereon, such that when the instructions are read and executed by one or more processors, said one or more processors is configured to perform” (Claim 12), which are recited at a high level of generality and so generally that they represent no more than mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)). These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer (see MPEP 2106.05(d)). Further, the additional elements of “computer”/“processor” (1) do not improve the functioning of a computer or other technology, (2) are not applied with any particular machine (except for generic computer components), (3) do not effect a transformation of a particular article to a different state, and (4) are not applied in any meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
The additional elements of “a phenomenon under consideration” and “artificial neural network” are insignificant extra-solution activity that generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).
Further, claims 1 and 12 recite limitations which are insignificant extra-solution activity because they are mere nominal or tangential additions to the claim and amount to mere data gathering (see MPEP 2106.05(g)): “receiving at input a dataset comprising a plurality of input-output pairs relating to a phenomenon under consideration (insignificant extra-solution activity – data gathering and/or field of use);
- receiving at input a plurality of functions (Jphys, i) relating to principles (1, ... , Np) governing said phenomenon under consideration (insignificant extra-solution activity – data gathering and/or field of use);”.
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application and the claim is directed to the judicial exception.
(Step 2B – inventive concept) The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “computer-implemented” (Claims 1-11) and “non-transitory computer readable medium having instructions stored thereon, such that when the instructions are read and executed by one or more processors, said one or more processors is configured to perform” (Claim 12) are recited at a high level of generality and so generally that they represent no more than mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)). These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer (see MPEP 2106.05(d)). The additional elements of “a phenomenon under consideration” and “artificial neural network” are insignificant extra-solution activity that generally links the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).
Further, claims 1 and 12 recite limitations which are insignificant extra-solution activity because they are mere nominal or tangential additions to the claim and amount to mere data gathering (see MPEP 2106.05(g)), which the courts have recognized as well-understood, routine, conventional activity, such as storing and retrieving information in memory (MPEP 2106.05(d) II iv., storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93) and receiving or transmitting data (MPEP 2106.05(d) II i., receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network); but see DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258, 113 USPQ2d 1097, 1106 (Fed. Cir. 2014) (“Unlike the claims in Ultramercial, the claims at issue here specify how interactions with the Internet are manipulated to yield a desired result--a result that overrides the routine and conventional sequence of events ordinarily triggered by the click of a hyperlink.” (emphasis added))): “- receiving at input a dataset comprising a plurality of input-output pairs relating to a phenomenon under consideration (insignificant extra-solution activity – data gathering and/or field of use);
- receiving at input a plurality of functions (Jphys, i) relating to principles (1, ... , Np) governing said phenomenon under consideration (insignificant extra-solution activity – data gathering and/or field of use);”.
Further dependent claims 2-11 recite:
2) (Currently amended) The computer-implemented method according to claim 1, wherein said step of generating a mathematical model comprises training at least one artificial neural network (f, g) on the basis of a loss function composed of the weighted average of said first term (Jdata) and of said second term (Jphys). (a mathematical concept and a mental process)
3) (Currently amended) The computer-implemented method according to claim 1, wherein said of plurality of input-output pairs of the dataset is produced by said existing mathematical model. (insignificant extra-solution activity – data gathering and/or field of use)
4) (Currently amended) The computer-implemented method according to claim 1, wherein said dataset comprising a plurality of input-output pairs is defined by the following formula:
[Equation image: media_image10.png]
where u(t) is a time-dependent input signal
where y(t) is a time-dependent output signal
where Ns is the number of input-output pairs,
where Tj represents the duration of the j-th pair. (a mathematical concept and a mental process)
5) (Currently amended) The computer-implemented method according to claim 1, said generated surrogate mathematical model is defined by the following formula:
[Equation image: media_image11.png]
where f represents said first artificial neural network;
g represents said second artificial neural network;
x(t) is a vector with Nx elements, representing the internal state of the system;
u(t) is a time-dependent input signal; and
ỹ(t) is a time-dependent output signal predicted by the surrogate mathematical model. (a mathematical concept and a mental process)
6) (Currently amended) The computer-implemented method according to claim 5, wherein said first artificial neural network (f) and said second artificial neural network (g) are trained on the basis of a loss function composed of the weighted average of said first term (Jdata) and of said second term (Jphys). (a mathematical concept and a mental process)
7) (Currently amended) The computer-implemented method according to claim 1, wherein said first term (Jdata) is Euclidean standard distance between the plurality of time-dependent outputs (ŷj(t)) belonging to said dataset and the corresponding outputs (ỹj(t)) predicted by said surrogate model. (a mathematical concept and a mental process)
8) (Currently amended) The computer-implemented method according to claim 7, wherein said first term (Jdata) is defined by the following formula:
[Equation image: media_image12.png]
Where Jdata is said first term;
ŷj(t) are said outputs belonging to the dataset;
ỹj(t) are said outputs predicted by the surrogate mathematical model;
Ns is the number of input-output pairs; and
Tj represents the duration of the j-th pair. (a mathematical concept and a mental process)
9) (Currently amended) The computer-implemented method according to claim 1, wherein said second term (Jphys) is composed of the sum of said functions (Jphys, i). (a mathematical concept and a mental process)
10) (Currently amended) The computer-implemented method according to claim 9, wherein said second term (Jphys) is defined by the following formula:
[Equation image: media_image13.png]
where Jphys is said second term;
Jphys (f, g) represents said functions;
f is said first artificial neural network; g is said second artificial neural network; and
1, ..., NP represent said principles governing the phenomenon under consideration. (a mathematical concept and a mental process)
11) (Currently amended) The computer-implemented method according to claim 1, wherein said step of generating a mathematical model comprises at least one training step of said artificial neural networks (f, g) carried out according to the following optimization problem:
[Equation image: media_image8.png]
where ɳ and ɣ are vectors that collect the values of the parameters of the two networks (f, g), subject to the constraint given by:
[Equation image: media_image14.png]
(a mathematical concept and a mental process)
Considering the claims both individually and as a combination, no element or combination of elements recited contains an “inventive concept” or adds “significantly more” to transform the abstract idea into a patent-eligible application.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
5. Claims 1-12 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Raissi et al. (“Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations”).
As per Claims 1 and 12, Raissi et al. discloses a computer-implemented method/non-transitory computer readable medium having instructions stored thereon for generation of a mathematical model with reduced computational complexity (Abstract: “physics-informed neural networks – neural networks that are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear partial differential equations.”; section 4.1.1: “a realistic scenario of incompressible fluid flow described by the ubiquitous Navier–Stokes equations. Navier–Stokes equations describe the physics of many phenomena of scientific and engineering interest.”), said computer-implemented method comprising:
- receiving at input a dataset comprising a plurality of input-output pairs relating to a phenomenon under consideration (section 4.1.1, [Equation image: media_image15.png]: inputs are coordinates and outputs are observed field values);
- receiving at input a plurality of functions (Jphys, i) relating to principles (1, ... , Np) governing said phenomenon under consideration (section 4.1.1 “Navier–Stokes equations describe the physics of many phenomena of scientific and engineering interest….we are interested in learning the parameters λ as well as the pressure p(t, x, y). We define f (t, x, y) and g(t, x, y) to be given by” Equation (18): define f (t, x, y) and g(t, x, y) from governing equations i.e. Navier–Stokes equations);
- determining a first term (Jdata) starting from said dataset comprising plurality of input-output pairs (section 4.1.1 “mean squared error loss” Equation (19): first term of the equation (19));
- determining a second term (Jphys) starting from said functions (Jphys, i) relating to principles (section 4.1.1 “mean squared error loss” Equation (19): second term of the equation (19)); and
- generating a mathematical model by means of at least one artificial neural network (f, g) trained on the basis of a loss function obtained starting from said first term (Jdata) and said second term (Jphys) (section 4.1.1: “equations (17) and (18) results into a physics-informed neural network [f(t, x, y), g(t, x, y)]. The parameters λ of the Navier–Stokes operator as well as the parameters of the neural networks ψ(t, x, y), p(t, x, y) and [f(t, x, y), g(t, x, y)] can be trained by minimizing the mean squared error loss MSE … (19)”).
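The composite mean-squared-error loss of equation (19) relied upon above can be sketched as follows. This is a minimal illustration only, not Raissi et al.'s code; the function name, weights, and data values are hypothetical.

```python
import numpy as np

def composite_loss(y_obs, y_pred, residuals, w_data=1.0, w_phys=1.0):
    """Composite physics-informed loss: a data-misfit term (first term,
    discrepancy between observations and predictions) plus a
    physics-residual term (second term, governing-equation residuals),
    in the spirit of the MSE loss of equation (19) of Raissi et al."""
    j_data = np.mean((y_obs - y_pred) ** 2)   # first term: data discrepancy
    j_phys = np.mean(residuals ** 2)          # second term: physics residuals
    return w_data * j_data + w_phys * j_phys

# Illustrative values only.
y_obs = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
residuals = np.array([0.05, -0.02, 0.01])
loss = composite_loss(y_obs, y_pred, residuals)
```

In actual physics-informed training, y_pred and residuals would be produced by the neural network(s) and the governing equations, and the loss would be minimized over the network parameters.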
As per Claim 2, Raissi et al. discloses wherein said step of generating a mathematical model comprises training at least one artificial neural network (f, g) on the basis of a loss function composed of the weighted average of said first term (Jdata) and of said second term (Jphys) (section 4.1.1, equation (19)).
As per Claim 3, Raissi et al. discloses wherein said plurality of input-output pairs of the dataset is produced by said existing mathematical model (section 4.1, Continuous time models: “physics-informed neural network f(t, x).”).
As per Claim 4, Raissi et al. discloses wherein said dataset comprising a plurality of input-output pairs is defined by the following formula:
[Equation image: media_image10.png]
where u(t) is a time-dependent input signal
where y(t) is a time-dependent output signal
where Ns is the number of input-output pairs,
where Tj represents the duration of the j-th pair (section 4.1.1, [Equation image: media_image15.png]: inputs are time-dependent coordinates and corresponding outputs are observed field values indexed by time, e.g. training on snapshots/time-indexed data and predicting at other times).
As per Claim 5, Raissi et al. discloses said generated surrogate mathematical model is defined by the following formula:
[Equation image: media_image11.png]
where f represents said first artificial neural network;
g represents said second artificial neural network;
x(t) is a vector with Nx elements, representing the internal state of the system;
u(t) is a time-dependent input signal; and
ỹ(t) is a time-dependent output signal predicted by the surrogate mathematical model (section 4.1.1: “We define f(t, x, y) and g(t, x, y) to be given by … [equation] (18) and proceed by jointly approximating [ψ(t, x, y), p(t, x, y)] using a single neural network with two outputs … equations (17) and (18) results into a physics-informed neural network [f(t, x, y), g(t, x, y)]. The parameters λ of the Navier–Stokes operator as well as the parameters of the neural networks ψ(t, x, y), p(t, x, y) and [f(t, x, y), g(t, x, y)] can be trained by minimizing the mean squared error loss MSE … (19)”).
As per Claim 6, Raissi et al. discloses wherein said first artificial neural network (f) and said second artificial neural network (g) are trained on the basis of a loss function composed of the weighted average of said first term (Jdata) and of said second term (Jphys) (section 4.1.1: “We define f(t, x, y) and g(t, x, y) to be given by … [equation] (18) and proceed by jointly approximating [ψ(t, x, y), p(t, x, y)] using a single neural network with two outputs … equations (17) and (18) results into a physics-informed neural network [f(t, x, y), g(t, x, y)]. The parameters λ of the Navier–Stokes operator as well as the parameters of the neural networks ψ(t, x, y), p(t, x, y) and [f(t, x, y), g(t, x, y)] can be trained by minimizing the mean squared error loss MSE … (19)”).
As per Claim 7, Raissi et al. discloses wherein said first term (Jdata) is the standard Euclidean distance between the plurality of time-dependent outputs (ŷj(t)) belonging to said dataset and the corresponding outputs (ỹj(t)) predicted by said surrogate model (section 4.1.1, “mean squared error loss MSE”, equation (19)).
As per Claim 8, Raissi et al. discloses wherein said first term (Jdata) is defined by the following formula:
[Equation image: media_image12.png]
Where Jdata is said first term;
ŷj(t) are said outputs belonging to the dataset;
ỹj(t) are said outputs predicted by the surrogate mathematical model;
Ns is the number of input-output pairs; and
Tj represents the duration of the j-th pair (section 4.1.1 “mean squared error loss MSE” equation (19): the first term of the equation (19)).
As per Claim 9, Raissi et al. discloses wherein said second term (Jphys) is composed of the sum of said functions (Jphys, i) (section 4.1.1, “mean squared error loss MSE”, equation (19): the second term of equation (19)).
As per Claim 10, Raissi et al. discloses wherein said second term (Jphys) is defined by the following formula:
[Equation image: media_image13.png]
where Jphys is said second term;
Jphys (f, g) represents said functions;
f is said first artificial neural network; g is said second artificial neural network; and
1, ..., NP represent said principles governing the phenomenon under consideration (section 4.1.1 “mean squared error loss MSE” equation (19): the second term of the equation (19), e. g. supplement f and g).
As per Claim 11, Raissi et al. discloses wherein said step of generating a mathematical model comprises at least one training step of said artificial neural networks (f, g) carried out according to the following optimization problem:
[Equation image: media_image8.png]
where ɳ and ɣ are vectors that collect the values of the parameters of the two networks (f, g), subject to the constraint given by:
[Equation image: media_image14.png]
(section 3.1: “The shared parameters of the neural networks h(t, x) and f(t, x) can be learned by minimizing the mean squared error loss … optimize all loss functions using L-BFGS, a quasi-Newton, full-batch gradient-based optimization algorithm. For larger data-sets, such as the data-driven model discovery examples discussed in section 4, a more computationally efficient mini-batch setting can be readily employed using stochastic gradient descent and its modern variants”; section 4.1.1).
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Guha et al. (US 20180082826 A1)
Ward et al. (US 20160327295 A1)
Bianchi et al. (“An overview and comparative analysis of Recurrent Neural Networks for Short Term Load Forecasting”)
7. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EUNHEE KIM whose telephone number is (571)272-2164. The examiner can normally be reached Monday-Friday 9am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro can be reached at (571)272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
EUNHEE KIM
Primary Examiner
Art Unit 2188
/EUNHEE KIM/ Primary Examiner, Art Unit 2188