DETAILED ACTION
This action is responsive to the claims filed on October 31, 2022. Claims 1-20 are under examination.
Claims 1, 8, and 15 are objected to.
Claims 1-20 are rejected under 35 USC 112(b) as being indefinite.
Claims 1-20 are rejected under 35 USC 101 as being directed to a judicial exception without significantly more.
Claims 1, 8, and 15 are rejected under 35 USC 102(a)(1) as being anticipated by Melchert.
Claims 2, 6, 9, 13, 16, and 20 are rejected under 35 USC 103 over Melchert in view of Brownlee.
Claims 7 and 14 are rejected under 35 USC 103 over Melchert in view of Brownlee and Odegua.
Claims 3-5, 10-12, and 17-19 are rejected under 35 USC 103 over Melchert in view of Udrescu.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Note On Contingent Limitations
Claims 9, 11, and 12 recite method steps that include the word “if,” indicating a contingent limitation. Unlike an apparatus claim, in which a recited configuration exists even if it is never used, a method does not necessarily carry out a step that depends on a contingency. Under the broadest reasonable interpretation of the claim, such a step can be ignored because the contingency may never occur. If the Applicant intends to have these limitations considered, the Applicant is advised to remove the contingency. See MPEP 2111.04(II).
Claim Objections
Claims 1, 8, and 15 are objected to because of the following informalities:
Claims 1, 8, and 15 recite “the unknown,” but there is insufficient antecedent basis for this element. For purposes of examination, this is interpreted to mean “an unknown.”
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA ), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA ), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. NOTE: Please provide support from your specification for any amendments that are intended to overcome this rejection.
Claims 1, 8, and 15 recite, “one or more basic functions.” Basic is a relative term without a universally accepted scope in the art. The Applicant is advised to provide a closed definition, even if broad, of the term “basic” in the context of the claim. The amendment must have support in the specification or be an accepted definition in the art that has a definite scope.
Claims 1, 8, and 15 recite, “the best model evaluation metric.” The term “best” is a relative term. The Applicant is advised to substitute language such as “a value of the evaluation metric that satisfies a predefined condition.” Alternatively, the Applicant is invited to provide in the claim the standard for determining a best metric.
Claims 1, 8, and 15 recite, “record any models that have the best model evaluation metric.” However, “best” is a singular concept. Even if one were to discern the standard for what “best” means, only one model would qualify as “the best,” yet the claim recites “any models” in the plural. Accordingly, a person of ordinary skill in the art would not be able to discern the metes and bounds of the claim.
Claims 1-2, 8-9, and 15-16 recite, “such that the plurality of intelligence systems gain intelligence.” It is unclear what “gain intelligence” means. It is not a recognized term in the art. The Applicant is advised to amend to provide an objective standard with a definition recognizable by a person of ordinary skill in the art.
Claims 2, 6, 9, 13, 16, and 20 recite, “common variables.” This term does not have a recognized meaning in the art. Also, common is a relative term. Accordingly, a person of ordinary skill in the art would not be able to discern the metes and bounds of the claim.
Claims 6, 13, and 20 recite,
(b) benchmark variables from the second data set; and
(c) adjust for selection bias by calibrating sampling weights with benchmark variables used as calibration variables.
The antecedent basis is unclear with regard to whether the benchmark variables of step (b) are the same as the benchmark variables of step (c). Accordingly, a person of ordinary skill in the art would not be able to discern the metes and bounds of the claim.
Claims 6, 13, and 20 recite, “sampling weights.” This is not a term of art, so it is unclear what is meant by “adjust for selection bias by calibrating sampling weights with benchmark variables used as calibration variables.” Because the term is not recognized in the art, a person of ordinary skill in the art would not know the metes and bounds of the claim or understand what the claim excludes the public from commercially exploiting. The Applicant is required to provide a definition in the claim that clearly delineates the metes and bounds of the claim term.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Subject Matter Eligibility
Claims 1-20 are rejected under 35 U.S.C. 101 for being directed to a judicial exception without significantly more.
NOTE: As a general matter, assembling equations from other equations is something humans have done for as long as mathematics has existed. The high-level recitation that a computer determines elements of equations to incorporate from other equations and then evaluates them to find the best one, until an equation that functions for the experimenter’s application is determined, is simply how humans derive advanced equations. Nothing recited in the claims differentiates them from this concept other than that it is automated. This is a long-standing practice. See CardioNet v. InfoBionic (Fed. Cir. 2020) (precedential): "Contrary to the dissent’s suggestions, we do not hold today that it is impermissible for courts to “look[] outside the intrinsic evidence” as part of their Alice step one inquiry, Dissent Op. 9, or that all evidence presented by the parties that doctors have long used the claimed techniques would be irrelevant to the inquiry in this case. It is within the trial court’s discretion whether to take judicial notice of a longstanding practice where there is no evidence of such practice in the intrinsic record. But there is no basis for requiring, as a matter of law, consideration of the prior art in the step one analysis in every case. If the extrinsic evidence is overwhelming to the point of being indisputable, then a court could take notice of that and find the claims directed to the abstract idea of automating a fundamental practice, see Bilski, 561 U.S. at 611—but the court is not required to engage in such an inquiry in every case."
Further, this application specifically addresses symbolic regression, a concept that is well-established and has become a conventional tool for mathematicians. Review articles made of record that address the subject matter of the claims include Orzechowski et al. (Abstract) and La Cava et al. (Abstract). Each contains references to numerous publications that describe the state of the art as of the date the Applicant’s claims were filed and that demonstrate the conventionality of the claimed subject matter. Generic automation of a known process, without a specific contribution to technology, does not confer eligibility. Moreover, the claims make use of mathematics and mental processes in order to generate mathematics. When the abstract idea is removed from the claims, the additional limitations that remain are tangential and do not contribute an inventive concept. Further still, these claims are similar to those in Electric Power Group, which involved analyzing information to output information in a manner that is not integrated into a practical application and does not constitute an inventive concept. Accordingly, at the judicially created step one, the claims are, on their face, directed to automating a longstanding, fundamental practice.
Step 1
Claims 1-7 and 15-20 are directed to a machine. Claims 8-14 are directed to a process. Each claim therefore falls within a statutory category.
Independent Claims
Step 2A, Prong 1
Independent claim 1 recites mental processes and mathematical concepts.
Claim 1 (Machine)
Claim 1, recites,
(b) apply [mental and mathematic] techniques to design an equation for use in development of a statistical model using the first set of data, wherein the equation is designed by selecting parameters including 1) one or more variables, 2) one or more model parameters that indicate the unknown, 3) one or more basic functions from a list of functions, and 4) one or more operators that assemble the one or more basic functions; (Mental Evaluation, Mental Process – Applying techniques to design an equation for use in development of a statistical model by selecting mathematical elements is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Also, applying techniques to design an equation for use in development of a statistical model by selecting mathematical elements is a mathematical concept, an abstract idea.)
(c) calculate […] the model performance evaluation metric for the developed model and return to procedure (b) to alter the equation; (Mental Evaluation, Mental Process – Calculating the model performance evaluation metric for the developed model and repeating the application of techniques to design an equation for use in development of a statistical model by selecting mathematical elements is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Also, Calculating the model performance evaluation metric for the developed model and repeating the application of techniques to design an equation for use in development of a statistical model by selecting mathematical elements is a mathematical concept, an abstract idea.)
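NOTE: For illustration only, and not as a characterization of the Applicant's disclosure, the following minimal sketch (Python, with hypothetical names and synthetic data) shows the kind of loop described by limitations (b) and (c) quoted above: candidate equations are assembled from a variable, basic functions, and operators; a free model parameter is fit; a model performance evaluation metric is calculated; and the best-scoring model is retained. Each operation is an ordinary mathematical evaluation.

# Hypothetical sketch of assembling and scoring candidate equations.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 3.0, 200)                      # one variable (first set of data)
y = 2.0 * np.sin(x) + rng.normal(0.0, 0.05, 200)    # observed responses

basic_functions = {"sin": np.sin, "log": np.log, "sqrt": np.sqrt}   # list of functions
operators = {"add": np.add, "mul": np.multiply}                     # assembling operators

def mse(pred, target):                              # model performance evaluation metric
    return float(np.mean((pred - target) ** 2))

best = None
for f_name, f in basic_functions.items():
    for g_name, g in basic_functions.items():
        for op_name, op in operators.items():
            design = op(f(x), g(x))                 # assembled equation with one unknown theta
            theta = float(np.dot(design, y) / np.dot(design, design))   # least-squares fit
            score = mse(theta * design, y)          # calculate the metric
            if best is None or score < best[0]:     # record the model with the best metric
                best = (score, f"{theta:.3f} * {op_name}({f_name}(x), {g_name}(x))")

print("best model:", best[1], "metric:", best[0])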
Claim 1 recites mental processes and, hence, under MPEP 2106.04(a)(2)(III), an abstract idea.
Claim 1 recites an abstract idea.
Step 2A, Prong 2
The claims fail to recite additional limitations that integrate the abstract idea into a practical application.
The additional limitations:
Generic Computing
1. A system comprising: a processor; a memory coupled to the processor;
instructions stored in the memory and executable by the processor that, when executed by the processor cause the system to:
[…] Artificial intelligence […]
[…] Model(s) […]
These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2.
Apply It/Insignificant Computer Application
(b) apply artificial intelligence techniques to […]
This is a high-level recitation of a step to generally apply knowledge/data and is an insignificant computer application. This is similar to the MPEP 2106.05(f) examples: “i. A commonplace business method or mathematical algorithm being applied on a general purpose computer” “iv. A method of using advertising as an exchange or currency being applied or implemented on the Internet” “v. Requiring the use of software to tailor information and provide it to the user on a generic computer” “vi. A method of assigning hair designs to balance head shape with a final step of using a tool (scissors) to cut the hair.” This is an apply it step under the definition of MPEP 2106.05(f), so it fails to integrate the abstract idea into a practical application at Step 2A, Prong 2.
Sending, Receiving, and Recording Data
(a) receive a first set of data of a first type and an indication of a model performance evaluation metric;
[…]
and report the model performance evaluation metric for the developed model
[…]
(d) record any models that have the best model evaluation metric; and
(e) provide such models to a plurality of artificial intelligence systems such that the plurality of artificial intelligence systems gain intelligence in model designs and model interpretability.
These are insignificant extra-solution activity similar to the MPEP 2106.05(g) examples: “a step of obtaining information about credit card transactions, which is recited as part of a claimed process of analyzing and manipulating the gathered information by a series of steps in order to detect whether the transactions were fraudulent.” “a printer that is used to output a report of fraudulent transactions, which is recited in a claim to a computer programmed to analyze and manipulate information about credit card transactions in order to detect whether the transactions were fraudulent.” “i. Performing clinical tests on individuals to obtain input for an equation” “iii. Selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display” “v. Consulting and updating an activity log” “i. Cutting hair after first determining the hair style” “ii. Printing or downloading generated menus” “Some cases have identified insignificant computer implementation as an example of insignificant extra-solution activity” “Other cases have considered these types of limitations as mere instructions to apply a judicial exception.” These steps are insignificant extra-solution activity, and, under MPEP 2106.05(g), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2.
Should it be found that representative metrics of any quantities that the parameters represent in the claims are not elements of the abstract idea or insignificant extra-solution activity, these elements merely limit the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fail to integrate the abstract idea into a practical application at Step 2A, Prong 2.
Claim 1 fails to provide additional limitations that integrate the abstract ideas into a practical application.
Claim 1 is directed to the abstract idea.
Step 2B
The claims fail to recite additional limitations that combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept.
The additional limitations:
Generic Computing
1. A system comprising: a processor; a memory coupled to the processor;
instructions stored in the memory and executable by the processor that, when executed by the processor cause the system to:
[…] Artificial intelligence […]
[…] Model(s) […]
These are generic computing elements recited at a high level and, under MPEP 2106.05(f), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
Apply It/Insignificant Computer Application
(b) apply artificial intelligence techniques to […]
This is a high-level recitation of a step to generally apply knowledge/data and is an insignificant computer application. This is similar to the MPEP 2106.05(f) examples: “i. A commonplace business method or mathematical algorithm being applied on a general purpose computer” “iv. A method of using advertising as an exchange or currency being applied or implemented on the Internet” “v. Requiring the use of software to tailor information and provide it to the user on a generic computer” “vi. A method of assigning hair designs to balance head shape with a final step of using a tool (scissors) to cut the hair.” This is an apply it step under the definition of MPEP 2106.05(f), so it fails to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
Sending, Receiving, and Recording Data
(a) receive a first set of data of a first type and an indication of a model performance evaluation metric;
[…]
and report the model performance evaluation metric for the developed model
[…]
(d) record any models that have the best model evaluation metric; and
(e) provide such models to a plurality of artificial intelligence systems such that the plurality of artificial intelligence systems gain intelligence in model designs and model interpretability.
The steps are well-understood, routine, and conventional (WURC) activity similar to the MPEP 2106.05(d) examples: “i. Receiving or transmitting data over a network” “iii. Electronic recordkeeping” “iv. Storing and retrieving information in memory” “v. Electronically scanning or extracting data from a physical document” “i. Determining the level of a biomarker in blood by any means” “iv. Presenting offers and gathering statistics,” “vi. Arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price.” The steps are WURC and, as previously demonstrated, insignificant extra-solution activity, and, under MPEP 2106.05(d) and 2106.05(g), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
Should it be found that representative metrics of any real-world quantities that the parameters represent in the claims are not elements of the abstract idea or insignificant extra-solution activity, these elements merely limit the abstract idea to a particular field of technology and, under MPEP 2106.05(h), fail to combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
Claim 1 fails to provide additional limitations that combine with the other elements of the claim to provide significantly more than the abstract idea that would confer an inventive concept at Step 2B.
Claim 1 is ineligible.
Claim 8 (Process)
Claim 8 recites the process steps the system of claim 1 is configured to execute, so claim 8 is rejected for at least the same reasons as claim 1.
Claim 8 is ineligible.
Claim 15 (Machine)
Claim 15 recites a CRM storing instructions that carry out the steps the system of claim 1 is configured to execute, which makes it an implementation of the memory recited in claim 1, so claim 15 is rejected for at least the same reasons as claim 1.
Claim 15 is ineligible.
Dependent Claims
The dependent claims are also ineligible for the following reasons. Generic computing elements (MPEP 2106.05(f)) and the real-world representative parameters in data (MPEP 2106.05(h)) already treated in the independent claims will not be treated again here.
Claims 2, 9, and 16
Claim 2 recites
(A) receive a second set of data of a second type; (Mere data gathering, WURC: This fails to confer eligibility for at least the same reason as the receive step in claim 1.)
(B) apply […] techniques to match the first type of data in the first set to the second type of data in the second set; (Evaluation, Mental Process – Application of techniques to match types of data in different sets is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea.)
(C) apply [a mind] to create a set of candidate calibration variables and construct each member of the set of calibration variables by selecting 1) one or more variables from a set of common variables in first and second sets of data, 2) one or more basic functions to apply to the one or more variables, and 3) one or more operations that assemble the one or more basic functions; (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Creating a set of candidate calibration variables by selecting variables, basic functions, and operations that assemble those functions is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Constructing calibration variables from variables, basic functions, and assembling operations is a mathematical operation, a mathematical concept, an abstract idea.)
(D) modify the equation using the set of candidate calibration variables; (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Modifying an equation using a set of variables is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Modifying an equation using a set of variables is a mathematical operation, a mathematical concept, an abstract idea.)
(E) obtain a first calibrated estimator and a first uncertainty bound; (F) obtain a second calibrated estimator and a second uncertainty bound; (Mere data gathering, WURC: This fails to confer eligibility for at least the same reason as the receive step in claim 1.)
(G) compare the second uncertainty bound to the first uncertainty bound, if the second uncertainty bound is smaller than the first uncertainty bound then repeat steps C through G; (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Comparing uncertainty bounds is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Comparing uncertainty bounds is a mathematical operation, a mathematical concept, an abstract idea. The repetition of steps already rejected is rejected for the same reason as those steps.)
(H) identify any set of calibration variables that give the smallest uncertainty bound; (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Identifying sets of variables that give a small uncertainty bound is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Identifying sets of variables that give a small uncertainty bound is a mathematical operation, a mathematical concept, an abstract idea.)
(I) record models derived from the modified equation with the set of calibration variables identified in step H; and (Insignificant Extra-Solution Activity, WURC: This fails to confer eligibility for at least the same reason as the record step in claim 1.)
(J) provide the models to a plurality of artificial intelligence systems such that the plurality of artificial intelligence systems gain intelligence in increasing model precision by incorporating information from the second data set. (This fails to confer eligibility for at least the same reasons as the provide step in claim 1.)
Claim 2 fails to provide any additional limitations that confer eligibility. Claims 9 and 16 recite substantially the same limitations as claim 2 and are rejected for at least the same reasons as claim 2.
Claims 2, 9, and 16 are ineligible.
Claims 3, 10, and 17
(a) determine whether parameters of the equation have explicit solutions; (b) determine whether the equation has a unique solution; and (c) determine whether the equation has a separable solution. (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Determining whether parameters of an equation have explicit solutions, a unique solution, or a separable solution is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Determining whether parameters of an equation have explicit solutions, a unique solution, or a separable solution is a mathematical operation, a mathematical concept, an abstract idea.)
Claim 3 fails to provide any additional limitations that confer eligibility. Claims 10 and 17 recite substantially the same limitations as claim 3 and are rejected for at least the same reasons as claim 3.
Claims 3, 10, and 17 are ineligible.
Claims 4, 11, and 18
(d) if the parameters of the equation have explicit solutions, obtain a mathematical formula for estimators; and (This is mere data gathering and WURC for at least the same reasons as the receive step of claim 1)
(e) compute and […] a numeric estimator. (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Computing a numeric estimator is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Computing a numeric estimator is a mathematical operation, a mathematical concept, an abstract idea.)
[…] report a numeric estimator (This is insignificant extra-solution activity and WURC for the at least the same reasons as the record step of claim 1)
Claim 4 fails to recite any additional limitations that would confer eligibility. Claims 11 and 18 recite substantially the same limitations as claim 4 and are rejected for at least the same reasons as claim 4.
Claims 4, 11, and 18 are ineligible.
Claims 5, 12, and 19
(f) if the parameters of the equation do not have explicit solutions, report inexplicitly defined estimators; and (This fails to confer eligibility for at least the same reasons as the record step of claim 1)
(g) compute and report a numeric estimator. (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Computing a numeric estimator is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Computing a numeric estimator is a mathematical operation, a mathematical concept, an abstract idea.)
Claim 5 fails to recite any additional limitations that would confer eligibility. Claims 12 and 19 recite substantially the same limitations as claim 5 and are rejected for at least the same reasons as claim 5.
Claims 5, 12, and 19 are ineligible.
Claims 6, 13, and 20
(a) determine whether a discrepancy exists in distributions of common variables between the first data set and the second data set; (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Determining discrepancies between distributions of variables in different datasets is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea Determining discrepancies between distributions of variables in different datasets is a mathematical operation, a mathematical concept, an abstract idea.)
(b) benchmark variables from the second data set; and (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Benchmarking variables against a standard is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Benchmarking variables against a numerical standard is a mathematical operation, a mathematical concept, an abstract idea.)
(c) adjust for selection bias by calibrating sampling weights with benchmark variables used as calibration variables. (Evaluation, Mental Process; Mathematical Operation, Mathematical Concept – Adjusting a mathematical model by calibrating weights is an evaluation practically performable in the mind or with the aid of pen, paper, and/or a calculator, so it is a mental process, an abstract idea. Adjusting a mathematical model by calibrating weights is a mathematical operation, a mathematical concept, an abstract idea.)
Claim 6 fails to recite any additional limitations that would confer eligibility. Claims 13 and 20 recite substantially the same limitations as claim 6 and are rejected for at least the same reasons as claim 6.
Claims 6, 13, and 20 are ineligible.
Claims 7 and 14
(a) upload design equations to a library of models; and (b) provide access to the library to a plurality of computer systems. (These steps are sending, receiving, and recording data and fail to confer eligibility for at least the same reasons as the receive, report, record, and provide steps of claim 1).
Claim 7 fails to recite any additional limitations that would confer eligibility. Claim 14 recites substantially the same limitations as claim 7 and is rejected for at least the same reasons as claim 7.
Claims 7 and 14 are ineligible.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 8, and 15: Melchert
Claim(s) 1, 8, and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by NPL: “Adaptive basis functions for prototype-based classification of functional data” by Melchert et al. (Melchert).
Regarding claim 1, Melchert teaches:
1. A system comprising: a processor; a memory coupled to the processor;
instructions stored in the memory and executable by the processor that, when executed by the processor cause the system to: (Melchert Abstract “Computer experiments show that the methods offer potential to improve classification performance significantly.” – A computer is used with memory and processor configured to execute instructions in memory.)
(a) receive a first set of data of a first type and an indication of a model performance evaluation metric; (Melchert Abstract “For comparison of the classification, a GMLVQ system is also applied to the raw input data, as well as on data expanded by a different predefined functional basis.” 1 Introduction “In a variety of modern systems, high-dimensional sensor data are produced that is usually difficult to handle with traditional methods. […] Such data are frequently recorded to serve as input for a classification task. Various machine learning algorithms can be applied, having specific advantages and disadvantages.” – Data sets are used as input. 1 Introduction “The selection of a suitable distance measure is a key issue in the design of a prototype-based classifier [6]. While using predefined distances such as the Euclidean or other Minkowski-like measures often yields reasonable results, the application of more flexible relevance schemes has shown to be beneficial” – Distances are used as performance evaluation metrics.)
(b) apply artificial intelligence techniques to design an equation for use in development of a statistical model using the first set of data, wherein the equation is designed by selecting parameters including (Melchert 1 Introduction “The popularity of prototype- and distance-based classification systems results from their intuitive interpretation and straightforward implementation [2, 7]. In this paper, an extension of the popular learning vector quantization (LVQ) [21] is used. LVQ systems comprise different prototypes which represent characteristic properties of their corresponding classes. Together with an appropriate distance measure, they constitute an efficient classification method. – AI techniques are used. Page 18216, Left Column, Fifth-Sixth Paragraphs “In addition to these two spectral datasets, two time series datasets are selected from the UCR Time Series Repository [11], namely the Plane and Symbols data sets. Apart from the key properties of the datasets given in Table 1, no further information on the interpretation of the data or associated classification tasks is available at the repository. For each of the datasets, we consider four alternative scenarios for the classification.” – Multiple datasets are used. Page 18216, Right Column, Scenario B “For the Gaussian functions, the parameters are chosen as ak = 1.0, bk = -1+2k/n and ck = 0.25, what yields an even distribution of the functions with respect to the functional input data space.” – This includes statistical equations to yield a statistical element for a model.)
1) one or more variables, 2) one or more model parameters that indicate the unknown, 3) one or more basic functions from a list of functions, and 4) one or more operators that assemble the one or more basic functions; (Melchert See Eqs. (13) and (14) “In both setups, the number of basis functions is varied as […]. For the Gaussian functions, the parameters are chosen as […], what yields an even distribution of the functions with respect to the functional input data space.” – All of these elements are present in the equations used as basis functions in the scenarios. Scenario C “To investigate the influence of different initializations of W it is initialized as in Scenario B either by using Chebyshev polynomials of first kind or by distributing n Gaussian functions gk(x) evenly over the input space. The control parameters for the update rule are kept constant α = β = 0.25, since previous studies [4] have shown minor influence on the results over a wide range of values.” – The equation is assembled from Chebyshev and Gaussian basis functions.)
[Images from Melchert (Eqs. (13) and (14)) reproduced in the original Office action.]
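NOTE: For context only, the following minimal sketch (Python, synthetic data) illustrates a Gaussian basis-function expansion of the kind Melchert describes, using the quoted parameter choices a_k = 1.0, b_k = -1 + 2k/n, and c_k = 0.25 distributed evenly over the functional input space. The exact functional form is given by Melchert's Eqs. (13) and (14); a common Gaussian form is assumed here, and this is not the reference's code.

# Assumed Gaussian form g_k(x) = a * exp(-(x - b_k)^2 / (2 c^2)); see Melchert Eqs. (13)-(14).
import numpy as np

def gaussian_basis_expansion(samples, n):
    """Express each functional sample (sampled on [-1, 1]) as n Gaussian basis coefficients."""
    grid = np.linspace(-1.0, 1.0, samples.shape[1])           # functional input data space
    a, c = 1.0, 0.25
    b = np.array([-1.0 + 2.0 * k / n for k in range(n)])      # even distribution of centers
    basis = a * np.exp(-((grid[None, :] - b[:, None]) ** 2) / (2.0 * c ** 2))   # shape (n, D)
    coeffs, *_ = np.linalg.lstsq(basis.T, samples.T, rcond=None)                # least squares
    return coeffs.T                                           # shape (num_samples, n)

# Example: five raw functional samples of dimension 100 reduced to n = 10 coefficients each.
raw = np.random.default_rng(1).normal(size=(5, 100))
print(gaussian_basis_expansion(raw, 10).shape)                # -> (5, 10)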
(c) calculate and report the model performance evaluation metric for the developed model and return to procedure (b) to alter the equation; (Melchert Page 18216, Right Column, Scenario D “The control parameters for the cost function are chosen as γ = η = 0.25. Note that the parameters γ and η serve as a weighting within the cost function.” – The cost function is used with the distance data to determine the performance and the error to backpropagate to modify the resulting equation. Page 18217, Left Column, First Paragraph “In each experiment 2048 optimization steps were performed, to ensure final convergence of the model parameters” – Best models were determined based on convergence.)
(d) record any models that have the best model evaluation metric; and (Melchert Page 18217, Left Column, 4 Discussion “The observed classification accuracy as given in Table 2 for scenarios B, C and D shows that a functional representation of the data has the potential to increase the classification performance significantly with respect to the naive approach which ignores the functional nature of the data (Scenario A). For all examined datasets, the classification accuracy achieved using a functional expansion of the data exceeds the one obtained by training in the original input feature space. For the datasets Tecator, Sugar and Symbols, the best observed accuracy was achieved using the adaptive functional expansion (Scenario C and D). The best accuracy for the Plane dataset was observed in Scenario B, where a fixed functional basis was employed. Nevertheless, the use of an adaptive functional basis achieved classification performances exceeding the one achieved in Scenario A. […]” – The best models are recorded. Also, alternatively, performance against Scenario A can be the benchmark.)
(e) provide such models to a plurality of artificial intelligence systems such that the plurality of artificial intelligence systems gain intelligence in model designs and model interpretability. (Melchert Page 18220, Right Column, Fourth Paragraph “While the use of Chebyshev polynomials of first kind can be beneficial for the kNN, Tree and ANN classifier, the achieved accuracy is significantly harmed for the SVM classification system. Even worse, all of the evaluated classification algorithms showed poor performance using simple Gaussian basis functions. This observation confirms the hypothesis that Gaussian basis functions are not suitable for a classification focused approximation of the sugar dataset.” – This provides an AI system with knowledge of which basis functions work for which data sets. Also, See Table 2 on Page 18217)
[Image from Melchert (Table 2, page 18217) reproduced in the original Office action.]
Claims 8 and 15 recite substantially the same features as claim 1 and are rejected for at least the same reasons as claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2, 6, 9, 13, 16, and 20 : Melchert and Brownlee
Claims 2, 6, 9, 13, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over NPL: “Adaptive basis functions for prototype-based classification of functional data” by Melchert et al. (Melchert) in view of NPL: “How to Calibrate Probabilities for Imbalanced Classification” by Brownlee (Brownlee).
Claims 2, 9, and 16
Regarding claim 2, Melchert teaches the features of claim 1 and further teaches:
instructions stored in the memory and executable by the processor that, when executed by the processor cause the system to: (Melchert Abstract “Computer experiments show that the methods offer potential to improve classification performance significantly.” – A computer is used with memory and processor configured to execute instructions in memory.)
(A) receive a second set of data of a second type; (Melchert Page 18216, Left Column, Fifth-Sixth Paragraphs “In addition to these two spectral datasets, two time series datasets are selected from the UCR Time Series Repository [11], namely the Plane and Symbols data sets. Apart from the key properties of the datasets given in Table 1, no further information on the interpretation of the data or associated classification tasks is available at the repository. For each of the datasets, we consider four alternative scenarios for the classification.” – Multiple datasets of different types are used.)
(D) modify the equation using the set of candidate variables; (E) obtain a first estimator and a first uncertainty bound; (F) obtain a second estimator and a second uncertainty bound; (G) compare the second uncertainty bound to the first uncertainty bound, if the second uncertainty bound is smaller than the first uncertainty bound then repeat steps C through G; (H) identify any set of variables that give the smallest uncertainty bound; (Melchert Page 18216, Right Column, Scenario D “The control parameters for the cost function are chosen as γ = η = 0.25. Note that the parameters γ and η serve as a weighting within the cost function.” – The cost function is used with the distance data to determine the performance and the error to backpropagate to modify the resulting equation. Page 18217, Left Column, First Paragraph “In each experiment 2048 optimization steps were performed, to ensure final convergence of the model parameters” – Best models were determined based on convergence. This is the same as before except that the variables are not explicitly calibrated in Melchert.)
(I) record models derived from the modified equation with the set of variables identified in step H; and (Melchert Page 18217, Left Column, 4 Discussion “The observed classification accuracy as given in Table 2 for scenarios B, C and D shows that a functional representation of the data has the potential to increase the classification performance significantly with respect to the naive approach which ignores the functional nature of the data (Scenario A). For all examined datasets, the classification accuracy achieved using a functional expansion of the data exceeds the one obtained by training in the original input feature space. For the datasets Tecator, Sugar and Symbols, the best observed accuracy was achieved using the adaptive functional expansion (Scenario C and D). The best accuracy for the Plane dataset was observed in Scenario B, where a fixed functional basis was employed. Nevertheless, the use of an adaptive functional basis achieved classification performances exceeding the one achieved in Scenario A. […]” – The best models are recorded. Also, alternatively, performance against Scenario A can be the benchmark.)
(J) provide the models to a plurality of artificial intelligence systems such that the plurality of artificial intelligence systems gain intelligence in increasing model precision by incorporating information from the second data set. (Melchert Page 18220, Right Column, Fourth Paragraph “While the use of Chebyshev polynomials of first kind can be beneficial for the kNN, Tree and ANN classifier, the achieved accuracy is significantly harmed for the SVM classification system. Even worse, all of the evaluated classification algorithms showed poor performance using simple Gaussian basis functions. This observation confirms the hypothesis that Gaussian basis functions are not suitable for a classification focused approximation of the sugar dataset.” – This provides an AI system with knowledge of which basis functions work for which data sets. Also, See Table 2 on Page 18217)
Melchert does not appear to explicitly teach, but Melchert in view of Brownlee teaches:
(B) apply artificial intelligence techniques to match the first type of data in the first set to the second type of data in the second set; (C) apply artificial intelligence to create a set of candidate calibration variables and construct each member of the set of calibration variables by selecting 1) one or more variables from a set of common variables in first and second sets of data, 2) one or more basic functions to apply to the one or more variables, and 3) one or more operations that assemble the one or more basic functions; (D) set of candidate calibration variables; (E) calibrated ; (F) calibrated ; (G) ; (H) calibration ;(Brownlee Page 1, Third Paragraph “Unfortunately, the probabilities or probability-like scores predicted by many models are not calibrated. This means that they may be over-confident in some cases and under-confident in other cases. Worse still, the severely skewed class distribution present in imbalanced classification tasks may result in even more bias in the predicted probabilities as they over-favor predicting the majority class.” – Brownlee calibrates datasets with like variables to compensate for uneven balance in data. Page 5, Second Paragraph “There are two main techniques for scaling predicted probabilities; they are Platt scaling and isotonic regression. Platt Scaling. Logistic regression model to transform probabilities. Isotonic Regression. Weighted least-squares regression model to transform probabilities. Platt scaling is a simpler method and was developed to scale the output from a support vector machine to probability values. It involves learning a logistic regression model to perform the transform of scores to calibrated probabilities. Isotonic regression is a more complex weighted least squares regression model. It requires more training data, although it is also more powerful and more general. Here, isotonic simply refers to monotonically increasing mapping of the original probabilities to the rescaled values. – Platt scaling is an iterative procedure to determine calibration variables A and B that are used to statistically calibrate the data. This is done using very simple, basic functions. This is used to process the data prior to introduction to a machine learning model. As such, the calibrated weights are applied and make calibrated outputs from the machine learning model. Anything that results from these including the modified equation, reporting, and otherwise in the claim is calibrated by this operation for statistical consistency. When combined with Melchert, the input data is iteratively calibrated until the data is calibrated and ready for introduction into the machine learning model of Melchert. All parameters determined thereafter are so calibrated. By this process.)
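NOTE: For context only, and not as the Applicant's or the references' code, the following minimal sketch (Python, using scikit-learn, which is assumed to be available) illustrates probability calibration of the kind Brownlee describes: Platt scaling is selected with method="sigmoid", and isotonic regression would be selected with method="isotonic".

# Calibrating predicted probabilities for an imbalanced two-class problem.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss

# Severely imbalanced synthetic data, as in Brownlee's setting.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Platt scaling: a logistic (sigmoid) model learned on the classifier's scores.
calibrated = CalibratedClassifierCV(SVC(), method="sigmoid", cv=5)
calibrated.fit(X_train, y_train)

probs = calibrated.predict_proba(X_test)[:, 1]
print("Brier score of calibrated probabilities:", brier_score_loss(y_test, probs))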
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the data input of Melchert by the data preprocessing of Brownlee because the person of ordinary skill in the art would be motivated by the aim of Melchert to provide an adaptive and flexible machine learning scheme that can process raw and other input data to look to Brownlee for methods to correct imbalanced input data, which is required to ensure correct results from an imbalanced classification. (Melchert Abstract “Instead of using a predefined set of basis functions for the expansion, a more flexible scheme of an adaptive functional basis is employed. GMLVQ is applied on the resulting functional parameters to solve the classification task. For comparison of the classification, a GMLVQ system is also applied to the raw input data, as well as on data expanded by a different predefined functional basis. Computer experiments show that the methods offer potential to improve classification performance significantly. Furthermore, the analysis of the adapted set of basis functions give further insights into the data structure and yields an option for a drastic reduction of dimensionality.; Brownlee Page 1, Fourth Paragraph – The First Bullet “As such, it is often a good idea to calibrate the predicted probabilities for nonlinear machine learning models prior to evaluating their performance. Further, it is good practice to calibrate probabilities in general when working with imbalanced datasets, even of models like logistic regression that predict well-calibrated probabilities when the class labels are balanced. In this tutorial, you will discover how to calibrate predicted probabilities for imbalanced classification. After completing this tutorial, you will know: Calibrated probabilities are required to get the most out of models for imbalanced classification problems.)
Claims 9 and 16 recite substantially the same features as claim 2 and are rejected for at least the same reasons as claim 2.
Claims 6, 13, and 20
Regarding claim 6, Melchert in view of Brownlee teaches the features of claim 2 and further teaches:
further comprising: instructions stored in the memory and executable by the processor that, when executed by the processor cause the system to: (Melchert Abstract “Computer experiments show that the methods offer potential to improve classification performance significantly.” – A computer is used with memory and processor configured to execute instructions in memory.)
(a) determine whether a discrepancy exists in distributions of common variables between the first data set and the second data set; (b) benchmark variables from the second data set; and (c) adjust for selection bias by calibrating sampling weights with benchmark variables used as calibration variables. (Brownlee Page 1, Third Paragraph “Unfortunately, the probabilities or probability-like scores predicted by many models are not calibrated. This means that they may be over-confident in some cases and under-confident in other cases. Worse still, the severely skewed class distribution present in imbalanced classification tasks may result in even more bias in the predicted probabilities as they over-favor predicting the majority class.” – Brownlee calibrates datasets with like variables to compensate for uneven balance in data. Page 5, Second Paragraph “There are two main techniques for scaling predicted probabilities; they are Platt scaling and isotonic regression. Platt Scaling. Logistic regression model to transform probabilities. Isotonic Regression. Weighted least-squares regression model to transform probabilities. Platt scaling is a simpler method and was developed to scale the output from a support vector machine to probability values. It involves learning a logistic regression model to perform the transform of scores to calibrated probabilities. Isotonic regression is a more complex weighted least squares regression model. It requires more training data, although it is also more powerful and more general. Here, isotonic simply refers to monotonically increasing mapping of the original probabilities to the rescaled values. – Platt scaling is an iterative procedure to determine calibration variables A and B that are used to statistically calibrate the data. This iteratively determines a discrepancy and adjusts the scaling vectors of the simple functions involved. These variables A and B function as calibration weights. As such, the calibrated weights are applied to the inputs and make calibrated outputs from the machine learning model. Anything that results from these including the modified equation, reporting, and otherwise in the claim is calibrated by this operation for statistical consistency. When combined with Melchert, the input data is iteratively calibrated until the data is calibrated and ready for introduction into the machine learning model of Melchert. All parameters determined thereafter are so calibrated. By this process.)
Claims 13 and 20 recite substantially the same features as claim 6 and are rejected for at least the same reasons as claim 6.
Claims 7 and 14 : Melchert, Brownlee, and Odegua
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over NPL: “Adaptive basis functions for prototype-based classification of functional data” by Melchert et al. (Melchert) in view of NPL: “How to Calibrate Probabilities for Imbalanced Classification” by Brownlee (Brownlee) and NPL: “How to put machine learning models into production” by Odegua (Odegua).
Claims 7 and 14
Regarding claim 7, Melchert in view of Brownlee teaches the features of claim 2 and further teaches:
further comprising: instructions stored in the memory and executable by the processor that, when executed by the processor cause the system to: (Melchert Abstract “Computer experiments show that the methods offer potential to improve classification performance significantly.” – A computer is used with memory and processor configured to execute instructions in memory.)
Melchert expects others to build on the work conducted and made the article publicly available; however, Melchert in view of Brownlee does not appear to explicitly teach the following, which Melchert in view of Brownlee and Odegua teaches:
(a) upload design equations to a library of models; and (Odegua Page 3, Paragraph 6 “T[here] are three key areas your team needs to consider before embarking on any ML projects are: 1. Data storage and retrieval 2. Frameworks and tooling 3. Feedback and iteration“ – This teaches storage of machine learning models.)
(b) provide access to the library to a plurality of computer systems. (Odegua Page 6, Third Paragraph “Your model isn’t going to train, run, and deploy itself. For that, you need frameworks and tooling, software and hardware that help you effectively deploy ML models. These can be frameworks like Tensorflow, Pytorch, and Scikit-Learn for training models, programming languages like Python, Java, and Go, and even cloud environments like AWS, GCP, and Azure.“ – This teaches deploying the models for use by users from a network.)
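NOTE: For illustration only (hypothetical paths and model; not the Applicant's or Odegua's code), the following minimal sketch (Python) shows persisting a fitted model to a shared library location and loading it from another process, consistent with the storage and deployment considerations Odegua describes.

# (a) "upload" a model to a library; (b) another system with access loads and uses it.
import joblib
from pathlib import Path
from sklearn.linear_model import LogisticRegression

LIBRARY = Path("model_library")                  # hypothetical shared library location
LIBRARY.mkdir(parents=True, exist_ok=True)

model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])      # a trivial fitted model
joblib.dump(model, LIBRARY / "design_equation_v1.joblib")     # store in the library

reloaded = joblib.load(LIBRARY / "design_equation_v1.joblib") # retrieved by another system
print(reloaded.predict([[0.8]]))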
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claims to modify the desired future development of Melchert by the storage and deployment methods of Odegua because the person of ordinary skill in the art would be motivated by the aim of Melchert to further innovate on the model designs presented therein to look to Odegua, which teaches model deployment, which is as important as model building. (Melchert Page 10, Paragraphs 4-6 “Finally, the presented results are obtained on a comparatively low number of example datasets and classification algorithms. With the increasing availability of annotated functional datasets, the proposed approach should to be further validated. […] “Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creative commons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.; Odegua Page 1 “The goal of building a machine learning model is to solve a problem, and a machine learning model can only do so when it is in production and actively in use by consumers. As such, model deployment is as important as model building.”)
Claim 14 recites substantially the same features as claim 7 and is rejected for at least the same reasons as claim 7.
Claims 3-5, 10-12, and 17-19: Melchert and Udrescu
Claims 3-5, 10-12, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over NPL: “Adaptive basis functions for prototype-based classification of functional data” by Melchert et al. (Melchert) in view of NPL: “AI Feynman: A physics-inspired method for symbolic regression” by Udrescu et al. (Udrescu).
Claims 3, 10, and 17
Regarding claim 3, Melchert teaches the features of claim 1, and further teaches:
further comprising: instructions stored in the memory and executable by the processor that, when executed by the processor cause the system to: (Melchert Abstract “Computer experiments show that the methods offer potential to improve classification performance significantly.” – A computer is used with memory and processor configured to execute instructions in memory.)
Melchert does not appear to explicitly teach, but Melchert in view of Udrescu teaches:
(a) determine whether parameters of the equation have explicit solutions; (Udrescu Page 1 of 16, Overall Algorithm “Generic functions f(x1, …, xn) are extremely complicated and near impossible for symbolic regression to discover. However, functions appearing in physics and many other scientific applications often have some of the following simplifying properties that make them easier to discover: (1) Units: f and the variables upon which it depends have known physical units. (2) Low-order polynomial: f (or part thereof) is a polynomial of low degree.” See Fig. 1 (shown below). – These steps of the algorithm determine whether the equation has an explicit solution.)
(b) determine whether the equation has a unique solution; and (Udrescu Page 1-2, Overall algorithm “Generic functions f(x1 ,…, x n) are extremely complicated and near impossible for symbolic regression to discover. However, functions appearing in physics and many other scientific applications often have some of the following simplifying properties that make them easier to discover: (1) Units: f and the variables upon which it depends have known physical units. (2) Low-order polynomial: f (or part thereof) is a polynomial of low degree. (3) Compositionality: f is a composition of a small set of elementary functions, each typically taking no more than two arguments. (4) Smoothness: f is continuous and perhaps even analytic in its domain. (5) Symmetry: f exhibits translational, rotational, or scaling symmetry with respect to some of its variables. (6) Separability: f can be written as a sum or product of two parts with no variables in common. The question of why these properties are common remains controversial and not fully understood (28, 29). However, as we will see below, this does not prevent us from discovering and exploiting these properties to facilitate symbolic regression. Property (1) enables dimensional analysis, which often transforms the problem into a simpler one with fewer independent variables. Property (2) enables polynomial fitting, which quickly solves the problem by solving a system of linear equations to determine the polynomial coefficients. Property (3) enables f to be represented as a parse tree with a small number of node types, sometimes enabling f or a subexpression to be found via a brute-force search. Property(4) enables approximating f using a feed-forward neural network with a smooth activation function. Property (5) can be confirmed using said neural network and enables the problem to be transformed into a simpler one with one independent variable less (or even fewer for n > 2 rotational symmetry). Property (6) can be confirmed using said neural network and enables the independent variables to be partitioned into two disjoint sets and the problem to be transformed into two simpler ones, each involving the variables from one of these sets. – Symmetry indicates that a solution is not unique but has at least two solutions.)
(c) determine whether the equation has a separable solution. (Udrescu Pages 1-2, Overall algorithm, in the same passage quoted above for element (b), in particular: “(6) Separability: f can be written as a sum or product of two parts with no variables in common. […] Property (6) can be confirmed using said neural network and enables the independent variables to be partitioned into two disjoint sets and the problem to be transformed into two simpler ones, each involving the variables from one of these sets.” – The system determines whether the equation is separable.)
[Image: media_image4.png (greyscale) – Udrescu Fig. 1, schematic of the overall AI Feynman algorithm]
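For illustration only, and forming no part of the evidentiary record, the following Python sketch shows the kind of checks described in elements (a) and (c) above: an explicit low-degree polynomial fit obtained by solving a linear least-squares system (cf. Udrescu property (2)), and a crude numeric probe for additive separability (cf. Udrescu property (6)). All function names and tolerances are hypothetical and are not code from Udrescu or Melchert.

import itertools
import numpy as np

def explicit_polynomial_fit(X, y, degree=2, tol=1e-6):
    """Try to fit y = f(X) with a polynomial of total degree <= degree by
    solving a linear least-squares system for the coefficients.
    Returns (coeffs, powers) if the fit error is below tol, otherwise None."""
    n_vars = X.shape[1]
    # Enumerate exponent tuples for every monomial of total degree <= degree.
    powers = [p for p in itertools.product(range(degree + 1), repeat=n_vars)
              if sum(p) <= degree]
    # Design matrix with one column per monomial.
    A = np.column_stack([np.prod(X ** np.array(p), axis=1) for p in powers])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    rms_error = np.sqrt(np.mean((A @ coeffs - y) ** 2))
    return (coeffs, powers) if rms_error < tol else None

def looks_additively_separable(f, a1, b1, a2, b2, tol=1e-6):
    """Crude numeric probe for additive separability of a two-variable
    function: if f(x1, x2) = g(x1) + h(x2), the mixed second difference
    below vanishes."""
    return abs(f(a1, b1) + f(a2, b2) - f(a1, b2) - f(a2, b1)) < tol

# Example: data from an explicit quadratic is recognized as having an
# explicit polynomial solution, and x1**2 + sin(x2) is flagged separable.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = 3.0 * X[:, 0] ** 2 + 2.0 * X[:, 0] * X[:, 1] - 1.0
print(explicit_polynomial_fit(X, y) is not None)                    # True
print(looks_additively_separable(lambda u, v: u ** 2 + np.sin(v),
                                 0.3, -0.7, 1.1, 0.4))              # True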
It would have been obvious to a person of ordinary skill in the art to modify the methods of Melchert with the algorithm of Udrescu. Melchert's express aim of reducing dimensionality and complexity would have motivated the person of ordinary skill in the art to look to Udrescu, which teaches algorithmic steps that simplify the functional components of an equation and thereby reduce dimensionality. (Melchert Pages 1-2, Introduction “In a variety of modern systems, high-dimensional sensor data are produced that is usually difficult to handle with traditional methods. To cope with the high number of input dimensions, the use of prior knowledge of the data generating systems can be used for simplification and dimensionality reduction. […] Various machine learning algorithms can be applied, having specific advantages and disadvantages. The popularity of prototype- and distance-based classification systems results from their intuitive interpretation and straightforward implementation [2, 7]. In this paper, an extension of the popular learning vector quantization (LVQ) [21] is used. LVQ systems comprise different prototypes which represent characteristic properties of their corresponding classes. Together with an appropriate distance measure, they constitute an efficient classification method. […] In this paper, we introduce a further extension of GMLVQ with respect to functional data. An expansion of functional data by means of suitable basis functions in combination with GMLVQ has proven to make superior classification performance and drastic reduction of input dimensions possible.”; Udrescu Abstract “A core challenge for both physics and artificial intelligence (AI) is symbolic regression: finding a symbolic expression that matches data from an unknown function. Although this problem is likely to be NP-hard in principle, functions of practical interest often exhibit symmetries, separability, compositionality, and other simplifying properties. In this spirit, we develop a recursive multidimensional symbolic regression algorithm that combines neural network fitting with a suite of physics-inspired techniques.”)
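As a further illustration only (not part of the record and not Melchert's implementation), the dimensionality reduction referred to above can be sketched in Python as projecting each high-dimensional functional observation onto a small set of basis functions, so that a prototype-based classifier operates on a few expansion coefficients instead of the raw samples. The choice of a Chebyshev polynomial basis and all names below are assumptions made solely for the sketch.

import numpy as np

def basis_expansion_coefficients(curves, n_basis=5):
    """Project each functional observation (a curve sampled at many points)
    onto a small Chebyshev polynomial basis via least squares, reducing each
    curve to n_basis expansion coefficients that a prototype-based classifier
    could consume instead of the raw high-dimensional samples."""
    n_points = curves.shape[1]
    t = np.linspace(-1.0, 1.0, n_points)
    B = np.polynomial.chebyshev.chebvander(t, n_basis - 1)   # (n_points, n_basis)
    coeffs, *_ = np.linalg.lstsq(B, curves.T, rcond=None)    # (n_basis, n_curves)
    return coeffs.T                                          # (n_curves, n_basis)

# Example: three curves sampled at 1000 points are reduced to 5 numbers each.
t = np.linspace(0.0, np.pi, 1000)
curves = np.vstack([np.sin(t), np.cos(t), np.sin(2.0 * t)])
print(basis_expansion_coefficients(curves).shape)            # (3, 5)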
Claims 10 and 17 recite substantially the same features as claim 3 and are rejected for at least the same reasons as claim 3.
Claims 4, 11, and 18
Regarding claim 4, Melchert in view of Udrescu teaches the features of claim 3 and further teaches:
further comprising: instructions stored in the memory and executable by the processor that, when executed by the processor cause the system to: (Melchert Abstract “Computer experiments show that the methods offer potential to improve classification performance significantly.” – A computer is used with memory and processor configured to execute instructions in memory.)
(d) if the parameters of the equation have explicit solutions, obtain a mathematical formula for estimators; and (e) compute and report a numeric estimator. (Udrescu “The overall algorithm (available at https://github.com/SJ001/AI-Feynman) is schematically illustrated in Fig. 1. It consists of a series of modules that try to exploit each of the above-mentioned properties. Like a human scientist, it tries many different strategies (modules) in turn, and if it cannot solve the full problem in one fell swoop, it tries to transform it and divide it into simpler pieces that can be tackled separately, recursively relaunching the full algorithm on each piece.” See Fig. 1 (shown below) – The system attempts to find the simplest equation that solves the system and provides numeric estimators. It attempts an explicit polynomial fit, which, when successful, constitutes an explicit solution. It then “reports” the equation, as seen in Fig. 1. An illustrative, non-record sketch of returning such a formula together with numeric estimates is provided after the figure below.)
[Image: media_image4.png (greyscale) – Udrescu Fig. 1, schematic of the overall AI Feynman algorithm]
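For illustration only (hypothetical names; not Udrescu's code), the following Python sketch shows how a solver of the kind cited for elements (d) and (e) might return both a human-readable formula and the corresponding numeric estimates when an explicit low-degree polynomial fit succeeds.

import itertools
import numpy as np

def report_explicit_estimator(X, y, degree=2, tol=1e-6):
    """If y is explained by an explicit polynomial of total degree <= degree
    in the columns of X, return a human-readable formula together with the
    numeric estimates it produces; otherwise return (None, None)."""
    n_vars = X.shape[1]
    powers = [p for p in itertools.product(range(degree + 1), repeat=n_vars)
              if sum(p) <= degree]
    A = np.column_stack([np.prod(X ** np.array(p), axis=1) for p in powers])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    estimates = A @ coeffs
    if np.sqrt(np.mean((estimates - y) ** 2)) >= tol:
        return None, None                    # no explicit solution at this degree
    terms = []
    for c, p in zip(coeffs, powers):
        if abs(c) < 1e-9:
            continue                         # drop numerically zero coefficients
        monomial = "*".join(f"x{i + 1}**{e}" for i, e in enumerate(p) if e) or "1"
        terms.append(f"{c:.4g}*{monomial}")
    return " + ".join(terms), estimates

# Example: both the symbolic formula and the numeric estimator are reported.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(100, 2))
y = 3.0 * X[:, 0] ** 2 + 2.0 * X[:, 0] * X[:, 1] - 1.0
formula, estimates = report_explicit_estimator(X, y)
print(formula)        # e.g. "-1*1 + 2*x1**1*x2**1 + 3*x1**2"
print(estimates[:3])  # first few numeric estimates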
Claims 11 and 18 recite substantially the same features as claim 4 and are rejected for at least the same reasons as claim 4.
Claims 5, 12, and 19
Regarding claim 5, Melchert in view of Udrescu teaches the features of claim 3 and further teaches:
further comprising: instructions stored in the memory and executable by the processor that, when executed by the processor cause the system to: (Melchert Abstract “Computer experiments show that the methods offer potential to improve classification performance significantly.” – A computer is used with memory and processor configured to execute instructions in memory.)
(f) if the parameters of the equation do not have explicit solutions, report inexplicitly defined estimators; and (g) compute and report a numeric estimator. (Udrescu Pages 4-5: Udrescu provides several solutions to the issue of the equation being inexplicit. These include determining and exploiting symmetry, determining and exploiting separability, setting variables equal, and applying further transformations; each is described in its own section on pages 4-5. For example, Page 5, Setting variables equal: “We also exploit the neural network to explore the effect of setting two input variables equal and attempting to solve the corresponding new mystery y′ with one fewer variable. We try this for all variable pairs, and if the resulting new mystery is solved, we try solving the mystery y′′ ≡ y/y′ that has the found solution divided out. As an example, this technique solves the Gaussian probability distribution mystery I.6.2. After making θ and σ equal and dividing the initial equation by the result, we are getting rid of the denominator, and the remaining part of the equation is an exponential. After taking the logarithm of this (see the below section), the resulting expression can be easily solved by the brute-force method.” – If the model cannot find an explicit solution, the non-explicit solutions are processed and transformed. Page 5, Extra transformations: “In addition, several transformations are applied to the dependent and independent variables, which proved to be useful for solving certain equations. Thus, for each equation, we ran the brute force and polynomial fit on a modified version of the equation in which the dependent variable was transformed by one of the following functions: square root, raise to the power of 2, log, exp, inverse, sin, cos, tan, arcsin, arccos, and arctan. This reduces the number of symbols needed by the brute force by one, and in certain cases, it even allows the polynomial fit to solve the equation, when the brute force would otherwise fail. For example, the formula for the distance between two points in the three-dimensional (3D) Euclidean space.” – Again, if the model cannot find an explicit solution, the non-explicit solutions are processed and transformed. As illustrated in the flowcharts of Fig. 1 and Fig. 2 (shown below), each of these paths returns to the main loop, and the final equation is reported and used to evaluate numeric estimators. An illustrative, non-record sketch of such a transformation-and-retry strategy is provided after the figures below.)
[Image: media_image5.png (greyscale) – Udrescu Fig. 1 flowchart]
[Image: media_image6.png (greyscale) – Udrescu Fig. 2 flowchart]
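For illustration only (hypothetical names and tolerances; not Udrescu's implementation), the following Python sketch shows the fallback strategy cited for elements (f) and (g): when no explicit fit is found, candidate transformations are applied to the dependent variable and the fit is retried, after which a numeric estimator can still be computed from the transformed model.

import itertools
import numpy as np

def _poly_fit_error(X, y, degree=2):
    """RMS error of the best least-squares fit of y by a polynomial of total
    degree <= degree in the columns of X."""
    powers = [p for p in itertools.product(range(degree + 1), repeat=X.shape[1])
              if sum(p) <= degree]
    A = np.column_stack([np.prod(X ** np.array(p), axis=1) for p in powers])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((A @ coeffs - y) ** 2))

# Candidate transformations of the dependent variable, echoing the cited list
# (square root, square, log, exp, inverse, arctan, ...).
TRANSFORMS = {
    "sqrt": np.sqrt, "square": np.square, "log": np.log,
    "exp": np.exp, "inverse": lambda v: 1.0 / v, "arctan": np.arctan,
}

def first_successful_transform(X, y, degree=2, tol=1e-6):
    """Retry the polynomial fit on transformed versions of y and report the
    first transformation (if any) under which an explicit fit is found."""
    for name, transform in TRANSFORMS.items():
        with np.errstate(all="ignore"):
            y_t = transform(y)
        if not np.all(np.isfinite(y_t)):
            continue                         # transform undefined on this data
        if _poly_fit_error(X, y_t, degree) < tol:
            return name
    return None

# Example: y = exp(x1 + 2*x2) is not a low-degree polynomial, but log(y) is,
# so the search reports the "log" transformation.
rng = np.random.default_rng(1)
X = rng.uniform(0.1, 1.0, size=(200, 2))
y = np.exp(X[:, 0] + 2.0 * X[:, 1])
print(first_successful_transform(X, y))     # log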
Claims 12 and 19 recite substantially the same features as claim 5 and are rejected for at least the same reasons as claim 5.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20140074829 A1 to Schmidt et al. (Teaches the use of symbolic regression to determine relevant expressions)
US 20100332422 A1 to Cheng et al. (Teaches using symbolic regression for search and retrieval)
US 20070208548 A1 to McConaghy et al. (Teaches using machine learning with symbolic regression to determine policies)
US 20020169563 A1 to De Carvalho Ferreira et al. (Teaches using canonical forms and symbolic regression to determine representative functions)
US 11080359 B2 to Horesh et al. (Teaches function finding and optimization using genetic algorithms and symbolic regression)
NPL: “Contemporary Symbolic Regression Methods and their Relative Performance” by La Cava et al. (Provides a review of studies on symbolic regression near the priority date of the claims)
NPL: “Basis Functions” by Lawrence (Teaches basis functions for use with symbolic regression)
NPL: “Where are we now?: a large benchmark study of recent symbolic regression methods” by Orzechowski et al. (Provides another review of studies on symbolic regression near the priority date of the claims)
NPL: “Basis function construction for hierarchical reinforcement learning” by Osentoski et al. (Teaches constructing basis functions for reinforcement learning)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAY MICHAEL WHITE whose telephone number is (571) 272-7073. The examiner can normally be reached Mon-Fri 11:00-7:00 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ryan Pitaro, can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.W./Examiner, Art Unit 2188
/RYAN F PITARO/Supervisory Patent Examiner, Art Unit 2188