Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant's arguments filed 12/31/25 have been fully considered but they are not persuasive.
Regarding arguments pertaining to the 35 USC § 101 rejection and the newly added limitations/amendment, the limitation
obtaining, based on similarities between characteristics of a first target dataset and characteristics of a predetermined dataset,
a plurality of first performance metrics,
reads on obtaining a plurality of first performance metrics, or, more simply, obtaining data.
Obtaining/receiving, outputting, or storing data amounts to mere data gathering and output recited at a high level of generality, i.e., insignificant extra-solution activity appended to the judicial exception - see MPEP 2106.05(g).
wherein each first performance metric of the plurality of first performance metrics is associated with a candidate causal model configuration of a plurality of candidate causal model configurations,
reads on a first piece of data being associated with one algorithm;
the plurality of candidate causal model configurations are associated with the predetermined dataset;
reads on two algorithms associated with data
selecting, based on the plurality of first performance metrics, a target causal model configuration from the plurality of candidate causal model configurations;
reads on selecting an algorithm
and processing the first target dataset using a causal model which is built based on the target causal model configuration
reads on mental processing (thinking) about data using math.
Modeling amounts to a mental process of modeling with assistance of pen and paper.
It is well-settled that collecting and analyzing information by steps people go through in their minds or by mathematical algorithms, without more, are mental processes in the abstract-idea category. Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353-54 (Fed. Cir. 2016); see SAP Am., Inc. v. InvestPic, LLC, 898 F.3d 1161, 1167 (Fed. Cir. 2018) ("[S]electing certain information, analyzing it using mathematical techniques, and reporting or displaying the results of the analysis" is abstract); Intellectual Ventures I LLC v. Cap. One Fin. Corp., 850 F.3d 1332, 1341 (Fed. Cir. 2017) ("Organizing, displaying, and manipulating data of particular documents" is abstract.); FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1096-97 (Fed. Cir. 2016) (compiling and combining disparate data sources to generate a full picture of a user's activity, identity, frequency of activity, and the like in a computer environment to detect potential fraud does not differentiate a process from ordinary mental processes); In re Killian, 45 F.4th 1373, 1379 (Fed. Cir. 2022) ("These steps can be performed by a human, using 'observation, evaluation, judgment, [and] opinion,' because they involve making determinations and identifications, which are mental tasks humans routinely do").
The claims amount to data analysis/manipulation and using some form of AI as a tool. The transformation of data, or the mere "manipulation of basic mathematical constructs [i.e.,] the paradigmatic 'abstract idea,"' is not a transformation sufficient to integrate a judicial exception into a practical application. CyberSource v. Retail Decisions, 654 F.3d 1366, 1372 n.2 (Fed. Cir. 2011) (quoting In re Warmerdam, 33 F.3d 1354, 1355, 1360 (Fed. Cir. 1994)).
Claiming AI at a high level can amount to using a black box without specifying any real details of how the AI operates or what is inside the black box. The claims need to specify the technical details of the AI.
Although the claims may specify an improvement, they are only improving the abstract idea, not a computer.
Training a neural network to learn amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).
Using a trained machine learning model to e.g., predict… amounts to adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).
Using AI to predict amounts to a mental process in the same way that a human can predict the weather with or without a computer.
"The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea." MPEP § 2106.04(a)(2).III. "Accordingly, the "mental processes" abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions." Id. For the purposes of this abstract idea, "[t]he courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g., pen and paper or a slide rule) to perform the claim limitation."
See also the 35 USC § 101 rejection below.
Regarding arguments pertaining to the 35 USC § 103 rejections and newly added limitations,
Mueller teaches causal model configurations (“An updated visual representation is generated including one or more updated causal models”, abstract) and a plurality of first performance metrics, wherein each first performance metric of the plurality of first performance metrics is associated with a candidate causal model configuration of a plurality of candidate causal model configurations, the plurality of candidate causal model configurations are associated with the predetermined dataset;
selecting, based on the plurality of first performance
(“A scoring function along with corresponding visual hints can be used to compare alternative causal models”, 0080; “users can examine learned models by clicking and/or selecting each tile colored by model scores” 0081; “a model scoring mechanism with visual hints for interactive model refinement”, 0011;
“Bayesian Information Criterion (BIC) of a model is computed from such residuals (referring hereinbelow to Equation (2)), hence refining these miscalculated causal models based on their score change can also be difficult”, 0084, 0099, 00142).
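For context only, Mueller's cited scoring can be illustrated with the conventional Gaussian-residual form of the Bayesian Information Criterion (Mueller's Equation (2) is not reproduced in the record, so this standard form is an assumption); lower scores indicate a better-fitting causal model:

```python
import math

def bic_from_residuals(residuals, k):
    """Conventional BIC for a model with k free parameters, assuming
    Gaussian residuals (an assumption; Mueller's Equation (2) is not
    of record)."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)  # residual sum of squares
    # BIC = n * ln(RSS / n) + k * ln(n); lower is better
    return n * math.log(rss / n) + k * math.log(n)

# Comparing two alternative causal models by score, as Mueller's cited
# scoring function does with visual hints
score_a = bic_from_residuals([0.1, -0.2, 0.05, 0.15], k=3)
score_b = bic_from_residuals([0.4, -0.5, 0.3, -0.35], k=3)
better = "A" if score_a < score_b else "B"
```

The model with the smaller residuals attains the lower (better) score here, which is the sense in which a scoring mechanism can rank alternative causal models.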
Claim Rejections – 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-40 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: claims 21-40 are directed to either a process, machine, manufacture or composition of matter.
With respect to claims 21, 30, 37:
2A Prong 1:
selecting, based on the respective first performance, a target causal model configuration from the plurality of candidate causal model configurations (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can select/determine data);
processing a target dataset (mental process of modeling with assistance of pen and paper);
2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:
Regarding claim 37, one processing unit; and at least one memory (the computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component; "the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention." Alice, 134 S. Ct. at 2358);
causal model (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f));
processing the first target dataset using a causal model which is built based on the target causal model configuration (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level application of a previously trained model to make a prediction);
obtaining, based on similarities between characteristics of a first target dataset and characteristics of a predetermined dataset, a respective first performance of a plurality of candidate causal model configurations corresponding to characteristics of the predetermined dataset (mere data gathering and output recited at a high level of generality - insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).
2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Additional elements:
Regarding claim 37, one processing unit; and at least one memory (the computer component is recited at a high level of generality and amounts to no more than mere instructions to apply the exception using a generic computer component; "the mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention." Alice, 134 S. Ct. at 2358);
causal model (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f));
processing the first target dataset using a causal model which is built based on the target causal model configuration (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f) – Examiner’s note: high level application of a previously trained model to make a prediction);
obtaining, based on similarities between characteristics of a first target dataset and characteristics of a predetermined dataset, a respective first performance of a plurality of candidate causal model configurations corresponding to characteristics of the predetermined dataset (mere data gathering and output recited at a high level of generality - insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g));
Further, the obtaining steps were considered to be extra-solution activity in Step 2A Prong 2, and thus they are re-evaluated in Step 2B to determine whether they are more than what is well-understood, routine, conventional activity in the field. The receiving and/or transmitting limitations constitute extra-solution activity. See buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) ("That a computer receives and sends the information over a network-with no further specification-is not even arguably inventive."). The court decisions cited in MPEP 2106.05(d)(II) indicate that merely receiving and/or transmitting data over a network is well-understood, routine, conventional activity, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). Thereby, a conclusion that the claimed receiving/transmitting steps are well-understood, routine, conventional activity is supported under Berkheimer. The claim is not patent eligible.
22, 31, 38. (new): The method of claim 21, further comprising: obtaining the first target dataset;
determining characteristics of the first target dataset (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can generate/determine data);
determining corresponding similarities between characteristics of the first target dataset and characteristics of a set of candidate predetermined datasets (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can generate/determine data); and
selecting, from the set of candidate predetermined datasets, a candidate predetermined dataset having the highest similarities as the predetermined dataset (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can select/determine data).
23, 39. (new): The method of claim 22, wherein the characteristics comprise at least one of: ratio of binary data in the first target dataset, ratio of continuous data in the first target dataset, ratio of sequencing data in the first target dataset, ratio of categorical data in the first target dataset, characteristic dimensionality of the first target dataset, sample count in the first target dataset, ratio of missing data in the first target dataset, balance of target factor values in the first target dataset, structure characteristics built from the first target dataset, skewness of the first target dataset, kurtosis of the first target dataset, mean value of the first target dataset, and variance of the first target dataset (further expands the mental process; intended use).
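For illustration, several of the recited characteristics (sample count, ratio of missing data, mean, variance, skewness, kurtosis) are conventional summary statistics that could be computed as in the following sketch (the function name and the treatment of None as missing data are illustrative assumptions, not claim requirements):

```python
def dataset_characteristics(column):
    """Sketch of a few of the recited dataset characteristics for a single
    numeric column; None entries are treated as missing data (an assumption)."""
    present = [v for v in column if v is not None]
    n = len(present)
    mean = sum(present) / n
    var = sum((v - mean) ** 2 for v in present) / n  # population variance
    std = var ** 0.5
    return {
        "sample_count": len(column),
        "missing_ratio": (len(column) - n) / len(column),
        "mean": mean,
        "variance": var,
        # skewness and kurtosis as standardized third/fourth moments
        "skewness": sum(((v - mean) / std) ** 3 for v in present) / n,
        "kurtosis": sum(((v - mean) / std) ** 4 for v in present) / n,
    }

chars = dataset_characteristics([1.0, 2.0, 3.0, None])
```

Each value is an ordinary pen-and-paper computation, consistent with the mental-process characterization above.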
24, 40. (new): The method of claim 21, wherein selecting the target causal model configuration comprises:
obtaining a respective second performance of the plurality of candidate causal model configurations corresponding to characteristics of the predetermined dataset (data gathering); and selecting, based on the respective first performance and the respective second performance, the target causal model configuration from the plurality of candidate causal model configurations (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can select/determine data).
25. (new): The method of claim 24, wherein selecting, based on the respective first performance and the respective second performance, the target causal model (Adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)) configuration comprises:
for each candidate causal model configuration in the plurality of candidate causal model configurations,
determining (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can generate/determine data) the number of times the plurality of candidate causal model configurations are used for building a causal model, determining (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can generate/determine data) the number of times the candidate causal model configuration is used for building a causal model, and determining a performance indicator of the candidate causal model configuration based on the number of times the plurality of candidate causal model configurations are used for building a causal model, the number of times the candidate causal model configuration is used for building a causal model, and the first performance of the candidate causal model configuration and the second performance of the candidate causal model configuration; and
selecting, from the plurality of candidate causal model configurations, the candidate causal model configuration having the highest performance indicator as the target causal model configuration (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can select/determine data).
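The count-based performance indicator recited in claim 25 is not tied to any particular formula; one hypothetical instantiation is an upper-confidence-bound style score combining the two performances with the usage counts (the formula below is an illustrative assumption, not the claimed method):

```python
import math

def performance_indicator(perf1, perf2, times_this_used, times_all_used):
    """Hypothetical UCB-style indicator: average the two performances and
    add an exploration bonus from the usage counts (an assumption; the
    claims do not fix the formula)."""
    exploit = (perf1 + perf2) / 2.0
    explore = math.sqrt(math.log(times_all_used) / times_this_used)
    return exploit + explore

# Select the configuration with the highest indicator
candidates = {
    "config_a": performance_indicator(0.9, 0.8, times_this_used=50, times_all_used=100),
    "config_b": performance_indicator(0.6, 0.5, times_this_used=2, times_all_used=100),
}
target = max(candidates, key=candidates.get)
```

Under this sketch a rarely used configuration can outrank a better-performing one, illustrating why the indicator depends on both the performances and the usage counts.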
26. (new): The method of claim 21, further comprising: obtaining a user request which specifies a constraint associated with the target factor; and
determining, based on the user request and the causal model, one or more target strategies to be applied to the first target dataset (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can select/determine data).
27. (new): The method of claim 26, further comprising:
determining changes of target factor resulted from applying the strategy to the second target dataset; and updating, based on changes of the target factor, the first performance corresponding to the target causal model configuration (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can select/determine data).
28. (new): The method of claim 27, wherein updating the first performance corresponding to the target causal model configuration comprises: determining the number of times the target causal model configuration is used for building the causal model (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can select/determine data); and
updating, based on changes of the target factor and the number of times the target causal model configuration is used for building the causal model, the first performance corresponding to the target causal model configuration (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can generate/determine data).
29. (new): The method of claim 21, wherein the target causal model configuration comprises at least one of:
causal model method, and parameters of causal model method (further expands the mental process; intended use).
32. (new): The method of claim 30, further comprising: if similarities between the training dataset and each predetermined dataset in the set of candidate predetermined datasets are lower than a predetermined threshold, adding characteristics of the training dataset into characteristics of the set of candidate predetermined datasets (mental process – a user can perform the reasoning mentally as a first stage and then use pen and paper to perform the mathematical operations); and
setting the second performance of the plurality of candidate causal model configurations corresponding to characteristics of the training dataset as a predetermined second performance (mental process of modeling with assistance of pen and paper).
33. (new): The method of claim 32, further comprising:
determining the number of times the plurality of candidate causal model configurations are used for building a causal model; and determining the predetermined threshold based on the number of times (Abstract idea of analyzing data. Mental process. A human mind with pen and paper can generate/determine data).
34. (new): The method of claim 30, the second performance metric comprises at least one of: category precision, recall rate, and F1 score (further expands the mental process; a user can perform the reasoning mentally as a first stage and then use pen and paper to perform the mathematical operations).
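For reference, category precision, recall rate, and F1 score are standard classification metrics; a minimal sketch for a single positive class (the function name is illustrative):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Standard category precision, recall rate, and F1 score for one
    positive class, from paired true/predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp)          # of predicted positives, fraction correct
    recall = tp / (tp + fn)             # of actual positives, fraction found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0], [1, 0, 1, 0])
```

Each metric is a ratio of counts, i.e., a pen-and-paper computation consistent with the characterization above.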
35. (new): The method of claim 30, wherein the target causal model configuration comprises at least one of:
causal model method; and parameters of causal model method (mental process of modeling with assistance of pen and paper).
36. (new): The method of claim 30, further comprising:
adding a predetermined causal model configuration into the plurality of candidate causal model configurations, based on the number of times the plurality of candidate causal model configurations are used for building a causal model (mental process of modeling with assistance of pen and paper).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 21-24, 29-35, 37-40 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (WO 2021040791) in view of Mueller (WO 2020010350).
Zhang discloses:
21, 37. (new): A method for data processing, comprising:
obtaining, based on similarities between
“characteristics of a first target dataset” (not further defined; reads on ideal data, training data, optimized data, centroid of a cluster, etc., Zhang: “The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class”, abstract) and
“characteristics of a predetermined dataset” (not further defined in the claims or the disclosure; “The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class”, abstract), a respective “first performance” (not further defined; reads on any metric used in modeling/classifying, e.g., difference between vectors, error signals when not at convergence or optimization) of a plurality of candidate causal model (see Mueller) configurations corresponding to characteristics of the predetermined dataset (“training numerous machine-learning models”, abstract);
selecting, based on the respective first performance, a target causal model configuration from the plurality of candidate causal model configurations; and processing the first target dataset using a causal model which is built based on the target causal model configuration (Zhang: “training numerous machine-learning models using training data sets with different probability distributions, and then selecting a model to execute on a test data set. The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class. Examples include detecting and recommending a pre-trained model to generate outputs predicting a classification, such as a lithology, of a test data set. Recommending the trained model may be based on calculated prior probabilities that measure the similarity between the training and test data sets. The model with a training data set that is most similar to the test data set can be recommended for classifying a physical property of the subsurface rock for hydrocarbon formation.”, abstract; 0011-0016; 0030, 0036-0040; “determining, for each data point of a plurality of data points of the test data set, a distance between the data point and the probability distribution of each training class of the one or more training classes; and identifying the training class associated with a smallest distance between the data point and the probability distribution of the training class as compared to distances of remaining data points of the plurality of data points of the test data set.”, 0052).
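Zhang's cited selection step, i.e., identifying the training class whose probability distribution is nearest a data point, can be illustrated under simplifying assumptions by summarizing each class distribution as a (mean, std) pair and taking a normalized distance (the class names and the distance measure below are illustrative, not Zhang's disclosure):

```python
def select_nearest_class(point, class_distributions):
    """Sketch of Zhang's cited step: pick the training class whose
    probability distribution, summarized here as a (mean, std) pair,
    is closest to the data point (a simplifying assumption)."""
    def distance(x, mean, std):
        return abs(x - mean) / std  # z-score style normalized distance

    return min(class_distributions,
               key=lambda name: distance(point, *class_distributions[name]))

# Hypothetical training classes summarized as (mean, std)
classes = {"class_1": (10.0, 2.0), "class_2": (20.0, 4.0)}
nearest = select_nearest_class(12.0, classes)
```

The point 12.0 is two units from the first class mean but one normalized unit away, versus two normalized units from the second, so the first class is selected; Zhang's model recommendation is the same nearest-distribution comparison applied over a whole test data set.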
Zhang fails to particularly call for the models to be causal models and a plurality of first performance metrics, wherein each first performance metric of the plurality of first performance metrics is associated with a candidate causal model configuration of a plurality of candidate causal model configurations, the plurality of candidate causal model configurations are associated with the predetermined dataset;
selecting, based on the plurality of first performance.
Mueller teaches causal model configurations (“An updated visual representation is generated including one or more updated causal models”, abstract) and a plurality of first performance metrics, wherein each first performance metric of the plurality of first performance metrics is associated with a candidate causal model configuration of a plurality of candidate causal model configurations, the plurality of candidate causal model configurations are associated with the predetermined dataset;
selecting, based on the plurality of first performance
(“A scoring function along with corresponding visual hints can be used to compare alternative causal models”, 0080; “users can examine learned models by clicking and/or selecting each tile colored by model scores” 0081; “a model scoring mechanism with visual hints for interactive model refinement”, 0011;
“Bayesian Information Criterion (BIC) of a model is computed from such residuals (referring hereinbelow to Equation (2)), hence refining these miscalculated causal models based on their score change can also be difficult”, 0084, 0099, 00142).
It would have been obvious to combine the references before the effective filing date because they are in the same field of endeavor and because, e.g., causal models enable prediction of intervention effects, simulation of "what-if" scenarios, and identification of true causal factors beyond mere correlations, and provide more robust, transparent, and explainable decision-making than traditional statistical models or machine learning. By selecting models based on performance metrics such as model scores, one can select causal models based on how well they perform as well as on how similar they are to the data.
22, 38. (new): The method of claim 21, further comprising: obtaining the first target dataset;
determining characteristics of the first target dataset;
determining corresponding similarities between characteristics of the first target dataset and characteristics of a set of candidate predetermined datasets; and
selecting, from the set of candidate predetermined datasets, a candidate predetermined dataset having the highest similarities as the predetermined dataset (determining the data used in selecting the models “The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class. Examples include detecting and recommending a pre-trained model to generate outputs predicting a classification, such as a lithology, of a test data set. Recommending the trained model may be based on calculated prior probabilities that measure the similarity between the training and test data sets. The model with a training data set that is most similar to the test data set can be recommended for classifying a physical property of the subsurface rock for hydrocarbon formation.”, abstract, 0011-0016).
23, 39. (new): The method of claim 22, wherein the characteristics comprise at least one of: ratio of binary data in the first target dataset, ratio of continuous data in the first target dataset, ratio of sequencing data in the first target dataset, ratio of categorical data in the first target dataset, characteristic dimensionality of the first target dataset, sample count in the first target dataset, ratio of missing data in the first target dataset, balance of target factor values in the first target dataset, structure characteristics built from the first target dataset, skewness of the first target dataset, kurtosis of the first target dataset, mean value of the first target dataset, and variance of the first target dataset (Zhang: “training numerous machine-learning models using training data sets with different probability distributions, and then selecting a model to execute on a test data set. The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class. Examples include detecting and recommending a pre-trained model to generate outputs predicting a classification, such as a lithology, of a test data set. Recommending the trained model may be based on calculated prior probabilities that measure the similarity between the training and test data sets. 
The model with a training data set that is most similar to the test data set can be recommended for classifying a physical property of the subsurface rock for hydrocarbon formation.”, abstract; 0011-0016; 0030, 0036-0040; “determining, for each data point of a plurality of data points of the test data set, a distance between the data point and the probability distribution of each training class of the one or more training classes; and identifying the training class associated with a smallest distance between the data point and the probability distribution of the training class as compared to distances of remaining data points of the plurality of data points of the test data set.”, 0052).
24, 40. (new): The method of claim 21, wherein selecting the target causal model configuration comprises:
obtaining a respective second performance of the plurality of candidate causal model configurations corresponding to characteristics of the predetermined dataset; and selecting, based on the respective first performance and the respective second performance, the target causal model configuration from the plurality of candidate causal model configurations (Zhang: “training numerous machine-learning models using training data sets with different probability distributions, and then selecting a model to execute on a test data set. The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class. Examples include detecting and recommending a pre-trained model to generate outputs predicting a classification, such as a lithology, of a test data set. Recommending the trained model may be based on calculated prior probabilities that measure the similarity between the training and test data sets. The model with a training data set that is most similar to the test data set can be recommended for classifying a physical property of the subsurface rock for hydrocarbon formation.”, abstract; 0011-0016; 0030, 0036-0040; “determining, for each data point of a plurality of data points of the test data set, a distance between the data point and the probability distribution of each training class of the one or more training classes; and identifying the training class associated with a smallest distance between the data point and the probability distribution of the training class as compared to distances of remaining data points of the plurality of data points of the test data set.”, 0052).
29. (new): The method of claim 21, wherein the target causal model configuration comprises at least one of:
causal model method, and parameters of causal model method (Mueller teaches causal model configurations: “An updated visual representation is generated including one or more updated causal models”, abstract; 0108-0111).
30. (new): A method for processing data, comprising: obtaining, based on similarities between a training dataset (not further defined; reads on ideal data, training data, optimized data, centroid of a cluster, etc., Zhang: “The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class”, abstract) and a predetermined dataset (not further defined in the claims or the disclosure; “The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class”, abstract), a respective second performance of a plurality of candidate causal model configurations corresponding to characteristics of the predetermined dataset (not further defined; reads on any metric used in modeling/classifying, e.g., difference between vectors, error signals when not at convergence or optimization);
selecting, based on the respective second performance, a target causal model (see Mueller) configuration from the plurality of candidate causal model configurations;
determining a second performance metric resulted from applying a causal model to the training dataset, the causal model being built based on the target causal model configuration (inherent operations of training, Zhang: “training numerous machine-learning models using training data sets with different probability distributions, and then selecting a model to execute on a test data set.”, abstract); and
updating (inherent operations of training, Zhang: “training numerous machine-learning models using training data sets with different probability distributions, and then selecting a model to execute on a test data set.”, abstract; reads on optimizing, going through iterations, etc. Mueller: iterations, 0214), based on the second performance metric, a second performance corresponding to the target causal model configuration. (Zhang: “training numerous machine-learning models using training data sets with different probability distributions, and then selecting a model to execute on a test data set. The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class. Examples include detecting and recommending a pre-trained model to generate outputs predicting a classification, such as a lithology, of a test data set. Recommending the trained model may be based on calculated prior probabilities that measure the similarity between the training and test data sets. The model with a training data set that is most similar to the test data set can be recommended for classifying a physical property of the subsurface rock for hydrocarbon formation.”, abstract; 0011-0016; 0030, 0036-0040; “determining, for each data point of a plurality of data points of the test data set, a distance between the data point and the probability distribution of each training class of the one or more training classes; and identifying the training class associated with a smallest distance between the data point and the probability distribution of the training class as compared to distances of remaining data points of the plurality of data points of the test data set.”, 0052).
Zhang fails to explicitly disclose that the models are causal models.
Mueller teaches causal model configurations (“An updated visual representation is generated including one or more updated causal models”, abstract).
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to combine the references because they are in the same field of endeavor and because, e.g., causal models enable prediction of intervention effects, simulation of "what-if" scenarios, identification of true causal factors beyond mere correlations, and more robust, transparent, and explainable decision-making than traditional statistical or machine-learning models.
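For illustration only (not part of the record or of either reference's disclosure), Zhang's cited technique of selecting a pre-trained model by measuring the distance between test data points and each training class's probability distribution (abstract; 0052) can be sketched as follows; the function and field names are hypothetical:

```python
import math

def class_distance(point, mean, std):
    # Per-feature standardized distance between a data point and a
    # training class's (assumed Gaussian) probability distribution.
    return math.sqrt(sum(((p - m) / s) ** 2 for p, m, s in zip(point, mean, std)))

def select_model(test_set, trained_models):
    # Recommend the trained model whose training-class distribution is,
    # on average, closest to the points of the test data set.
    def avg_distance(model):
        mean, std = model["mean"], model["std"]
        return sum(class_distance(p, mean, std) for p in test_set) / len(test_set)
    return min(trained_models, key=avg_distance)
```

This is one plausible reading of "identifying the training class associated with a smallest distance"; Zhang's actual distance measure and prior-probability computation may differ.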
31. (new): The method of claim 30, further comprising: obtaining the training dataset; determining characteristics of the training dataset;
determining corresponding similarities between characteristics of the training dataset and characteristics of a set of candidate predetermined datasets; and selecting, from the set of candidate predetermined datasets, a candidate predetermined dataset having the highest similarities as the predetermined dataset (see rejection of claim 30).
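For illustration only (hypothetical names, not drawn from the claims or references), the claimed selection of the most similar candidate predetermined dataset can be sketched as a similarity computation over characteristic vectors:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two dataset-characteristic vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def select_predetermined_dataset(training_chars, candidates):
    # Return the candidate dataset whose characteristics are most
    # similar to the characteristics of the training dataset.
    return max(candidates, key=lambda c: cosine_similarity(training_chars, c["chars"]))
```

Cosine similarity is only one possible similarity measure; the claims do not specify how the similarities are computed.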
34. (new): The method of claim 30, wherein the second performance metric comprises at least one of: category precision (Zhang: "training numerous machine-learning models using training data sets with different probability distributions, and then selecting a model to execute on a test data set. The selection of the model may be based on the similarity of each data point of the test data set and the probability distribution of each training class. Examples include detecting and recommending a pre-trained model to generate outputs predicting a classification, such as a lithology, of a test data set. Recommending the trained model may be based on calculated prior probabilities that measure the similarity between the training and test data sets. The model with a training data set that is most similar to the test data set can be recommended for classifying a physical property of the subsurface rock for hydrocarbon formation.", abstract; 0011-0016; 0030, 0036-0040; "determining, for each data point of a plurality of data points of the test data set, a distance between the data point and the probability distribution of each training class of the one or more training classes; and identifying the training class associated with a smallest distance between the data point and the probability distribution of the training class as compared to distances of remaining data points of the plurality of data points of the test data set.", 0052), recall rate, and F1 score.
35. (new): The method of claim 30, wherein the target causal model configuration comprises at least one of:
causal model method; and parameters of causal model method (Mueller: "An updated visual representation is generated including one or more updated causal models", abstract; 0108-0111).
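For illustration only (hypothetical method name and parameters, not drawn from the claims or references), a causal model configuration comprising a causal model method and its parameters could take a form such as:

```python
# Hypothetical configuration: a causal discovery method plus its parameters.
target_causal_model_config = {
    "method": "PC",         # causal model method (e.g., the PC algorithm)
    "parameters": {         # parameters of the causal model method
        "alpha": 0.05,      # significance level for independence tests
        "max_depth": 3,     # limit on conditioning-set size
    },
}
```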
Claims 25-28 and 36 are not rejected under 35 USC 102/103.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID R VINCENT whose telephone number is (571)272-3080. The examiner can normally be reached Monday-Friday, 12:00 PM-8:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov, can be reached at (571)270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DAVID R VINCENT/Primary Examiner, Art Unit 2123