Prosecution Insights
Last updated: April 19, 2026
Application No. 17/888,810

SYSTEM AND METHOD FOR DOWNSAMPLING DATA

Final Rejection — §101, §103, §112
Filed: Aug 16, 2022
Examiner: MORRIS, JOSEPH PATRICK
Art Unit: 2188
Tech Center: 2100 — Computer Architecture & Software
Assignee: Toyota Research Institute, Inc.
OA Round: 2 (Final)
Grant Probability: 27% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 6m
With Interview: 77%

Examiner Intelligence

Career Allow Rate: 27% (4 granted / 15 resolved; -28.3% vs TC avg)
Interview Lift: +50.0% across resolved cases with interview
Avg Prosecution: 4y 6m typical timeline; 34 applications currently pending
Total Applications: 49 across all art units

Statute-Specific Performance

§101: 30.9% (-9.1% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 21.3% (-18.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 15 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Claims 1-2, 4-10, 12-18, and 20 are presented for examination. This Office Action is in response to submission of documents on January 16, 2026.

Rejection of claims 1-2, 4-8, 10-18, and 20 under 35 U.S.C. 101 for being directed to unpatentable subject matter is maintained. Rejection of claims 1-2, 4-8, 10-18, and 20 under 35 U.S.C. 103 as being obvious over Melkumyan in view of Noack is maintained. A new rejection of claims 1-2, 4-8, 10-18, and 20 under 35 U.S.C. 112(a) is entered for failing to comply with the written description requirement.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Regarding rejection of the claims under 35 U.S.C. 101: Applicant asserts that “update an electronic data store based on the subset” is not a mental process. See Response at pg. 7. Examiner agrees. The step cannot be performed in the human mind. Thus, the limitation of “update an electronic data store based on the subset” is an additional element and must be analyzed further in Step 2A, Prong 2.

However, Applicant further asserts that “[t]he independent claims as amended recite more easily and accurately selecting a subset of the dataset…” and “at least one characteristic being one of a type, an outcome, or an uncertainty level of the one or more potential experiments. The claims are directed to a real-world application of predicting an outcome based on a prediction data point. When compared to current technologies, this invention provides a more accurate prediction of the outcome for the prediction data point, is less resource-intensive, and simplifies the process of predicting the outcome for the prediction data point.” Response at pg. 8.
Although the Applicant is asserting that the claims are directed to an improvement in technology (e.g., “this invention provides a more accurate prediction of the outcome for the prediction data point, is less resource-intensive, and simplifies the process”), Examiner disagrees that the claims recite such improvements. For example, the method can be performed in a manner that uses more resources and/or provides less accurate results. While the Examiner can appreciate that, in some instances, such improvements are realized, nothing in the claim requires steps and/or limitations that necessarily result in an improvement in technology. As analyzed herein, the limitation of “update an electronic data store based on the subset” is an idea of a solution. The limitation does not recite, with specificity, how the judicial exceptions are integrated into a practical application. Thus, the rejection of the pending claims is maintained.

Regarding rejection of the claims under 35 U.S.C. 103: Applicant asserts that Melkumyan does not teach nor disclose all of the limitations of the independent claims. While the Examiner agrees with this, Examiner disagrees with the assertion that Noack does not cure the deficiencies of Melkumyan. As further explained in the rejection below, all of the limitations of the independent claims are taught by the combination of Melkumyan and Noack.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-2, 4-10, 12-18, and 20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Independent claims 1, 9, and 17 recite “updat[ing] an electronic data store based on the subset.” However, the “data store,” as disclosed in at least [0029]-[0030] of the Specification, makes no mention of updating “based on the subset.” Accordingly, the submitted amendment is not supported by the Specification and therefore is rejected under 35 U.S.C. 112(a).

For the purposes of examination, the limitation “updat[ing] an electronic data store based on the subset” is interpreted as “an electronic data structure stored in the memory 220 or another data store, and that is configured with routines that can be executed by the processor 210 for analyzing stored data, providing stored data, organizing stored data, and so on. Thus, in one embodiment, the data store 240 stores data used by the control module 230 in executing various functions.” Spec. at [0029].
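As a purely illustrative picture of the Specification's characterization of the data store (an in-memory structure with routines for analyzing, providing, and organizing stored data), the sketch below uses invented names; it is not the applicant's code.

```python
# Hypothetical sketch of a "data store" as characterized at Spec. [0029]:
# an electronic data structure in memory, configured with routines for
# analyzing, providing, and organizing stored data. All names invented.
class DataStore:
    def __init__(self):
        self.records: list[float] = []

    def update(self, subset):
        """Illustrates 'update an electronic data store based on the subset'."""
        self.records = list(subset)

    def provide(self):
        """Routine for providing stored data."""
        return tuple(self.records)

store = DataStore()
store.update([1.0, 2.5])
```

Under the examiner's interpretation, any write to such a structure (e.g., persisting a trained model) would meet the limitation.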
Thus, any prior art that discloses a change to data stored in memory, including “training the model” as recited in claim 4, “refit the covariance function with the subset” as disclosed in claim 5, and/or “training a second model” as recited in claim 6, would teach the limitation.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to judicial exceptions without significantly more. The claims recite mathematical calculations and mental processes. This judicial exception is not integrated into a practical application because the additional elements that are recited in the claims are extra-solution activities that do not integrate the judicial exceptions into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because courts have found that recitation of generic computer components is not significantly more than the recited judicial exception.

Claim 1

Step 1: The claim is directed to a system, falling under one of the four statutory categories of invention.
Step 2A, Prong 1: Claim 1 recites:

A system comprising: a processor; and a memory storing machine-readable instructions that, when executed by the processor, cause the processor to: train a model on a dataset to learn a covariance function; determine a covariance between a selected data value and data values in the dataset using the covariance function; select a subset of the dataset, the subset including the data values that have a covariance value that meets or exceeds a predetermined threshold value; predict at least one characteristic of one or more potential experiments based on the subset, the at least one characteristic being one of a type, an outcome, or an uncertainty level of the one or more potential experiments; and update an electronic data store based on the subset.

The limitations are mapped to abstract ideas as follows:

“train a model on a dataset to learn a covariance function” (Abstract Idea: Mathematical Calculations): Training a model to learn a function includes setting and adjusting one or more parameters to result in a mathematical construct based on processing one or more data points. The process includes calculating one or more parameters based on mathematical functions to result in a model that operates based on a mathematical function. See MPEP § 2106.04(a)(2), Subsection I.

“determine a covariance between a selected data value and data values in the dataset using the covariance function” (Abstract Idea: Mathematical Calculations): The limitation is directed to utilizing a mathematical function to calculate a value (i.e., a covariance) between the value and one or more other values. See MPEP § 2106.04(a)(2), Subsection I.

“select a subset of the dataset, the subset including the data values that have a covariance value that meets or exceeds a predetermined threshold value” (Abstract Idea: Mental Process): The limitation is directed to an operation that can be performed by a human. For example, a human can review a covariance value for a number of points and, utilizing observation, opinion, and judgment, select one data point over another. See e.g., MPEP 2106.04(a)(2), Subsection III.

“predict at least one characteristic of one or more potential experiments based on the subset…” (Abstract Idea: Mental Process): Making a prediction is a mental process that requires observation, judgment, and opinion.
See e.g., MPEP 2106.04(a)(2), Subsection III.

Step 2A, Prong 2: The additional elements of claim 1 are mapped as follows:

The recited processor and memory storing machine-readable instructions: Reciting generic computer components is the additional element of instructions to apply the recited judicial exception, which courts have found does not integrate the judicial exception into a practical application. See MPEP 2106.05(f).

“update an electronic data store based on the subset”: Updating a database to reflect data determined using a judicial exception (mental process and/or mathematical concepts) is an idea of a solution that is not recited with specificity such that it integrates the judicial exception into a practical application and/or improves a technology. See MPEP 2106.05(f)(1).

Step 2B: Regarding Step 2B, the inquiry is whether any of the additional elements (i.e., the elements that are not the judicial exception) amount to significantly more than the recited judicial exception. The only additional elements are generic computer components, which courts have found do not amount to significantly more than the judicial exception, see MPEP 2106.05(f), Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014), Gottschalk v. Benson, 409 U.S.
63, 70, 175 USPQ 673, 676 (1972), Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 112 USPQ2d 1750 (Fed. Cir. 2014); Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016), and an idea of a solution, see Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). Accordingly, claim 1 is rejected for being directed to unpatentable subject matter.

Claim 2

Claim 2 recites wherein the selected data value is at least one of a data point or a cluster of data points. The limitation is directed to claiming the data value with more specificity and does not include additional elements beyond what is claimed in the parent claim. Accordingly, claim 2 does not include additional elements apart from those in claim 1 and is rejected for being directed to unpatentable subject matter.

Claim 4

Claim 4 recites train the model on the subset. Training a model includes setting and adjusting one or more parameters to result in a mathematical construct based on processing one or more data points. The process includes calculating one or more parameters based on mathematical functions to result in a model that operates based on a mathematical function. See MPEP § 2106.04(a)(2), Subsection I. Accordingly, claim 4 is rejected for being directed to unpatentable subject matter.

Claim 5

Claim 5 recites refit the covariance function with the subset. The limitation is a mathematical concept that includes utilizing one or more functions and/or calculations to adjust a function based on a selected subset. See MPEP 2106.04(a)(2), Subsection I. Accordingly, claim 5 is rejected for being directed to unpatentable subject matter.
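For readers outside the art, the limitations the rejection characterizes as mathematical calculations and mental processes (learning a covariance function, computing covariances against a selected value, and thresholding a subset) can be sketched as follows. This is a minimal, hypothetical Python illustration: the RBF kernel, the grid-search "training," and all data are invented for exposition and are not the applicant's implementation.

```python
import numpy as np

def rbf(a, b, length_scale):
    """RBF covariance function: k(a, b) = exp(-(a - b)^2 / (2 * l^2))."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def log_marginal_likelihood(x, y, length_scale, noise=1e-2):
    # GP log marginal likelihood (up to a constant); used to "train" the
    # covariance function by selecting its hyperparameter.
    K = rbf(x, x, length_scale) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum()

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 50))
y = np.sin(x) + 0.1 * rng.standard_normal(50)

# "train a model on a dataset to learn a covariance function":
# a crude grid search over the kernel length scale.
scales = np.linspace(0.1, 3.0, 30)
best = max(scales, key=lambda s: log_marginal_likelihood(x, y, s))

# "determine a covariance between a selected data value and data values
# in the dataset using the covariance function":
selected = np.array([5.0])
cov = rbf(selected, x, best)[0]

# "select a subset ... that have a covariance value that meets or exceeds
# a predetermined threshold value":
threshold = 0.5
subset = x[cov >= threshold]
print(f"length scale {best:.2f}: kept {len(subset)} of {len(x)} points")
```

In a real system the fitted model and subset would then drive the "predict ... characteristic of one or more potential experiments" and "update an electronic data store" steps discussed above.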
Claim 6

Claim 6 recites train a second model with the subset. Training a model includes setting and adjusting one or more parameters to result in a mathematical construct based on processing one or more data points. The process includes calculating one or more parameters based on mathematical functions to result in a model that operates based on a mathematical function. See MPEP § 2106.04(a)(2), Subsection I. Accordingly, claim 6 is rejected for being directed to unpatentable subject matter.

Claim 7

Claim 7 recites select the dataset from a larger dataset in a random manner. Randomly selecting a data point is a mental process that can be performed by a human using pencil and paper. For example, the user can select a data point at random (or based on an automatic random number generator) and designate the resulting value as a member of the subset. See MPEP 2106.04(a)(2), Subsection III. Accordingly, claim 7 is rejected for being directed to unpatentable subject matter.

Claim 8

Claim 8 recites wherein the model is a Gaussian process model. The limitation further specifies a type of model and does not include additional elements that integrate the judicial exception into a practical application. Accordingly, claim 8 is rejected for being directed to unpatentable subject matter.

Claim 9

Claim 9 recites a method that is substantially the same as the method performed by the system of claim 1. Accordingly, for at least the same reasons as claim 1, claim 9 is rejected under 35 U.S.C. 101 for being directed to unpatentable subject matter.

Claims 10 and 12-16

Claims 10 and 12-16 recite substantially the same limitations as claims 2 and 4-8. Accordingly, for at least the same reasons as claims 2 and 4-8, claims 10 and 12-16 are rejected under 35 U.S.C. 101 for being directed to unpatentable subject matter.
Claim 17

Claim 17 recites a non-transitory computer-readable medium including instructions that when executed by a processor cause the processor to perform a method that is substantially the same as the method of claim 1. The limitation of a non-transitory computer-readable medium is a generic computer component, which is the additional element of instructions to apply the recited judicial exception. Courts have found such additional elements do not integrate the judicial exception into a practical application and are not significantly more than the recited exception. See MPEP 2106.05(f), Alice Corp. v. CLS Bank, 573 U.S. 208, 221, 110 USPQ2d 1976, 1982-83 (2014), Gottschalk v. Benson, 409 U.S. 63, 70, 175 USPQ 673, 676 (1972), Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 112 USPQ2d 1750 (Fed. Cir. 2014); Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 119 USPQ2d 1739 (Fed. Cir. 2016). Accordingly, for at least the same reasons as claim 1, claim 17 is rejected under 35 U.S.C. 101 for being directed to unpatentable subject matter.

Claims 18 and 20

Claims 18 and 20 recite substantially the same limitations as claims 2 and 8. Accordingly, for at least the same reasons as claims 2 and 8, claims 18 and 20 are rejected under 35 U.S.C. 101 for being directed to unpatentable subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-10, 12-18, and 20 are rejected under 35 U.S.C. 103 as being obvious over Melkumyan, et al. (U.S. Pat. No. 8,849,622, hereinafter “Melkumyan”) in view of Noack, et al. (“Autonomous materials discovery driven by Gaussian process regression with inhomogeneous measurement noise and anisotropic kernels,” hereinafter “Noack”).

Claim 1

Melkumyan discloses:

A system comprising: a processor; and

The measurement sensor data generated by the sensors 230 is provided to a training processor 240 coupled to data storage 250. Melkumyan at col. 5, lines 17-18.

The computing system 100 comprises suitable components necessary to receive, store and execute appropriate computer instructions. The components may include a processing unit 102, read only memory (ROM) 104, random access memory (RAM) 106… Melkumyan at col. 4, lines 7-9.
a memory storing machine-readable instructions that, when executed by the processor, cause the processor to:

The training processor 240 is adapted to organise the sensor data and determine a non-parametric, probabilistic, multi-scale representation of the data for use in terrain modelling, which is stored in the data storage 250. Melkumyan at col. 5, lines 19-22.

The computing system 100 comprises suitable components necessary to receive, store and execute appropriate computer instructions. The components may include a processing unit 102, read only memory (ROM) 104, random access memory (RAM) 106… Melkumyan at col. 4, lines 7-9.

train a model on a dataset to learn a covariance function;

Training the GP for a given dataset is tantamount to optimising the hyperparameters of the underlying covariance function. This training can be done using machine learning. It can also be done manually, for example by estimating the values and performing an iterative fitting process. Melkumyan at col. 8, lines 56-61. “GP” is “Gaussian Process.”

determine a covariance between a selected data value and data values in the dataset using the covariance function;

For problems with thousands of observations, exact inference in normal GPs is intractable and approximation algorithms are required. Most of the approximation algorithms employ a subset of points to approximate the posterior distribution of a new point given the training data and hyperparameters. Melkumyan at col. 10, lines 28-33.
The “posterior distribution of a new point” is analogous to a covariance and “hyperparameters” are analogous to the “covariance function.” The “data values in the dataset” are analogous to “a subset of points.”

select a subset of the dataset, the subset including the data values that have a covariance value that meets or exceeds a predetermined threshold value;

These approximations rely on heuristics to select the subset of points, or use pseudo targets obtained during the optimization of the log-marginal likelihood of the model. Melkumyan at col. 10, lines 33-36. A “pseudo target” is analogous to a “predetermined threshold value.”

update an electronic data store based on the subset.

Training the GP for a given dataset is tantamount to optimising the hyperparameters of the underlying covariance function. This training can be done using machine learning. It can also be done manually, for example by estimating the values and performing an iterative fitting process. Melkumyan at col. 8, lines 56-61. “GP” is “Gaussian Process.” The original dataset includes the subset and therefore the step of training the model on the dataset includes training the model on the subset. The model is stored in an electronic data store.

Melkumyan does not appear to disclose:

predict at least one characteristic of one or more potential experiments based on the subset, the at least one characteristic being one of a type, an outcome, or an uncertainty level of the one or more potential experiments; and

Noack, which is analogous art, discloses:

predict at least one characteristic of one or more potential experiments based on the subset, the at least one characteristic being one of a type, an outcome, or an uncertainty level of the one or more potential experiments; and

Gaussian process regression (GPR) techniques have emerged as the method of choice for steering many classes of experiments.
We have recently demonstrated the positive impact of GPR-driven decision-making algorithms on autonomously-steered experiments at a synchrotron beamline. Noack at Abstract.

The success of GPR in steering experiments is due to its non-parametric nature; simply speaking, the more data that is gathered the more complicated the model function can become. The number of parameters of the function, and therefore its complexity, does not have to be defined a priori. This is in contrast to neural networks, which need a specification of an architecture (number of layers, layer width, activation function) beforehand. GPR also naturally includes uncertainty quantification, which is an absolute necessity in experimental sciences. Noack at pg. 3.

Noack is analogous art to the claimed invention because both are directed to autonomous design of experiments using a model, particularly a Gaussian process model. It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the application, to combine the training of a Gaussian process model, as disclosed in Melkumyan, with the design of experiments using a Gaussian process, as disclosed in Noack, to result in a system that trains a model using gathered data and predicts experiments using the trained model. Motivation to combine includes reducing the complexity of execution of the model by limiting the data required to predict experiment design, thereby reducing computation time and resources.

Claim 2

Melkumyan discloses:

wherein the selected data value is at least one of a data point

For problems with thousands of observations, exact inference in normal GPs is intractable and approximation algorithms are required. Most of the approximation algorithms employ a subset of points to approximate the posterior distribution of a new point given the training data and hyperparameters. Melkumyan at col. 10, lines 28-33.
Noack discloses:

wherein the selected data value is

For the GPR computations, the search space was restricted to 1.0 ≤ x ≤ 48.0 mm and 1.0 ≤ y ≤ 49.0 mm. The objective function used was described previously, given by Eq. (11)… Noack at pg. 12, paragraph 2.

Claim 4

Melkumyan discloses:

train the model on the subset.

Training the GP for a given dataset is tantamount to optimising the hyperparameters of the underlying covariance function. This training can be done using machine learning. It can also be done manually, for example by estimating the values and performing an iterative fitting process. Melkumyan at col. 8, lines 56-61. “GP” is “Gaussian Process.” The original dataset includes the subset and therefore the step of training the model on the dataset includes training the model on the subset.

Claim 5

Melkumyan does not appear to disclose:

refit the covariance function with the subset.

Noack discloses:

refit the covariance function with the subset.

The variance of real experimental measurements vary greatly across the parameter space, and this has to be reflected in the steering process as well as in the final model creation. For instance, in x-ray scattering experiments, the variance of a raw measurement depends strongly on the exposure time; computed quantities can have wildly different variances depending on the raw data in that part of the space (e.g. fit quality will not be uniform), and material heterogeneity will depend strongly on location within the parameter space. These inhomogeneities in the measurement noise need to be actively included in the final model to avoid interpolation mistakes and consequently erroneous models. Noack at pg. 2, paragraph 5.

“Actively including” inhomogeneities into the model is analogous to “refitting” the covariance function.

Claim 6

Melkumyan discloses:

train a second model with the subset.
The output 530 of the Gaussian process evaluation 520 is a digital elevation map/grid at the chosen resolution and region of interest together with an appropriate measure of uncertainty for every point in the map. The digital elevation map may be used as is or may be rapidly processed into a digital surface/terrain model and used thereafter for robotic vehicle navigation and the like in known fashion. Melkumyan at col. 6, line 63 - col. 7, line 2.

The “digital surface/terrain model” is a second model.

Claim 7

Melkumyan discloses:

select the dataset from a larger dataset in a random manner.

The inference set contains the points used to perform inference on the testing points. For each case the experiment is repeated 1500 times with randomly selected inference and testing sets. Melkumyan at col. 14, lines 37-40.

Claim 8

Melkumyan discloses:

wherein the model is a Gaussian process model.

In another embodiment the kernel machine uses a Gaussian learning process. Melkumyan at col. 2, lines 61-62.

Claims 9-10 and 12-16

Claims 9-10 and 12-16 recite a method that is substantially the same as the method performed by the system recited in claims 1-2 and 4-8. Accordingly, for at least the same reasons and based on the same prior art as claims 1-2 and 4-8, claims 9-10 and 12-16 are rejected under 35 U.S.C. 103 as being obvious over Melkumyan in view of Noack.

Claims 17-18 and 20

Claim 17 recites:

A non-transitory computer-readable medium including instructions that when executed by a processor

The computing system may include storage devices such as a disk drive 108 which may encompass solid state drives, hard disk drives, optical drives or magnetic tape drives. The computing system 100 may use a single disk drive or multiple disk drives. A suitable operating system 112 resides on the disk drive or in the ROM of the computing system 100 and cooperates with the hardware to provide an environment in which software applications can be executed. Melkumyan at col. 4, lines 27-34.
The claim further recites a method stored on the medium that is substantially the same as the method recited in claim 1. Claims 18 and 20 disclose substantially the same limitations as claims 2 and 8. Accordingly, for at least the same reasons and based on the same prior art as claims 1-2 and 8, claims 17-18 and 20 are rejected under 35 U.S.C. 103 as being obvious over Melkumyan in view of Noack.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Srinivasan, et al., “Efficient subset selection via the kernelized Renyi distance.”
Duplyakin, et al., “Active Learning in Performance Analysis.”
Yeh, et al., “An Empirical Study of the Sample Size Variability of Optimal Active Learning Using Gaussian Process Regression.”
Kloppenburg, U.S. Pat. No. 10,402,739.
Abdolshah, et al., WIPO App. No. 2022/051794.
Middlebrooks, et al., U.S. Pat. Pub. No. 2021/0286270.
Qian, et al., “Gaussian Process Models for Computer Experiments With Qualitative and Quantitative Factors.”

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Communication

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH MORRIS whose telephone number is (703) 756-5735. The examiner can normally be reached M-F 8:30-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ryan Pitaro, can be reached at (571) 272-4071. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH P MORRIS/
Examiner, Art Unit 2188

/RYAN F PITARO/
Supervisory Patent Examiner, Art Unit 2188

Prosecution Timeline

Aug 16, 2022 — Application Filed
Oct 03, 2025 — Non-Final Rejection — §101, §103, §112
Jan 13, 2026 — Examiner Interview Summary
Jan 13, 2026 — Applicant Interview (Telephonic)
Jan 16, 2026 — Response Filed
Feb 26, 2026 — Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579465 — ESTIMATING RELIABILITY OF CONTROL DATA — granted Mar 17, 2026 (2y 5m to grant)
Patent 12560921 — MACHINE LEARNING PLATFORM FOR SUBSTRATE PROCESSING — granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 27% (77% with interview, +50.0%)
Median Time to Grant: 4y 6m
PTA Risk: Moderate
Based on 15 resolved cases by this examiner. Grant probability derived from career allow rate.
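The headline figures follow from simple arithmetic over the examiner's resolved cases. A minimal sketch of how an allow rate and interview lift could be computed; the per-case records below are invented to mirror the "4 granted / 15 resolved" headline and do not reproduce the dashboard's exact lift figure or data source.

```python
# Hypothetical resolved-case records (granted/interview flags invented).
cases = (
    [{"granted": True, "interview": True}] * 3
    + [{"granted": False, "interview": True}]
    + [{"granted": True, "interview": False}]
    + [{"granted": False, "interview": False}] * 10
)

resolved = len(cases)
granted = sum(c["granted"] for c in cases)
allow_rate = granted / resolved  # career allow rate

with_iv = [c for c in cases if c["interview"]]
without_iv = [c for c in cases if not c["interview"]]
rate_with = sum(c["granted"] for c in with_iv) / len(with_iv)
rate_without = sum(c["granted"] for c in without_iv) / len(without_iv)
lift = rate_with - rate_without  # interview lift, in percentage points

print(f"allow rate {allow_rate:.0%}, interview lift {lift:+.1%}")
```

With only 15 resolved cases, both figures carry wide confidence intervals, which is worth keeping in mind when reading the projections above.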
