Prosecution Insights
Last updated: April 19, 2026
Application No. 18/279,521

PREDICTION DEVICE, PREDICTION METHOD, AND RECORDING MEDIUM

Non-Final OA: §101, §103
Filed: Aug 30, 2023
Examiner: SINGH, AMRESH
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
OA Rounds: 1-2
To Grant: 3y 9m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 76% (463 granted / 610 resolved), above average (+20.9% vs TC avg)
Interview Lift: +22.0% (strong), comparing resolved cases with vs. without an interview
Typical timeline: 3y 9m avg prosecution; 32 currently pending
Career history: 642 total applications across all art units
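The headline figures above are simple arithmetic on the examiner's career record. As a hedged illustration (the dashboard's actual methodology is not disclosed), the grant probability appears to match the career allow rate, and the with-interview figure adds the interview lift in percentage points:

```python
# Illustrative sketch only (not the dashboard's actual model): reproducing
# the headline numbers from the examiner's career data shown on this page.

granted, resolved = 463, 610           # "463 granted / 610 resolved"
allow_rate = granted / resolved        # career allow rate as a fraction

interview_lift = 22.0                  # percentage-point lift with interview
base_grant_pct = round(allow_rate * 100)      # grant probability proxy
with_interview_pct = base_grant_pct + interview_lift

print(f"Career allow rate: {base_grant_pct}%")            # ~76%
print(f"With interview:    {with_interview_pct:.0f}%")    # ~98%
```

Note that 463/610 ≈ 75.9%, which rounds to the 76% shown, and adding the 22-point interview lift yields the 98% with-interview figure.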

Statute-Specific Performance

§101: 18.8% (-21.2% vs TC avg)
§103: 46.0% (+6.0% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 610 resolved cases

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-10 are presented for examination. Claims 1-3, 6-8 and 10 are amended. This is a Non-Final Action.

Claim Rejections - 35 U.S.C. §101

35 U.S.C. §101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. §101 as directed to an abstract idea without significantly more. With respect to independent claims 1, 9 and 10, claim 1 specifically recites "calculate a predicted value of a production volume of the well or a sand return amount of the well, based on the feature quantity." These limitations could reasonably and practically be performed in the human mind because they constitute a mental process of evaluating input data and determining an output based on that data, a process that can be performed in the human mind or with pen and paper. Accordingly, the claim recites an abstract idea. This judicial exception is not integrated into a practical application. At Step 2A, Prong Two, claims 1, 9 and 10 recite the additional elements "one or more processors," "non-transitory computer readable recording medium storing a program," "acquire…," "output…," and "machine learning model." These elements merely invoke a generic computer environment (processor, database, memory), perform basic data-gathering or outputting functions (MPEP 2106.05(f)), and generically apply a technical field (ML/AI) to the abstract idea, hence reciting insignificant extra-solution activity.
The claims do not recite any specific improvement to computer technology, a particular machine implementing the process in a non-generic manner, a transformation of an article to a different state or thing, or any other meaningful limitation that applies the abstract idea in a manner that imposes a meaningful limit on the claim. Instead, the additional elements simply apply the abstract idea using generic data processing operations, which amounts to implementing the mental process in a computer environment. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. At Step 2B, claims 1, 9 and 10 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained with respect to Step 2A, Prong Two, the additional elements recite a conventional computer executing data gathering and outputting, as well as the application of generic AI/ML technology. No element, individually or in combination, adds "significantly more" than the abstract idea; the elements are no more than well-understood, routine and conventional computer functions that merely apply the abstract idea on a generic computer. When viewed as an ordered combination, these additional elements do not integrate the abstract idea into a practical application and do not add significantly more than the abstract idea itself. Accordingly, claim 1 is ineligible under §101. Claims 2-8 are dependent claims and do not recite any additional elements that would amount to significantly more than the abstract idea. Specifically:

Claim 2.
With respect to Step 2A, Prong One, "selecting a linear prediction formula…," "calculating the predicted value…" and "output weight coefficient…" recite the abstract idea of mental steps (observation and evaluation). These limitations could reasonably and practically be performed in the human mind because they constitute a mental process of evaluating input data and determining an output based on that data, a process that can be performed in the human mind or with pen and paper. With respect to Step 2A, Prong Two, "the prediction device… wherein the machine learning model includes… one or more processors output…" recites additional elements of insignificant extra-solution activity. With respect to Step 2B, the recited insignificant extra-solution activity is recited at a high level of generality and is well-understood, routine and conventional, as taught by the prior art of record.

Claim 3. "Generating auxiliary information…" recites the abstract idea of mental steps (observation and evaluation), for the same reasons given above. With respect to Step 2A, Prong Two, "one or more processors… machine learning model…" recites additional elements of insignificant extra-solution activity. With respect to Step 2B, the recited insignificant extra-solution activity is recited at a high level of generality and is well-understood, routine and conventional, as taught by the prior art of record.

Claim 4. With respect to Step 2A, Prong Two, "wherein the auxiliary information is a feature quantity on which the prediction using the machine learning model is based or training data of the machine learning model on which the prediction is based" recites additional elements of insignificant extra-solution activity.
With respect to Step 2B, the recited insignificant extra-solution activity is recited at a high level of generality and is well-understood, routine and conventional, as taught by the prior art of record.

Claim 5. With respect to Step 2A, Prong Two, "wherein the auxiliary information is information representing the machine learning model by a decision tree or a rule model" recites additional elements of insignificant extra-solution activity; with respect to Step 2B, this activity is recited at a high level of generality and is well-understood, routine and conventional, as taught by the prior art of record.

Claim 6. With respect to Step 2A, Prong Two, "wherein the feature quantity includes information related to proppant used for the well" recites additional elements of insignificant extra-solution activity; the Step 2B analysis is the same as for claim 5.

Claim 7. With respect to Step 2A, Prong Two, "wherein the feature quantity includes information related to a fluid used for the well" recites additional elements of insignificant extra-solution activity; the Step 2B analysis is the same.

Claim 8. With respect to Step 2A, Prong Two, "wherein the machine learning model is trained using training data divided for each region" recites additional elements of insignificant extra-solution activity; the Step 2B analysis is the same.

Claims 9 and 10 are similar to claim 1 and are rejected for the same reasons.
Claim Rejections - 35 U.S.C. §103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 4 and 6-10 are rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (US 11,428,078) in view of Hiroshi et al. (WO 2019/130974, IDS, English translation provided).

1.
Sun teaches a prediction device comprising: a memory configured to store instructions; and one or more processors configured to execute the instructions to (Fig. 11 – teaches a system architecture, Sun): acquire a feature quantity related to a well of gas or oil (Col. 19, lines 25-42, Fig. 10 – teaches obtaining an input sequence of input data features associated with a well… comprising well production rates… and well operation constraints, disclosing acquiring feature quantities (input data features) related to a well, Sun); calculate a predicted value of a production volume of the well or a sand return amount of the well, based on the feature quantity, using a machine learning model (Col. 20, lines 22-28 – teaches building a well production model… using machine learning… generating a forecast… comprising a future well production rate, disclosing ML-based prediction of well production values, Sun); and output the predicted value (Abstract – teaches generating a forecast for the well… the forecast comprising a future well production rate, Sun).

Sun does not explicitly teach "…shale gas or shale oil" or "…a contribution degree of the feature amount to the predicted value." However, Sun teaches hydrocarbon (oil) well productivity (Abstract, Fig. 1, Col. 4, lines 32-39, Sun). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to apply the teachings of Sun to wells of shale gas or shale oil, as recited in the claims, because Sun broadly teaches forecasting hydrocarbon well productivity using input data features such as production rates and operational constraints without limitation to any particular reservoir type.
The recitation of "shale gas or shale oil" is considered a field-of-use limitation that does not impose any structural or functional distinction on the claimed device or method, as the claims do not require any shale-specific modeling, feature engineering or operational parameters. A person of ordinary skill in the art would have recognized that shale wells are a known subset of hydrocarbon wells and that applying Sun's machine learning prediction framework to such wells would have been a predictable use of prior art elements according to their established functions, yielding no unexpected results; therefore the claimed subject matter would have been obvious over Sun.

Hiroshi, however, teaches a contribution degree of the feature amount to the predicted value (Page 2, Claim 2 – teaches that the output information includes information indicating the contribution degree of the first feature amount… to the prediction results, Hiroshi). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to combine the machine-learning-based well prediction system of Sun with the feature contribution determination techniques of Hiroshi, because Sun teaches generating predicted well production values using input feature quantities, while Hiroshi teaches determining and outputting a contribution degree of feature quantities to a prediction result. A POSITA would have been motivated to incorporate the feature contribution analysis of Hiroshi into Sun's prediction system to improve interpretability, transparency, and user understanding of the prediction results, particularly in high-stakes industrial applications such as well production forecasting, where understanding the influence of input parameters is beneficial for decision making and optimization.
The combination would have amounted to the predictable use of prior art elements according to their established functions, namely applying known explainability techniques to known ML prediction systems, and would have yielded no unexpected results, thereby rendering the claimed invention obvious under §103.

3. The combination of Sun and Hiroshi teaches the prediction device according to claim 1, wherein the one or more processors are further configured to execute the instructions to generate auxiliary information indicating a basis of prediction using the machine learning model (Pages 1-2, Abstract and Claim 2 – teaches information indicating the contribution degree of the first feature amount… to the prediction result, disclosing generation of information explaining how input features affect the prediction, which constitutes auxiliary information indicating a basis of prediction (i.e., an explanation of why the prediction was made), Hiroshi), wherein the one or more processors output the auxiliary information as the contribution degree of the feature quantity (Page 2, Claim 2 – teaches that the output information includes information indicating the contribution degree of the first feature amount…, Hiroshi).

4.
The combination of Sun and Hiroshi teaches the prediction device according to claim 3, wherein the auxiliary information is a feature quantity on which the prediction using the machine learning model is based (Page 2, Claim 2 – teaches the contribution degree of the first feature amount… to the prediction result, thus disclosing that the output includes feature quantities contributing to the prediction, which constitutes auxiliary information identifying features on which the prediction is based, Hiroshi) or training data of the machine learning model on which the prediction is based (Abstract – teaches a training set… test data subset… building a well production model using machine learning, disclosing that the prediction model is trained using training data and that predictions are based on such trained models, thereby satisfying the limitation of training data on which the prediction is based, Sun).

6. The combination of Sun and Hiroshi teaches the prediction device according to claim 1, wherein the feature quantity includes information related to proppant used for the well (Col. 7, lines 11-24; Col. 9, lines 55-67 – teaches well completion / hydraulic fracturing parameters, including materials and operational inputs used in well production modeling, disclosing that prediction models for well production are based on input parameters related to well completion and fracturing operations, which inherently include materials used in hydraulic fracturing such as proppant. A POSITA would understand proppant to be a fundamental parameter in shale well operations affecting production performance, Sun).

7. The combination of Sun and Hiroshi teaches the prediction device according to claim 1, wherein the feature quantity includes information related to a fluid used for the well (Col. 10, lines 42-51 – teaches input constraints… such as… types of fluid systems (i.e., slick water… linear gel), thus disclosing the fluid system used in wells as an input parameter, Sun).

8. The combination of Sun and Hiroshi teaches the prediction device according to claim 1, wherein the machine learning model is trained using training data divided for each region (Col. 2, lines 45-48 – teaches that the system also may extract features and responses from the temporal data and the spatial data, and that the data may be divided into a training data subset, a validation data subset, and a test data subset, thus disclosing datasets from multiple locations (wells) and operational environments. A POSITA would recognize that such spatial data inherently corresponds to geographic locations or regions, and that well production characteristics vary significantly across different regions. It would have been obvious to organize or divide training data based on such regions to improve model accuracy and account for regional variability, thereby resulting in training data divided for each region, Sun).

Claims 9 and 10 are similar to claim 1 and are rejected for the same reasons. All limitations of claim 1 are taught above.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (US 11,428,078) in view of Hiroshi et al. (WO 2019/130974, IDS, English translation provided), further in view of Singh et al. (US 7,702,597).

2. The combination of Sun and Hiroshi teaches "…wherein the one or more processors output… as the contribution degree" (Page 2, Claim 2 – teaches that the output information includes information indicating the contribution degree of the first feature amount, Hiroshi).
The combination of Sun and Hiroshi does not explicitly teach "wherein the machine learning model includes a plurality of linear prediction formulas for calculating the predicted value, and conditions for selecting the linear prediction formula used to calculate the predicted value based on the feature quantity, wherein the one or more processors output a weight coefficient of the feature quantity in the linear prediction formula used to calculate the prediction value." However, Singh teaches a machine learning model that includes a plurality of linear prediction formulas for calculating the predicted value (Col. 6, lines 5-35 – teaches a piecewise linear regression method with break points (for crop yield < break point m; for crop yield > break point m), disclosing multiple linear equations, each used to calculate a predicted value under different conditions, Singh), and conditions for selecting the linear prediction formula used to calculate the predicted value based on the feature quantity (Fig. 1: S110; Col. 6, lines 5-35 – teaches identifying a piecewise linear empirical equation with at least one break point, disclosing break-point conditions used to select which linear equation applies. The break point is derived from data inputs (feature quantities such as NDVI, SM, rainfall), thus constituting feature-based selection of prediction formulas, Singh), wherein the one or more processors output a weight coefficient of the feature quantity (Col. 6, Table 1 (table of coefficients) – teaches coefficients associated with feature quantities, which correspond to weight coefficients for the feature quantity, Singh) in the linear prediction formula used to calculate the prediction value (Col. 6, lines 20-50 – teaches a piecewise linear empirical equation… which includes coefficients multiplied by features, thus disclosing that the coefficients are part of the linear prediction formulas used to compute predicted values, directly satisfying this limitation, Singh).
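The piecewise linear scheme mapped to Singh above (multiple linear formulas, a break-point condition that selects which formula applies, and per-feature weight coefficients) can be sketched as a toy example. All names, the break point, and the coefficients here are invented for illustration and are not taken from Singh:

```python
# Toy sketch of the claimed piecewise linear model: several linear
# formulas, a break-point condition selecting which formula applies based
# on a feature quantity, and per-feature weight coefficients that can be
# output alongside the prediction. Every number here is hypothetical.

BREAK_POINT = 0.5  # hypothetical break point on the first feature

# Each formula is (weights, intercept); the weights play the role of the
# "weight coefficient of the feature quantity" recited in claim 2.
FORMULAS = {
    "below_break": ([2.0, 0.8], 1.0),
    "above_break": ([0.5, 1.6], 3.0),
}

def predict(features):
    """Select a linear formula based on the feature quantity, then apply it."""
    key = "below_break" if features[0] < BREAK_POINT else "above_break"
    weights, intercept = FORMULAS[key]
    value = intercept + sum(w * x for w, x in zip(weights, features))
    # Return the predicted value plus which formula was used and its weights,
    # mirroring the claim's requirement to output the weight coefficients.
    return value, key, weights

value, formula, weights = predict([0.3, 2.0])  # 0.3 < 0.5 -> ~3.2 via "below_break"
print(value, formula, weights)
```

The design point the claim turns on is that formula selection is itself driven by the input features, so different regions of the feature space get different linear models.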
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to employ a plurality of linear prediction formulas with selection conditions based on feature quantities as taught by Singh, because Singh discloses a piecewise linear regression model including multiple linear equations and a condition (break point) for selecting which linear equation to use based on input data values. Singh further teaches determining coefficients associated with feature quantities in each linear equation, thereby corresponding to weight coefficients of the feature quantities. A POSITA would have been motivated to apply such piecewise linear modeling techniques to prediction systems, such as that of Sun, to improve prediction accuracy in heterogeneous datasets by applying different linear models under different feature conditions, which represents a predictable use of prior art elements according to their established functions. All limitations of claim 3 are taught above.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sun et al. (US 11,428,078) in view of Hiroshi et al. (WO 2019/130974, IDS, English translation provided), further in view of Sandepudi et al. (US 2020/0387835).

5. The combination of Sun and Hiroshi teaches wherein the auxiliary information is information representing the machine learning model (Page 2, Claim 2 – teaches that the output information includes information indicating the contribution degree of the first feature amount… to the prediction result, Hiroshi). The combination of Sun and Hiroshi does not explicitly teach "…by a decision tree or a rule model."
However, Sandepudi teaches representation by a decision tree (Fig. 3 – teaches accessing a machine learning classifier comprising a plurality of decision trees, disclosing a machine learning model represented using decision trees, Sandepudi) or a rule model (Paragraph 55 – teaches possible candidate rule #1…; Paragraph 66 – teaches identifying… candidate transaction classification rules, disclosing generation of classification rules derived from ML models, which constitutes a rule-model representation of the ML system, Sandepudi). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to modify the prediction system of Sun to include generation and output of auxiliary information presenting a basis for prediction as taught by Hiroshi, because Sun teaches predicting production values using a machine learning model based on input feature quantities, while Hiroshi teaches generating and outputting contribution information indicating how feature quantities affect a prediction result. A POSITA would have been motivated to incorporate such contribution or explanatory information into Sun's prediction system to improve the interpretability, transparency and usability of prediction outputs in decision-making contexts. Furthermore, it would have been obvious to present such auxiliary or contribution information using decision tree or rule-based representations as taught by Sandepudi, which discloses a machine learning classifier comprising decision trees and generating classification rules to express model behavior in an interpretable form.
A POSITA would have been motivated to express the contribution information of Hiroshi within the prediction system of Sun using such decision tree or rule-based structures to further enhance human interpretability and understanding of the model, which represents a predictable use of prior art elements according to their established functions and yields no unexpected results.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMRESH SINGH, whose telephone number is (571) 270-3560. The examiner can normally be reached Monday-Friday, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ann J. Lo, can be reached at (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/AMRESH SINGH/
Primary Examiner, Art Unit 2159

Prosecution Timeline

Aug 30, 2023
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591804: SYSTEMS AND METHODS FOR DISTRIBUTED LEARNING FOR WIRELESS EDGE DYNAMICS. Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585549: BACKING UP DATABASE FILES IN A DISTRIBUTED SYSTEM. Granted Mar 24, 2026 (2y 5m to grant)
Patent 12585715: SYSTEMS AND METHODS FOR INDEPENDENT AUDIT AND ASSESSMENT FRAMEWORK FOR AI SYSTEMS. Granted Mar 24, 2026 (2y 5m to grant)
Patent 12561572: METHOD FOR CALIBRATING PARAMETERS OF HYDROLOGY FORECASTING MODEL BASED ON DEEP REINFORCEMENT LEARNING. Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554774: GRAPH DATA LOADING. Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 76%
With Interview: 98% (+22.0%)
Median Time to Grant: 3y 9m
PTA Risk: Low
Based on 610 resolved cases by this examiner. Grant probability derived from career allow rate.
