Prosecution Insights
Last updated: April 19, 2026
Application No. 17/556,549

PREDICTING WELL PRODUCTION BY TRAINING A MACHINE LEARNING MODEL WITH A SMALL DATA SET

Status: Final Rejection (§103)
Filed: Dec 20, 2021
Examiner: LAHAM BAUZO, ALVARO SALIM
Art Unit: 2146
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Aramco Services Company
OA Round: 4 (Final)

Grant Probability: 33% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 33% (1 granted of 3 resolved cases; -21.7% vs TC avg)
Interview Lift: +100.0% across resolved cases with interview
Avg Prosecution: 3y 4m typical timeline; 27 applications currently pending
Total Applications: 30 across all art units (career history)

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101      32.4%        -7.6%
§103      44.3%        +4.3%
§102      7.3%         -32.7%
§112      16.0%        -24.0%
Black line = Tech Center average estimate • Based on career data from 3 resolved cases
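The per-statute deltas above are all consistent with a single implied Tech Center baseline of roughly 40%. A minimal sketch of that arithmetic, assuming that baseline (the report states only the deltas, not the baseline itself):

```python
# Assumed TC 2100 baseline, inferred from the chart deltas above; not an
# officially published figure.
TC_AVG = 40.0

examiner_rates = {"101": 32.4, "103": 44.3, "102": 7.3, "112": 16.0}

# Delta = examiner's statute-specific allow rate minus the TC baseline.
deltas = {s: round(r - TC_AVG, 1) for s, r in examiner_rates.items()}
# e.g. deltas["103"] reproduces the "+4.3% vs TC avg" figure shown above
```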

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendments

This Office Action is in response to the amendment filed on January 21, 2026. Claim(s) 1, 8, and 15 have been amended. No claims have been cancelled. No new claims have been added. The objections and rejections from the prior correspondence that are not restated herein are withdrawn.

Response to Arguments

Applicant's arguments filed on January 21, 2026 have been fully considered. Applicant's arguments regarding the 35 U.S.C. 103 rejections of the previous office action have been fully considered but are not persuasive. Applicant argues:

“Applicant respectfully asserts that Shelley, Bush and Zhang, whether considered separately or in combination, fail to teach, at least, the above-referenced limitation (i). The same is true for amended independent claims 8 and 15, which recite a substantially similar limitation. Shelley relates to the training of multiple ML models (i.e., multiple ANNs) using different combinations of training data parameters selected via genetic algorithms (e.g., Shelley, paragraphs [0058] and [0128]). Shelley makes no mention of averaging outputs of the multiple trained ML models and, therefore, cannot possibly teach averaging outputs of top-ranked trained ML models, as recited in limitation (i). Shelley, paragraph [0075]. Applicant notes that the set of completion parameters selected in Shelley is a multi-dimensional point, in contrast with limitation (i), which requires determining a range. Further contrasting with limitation (i), each set analyzed in Shelley includes multiple completion parameters. Shelley fails to consider an individual component of these sets, as required by limitation (i).
In particular, Shelley is silent with respect to computing a plurality of predicted well production values by varying an individual component, and determining a preferred range that includes a value of the individual component that optimizes a predicted well production, as recited in limitation (i). Thus, even if Shelley were assumed to describe averaging outputs of multiple top-ranked trained ML models, Shelley would still fail to teach limitation (i). [ … ] Thus, the variations of the estimated well bore production in Shelley cannot be attributed to an individual component in this set, and the preferred range from limitation (i) cannot be determined.” Examiner respectfully disagrees. The reference of SHELLEY alone teaches the limitation (i). SHELLEY [0028] discloses an embodiment wherein more than one “best” predictive model may be selected for determining completion parameters that optimize production. In this particular embodiment, each of the “best” predictive models is subjected to the sensitivity analysis described in SHELLEY [0064]. Additionally, SHELLEY [0064] teaches adding and/or subtracting a range of each input of the gathered data (i.e., geological, completion, and petrophysics information) while keeping the other inputs unchanged. This step is explicitly varying an individual component of geological, completion, or petrophysical data. Then, the output values of the neural network (i.e., more than one “best” predictive model) are averaged (i.e., averaging outputs of the models). SHELLEY [0068-0070] teaches using these predictive model output values to determine optimal hydraulic treatment parameters. Furthermore, SHELLEY [0068-0070] teaches a company proposing to stimulate well “A” with 40 hydraulic fracturing treatments, then using the predictive models to optimize production. The predictive models determined that “at about 30 hydraulic fracturing treatments” the benefits gain of increasing fracturing treatments becomes insignificant. 
Furthermore, SHELLEY [Fig. 18] discloses a graph showing how each hydraulic treatment relates to net present value (i.e., estimated best month Cumulative oil production and oil recovery). The company proposed 40 hydraulic treatments, but the graph shows that values between 30 and 35 hydraulic treatments would result in a better optimization than 40. The preferred range can be interpreted as the values that provide a better optimization of production over the proposed value. Specifically, SHELLEY teaches: computing, by averaging outputs of the plurality of top-ranked individually trained ML models, a plurality of predicted well production values by varying an individual component of the geological, completion, and petrophysical data (SHELLEY [0033] teaches: "the data gathering step 122 includes gathering or obtaining geological information". SHELLEY [0034] teaches: "the data gathering step 122 includes gathering or obtaining petrophysics information (i.e., petrophysical data)" SHELLEY [0031] teaches:"[ ... ] The data can come from a wide variety of information sources, such as drilling, geology, completion, stimulation, and production.” SHELLEY [0064] teaches: "In some instances, selection of a final neural network also includes conducting sensitivity analysis 232 on neural network predictions on the inputs. This type of analysis may reveal unrealistic, over-sensitive relationships between inputs and outputs due to the training of the neural network to match the provided dataset. In some instances, sensitivity analysis is performed by adding and/or subtracting a percentage (e.g., 5-20% in some implementations and 10% in some implementations) of the range of each input while keeping the rest of the inputs unchanged (i.e., by varying an individual component of the geological, completion, and petrophysical data). 
Then, the neural network is tested with each dataset and average values of outputs are calculated (i.e., computing by averaging outputs of the […] individually trained ML model a plurality of predicted well production values). By calculating the percentage of change in outputs, sensitivities are identified. To this end, an exemplary chart 270 showing the relative sensitivities of various parameters to production parameters is provided in FIG. 16. Further, the chart 270 also indicates the relative sensitivities of parameters that are controllable (e.g., well completion and hydraulic fracturing design parameters) and those that are non-controllable, reservoir defined properties (e.g., gas production and mud weight).” SHELLEY [0028] teaches: “Further, in some instances more than one "best" predictive model (i.e., the plurality of top-ranked individually trained ML models) may be identified for a particular field. In such instances, the results from each of the "best" predictive models may be taken into consideration in identifying the optimized well completion parameters.” Examiner’s note: SHELLEY [0028] teaches an embodiment where more than one “best” predictive model is used for identifying the optimized well completion parameters. The more than one “best” predictive models would go through the same sensitivity analysis process described in SHELLEY [0064] to identify a set of wellbore completion parameters from the plurality of available wellbore completion parameters that optimizes estimated wellbore production.) Applicant argues: “Adapting the individual-component setting from limitations (i) to the multi-component setting in Shelley would require varying each parameter within the set of parameters in Shelley, and performing multi-dimensional variation analysis of the estimated wellbore production with respect to all parameters combined. 
A person of ordinary skill in the art would have no motivation to perform such modification because the computational cost would increase exponentially with the number of components involved, rendering the modified approach impractical. Thus, modifying Shelley to arrive at limitation (i) would require transforming the selection method in Shelley into a sensitivity analysis. A person of ordinary skill in the art would have no motivation to perform such modification without the benefit of Applicant's own disclosure as a guide. Accordingly, in view of the above, Shelley fails to render obvious limitation (i).” Examiner respectfully disagrees. Examiner relies on SHELLEY [0028], [0033-0034], [0064], [0068-0069], and [Fig. 18] for teaching limitation (i) of computing, by averaging outputs of the plurality of top-ranked individually trained ML models, a plurality of predicted well production values by varying an individual component of the geological, completion, and petrophysical data, and not on SHELLEY [0075]. Accordingly, applicant’s argument regarding modifying or adapting SHELLEY is unpersuasive because the cited portions of SHELLEY disclose the sensitivity analysis functionality, under broadest reasonable interpretation. Applicant argues that BUSH, ZHANG, and ZHAO fail to remedy the alleged defects of SHELLEY in teaching limitation (i). Applicant argues that due to the cited references not showing support for an obviousness rejection of amended claims 1, 8, and 15, respective dependent claims 4-7 and 11-14, and 17-20 are also patentable. Examiner respectfully disagrees. BUSH, ZHANG, and ZHAO are not relied upon for teaching limitation (i) for independent claim 1, which SHELLEY teaches as indicated above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1, 6-8, 13-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over SHELLEY (US 20140067353 A1) in view of BUSH (US 20030204311 A1) and ZHANG (US 20220129788 A1), hereafter SHELLEY, BUSH, and ZHANG respectively. Regarding Claim 1: SHELLEY teaches: A method for predicting well production of a reservoir, comprising: obtaining a training data set and a validation data set for training a machine learning (ML) model, (SHELLEY [0057] teaches: "To this end, in some instances data associated with a particular subset of wells in the available dataset are withheld as a testing group. Accordingly, in some instances the dataset is separated into a first set of data for training and a second, separate set of data for testing". Additionally, SHELLEY [0033-0034] teaches the data gathering step 122, which includes obtaining training data and testing data (i.e., validation data set).) wherein the ML model comprises an artificial neural network (ANN) that generates predicted well production data based on geological, completion, and petrophysical data of interest, (SHELLEY [0030] teaches: "In this regard, Artificial Neural Networks (ANN) and Genetic Algorithms (GA) are used for data modeling and finding hidden patterns within the large volumes of data." SHELLEY [0031] teaches: "[ ... 
] The data can come from a wide variety of information sources, such as drilling, geology, completion, stimulation, and production." SHELLEY [0033] teaches: "the data gathering step 122 includes gathering or obtaining geological information". SHELLEY [0034] teaches: "the data gathering step 122 includes gathering or obtaining petrophysics information". Examiner’s note: under BRI, “that generates predicted well production data” can be interpreted as “used for data modeling and finding hidden patterns within the large volumes of data”.) wherein the training data set and the validation data set comprise historical well production data and corresponding geological, completion, and petrophysical data; (SHELLEY [0031] teaches:"[ ... ] the method begins at step 122 with data gathering. Data related to the wells of a particular field/reservoir, both completed and uncompleted, are gathered from all the readily available sources. The initial data gathering step 122 is intended to capture all data that may be useful in evaluating the completed wells that can be used to predict or estimate the impact of various completion design parameters for a particular uncompleted well. The data can come from a wide variety of information sources, such as drilling, geology, completion, stimulation, and production." SHELLEY [0039] teaches: "In some instances, the data gathering step 122 includes gathering or obtaining production information 146, such as oil, water, and gas production rates, cumulative productions, water cut, production decline parameters, estimated ultimate recovery, and/or other information regarding production". Examiner’s note: under BRI, “historical well production data” can be interpreted as the obtained production information. Furthermore, SHELLEY teaches that the source of the training data is a particular field or reservoir.) 
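The data-withholding step the examiner cites from SHELLEY [0057] (a particular subset of wells held out as a testing group) can be sketched as a grouped split, so that no single well contributes records to both sets. This is an illustrative reading only; the record fields (`well_id`, `porosity`, `production`) are invented for the example, not taken from either document.

```python
import random

def split_by_well(records, test_fraction=0.2, seed=0):
    """Withhold a subset of wells as a testing group (the SHELLEY [0057]
    reading): every record for a held-out well goes to the test set, so no
    well appears in both training and testing data."""
    wells = sorted({r["well_id"] for r in records})
    rng = random.Random(seed)
    n_test = max(1, int(len(wells) * test_fraction))
    test_wells = set(rng.sample(wells, n_test))
    train = [r for r in records if r["well_id"] not in test_wells]
    test = [r for r in records if r["well_id"] in test_wells]
    return train, test

# Hypothetical per-well records, purely for illustration.
data = [{"well_id": w, "porosity": 0.10 + 0.01 * w, "production": 100.0 * w}
        for w in range(10)]
train_set, test_set = split_by_well(data)
```

Splitting by well rather than by row matters here because multiple records from the same well are correlated; a row-level split would leak information from the "testing group" wells into training.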
performing a sensitivity analysis comprising: (SHELLEY [0064] teaches: "In some instances, selection of a final neural network also includes conducting sensitivity analysis 232 on neural network predictions on the inputs.) computing, by averaging outputs of the plurality of top-ranked individually trained ML models, a plurality of predicted well production values by varying an individual component of the geological, completion, and petrophysical data (SHELLEY [0033] teaches: "the data gathering step 122 includes gathering or obtaining geological information". SHELLEY [0034] teaches: "the data gathering step 122 includes gathering or obtaining petrophysics information (i.e., petrophysical data)" SHELLEY [0031] teaches:"[ ... ] The data can come from a wide variety of information sources, such as drilling, geology, completion, stimulation, and production.” SHELLEY [0064] teaches: "In some instances, selection of a final neural network also includes conducting sensitivity analysis 232 on neural network predictions on the inputs. This type of analysis may reveal unrealistic, over-sensitive relationships between inputs and outputs due to the training of the neural network to match the provided dataset. In some instances, sensitivity analysis is performed by adding and/or subtracting a percentage (e.g., 5-20% in some implementations and 10% in some implementations) of the range of each input while keeping the rest of the inputs unchanged (i.e., by varying an individual component of the geological, completion, and petrophysical data). Then, the neural network is tested with each dataset and average values of outputs are calculated (i.e., computing by averaging outputs of the […] individually trained ML model a plurality of predicted well production values). By calculating the percentage of change in outputs, sensitivities are identified. 
To this end, an exemplary chart 270 showing the relative sensitivities of various parameters to production parameters is provided in FIG. 16. Further, the chart 270 also indicates the relative sensitivities of parameters that are controllable (e.g., well completion and hydraulic fracturing design parameters) and those that are non-controllable, reservoir defined properties (e.g., gas production and mud weight).” SHELLEY [0028] teaches: “Further, in some instances more than one "best" predictive model (i.e., the plurality of top-ranked individually trained ML models) may be identified for a particular field. In such instances, the results from each of the "best" predictive models may be taken into consideration in identifying the optimized well completion parameters.” Examiner’s note: SHELLEY [0068-0069] teaches a company proposing to stimulate well “A” with 40 hydraulic fracturing treatments, then using the predictive models to optimize production. The model determined that “at about 30 hydraulic fracturing treatments” the benefits gain of increasing fracturing treatments becomes insignificant. Furthermore, SHELLEY [Fig. 18] discloses a graph showing how each hydraulic treatment relates to net present value. The company proposed 40 hydraulic treatments, but the graph shows that values between 30 and 35 hydraulic treatments would result in a better optimization than 40. The preferred range can be interpreted as the values that provide a better optimization of production over the proposed value. Moreover, SHELLEY [0028] teaches an embodiment where more than one “best” predictive model is used for identifying the optimized well completion parameters. The more than one “best” predictive models would go through the same sensitivity analysis process described in SHELLEY [0064] to identify a set of wellbore completion parameters from the plurality of available wellbore completion parameters that optimizes estimated wellbore production.) 
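The examiner's reading of SHELLEY [0028], [0064], and [0068-0069] (vary one input while holding the rest fixed, average the outputs of several "best" models, and stop where the marginal production gain becomes insignificant) can be illustrated in miniature. Everything below (the toy models, the `min_gain` threshold, the numbers) is invented for illustration and is not code or data from the application or the references:

```python
def sensitivity_sweep(models, base_inputs, component, values):
    """Vary a single component of the input vector and, for each setting,
    average the predictions of all top-ranked models."""
    results = []
    for value in values:
        inputs = dict(base_inputs, **{component: value})
        avg = sum(m(inputs) for m in models) / len(models)
        results.append((value, avg))
    return results

def preferred_range(results, min_gain):
    """Keep settings up to the point where the incremental gain per step
    becomes insignificant (the 30-vs-40-treatment reading of SHELLEY [0069])."""
    keep = [results[0][0]]
    for (_, prev_p), (v, p) in zip(results, results[1:]):
        if p - prev_p < min_gain:
            break
        keep.append(v)
    return min(keep), max(keep)

# Toy "top-ranked" models with diminishing returns in treatment count;
# the per-model offsets stand in for disagreement between trained networks.
models = [lambda x, b=b: 27000.0 * (1 - 0.9 ** x["n_treatments"]) + b
          for b in (0.0, 50.0, -50.0)]
base = {"n_treatments": 40}
sweep = sensitivity_sweep(models, base, "n_treatments", range(10, 41, 5))
lo, hi = preferred_range(sweep, min_gain=500.0)
```

With these toy numbers the marginal gain drops below the threshold past 30 treatments, mirroring the "reduced from 40 to 30" example the rejection cites.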
determining, based on the plurality of predicted well production values, a preferred range that includes a value of the individual component that optimizes a predicted well production; (SHELLEY [0069] teaches: "Further, the predictive model indicated that the number of hydraulic fracturing stages (i.e., based on the plurality of predicted well production values) could be reduced from 40 to 30 without significant loss of production (i.e., determining […] a preferred range that includes a value of the individual component that optimizes a predicted well production). FIG. 17 shows a graph 280 that plots the estimated production and recovery predictions made by the predictive model using the reduced treatment volumes for various numbers of hydraulic fracturing stages. As can be seen, at about 30 hydraulic fracturing treatments, the incremental best month oil cumulative production gain for each additional hydraulic fracturing treatment becomes insignificant. For example, the predictive model estimated that with 30 hydraulic fracturing treatments, a best month oil production of 27,213 BBL and estimated ultimate recovery ("EUR") of 854,550 BBL, while 40 hydraulic fracturing treatments would only obtain a slight increase to a best month oil production of 27,798 BBL and EUR of 860,967 BBL." Examiner’s note: Number of hydraulic fracture stages is a completion parameter and 30 hydraulic fracturing treatments is the individual value of the completion parameter.) generating, using values of the geological, completion, and petrophysical data of interest based on the preferred range as input to the plurality of top-ranked individually trained ML models, a plurality of individual predicted well production data; (SHELLEY [0033] teaches: "the data gathering step 122 includes gathering or obtaining geological information". 
SHELLEY [0034] teaches: "the data gathering step 122 includes gathering or obtaining petrophysics information (i.e., petrophysical data of interest)" SHELLEY [0031] teaches:"[...] The data can come from a wide variety of information sources, such as drilling, geology, completion, stimulation, and production.” SHELLEY [0075] teaches: "[ ... ] and applying the gathered data (i.e., using values of the geological, completion, and petrophysical data of interest) regarding the uncompleted wellbore and a plurality of available wellbore completion parameters to the best predictive model (i.e., as input to the plurality of top-ranked individually trained ML models) to identify (i.e., generating) a set of wellbore completion parameters from the plurality of available wellbore completion parameters that optimizes estimated wellbore production (i.e., a plurality of individual predicted well production data) based on the best predictive model;" SHELLEY [0077] teaches: “The set of wellbore completion parameters that optimizes estimated wellbore production includes a total number of hydraulic fractures (i.e., based on the preferred range) in some embodiments.” Examiner’s note: Number of hydraulic fracture stages is a completion parameter and 30 hydraulic fracturing treatments is the individual value of the completion parameter. Additionally, SHELLEY [0028] teaches an embodiment where more than one “best” predictive model is used for identifying the optimized well completion parameters. The more than one “best” predictive models would be used to identify a set of wellbore completion parameters from the plurality of available wellbore completion parameters that optimizes estimated wellbore production.) However, SHELLEY is not relied upon for teaching, but BUSH teaches: generating a plurality of sets of initial guesses of model parameters of the ML model (BUSH [0126] teaches: "[…] Before the network is trained, random values are selected for each of the weights." 
Examiner's note: under BRI, the randomly generated weights can be interpreted as the random values selected for each of the weights.) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHELLEY and BUSH before them, to include BUSH's random value selection for weights and drilling location determination in SHELLEY's optimization method. One would have been motivated to make such a combination in order to improve the recovery of hydrocarbon because neural networks can adjust these weights automatically, and thus they do not require that the weights be known a priori. (BUSH [0187] and [0127]). SHELLEY in view of BUSH teaches: wherein each of the plurality of sets of initial guesses of model parameters of the ML model comprises randomly generated weights associated with connections between neural nodes of the ANN; (SHELLEY [0058] teaches: "Part of the data modeling step 128 is determining what combination of inputs should be used to construct the neural network. In some instances, the input combinations are determined in two parts. First, an initial combination of data parameters is selected in such a way as to represent geology, wellbore condition, completion, stimulation, and fluid properties. Then, by the use of one or more genetic algorithms 222 thousands of neural networks are trained with different combinations of inputs." Examiner's note: while SHELLEY is silent regarding the specific way in which model parameters (i.e., weights) are generated for the neural network, SHELLEY [0057] teaches: “In the training stage, the inputs and desired outputs are fed to the neural network and, through the application of a learning algorithm, prediction errors are minimized to the extent possible for each neural network. 
In this regard, hundreds, thousands, tens of thousands, and/or more neural networks, which may also be referred to as predictive models, are generated based on the obtained, screened, and processed dataset.” A person having ordinary skill in the art can apply BUSH’s random weight generation process to SHELLEY’s neural network prior to beginning the learning process because neural networks can adjust these weights automatically and thus they do not require that the weights be known a priori (see BUSH [0127]).) generating, using an ML algorithm applied to the training data set, a plurality of individually trained ML models, wherein each individually trained ML model is generated based on a different one of the plurality of sets of guesses of initial model parameters […] (SHELLEY [0055] teaches: "Referring again to FIG. 3, with the data screened and processed, the method 120 continues to step 128 where data modeling is performed, which leads to model selection at step 130 and then model application at step 132. In this regard, at the data modeling stage 128, neural networks are trained and tested thousands of times with the help of genetic algorithms to generate a plurality of predictive models." SHELLEY [0058] teaches: "Part of the data modeling step 128 is determining what combination of inputs should be used to construct the neural network. In some instances, the input combinations are determined in two parts. First, an initial combination of data parameters is selected in such a way as to represent geology, wellbore condition, completion, stimulation, and fluid properties. Then, by the use of one or more genetic algorithms 222 thousands of neural networks are trained with different combinations of inputs." Additionally, BUSH [0126] teaches selecting random weights before training the neural network (i.e., plurality of initial model parameters).) 
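The combination the rejection describes (an ensemble of networks trained on identical data, differentiated only by randomly generated initial weights, then ranked by validation error as in claims 7 and 14 and averaged) can be sketched in miniature. The network size, data, and seeds below are toy stand-ins, not the application's or the references' actual models:

```python
import math
import random

def train_ann(data, seed, hidden=4, epochs=300, lr=0.05):
    """One individually trained model: identical training data for every
    model (the ZHANG [0045] reading), differentiated only by randomly
    generated initial weights (the BUSH [0126] reading)."""
    rng = random.Random(seed)
    w1 = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(hidden)]  # (weight, bias)
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = rng.uniform(-1, 1)
    for _ in range(epochs):                       # plain SGD on squared error
        for x, y in data:
            h = [math.tanh(w * x + b) for w, b in w1]
            err = sum(v * a for v, a in zip(w2, h)) + b2 - y
            b2 -= lr * err
            for i, (w, b) in enumerate(w1):
                grad_h = err * w2[i] * (1 - h[i] ** 2)  # backprop through tanh
                w2[i] -= lr * err * h[i]
                w1[i] = (w - lr * grad_h * x, b - lr * grad_h)

    def predict(x):
        h = [math.tanh(w * x + b) for w, b in w1]
        return sum(v * a for v, a in zip(w2, h)) + b2

    return predict

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# Same toy data for every model; only the seed (initial weights) differs.
train = [(x / 10, math.sin(x / 10)) for x in range(20)]
valid = [(x / 10 + 0.05, math.sin(x / 10 + 0.05)) for x in range(20)]
models = [train_ann(train, seed) for seed in range(6)]
ranked = sorted(models, key=lambda m: mse(m, valid))  # rank by validation MSE

top = ranked[:3]  # select the top-ranked individually trained models
def ensemble(x):
    return sum(m(x) for m in top) / len(top)  # average their outputs
```

After training, the models' weights differ only because their initial random guesses differed, which is the sense in which the claim's "differentiated by the randomly generated weights" is being read here.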
generating, by comparing the validation data set and respective predicted well production data of the plurality of individually trained ML models, a ranking of the plurality of individually trained ML models […]; (SHELLEY [0055] teaches:"[ ... ] In some instances, final model selection is done at step 130 based on two main criteria: performance of each predictive model, and engineering validation derived from the process knowledge.") selecting, based on the ranking, a plurality of top-ranked individually trained ML models […]; (SHELLEY [0028] teaches:"[ ... ] Further, in some instances more than one "best" predictive model may be identified for a particular field. In such instances, the results from each of the "best" predictive models may be taken into consideration in identifying the optimized well completion parameters." Examiner's note: more than one "best" predictive model inherently describes a ranking among the models to select the ones that perform the best.") […] individually trained ML models that are differentiated by the randomly generated weights of the ANN (BUSH [0126] teaches: "[…] Before the network is trained, random values are selected for each of the weights." Examiner’s note: under BRI, an “individually trained model” can be interpreted as an individual network that has random values selected for each of the weights before training. The “differentiated by the randomly generated weights” refers to the initial set of guesses before the ML models were trained. At the time of having the generated individually trained ML model, the weights will be different (i.e., optimized or updated) from the initial weights that were set prior to training. Moreover, BUSH [0127] teaches: “Neural networks are superior to conventional statistical models for certain tasks because neural networks can adjust these weights automatically, and thus they do not require that the weights be known a priori.
Thus, neural networks are capable of building the structure of the relationship (or model) between the input data and the output data by adjusting the weights […]”.) generating, based on the plurality of individual predicted well production data, a final predicted well production data. (SHELLEY [0075] teaches: "[ ... ] and applying the gathered data regarding the uncompleted wellbore and a plurality of available wellbore completion parameters to the best predictive model to identify a set of well bore completion parameters from the plurality of available wellbore completion parameters that optimizes estimated wellbore production based on the best predictive model; and completing the uncompleted wellbore based on the set of wellbore completion parameters identified using the best predictive model.") Examiner's note: under BRI, "plurality of individual predicted well production data" can be interpreted as a plurality of available wellbore completion parameters, and "a final predicted well production data" can be interpreted as the wellbore completion parameters identified using the best predictive model.) performing, at a location identified based on the final predicted well production data, drilling operation of a new well to extract hydrocarbon from the reservoir. (BUSH [0044] teaches: "using a neural network and the collected seismic data to determine one or more optimal locations for an offset well; drilling an offset well in a determined location; and using the offset well for an enhanced hydrocarbon recovery process.") SHELLEY in view of BUSH is not relied upon for teaching, but ZHANG teaches: and identical training data from the training data set; (ZHANG [0039] teaches: “An ensemble of machine learning models 120 is trained based on training data parameters 110.” ZHANG [0045] teaches: “[...] 
In some embodiments, all of models 202A, 202B, 202C, and 202D have been trained using the same training data.”) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHELLEY, BUSH, and ZHANG before them, to include ZHANG’s training ensemble of machine learning models using the same training data in SHELLEY/BUSH’s optimization method. One would have been motivated to make such a combination in order to provide companies with the ability for improved decision making regarding whether a given area is worthy of development, deciding on a well placement and well path, adjusting drilling mud weights or the design of casing strings to avoid wellbore instability events, and optimizing well completion designs, (ZHANG [0030]). Regarding Claim 6: SHELLEY in view of BUSH and ZHANG teaches the elements of Claim 1 as outlined above. SHELLEY further teaches: The method of claim 1, wherein the ML algorithm is applied to the training data set to generate a set of trained model parameters for each of the plurality of individually trained ML models. (SHELLEY [0061] teaches: "As an initial step in identifying a best or final neural network, the available neural networks are compared to a set of desired performance criteria. In some instances, the mean squared errors of the networks are compared." Examiner's note: under broadest reasonable interpretation, "set of desired performance criteria" can be interpreted as a validation data set.) Regarding Claim 7: SHELLEY in view of BUSH and ZHANG teaches the elements of Claim 1 as outlined above. SHELLEY further teaches: The method of claim 1, wherein generating the ranking of the plurality of individually trained ML models is based on a loss function representing a mean squared error (MSE) between the validation data set and respective predicted well production data of the plurality of individually trained ML models. 
(SHELLEY [0061] teaches: "As an initial step in identifying a best or final neural network, the available neural networks are compared to a set of desired performance criteria. In some instances, the mean squared errors of the networks are compared." Examiner's note: under broadest reasonable interpretation, "set of desired performance criteria" can be interpreted as a validation data set.)

Regarding Claim 8: The claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. Additionally, BUSH teaches: wherein a drilling operation of a new well is performed, at a location identified based on the final predicted well production data, to extract hydrocarbon from the reservoir. (BUSH [0044] teaches: "using a neural network and the collected seismic data to determine one or more optimal locations for an offset well; drilling an offset well in a determined location; and using the offset well for an enhanced hydrocarbon recovery process.") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHELLEY and BUSH before them, to include BUSH's random value selection for weights and drilling location determination in SHELLEY's optimization method. One would have been motivated to make such a combination in order to improve the recovery of hydrocarbon because neural networks can adjust these weights automatically, and thus they do not require that the weights be known a priori (BUSH [0187] and [0127]).

Regarding Claim 13: SHELLEY in view of BUSH and ZHANG teaches the elements of Claim 8 as outlined above. SHELLEY further teaches: wherein the ML algorithm is applied to the training data set to generate a set of trained model parameters for each of the plurality of individually trained ML models. (SHELLEY [0055] teaches: "Referring again to FIG. 3, with the data screened and processed, the method 120 continues to step 128 where data modeling is performed, which leads to model selection at step 130 and then model application at step 132. In this regard, at the data modeling stage 128, neural networks are trained and tested thousands of times with the help of genetic algorithms to generate a plurality of predictive models.")

Regarding Claim 14: SHELLEY in view of BUSH and ZHANG teaches the elements of Claim 8 as outlined above. SHELLEY further teaches: The analysis and modeling engine of claim 8, wherein generating the ranking of the plurality of individually trained ML models is based on a loss function representing a mean squared error (MSE) between the validation data set and respective predicted well production data of the plurality of individually trained ML models. (SHELLEY [0061] teaches: "As an initial step in identifying a best or final neural network, the available neural networks are compared to a set of desired performance criteria. In some instances, the mean squared errors of the networks are compared." Examiner's note: under broadest reasonable interpretation, "set of desired performance criteria" can be interpreted as a validation data set.)

Regarding Claim 15: The claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale.
Additionally, SHELLEY teaches: A system for predicting well production of a reservoir, the system comprising: (SHELLEY [0039] teaches: “To that end, in some instances one or more of the outputs of the predictive models of the present disclosure include estimated production values, such as total production, peak production, average production, and/or other production parameters over a set amount of time (e.g., per day, per week, per month, per quarter, per year, or otherwise).”)

a data repository storing a training data set and a validation data set for training a machine learning (ML) model, wherein the training data set and the validation data set comprise historical well production data and corresponding geological, completion, and petrophysical data; (SHELLEY [0078] teaches: “The system includes non-transitory, computer readable medium having a plurality of instructions stored thereon for executing the following steps: receiving data regarding a plurality of completed wellbores in a field; receiving data regarding an uncompleted wellbore in the field; utilizing the gathered data regarding the plurality of completed wellbores in the field to define a plurality of predictive models, each of the plurality of predictive models providing an estimate of wellbore production based on the gathered data regarding the plurality of completed wellbores; […]”. SHELLEY Fig. 4 teaches the data gathering step 122, which obtains geological data, petrophysics data, drilling data, completion data, stimulation data, reservoir data, and production data. SHELLEY [0031] teaches: "[…] The data can come from a wide variety of information sources, such as drilling, geology, completion, stimulation, and production." SHELLEY [0033] teaches: "the data gathering step 122 includes gathering or obtaining geological information". SHELLEY [0034] teaches: "the data gathering step 122 includes gathering or obtaining petrophysics information".
SHELLEY [0041] teaches: “The outcome of the data gathering process is a dataset that includes a list of wells with their respective attributes (data types).” SHELLEY [0057] teaches: “Accordingly, in some instances the dataset is separated into a first set of data for training and a second, separate set of data for testing.” Examiner’s note: under BRI, “a data repository” can be interpreted as the non-transitory, computer readable medium having a plurality of instructions stored thereon for executing the data gathering steps, such as step 122. The “training data set” and the “validation data set” can be interpreted as the dataset that is separated into a first set of data for training and a second, separate set of data for testing.)

Regarding Claim 19: SHELLEY in view of BUSH and ZHANG teaches the elements of Claim 15 as outlined above. SHELLEY further teaches: wherein the ML algorithm is applied to the training data set to generate a set of trained model parameters for each of the plurality of individually trained ML models. (SHELLEY [0055] teaches: "Referring again to FIG. 3, with the data screened and processed, the method 120 continues to step 128 where data modeling is performed, which leads to model selection at step 130 and then model application at step 132. In this regard, at the data modeling stage 128, neural networks are trained and tested thousands of times with the help of genetic algorithms to generate a plurality of predictive models.") wherein each of the plurality of sets of initial guesses of model parameters of the ML model comprises randomly generated model parameter values, and (BUSH [0126] teaches: "[…] Before the network is trained, random values are selected for each of the weights.")

Regarding Claim 20: SHELLEY in view of BUSH and ZHANG teaches the elements of claim 15 as outlined above.
Additionally, the claim recites similar limitations as corresponding claims 7 and 14 and is rejected for similar reasons as claims 7 and 14 using similar teachings and rationale.

Claims 4-5, 11-12, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over SHELLEY in view of BUSH and ZHANG as applied respectively above to claims 1, 8, and 15, and further in view of ZHAO (“Regional to Local Machine-Learning Analysis for Unconventional Formation Reserve Estimation: Eagle Ford Case Study”), hereafter ZHAO.

Regarding Claim 4: SHELLEY in view of BUSH and ZHANG teaches the elements of Claim 1 as outlined above. SHELLEY further teaches: wherein the training data set comprises historical well production data and corresponding geological, completion, and petrophysical data that are obtained from less than 100 production wells of the reservoir. (SHELLEY [0028] teaches: "[…] However, the methods and systems of the present disclosure are suitable for use with large or small data sets (e.g., ten or fewer wells for a field).") SHELLEY in view of BUSH and ZHANG is not relied upon to teach the remaining limitation; however, ZHAO teaches: The method of claim 1, wherein the reservoir is a tight reservoir; and (ZHAO [pg. 1, Abstract] teaches: "Unconventional tight reservoirs currently make up more than 60% of domestic oil and gas production in the United States. […] Therefore, this work aimed to leverage machine-learning techniques with big data to analyze the multivariant relationship of geological and engineering parameters with unconventional reservoir production and to improve the prediction of estimated ultimate recovery in unconventional formations.") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHELLEY, BUSH, ZHANG, and ZHAO before them, to incorporate ZHAO’s tight reservoirs into SHELLEY/BUSH/ZHANG’s optimization method.
One would have been motivated to make such a combination in order to "improve the prediction of estimated ultimate recovery in unconventional formations." (ZHAO [pg. 1, Abstract]).

Regarding Claim 5: SHELLEY in view of BUSH and ZHANG teaches the elements of Claim 1 as outlined above. SHELLEY in view of BUSH and ZHANG is not relied upon to teach the following limitation; however, ZHAO teaches: The method of claim 1, wherein generating the final predicted well production data comprises averaging the plurality of individual predicted well production data. (ZHAO [pg. 7, Trend Model (Regional Inference)] teaches: "Then, the final prediction is obtained by averaging all the predictions trained by bootstrapped subsets.”) Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHELLEY, BUSH, ZHANG, and ZHAO before them, to incorporate ZHAO’s averaging of all predictions into SHELLEY/BUSH/ZHANG’s optimization method. One would have been motivated to make such a combination in order to reduce model variance (ZHAO [pg. 7, Trend Model (Regional Inference)]).

Regarding Claim 11: SHELLEY in view of BUSH and ZHANG teaches the elements of claim 8 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale.

Regarding Claim 12: SHELLEY in view of BUSH and ZHANG teaches the elements of claim 8 as outlined above. Additionally, the claim recites similar limitations as corresponding claim 5 and is rejected for similar reasons as claim 5 using similar teachings and rationale.

Regarding Claim 17: SHELLEY in view of BUSH and ZHANG teaches the elements of claim 15 as outlined above. Additionally, the claim recites similar limitations as corresponding claims 4 and 11 and is rejected for similar reasons as claims 4 and 11 using similar teachings and rationale.
SHELLEY in view of BUSH and ZHANG is not relied upon to teach the following limitation; however, ZHAO teaches: a tight reservoir; (ZHAO [pg. 1, Abstract] teaches: "Unconventional tight reservoirs currently make up more than 60% of domestic oil and gas production in the United States. […] Therefore, this work aimed to leverage machine-learning techniques with big data to analyze the multivariant relationship of geological and engineering parameters with unconventional reservoir production and to improve the prediction of estimated ultimate recovery in unconventional formations.") Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of SHELLEY, BUSH, ZHANG, and ZHAO before them, to incorporate ZHAO’s tight reservoirs into SHELLEY/BUSH/ZHANG’s optimization method. One would have been motivated to make such a combination in order to "improve the prediction of estimated ultimate recovery in unconventional formations." (ZHAO [pg. 1, Abstract]).

Regarding Claim 18: SHELLEY in view of BUSH and ZHANG teaches the elements of claim 15 as outlined above. Additionally, the claim recites similar limitations as corresponding claims 5 and 12 and is rejected for similar reasons as claims 5 and 12 using similar teachings and rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alvaro S Laham Bauzo, whose telephone number is (571) 272-5650. The examiner can normally be reached Mon-Fri 7:30 AM - 11:00 AM and 1:00 PM - 5:30 PM ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed, can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.S.L./
Examiner, Art Unit 2146

/USMAAN SAEED/
Supervisory Patent Examiner, Art Unit 2146
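Stripped of the legal posture, the technique the examiner maps across SHELLEY, BUSH, and ZHAO is a familiar ensemble pattern: train several models, rank them by mean squared error (MSE) on a held-out validation set, and average the predictions of the top-ranked models. A minimal Python sketch of that pattern, assuming models with a scikit-learn-style `predict` method; all names here (`rank_models_by_mse`, `averaged_prediction`, `top_k`) are illustrative, not drawn from the application or the cited references:

```python
import numpy as np

def rank_models_by_mse(models, X_val, y_val):
    """Rank trained models by MSE on a held-out validation set (lowest first)."""
    scored = [
        (float(np.mean((np.asarray(m.predict(X_val)) - np.asarray(y_val)) ** 2)), m)
        for m in models
    ]
    scored.sort(key=lambda pair: pair[0])  # lower MSE ranks higher
    return scored

def averaged_prediction(models, X, top_k=None):
    """Average the predictions of the first top_k models (all models if None)."""
    selected = models if top_k is None else models[:top_k]
    preds = np.stack([np.asarray(m.predict(X)) for m in selected])
    return preds.mean(axis=0)
```

Averaging only the top-k ranked models corresponds to the claimed refinement; averaging every model, as in ZHAO's bootstrapped subsets, is the `top_k=None` case.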

Prosecution Timeline

Dec 20, 2021
Application Filed
Apr 04, 2025
Non-Final Rejection — §103
Jun 10, 2025
Response Filed
Jun 26, 2025
Final Rejection — §103
Aug 22, 2025
Response after Non-Final Action
Sep 23, 2025
Request for Continued Examination
Oct 05, 2025
Response after Non-Final Action
Oct 16, 2025
Non-Final Rejection — §103
Jan 21, 2026
Response Filed
Mar 11, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12475388
MACHINE LEARNING MODEL SEARCH METHOD, RELATED APPARATUS, AND DEVICE
2y 5m to grant Granted Nov 18, 2025
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

5-6
Expected OA Rounds
33%
Grant Probability
99%
With Interview (+100.0%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
