DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
The following is a non-final, first Office action in response to the communication filed on 11/08/2022. Claims 1-20 are currently pending.
Priority
The Applicant’s claim for the benefit of U.S. Provisional Patent Application 63/276,928, filed on 11/08/2021, has been received and acknowledged.
Information Disclosure Statement
The Information Disclosure Statements received on 3/23/2023 and 8/25/2025 have been reviewed and considered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 of the USPTO’s eligibility analysis entails considering whether the claimed subject matter falls within the four statutory categories of patentable subject matter identified by 35 U.S.C. 101: process, machine, manufacture, or composition of matter.
Claims 1, 8, and 15 are directed to a method (process), non-transitory computer-readable storage media storing computer instructions (machine or manufacture), and a system (machine or manufacture), respectively. As such, the claims are directed to statutory categories of invention.
If the claim recites a statutory category of invention, the claim requires further analysis in Step 2A. Step 2A of the 2019 Revised Patent Subject Matter Eligibility Guidance is a two-prong inquiry. In Prong One, examiners evaluate whether the claim recites a judicial exception.
Claims 1, 8, and 15 recite the following abstract limitations, or limitations substantially similar thereto:
“combining raw field data from a plurality of wells… with user-based data received from a user interface to generate an input dataset” (e.g., a mental process); and
“generating an optimized production forecast model from the plurality of trained completion forecast models” (e.g., a mental process performed with or without the benefit of a mathematical concept).
Under the broadest reasonable interpretation, the above-identified limitations recite actions performable in the mind or by a human using pen and paper. For example, a human mind is capable of selecting and combining data to form a dataset. Likewise, a human mind is capable of generating an optimized model by selecting a preferred or optimal model from a collection of models. In some scenarios, a mathematical concept may be beneficial in generating and/or selecting an optimal model; however, mathematical concepts also constitute abstract ideas. As such, the foregoing limitations recite abstract ideas comprising mental processes and/or mathematical concepts. More specifically, nothing in the limitations as claimed precludes the aforementioned steps from practically being performed in the human mind, or by a human using pen and paper (e.g., with or without the benefit of a mathematical concept). The mere recitation of generic computing elements and/or sensors does not take the claims out of the mental process grouping. Thus, the claims recite an abstract idea.
If the claim recites a judicial exception (i.e., an abstract idea enumerated in Section I of the 2019 Revised Patent Subject Matter Eligibility Guidance, a law of nature, or a natural phenomenon), the claim requires further analysis in Prong Two. In Prong Two, examiners evaluate whether the claim recites additional elements that integrate the exception into a practical application of that exception.
Claims 1, 8, and 15 recite the following additional element, or an element substantially similar thereto:
“training, based on the input dataset and utilizing a deep learning computing technique, a plurality of completion forecast models” (e.g., a model training step recited at a high level of generality equivalent to reciting “apply it.”).
The above-identified additional elements are recited at a high level of generality and are merely invoked as tools to perform the abstract idea.
Accordingly, in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
If the additional elements do not integrate the exception into a practical application, then the claim is directed to the recited judicial exception, and requires further analysis under Step 2B to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself).
As discussed above, the additional element of “training, based on the input dataset and utilizing a deep learning computing technique, a plurality of completion forecast models,” is recited at a high level of generality and does not provide the level of particularity required to show a practical application. For example, a generic model training step which applies a generically recited type of algorithm (e.g., a “deep learning computing technique” accounts for a whole class of algorithms which can provide either regression or classification outputs) to a generically recited dataset (e.g., raw data and user data are generically recited) does not provide for an additional element with the level of specificity to show a practical application.
Thus, even when viewed as an ordered combination, nothing in the claims adds significantly more (i.e., an inventive concept) to the abstract idea.
Claims 2 and 9 recite the limitation “interpreting the plurality of completion forecast models…” which constitutes a mental process insofar as making an interpretation is an action performable in a human mind with or without the benefit of a mathematical concept or pen and paper. Performing the interpretation with the benefit of a computer would not take the claims out of the mental process grouping. As such, claims 2 and 9 are directed to an abstract idea.
Claims 3, 4, 10, 11, and 18 recite limitations directed to the above identified additional element of training the model; however, as discussed above, the limitations are recited at such a high level of generality that they do not provide for a practical application of the identified judicial exceptions. For example, merely stating the use of a specific common algorithm would not function to integrate the judicial exception into a practical application (e.g., because it would be identified as well understood, routine, and conventional); however, even that type of limitation is more specific than those provided in claims 3, 4, 10, 11, and 18. As such, while the claims do not recite any further judicial exceptions, they do not succeed in reciting limitations which provide for a practical application.
Claims 5, 12, and 19 are directed to a mental process insofar as “expanding data” using either a scatter or box plot is an action which is performable in the human mind with the benefit of either or both of: (1) a pen and paper; and (2) a mathematical concept. Additionally, performing the data expansion with the benefit of a computer would not take the claims out of the mental process and/or mathematical concept grouping. As such, claims 5, 12, and 19 are directed to an abstract idea.
Claims 6, 13, and 17 recite limitations directed to displaying a result generated from the model which is equivalent to reciting the limitation “apply it.” As addressed in MPEP 2106.05(f), mere instruction to apply an exception cannot provide for a practical application of the judicial exception. For example, the MPEP states “[t]he recitation of claim limitations that attempt to cover any solution to an identified problem with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words ‘apply it’. See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016).” Examiner notes that merely presenting a result from a model fails to specifically implement any potential outcome from the model in any type of practical or tangible way which could reasonably provide for a practical application. As such, claims 6, 13, and 17 do not recite limitations which integrate the judicial exception into a practical application.
Claims 7, 14, and 20 recite limitations directed to the mathematical concept of repeatedly supplying a numerical dataset (e.g., measured production data is numerical data) to a model to generate a numerical prediction which constitutes an abstract idea (e.g., judicial exception). The claims do not recite any additional elements which function to integrate the judicial exceptions identified in the independent claims into a practical application. Moreover, claims 7, 14, and 20 themselves are exclusively directed to a judicial exception and therefore do not provide for a practical application.
Claim 16 recites the additional element of “wherein the user-based data is received from a user interface presented by a user device,” which amounts to mere data gathering as identified in MPEP 2106.05(g). Mere data gathering amounts to extra solution activity which cannot provide for a practical application of the judicial exception. Furthermore, providing user-based data through a computer amounts to electronic recordkeeping which the courts have identified as well-understood, routine, and conventional activity. For example, the MPEP states “[t]he courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity… iii. Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log); iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93…”. (MPEP 2106.05(d), Section II). Extra-solution activity which is determined to be well-understood, routine, and conventional cannot provide for a practical application of the identified judicial exception. As such the limitations of claim 16 do not integrate the judicial exception of claim 15 into a practical application.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6-11, 13-18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Application Publication to Sun et al., hereinafter “Sun” (US 2021/0010351 A1).
Regarding claim 1, Sun discloses [a] method for generating a forecast model of a well field (para. [0017], “[a] well productivity system may apply deep learning techniques such as long-short term memory (LSTM) or gated recurrent unit (GRU) and/or convolutional neural networks (CNN) to forecast well production rates and optimize corresponding development and completion strategies for one or more wells.”; para. [0046], “[t]he well productivity system 201 can be implemented for forecasting well productivity accurately and efficiently, in real-time, using deep learning neural networks as described herein. In this example, the well productivity system 201 can include compute components 202, a model development engine 204, a forecast engine 206, and a storage 208.”), the method comprising: combining raw field data (“retrieve or obtain production rates,” as discussed in para. [0018] below) from a plurality of wells (“one or more wells,” as discussed in para. [0018] below) of the well field with user-based data (“well operation constraints,” as discussed in para. [0018] below) received from a user interface (all of the foregoing data is retrieved from one or more databases, where a database is a data structure viewable on a computer interface) to generate an input dataset (para. [0018], “[f]irst, the system may retrieve or obtain production rates and well operation constraints from one or more sources such as one or more databases. The system also may retrieve or obtain other temporal and spatial measurement data associated with the one or more wells… The system also may extract features and responses from the temporal data and the spatial data. The data may be divided into a training data subset, a validation data subset, and a test data subset.”); training, based on the input dataset and utilizing a deep learning computing technique, a plurality of completion forecast models (para.
[0019], “the system may use the training data subset, the validation data subset, and the test data subset to build and train a model using LSTM or GRU and/or CNN.” Examiner notes that long short-term memory (LSTM; e.g., a type of recurrent neural network), gated recurrent unit (GRU; e.g., a type of recurrent neural network architecture), and convolutional neural networks (CNN; e.g., a deep learning algorithm which is a feedforward neural network) are each distinct machine learning algorithms and architectures which, when utilized separately (e.g., “or”) or in combination (e.g., “and”), generate a plurality of models. As such, building a model using LSTM or GRU and/or CNN will generate a plurality of models.); and generating an optimized production forecast model from the plurality of trained completion forecast models (para. [0017], “[a] well productivity system may apply deep learning techniques such as long-short term memory (LSTM) or gated recurrent unit (GRU) and/or convolutional neural networks (CNN) to forecast well production rates and optimize corresponding development and completion strategies for one or more wells.”; para. [0019], “[a]fter the model is built and trained, the system can deploy the model on one or more computing devices and optimize well operation of one or more wells based on an oil recovery factor, net present value, and/or other Key Performance Indicators (KPI).”).
Regarding claim 2, Sun discloses interpreting the plurality of completion forecast models based on one or more model-agnostic evaluation techniques (para. [0019], “[t]he system may perform history matching and model training and perform model evaluation based on prediction accuracy of the test data subset.” Examiner notes that performing evaluation based on prediction accuracy is model-agnostic insofar as the specifics of the model have no bearing on comparing the predicted production value to the actual production value.).
Regarding claim 3, Sun discloses wherein training of the plurality of completion forecast models comprises executing a plurality of different modeling techniques with the input dataset (para. [0019], “the system may use the training data subset, the validation data subset, and the test data subset to build and train a model using LSTM or GRU and/or CNN.” Examiner notes, LSTM, GRU, and CNN constitute distinct modeling techniques as discussed above with respect to the rejection of claim 1.).
Regarding claim 4, Sun discloses wherein the plurality of different modeling techniques is at least one of a tree-based modeling technique or a deep neural network technique (para. [0019], “the system may use the training data subset, the validation data subset, and the test data subset to build and train a model using LSTM or GRU and/or CNN.”; para. [0046], “[t]he well productivity system 201 can be implemented for forecasting well productivity accurately and efficiently, in real-time, using deep learning neural networks as described herein. In this example, the well productivity system 201 can include compute components 202, a model development engine 204, a forecast engine 206, and a storage 208.” Examiner notes, convolutional neural networks (“CNN”) are a deep learning algorithm comprising a feedforward neural network as discussed above with respect to the rejection of claim 1.).
Regarding claim 6, Sun discloses displaying, on the user interface, a generated result of the optimized production forecast model based on a well completion dataset (para. [0046], “[i]n some implementations, the well productivity system 201 can also include a display device 210 for displaying data and graphical elements such as images, videos, text, simulations, and any other media or data content.” See FIG. 9 which depicts a forecasted production rate for gas 904, oil 902, and water 906 projected forward from time 908. Examiner notes the well or wells are completed insofar as the graph depicts historical production where the projection is made based on the production from the completed well thereby meeting the limitations of the claim.).
Regarding claim 7, Sun discloses recursively executing the optimized production forecast model to generate a completion prediction of a well of the well field (para. [0017], “[a] well productivity system may apply deep learning techniques such as long-short term memory (LSTM) or gated recurrent unit (GRU) and/or convolutional neural networks (CNN) to forecast well production rates and optimize corresponding development and completion strategies for one or more wells.”; para. [0076], “[a]fter the model is generated, the model may be deployed to a server computing device such as a cloud computing device including the virtual private cloud. Parameters such as operation constraints may be input and modified to determine an impact on well production responses. The well operation may be optimized based on an oil recovery factor and/or a net present value (NPV) and/or other KPI… Input constraints associated with pumping schedule… may be adjusted… In another example, a water-flooding field development plan may be optimized. Input constraints such as water injection period or amount could be adjusted.” Examiner notes that “adjusting” and “modifying” the input constraints requires generating a forecast utilizing a first operation constraint followed by generating an additional forecast using constraints which are modified from the first constraints thereby recursively executing the deployed model), the optimized production forecast model receiving measured production data from the field data (as disclosed with respect to claim 1, the forecast model is trained using production data and therefore has already received production data; however, the model may further take pumping schedules (e.g., related to artificial lift and therefore related to production data) or water-flooding data).
Regarding claim 8, Sun discloses [o]ne or more tangible non-transitory computer-readable storage media storing computer- executable instructions for performing a computer process on a computing system (para. [0090], “[m]ethods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions.” See also FIG. 11 and para. [0083]—[0087]), the computer process comprising: combining raw field data (“retrieve or obtain production rates”) from a plurality of wells (“one or more wells”) of the well field with user-based data (“well operation constraints”) received from a user interface (all of the foregoing data is retrieved from one or more databases, where a database is a data structure viewable on a computer interface) to generate an input dataset (para. [0018], “[f]irst, the system may retrieve or obtain production rates and well operation constraints from one or more sources such as one or more databases. The system also may retrieve or obtain other temporal and spatial measurement data associated with the one or more wells… The system also may extract features and responses from the temporal data and the spatial data. The data may be divided into a training data subset, a validation data subset, and a test data subset.”); training, based on the input dataset and utilizing a deep learning computing technique, a plurality of completion forecast models (para. [0019], “the system may use the training data subset, the validation data subset, and the test data subset to build and train a model using LSTM or GRU and/or CNN.” See also para. [0017]. 
Examiner notes that long short-term memory (LSTM; e.g., a type of recurrent neural network), gated recurrent unit (GRU; e.g., a type of recurrent neural network architecture), and convolutional neural networks (CNN; e.g., a deep learning algorithm which is a feedforward neural network) are each distinct machine learning algorithms and architectures which, when utilized separately (e.g., “or”) or in combination (e.g., “and”), generate a plurality of models. As such, building a model using LSTM or GRU and/or CNN will generate a plurality of models.); and generating an optimized production forecast model from the plurality of trained completion forecast models (para. [0019], “[a]fter the model is built and trained, the system can deploy the model on one or more computing devices and optimize well operation of one or more wells based on an oil recovery factor, net present value, and/or other Key Performance Indicators (KPI).”; para. [0017], “[a] well productivity system may apply deep learning techniques such as long-short term memory (LSTM) or gated recurrent unit (GRU) and/or convolutional neural networks (CNN) to forecast well production rates and optimize corresponding development and completion strategies for one or more wells.”).
Regarding claim 9, Sun discloses the computer process further comprising: interpreting the plurality of completion forecast models based on one or more model-agnostic evaluation techniques (para. [0019], “[t]he system may perform history matching and model training and perform model evaluation based on prediction accuracy of the test data subset.” Examiner notes that performing evaluation based on prediction accuracy is model-agnostic insofar as the specifics of the model have no bearing on comparing the predicted production value to the actual production value.).
Regarding claim 10, Sun discloses wherein training of the plurality of completion forecast models comprises executing a plurality of different modeling techniques with the input dataset (para. [0019], “the system may use the training data subset, the validation data subset, and the test data subset to build and train a model using LSTM or GRU and/or CNN.” Examiner notes, LSTM, GRU, and CNN constitute distinct modeling techniques as discussed above with respect to the rejection of claim 1.).
Regarding claim 11, Sun discloses wherein the plurality of different modeling techniques is at least one of a tree-based modeling technique or a deep neural network technique (para. [0019], “the system may use the training data subset, the validation data subset, and the test data subset to build and train a model using LSTM or GRU and/or CNN.” Examiner notes, convolutional neural networks (“CNN”) are a deep learning algorithm comprising a feedforward neural network as discussed above with respect to the rejection of claim 1.).
Regarding claim 13, Sun discloses the computer process further comprising: displaying, on the user interface, a generated result of the optimized production forecast model based on a well completion dataset (see FIG. 9 which depicts a forecasted production rate for gas 904, oil 902, and water 906 projected forward from time 908. Examiner notes the well or wells are completed insofar as the graph depicts historical production where the projection is made based on the production from the completed well thereby meeting the limitations of the claim.).
Regarding claim 14, Sun discloses recursively executing the optimized production forecast model to generate a completion prediction of a well of the well field (para. [0076], “[a]fter the model is generated, the model may be deployed to a server computing device such as a cloud computing device including the virtual private cloud. Parameters such as operation constraints may be input and modified to determine an impact on well production responses. The well operation may be optimized based on an oil recovery factor and/or a net present value (NPV) and/or other KPI… Input constraints associated with pumping schedule… may be adjusted… In another example, a water-flooding field development plan may be optimized. Input constraints such as water injection period or amount could be adjusted.” Examiner notes that “adjusting” and “modifying” the input constraints requires generating a forecast utilizing a first operation constraint followed by generating an additional forecast using constraints which are modified from the first constraints thereby recursively executing the deployed model), the optimized production forecast model receiving measured production data from the field data (as disclosed with respect to claim 1, the forecast model is trained using production data and therefore has already received production data; however, the model may further take pumping schedules (e.g., related to artificial lift and therefore related to production data) or water-flooding data).
Regarding claim 15, Sun discloses [a] system for generating a forecast model of a well field, the system comprising: a waterflood completion optimization system (para. [0058], “[a]s another example, a water flooding field development plan can be optimized by modifying input constraints by the forecast engine 206 such as a water injection period or a water amount.”) having at least one processor (processor 1110, see FIG. 11; para. [0090], “[m]ethods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can include, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or a processing device to perform a certain function or group of functions.” See also FIG. 11 and para. [0083]-[0087]) configured to train a plurality of completion forecast models using a deep learning computing technique (para. [0019], “the system may use the training data subset, the validation data subset, and the test data subset to build and train a model using LSTM or GRU and/or CNN.” Examiner notes that long short-term memory (LSTM; e.g., a type of recurrent neural network), gated recurrent unit (GRU; e.g., a type of recurrent neural network architecture), and convolutional neural networks (CNN; e.g., a deep learning algorithm which is a feedforward neural network) are each distinct machine learning algorithms and architectures which, when utilized separately (e.g., “or”) or in combination (e.g., “and”), generate a plurality of models. As such, building a model using LSTM or GRU and/or CNN will generate a plurality of models.) and based on an input dataset (para. [0018], “[f]irst, the system may retrieve or obtain production rates and well operation constraints from one or more sources such as one or more databases.
The system also may retrieve or obtain other temporal and spatial measurement data associated with the one or more wells… The system also may extract features and responses from the temporal data and the spatial data. The data may be divided into a training data subset, a validation data subset, and a test data subset.”), the input dataset generated by combining raw field data (“retrieve or obtain production rates” from citation to para. [0018] above) from a plurality of wells (“one or more wells” from citation to para. [0018] above) of the well field with user-based data (“well operation constraints” from citation to para. [0018] above; para. [0059], “the storage 208 can store input data used by the well productivity system 201, outputs or results generated by the well productivity system 201 (for example, data and/or calculations from the model development engine 204, the forecast engine 206, etc.), user preferences, parameters and configurations, data logs, documents, software, media items, GUI content, and/or any other data and content.”), the waterflood completion optimization system generating an optimized production forecast model from the plurality of trained completion forecast models (para. [0019], “[a]fter the model is built and trained, the system can deploy the model on one or more computing devices and optimize well operation of one or more wells based on an oil recovery factor, net present value, and/or other Key Performance Indicators (KPI).”).
Regarding claim 16, Sun discloses wherein the user-based data is received from a user interface presented by a user device (para. [0048], “well productivity system 201 can be part of, or implemented by, one or more computing devices, such as one or more servers, one or more personal computers, one or more processors, one or more mobile devices (for example, a smartphone, a camera, a laptop computer, a tablet computer, a smart device, etc.), and/or any other suitable electronic devices.”; para. [0086], “[t]o enable user interaction with the computing device architecture 1100, an input device 1145 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth… In some instances, multimodal computing devices can enable a user to provide multiple types of input to communicate with the computing device architecture 1100. The communications interface 1140 can generally govern and manage the user input and computing device output.”).
Regarding claim 17, Sun discloses wherein the user interface is configured to present a generated result of the optimized production forecast model based on a well completion dataset (para. [0046], “[i]n some implementations, the well productivity system 201 can also include a display device 210 for displaying data and graphical elements such as images, videos, text, simulations, and any other media or data content.” See FIG. 9 which depicts a forecasted production rate for gas 904, oil 902, and water 906 projected forward from time 908. Examiner notes the well or wells are completed insofar as the graph depicts historical production where the projection is made based on the production from the completed well thereby meeting the limitations of the claim.).
Regarding claim 18, Sun discloses wherein training of the plurality of completion forecast models comprises executing a plurality of different modeling techniques with the input dataset (para. [0019], “the system may use the training data subset, the validation data subset, and the test data subset to build and train a model using LSTM or GRU and/or CNN.” Examiner notes, LSTM, GRU, and CNN constitute distinct modeling techniques as discussed above with respect to the rejection of claim 1.).
Regarding claim 20, Sun discloses wherein waterflood completion optimization system recursively executes the optimized production forecast model to generate a completion prediction of a well of the well field (para. [0076], “[a]fter the model is generated, the model may be deployed to a server computing device such as a cloud computing device including the virtual private cloud. Parameters such as operation constraints may be input and modified to determine an impact on well production responses. The well operation may be optimized based on an oil recovery factor and/or a net present value (NPV) and/or other KPI… Input constraints associated with pumping schedule… may be adjusted… In another example, a water-flooding field development plan may be optimized. Input constraints such as water injection period or amount could be adjusted.” Examiner notes that “adjusting” and “modifying” the input constraints requires generating a forecast utilizing a first operation constraint followed by generating an additional forecast using constraints which are modified from the first constraints thereby recursively executing the deployed model), the optimized production forecast model receiving measured production data from the field data (as disclosed with respect to claim 1, the forecast model is trained using production data and therefore has already received production data; however, the model may further take pumping schedules (e.g., related to artificial lift and therefore related to production data) or water-flooding data).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Published US Patent Application to Sun et al., hereinafter “Sun” (US 20210010351 A1), as applied to claims 1, 8, and 15 above, and further in view of Published US Patent Application to Despinois et al., hereinafter “Despinois” (US 20240133293 A1).
Regarding claim 5, while Sun at para. [0019] discloses “[t]he system may perform… model evaluation based on prediction accuracy of the test data subset,” Sun may not expressly disclose expanding the raw field data through one of a scatter plot of the raw field data or a box plot of the raw field data. However, generating a scatter plot (e.g., or cross plot) which plots an actual production value (e.g., raw production data categorized into the test dataset) versus a predicted production value (e.g., as generated by the model) is one manner in which the prediction accuracy of a model is assessed. The foregoing is taught by Despinois which is in the same field of endeavor as the instant application insofar as it is directed to generating machine-learning-based predictive models for hydrocarbon extraction operations. Specifically, Despinois at para. [1037] teaches “FIG. 9 presents a scatter plot on the validation set between actual and predicted concentrations. The points below the red line are underestimated and those above are overestimated. The distribution of points shows that the prediction model correctly estimates the real concentrations. The conical shape of the distribution shows that the error increases for higher lithium concentrations. This is due to the under-representation of wells with high lithium concentrations in the dataset, see class A in FIG. 4.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have utilized the scatter plot analysis technique as disclosed by Despinois as one of the specific methods for evaluating the model accuracy as generically disclosed by Sun. The substitution of the specific method taught by Despinois for the generic method disclosed by Sun would yield the predictable result of assessing which outcomes from the model were overestimated and/or underestimated relative to the actual production data (e.g., raw data).
Regarding claim 12, while Sun at para. [0019] discloses “[t]he system may perform… model evaluation based on prediction accuracy of the test data subset,” Sun may not expressly disclose expanding the raw field data through one of a scatter plot of the raw field data or a box plot of the raw field data. However, generating a scatter plot (e.g., or cross plot) which plots an actual production value (e.g., raw production data categorized into the test dataset) versus a predicted production value (e.g., as generated by the model) is one manner in which the prediction accuracy of a model is assessed. The foregoing is taught by Despinois which is in the same field of endeavor as the instant application insofar as it is directed to generating machine-learning-based predictive models for hydrocarbon extraction operations. Specifically, Despinois at para. [1037] teaches “FIG. 9 presents a scatter plot on the validation set between actual and predicted concentrations. The points below the red line are underestimated and those above are overestimated. The distribution of points shows that the prediction model correctly estimates the real concentrations. The conical shape of the distribution shows that the error increases for higher lithium concentrations. This is due to the under-representation of wells with high lithium concentrations in the dataset, see class A in FIG. 4.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have utilized the scatter plot analysis technique as disclosed by Despinois as one of the specific methods for evaluating the model accuracy as generically disclosed by Sun. The substitution of the specific method taught by Despinois for the generic method disclosed by Sun would yield the predictable result of assessing which outcomes from the model were overestimated and/or underestimated relative to the actual production data (e.g., raw data).
Regarding claim 19, while Sun at para. [0019] discloses “[t]he system may perform… model evaluation based on prediction accuracy of the test data subset,” Sun may not expressly disclose wherein the waterflood completion optimization system expands the raw field data through one of a scatter plot of the raw field data or a box plot of the raw field data. However, generating a scatter plot (e.g., or cross plot) which plots an actual production value (e.g., raw production data categorized into the test dataset) versus a predicted production value (e.g., as generated by the model) is a known method by which the prediction accuracy of a model is assessed. The foregoing is taught by Despinois, which is in the same field of endeavor as the instant application insofar as it is directed to generating machine-learning-based predictive models for hydrocarbon extraction operations. Specifically, Despinois at para. [1037] teaches “FIG. 9 presents a scatter plot on the validation set between actual and predicted concentrations. The points below the red line are underestimated and those above are overestimated. The distribution of points shows that the prediction model correctly estimates the real concentrations. The conical shape of the distribution shows that the error increases for higher lithium concentrations. This is due to the under-representation of wells with high lithium concentrations in the dataset, see class A in FIG. 4.”
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have utilized the scatter plot analysis technique as disclosed by Despinois as one of the specific methods for evaluating the model accuracy as generically disclosed by Sun. The substitution of the specific method taught by Despinois for the generic method disclosed by Sun would yield the predictable result of assessing which outcomes from the model were overestimated and/or underestimated relative to the actual production data (e.g., raw data).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Published US Patent Application to Fighel et al., (US 20190392252 A1) which is directed to machine learning methods for analyzing and forecasting time-series data;
Published US Patent Application to Delgoshaie et al., (US 20220025765 A1) which is directed to a machine learning model which is developed to replace a traditional reservoir model and which takes a waterflood scenario as an input and generates a production forecast as an output;
Published US Patent Application to Huang et al., (US 20230237225 A1) which is directed to reservoir modeling using machine learning techniques which replace traditional physics-based models which are computationally expensive;
Published US Patent Application to Madasu et al., (US 20210027144 A1) which is directed to utilizing machine learning techniques to create a proxy flow model which may be used for predictive analysis of reservoir behavior and may further be used to facilitate reservoir development;
Published US Patent Application to Gryzlov et al., (US 20230114088 A1) which is directed to a data-driven model which may be used to control a production system and optimize hydrocarbon production; and
Published US Patent Application to Samson et al., (US 20230104036 A1) which is directed to using a coarse-grid physics-based simulator to predict the location of a waterflood front where the simulation is augmented using machine-learning relationships.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to URSULA NORRIS whose telephone number is (703)756-4731. The examiner can normally be reached Monday to Friday, 7 AM to 4 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TARA SCHIMPF can be reached at 571-270-7741. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/U.L.N./Examiner, Art Unit 3676
/TARA SCHIMPF/Supervisory Patent Examiner, Art Unit 3676