Prosecution Insights
Last updated: April 19, 2026
Application No. 18/507,818

ELECTRIC SUBMERSIBLE PUMP OPERATING PARAMETERS

Status: Non-Final Office Action (§103)
Filed: Nov 13, 2023
Examiner: TRAN, VI N
Art Unit: 2117
Tech Center: 2100 — Computer Architecture & Software
Assignee: Saudi Arabian Oil Company
OA Round: 1 (Non-Final)

Grant Probability: 46% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 1m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 46% (46 granted / 99 resolved; -8.5% vs Tech Center average)
Interview Lift: +36.3% for resolved cases with an interview
Average Prosecution: 4y 1m (typical timeline)
Career History: 138 total applications across all art units, 39 currently pending
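The headline figures above are simple ratios of the counts shown on this page. A minimal sketch reproduces them; the assumption (not stated on the page) is that the interview lift is an additive percentage-point adjustment to the career allow rate:

```python
# Counts taken from the examiner career statistics above.
granted = 46
resolved = 99
interview_lift_pp = 36.3  # reported lift, in percentage points (assumed additive)

career_allow_rate = 100 * granted / resolved            # ~46.5%, displayed as 46%
with_interview = career_allow_rate + interview_lift_pp  # ~82.8%, displayed as 83%

print(f"career allow rate: {career_allow_rate:.0f}%")
print(f"with interview:    {with_interview:.0f}%")
```

Rounded to whole percentages, these match the 46% and 83% shown above.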

Statute-Specific Performance

§101: 15.5% allowance (-24.5% vs TC avg)
§103: 53.8% allowance (+13.8% vs TC avg)
§102: 13.3% allowance (-26.7% vs TC avg)
§112: 11.2% allowance (-28.8% vs TC avg)

Tech Center average is an estimate. Based on career data from 99 resolved cases.
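Each delta above appears to be the examiner's allowance rate minus the Tech Center average, so the implied baseline can be recovered per statute. A quick sketch, using the values from the table (the subtraction is an assumption about how the deltas were computed):

```python
# (examiner allowance rate %, delta vs TC avg %) as reported above
stats = {
    "101": (15.5, -24.5),
    "103": (53.8, +13.8),
    "102": (13.3, -26.7),
    "112": (11.2, -28.8),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"§{statute}: examiner {rate}%, implied TC avg {tc_avg:.1f}%")
```

Notably, every statute backs out the same 40.0% baseline, consistent with a single Tech Center average estimate being used for all four bars.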

Office Action — Non-Final Rejection (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Beck et al. (US 2021/0071508 A1, hereinafter "Beck") in view of Alanazi et al. (US 2020/0362674 A1, hereinafter "Alanazi").
Regarding Claim 1, Beck teaches a method for controlling an electric submersible pump (ESP) installed in a wellbore (see [0001]; Beck: "will require at least further search and further consideration."), the method comprising:

determining a target production rate for the wellbore (see [0018]; Beck: "a goal set associated with fluid production from the geologic formation.");

providing the target production rate as input to a neural network (see [0018]; Beck: "In centralized control, a deep learning model associated with a motor controller of an ESP receives various inputs. The various inputs include one or more of operating conditions, application input, supervisory input, model inputs, and a goal set associated with fluid production from the geologic formation." See [0029]: "The deep learning model 134, 136 is a neural network…") that provides as output ESP operating parameters to achieve the target production rate (see [0039]; Beck: "The goal set 212 would be input into the deep learning model so that the deep learning model 202 can generate outputs for controlling the ESP to meet the desired operating conditions in the goal set."), the ESP operating parameters comprising a choke size percentage and a motor speed (see [0042]; Beck: "The operating parameters may impact how long an ESP should pump fluid, at which rate, motor speed etc. to meet fluid production objectives."), and the neural network modeling an ESP-equipped wellbore (see [0017]; Beck: "The deep learning model allows for intelligent control of the ESP to meet fluid production goals of the wellbore and well system"); and

controlling the ESP to operate according to the ESP operating parameters
(see [0049]; Beck: "At 306, operation of the ESP may be adjusted based on the output." See [0050]: "The operating parameters output may produce not only a desired change in fluid production of a given ESP but also a desired change in fluid production by the reservoir.")

However, Beck does not explicitly teach the ESP operating parameters comprising a choke size percentage… Alanazi, from the same or similar field of endeavor, teaches the ESP operating parameters comprising a choke size percentage… (see [0041]; Alanazi: "The production optimization techniques can estimate flowing parameters of individual laterals, determine the optimum pressure drop across each downhole valve, and estimate productivity of each lateral during a commingled production at various choke valves settings." See [0148]: "the computer-implemented system can send commands to ICVs of particular laterals, such as to choke particular laterals to specific percentages.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of Beck to include Alanazi's features of the ESP operating parameters comprising a choke size percentage. Doing so would predict optimum ICV settings of smart well completions (SWCs) in real time and maximize performance of multilateral wells by combining lateral production data and improved estimation of downhole parameters. (Alanazi, [0004])

Regarding Claim 9, the limitations of this claim are taught by the combination of Beck and Alanazi as discussed in connection with claim 1.

Regarding Claim 17, the limitations of this claim are taught by the combination of Beck and Alanazi as discussed in connection with claim 1.

Claims 2, 10, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Beck in view of Alanazi, and further in view of Dufour et al. (US 2021/0285605 A1, hereinafter "Dufour").
Regarding Claim 2, the combination of Beck and Alanazi teaches all the limitations of claim 1 above. Beck further teaches wherein the input to the neural network further comprises real-time data comprising: an intake pressure (see [0027]; Beck: "the downhole sensors 108 may provide measurement data related to operating conditions downhole in and around the ESP 150 such as … pump intake pressure."), an ESP motor load (see [0019]; Beck: "the computational loads on each motor controller"). Alanazi further teaches an oil production rate, a water cut (see [0062]; Alanazi: "downhole parameters, such as PI, FBHP, oil rate, and water cut."). The same motivation to combine Beck and Alanazi as set forth for Claim 1 equally applies to Claim 2.

However, the combination does not explicitly teach: …and an upstream-downstream (US/DS) differential pressure (DP). Dufour, from the same or similar field of endeavor, teaches …and an upstream-downstream (US/DS) differential pressure (DP) (see [0203]; Dufour: "All the flow rate calculation methods are based on the upstream pressure (and/or the downstream pressure) and the upstream/downstream pressure differential of the element on which the flow rate will be modeled.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of Beck and Alanazi to include Dufour's features of an upstream-downstream (US/DS) differential pressure (DP). Doing so would optimize an energy and/or economic cost factor. (Dufour, [0244])

Regarding Claim 10, the limitations of this claim are taught by the combination of Beck, Alanazi, and Dufour as discussed in connection with claim 2.

Regarding Claim 18, the limitations of this claim are taught by the combination of Beck, Alanazi, and Dufour as discussed in connection with claim 2.

Claims 3-6, 11-14, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Beck in view of Alanazi, and further in view of AlAjmi et al.
(US 2017/0293835 A1, hereinafter "AlAjmi").

Regarding Claim 3, the combination of Beck and Alanazi teaches all the limitations of claim 1 above; however, it does not explicitly teach wherein the neural network is trained based on historical production data. AlAjmi, from the same or similar field of endeavor, teaches wherein the neural network is trained based on historical production data (see [0004]; AlAjmi: "building a feed-forward back propagation neural network; calibrating the simulation model utilizing actual production history from the training data set;")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of Beck and Alanazi to include AlAjmi's features of the neural network being trained based on historical production data. Doing so would produce the most accurate output and improve the network prediction performance. (AlAjmi, [0069] and [0075])

Regarding Claim 4, the combination of Beck and Alanazi teaches all the limitations of claim 1 above; however, it does not explicitly teach: wherein generating the neural network comprises: obtaining historical production data comprising data points associated with respective wells, each data point comprising at least one of oil production rate, a water cut, an intake pressure, an ESP motor load, or an upstream-downstream (US/DS) differential pressure (DP); splitting the historical production into training data and testing data; and iteratively training the neural network using the training data to generate the neural network.
AlAjmi, from the same or similar field of endeavor, teaches wherein generating the neural network comprises: obtaining historical production data comprising data points associated with respective wells (see [0088]; AlAjmi: "At 504, the oil production rate test data are collected, uploaded, and divided into subsets by a downstream-to-upstream pressure ratio."), each data point comprising at least one of oil production rate, a water cut, an intake pressure, an ESP motor load, or an upstream-downstream (US/DS) differential pressure (DP) (see Abstract; AlAjmi: "The oil production rate test data are collected, uploaded, and divided into subsets by a downstream-to-upstream pressure ratio."); splitting the historical production into training data and testing data (see [0092]; AlAjmi: "At 512, the simulation model is calibrated utilizing actual production history from the training data set." See [0093]: "At 514, the model performance is tested utilizing actual production history from the testing data set." See [0089]: "At 508, for each subset-split, the data is split randomly into training data sets and testing data sets."); and iteratively training the neural network using the training data to generate the neural network (see [0072]; AlAjmi: "An iterative process, that is indicated by FIG. 2 for example, can minimize any resulting error. Once the network is trained, a testing dataset can be introduced to the ANN model to predict the outputs and to validate the ANN model's performance.")

The same motivation to combine Beck, Alanazi, and AlAjmi as set forth for Claim 3 equally applies to Claim 4.
Regarding Claim 5, the combination of Beck, Alanazi, and AlAjmi teaches all the limitations of claim 4 above. Beck further teaches wherein iteratively training the neural network comprises: creating the neural network (see [0081]; Beck: "At 902, an initial deep learning model may be defined."); defining initial hyperparameters for the neural network (see [0081]; Beck: "The initial deep learning model may be a neural network with more than two hidden layers." [The hidden layers read on 'initial hyperparameters'.]); training the neural network based on the training data (see [0081]; Beck: "at 904, the training dataset may be input into the initial deep learning model."); determining, based on at least one performance indicator, whether the training of the neural network is complete (see [0081]; Beck: "At 908, the output is compared to the goal set."); if the training of the neural network is complete, deploying the neural network (see [0081]; Beck: "The training data may be input into the deep learning model and steps 904 to 910 iteratively carried out until the output matches the goal data." See [0080]: "the submodel and/or ESP/well specific deep learning model may be trained on the centralized computer system and then sent to the respective motor controller to control the ESP."); and if the training of the neural network is incomplete, returning to training the neural network using the training data to generate new hyperparameter values for the neural network
(see [0081]; Beck: "The training data may be input into the deep learning model and steps 904 to 910 iteratively carried out until the output matches the goal data." See [0082]: "The more than two layers of the deep learning model allow for modeling the changing conditions in the wellbore over time.")

Regarding Claim 6, the combination of Beck, Alanazi, and AlAjmi teaches all the limitations of claim 5 above. Beck further teaches wherein determining, based on at least one performance indicator, whether the training of the neural network is complete comprises: determining whether the at least one performance indicator satisfies a respective threshold (see [0081]; Beck: "At 908, the output is compared to the goal set. At 910, revisions of deep learning model may be made, e.g. by adaptation of the neural network associated with the deep learning model, in an iterative process to refine the deep learning model by the comparison of the output to the goal data. For example, the comparison may be classified in terms of a level of match such as correlation. If the correlation is less than a threshold amount, further analysis may be undertaken, e.g. the deep learning model may be adapted by adjusting the weights of the neural network, either by the model itself or by another process or by human intervention until the output matches the goal data.")

Regarding Claim 11, the limitations of this claim are taught by the combination of Beck, Alanazi, and AlAjmi as discussed in connection with claim 3.

Regarding Claim 12, the limitations of this claim are taught by the combination of Beck, Alanazi, and AlAjmi as discussed in connection with claim 4.

Regarding Claim 13, the limitations of this claim are taught by the combination of Beck, Alanazi, and AlAjmi as discussed in connection with claim 5.

Regarding Claim 14, the limitations of this claim are taught by the combination of Beck, Alanazi, and AlAjmi as discussed in connection with claim 6.
Regarding Claim 19, the limitations of this claim are taught by the combination of Beck, Alanazi, and AlAjmi as discussed in connection with claim 3.

Regarding Claim 20, the limitations of this claim are taught by the combination of Beck, Alanazi, and AlAjmi as discussed in connection with claim 4.

Claims 7-8 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Beck in view of Alanazi, further in view of AlAjmi, and further in view of Koch et al. (US 2018/0240041 A1, hereinafter "Koch").

Regarding Claim 7, the combination of Beck, Alanazi, and AlAjmi teaches all the limitations of claim 5 above. Beck further teaches wherein the at least one performance indicator comprises at least one of a correlation coefficient (CC) (see [0081]; Beck: "the comparison may be classified in terms of a level of match such as correlation."). However, it does not explicitly teach: a root mean squared error (RMSE), or an average absolute percentage error (AAPE). Koch, from the same or similar field of endeavor, teaches a root mean squared error (RMSE) (see [0080]; Koch: "The FACTMAC procedure computes the biases and factors by using a stochastic gradient descent (SGD) algorithm that minimizes a root mean square error (RMSE) criterion."), or an average absolute percentage error (AAPE).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teaching of Beck, Alanazi, and AlAjmi to include Koch's features of a root mean squared error (RMSE). Doing so would determine the best model configuration and govern the quality of the resulting predictive models.
(Koch, [0002]-[0003])

Regarding Claim 8, the combination of Beck, Alanazi, and AlAjmi teaches all the limitations of claim 4 above. Beck further teaches wherein the initial hyperparameters comprise a number of neuron layers (see [0109]; Beck: "the first deep learning model is a neural network with more than two hidden layers."). However, it does not explicitly teach a number of neurons per layer, and a seed number for the neural network. Koch, from the same or similar field of endeavor, teaches a number of neurons per layer (see [0003]; Koch: "a number of hidden layers and neurons in each layer in a neural network model type"), and a seed number for the neural network (see [0131]; Koch: "a random seed value may be specified").

The same motivation to combine Beck, Alanazi, AlAjmi, and Koch as set forth for Claim 7 equally applies to Claim 8.

Regarding Claim 15, the limitations of this claim are taught by the combination of Beck, Alanazi, AlAjmi, and Koch as discussed in connection with claim 7.

Regarding Claim 16, the limitations of this claim are taught by the combination of Beck, Alanazi, AlAjmi, and Koch as discussed in connection with claim 8.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sandnes et al. (US 2021/0181374 A1) discloses that estimates of individual well rates and total (separator) rates are obtained by evaluating the neural network for a given set of controls and states.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VI N TRAN whose telephone number is (571) 272-1108. The examiner can normally be reached Mon-Fri 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ROBERT FENNEMA, can be reached at (571) 272-2748. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/V.N.T./ Examiner, Art Unit 2117
/ROBERT E FENNEMA/ Supervisory Patent Examiner, Art Unit 2117
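The train-test-threshold loop that the examiner maps across claims 4-7 (train iteratively, check a performance indicator such as RMSE against a threshold, deploy if satisfied, otherwise keep training) can be illustrated with a toy model. This is an editorial sketch, not code from any cited reference: the one-parameter model, the sample data, the learning rate, and the thresholds are arbitrary choices, and the AAPE formula is one common definition rather than anything quoted in the record.

```python
def rmse(preds, actuals):
    """Root mean squared error, one of the claimed performance indicators."""
    return (sum((p - a) ** 2 for p, a in zip(preds, actuals)) / len(actuals)) ** 0.5

def aape(preds, actuals):
    """Average absolute percentage error (a common definition, assumed here)."""
    return 100 * sum(abs((p - a) / a) for p, a in zip(preds, actuals)) / len(actuals)

def train_until_threshold(xs, ys, threshold=0.01, lr=0.05, max_rounds=100):
    """Iteratively fit y ~ w*x; stop when the RMSE indicator satisfies its
    threshold (the claim 6 check), otherwise keep training (the claim 5 loop)."""
    w = 0.0
    for rounds in range(1, max_rounds + 1):
        # one gradient step on squared error, standing in for a training epoch
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
        if rmse([w * x for x in xs], ys) < threshold:
            return w, rounds  # training complete: deploy the model
    return w, max_rounds      # incomplete: revisit the hyperparameters

xs, ys = [1.0, 2.0, 3.0, 4.0], [3.0, 6.0, 9.0, 12.0]
w, rounds = train_until_threshold(xs, ys)
print(f"fitted w = {w:.4f} after {rounds} rounds")
```

On this toy data the loop converges toward w = 3 within a handful of rounds; swapping the RMSE check for a correlation-coefficient or AAPE check changes only the indicator, not the claimed loop structure.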

Prosecution Timeline

Nov 13, 2023 — Application Filed
Jan 22, 2026 — Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12528200 — LIGHT FOR TEACH PENDANT AND/OR ROBOT (granted Jan 20, 2026; 2y 5m to grant)
Patent 12523972 — Event Engine for Building Management System Using Distributed Devices and Blockchain Ledger (granted Jan 13, 2026; 2y 5m to grant)
Patent 12525808 — TIME-SHIFTING OPTIMIZATIONS FOR RESOURCE GENERATION AND DISPATCH (granted Jan 13, 2026; 2y 5m to grant)
Patent 12494653 — CONTROLLING A HYBRID POWER PLANT (granted Dec 09, 2025; 2y 5m to grant)
Patent 12467818 — DETECTING GAS LEAKS FROM IMAGE DATA AND LEAK DETECTION MODELS (granted Nov 11, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 46% (83% with interview, +36.3%)
Median Time to Grant: 4y 1m
PTA Risk: Low

Based on 99 resolved cases by this examiner. Grant probability derived from career allow rate.
