Prosecution Insights
Last updated: April 19, 2026
Application No. 18/423,388

LINER HANGER OPERATIONS FRAMEWORK

Non-Final OA: §101, §102, §103, §112
Filed: Jan 26, 2024
Examiner: LO, KENNETH M
Art Unit: 2116
Tech Center: 2100 — Computer Architecture & Software
Assignee: Schlumberger Technology Corporation
OA Round: 1 (Non-Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 1m
Grant Probability With Interview: 76%

Examiner Intelligence

Grants 44% of resolved cases.

Career Allow Rate: 44% (106 granted / 243 resolved; -11.4% vs TC avg)
Interview Lift: +32.4% for resolved cases with interview (strong)
Avg Prosecution: 4y 1m typical timeline; 7 applications currently pending
Total Applications: 250 across all art units (career history)
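The headline figures above can be reproduced from the raw career counts. The formulas below are assumptions inferred from the numbers on this page (grant probability taken as the career allow rate, and the "with interview" figure as that rate plus the interview lift), not a documented methodology:

```python
# Reproduce the dashboard's headline figures from the career counts shown above.
# These formulas are inferred from the displayed numbers, not an official method.

granted, resolved = 106, 243    # from "106 granted / 243 resolved"
interview_lift = 32.4           # percentage points, from "+32.4% Interview Lift"

allow_rate = 100 * granted / resolved      # career allow rate, in percent
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1f}%")     # ≈ 43.6%, displayed rounded as 44%
print(f"With interview:    {with_interview:.1f}%") # ≈ 76.0%, displayed as 76%
```

Note that 106/243 is 43.6%, which the dashboard rounds up to the displayed 44%.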

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 41.5% (+1.5% vs TC avg)
§102: 23.0% (-17.0% vs TC avg)
§112: 24.7% (-15.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 243 resolved cases.
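The per-statute deltas above are each overcome rate minus the Tech Center average estimate. Working backwards, all four deltas are consistent with a single implied TC average of 40.0%; that figure is an inference from the displayed numbers, not something stated on the page:

```python
# Verify the statute-specific deltas against an implied Tech Center average.
# tc_avg = 40.0 is inferred (rate - delta gives 40.0 for all four statutes);
# it is not a figure stated on the page.

rates = {"101": 6.9, "103": 41.5, "102": 23.0, "112": 24.7}  # displayed rates, %
tc_avg = 40.0

for statute, rate in rates.items():
    delta = rate - tc_avg
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```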

Office Action

Grounds of rejection: §101, §102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

As per claim 1, the claim language "receiving data from field equipment... generating an inference... using one or more machine learning models; and controlling... based at least in part on the inference" describes steps that could be performed mentally or with generic computing tools, aligning with the mental process and mathematical concepts groupings in MPEP §2106.04(a)(2).

2A Prong 1: The claim language "generating an inference... using one or more machine learning models" amounts to an abstract idea under §101.

2A Prong 2: This judicial exception is not integrated into a practical application. Specification [0023], [0024] describes the framework as including machine learning models such as CNNs for monitoring liner hanger jobs, but does not tie Claim 1 to a specific unconventional hardware configuration or transformation, which is critical for integrating the abstract idea into a practical application per MPEP §2106.04(d).

Additional elements: "machine learning models", "controlling the performance" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f)). Examiner's note: the machine learning model is generic, with no description or limitations that make it any more than a generic off-the-shelf machine learning model.

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. "Controlling the performance" (MPEP 2106.05(d)(II) indicates that merely "sending and receiving data" is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim). Thereby, a conclusion that the claimed receiving/viewing data steps are well-understood, routine, conventional activity is supported under Berkheimer.

As per claims 2-20, these claims have similar mental steps and generic machine learning to claim 1, and are rejected for similar reasons.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because "one or more computer-readable storage media" is not solely directed to a non-transitory medium and therefore includes non-statutory subject matter.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 11 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 11 recites "a vanilla CNN based model". It is unclear from the claim language what CNN stands for in the claims. Further, it is unclear from the specification what the metes and bounds of the term "vanilla" are, as the requisite degree of the term is indefinite. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-9, 12-14, 16, and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lolla et al. (hereinafter Lolla) (US PGPUB 20190324166).
As per Claims 1, 19, and 20, Lolla discloses "A method comprising: receiving data from field equipment during performance of a liner hanger job at a wellsite;" [FIG. 3L shows an exemplary data segment corresponding to a slimhole/other event, which may be caused by noise from slimhole, liner hanger, casing patch, or other sources 0097; receiving, from the seismic monitoring system, microseismic data representative of the microseismic waves; processing the microseismic data to obtain a plurality of data panels corresponding to microseismic data measured over a predetermined time interval; 0008]; "generating an inference as to an occurrence of an event associated with the performance of the liner hanger job based on at least a portion of the data using one or more machine learning models;" [Classifier tools contemplated herein include manual or computer-based algorithms or programs that enable identification to which of a set of categories a new observation belongs, based on a training set of data containing observations (or instances) whose category membership is known. Classifiers are examples of a broader class of "pattern recognition" tools, an actively researched area of supervised machine learning and artificial intelligence. The task of "training" a classifier refers to running a chosen machine learning model with a data set of observations or features whose labels are known, and iteratively adjusting the parameters of the model so that the predictions of the model match the labels assigned to the training data set 0232]; "and controlling the performance of the liner hanger job based at least in part on the inference." [Once a microseismic event is classified, the classification of the event may drive one or more changes to the production and/or injection operations. 0246]

As per Claim 2, Lolla discloses "wherein the data comprise surface data generated by surface equipment, wherein the surface data comprise one or more of hook load, standpipe pressure, block position and revolutions per minute." [Sensor receivers may be mounted on a coiled tubing 110 and cemented in place for the purpose of maintaining the desired position and orientation of the sensor receivers within the monitoring well 100. Alternatively, the sensor receivers may be run on a cable and clamped to the wellbore in a retrievable configuration. The sensor receivers may be connected to instrumentation cables 116 that may run to a junction or instrumentation box 118 near the surface of the well 100. The instrumentation box 118 may be connected or otherwise communicate with a computer located near the well 100 or in a control building nearby. 0081]

As per Claim 3, Lolla discloses "wherein the event corresponds to a release of a running tool disposed at least in part in a wellbore at the wellsite." [Note that rod noise may in some instances resemble CMR (as seen by comparing FIG. 3H and FIG. 3D) and that processes to recover CMR may be designed to address this possibility. In practice, in the rare chance that continuous such events occur, it is always possible to test for rod noise or CMR by stopping the rod pumps for a short time interval. If the noise ceases, the source must have been the rod pumping motion. 0118]

As per Claim 4, Lolla discloses "wherein the inference is based on pattern recognition in at least a portion of the data for a number of patterns, wherein each of the number of patterns is associated with a different event associated with the performance of the liner hanger job." [determined on a minimum number of sensor receivers, such as two or three. For each receiver with a valid trigger on its data panel, the data amplitude m(t) (or, equivalently, velocity or acceleration) may be calculated according to the following: m(t) = √(V_E(t)² + V_N(t)² + V_D(t)²) (Eq. 2), where V_E(t), V_N(t) and V_D(t) respectively denote the seismic data traces within the sensor receiver's data panel in Easting-Northing-Depth, for instance. As previously described, the receiver data may be projected onto any preferred orthonormal coordinate system to calculate m(t). 0146 and further 0147-0200]

As per Claim 5, Lolla discloses "wherein the data comprise time-series data" [to compute the spectral densities of the time-series of the data traces from the entire data panel, for all the receiver traces (which may also be referred to as channels). The spectral density of a time-series, denoted by p(ω), describes the power contained in the signal as a function of frequency, per unit frequency. 0120]

As per Claim 6, Lolla discloses "comprising implementing a sliding window to process the time-series data" [the noise identification may be performed by placing greater emphasis on the frequency spectra around the triggered time-windows, and lesser on the times far from the triggered windows. This may be accomplished by computing Short-Time Fourier Transforms (STFT), spectral densities and normalized cumulative power spectra of the data trace around each distinct triggered window and discarding the triggered windows on the basis of the aforementioned thresholding procedures. Different types of window functions may be used in STFTs, including, but not limited to, Bartlett window, Hann window, Hamming window, etc. As a particular case of the Short-Time Fourier Transform with a Gaussian weight function, one may compute the Gabor transform of the data trace, which automatically places greater weight on local data. The spectral densities and the normalized cumulative spectra then become functions of time, in addition to frequency. At each triggered time of a given data trace, the local normalized cumulative spectrum may be used to identify noisy triggers on the trace. This approach is suitable in cases where the frequency characteristics of the data trace are expected to vary with time. 0124]

As per Claim 7, Lolla discloses "comprising implementing a delay mechanism that controls the generating of the inference with respect to an occurrence of a full event pattern for the event" [In some embodiments according to the present disclosure, a "total event score" may be computed on the basis of at least two of: a magnitude score (e.g. RPPV*), a polarity score, a proximity score, a SH/SV score, and a P/S score described above. For example, a point-based classification system designed with a primary objective of identifying casing failures may rely on higher overall scores as indicators of a higher degree of certainty that a detected event is caused by a casing failure. For instance, using score ranges described above, any event with an overall score of 27 points may be classified as casing failure. 0230]

As per Claim 8, Lolla discloses "wherein the delay mechanism provides a compromise between machine learning model timeliness and machine learning model accuracy" [A qualitative likelihood of the event being caused by a casing failure may be assigned in the following exemplary fashion: events with a total score between 27 and 30 points may be labeled as "Low Probability Casing Failures," those with a total score between 30 and 32 points may be classified as "Medium Probability Casing Failures," and those with a total score higher than 32 points may be classified as "High Probability Casing Failure." As another example, for situations where the producer/injector wells near the located event are equipped with surface or conductor casing(s), a total event score exceeding a predetermined value (e.g., 27 points), with an event depth d* of less than the maximum surface or conductor casing depth, may indicate that the event may be classified as a casing slip. As yet an additional example, in situations where the total event score exceeds a predetermined value (e.g., 27 points) and the wellhead pressures of the candidate injection wells indicate a high-pressure injection ongoing at the time of the event, the event may be classified as a "High Pressure Casing Failure." In other situations, the event may be classified as a "Low Pressure Casing Failure." It should be understood that the foregoing are not an exclusive list of possible classification outcomes based on a total score and/or additional attributes, and many permutations are possible and contemplated herein on the basis of the general principles that have been described. 0233]

As per Claim 9, Lolla discloses "wherein the one or more machine learning models comprise a neural network model" [At block 1508, noise spikes may be identified and fed into the neural network analysis. At block 1510 the neural network analysis is run using the above identified input parameters and according to known principles. At block 1512 the neural network analysis identifies noise events (i.e., events with noise attributes) and/or non-noise events (i.e., events with non-noise attributes). 0259]

As per Claim 12, Lolla discloses "comprising training the one or more machine learning models using data from one or more prior liner hanger jobs" [Classifiers are examples of a broader class of "pattern recognition" tools, an actively researched area of supervised machine learning and artificial intelligence. The task of "training" a classifier refers to running a chosen machine learning model with a data set of observations or features whose labels are known, and iteratively adjusting the parameters of the model so that the predictions of the model match the labels assigned to the training data set. A second validation data set may also be utilized to test the trained model and further adjust the model parameters to obtain good predictive performance. 0232]

As per Claim 13, Lolla discloses "wherein the controlling comprises rendering a control graphic to a graphical user interface." [A display adapter may be driven by the CPU to control a display driver and a display on a display device, for example, to present microseismic data and information generated through application of the microseismic analyses of the present disclosure. 0267]

As per Claim 14, Lolla discloses "wherein the controlling comprises issuing a control signal" [Leveraging current technology, it is possible for a casing failure to be detected within a matter of minutes by the methods and systems disclosed herein, whereas prior processes have required hours at a minimum. Automated event processing is also a crucial component in any closed-loop system that can autonomously take actions based on the event classification. For example, if a casing failure is detected by the automated event processing system, controller devices (either centrally located or distributed at multiple locations) may be programmed to automatically shut-in the candidate well. The manual method is also dependent on personnel schedules and thus susceptible to associated issues related to time of day and other factors. 0263]

As per Claim 16, Lolla discloses "wherein the receiving and the generating are performed using at least one computational framework" [receiving, from the seismic monitoring system, microseismic data representative of the microseismic waves; processing the microseismic data to obtain a plurality of data panels corresponding to microseismic data measured over a predetermined time interval; determining, with a neural network analysis implemented on the computer, whether any of the plurality of data panels includes a noise event or a non-noise event 0017]

As per Claim 18, Lolla discloses "wherein the liner hanger job comprises at least three different events" [The term "microseismic event" refers to any source of seismic activity or disturbances detectable by a passive monitoring system. Examples include, but are not limited to, well integrity events such as casing breaks or failures, casing slips, cement cracks, or continuous microseismic radiation (CMR), small harmonic tremors, as well as other events surrounding a wellbore or injection site, such as shear-dominated events and other surface events. The terms "seismic event" and "acoustic event" may be used interchangeably with the term "microseismic event." 0070]

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 10, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lolla et al. (hereinafter Lolla) (US PGPUB 20190324166) in view of Gao et al. (hereinafter Gao) (US PGPUB 20190114544).

As per Claim 10, Lolla discloses "The method of claim 1". Lolla fails to explicitly disclose "wherein the one or more machine learning models comprise at least one convolution neural network model". However, Gao discloses "wherein the one or more machine learning models comprise at least one convolution neural network model" [A convolutional neural network is a special type of neural network. The fundamental difference between a densely connected layer and a convolution layer is this: dense layers learn global patterns in their input feature space, whereas convolution layers learn local patterns: in the case of images, patterns found in small 2D windows of the inputs. This key characteristic gives convolutional neural networks two interesting properties: (1) the patterns they learn are translation invariant and (2) they can learn spatial hierarchies of patterns. 0104]. One of ordinary skill in the art would have recognized that applying the known technique of Lolla with the known techniques of Gao, namely the well known use of convolution neural networks, would have yielded predictable results and resulted in an improved system.
Accordingly, applying the teachings of Lolla with the teachings of Gao would have been recognized by those of ordinary skill in the art as resulting in the use of a commonly known type of neural network: "A convolutional neural network learns highly non-linear mappings by interconnecting layers of artificial neurons arranged in many different layers with activation functions that make the layers dependent." [0107]

As per Claim 11, Lolla discloses "The method of claim 1". Lolla fails to explicitly disclose "wherein the one or more machine learning models comprise one or more of a U-Net based model and a vanilla CNN based model". However, Gao discloses "wherein the one or more machine learning models comprise one or more of a U-Net based model and a vanilla CNN based model" [A convolutional neural network is a special type of neural network. The fundamental difference between a densely connected layer and a convolution layer is this: dense layers learn global patterns in their input feature space, whereas convolution layers learn local patterns: in the case of images, patterns found in small 2D windows of the inputs. This key characteristic gives convolutional neural networks two interesting properties: (1) the patterns they learn are translation invariant and (2) they can learn spatial hierarchies of patterns. 0104]. One of ordinary skill in the art would have recognized that applying the known technique of Lolla with the known techniques of Gao, namely the well known use of convolution neural networks, would have yielded predictable results and resulted in an improved system.
Accordingly, applying the teachings of Lolla with the teachings of Gao would have been recognized by those of ordinary skill in the art as resulting in the use of a commonly known type of neural network: "A convolutional neural network learns highly non-linear mappings by interconnecting layers of artificial neurons arranged in many different layers with activation functions that make the layers dependent." [0107]

As per Claim 17, Gao discloses "wherein the generating occurs within less than 20 seconds from receipt of at least a portion of the data indicative of a full event pattern for the event." [Some layers also use batch normalization (Ioffe and Szegedy 2015). Regarding batch normalization, the distribution of each layer in a convolution neural network (CNN) changes during training and it varies from one layer to another. This reduces the convergence speed of the optimization algorithm. Batch normalization is a technique to overcome this problem. Denoting the input of a batch normalization layer with x and its output using z, batch normalization applies the following transformation on x: 0211]. It would have been obvious to those of ordinary skill in the art of computational models and neural network models that modifying and tuning a neural computation model to output results in under 20 seconds is a design choice comprised of tuning the model inputs, training, and computational intensity to meet the time output constraint of 20 seconds.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Lolla et al. (hereinafter Lolla) (US PGPUB 20190324166) in view of Loviken et al. (hereinafter Loviken) (WIPO WO2022235481).

As per Claim 15, Loviken discloses "wherein the liner hanger job comprises different events and wherein the one or more machine learning models comprise different machine learning models for at least two of the different events." [As explained, one or more types of ML models may be utilized with one or more types of loss.
In various instances, an approach to loss may dictate performance for a particular task or tasks. In various instances, a method can include utilizing the same ML model (e.g., a common neural network) multiple times to compute one element of the loss, for example, where the total loss is a sum or mean of loss elements. 0106]

One of ordinary skill in the art would have recognized that applying the known technique of Lolla with the known techniques of Loviken, namely the well known use of different types of models for different types of events, would have yielded predictable results and resulted in an improved system. Accordingly, applying the teachings of Lolla with the teachings of Loviken would have been recognized by those of ordinary skill in the art as resulting in the use of a commonly known approach: "For example, consider one partial loss for each data point, which can be a measure of how similar the network output of that data point is to a known ideal output. As another example, each loss element can be based on relationships of pairs of points. As explained, one or more types of ML models may be utilized to transform data points into scores, from which a loss can be computed." [0107]

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KENNETH M LO, whose telephone number is (571) 272-9774. The examiner can normally be reached M-F, 8:30am-6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Cottingham, can be reached at 571-272-9877. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

KENNETH M. LO
Supervisory Patent Examiner, Art Unit 2136

/KENNETH M LO/
Supervisory Patent Examiner, Art Unit 2116
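Eq. 2 from the cited Lolla reference (quoted under the Claim 4 mapping above) computes the data amplitude as the Euclidean norm of the three-component seismic traces. A minimal sketch, with illustrative sample values only:

```python
# Sketch of Eq. 2 from the cited Lolla reference (US PGPUB 20190324166):
# m(t) = sqrt(V_E(t)^2 + V_N(t)^2 + V_D(t)^2), where V_E, V_N, V_D are the
# seismic data traces in an Easting-Northing-Depth coordinate system.
# The sample values below are illustrative, not data from the reference.
import math

def amplitude(v_e: float, v_n: float, v_d: float) -> float:
    """Return m(t) for one time sample of the three component traces."""
    return math.sqrt(v_e**2 + v_n**2 + v_d**2)

# In practice this is applied sample-by-sample over whole time series.
print(amplitude(3.0, 4.0, 12.0))  # 13.0
```

As the reference notes, the traces may first be projected onto any preferred orthonormal coordinate system; the norm itself is coordinate-system agnostic.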

Prosecution Timeline

Jan 26, 2024
Application Filed
Mar 09, 2026
Non-Final Rejection — §101, §102, §103, §112
Mar 19, 2026
Interview Requested

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12591515: REDUCING MEMORY POWER USAGE IN FAR MEMORY
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12591210: APPARATUS, METHOD, AND COMPUTER-READABLE STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12554637: RUNTIME DE-INTERLEAVE AND RE-INTERLEAVE OF SYSTEM MEMORY
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12535958: NETWORK ATTACHED STORAGE (NAS) SERVER PLACEMENT IN A HETEROGENEOUS STORAGE CLUSTER
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12474833: Queue Bandwidth Estimation for Management of Shared Buffers and Allowing Visibility of Shared Buffer Status
Granted Nov 18, 2025 (2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 44%
With Interview: 76% (+32.4%)
Median Time to Grant: 4y 1m
PTA Risk: Low

Based on 243 resolved cases by this examiner. Grant probability derived from career allow rate.
