Prosecution Insights
Last updated: April 19, 2026
Application No. 18/366,343

Increasing Petrophysical Image Log Resolution using Deep Learning Techniques

Status: Final Rejection (§103)
Filed: Aug 07, 2023
Examiner: HAIDER, SYED
Art Unit: 2633
Tech Center: 2600 (Communications)
Assignee: Saudi Arabian Oil Company
OA Round: 2 (Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 83% (709 granted / 850 resolved; +21.4% vs TC avg). This examiner grants above average.
Interview Lift: +4.4% (minimal), measured over resolved cases with an interview.
Typical Timeline: 2y 6m average prosecution.
Career History: 885 total applications across all art units; 35 currently pending.
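The headline figures above can be reproduced from the raw counts on this page. A minimal sketch, assuming the tool simply divides grants by resolved cases and adds the interview lift to the career allow rate (the exact methodology is not stated here):

```python
# Sketch of the examiner statistics above. The counts (709 granted of
# 850 resolved) come from the page; treating "with interview" as the
# career allow rate plus 4.4 points is an assumption about the tool.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

base = allow_rate(709, 850)        # ~83.4%, displayed rounded as 83%
with_interview = base + 4.4        # +4.4 point interview lift -> ~88%

print(round(base), round(with_interview))  # -> 83 88
```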

Statute-Specific Performance

§101:  5.6%  (-34.4% vs TC avg)
§103: 54.5%  (+14.5% vs TC avg)
§102: 22.9%  (-17.1% vs TC avg)
§112:  9.2%  (-30.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 850 resolved cases.
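As a sanity check on the table above, subtracting each reported delta from the examiner's rate should recover the Tech Center average. Assuming "vs TC avg" means a simple difference (an assumption about the tool's arithmetic), all four statutes imply the same baseline:

```python
# Recover the Tech Center averages implied by the table above, assuming
# "vs TC avg" is the simple difference (examiner rate - TC average).
examiner = {"101": 5.6, "103": 54.5, "102": 22.9, "112": 9.2}
delta_vs_tc = {"101": -34.4, "103": 14.5, "102": -17.1, "112": -30.8}

tc_average = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(tc_average)  # every statute implies the same 40.0% TC baseline
```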

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 10/30/2025 with respect to independent claims 1, 8, and 15 (and their respective dependent claims) have been fully considered but are not persuasive.

Regarding claims 1, 8, and 15, Applicant argues that "The Applicants respectfully submit that the training dataset of Yao is explicitly specified as 'several pairs of low-resolution images and high-resolution images.' Yao, para. [0093]. The Examiner fails to show that the training dataset of Yao encompasses 'random image data.' Guner does not remedy this deficiency of Yao. For at least this reason, the present claims are allowable over the cited references." (Remarks, page 1).

The Examiner respectfully disagrees. As Applicant notes, Yao discloses in paragraph 93 that "The training dataset herein includes several pairs of low-resolution images and high-resolution images." This training dataset corresponds to the claimed "random image data": the dataset contains an unspecified number ("several") of image pairs. Yao, however, does not explicitly disclose random image data that is a collection of images without specificity; Guner does. Guner's paragraph 29 discloses that "the described methods are equally applicable to borehole images obtained via other means such as acoustic imagers or density imagers" (see also paragraphs 69 and 82). The Guner reference therefore discloses the argued limitation as presented by Applicant.
Applicant further argues that "The Examiner's statement that '[i]t would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify Rao in view of Guner teachings by implementing a deep neural network to the system, as taught by Lin' is not sufficient to establish a prima facie case of obviousness as the Examiner simply makes a conclusory statement without describing the combination of Yao, Guner, and Lin. The Examiner fails to describe 'how' the purported model of Lin operates as modified by the alleged reduced resolution of images of Lin. Referring to the present rejection, Yao purports to describe constructing a training dataset including 'paired low-resolution images and high-resolution images' that 'are input into the foregoing configured super-resolution network.' Yao, para. [0093]. Guner purports to describe a petrophysical image log. Guner, para. [0029]. Lin purports to describe reducing a resolution of images. Lin, para. [0061]. The Examiner fails to show how the specified pairs of low-resolution images and high-resolution images in Yao are combined with the petrophysical image log of Guner, and how the resolution of the specified images is reduced as allegedly described in Lin. Since the Examiner has not provided a reasoned explanation of how to combine Yao, Guner, and Lin, independent claims 1, 8, and 15 are allowable over the cited references. Applicant respectfully requests that the Examiner withdraw the rejection of claims 1-20." (Remarks, pages 2-3).
The Examiner respectfully disagrees. In response to Applicant's argument that the Examiner has not provided a reasoned explanation of how to combine Yao, Guner, and Lin, the Examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so, found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).

In this case, the Examiner has provided proper reasoning for combining the teachings of Yao, Guner, and Lin. The primary reference, Yao, relates to processing images using machine learning (Yao, paragraph 8); so do Guner (Guner, paragraph 21) and Lin (Lin, paragraph 48). Each cited reference thus addresses the same well-known problem, improving image processing using machine learning, so the combination is simple and straightforward, as explained below. For the motivation to combine the references, the Examiner relied on the teachings of the prior art, not on the disclosure of the pending application. In view of the above explanation, Yao in view of Guner in view of Lin reads on the argued limitations as presented by Applicant. The Examiner suggests that Applicant further elaborate in the claims on how the training is performed, if it differs from the cited references, in order to overcome them.
Applicant's arguments with respect to the claim objections have been fully considered and are persuasive in view of the amendments (Remarks, page 1, claim objections); the objection has therefore been withdrawn.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 8-9, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Yao (US PGPUB 2021/0004935 A1) in view of Guner (US PGPUB 2023/0245278 A1) and further in view of Lin (US PGPUB 2016/0328644 A1).

As per claim 1, Yao discloses a computer-implemented method for increasing (petrophysical image log) image resolution (Yao, Figs. 1-5), the method comprising: preparing, using at least one hardware processor, a group of images for training a machine learning model, wherein a respective image of the group of images comprises random image data (Yao, paragraph 93, discloses that a training dataset needs to be constructed and that "The training dataset herein includes several pairs of low-resolution images and high-resolution images"); training, using the at least one hardware processor, the machine learning model using the prepared group of images, wherein the machine learning model learns a function that increases a resolution of the prepared group of images (Yao, paragraphs 22 and 93, discloses that the paired low-resolution images and high-resolution images are input into the configured super-resolution network and computed using weight vectors of the layers of the network to output a reconstructed high-resolution image; a loss function is computed based on the output high-resolution image and the high-resolution image included in the training dataset, computation is performed based on a back propagation algorithm of the neural network, and each layer parameter that needs to be learned in the super-resolution network is updated to obtain a weight vector W); and inputting, using the at least one hardware processor, (unseen) images to the trained machine learning model, wherein the trained machine learning model outputs respective high-resolution images (Yao, paragraphs 22, 93-95, and 147, discloses that the trained deep neural network may enable the output high-resolution image to have optimal image quality; each weighting value obtained using the trained deep neural network enables the output feature map to have a better expression capability, and, given a group of weighting values, different weighting values may be assigned more pertinently to different local features in the input feature map so that a high-resolution image of higher quality is obtained using the image super-resolution method).
Although Yao discloses random image data and the generation of super-resolution images by inputting images to a machine learning model, as discussed above, Yao does not explicitly disclose a petrophysical image log, (random) image data that is a collection of images without specificity, or inputting unseen images to the machine learning model. Guner discloses a petrophysical image log (Guner, paragraphs 29 and 49), (random) image data that is a collection of images without specificity (Guner, paragraphs 29, 69 and 82), and inputting unseen images to a machine learning model (Guner, paragraphs 88 and 90). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yao's teachings by providing specific images to the machine learning model of Yao, as taught by Guner. The motivation would be to provide an improved system with reduced cost (Guner, paragraph 88).

Yao in view of Guner discloses preparing the group of images for training the machine learning model, as explained above, but does not explicitly disclose that this comprises reducing a resolution of respective images of the group of images. Lin discloses reducing a resolution of respective images of the group of images (Lin, paragraphs 61 and 72, discloses that "The new configuration may be determined by reducing image resolution of the current artificial neural network. In particular, the image resolution may be reduced at various stages of the DCN"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yao in view of Guner by training a neural network of Yao as taught by Lin. The motivation would be to provide an efficient artificial neural network with improved performance (Lin, paragraph 62).
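The pipeline the rejection stitches together, reducing resolution to form (low-res, high-res) training pairs per Lin and then minimizing a loss by backpropagation per Yao, can be illustrated with a deliberately tiny stand-in. The 1-D "image", average-pooling downsampler, and single learnable gain `w` below are all illustrative assumptions, not the cited references' actual networks:

```python
# Toy stand-in for the cited training pipeline: build (low, high) pairs
# by reducing resolution, then fit a parameter by gradient descent on a
# mean-squared-error loss. All specifics here are illustrative.

def downsample(img, factor=2):
    """Reduce resolution by average-pooling adjacent samples (Lin-style pair prep)."""
    return [sum(img[i:i + factor]) / factor for i in range(0, len(img), factor)]

def upsample(img, gain, factor=2):
    """Nearest-neighbour upsampling scaled by a learnable gain."""
    return [gain * v for v in img for _ in range(factor)]

high = [0.2, 0.2, 0.8, 0.8, 0.4, 0.4, 0.6, 0.6]  # stand-in "high-res" image
low = downsample(high)                            # paired "low-res" image

x = upsample(low, 1.0)                            # model output before the gain
w, lr = 0.0, 0.5
for _ in range(100):                              # Yao-style loop: MSE loss + gradient step
    grad = sum(2 * (w * xi - hi) * xi for xi, hi in zip(x, high)) / len(x)
    w -= lr * grad

print(round(w, 3))  # -> 1.0, the gain that best reconstructs the high-res image
```

With real 2-D image logs the scalar gain would be replaced by convolutional weights, but the pair-construction and loss-driven update steps are structurally the ones the cited passages describe.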
As per claim 2, Yao in view of Guner in view of Lin further discloses the computer-implemented method of claim 1, wherein preparing the group of images for training the machine learning model comprises resizing respective images of the group of images (Yao, paragraphs 7, 23 and 120, discloses size conversion).

As per claim 8, Yao discloses an apparatus comprising a non-transitory, computer readable storage medium that stores instructions that, when executed by at least one processor, cause the at least one processor to perform operations (Yao, paragraphs 222 and 225). For the rest of the claim limitations, please see the analysis of claim 1.

As per claim 9, please see the analysis of claim 2.

As per claim 15, Yao discloses a system comprising: one or more memory modules (Yao, paragraphs 222 and 225); and one or more hardware processors communicably coupled to the one or more memory modules, the one or more hardware processors configured to execute instructions stored on the one or more memory modules to perform operations (Yao, paragraphs 222 and 225). For the rest of the claim limitations, please see the analysis of claim 1.

As per claim 16, please see the analysis of claim 2.

Claims 4-5, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Yao (US PGPUB 2021/0004935 A1) in view of Guner (US PGPUB 2023/0245278 A1), further in view of Lin (US PGPUB 2016/0328644 A1), and further in view of Chen (US PGPUB 2021/0056433 A1).

As per claim 4, Yao in view of Guner in view of Lin further discloses the computer-implemented method of claim 1, wherein the trained machine learning model comprises three convolution layers (Yao, paragraphs 9, 90 and 104). Yao in view of Guner in view of Lin does not explicitly disclose that the trained machine learning model comprises a reshaping layer. Chen discloses a trained machine learning model comprising a reshaping layer (Chen, paragraph 38).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yao in view of Guner in view of Lin by implementing a reshaping layer in the machine learning model, as taught by Chen. The motivation would be to provide an improved machine learning model that avoids wasting computing resources (Chen, paragraph 42).

As per claim 5, Yao in view of Guner in view of Lin further discloses the computer-implemented method of claim 1, wherein the trained machine learning model comprises a (reshaping) layer that increases a size of the respective high-resolution images (Yao, paragraphs 9, 83, and 93, discloses that the direct branch can well overcome a convergence problem of the deep neural network in order to improve the effect of image super-resolution based on the deep neural network; in addition, because values of the input feature map and the input feature map obtained after the nonlinear transformation are selected according to a corresponding proportion and then added in the weighted processing manner used herein, the image quality of the finally reconstructed high-resolution image is greatly improved) to (an original size of the) unseen images (Guner, paragraphs 71, 82, 88 and 90). Yao in view of Guner in view of Lin does not explicitly disclose a reshaping layer that increases a size of the respective image to an original size of the image. Chen discloses a reshaping layer that increases a size of the respective image to an original size of the image (Chen, paragraph 38, discloses a reshape layer that adjusts the input data to a specific size). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yao in view of Guner in view of Lin by implementing a reshaping layer in the machine learning model, as taught by Chen.
The motivation would be to provide an improved machine learning model that avoids wasting computing resources (Chen, paragraph 42).

As per claim 11, please see the analysis of claim 4. As per claim 12, please see the analysis of claim 5. As per claim 18, please see the analysis of claim 4. As per claim 19, please see the analysis of claim 5.

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Yao (US PGPUB 2021/0004935 A1) in view of Guner (US PGPUB 2023/0245278 A1), further in view of Lin (US PGPUB 2016/0328644 A1), and further in view of Abbot (US PGPUB 2019/0108421 A1).

As per claim 7, Yao in view of Guner in view of Lin discloses the computer-implemented method of claim 1, including preparing the group of images for training the machine learning model, but does not explicitly disclose converting the images to greyscale. Abbot discloses converting the images to greyscale (Abbot, paragraph 22). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yao in view of Guner in view of Lin by performing image conversion, as taught by Abbot. The motivation would be to provide an improved machine learning model with increased accuracy (Abbot, paragraph 94).

As per claim 14, please see the analysis of claim 7.

Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yao (US PGPUB 2021/0004935 A1) in view of Guner (US PGPUB 2023/0245278 A1), further in view of Lin (US PGPUB 2016/0328644 A1), and further in view of Korkin (US PGPUB 2016/0117800 A1).
As per claim 6, Yao in view of Guner in view of Lin discloses the computer-implemented method of claim 1, but does not explicitly disclose that the machine learning model is iteratively trained until a pixel-wise signal to noise ratio of outputs of the trained machine learning model is improved relative to earlier iterations of the trained machine learning model. Korkin discloses a machine learning model iteratively trained until a pixel-wise signal to noise ratio of its outputs is improved relative to earlier iterations (Korkin, paragraphs 51, 83, and 85, discloses that a luminosity patch may be processed using a pre-trained neural network to improve signal-to-noise ratio and resolution). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Yao in view of Guner in view of Lin by implementing a pre-trained neural network in the system, as taught by Korkin. The motivation would be to enhance the properties of the images (Korkin, paragraph 30).

As per claim 13, please see the analysis of claim 6. As per claim 20, please see the analysis of claim 6.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED Z HAIDER, whose telephone number is (571) 270-5169. The examiner can normally be reached MONDAY-FRIDAY, 9-5:30 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SAM K Ahn, can be reached at 571-272-3044. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SYED HAIDER/
Primary Examiner, Art Unit 2633
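Two of the secondary-reference limitations in the action above, converting images to greyscale (claim 7, Abbot) and iterating until a pixel-wise signal-to-noise ratio improves relative to earlier iterations (claim 6, Korkin), reduce to short formulas. A minimal sketch, assuming the common ITU-R BT.601 luma weights and an 8-bit peak value, neither of which is specified by the cited references:

```python
import math

def to_grey(rgb_pixels):
    """Greyscale conversion using BT.601 luma weights (an assumed convention)."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb_pixels]

def psnr(output, target, peak=255.0):
    """Pixel-wise peak signal-to-noise ratio in dB."""
    mse = sum((o - t) ** 2 for o, t in zip(output, target)) / len(target)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

target = [10.0, 200.0, 30.0, 90.0]        # hypothetical ground-truth pixels
iterations = [[0.0, 180.0, 20.0, 80.0],   # model output, training round 1
              [8.0, 195.0, 28.0, 88.0],   # round 2
              [10.0, 199.0, 30.0, 90.0]]  # round 3
scores = [psnr(out, target) for out in iterations]

# "Iteratively trained until ... improved relative to earlier iterations":
# keep training while each round's PSNR beats the previous round's.
assert scores == sorted(scores)
print([round(s, 1) for s in scores])
```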

Prosecution Timeline

Aug 07, 2023: Application Filed
Jul 30, 2025: Non-Final Rejection (§103)
Oct 30, 2025: Response Filed
Nov 29, 2025: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602430: Method for Constructing Positioning DB Using Clustering of Local Features and Apparatus for Constructing Positioning DB (2y 5m to grant; granted Apr 14, 2026)
Patent 12604296: NETWORKED ULTRAWIDEBAND POSITIONING (2y 5m to grant; granted Apr 14, 2026)
Patent 12597163: Systems and Methods to Optimize Imaging Settings for a Machine Vision Job (2y 5m to grant; granted Apr 07, 2026)
Patent 12586394: METHOD, APPARATUS AND SYSTEM FOR AUTO-LABELING (2y 5m to grant; granted Mar 24, 2026)
Patent 12579676: EGO MOTION-BASED ONLINE CALIBRATION BETWEEN COORDINATE SYSTEMS (2y 5m to grant; granted Mar 17, 2026)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 88% (+4.4%)
Median Time to Grant: 2y 6m
PTA Risk: Moderate
Based on 850 resolved cases by this examiner. Grant probability derived from career allow rate.
