Prosecution Insights
Last updated: April 19, 2026
Application No. 18/698,757

Neural Network Image Enhancement

Non-Final OA: §101, §103, §112
Filed: Apr 04, 2024
Examiner: DRYDEN, EMMA ELIZABETH
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: Purdue Research Foundation
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 83%

Examiner Intelligence

Grants 58% of resolved cases.

Career Allow Rate: 58% (7 granted / 12 resolved; -3.7% vs TC avg)
Interview Lift: +25.0% (strong lift; resolved cases with an interview vs. without)
Typical Timeline: 3y 3m avg prosecution; 34 applications currently pending
Career History: 46 total applications across all art units
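
As a quick sanity check on how the figures above fit together, the short Python sketch below rederives the headline numbers from the raw counts. The Tech Center average is back-solved from the reported -3.7% delta and the interview lift is treated as additive percentage points; both are assumptions for illustration, not disclosed methodology.

```python
# Hedged illustration of how the examiner-intelligence values relate;
# not the platform's actual model. TC average is inferred from the -3.7% delta.
granted, resolved = 7, 12
allow_rate = granted / resolved              # 0.583 -> shown as 58% Career Allow Rate
tc_avg_estimate = allow_rate + 0.037         # implied Tech Center average, roughly 62%
interview_lift = 0.25                        # +25.0% lift for resolved cases with interview
with_interview = allow_rate + interview_lift # roughly 83%, the "With Interview" figure
print(f"allow {allow_rate:.0%} | TC avg ~{tc_avg_estimate:.0%} | with interview {with_interview:.0%}")
```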

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 56.4% (+16.4% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center average shown as an estimate • Based on career data from 12 resolved cases
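
The per-statute deltas are plain percentage-point differences against the Tech Center estimate. The sketch below (Python, values copied from this report) back-computes the implied baseline, which works out to roughly 40% for each statute; treat that baseline as an inference from the reported deltas, not a published figure.

```python
# Examiner per-statute rate and the reported delta vs the Tech Center average.
# The implied TC baseline is back-solved here; it is an estimate, not source data.
stats = {"§101": (9.7, -30.3), "§103": (56.4, +16.4), "§102": (16.6, -23.4), "§112": (13.9, -26.1)}
for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta            # about 40.0 for every statute in this report
    print(f"{statute}: {rate:.1f}% examiner vs ~{implied_tc_avg:.1f}% TC avg ({delta:+.1f} pts)")
```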

Office Action

§101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Receipt is acknowledged that application is a National Stage application of PCT/US2021/054721. Priority to PCT/US2021/054721 with a priority date of 10/13/2021 is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Specification
The disclosure is objected to because of the following informalities: in para 32, “the instructions 221” should read “the instructions 222”. Appropriate correction is required.

Claim Objections
Claims 7-8 and 13 are objected to because of the following informalities: 1) claim 7 should be dependent on claim 5 and 2) in claims 8 and 13, “an increase image quality that is higher than the base resolution” should read “an increased image quality that is higher than the base image quality”. Appropriate correction is required.

Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 3-5 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. Claim 3 contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Sufficient detail describing how an amount of increase in increased resolution and an amount of increase in increased image quality is balanced using neural network calculations is absent from the specification. Dependent claims 4-5 are similarly rejected due to their dependence on a rejected base claim.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. In claim 3, the term “to balance”, in regard to an amount of increase in increased resolution and an amount of increase in increased image quality, renders the claim indefinite. The metes and bounds of what is considered a balanced amount of increase in increased resolution and increased image quality is not defined by the claim or the specification, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For examination purposes, neural network calculations will be interpreted to “balance an amount of increase in the increased resolution with an amount of increase in the increased image quality of the enhanced image” if visual features from the original image before compression are preserved in the generated enhanced image. Dependent claims 4-5 are similarly rejected due to their dependence on a rejected base claim.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4 and 6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Under Step 1, claim 1 is a product claim.
Under Step 2A Prong One, all claims recite mathematical concepts – mathematical relationships, formulas, and calculations (see MPEP § 2106.04(a)(2), subsection I). These mathematical concepts are more particularly recited in claim 1 as: perform, via an individual neural network, a plurality of neural network calculations on the captured image to form an enhanced image. In claim 1, a neural network performs mathematical calculations to generate an image. Dependent claims 2-4 and 6 recite limitations that modify either the neural network or the final result of the calculations, and are thus further part of the abstract idea of claim 1. Thus, claims 1-4 and 6 recite mathematical concepts.
Under Step 2A Prong Two, this judicial exception is not integrated into a practical application because each of claims 1-4 and 6 do not recite additional elements that integrate the exception into a practical application. The additional elements (processor and memory resource, machine-readable instructions, and performing inference and convolutional NN calculations of claims 1-2 and 4) are recited at a high level of generality and merely equate to “apply it” or otherwise merely uses a generic computer as a tool to perform mathematical concepts, which is not indicative of integration into a practical application as per MPEP 2106.05(f). See MPEP 2106.05(f)(2): “Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data) or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation) does not integrate a judicial exception into a practical application or provide significantly more.” The additional elements of the enhanced image having increased resolution and image quality (claim 1) and claims 3 and 6 recite only the idea of a solution or outcome i.e., the claim fails to recite details of how a solution to a problem is accomplished.
The claims omit any details as to how the NN solves a technical problem, and instead recites only the idea of a solution or outcome. Thus, the claim invokes a generic NN/CNN of a computing device merely as a tool for making the recited mathematical calculation rather than purporting to improve the technology or a computer. See MPEP 2106.05(f). Therefore, the limitation represents no more than mere instructions to apply the judicial exception on a computing device.
Under Step 2B, each of claims 1-4 and 6 do not recite additional elements that are indicative of an inventive concept. The additional elements are simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception as per MPEP 2106.05(d) and 2106.07(a)(III). Regarding the additional elements of claims 1-4 and 6 (listed above in Step 2A Prong Two), use of generic computer components to perform an abstract idea amounts to merely an instruction to apply the abstract idea using generic computer elements, and does not integrate the judicial exception into a practical application (see MPEP 2106.05(f) and MPEP 2106.05(I)(A)). Therefore, the additional elements do not amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (Liu, H., Cao, F., Wen, C., & Zhang, Q. (2020). Lightweight multi-scale residual networks with attention for image super-resolution. Knowledge-Based Systems, 203, 106103.), hereinafter Liu, in view of El-Khamy et al. (cited in IDS: U.S. Patent No. 2020/0090305 A1), hereinafter El-Khamy.
Regarding claim 1, Liu teaches a computing device, comprising: a processor resource (Liu, pg. 8, section 4.2: “Titan Xp GPU”); and cause the processor resource to: identify a base resolution of a captured image having a base image quality (Liu, resolution and quality of low-resolution, LR, images input to the model, pg. 7, last sentence: “The entire network adopt the unprocessed LR image as the input”; see LR image in FIG. 7); and perform, via an individual neural network (Liu, AMSRN framework introduced in section 3.5 on pg. 5; see also FIG. 7 attached below), a plurality of neural network calculations on the captured image (Liu, neural network steps executed by the blocks in FIG. 7 are outlined in section 3.5 on pg. 5-7) to form an enhanced image (Liu, high-resolution, HR image; left column on pg. 7: “obtain the final HR image”) having: an increased resolution that is higher than the base resolution (Liu, LR to HR image in FIG. 7; super-resolution task described in abstract on pg. 1); and an increased image quality that is higher than the base image quality (Liu, image quality is recovered, and thus increased from the input, top left of pg. 2: “A lightweight model, AMSRN, is proposed, which alternately utilizes an SCAR block and a residual ASPP block to recover high-quality images without sacrificing numerous parameters”; right column of pg. 11: “image reconstruction quality of the proposed model”).
[Image: excerpt of Liu FIG. 7 (media_image1.png, greyscale)]
Liu fails to explicitly teach a non-transitory memory resource storing machine-readable instructions stored thereon. However, El-Khamy teaches a non-transitory memory resource storing machine-readable instructions stored thereon that, when executed cause a processor to perform neural network calculations including increasing the resolution of an image (El-Khamy, para 12: “in a system for super resolution imaging, the system includes: a processor; and a memory coupled to the processor, wherein the memory stores instructions that, when executed by the processor, cause the processor to: receive a low resolution image; generate an intermediate high resolution image having an improved resolution compared to the low resolution image”; para 14: “convolution neural network”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the non-transitory memory resource of El-Khamy with the computing device of Liu in order to store instructions to execute the neural network calculations (El-Khamy, see citation above).
Regarding claim 2 (dependent on claim 1), Liu in view of El-Khamy teaches wherein the processor resource is to perform the plurality of neural network calculations during inference (Liu, section 4.1 on pg. 8 describes a testing phase).
Regarding claim 3 (dependent on claim 2), Liu in view of El-Khamy teaches wherein the processor resource is to perform the neural network calculations to balance an amount of increase in the increased resolution with an amount of increase in the increased image quality of the enhanced image (Liu, image reconstruction at a higher scale preserves lines, patterns, and/or texture from the original image, right column on pg. 11: “evaluations in terms of the visual quality are provided here. The visual contrasts of the different methods are shown in Figs. 12–15. It is observed that the image reconstruction quality of the proposed model surpasses most of the previous models in lines, patterns, or textures”).
Regarding claim 4 (dependent on claim 3), Liu in view of El-Khamy teaches wherein the neural network is a convolutional neural network (CNN) and wherein the plurality of neural network calculations are a plurality of CNN calculations (Liu, see FIGs. on pg. 6 and CNN in abstract on pg. 1).
Regarding claim 5 (dependent on claim 4), Liu in view of El-Khamy teaches wherein the CNN calculations include CNN calculations to: segment, via a convolution layer of the CNN, the captured image into image dimensions having a given height and a given width (Liu, 3x3 convolution block, section 3.5 on pg. 5: “, ILR is fed into a single 3 × 3 convolution layer”); pool, via a pooling layer of the CNN, the image dimensions to form pooled image dimensions (Liu, atrous spatial pyramid pooling operations, section 3.1 on pg. 3); extract features from the pooled image dimensions (Liu, multi-scale and deep feature extraction using SCAR blocks, see pg. 6); and perform subpixel convolution based on the extracted features and the pooled image dimensions to form the enhanced image (Liu, PixelShuffle in upscale block, see FIG. 6 and section 3.4 on pg. 5).
Regarding claim 6 (dependent on claim 1), Liu in view of El-Khamy teaches wherein the increased resolution is 1.5 times or greater than the base resolution (Liu, pg. 8, SR scales 3 and 4 described in FIG. 9-10 and different scales x2-x4 listed in section 4.2).
Regarding claim 7 (dependent on claim 5), Liu in view of El-Khamy teaches wherein the processor resource is to further to pool the image dimensions using atrous spatial pyramid pooling (ASPP) (Liu, atrous spatial pyramid pooling operations, section 3.1 on pg. 3).
Regarding claim 8, Liu teaches a processor resource (Liu, pg. 8, section 4.2: “Titan Xp GPU”) to: identify a base resolution of a captured image having a base image quality (Liu, resolution and quality of low-resolution, LR, images input to the model, pg. 7, last sentence: “The entire network adopt the unprocessed LR image as the input”; see LR image in FIG. 7); and perform, via an individual convolutional neural network (CNN) (Liu, AMSRN framework introduced in section 3.5 on pg. 5; see FIGs. on pg. 6 and CNN in abstract on pg. 1), a plurality of CNN calculations on the captured image to form an enhanced image having an increased resolution that is higher than the base resolution (Liu, LR to HR image in FIG. 7; super-resolution task described in abstract on pg. 1) and an increased image quality that is higher than the base image quality (Liu, image quality is recovered, and thus increased from the input, top left of pg. 2: “A lightweight model, AMSRN, is proposed, which alternately utilizes an SCAR block and a residual ASPP block to recover high-quality images without sacrificing numerous parameters”; right column of pg. 11: “image reconstruction quality of the proposed model”), the CNN calculations including CNN calculations to: segment the image data into image dimensions having a given height and a given width (Liu, 3x3 convolution block, section 3.5 on pg. 5: “, ILR is fed into a single 3 × 3 convolution layer”); pool the image dimensions to form pooled image dimensions (Liu, atrous spatial pyramid pooling operations, section 3.1 on pg. 3); extract features from the pooled image dimensions (Liu, multi-scale and deep feature extraction using SCAR blocks, see pg. 6); and perform subpixel convolution based on the extracted features and the pooled image dimension to form the enhanced image (Liu, PixelShuffle in upscale block, see FIG. 6 and section 3.4 on pg. 5).
Liu fails to explicitly teach a non-transitory memory resource storing machine-readable instructions stored thereon. However, El-Khamy teaches a non-transitory memory resource storing machine-readable instructions stored thereon that, when executed cause a processor to perform neural network calculations including increasing the resolution of an image (El-Khamy, para 12: “in a system for super resolution imaging, the system includes: a processor; and a memory coupled to the processor, wherein the memory stores instructions that, when executed by the processor, cause the processor to: receive a low resolution image; generate an intermediate high resolution image having an improved resolution compared to the low resolution image”; para 14: “convolution neural network”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the non-transitory memory resource of El-Khamy with the device of Liu in order to store instructions to execute the neural network calculations (El-Khamy, see citation above).
Regarding claim 12 (dependent on claim 8), Liu in view of El-Khamy teaches wherein the processor resource is to determine: an upsampling factor; and utilize the upsampling factor to form the enhanced image (El-Khamy, para 59: “the first individual super resolution network S.sub.1 receives the LR input image 202, or alternatively, a bicubic upsampled version of the LR input image 202, where the upsampling ratio is according to a target upsampling ratio”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the upsampling factor of El-Khamy with the device of Liu in order to control the spatial resolution of the inputs/outputs of the model (El-Khamy, see para 59 and equation).
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of El-Khamy, in further view of Caballero et al. (cited in IDS: U.S. Patent No. 10,701,394 B1), hereinafter Caballero.
Regarding claim 9 (dependent on claim 8), Liu in view of El-Khamy fails to explicitly teach wherein the captured image has undergone lossy image compression to reduce an image resolution of the captured image from an original resolution to the base resolution. However, Caballero teaches an image processing method wherein the captured image has undergone lossy image compression to reduce an image resolution of the captured image from an original resolution to the base resolution (Caballero, col 44, ln 38-40: “The compression of the scene can be implemented using any well-known lossy image/video compression algorithm”). Liu in view of El-Khamy discloses a base method for increasing the resolution and quality of low-resolution images, but does not specify specific methods for how the image was compressed from its original resolution. Caballero teaches the known technique of a lossy image compression method. A person having ordinary skill in the art, before the effective filing date of the claimed invention, could have applied the known technique, as taught by Caballero, in the same way to the method performed by the device of Liu in view of El-Khamy and achieved predictable results of utilizing a well-known image compression technique to reduce file size and preserve memory resources as needed.
Regarding claim 10 (dependent on claim 9), Liu in view of El-Khamy and Caballero teaches wherein the processor resource is to increase the image quality by: sharpening feature boundaries dulled by the lossy image compression; reduction of noise imparted by the lossy image compression; reduction of compression artifacts imparted by the lossy image compression; or any combination thereof (Liu, see the portion of FIG. 7 below where the features, for example the lines of the butterfly wing, in the high-resolution image are sharpened compared to that of the low-resolution image; image reconstruction at a higher scale preserves lines, patterns, and/or texture from the original image, right column on pg. 11: “evaluations in terms of the visual quality are provided here. The visual contrasts of the different methods are shown in Figs. 12–15. It is observed that the image reconstruction quality of the proposed model surpasses most of the previous models in lines, patterns, or textures”).
[Image: portion of Liu FIG. 7 showing butterfly-wing detail (media_image2.png, greyscale)]
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of El-Khamy, in further view of Musunuri et al. (Musunuri, Y. R., & Kwon, O. S. (2021). Deep residual dense network for single image super-resolution. Electronics, 10(5), 555.), hereinafter Musunuri.
Regarding claim 11 (dependent on claim 8), Liu in view of El-Khamy fails to teach wherein the processor resource is to extract the features from the pooled image dimensions using a residual in residual dense block (RRDB) approach. However, Musunuri teaches a method for increasing image resolution (Musunuri, see super-resolution in abstract on pg. 1) wherein the processor resource is to extract the features using a residual in residual dense block (RRDB) approach (Musunuri, see FIG. 1b on pg. 3; above the figure on pg. 5: “the residual block (RRDB) has five convolutional layers to extract the features”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have implemented the RRDB approach, taught by Musunuri, with the pooled image of the device of Liu in view of El-Khamy in order to improve the performance of the model (Musunuri, abstract on pg. 1: “Based on human perceptual characteristics, the residual in residual dense block strategy (RRDB) is exploited to implement various depths in network architectures”).
Claims 13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Caballero.
Regarding claim 13, Liu teaches a computing device, comprising: an image with a base resolution and a base image quality (Liu, resolution and quality of low-resolution, LR, images input to the model, pg. 7, last sentence: “The entire network adopt the unprocessed LR image as the input”; see LR image in FIG. 7); and a processor resource (Liu, pg. 8, section 4.2: “Titan Xp GPU”) to: receive image data of an image (Liu, LR image which is input to the CNN); and perform, via an individual convolutional neural network (CNN) (Liu, AMSRN framework introduced in section 3.5 on pg. 5; see FIGs. on pg. 6 and CNN in abstract on pg. 1), a plurality of CNN calculations on the captured image to form an enhanced image having an increased resolution that is higher than the base resolution (Liu, LR to HR image in FIG. 7; super-resolution task described in abstract on pg. 1) and an increased image quality that is higher than the base image quality (Liu, image quality is recovered, and thus increased from the input, top left of pg. 2: “A lightweight model, AMSRN, is proposed, which alternately utilizes an SCAR block and a residual ASPP block to recover high-quality images without sacrificing numerous parameters”; right column of pg. 11: “image reconstruction quality of the proposed model”), the CNN calculations including CNN calculations to: segment the image data into image dimensions having a given height and a given width (Liu, 3x3 convolution block, section 3.5 on pg. 5: “, ILR is fed into a single 3 × 3 convolution layer”); pool the image dimensions to form pooled image dimensions (Liu, atrous spatial pyramid pooling operations, section 3.1 on pg. 3); extract features from the pooled image dimensions (Liu, multi-scale and deep feature extraction using SCAR blocks, see pg. 6); and perform subpixel convolution based on the extracted features and the pooled image dimension to form the enhanced image (Liu, PixelShuffle in upscale block, see FIG. 6 and section 3.4 on pg. 5).
Liu fails to teach 1) comprising an image capture device to capture an image with a base resolution and a base image quality and 2) receive image data of an image from the image capture device.
However, Caballero teaches a computing device comprising an image capture device to capture an image with a base resolution and a base image quality and receive image data of an image from the image capture device (Caballero, input video frames from a camera, col 38, ln 10-16: “The input video concerned may be media for playback, such as recorded video or live streamed video, or it can be videoconference video or any other video source such as video recorded or being recorded on a portable device such as a mobile phone or a video recording device such as a video camera or surveillance camera”; see the computing process in FIG. 16 attached below). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the image capture device of Caballero with the computing device of Liu in order to improve the resolution of images transmitted from the device during a videoconference (Caballero, see col 38, ln 7-20 cited in claim 15).
Regarding claim 15 (dependent on claim 13), Liu in view of Caballero teaches wherein the computing device further comprises a first computing device that is communicatively coupled to and in a teleconference with a second computing device (See citations below), and wherein the processor resource is to further to: receive, during the teleconference, the image data of the image from an image capture device of the first computing device (Caballero, col 36, ln 25-40: “a technique 1500 is used to increase the resolution of visual data will now be described in detail. These embodiments can be used in combination with other embodiments described elsewhere in this specification. Received video data 1540 is provided into a decoder system and is a lower-resolution video encoded in a standard video format… The system then separates video data 1510 into single frames at step 1520, i.e. into a sequence of images at the full resolution of the received video data 310”); and provide the enhanced image to the second computing device during the teleconference (Caballero, col 38, ln 7-20: “Further, output video 1570 in process 1500 or 1600 can be output directly to a display or may be stored for viewing on a display on a local or remote storage device, or forwarded to a remote node for storage or viewing as required. The input video concerned may be media for playback, such as recorded video or live streamed video, or it can be videoconference video or any other video source such as video recorded or being recorded on a portable device such as a mobile phone or a video recording device such as a video camera or surveillance camera”).
[Image: Caballero FIG. 16 computing process (media_image3.png, greyscale)]
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Caballero, in further view of Musunuri.
Regarding claim 14 (dependent on claim 13), Liu in view of Caballero teaches wherein the processor resource is to further to: pool the image dimensions to form the pooled image dimensions using atrous spatial pyramid pooling (ASPP) (Liu, atrous spatial pyramid pooling operations, section 3.1 on pg. 3), but fails to teach to extract the features from the pooled image dimensions using a residual in residual dense block (RRDB). However, Musunuri teaches a method for increasing image resolution (Musunuri, see super-resolution in abstract on pg. 1), including to extract the features using a residual in residual dense block (RRDB) (Musunuri, see FIG. 1b on pg. 3; above the figure on pg. 5: “the residual block (RRDB) has five convolutional layers to extract the features”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have implemented the RRDB approach, taught by Musunuri, with the pooled image of the device of Liu in view of Caballero in order to improve the performance of the model (Musunuri, abstract on pg. 1: “Based on human perceptual characteristics, the residual in residual dense block strategy (RRDB) is exploited to implement various depths in network architectures”).

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Li et al. (U.S. Patent No. 2023/0177646 A1) teaches a convolutional neural network that increases the resolution of image data and may be implemented during a video call (FIG. 2, abstract, para 96).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMA E DRYDEN whose telephone number is (571)272-1179. The examiner can normally be reached M-F 9-5 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW BEE can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EMMA E DRYDEN/
Examiner, Art Unit 2677
/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677
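
For technical orientation (this annotation is part of the report, not the Office Action): claims 5, 8, and 13 recite a pipeline of an initial convolution over the captured image, pooling of the image dimensions (ASPP per claims 7 and 14), feature extraction, and subpixel convolution, which the rejection maps to Liu's AMSRN. The sketch below is a minimal, hypothetical PyTorch illustration of that pipeline shape; the module names, channel counts, and dilation rates are assumptions for illustration, not the applicant's claimed implementation or Liu's published architecture.

```python
# Minimal, hypothetical super-resolution pipeline sketch (PyTorch assumed):
# conv head -> ASPP-style pooling -> feature extraction -> PixelShuffle upscale.
import torch
import torch.nn as nn

class ASPPPool(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated 3x3 convs fused by a 1x1 conv."""
    def __init__(self, ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=r, dilation=r) for r in rates]
        )
        self.fuse = nn.Conv2d(ch * len(rates), ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

class SuperResSketch(nn.Module):
    def __init__(self, ch=64, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)       # "segment" the image into feature maps
        self.pool = ASPPPool(ch)                         # pool the image dimensions (ASPP-style)
        self.features = nn.Sequential(                   # extract features; an RRDB stack could
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),  #  be swapped in here (claims 11, 14)
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
        self.upscale = nn.Sequential(                    # subpixel convolution
            nn.Conv2d(ch, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lr_image):
        x = self.head(lr_image)
        x = self.pool(x)
        x = self.features(x) + x                         # light residual connection
        return self.upscale(x)

if __name__ == "__main__":
    lr = torch.rand(1, 3, 64, 64)            # base-resolution input
    hr = SuperResSketch(scale=2)(lr)         # enhanced image at 2x the base resolution
    print(hr.shape)                          # torch.Size([1, 3, 128, 128])
```

An RRDB-style extractor (claims 11 and 14) or a bicubic pre-upsampling step keyed to a target upsampling factor (claim 12) could be dropped into the features or head stages without changing the overall shape of this pipeline.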

Prosecution Timeline

Apr 04, 2024: Application Filed
Jan 23, 2026: Non-Final Rejection under §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561873: IMAGE PROCESSING APPARATUS AND METHOD (granted Feb 24, 2026; 2y 5m to grant)
Patent 12543950: SLIT LAMP MICROSCOPE, OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC SYSTEM, METHOD OF CONTROLLING SLIT LAMP MICROSCOPE, AND RECORDING MEDIUM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12526379: AUTOMATIC IMAGE ORIENTATION VIA ZONE DETECTION (granted Jan 13, 2026; 2y 5m to grant)
Patent 12340443: METHOD AND APPARATUS FOR ACCELERATED ACQUISITION AND ARTIFACT REDUCTION OF UNDERSAMPLED MRI USING A K-SPACE TRANSFORMER NETWORK (granted Jun 24, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 4 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 83% (+25.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
