Prosecution Insights
Last updated: April 19, 2026
Application No. 18/456,603

METHOD AND SYSTEM FOR IMAGE ENHANCEMENT

Non-Final OA (§102, §103)

Filed: Aug 28, 2023
Examiner: KEUP, AIDAN JAMES
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Centre For Intelligent Multidimensional Data Analysis Limited
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 80% (48 granted / 60 resolved), +18.0% vs TC avg, above average
Interview Lift: +12.0% (moderate lift, in resolved cases with interview)
Typical Timeline: 3y 3m average prosecution; 22 currently pending
Career History: 82 total applications across all art units

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 45.8% (+5.8% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 60 resolved cases.

Office Action

§102 §103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Claim Status The status of claims 1-20: Claims 1-20 are pending. Claim Objections Claim 1 is objected to because of the following informalities: in line 8, the claim language states “to a sample specific property a normal image” when it should state “to a sample specific property of a normal image”. Appropriate correction is required. Claim 10 is objected to because of the following informalities: there is no period at the end of the claim. Appropriate correction is required. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. 
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: “memory reading module” in claims 13-14 and 18-19 “adaptive fusion module” in claims 13-16 “output module” in claim 13. “pre-trained image enhancer” in claim 13. “pre-trained feature generator” in claim 13. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 1-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ye et al. (Ye, D., Ni, Z., Yang, W., Wang, H., Wang, S., & Kwong, S. (2023). Glow in the dark: Low-light image enhancement with external memory. IEEE Transactions on Multimedia, 26, 2148-2163., hereinafter “Ye”). 
Since the application names fewer joint inventors than the publication (Wenhan Yang and Hanli Wang not listed in application), it is not readily apparent from the publication that it is an inventor-originated disclosure and does not qualify under the prior art exception in 102(b)(1)(A). MPEP 2153.01(a). In order to be effective to show that a grace period inventor-originated public disclosure is not prior art under AIA 35 U.S.C. 102(a)(1) because the AIA 35 U.S.C. 102(b)(1)(A) exception applies, the statement must convey the same information as would be required in a declaration under 37 CFR 1.130(a). See MPEP §§ 717.01(a)(1) , 2155.01, and 2155.03. Regarding claim 1, Ye discloses a computer implemented method of image enhancement (Ye Page 2148: “In this paper, we introduce external memory to form an external memory-augmented network (EMNet) for low-light image enhancement”) comprising the steps of: receiving an input image (Ye Page 2151: “The pre-trained image enhancer is a pre-trained low-light image enhancement model that generates the enhanced image ˆI from the input low-light image I”, Ye Fig. 2: input image), wherein the input image is a low light image (Ye Page 2148: “In this paper, we introduce external memory to form an external memory-augmented network (EMNet) for low-light image enhancement”), processing the input image, by a pre trained enhancer (Ye Fig. 2: Pre-trained image enhancer), to generate an initial enhanced image (Ye Page 2148: “In this paper, we introduce external memory to form an external memory-augmented network (EMNet) for low-light image enhancement”), accessing, from an image memory (Ye Fig. 2: memory reading), a response value corresponding to a sample specific property a normal image (Ye Page 2151: “The feature generator is a pre-trained ResNet-18, which takes the low-light image I as input and outputs a query feature q. q is utilized to retrieve the most relevant response values vr from the memory dictionary via the memory reading module”; Ye Fig. 2: response value), generating an adjustment factor based on the response value from the image memory (Ye Page 2151: “In the testing phase, vr and o are fused adaptively to generate the adaptively adjusted factor”; Ye Fig. 2: adaptively adjusted factor), generating a final enhanced image by applying the adjustment factor to the initial enhanced image (Ye Page 2151: which adjusts the final enhancement result via Ia = ˆI · a and makes it align with the normal-light image”; Ye Fig. 2: Final adjusted image). Regarding claim 2, Ye discloses the method, wherein the adjustment factor is generated based on combining the response value with global average pooling data related to the initial enhanced image (Ye Page 2151: “Then ˆI is globally average pooled into o for further adaptive fusion”; Ye Fig. 2: adaptive fusion). Regarding claim 3, Ye discloses the method, wherein the adjustment factor is generated by applying an adaptive fusion function (Ye Page 2151: “Then ˆI is globally average pooled into o for further adaptive fusion”; Ye Fig. 2: adaptive fusion). Regarding claim 4, Ye discloses the method, wherein applying an adaptive fusion function comprises: calculating a ratio of a sample specific property (Ye Page 2152: “Specifically, the vr and o are first used to generate a ratio through element-wise division”, Ye Fig. 5: division) calculating a global average pooling data through element wise division (Ye Page 2152: “Specifically, the vr and o are first used to generate a ratio through element-wise division”, Ye Fig. 
5: division), concatenating the sample specific property and the global average pooling data, to generate a concatenated value (Ye Page 2152: “A softmax function is employed to generate the weight from the concatenation of vr and o”; Ye Fig. 5: concatenation), determining one or more weight vectors by applying a softmax function to the concatenated value (Ye Page 2152: “A softmax function is employed to generate the weight from the concatenation of vr and o”; Ye Fig. 5: softmax), deriving the adaptive adjustment factor based on the one or more weight vectors and the ratio of the sample specific property and global average pooling data (Ye Page 2152: “Then, the final adaptively adjusted factor a are derived as follows: a = ω1 × vr/o + ω2”; Ye Fig. 5: addition). Regarding claim 5, Ye discloses the method, wherein the step of deriving the adaptive adjustment factor comprises multiplying a first weight vector by the ratio of the sample specific property and global average pooling data (Ye Page 2152: “Then, the final adaptively adjusted factor a are derived as follows: a = ω1 × vr/o + ω2”; Ye Fig. 5: w1 multiplied by vr/o ratio) and summing with a second weight vector data (Ye Page 2152: “Then, the final adaptively adjusted factor a are derived as follows: a = ω1 × vr/o + ω2”; Ye Fig. 5: addition), wherein the first weight vector corresponds to the sample specific property and the second weight vector corresponds to the global average pooling data (Ye Page 2152: “where ω1 and ω2 are the weight vector from vr and o, repectively”). Regarding claim 6, Ye discloses the method, comprising additional steps of: processing the input image, by a feature generator (Ye Fig. 2: feature generator), to generate a feature query (Ye Page 2152: “The feature generator is a pre-trained ResNet-18, which takes the low-light image I as input and outputs a query feature q”), accessing the response value, from the image memory, based on the query feature (Ye Page 2152: “q is utilized to retrieve the most relevant response values vr from the memory dictionary via the memory reading module”). Regarding claim 7, Ye discloses the method, comprising the steps of: identifying a memory key that corresponds to the query feature, within the image memory, (Ye Page 2152: “Given q and K, we compute the cosine similarity between q and the i-th memory key ki to find out the most relevant memory key kr”), identifying a response value that corresponds to the identified memory key (Ye Page 2152: “After receiving the memory key, we acquire the response memory value vr by accessing the most relevant memory address for memory writing in the training or adaptive fusion in the next step”), accessing the response value from a memory address of the image memory that corresponds to the memory key (Ye Page 2152: “After receiving the memory key, we acquire the response memory value vr by accessing the most relevant memory address for memory writing in the training or adaptive fusion in the next step”). Regarding claim 8, Ye discloses the method, wherein the memory key is identified from a plurality of memory keys, by computing the cosine similarity between the query and the plurality of memory keys to identify the memory key that has the closest cosine similarity to the query (Ye Page 2152: “Given q and K, we compute the cosine similarity between q and the i-th memory key ki to find out the most relevant memory key kr”). 
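
For context on the fusion step mapped in claims 3-5 above, here is a minimal numpy sketch of the adaptively adjusted factor as the rejection describes it (a = ω1 × vr/o + ω2). All variable names are illustrative assumptions, and the plain softmax over the concatenation stands in for however the reference actually produces the weight vectors; this is not the applicant's or Ye's implementation.

```python
import numpy as np

def adaptive_fusion(v_r: np.ndarray, o: np.ndarray) -> np.ndarray:
    """Sketch of the adaptive fusion step: a = w1 * (v_r / o) + w2.

    v_r: response value retrieved from the memory dictionary, shape (C,)
    o:   global-average-pooled channels of the initial enhanced image, shape (C,)
    """
    ratio = v_r / o                                    # element-wise division
    concat = np.concatenate([v_r, o])                  # concatenation of v_r and o
    z = concat - concat.max()                          # softmax over the concatenation
    weights = np.exp(z) / np.exp(z).sum()              # (stand-in for the learned weighting)
    w1, w2 = weights[:v_r.size], weights[v_r.size:]    # weight vectors for v_r and o
    return w1 * ratio + w2                             # adaptively adjusted factor a
```

The final adjusted image would then be Ia = Î · a, with a broadcast across the spatial dimensions of the initial enhanced image Î.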
Regarding claim 9, Ye discloses the method, wherein the step of processing the input image by the image enhancer comprises the steps of: the input image is fed into a first convolution layer (Ye Page 2153: “Given an input image I ∈ RH×W×3, a 3 × 3 convolution layer is first applied to extract the low-level feature”), extracting one of more low-level features by processing the input image by the first convolution layer (Ye Page 2153: “Given an input image I ∈ RH×W×3, a 3 × 3 convolution layer is first applied to extract the low-level feature”), wherein each low level feature is defined by a size that comprises spatial dimensions and a number of channels (Ye Page 2153: “to extract the low-level feature with size ofRH×W×C , where H × W denotes the spatial dimensions and C is the number of channels”), the one or more low level features are passed through a 4-level symmetric encoder-decoder (Ye Page 2153: “Then the feature passes through a 4-level symmetric encoder-decoder to obtain the enhanced image ˆI”), generating an initial enhanced image from the 4-level symmetric encoder-decoder (Ye Page 2153: “Then the feature passes through a 4-level symmetric encoder-decoder to obtain the enhanced image ˆI”). Regarding claim 10, Ye discloses the method, wherein the method comprises a memory writing process comprising the steps of: receiving the response value corresponding to the query feature (Ye Page 2152: “As shown in Fig. 4, memory is updated when a response value vr and the desired memory m are obtained during the training stage”), obtaining a desired memory value from processing a reference image (Ye Page 2152: “The desired memory is m is computed from the referenced ground-truth image by taking average of each channel”), updating one or more memory keys within a dictionary using the response value (Ye Page 2152: “As shown in Fig. 4, the memory keeps updating in two different cases, depending on whether the distance d between the response value vr and the desired memory value m is within the threshold γ or not”), the one or more memory keys are updated if the difference between the response value and the desired memory value is within a threshold (Ye Page 2152: “the one or more memory keys are updated if the difference between the response value and the desired memory value is within a threshold”). Regarding claim 11, Ye discloses the method, wherein the method is applied to low light images to enhance the low light images (Ye Page 2150: “Inspired by these works, our EMNet attempt to propose a low-light image enhancement network with external memory to allow for adaptive illumination adjustment”), and the image memory comprises a plug and play mechanism for integration into an image enhancement method (Ye Page 2150: “Second, our external memory is plug-and-play and can be also integrated with IMLUT [58] to further improve the enhancement quality”). Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made. Claim(s) 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ye in view of Xiong (U.S. Patent Publication No 2025/0200718, hereinafter “Xiong”). Regarding claim 12, Ye discloses a system for image enhancement (Ye Page 2148: “In this paper, we introduce external memory to form an external memory-augmented network (EMNet) for low-light image enhancement”) comprising: an image memory (Ye Page 2150: “Inspired by these works, our EMNet attempt to propose a low-light image enhancement network with external memory to allow for adaptive illumination adjustment”), the image memory defining a memory dictionary (Ye Page 2150: “We introduce the external memory, i.e. a memory dictionary, to “remember” the useful critical information of the normal-light image. The dictionary is obtained by learning through the entire training set, while each item in the memory dictionary aims to capture the sample specific properties”), the memory dictionary comprising one or more memory keys and one or more response values (Ye Page 2151: “As shown in Fig. 3, we define the memory dictionary M that consists of the memory key ki”), each memory key corresponding to a response value (Ye Page 2151: “we compute the cosine similarity between q and the i-th memory key ki to find out the most relevant memory key kr”), wherein the response value corresponds to a value from a normal image (Ye Page 2151: “The feature generator is a pre-trained ResNet-18, which takes the low-light image I as input and outputs a query feature q. q is utilized to retrieve the most relevant response values vr from the memory dictionary via the memory reading module”; Ye Fig. 2: response value), wherein the image enhancement system is configured to: receive an input image (Ye Page 2151: “The pre-trained image enhancer is a pre-trained low-light image enhancement model that generates the enhanced image ˆI from the input low-light image I”, Ye Fig. 2: input image), wherein the input image is a low light image (Ye Page 2148: “In this paper, we introduce external memory to form an external memory-augmented network (EMNet) for low-light image enhancement”), process the input image, by a pre trained enhancer (Ye Fig. 2: Pre-trained image enhancer), to generate an initial enhanced image (Ye Page 2148: “In this paper, we introduce external memory to form an external memory-augmented network (EMNet) for low-light image enhancement”), access, from the image memory (Ye Fig. 2: memory reading), a response value corresponding to a sample specific property a normal image (Ye Page 2151: “The feature generator is a pre-trained ResNet-18, which takes the low-light image I as input and outputs a query feature q. q is utilized to retrieve the most relevant response values vr from the memory dictionary via the memory reading module”; Ye Fig. 2: response value), generate an adjustment factor based on the response value from the image memory (Ye Page 2151: “In the testing phase, vr and o are fused adaptively to generate the adaptively adjusted factor”; Ye Fig. 2: adaptively adjusted factor), output a final enhanced image by applying the adjustment factor to the initial enhanced image (Ye Page 2151: which adjusts the final enhancement result via Ia = ˆI · a and makes it align with the normal-light image”; Ye Fig. 2: Final adjusted image). Ye does not explicitly disclose the system comprising: an image enhancement processor. 
However, Xiong teaches the system comprising: an image enhancement processor (Xiong [0007]: “In a third aspect, an embodiment of the present disclosure also provides an electronic device, which comprises: a processor”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a processor as taught by Xiong with the system of Ye because a processor is necessary in order to run the system. This motivation for the combination of Ye and Xiong is supported by KSR exemplary rationale (A) Combining prior art elements according to known methods to yield predictable results. MPEP 2141 (III). Regarding claim 13, Ye discloses the system, wherein the image enhancement processor comprising a learning network (Ye Fig. 2: whole network), the learning network comprising: a pre trained image enhancer adapted to receive one or more input images and generate the initial enhanced image (Ye Fig. 2: pre-trained image enhancer receiving input image), a pre-trained feature generator configured to receive the one or more input images and generate a query feature based on feedforwarding the one or more input images into the feature generator (Ye Page 2151: “The feature generator is a pre-trained ResNet-18”; Ye Fig. 2: feature generator), a memory reading module configured to receive the query feature and access the response value from the image memory based on the received query feature (Ye Fig. 2: memory writing and memory reading), an adaptive fusion module configured to receive the response value and generate the adjustment factor based on the response value (Ye Fig. 2: adaptive fusion), an output module configured to apply the adjustment factor to the initial enhanced image and output the final enhanced image (Ye Fig. 2: output of final adjusted image). Regarding claim 14, Ye discloses the system, wherein image enhancement processor is configured to generate global average pooling data from the initial enhanced image (Ye Fig. 2: pooling information), the adaptive fusion module is configured to: receive global average pooling data and a response value from the memory reading module (Ye Fig. 2: memory reading), generate the adjustment factor based on combining the response value with global average pooling data related to the initial enhanced image (Ye Page 2151: “Then ˆI is globally average pooled into o for further adaptive fusion”; Ye Fig. 2: adaptive fusion), and; wherein the adaptive fusion module is configured to apply an adaptive fusion function to generate the adjustment factor (Ye Page 2151: “Then ˆI is globally average pooled into o for further adaptive fusion”; Ye Fig. 2: adaptive fusion). Regarding claim 15, Ye discloses the system, wherein the adaptive fusion module is further configured to: calculate a ratio of a sample specific property and the global average pooling data through element wise division (Ye Page 2152: “Specifically, the vr and o are first used to generate a ratio through element-wise division”, Ye Fig. 5: division), concatenate the sample specific property and the global average pooling data, to generate a concatenated value (Ye Page 2152: “A softmax function is employed to generate the weight from the concatenation of vr and o”; Ye Fig. 5: concatenation), determine one or more weight vectors by applying a softmax function to the concatenated value (Ye Page 2152: “A softmax function is employed to generate the weight from the concatenation of vr and o”; Ye Fig. 
5: softmax), derive the adaptive adjustment factor based on the one or more weight vectors and the ratio of the sample specific property and global average pooling data (Ye Page 2152: “Then, the final adaptively adjusted factor a are derived as follows: a = ω1 × vr/o + ω2”; Ye Fig. 5: addition). Regarding claim 16, Ye discloses the system, wherein the adaptive fusion module, as part of generating the adaptive adjustment factor, is configured to: multiply a first weight vector by the ratio of the sample specific property and global average pooling data (Ye Page 2152: “Then, the final adaptively adjusted factor a are derived as follows: a = ω1 × vr/o + ω2”; Ye Fig. 5: w1 multiplied by vr/o ratio) and summing with a second weight vector data (Ye Page 2152: “Then, the final adaptively adjusted factor a are derived as follows: a = ω1 × vr/o + ω2”; Ye Fig. 5: addition), wherein the first weight vector corresponds to the sample specific property and the second weight vector corresponds to the global average pooling data (Ye Page 2152: “where ω1 and ω2 are the weight vector from vr and o, repectively”). Regarding claim 17, Ye discloses the system, wherein the image enhancer is a pre-trained transformer based image enhancer (Ye Page 2148: “To further augment the capacity of the model, we take the transformer as our baseline network”) comprising a symmetric encoder – decoder comprising four level pyramid feature maps with skip connections (Ye Page 2153: “Specifically, our image enhancer employs a symmetric encoder-decoder architecture that has four level pyramid feature maps with skip-connection”). Regarding claim 18, Ye discloses the system, wherein the memory reading module is further configured to: identify a memory key that corresponds to the query feature, within the image memory, (Ye Page 2152: “Given q and K, we compute the cosine similarity between q and the i-th memory key ki to find out the most relevant memory key kr”), identify a response value that corresponds to the identified memory key (Ye Page 2152: “After receiving the memory key, we acquire the response memory value vr by accessing the most relevant memory address for memory writing in the training or adaptive fusion in the next step”), access the response value from a memory address of the image memory that corresponds to the memory key (Ye Page 2152: “After receiving the memory key, we acquire the response memory value vr by accessing the most relevant memory address for memory writing in the training or adaptive fusion in the next step”). Regarding claim 19, Ye discloses the system, wherein the memory reading module is configured to: compute the cosine similarity between the query and the plurality of memory keys (Ye Page 2152: “Given q and K, we compute the cosine similarity between q and the i-th memory key ki to find out the most relevant memory key kr”), identify the memory key from a plurality of memory keys based on the output of the cosine similarity computation, wherein the memory key is identified as the memory key having the closest cosine similarity to the query (Ye Page 2152: “Given q and K, we compute the cosine similarity between q and the i-th memory key ki to find out the most relevant memory key kr”). 
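
The memory-reading step mapped in claims 7-8 and 18-19 (retrieve the response value whose key is closest to the query by cosine similarity) can be sketched as follows; array names and shapes are assumptions for illustration, not taken from the claims or the reference.

```python
import numpy as np

def read_memory(q: np.ndarray, keys: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Return the response value whose memory key is most similar to the query.

    q:      query feature from the feature generator, shape (D,)
    keys:   memory keys K, shape (N, D)
    values: response values, shape (N, C); values[i] corresponds to keys[i]
    """
    # cosine similarity between q and each memory key k_i
    sims = keys @ q / (np.linalg.norm(keys, axis=1) * np.linalg.norm(q) + 1e-8)
    r = int(np.argmax(sims))    # address of the most relevant memory key k_r
    return values[r]            # response value v_r read from that address
```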
Regarding claim 20, Ye discloses the system, wherein the image enhancer is further configured to: feed the received input image into a first convolution layer (Ye Page 2153: “Given an input image I ∈ RH×W×3, a 3 × 3 convolution layer is first applied to extract the low-level feature”), process the input image by applying the first convolution layer (Ye Page 2153: “Given an input image I ∈ RH×W×3, a 3 × 3 convolution layer is first applied to extract the low-level feature”), extract one or more low level features based on the processing in the first convolution layer (Ye Page 2153: “Given an input image I ∈ RH×W×3, a 3 × 3 convolution layer is first applied to extract the low-level feature”), wherein each low level feature is defined by a size that comprises spatial dimensions and a number of channels (Ye Page 2153: “Given an input image I ∈ RH×W×3, a 3 × 3 convolution layer is first applied to extract the low-level feature with size ofRH×W×C , where H × W denotes the spatial dimensions and C is the number of channels”), pass the one or more low level features are through a 4-level symmetric encoder-decoder (Ye Page 2153: “Then the feature passes through a 4-level symmetric encoder-decoder to obtain the enhanced image ˆI”), generate an initial enhanced image from applying the 4-level symmetric encoder-decoder (Ye Page 2153: “Then the feature passes through a 4-level symmetric encoder-decoder to obtain the enhanced image ˆI”). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN KEUP whose telephone number is (703)756-4578. The examiner can normally be reached Monday - Friday 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /AIDAN KEUP/ Examiner, Art Unit 2666 /Molly Wilburn/ Primary Examiner, Art Unit 2666
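
Putting the mapped limitations together, the inference flow the rejection attributes to Ye (claims 1, 12, and 20) can be summarized by reusing the adaptive_fusion and read_memory sketches above. Here `enhancer` and `feature_generator` are hypothetical placeholders for the pre-trained models (a transformer-based enhancer and a ResNet-18 feature generator in the reference); the snippet is an illustrative sketch, not an implementation of either document.

```python
def enhance_image(I, enhancer, feature_generator, keys, values):
    """Sketch of the end-to-end flow: enhance, pool, query memory, fuse, adjust."""
    I_hat = enhancer(I)                   # initial enhanced image, shape (H, W, C)
    o = I_hat.mean(axis=(0, 1))           # global average pooling over spatial dims -> (C,)
    q = feature_generator(I)              # query feature from the low-light input
    v_r = read_memory(q, keys, values)    # response value for the closest memory key
    a = adaptive_fusion(v_r, o)           # adaptively adjusted factor, shape (C,)
    return I_hat * a                      # final enhanced image Ia = I_hat * a
```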

Prosecution Timeline

Aug 28, 2023
Application Filed
Nov 15, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12602774
Regional Pulmonary V/Q via image registration and Multi-Energy CT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597140
METHOD, SYSTEM AND DEVICE OF IMAGE SEGMENTATION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597168
METHOD FOR CONVERTING NEAR INFRARED IMAGE TO RGB IMAGE AND APPARATUS FOR SAME
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592082
DEVICE AND METHOD FOR PROVIDING INFORMATION FOR VEHICLE USING ROAD SURFACE
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586182
Multi-Prong Multitask Convolutional Neural Network for Biomedical Image Inference
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 92% (+12.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 60 resolved cases by this examiner. Grant probability derived from career allow rate.
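
The note above says the grant probability is derived from the career allow rate. A minimal sketch of that arithmetic, assuming the interview lift is simply added to the baseline (the tool's actual model may differ):

```python
granted, resolved = 48, 60                  # examiner's career record shown above
baseline = granted / resolved               # 0.80 -> "80% Grant Probability"
interview_lift = 0.12                       # +12.0% lift observed with interviews
with_interview = baseline + interview_lift  # 0.92 -> "92% With Interview"
print(f"baseline {baseline:.0%}, with interview {with_interview:.0%}")
```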
