Prosecution Insights
Last updated: April 19, 2026
Application No. 18/294,057

IMAGE PROCESSING MODULE

Status: Final Rejection (§103)
Filed: Jan 31, 2024
Examiner: DRYDEN, EMMA ELIZABETH
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: LG Innotek Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability with Interview: 83%

Examiner Intelligence

Career Allow Rate: 58% (7 granted / 12 resolved; -3.7% vs TC avg)
Interview Lift: +25.0% in resolved cases with an interview (strong lift)
Avg Prosecution: 3y 3m typical timeline (34 applications currently pending)
Career History: 46 total applications across all art units
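
The headline figures above are simple arithmetic on the examiner's career record. Here is a minimal Python sketch, assuming the interview lift is applied additively to the career allow rate (which is what reproduces the 58% and 83% figures shown); the variable names are illustrative only, not how the analytics tool actually computes its estimates:

```python
# Hypothetical sketch of how the examiner metrics above relate arithmetically.
# All input figures are taken from this report; the model is an assumption.

granted, resolved = 7, 12            # career outcomes for this examiner
allow_rate = granted / resolved      # 0.583 -> reported as 58%

interview_lift = 0.25                # +25.0% lift in resolved cases with an interview
with_interview = allow_rate + interview_lift   # 0.833 -> reported as 83%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Estimated grant probability with interview: {with_interview:.1%}")
```

If the tool instead models the lift multiplicatively, the combined figure would differ slightly; the additive reading is the one consistent with the 83% shown in this report.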

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 56.4% (+16.4% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Deltas are relative to an estimated Tech Center average. Based on career data from 12 resolved cases.
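
As a quick consistency check, the Tech Center baseline implied by each delta can be backed out from the listed values. A small illustrative Python snippet (the dict layout is just for presentation, not how the tool stores its data):

```python
# Back out the implied Tech Center average from each (examiner value, delta) pair above.
stats = {
    "§101": (9.7, -30.3),
    "§103": (56.4, +16.4),
    "§102": (16.6, -23.4),
    "§112": (13.9, -26.1),
}

for statute, (value, delta) in stats.items():
    tc_avg = value - delta  # examiner value = TC average + delta
    print(f"{statute}: examiner {value:.1f}%, implied TC average {tc_avg:.1f}%")

# Every statute implies the same 40.0% baseline, i.e. the chart deltas were all
# measured against a single estimated Tech Center average of 40%.
```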

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that application is a National Stage application of PCT/KR2022/011565 dated 08/04/2022. Receipt is acknowledged that application claims priority to foreign application with application number KR10-2021-0103284 dated 08/05/2021, application number KR10-2021-0106985 dated 08/12/2021, application number KR10-2021-0106986 08/12/2021. Copies of certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(e) and 37 CFR 1.78. Claims 1-20 have been afforded the benefit of filing date 08/05/2021.

Information Disclosure Statement

The information disclosure statement filed 12/16/2025 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information referred to therein has not been considered. A concise explanation and/or English language translation of the cited Office Action has not been provided. See 37 CFR 1.98.

Response to Amendment

The amendment filed 03/09/2026 has been entered. Applicant's amendments to the drawings and claims have overcome each and every objection and 35 U.S.C. 112(b) rejection previously set forth in the Non-Final Office Action mailed 12/09/2025. Claims 8-20 remain pending in the application, with claims 1-7 having been cancelled.

Response to Arguments

Applicant's arguments have been considered but are moot because the new ground of rejection does not rely on any combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. Appropriate correction is required.

Claim Interpretation

Regarding claim 19, the claims are interpreted to require at least one element of the claimed list, in accordance with the plain English meaning of "at least one among" the listed elements.

The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are: “image sensing unit” in claim 11 “output unit” in claims 12, 14 “alignment unit” in claims 13, 14 Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 8, 10-12, 15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Kim et al. (U.S. Patent No. 2022/0182537 A1), hereinafter Kim, in view of Mennel et al. (Mennel, L., Symonowicz, J., Wachter, S., Polyushkin, D. K., Molina-Mendoza, A. J., & Mueller, T. (2020). Ultrafast machine vision with 2D material neural network image sensors. Nature, 579(7797), 62-66.), hereinafter Mennel. Regarding claim 8, Kim teaches an image processing method of an image sensor (Kim, camera module w/ image sensor, para 31: “The IP network module 22 may receive first image data generated by the image sensor 11 of the camera module 10”; also see abstract), the method comprising the steps of: generating a first image data by light transmitted through a display panel (Kim, para 34: “the camera module 10 may generate the image data (e.g., first image data as described herein) based on light reflected from the subject, passing through the display 60, and reaching the optical lens of the image sensor”); and outputting a second image data from the first image data by a learned deep learning neural network (Kim, para 31: “The IP network module 22 may receive first image data generated by the image sensor 11 of the camera module 10 and may generate second image data by performing the image processing operations on the first image data”; para 37: “The NN of the multilayered structure may be referred to as a deep neural network (DNN) or a deep learning architecture”; see also FIG 2 for the NN and output data in para 83), wherein the second image data is image data in which at least a portion of noise, which is a picture quality degradation phenomenon that occurs when the light transmits through a display panel (Kim, para 34: “A path of the light reflected from the subject may be changed by the pixels included in the display 60 while passing through the display 60 so that the image data obtained by capturing the subject may be distorted”), is removed (Kim, para 30: “denoise operation”; para 35: “removing the distortion in the image data generated by the UDC module”; see also para 48 where noise is removed from the first image data). Kim teaches wherein the embodiments of the disclosed invention are modifiable (Kim, para 93: “FIGS. 8A to 8C are views illustrating modifiable embodiments of the image processing apparatus 1000 of FIG. 
3”), but fails to explicitly teach wherein the deep learning neural network is formed inside the image sensor. However, Mennel teaches an image processing method of an image sensor wherein a deep learning neural network is formed inside the image sensor (Mennel, pg. 62, 2nd para: “we present a photodiode array that itself constitutes an ANN that simultaneously senses and processes images projected onto the chip”; last para on pg. 62: “An integrated neural network and imaging array can now be formed by interconnecting the subpixels…various types of ANNs for image processing can be implemented (see Fig. 1c, d)”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the integrated deep learning neural network and image sensor of Mennel with the method of Kim in order to increase the efficiency of the image processing method (Mennel, last para on pg. 66: “In conclusion, we have presented an ANN vision sensor for ultrafast recognition and encoding of optical images. The device concept is easily scalable and provides various training possibilities for ultrafast machine vision applications”; see also abstract). Regarding claim 10 (dependent on claim 8), Kim in view of Mennel teaches wherein the first image data is received from an image sensor disposed under the display panel (Kim, image sensor 11 of para 34 - para 34: “the camera module 10 may be disposed (e.g., located) under the display 60, such that the display 60 is located between the camera module 10 and an exterior environment”), and wherein the second image data is outputted to an image signal processor (Kim, See FIG 8C where the output of the IP network module, the neural network, is to another processor; see further wherein the processor may be an ISP in para 118: “may include, may be included in, and/or may be implemented by one or more instances of processors such as hardware including logic circuits…For example, a processor as described herein more specifically may include, but is not limited to…an Image Signal Processor (ISP)”; see para 105-107 regarding FIG 8C and para 93: “FIGS. 8A to 8C are views illustrating modifiable embodiments of the image processing apparatus 1000 of FIG. 3”). Regarding claim 11, Kim teaches an image sensor (Kim, 4100 of FIG 8C, para 105: “camera module 4100”; para 93: “FIGS. 8A to 8C are views illustrating modifiable embodiments of the image processing apparatus 1000 of FIG. 
3”) comprising: an image sensing unit (Kim, para 105: “image sensor 4110”) configured to generate a first image data by light transmitted through a display panel (Kim, para 34: “the camera module 10 may generate the image data (e.g., first image data as described herein) based on light reflected from the subject, passing through the display 60, and reaching the optical lens of the image sensor 11”; FIG 8C is in accordance with FIGs 1 and 3, see para 45 and 93); and a deep learning neural network configured to output a second image data from the first image data (Kim, para 31: “The IP network module 22 may receive first image data generated by the image sensor 11 of the camera module 10 and may generate second image data by performing the image processing operations on the first image data”; para 37: “The NN of the multilayered structure may be referred to as a deep neural network (DNN) or a deep learning architecture”; see also FIG 2 for the NN and output data in para 83); wherein the second image data is image data from which at least a portion of noise, which is a picture quality degradation phenomenon that occurs when the light transmits through the display panel (Kim, para 34: “A path of the light reflected from the subject may be changed by the pixels included in the display 60 while passing through the display 60 so that the image data obtained by capturing the subject may be distorted”), is removed (Kim, para 30: “denoise operation”; para 35: “removing the distortion in the image data generated by the UDC module”; see also para 48 where noise is removed from the first image data). Kim teaches wherein the embodiments of the disclosed invention are modifiable (Kim, para 93: “FIGS. 8A to 8C are views illustrating modifiable embodiments of the image processing apparatus 1000 of FIG. 3”), but fails to explicitly teach wherein the deep learning neural network is formed inside the image sensor. However, Mennel teaches an image processing method of an image sensor wherein a deep learning neural network is formed inside the image sensor (Mennel, pg. 62, 2nd para: “we present a photodiode array that itself constitutes an ANN that simultaneously senses and processes images projected onto the chip”; last para on pg. 62: “An integrated neural network and imaging array can now be formed by interconnecting the subpixels…various types of ANNs for image processing can be implemented (see Fig. 1c, d)”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the integrated deep learning neural network and image sensor of Mennel with the image sensor taught by Kim in order to increase the efficiency of the image processing method (Mennel, last para on pg. 66: “In conclusion, we have presented an ANN vision sensor for ultrafast recognition and encoding of optical images. The device concept is easily scalable and provides various training possibilities for ultrafast machine vision applications”; see also abstract). 
Regarding claim 12 (dependent on claim 11), Kim in view of Mennel teaches comprising: an output unit (Kim, software of the IP network module that outputs the processed data, see next citation) configured to output the second image data to outside from the image sensor (Kim, See FIG 8C where the output of the IP network module, the neural network, is to another processor – para 107: “The IP network module 4120 may transmit the second image data IDTb to the main processor 4300”; see further wherein the processor may be an ISP in para 118: “may include, may be included in, and/or may be implemented by one or more instances of processors such as hardware including logic circuits…For example, a processor as described herein more specifically may include, but is not limited to…an Image Signal Processor (ISP)”; see para 105-107 regarding FIG 8C and para 93: “FIGS. 8A to 8C are views illustrating modifiable embodiments of the image processing apparatus 1000 of FIG. 3”), wherein the deep learning neural network outputs the second image data according to an output format of the output unit (Kim, See FIG 4A-4C where the output of the neural module changes depending on the subsequent image processing. In the relied upon embodiment, the output format is Bayer data; para 68: “For example, the IP network module 210b may remove noise and blur from the second tetra data IDTb and may convert the second tetra data IDTb into image data in the Bayer pattern”). Regarding claim 15 (dependent on claim 11), Kim in view of Mennel teaches wherein the first image data and the second image data have different noise levels (Kim, since the neural network output of Kim, mapped to second image data, has noise removed from the first image data, the two image datasets have different noise levels; see further para 30 and 48). Regarding claim 17 (dependent on claim 11), Kim in view of Mennel teaches wherein at least one of the first image data or the second image data is Bayer image data (Kim, output from the neural network, mapped to second image data, is Bayer data - see FIG 4B and para 70: “The main processor 300b may receive the Bayer data IDTc from the neural network processor 200b”). Regarding claim 18 (dependent on claim 11), Kim in view of Mennel teaches wherein the second image data is outputted to an image signal processor outside the image sensor (Kim, See FIG 8C where the output of the IP network module, the neural network, is to another processor – para 107: “The IP network module 4120 may transmit the second image data IDTb to the main processor 4300”; see further wherein the processor may be an ISP in para 118: “may include, may be included in, and/or may be implemented by one or more instances of processors such as hardware including logic circuits…For example, a processor as described herein more specifically may include, but is not limited to…an Image Signal Processor (ISP)”; see para 105-107 regarding FIG 8C and para 93: “FIGS. 8A to 8C are views illustrating modifiable embodiments of the image processing apparatus 1000 of FIG. 3”). 
Regarding claim 19 (dependent on claim 8), Kim in view of Mennel teaches wherein the noise comprises at least one among low intensity, blur, haze, reflection ghost, color separation, flare, fringe pattern, and yellowish phenomenon (Kim, para 89: “The raw image generated by the UDC module may include distortions such as blur, ghost, haze, and flare”; para 4: “Such image processing operations may include removing defects such as blur, ghost, flare, and haze included in image data generated by a camera module (also referred to herein as a camera)”). Regarding claim 20 (dependent on claim 8), Kim in view of Mennel teaches wherein the first image data and the second image data have different noise levels (Kim, since the neural network output of Kim, mapped to second image data, has noise removed from the first image data, the two image datasets have different noise levels; see further para 30 and 48). Claims 9 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Mennel, in further view of Do et al. (U.S. Patent No. 2022/0353401 A1), hereinafter Do. Regarding claim 9 (dependent on claim 8), Kim in view of Mennel teaches wherein a training set of the deep learning neural network comprises a first image data generated by light transmitted through a display panel (Kim, images generated by the UDC, para 89: “the IP network module 22 may be learned (e.g., a neural network model used by the IP network module 22 may be trained) based on using the image (e.g., raw image) generated by the UDC module as the input data”), but fails to teach wherein a second image data generated by light not transmitted through a display panel (Kim teaches a second image data for training, but it is corrected UDC data, para 89). However, Do teaches a similar method (Do, para 90: “the electronic device may correct the color around the light source to be similar to a color around the light source had the image been captured by a camera that was not an UDC”) including a training set wherein a second image data is generated by light not transmitted through a display panel (Do, para 90: “For example, the dataset for light source processing may include a pair of an input image obtained by capturing a scene (e.g., a scene including a light source with a preset brightness or greater) that satisfies a rainbow artifact occurring condition by a UDC and a ground truth image obtained by capturing the same scene by an externally exposed camera module”, emphasis added). Thus, Kim and Do each disclose a method for removing noise from a UDC-captured image using training data to train a model. A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that the training set of Kim could have been substituted for the training set of Do because both serve the purpose of utilizing a second training dataset with images that contain less UDC noise than that of the first training dataset. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of training a machine learned model for noise reduction in an image. 
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the training set of Kim in view of Mennel with the training set of Do according to known methods to yield the predictable result of teaching the model features of images captured by an externally exposed camera, compared to that of a UDC camera. Regarding claim 16 (dependent on claim 11), Kim in view of Mennel teaches wherein a training set of the deep learning neural network comprises a first image data generated by light transmitted through a display panel (Kim, images generated by the UDC, para 89: “the IP network module 22 may be learned (e.g., a neural network model used by the IP network module 22 may be trained) based on using the image (e.g., raw image) generated by the UDC module as the input data”), but fails to teach a second image data generated by light not transmitted through a display panel (Kim teaches a second image data for training, but it is corrected UDC data, para 89). However, Do teaches a similar method (Do, para 90: “the electronic device may correct the color around the light source to be similar to a color around the light source had the image been captured by a camera that was not an UDC”) including a training set wherein a second image data is generated by light not transmitted through a display panel (Do, para 90: “For example, the dataset for light source processing may include a pair of an input image obtained by capturing a scene (e.g., a scene including a light source with a preset brightness or greater) that satisfies a rainbow artifact occurring condition by a UDC and a ground truth image obtained by capturing the same scene by an externally exposed camera module”, emphasis added). Thus, Kim and Do each disclose a system/method for removing noise from a UDC-captured image using training data to train a model. A person of ordinary skill in the art, before the effective filing date of the claimed invention, would have recognized that the training set of Kim could have been substituted for the training set of Do because both serve the purpose of utilizing a second training dataset with images that contain less UDC noise than that of the first training dataset. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of training a machine learned model for noise reduction in an image. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the training set of Kim in view of Mennel with the training set of Do according to known methods to yield the predictable result of teaching the model features of images captured by an externally exposed camera, compared to that of a UDC camera. Claims 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Mennel, in further view of Roberts (U.S. Patent No. 2020/0174748 A1). Regarding claim 13 (dependent on claim 11), Kim in view of Mennel teaches comprising: an alignment unit (Kim, pre-processor of S120 in FIG 5, see para 79-80; para 78: “FIG. 5 is a flowchart illustrating a method of operating the image processing systems 1200, 1200a, 1200b, and 1200c described above with reference to FIGS. 
3 to 4C”) configured to output a third image data (Kim, referred to in this embodiment as second image data in para 80 of Kim, para 80: “Specifically, the image processing system 1200 may generate the second image data by (e.g., based on) performing a pre-processing operation”), wherein the deep learning neural network outputs the second image data from the third image data (Kim, Step 130-140 of FIG 5, mapped second image data, the output of the NN model in Kim, is referred to here as the third image data; see input and output data in para 82-83). However, Kim in view of Mennel fails to teach wherein the third image data is output by decomposing or rearranging at least a portion of the first image data (Kim only discloses BPC operation, LSC operation, crosstalk correction operation, and/or WB correction operation in para 80). Roberts teaches a neural network (Roberts, abstract) for processing image data (Roberts, para 15). Roberts teaches a “presorter” (Roberts, para 41 and 304 in FIG 3), similar to the alignment unit mapped to the pre-processor of Kim. Roberts discloses wherein image data is output by decomposing or rearranging at least a portion of the first image data, or input data (Roberts, presorter process in FIG 4; para 59: “in step 400, the presorter may have rearranged the actual instances of input data into the sorted order in a memory (e.g., memory 204, presort buffer 306, etc.) and step 402 involves signaling the neural network processor”; para 60: “The neural network processor then uses the sorted order to control an order in which instances of input data from among the set of instances of input data are processed through the neural network (step 404)”; see also para 41). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the rearranging of first image data, as taught by Roberts, with the image sensor of Kim in view of Mennel in order to increase efficiency of the neural network system (Roberts, para 58: “Recall that the described embodiments determine the sorted order to enable more efficient reuse of stored result values in the neural network—a goal that is met with even smaller improvements in an initial order for instances of input data”; see also para 28). Regarding claim 14 (dependent on claim 13), Kim in view of Mennel and Roberts teaches wherein the alignment unit (Preprocessor of Kim, modified by Roberts - see claim 13 rejection) outputs the third image data according to an output format of an output unit (Kim, output unit is software of the IP network module that outputs the processed data; see FIG. 4A and para 61-63 where the output of the pre-processor is the same format, Tetra, as the output from the neural network). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMA E DRYDEN whose telephone number is (571)272-1179. The examiner can normally be reached M-F 9-5 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW BEE can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /EMMA E DRYDEN/Examiner, Art Unit 2677 /ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677
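
For readers who want a concrete picture of the claimed subject matter, the rejected independent claims (8 and 11) describe an under-display-camera (UDC) pipeline: first image data is generated from light that has passed through the display panel, and a trained deep-learning network formed inside the image sensor itself (the feature the examiner maps to Mennel) outputs second image data with at least part of the display-induced noise removed. The sketch below is a hypothetical illustration in PyTorch, not the applicant's or the cited references' implementation; the residual CNN architecture, layer sizes, and names are assumptions.

```python
import torch
import torch.nn as nn


class UDCDenoiser(nn.Module):
    """Illustrative in-sensor denoising network (hypothetical, not the claimed design).

    Input:  "first image data"  - raw frame captured through the display panel.
    Output: "second image data" - frame with at least part of the display-induced
            noise (blur, haze, flare, ghosting, etc.) removed.
    """

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, first_image: torch.Tensor) -> torch.Tensor:
        # Residual prediction: estimate the display-induced noise and subtract it.
        return first_image - self.net(first_image)


if __name__ == "__main__":
    model = UDCDenoiser()
    # Stand-in for raw (e.g. Bayer-domain) data from a sensor mounted under the display.
    first_image = torch.rand(1, 1, 64, 64)
    second_image = model(first_image)
    print(second_image.shape)  # torch.Size([1, 1, 64, 64])

    # Training (claims 9 and 16) would use paired samples: the same scene captured
    # through the display panel (input) and by an externally exposed camera (target),
    # as in the examiner's reading of the Do reference.
```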

Prosecution Timeline

Jan 31, 2024: Application Filed
Dec 04, 2025: Non-Final Rejection — §103
Mar 09, 2026: Response Filed
Mar 20, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561873: IMAGE PROCESSING APPARATUS AND METHOD
Granted Feb 24, 2026 (2y 5m to grant)

Patent 12543950: SLIT LAMP MICROSCOPE, OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC SYSTEM, METHOD OF CONTROLLING SLIT LAMP MICROSCOPE, AND RECORDING MEDIUM
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12526379: AUTOMATIC IMAGE ORIENTATION VIA ZONE DETECTION
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12340443: METHOD AND APPARATUS FOR ACCELERATED ACQUISITION AND ARTIFACT REDUCTION OF UNDERSAMPLED MRI USING A K-SPACE TRANSFORMER NETWORK
Granted Jun 24, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 4 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 83% (+25.0%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
