Prosecution Insights
Last updated: April 19, 2026
Application No. 18/036,807

METHOD FOR DIGITAL IMAGE PROCESSING

Final Rejection — §102, §103

Filed: May 12, 2023
Examiner: TISSIRE, ABDELAAZIZ
Art Unit: 2638
Tech Center: 2600 — Communications
Assignee: K|Lens GmbH
OA Round: 2 (Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 3m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 84% — above average (584 granted / 693 resolved; +22.3% vs TC avg)
Interview Lift: +13.2% — moderate (resolved cases with vs. without interview)
Typical Timeline: 2y 3m average prosecution; 23 applications currently pending
Career History: 716 total applications across all art units
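The headline figures above follow from simple arithmetic on the raw counts. A quick sketch (rounding conventions and the additive reading of "+22.3% vs TC avg" are assumptions, since the tool's exact methodology is not disclosed):

```python
# Reproduce the examiner statistics from the raw counts shown above.
granted, resolved = 584, 693

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # 84.3%, displayed as 84%

# "+22.3% vs TC avg" read additively implies a Tech Center average near 62%.
implied_tc_avg = allow_rate - 0.223
print(f"Implied TC average: {implied_tc_avg:.1%}")
```

The implied Tech Center 2600 average of roughly 62% is consistent with the "above average" label on the card.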

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§103: 49.3% (+9.3% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)

Tech Center averages are estimates • Based on career data from 693 resolved cases

Office Action

§102, §103

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Examiner's Responses to Applicant's Remarks

Applicant has canceled claims 26, 27, 51 and 52, amended claims 24, 28 and 33, and added new claims 59-62. Claims 24-25, 28-50 and 53-62 are currently pending. Applicant's arguments (see Remarks filed 08/21/2025), considered in light of Applicant's amendment to claim 24, have been fully considered and are not persuasive.

Applicant argued that "a linear Gaussian dense layer considers the noise within the training data set" does not mean that an original digital image is denoised; rather, it means that according to Tasfi synthetic noise is applied to increase robustness, i.e., synthetic noise is injected. The method of the presently claimed invention removes noise, e.g., real optical sensor noise, from the original digital image. Moreover, Tasfi does not mention any blurring.

The Examiner respectfully disagrees. Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. More specifically, and as presented in the previous Office action, Tasfi discloses that the linear Gaussian dense layer considers the noise within the training dataset in conjunction with the calculated response to achieve a more robust value out of the unit when doing regression (Figs. 3-4, [0031] & [0035]). The Examiner notes that synthetic noise in a noise-addition process can be understood as a form of denoising, because denoising is not limited to removing noise but more broadly involves recovering, reconstructing and restoring missing regions or features in the image signal. Applicant's contention appears to be that the claimed processing "removes noise, e.g. real optical sensor noise from the original digital image" and is not for increasing robustness via injected synthetic noise as suggested by Tasfi.
The Examiner notes that: (1) the features upon which Applicant relies ("removes noise, e.g. real optical sensor noise from the original digital image") are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). (2) A recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim. Therefore, based on the foregoing analysis, the Examiner respectfully stands behind the teachings of Tasfi, as applied in the 35 U.S.C. § 102 rejection of independent claim 24 (see rejection, infra).

Applicant argued that: "Relative to new independent claim 59, which is made up of the features of claims 24, 38 and 39, Tasfi does not disclose any noise injection according to a Poisson-Gaussian noise model. Tasfi uses only a linear Gaussian dense layer. However, use of a Gaussian dense layer and noise injection according to a Poisson-Gaussian noise model is completely different. A Gaussian-only model may be useful for the application in the technical context of the image scaling using a convolutional neural network of Tasfi. However, the inventors have found that for the present method, the Poisson-Gaussian model results in a more accurate modeling because more technical aspects of real-world photography are considered. Since Tasfi neither mentions nor hints at anything other than the use of the linear Gaussian dense layer, applicant submits that the invention recited in claim 59 is not anticipated by Tasfi." In short, Applicant argued that the "use of a Gaussian dense layer and noise injection according to a Poisson-Gaussian noise model is completely different."
The Examiner respectfully disagrees. Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. More specifically, Gaussian noise can be considered similar to a Poisson-Gaussian noise model because the Poisson component often behaves like signal-dependent Gaussian noise, especially when the signal level is moderate to high. In these cases, the variance of the Poisson distribution becomes large enough that its shape closely approximates a Gaussian with a slightly signal-dependent variance. As a result, treating the noise as purely Gaussian can still capture the main statistical behavior of Poisson-Gaussian noise. Therefore, based on the foregoing analysis, the Examiner respectfully stands behind the teachings of Tasfi, as applied in the 35 U.S.C. § 102 rejection of claims 38-39 and 59 (see rejection, infra).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 24-25, 38-50 and 53-62 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tasfi; Norman (US 20170287109 A1, hereinafter "Tasfi").

Regarding claim 24, Tasfi teaches a digital image processing method (Figs. 3-4), comprising the steps of: image-processing an original digital image for generating an image-processed digital image (Figs. 3-4, [0029] & [0035]: The preprocess module 232 of the image scaling engine preprocesses pixel values of the input image for use as inputs to the CNN module 234. Preprocessing may include statistical analysis such as determining minimum values, mean values, and maximum values of pixels so that pixels can be scaled appropriately for the CNN.); reducing the resolution of the image-processed digital image for generating a starting digital image (Figs. 3-4, [0037]: The image scaling engine 230 ingests 405 a training image. The image scaling engine 230 scales down 410 the training image to an input image for processing by the image scaling engine.); using the original digital image and the starting digital image for forming a training data set for a machine learning system for increasing the resolution of digital images (Figs. 3-4, [0036]-[0037]: The image scaling engine 230 scales up the input image to the same size as the training image using the scaling process of FIG. 3. The CNN module 234 increases 325 the image resolution using a dense layer.), wherein the image processing includes altering the original digital image (Figs. 3-4, [0031] & [0035]: The preprocess module 232 preprocesses the input image to extract and modify input image pixel values for input to the CNN. The color values may be expressed as an 8-bit value (e.g., 0-255) or other color value (e.g., 16-bit, 24-bit, etc.).), wherein the altering of the original digital image includes denoising and/or blurring (Figs. 3-4, [0031] & [0035]: a linear Gaussian dense layer considers the noise within the training dataset in conjunction with the calculated response to achieve a more robust value. The values are modelled by a linear function of the previous layer with the addition of Gaussian noise.).
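The claim 24 pipeline mapped above (image-process an original, reduce its resolution, and pair the result with the original as training data) can be sketched in a few lines. This is an editorial illustration, not code from the application or from Tasfi; the 3x3 mean blur and naive 2x downscale are assumed stand-ins for the claimed "altering" and "reducing" steps.

```python
import numpy as np

def make_training_pair(original, factor=2):
    """Build one (starting, original) training pair: 'alter' the original
    with a 3x3 mean blur (stand-in for the claimed blurring/denoising
    step), then reduce resolution to obtain the starting digital image."""
    h, w = original.shape
    padded = np.pad(original.astype(np.float64), 1, mode="edge")
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    starting = blurred[::factor, ::factor]  # naive resolution reduction
    return starting, original

hr = np.random.rand(64, 64)         # stand-in "original digital image"
lr, target = make_training_pair(hr)
print(lr.shape, target.shape)       # (32, 32) (64, 64)
```

A super-resolution model would then be trained to map `lr` back to `target`, which is the training-set formation step the claim recites.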
Regarding claim 25, Tasfi teaches the method according to claim 24; in addition Tasfi discloses wherein the machine learning system is a neural network learning system (Figs. 3-4, [0036]-[0037]: The image scaling engine 230 scales up the input image to the same size as the training image using the scaling process of FIG. 3. The CNN module 234 increases 325 the image resolution using a dense layer.).

Regarding claim 38, Tasfi teaches the method according to claim 24; in addition Tasfi discloses wherein the image processing includes injecting noise (Figs. 3-4, [0031] & [0035]: a linear Gaussian dense layer considers the noise within the training dataset in conjunction with the calculated response to achieve a more robust value. The values are modelled by a linear function of the previous layer with the addition of Gaussian noise.).

Regarding claim 39, Tasfi teaches the method according to claim 38; in addition Tasfi discloses including injecting realistic noise according to a Poisson-Gaussian noise model (Figs. 3-4, [0031] & [0035]: a linear Gaussian dense layer considers the noise within the training dataset in conjunction with the calculated response to achieve a more robust value. The values are modelled by a linear function of the previous layer with the addition of Gaussian noise.).

Regarding claim 40, Tasfi teaches the method according to claim 38; in addition Tasfi discloses including changing an image data format ([0035]: This allows the image scaling engine 230 to consistently process image data received in a variety of formats.).

Regarding claim 41, Tasfi teaches the method according to claim 40; in addition Tasfi discloses including providing for an image data format comprising non-processed or minimally processed data from an image sensor ([0029]: The preprocess module 232 of the image scaling engine preprocesses pixel values of the input image for use as inputs to the CNN module 234. Preprocessing may include statistical analysis such as determining minimum values, mean values, and maximum values of pixels so that pixels can be scaled appropriately for the CNN.).

Regarding claim 42, Tasfi teaches the method according to claim 41; in addition Tasfi discloses wherein the image data format is a RAW image format ([0029]-[0030]: original input image pixel values (e.g., 0-255)).

Regarding claim 43, Tasfi teaches the method according to claim 41; in addition Tasfi discloses including carrying out the changing of the image data format before the injection of noise and after the injection of noise ([0031]: The linear Gaussian dense layer considers the noise within the training dataset in conjunction with the calculated response to achieve a more robust value. The values are modelled by a linear function of the previous layer with the addition of Gaussian noise. The Gaussian noise is sampled from a distribution where the mean and variance is equal to that of the training dataset sample mean and covariance matrix respectively.), and changing the image data format into the image data format of the original digital image, which uses an RGB color space, the resulting digital image forming the starting digital image for generating the trial digital image ([0035]: color values may be expressed as an 8-bit value (e.g., 0-255) or other color value (e.g., 16-bit, 24-bit, etc.). In one embodiment, the preprocess module 232 normalizes color values in the matrix).

Regarding claim 44, Tasfi teaches the method according to claim 24; in addition Tasfi discloses including using the original digital image and the starting digital image for training the machine learning system (Figs. 3-4, [0036]-[0037]: The image scaling engine 230 scales up the input image to the same size as the training image using the scaling process of FIG. 3. The CNN module 234 increases 325 the image resolution using a dense layer).
Regarding claim 45, Tasfi teaches the method according to claim 44; in addition Tasfi discloses wherein the training includes increasing the resolution of the starting digital image for generating a trial digital image ([0036]: The CNN module 234 detects 320 image features using a convolutional layer. The CNN module 234 increases 325 the image resolution using a dense layer. Steps 320 and 325 may be performed multiple times in many different orders to optimize model performance.).

Regarding claim 46, Tasfi teaches the method according to claim 45; in addition Tasfi discloses wherein the training includes comparing the trial digital image with the original digital image ([0034] & [0037]: The input training image is scaled by the CNN module, which returns an output training image. The training module compares the output training image to the training image using a cost function such as the mean squared error function.).

Regarding claim 47, Tasfi teaches the method according to claim 24; in addition Tasfi discloses including increasing the resolution of a digital image generated with an optical device using the machine learning system having been trained using the training data set ([0036]: The CNN module 234 detects 320 image features using a convolutional layer. The CNN module 234 increases 325 the image resolution using a dense layer. Steps 320 and 325 may be performed multiple times in many different orders to optimize model performance.).

Regarding claim 48, Tasfi teaches a method for digital image processing wherein the resolution of a digital image generated with an optical device is increased using a machine learning system that is trained by carrying out the steps according to claim 24 ([0036]: The CNN module 234 detects 320 image features using a convolutional layer. The CNN module 234 increases 325 the image resolution using a dense layer. Steps 320 and 325 may be performed multiple times in many different orders to optimize model performance.).
Regarding claim 49, claim 49 has been analyzed and rejected with regard to claim 24 and in accordance with Tasfi's further teaching on: a computer program product, comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to claim 24 ([0040]-[0042]: Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.).

Regarding claim 50, claim 50 has been analyzed and rejected with regard to claim 49 and in accordance with Tasfi's further teaching on: the computer program product according to claim 49, wherein the computer program product is a computer program stored on a data carrier, a device, a device with an embedded processor, a computer embedded in a device, a smartphone, a computer of a device for producing an image recording, or is a signal sequence representing data suitable for transmission via a computer network ([0040]-[0042]: Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.).
Regarding claim 53, Tasfi teaches the computer program product according to claim 50; in addition Tasfi discloses wherein the device for producing an image recording is a photo and/or video camera ([0015]: content included in a section is referred to herein as "content items" or "articles," which may include textual articles, images, videos).

Regarding claim 54, Tasfi teaches a device for digital image processing, comprising means for carrying out the method according to claim 24 ([0028]: The image scaling engine 230 scales images to a larger size using a trained model for inclusion in pages in a digital magazine.).

Regarding claim 55, Tasfi teaches a trained machine-learning model trained in accordance with the method according to claim 44 ([0028]: The image scaling engine 230 scales images to a larger size using a trained model for inclusion in pages in a digital magazine.).

Regarding claim 56, Tasfi teaches a device for digital image processing, using the trained machine-learning model according to claim 55, for increasing resolution of a digital image (Fig. 1, [0028]: The image scaling engine 230 scales input images by increasing the resolution of the images and returns scaled output images.).

Regarding claim 57, Tasfi teaches the device for digital image processing according to claim 54, wherein the device is part of an image capturing system ([0015]: content included in a section is referred to herein as "content items" or "articles," which may include textual articles, images, videos).

Regarding claim 58, Tasfi teaches a data carrier signal that transmits the computer program product according to claim 49 ([0018]: The sources 110 communicate with the client device 130 and the digital magazine server 140 via the network 120, which may comprise any combination of local area and/or wide area networks, using both wired and/or wireless communication systems.).
Regarding claim 59, claim 59 has been analyzed and rejected with regard to claim 24 and in accordance with Tasfi's further teaching wherein the image processing includes injecting noise according to a Poisson-Gaussian noise model (Figs. 3-4, [0031] & [0035]: a linear Gaussian dense layer considers the noise within the training dataset in conjunction with the calculated response to achieve a more robust value. The values are modelled by a linear function of the previous layer with the addition of Gaussian noise (Gaussian noise can be considered similar to a Poisson-Gaussian noise model, as presented in the Examiner's response supra)).

Regarding claim 60, Tasfi teaches the method according to claim 59; in addition Tasfi discloses including using the original digital image and the starting digital image for training the machine learning system ([0036]: The CNN module 234 detects 320 image features using a convolutional layer. The CNN module 234 increases 325 the image resolution using a dense layer. Steps 320 and 325 may be performed multiple times in many different orders to optimize model performance.).

Regarding claim 61, Tasfi teaches the method according to claim 59; in addition Tasfi discloses including increasing the resolution of a digital image generated with an optical device using the machine learning system having been trained using the training data set ([0036]: The CNN module 234 detects 320 image features using a convolutional layer. The CNN module 234 increases 325 the image resolution using a dense layer. Steps 320 and 325 may be performed multiple times in many different orders to optimize model performance.).
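The dispute over claims 39 and 59 turns on whether Tasfi's Gaussian noise reads on the claimed Poisson-Gaussian model. The Examiner's approximation argument (that at moderate-to-high signal levels a variance-matched Gaussian behaves like Poisson-Gaussian noise) can be illustrated numerically. This sketch is editorial and not part of the prosecution record; the gain and read-noise parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(100_000, 50.0)      # clean pixel value, arbitrary units
gain, read_sigma = 1.0, 3.0          # assumed sensor gain and read noise

# Poisson-Gaussian model: signal-dependent shot noise plus Gaussian read noise.
shot = rng.poisson(signal / gain) * gain
noisy_pg = shot + rng.normal(0.0, read_sigma, signal.size)

# Pure-Gaussian stand-in with variance matched to gain*signal + read_sigma^2.
noisy_g = signal + rng.normal(0.0, np.sqrt(gain * signal + read_sigma**2))

# At this signal level both spreads are near sqrt(50 + 9), i.e. about 7.7;
# at low signal the Poisson term's skew makes the two models diverge.
print(noisy_pg.std(), noisy_g.std())
```

The low-signal divergence is where Applicant's accuracy argument for the Poisson-Gaussian model would have its bite, since real sensor noise at dim pixel values is markedly non-Gaussian.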
Regarding claim 62, claim 62 has been analyzed and rejected with regard to claim 59 and in accordance with Tasfi's further teaching on: a computer program product, comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to claim 59 ([0040]-[0042]: Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Tasfi; Norman (US 20170287109 A1, hereinafter "Tasfi"), in view of LEE et al. (US 20220070376 A1, hereinafter "LEE").

Regarding claim 28, Tasfi teaches the method according to claim 24, except wherein the blurring corresponds to a blurring of a real optical device. However, LEE discloses wherein the blurring corresponds to a blurring of a real optical device ([0101]: image preprocessing performed based on blur information obtained from an optical characteristic). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate blurring corresponding to a blurring of a real optical device, as taught by LEE, into Tasfi's image training process. The suggestion/motivation for doing so would be to generate a higher-definition image (LEE: [0078]).

Regarding claim 29, the Tasfi and LEE combination teaches the method according to claim 28; in addition LEE discloses wherein the blurring is carried out using a blurring kernel and/or a point spread function ([0101]: The image preprocessing may be performed based on blur information obtained from an optical characteristic that is based on the arrangement of the hole regions 945, and include removing a double image from the raw image using the blur information. The blur information may include information associated with a PSF determined based on the arrangement of the hole regions 945.).

Claims 30-32 are rejected under 35 U.S.C. 103 as being unpatentable over the Tasfi and LEE combination as applied above, in view of YANG; JIANCHAO (US 20160335747 A1, hereinafter "YANG").

Regarding claim 30, the Tasfi and LEE combination teaches the method according to claim 24, except including performing the method steps multiply using different original digital images and/or carrying out different image processings.
However, YANG discloses including performing the method steps multiply using different original digital images and/or carrying out different image processings (Fig. 3, [0031]: method 100 begins by receiving 102 input data representing a blurry digital image and applying 104 a variable scale filter to the blurry digital image at an original resolution to obtain a set of filtered observations at different scale levels. The variable scale filter may, for example, include a set of Gaussian noise filters with decreasing radius, or a set of directional noise filters. The method 100 continues by calculating 112 an estimated blur kernel based on each of the filtered observations (estimated blur kernel for uniform and non-uniform blur cases).). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate performing the method steps multiply using different original digital images and/or carrying out different image processings, as taught by YANG, into the Tasfi and LEE combination. The suggestion/motivation for doing so would be to provide an improved scale adaptive blind deblurring technique (YANG: [0003] & [0017]).

Regarding claim 31, the Tasfi, LEE and YANG combination teaches the method according to claim 30; in addition YANG discloses wherein the method is multiply performed using different blurrings, each blurring corresponding to a different real optical device (Fig. 3, [0031], as quoted above with respect to claim 30).

Regarding claim 32, the Tasfi, LEE and YANG combination teaches the method according to claim 30; in addition YANG discloses wherein the method is multiply performed using different blurrings, each corresponding to different Gaussian filters (Fig. 3, [0031], as quoted above with respect to claim 30). The suggestion/motivation for doing so would be to provide an improved scale adaptive blind deblurring technique (YANG: [0003] & [0017]).

Claims 33-34 are rejected under 35 U.S.C. 103 as being unpatentable over Tasfi; Norman (US 20170287109 A1, hereinafter "Tasfi"), in view of YANG; JIANCHAO (US 20160335747 A1, hereinafter "YANG").

Regarding claim 33, Tasfi teaches the method according to claim 24, except wherein the blurring or the blurrings differ in the image plane representing the image. However, YANG discloses wherein the blurring or the blurrings differ in the image plane representing the image (Fig. 3, [0031], as quoted above with respect to claim 30). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate blurrings that differ in the image plane representing the image, as taught by YANG, into Tasfi's image training process. The suggestion/motivation for doing so would be to provide an improved scale adaptive blind deblurring technique (YANG: [0003] & [0017]).

Regarding claim 34, the Tasfi and YANG combination teaches the method according to claim 33; in addition YANG discloses wherein the blurring or the blurrings differ in strength or type of blurring (Fig. 3, [0031], as quoted above with respect to claim 30).

Claim 35 is rejected under 35 U.S.C. 103 as being unpatentable over the Tasfi and LEE combination as applied above, in view of Martinello et al. (US 20190279051 A1, hereinafter "Martinello").

Regarding claim 35, the Tasfi and LEE combination teaches the method according to claim 28, except wherein the real optical device is a plenoptical imaging system. However, Martinello discloses wherein the real optical device is a plenoptical imaging system (Fig. 1, [0018]: The light-field camera 110 (also known as "plenoptical") captures light-field images 170 of objects 150. Each light-field image 170 contains many views of the object 150 taken simultaneously from different viewpoints.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the real optical device being a plenoptical imaging system, as taught by Martinello, into the Tasfi and LEE combination. The suggestion/motivation for doing so would be to allow the ability to capture both spatial and angular information of light rays.

Claims 36 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over the Tasfi, LEE and Martinello combination as applied above, in view of Bailey et al. (US 20070223099 A1, hereinafter "Bailey").

Regarding claim 36, the Tasfi, LEE and Martinello combination teaches the method according to claim 35, except wherein the real optical device is a kaleidoscope. However, Bailey discloses wherein the real optical device is a kaleidoscope (Fig. 1, [0039]: a kaleidoscope for connecting with a video monitor for receiving an image from the video monitor and generating a kaleidoscopic effect of the image.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the real optical device being a kaleidoscope, as taught by Bailey, into the Tasfi, LEE and Martinello combination. The suggestion/motivation for doing so would be to generate a kaleidoscopic effect of the image provided by the video monitor (Bailey: [0007]).

Regarding claim 37, the Tasfi, LEE and Martinello combination teaches the method according to claim 35; in addition Martinello discloses wherein the plenoptical imaging system generates multiple images of an object to be captured (Fig. 1, [0018]: The light-field camera 110 captures light-field images 170 of objects 150. Each light-field image 170 contains many views of the object 150 taken simultaneously from different viewpoints.).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDELAAZIZ TISSIRE, whose telephone number is (571) 270-7204. The examiner can normally be reached Monday through Friday from 8 AM to 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ye Lin, can be reached at 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ABDELAAZIZ TISSIRE/
Primary Examiner, Art Unit 2638

Prosecution Timeline

May 12, 2023: Application Filed
May 17, 2025: Non-Final Rejection — §102, §103
Aug 21, 2025: Response Filed
Nov 21, 2025: Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593518: IMAGE SENSOR AND OPERATION METHOD THEREOF — granted Mar 31, 2026 (2y 5m to grant)
Patent 12587749: CONTROL APPARATUS AND CONTROL METHOD THEREFOR — granted Mar 24, 2026 (2y 5m to grant)
Patent 12587757: SOLID-STATE IMAGING DEVICE — granted Mar 24, 2026 (2y 5m to grant)
Patent 12581204: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM — granted Mar 17, 2026 (2y 5m to grant)
Patent 12581177: CAMERA DEVICE — granted Mar 17, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 98% (+13.2%)
Median Time to Grant: 2y 3m
PTA Risk: Moderate

Based on 693 resolved cases by this examiner. Grant probability derived from career allow rate.
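How the 98% with-interview figure is computed is not disclosed. A simple additive combination of the career allow rate and the reported lift (an assumption about the tool's model, not a documented formula) lands close to, but not exactly on, the displayed number:

```python
base = 584 / 693                  # career allow rate, about 84.3%
interview_lift = 0.132            # +13.2% interview lift reported above

# Assumed additive combination, capped at 100%.
with_interview = min(base + interview_lift, 1.0)
print(f"{base:.1%} + 13.2% -> {with_interview:.1%}")  # 84.3% + 13.2% -> 97.5%
```

The additive estimate gives about 97.5% rather than the 98% shown, so the tool presumably applies rounding or a slightly different combination.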
