Prosecution Insights
Last updated: April 19, 2026
Application No. 18/841,817

METHOD AND IMAGE PROCESSOR UNIT FOR PROCESSING IMAGE DATA

Non-Final OA: §102, §112
Filed: Aug 27, 2024
Examiner: GILES, NICHOLAS G
Art Unit: 2639
Tech Center: 2600 — Communications
Assignee: Dream Chip Technologies GmbH
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 6m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% (above average; 683 granted / 834 resolved; +19.9% vs TC avg)
Interview Lift: +16.5% across resolved cases with interview (a strong lift)
Typical Timeline: 2y 6m average prosecution; 25 applications currently pending
Career History: 859 total applications across all art units
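The headline allow rate can be re-derived from the raw counts quoted above. A minimal sketch (the rounding to a whole percent is an assumption about how the dashboard displays it):

```python
# Re-derive the career allow rate from the raw counts above.
granted, resolved = 683, 834
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 81.9%, displayed as 82%
```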

Statute-Specific Performance

Statute   Examiner Rate   vs TC Avg
§101      4.0%            -36.0%
§103      39.2%           -0.8%
§102      24.4%           -15.6%
§112      23.7%           -16.3%

vs TC Avg = difference from the Tech Center average estimate. Based on career data from 834 resolved cases.
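The per-statute deltas above imply a single Tech Center baseline. A quick sketch recovers it (values transcribed from the table; the interpretation of each percentage as examiner rate minus TC average is an assumption):

```python
# Recover the implied Tech Center baseline from each examiner rate and
# its "vs TC Avg" delta (baseline = rate - delta). Values are from the
# statute-specific performance table.
rates  = {"101": 4.0, "103": 39.2, "102": 24.4, "112": 23.7}
deltas = {"101": -36.0, "103": -0.8, "102": -15.6, "112": -16.3}

baseline = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baseline)  # every statute implies the same 40.0% baseline
```

Notably, all four statutes point to the same 40.0% Tech Center baseline, which suggests the deltas were computed against one shared estimate rather than per-statute averages.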

Office Action

Rejections: §102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The current abstract is substantially longer than 150 words. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length. See MPEP § 608.01(b) and 1826.

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-13 and 15-21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites “a finer sub-pixel matrix of light sensitive elements” and “a group of light sensitive elements” after previously reciting “a matrix of light sensitive elements”. It is unclear whether the “light sensitive elements” of the recitations are related or not. Claim 1 recites the limitation "the pixel matrices". There is insufficient antecedent basis for this limitation in the claim. Claim 1 recites “of sub-pixel values” multiple times after previously reciting “group of sub-pixel values”.
It is unclear whether the “sub-pixel values” of the recitations are related or not.

Claims 2-12 and 15 depend on claim 1 and therefore are rejected.

Claim 2 recites the limitations “η.sub.n`”, “Y.sub.n`”, "the n-th view image pixel matrix", “the n-th view pixel matrix”, and “the m-th view image pixel matrix”. There is insufficient antecedent basis for these limitations in the claim.

Claim 3 recites the limitations “the plurality of sub-pixel values”, "the captured pixel matrix", and “the variations between the sub-pixel values”. There is insufficient antecedent basis for these limitations in the claim. Claim 3 recites “the views”. As “plurality of views”, “other views”, and “set of views” was previously recited, it is unclear what “the views” is referring to. Claim 3 also recites “joined pre-processing”. It is unclear if the “joined pre-processing” is referring to a previous step or meant to be a new step in the method. Based on context the examiner believes “joined” is meant to be “joining”.

Claim 4 depends on claim 3 and therefore is rejected. Claim 4 recites the limitation "the sum of the sub-pixel values". There is insufficient antecedent basis for this limitation in the claim.

Claim 5 recites the limitation "the average of the sub-pixel values". There is insufficient antecedent basis for this limitation in the claim.

Claim 6 recites the limitation "the captured pixel matrix". There is insufficient antecedent basis for this limitation in the claim.

Claim 7 recites the limitation "the variation of brightness of sub-pixel values". There is insufficient antecedent basis for this limitation in the claim.

Claim 8 recites the limitations "the combined views of an image", “the combined view”, “the combined pixel values”, and “the separate pixel matrices”. There is insufficient antecedent basis for these limitations in the claim.
Claim 10 recites the limitations “the sub-pixel matrixes”, “the difference values”, “the positions in the matrix”, and “the set of sub-pixel matrices”. There is insufficient antecedent basis for these limitations in the claim.

Claim 11 recites the limitation "the disparity of sub-pixel values". There is insufficient antecedent basis for this limitation in the claim.

Claim 12 recites the limitation "the focal depth". There is insufficient antecedent basis for this limitation in the claim.

Claim 13 recites “a finer sub-pixel matrix of light sensitive elements” and “a group of light sensitive elements” after previously reciting “a matrix of light sensitive elements”. It is unclear whether the “light sensitive elements” of the recitations are related or not. Claim 13 recites “of sub-pixel values” multiple times after previously reciting “matrix of sub-pixel values”. It is unclear whether the “sub-pixel values” of the recitations are related or not.

Claims 16-21 depend on claim 13 and therefore are rejected.

Claim 16 recites the limitations “η.sub.n`”, “Y.sub.n`”, "the n-th view image pixel matrix", “the n-th view pixel matrix”, and “the m-th view image pixel matrix”. There is insufficient antecedent basis for these limitations in the claim.

Claim 17 recites the limitations “the plurality of sub-pixel values”, "the captured pixel matrix", and “the variations between the sub-pixel values”. There is insufficient antecedent basis for these limitations in the claim. Claim 3 recites “the views”. As “plurality of views”, “other views”, and “set of views” was previously recited, it is unclear what “the views” is referring to. Claim 3 also recites “performed joined pre-processing”. It is unclear if the “performed joined pre-processing” is referring to a previous step or meant to be a new step in the method. Based on context the examiner believes “performed” is meant to be “performing”.

Claim 18 recites the limitation "the sum of the sub-pixel values".
There is insufficient antecedent basis for this limitation in the claim.

Claim 19 recites the limitation "the average of the sub-pixel values". There is insufficient antecedent basis for this limitation in the claim.

Claim 20 recites the limitation "the captured pixel matrix". There is insufficient antecedent basis for this limitation in the claim.

Claim 21 recites the limitation "the variation of brightness of sub-pixel values". There is insufficient antecedent basis for this limitation in the claim.

It is noted that, due to the numerous indefinite issues with the claims, there are likely additional indefinite language issues that will arise from amending the claims, and therefore need to be corrected.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1, 7, 9-13, 15, and 21 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fukuda (U.S. Pub. No. 20220137328).

Regarding claim 1, Fukuda discloses: A method for processing image data of an image sensor, wherein the image sensor comprises a matrix of light sensitive elements (array of pixels of an imaging element where pixels have sub pixels with a plurality of photoelectric conversion units, par. 56, 60, 62, 204, 207, and Figs. 2, 3A, 3B, 26, 27A, 27B) and a plurality of lens elements (a microlens 305 for converging the incident light at the light-receiving surface side of each pixel (+z direction) is formed, par. 60, 207, and Figs. 3B and 27B) and/or filter elements arranged in a pixel matrix (a color filter 306 is formed between a microlens 305 and the pixels with photoelectric conversion units, par. 61 and Figs. 3B and 27B) in front of a finer sub-pixel matrix of light sensitive elements (pixels have sub pixels with a plurality of photoelectric conversion units, par. 56, 60, 62, 204, 207, and Figs. 2, 3A, 3B, 26, 27A, 27B), wherein a group of light sensitive elements are placed behind a common lens element and/or a common filter element to provide sub-pixel values for a respective position in the pixel matrix (sub pixels with a plurality of photoelectric conversion units have a microlens 305 that converges the incident light at the light-receiving surface side of each pixel, where an electron and a hole (positive hole) are generated through pair production according to the amount of the received light and separated by a depletion layer, and thereafter, electrons are accumulated, and the electrons accumulated in the first photoelectric conversion unit 301 and the second photoelectric conversion unit 302 are transferred to a capacitance unit (FD) via a transfer gate and then converted into a voltage signal, par. 56, 60, 62, 204, 207, and Figs. 2, 3A, 3B, 26, 27A, 27B), and wherein said image sensor is adapted to capture image data for a plurality of views, wherein each view comprises a matrix of a selected group of sub-pixel values of the pixel matrices captured by the matrix of light sensitive elements (a signal from a certain sub pixel among the first sub pixel 201 and the second sub pixel 202 divided as 2×1 (the first sub pixel to the N.sub.LFth sub pixel which are divided as Nx×Ny) is selected from the LF data (input data) such that a viewpoint image corresponding to the certain partial pupil area among the first partial pupil area 501 and the second partial pupil area 502 (the first partial pupil area to the N.sub.LFth partial pupil area) can be generated, or a signal from the certain sub pixel among the first sub pixel 201 to the fourth sub pixel 204 divided as the four parts (the first sub pixel to the N.sub.LFth sub pixel which are divided as Nx×Ny) is selected for each pixel from the LF data (input image) corresponding to the pixel array illustrated in FIG. 26, thereby generating a first viewpoint image to a fourth viewpoint image, par. 70, 211), the method comprising:

for each captured view of the image, determining first variations of sub-pixel values in the respective view separately from other views of the same image (depending on the exit pupil distance Dl of the focusing optical system in relation to the set pupil distance Ds of the imaging element a shading occurs, and as necessary, to improve the shagging (misspelled and should be shading based on context) of each viewpoint image, shading correction processing (optical correction processing) may be performed every RGB with respect to each of the first viewpoint image and the second viewpoint image (the first viewpoint image to the N.sub.LFth viewpoint image), par. 84-88 and Fig. 9);

determining second variations of sub-pixel values related to the same position in the pixel matrix behind a respective lens element and/or filter element in a set of views of the same image (for the first viewpoint image and second viewpoint image a difference between the viewpoint images is used to determine crosstalk correction, par. 128-135 and Figs. 9 and 20); and

processing the image data for the image by use of the determined first and second variations (at the end of Fig. 9 after shading correction and crosstalk correction a refocused image is generated, par. 84-88, 128-135, 166, and Fig. 9).

Regarding claim 7, Fukuda further discloses: evaluating the variation of brightness of sub-pixel values related to the same pixel position in the set of views of an image (if the image brightness signal Y(j, i) is smaller than the low brightness threshold Ymin, the value of the image contrast distribution C(j, i) is set as 0 when generating image contrast distribution C(j, i), par. 95).

Regarding claim 9, Fukuda further discloses: evaluating a view blur by correlating related sub-pixel values of at least two views of the image, wherein a first sub-pixel matrix of a first group of sub-pixel values is correlated with a second sub-pixel matrix of a second group of sub-pixel values (refocus processing for re-modifying a focus position with respect to the image after the photographing is performed by using the relationship between the defocus amount between the first viewpoint image and the second viewpoint image (the first viewpoint image to the N.sub.LFth viewpoint image), and the image shift amount therebetween, par. 78).
Regarding claim 10, Fukuda further discloses: correlating the sub-pixel matrixes with each other by automatically processing the difference values for each of the positions in the matrix as a difference of the sub-pixel value of one sub-pixel matrix of the set of sub-pixel matrices and the sub-pixel value of another sub-pixel matrix of the set of sub-pixel matrices (refocus processing for re-modifying a focus position with respect to the image after the photographing is performed by using the relationship between the defocus amount between the first viewpoint image and the second viewpoint image (the first viewpoint image to the N.sub.LFth viewpoint image), and the image shift amount therebetween, par. 78).

Regarding claim 11, Fukuda further discloses: automatically estimating a disparity between the views of an image, wherein the disparity of sub-pixel values related to the same pixel position is determined (refocus processing for re-modifying a focus position with respect to the image after the photographing is performed by using the relationship between the defocus amount between the first viewpoint image and the second viewpoint image (the first viewpoint image to the N.sub.LFth viewpoint image), and the image shift amount therebetween, par. 78).

Regarding claim 12, Fukuda further discloses: automatically estimating depth values of the captured image related to the focal depth of the captured image related to a focal plane of the image sensor (when an allowable confusion circle diameter is denoted by δ and an aperture value of the focusing optical system is denoted by F, a depth of field at the aperture value F is ±F×δ, par. 154-156).

Regarding claim 13, Fukuda discloses: An image processor unit (CPU 121 drives various circuits incorporated into the camera on the basis of a predetermined program stored in the ROM to execute a series of operations including AF control, shooting processing, image processing, record processing, and the like, par. 52) for processing raw image data provided by an image sensor, said image sensor comprising a matrix of light sensitive elements (array of pixels of an imaging element where pixels have sub pixels with a plurality of photoelectric conversion units, par. 56, 60, 62, 204, 207, and Figs. 2, 3A, 3B, 26, 27A, 27B) and a plurality of lens elements (a microlens 305 for converging the incident light at the light-receiving surface side of each pixel (+z direction) is formed, par. 60, 207, and Figs. 3B and 27B) and/or filter elements arranged in a pixel matrix (a color filter 306 is formed between a microlens 305 and the pixels with photoelectric conversion units, par. 61 and Figs. 3B and 27B) in front of a finer sub-pixel matrix of light sensitive elements (pixels have sub pixels with a plurality of photoelectric conversion units, par. 56, 60, 62, 204, 207, and Figs. 2, 3A, 3B, 26, 27A, 27B), wherein a group of light sensitive elements are placed behind a common lens element and/or a common filter element to provide sub-pixel values for a respective position in the pixel matrix (sub pixels with a plurality of photoelectric conversion units have a microlens 305 that converges the incident light at the light-receiving surface side of each pixel, where an electron and a hole (positive hole) are generated through pair production according to the amount of the received light and separated by a depletion layer, and thereafter, electrons are accumulated, and the electrons accumulated in the first photoelectric conversion unit 301 and the second photoelectric conversion unit 302 are transferred to a capacitance unit (FD) via a transfer gate and then converted into a voltage signal, par. 56, 60, 62, 204, 207, and Figs. 2, 3A, 3B, 26, 27A, 27B), and wherein said raw image data comprises a matrix of pixel values for the pixel matrix captured by the matrix of light sensitive elements (electrons accumulated in the first photoelectric conversion unit 301 and the second photoelectric conversion unit 302 (of the array of pixels) are transferred to a capacitance unit (FD) via a transfer gate and then converted into a voltage signal, par. 56, 60, 62, 204, 207, and Figs. 2, 3A, 3B, 26, 27A, 27B), said matrix of pixel values being divided in a set of views, each view comprising a matrix of sub-pixel values comprising a view specific sub-pixel of the sub-pixel matrix for each position in the matrix, which is related to a respective lens element and/or filter element (a signal from a certain sub pixel among the first sub pixel 201 and the second sub pixel 202 divided as 2×1 (the first sub pixel to the N.sub.LFth sub pixel which are divided as Nx×Ny) is selected from the LF data (input data) such that a viewpoint image corresponding to the certain partial pupil area among the first partial pupil area 501 and the second partial pupil area 502 (the first partial pupil area to the N.sub.LFth partial pupil area) can be generated, or a signal from the certain sub pixel among the first sub pixel 201 to the fourth sub pixel 204 divided as the four parts (the first sub pixel to the N.sub.LFth sub pixel which are divided as Nx×Ny) is selected for each pixel from the LF data (input image) corresponding to the pixel array illustrated in FIG. 26, thereby generating a first viewpoint image to a fourth viewpoint image, par. 70, 211), wherein the image processor unit is configured to:

determine, for each captured view of the image, first variations of sub-pixel values in the respective view separately from other views of the same image (depending on the exit pupil distance Dl of the focusing optical system in relation to the set pupil distance Ds of the imaging element a shading occurs, and as necessary, to improve the shagging (misspelled and should be shading based on context) of each viewpoint image, shading correction processing (optical correction processing) may be performed every RGB with respect to each of the first viewpoint image and the second viewpoint image (the first viewpoint image to the N.sub.LFth viewpoint image), par. 84-88 and Fig. 9);

determine second variations of sub-pixel values related to the same position in the pixel matrix behind a respective lens element and/or filter element in a set of views of the same image (for the first viewpoint image and second viewpoint image a difference between the viewpoint images is used to determine crosstalk correction, par. 128-135 and Figs. 9 and 20); and

process the image data for the image by use of the determined first and second variations (at the end of Fig. 9 after shading correction and crosstalk correction a refocused image is generated, par. 84-88, 128-135, 166, and Fig. 9).

Regarding claim 15, Fukuda discloses: A non-transitory computer readable medium comprising a computer program including instructions which, when the program is executed by a processing unit, causes the processing unit to carry out the steps of the method of claim 1 (see rejection of claim 1, and Fukuda discloses CPU 121 drives various circuits incorporated into the camera on the basis of a predetermined program stored in the ROM to execute a series of operations including AF control, shooting processing, image processing, record processing, and the like, par. 52).
Regarding claim 21, Fukuda further discloses: the image processor unit is configured to evaluate the variation of brightness of sub-pixel values related to the same pixel position in the set of views of an image (if the image brightness signal Y(j, i) is smaller than the low brightness threshold Ymin, the value of the image contrast distribution C(j, i) is set as 0 when generating image contrast distribution C(j, i), par. 95).

Conclusion

Regarding claim(s) 2-6, 8, 16-20, taking into consideration the indefiniteness described in the 35 USC 112 rejection(s) above, no prior art could be found and applied as a prior art rejection.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS G GILES whose telephone number is (571) 272-2824. The examiner can normally be reached M-F 6:45 AM-3:15 PM EST (hoteling).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Twyler Haskins, can be reached at 571-272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS G GILES/
Primary Examiner, Art Unit 2639

Prosecution Timeline

Aug 27, 2024
Application Filed
Feb 27, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604111
IMAGE SENSING DEVICE FOR OBTAINING HIGH DYNAMIC RANGE IMAGE AND IMAGING DEVICE INCLUDING THE SAME
2y 5m to grant; granted Apr 14, 2026
Patent 12598402
Partial Pixel Oversampling for High Dynamic Range Imaging
2y 5m to grant; granted Apr 07, 2026
Patent 12581213
SOLID-STATE IMAGING DEVICE AND METHOD OF CONTROLLING SOLID-STATE IMAGING DEVICE FOR SUPPRESSING DETERIORATION IN IMAGE QUALITY
2y 5m to grant; granted Mar 17, 2026
Patent 12581221
COMPARATOR AND IMAGE SENSOR INCLUDING THE SAME
2y 5m to grant; granted Mar 17, 2026
Patent 12581580
APPARATUSES AND METHODOLOGIES FOR FLICKER CONTROL
2y 5m to grant; granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
98%
With Interview (+16.5%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 834 resolved cases by this examiner. Grant probability derived from career allow rate.
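The with-interview figure appears consistent with the career allow rate plus the observed interview lift. A minimal sketch of that arithmetic (the additive model and the cap at 100% are assumptions, not the tool's documented formula):

```python
# Hypothetical derivation of the "with interview" projection shown above:
# career allow rate plus observed interview lift, capped at 100%.
base = 82.0   # career allow rate, %
lift = 16.5   # interview lift among resolved cases, %

with_interview = min(base + lift, 100.0)
print(round(with_interview))  # matches the 98% shown
```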
