Prosecution Insights
Last updated: April 19, 2026
Application No. 17/599,853

IMAGE CAPTURE ELEMENT AND IMAGE CAPTURE APPARATUS

Status: Non-Final OA (§103)
Filed: Jan 10, 2022
Examiner: CUTLER, ALBERT H
Art Unit: 2637
Tech Center: 2600 — Communications
Assignee: Nikon Corporation
OA Round: 7 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 2y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (811 granted / 1024 resolved), +17.2% vs TC avg (above average)
Interview Lift: +21.3% for resolved cases with an interview (strong)
Typical Timeline: 2y 8m average prosecution; 33 applications currently pending
Career History: 1057 total applications across all art units
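The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, using the counts shown on this page; the 99% cap applied to the with-interview figure is an assumption about how the tool reaches that number, not a documented method:

```python
# Illustrative reconstruction of the examiner statistics shown above.
# Counts are taken from this page; the analytics tool's exact method
# is not disclosed, so treat this as a plausibility check only.

granted = 811        # applications granted by this examiner
resolved = 1024      # total resolved applications

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # 79.2%

interview_lift = 0.213                               # reported interview lift
# Assumed: the tool caps the interview-adjusted probability at 99%.
with_interview = min(allow_rate + interview_lift, 0.99)
print(f"With interview: {with_interview:.0%}")       # 99%
```

The raw sum of 79.2% and 21.3 points exceeds 100%, which is consistent with the page reporting a capped 99% rather than the uncapped figure.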

Statute-Specific Performance

§101: 3.3% (-36.7% vs TC avg)
§103: 45.9% (+5.9% vs TC avg)
§102: 29.0% (-11.0% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Deltas are measured against an estimated Tech Center average. Based on career data from 1024 resolved cases.

Office Action

§103
DETAILED ACTION

This office action is responsive to communication filed on December 24, 2025. Claims 1-8 and 10-14 are pending in the application and have been examined by the Examiner.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 24, 2025 has been entered.

Response to Arguments

Applicant's arguments filed December 24, 2025 have been fully considered but they are not persuasive. Applicant argues, with respect to claim 1, that Shigiya, Niwa, and Zhang differ in (i) technical field, (ii) problems to be solved, and (iii) functions and effects. Applicant accordingly submits that the person of ordinary skill in the art would not have turned to these three references, nor would that skilled person have derived any motivation or other rationale to combine them. See, e.g., MPEP §§ 2141.01(a) and 2143.

The Examiner respectfully disagrees. All three references (i) are in the technical field of digital imaging, (ii) address the problem of noise caused by dark current, and (iii) reduce that noise by subtracting the outputs of optical black pixels from the outputs of regular pixels. The Examiner stands by the motivation and/or rationale provided in the body of the rejection of claim 1.
Applicant argues, with respect to claim 1, that Shigiya discloses that, to improve the accuracy of optical black correction performed for a plurality of imaging regions having different exposure times, the control conditions of each imaging region and the control conditions of the corresponding reference region (optical black region) used to correct that imaging region should be the same. On the other hand, Zhang adopts a configuration in which the control conditions of imaging pixels and dark calibration pixels (optical black pixels) are intentionally made different so as to perform correction while taking into account the difference in dark current between the imaging pixels and the dark calibration pixels (optical black pixels). Applicant accordingly submits that the configuration assumed in Zhang is contrary to the premise underlying Shigiya and that Shigiya accordingly teaches away from Zhang. Because "[i]t is improper to combine references where the references teach away from their combination," Applicant submits that the person of ordinary skill in the art accordingly would not have combined Shigiya and Zhang, with or without Niwa, as proposed by the Office. See MPEP §2145.

The Examiner respectfully disagrees. The principal goal of Shigiya et al. is to “perform appropriate offset correction” (paragraph 0006), and Shigiya et al. does this by using a reference region to calculate an appropriate offset value for an imaging region (paragraph 0063). Shigiya et al. nowhere teaches that in order to achieve this goal, the exposure conditions of the imaging region and reference region must be the same. Zhang likewise teaches correcting for a dark current error by subtracting a dark calibration pixel readout from an imaging pixel readout (see column 1, lines 36-44).
However, Zhang teaches that this method is not always suitable because sometimes the rate of dark current generation is different between the dark current calibration pixels and the imaging pixels (see column 1, lines 45-62, column 3, lines 31-37). In view of this problem, Zhang teaches that control conditions of the particular light-shielding pixel region of the plurality of light-shielding pixel regions are set independently of the control conditions of the particular imaging region of the plurality of imaging regions (For instance, see figure 3. A dark current ratio (DC ratio) between the dark current calibration pixels and the imaging pixels is calculated in a calibration stage during steps 305 and 310, column 4, line 65 through column 5, line 28. An exposure period (i.e. control condition) for the imaging pixels is then determined in step 315, column 5, lines 36-43. After this, an exposure period (i.e. control condition) is independently set for dark current calibration pixels in step 320, column 5, lines 44-56. As shown in figures 4A and 4B, the control conditions are independently set for the light-shielding pixel region and the imaging region by adjusting the timing of a reset signal, column 6, lines 32-67.). Zhang does not teach away from Shigiya et al. because both references are directed toward removing dark current noise and both references do so by subtracting reference region data from imaging region data. Shigiya et al. does not contemplate the problem addressed by Zhang, that sometimes the rate of dark current generation is different between the dark current calibration pixels and the imaging pixels. Because Zhang solves this problem, the teaching of Zhang would actually help to better achieve the goal of Shigiya et al. (i.e. to provide appropriate offset correction). As such, Zhang does not teach away from Shigiya et al. Therefore, the rejection is maintained by the Examiner. 
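The correction scheme running through these rejections, subtracting the output of light-shielded optical black (OB) pixels from the output of imaging pixels to remove the dark-current pedestal, plus Zhang's refinement of rescaling the OB reading when the two pixel groups have independently set exposure periods, can be sketched numerically. This is an illustrative model only; the variable names and the rescaling formula are editorial assumptions, not the disclosure of Shigiya, Niwa, or Zhang:

```python
# Illustrative dark-current offset correction. All values are hypothetical.

signal = 150.0           # true scene signal at an imaging pixel (DN)
img_dark_rate = 300.0    # imaging-pixel dark-current rate (DN/s)
t_img = 0.033            # imaging-region exposure period (s)

imaging_reading = signal + img_dark_rate * t_img   # signal + dark pedestal

# Simple case: the OB pixels share the imaging region's control
# conditions, so their reading equals the pedestal directly.
ob_reading_same = img_dark_rate * t_img
print(round(imaging_reading - ob_reading_same, 6))   # 150.0

# Zhang-style case: the OB pixels have an independently set exposure
# period and a different dark-current rate, characterized during
# calibration by a dark-current ratio. The rescaling is an assumption.
dc_ratio = 1.5                                   # imaging rate / OB rate
t_ob = 0.010                                     # OB exposure, set independently
ob_reading = (img_dark_rate / dc_ratio) * t_ob   # OB pixels see dark current only
pedestal = ob_reading * dc_ratio * (t_img / t_ob)
print(round(imaging_reading - pedestal, 6))      # 150.0
```

Both branches recover the 150.0 DN scene signal; the second does so even though the OB pixels integrate dark current for a different period at a different rate, which is the point of setting their control conditions independently.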
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5, 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Shigiya et al. (US 2019/0191112) in view of Niwa et al. (US 2020/0021769) and Zhang (US 8,698,922).
The Examiner’s response to Applicant’s arguments, as outlined above, is hereby incorporated into the rejections of claims 1-5, 8 and 10 by reference. Consider claim 1, Shigiya et al. teaches: An image capture element (figure 3), comprising: an imaging pixel region having a plurality of imaging regions (e.g. 14a and 14b in figure 3, paragraph 0052), in which different control conditions are capable of being set for each of the plurality of imaging regions (For instance, different exposure times are capable of being set for each of the plurality of imaging regions (14a, 14b), paragraphs 0059 and 0060.), and each of the plurality of imaging regions (14a, 14b) having: (i) a plurality of first pixels that each include a first photoelectric converter configured to receive light from an optical system and convert the received light to an electric charge (see paragraphs 0051 and 0052, and the photodiode (PD) of figure 7, paragraph 0099) and a first circuit that is connected to the first photoelectric converter (For instance, see the transistors (M1-M6) connected to the photodiode (PD) in figure 7, paragraphs 0098 and 0099.), the first pixels being arrayed in a first direction and a second direction that intersects with the first direction (i.e. in a “plurality of rows and a plurality of columns”, paragraph 0054), and (ii) a first control line that is connected to the plurality of first pixels (All of the pixels having a same drive condition, such as those in imaging region 14a, are connected to a common control line (OFD), paragraphs 0104 and 0111.), and to which a signal that controls the plurality of first pixels is output (i.e. to perform a global shutter operation, see paragraphs 0104, 0108 and 0111); and a plurality of light-shielding pixel regions (e.g. including regions 18a, 18b and a column OB region on the left side of the pixel unit (10), paragraphs 0056, 0053 and 0054, figure 3) each having: (i) a plurality of second pixels (e.g. 
in region 18a) that each include a second photoelectric converter (PD) that is shielded from light (“covered with a light-shielding film”, paragraphs 0051 and 0053) and a second circuit that is connected to the second photoelectric converter (For instance, see the transistors (M1-M6) connected to the photodiode (PD) in figure 7, paragraphs 0098 and 0099.), the second pixels being arrayed in the first direction and the second direction (i.e. in a “plurality of rows and a plurality of columns”, paragraph 0054), and (ii) a second control line that is connected to the plurality of second pixels (All of the pixels having a same drive condition, such as those in light-shielding pixel region 18a, are connected to a common control line (OFD), paragraphs 0104 and 0111.), and to which a signal that controls the second pixels is output (i.e. to perform a global shutter operation, see paragraphs 0104, 0108 and 0111), wherein: the plurality of the imaging regions (14a, 14b) are arranged in the first direction and the second direction (Each of the imaging regions (14a, 14b) and the light-shielding pixel regions (18a, 18b) of Shigiya et al. includes “a plurality of rows and a plurality of columns” (paragraph 0054). As such, the imaging regions (14a, 14b) and the light-shielding pixel regions (18a, 18b) are arranged in both the row-wise direction and the column-wise direction (i.e. in the first direction and the second direction).); the plurality of light-shielding pixel regions are arranged in the first direction and the second direction (Each of the imaging regions (14a, 14b) and the light-shielding pixel regions (18a, 18b) of Shigiya et al. includes “a plurality of rows and a plurality of columns” (paragraph 0054). As such, the imaging regions (14a, 14b) and the light-shielding pixel regions (18a, 18b) are arranged in both the row-wise direction and the column-wise direction (i.e. 
in the first direction and the second direction).); control conditions are capable of being set for each of the plurality of light-shielding pixel regions (18a, 18b) independently of the control conditions for the plurality of imaging regions (14a, 14b). For instance, the exposure time of reference region 18a is set independently of the exposure time of imaging region 14b (paragraphs 0059-0061, figure 5). Additionally, the exposure time of reference region 18b is set independently of the exposure time of imaging region 14a (paragraphs 0059-0061, figure 5), and output signals from the plurality of imaging regions (14a, 14b) are adjusted using output signals from the plurality of light-shielding pixel regions (As detailed in paragraph 0084, “An offset correction process of subtracting output values of the pixels P arranged in the reference regions 18a and 18b from output values of the pixels P arranged in the imaging regions 14a and 14b may be performed inside the imaging device 100 or may be performed outside the imaging device 100.”) so that the output signal from a particular imaging region is adjusted using the output signal from a particular light-shielding pixel region (For instance, a pixel signal from imaging region 14a is adjusted using a pixel signal from reference region 18a, and a pixel signal from imaging region 14b is adjusted using a pixel signal from reference region 18b, paragraph 0063). However, Shigiya et al. does not explicitly teach that the plurality of light-shielding pixel regions are arranged outside of the imaging pixel region, which includes all of the plurality of imaging regions. Niwa similarly teaches an image capture element (figure 12) having a plurality of light-shielding pixel regions (upper OPB region, 370, lower OPB region, 380, paragraphs 0110 and 0111) each of which is associated with an imaging region (e.g. 
an imaging region containing odd lines and an imaging region containing even lines, respectively, paragraph 0112) of an imaging pixel region (effective pixel region, 310). However, Niwa additionally teaches that the plurality of light-shielding pixel regions (370, 380) are arranged outside of the imaging pixel region (310), which includes all of the plurality of imaging regions (“Furthermore, the upper OPB region 370 is arranged above the effective pixel region 310 and the lower OPB region 380 is arranged on the lower side of the effective pixel region 310 when the column direction is defined as the up-down direction.” See paragraph 0111, figure 12.). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have the plurality of light-shielding pixel regions taught by Shigiya et al. be arranged outside of the imaging pixel region which includes all of the plurality of imaging regions as taught by Niwa as this only involves combining prior art elements according to known methods to yield predictable results such as enabling a readout to be performed with a reduced time compared with a case of reading out one line at a time (Niwa, paragraph 0115). However, the combination of Shigiya et al. and Niwa does not explicitly teach that control conditions of the particular light-shielding pixel region of the plurality of light-shielding pixel regions are set independently of the control conditions of the particular imaging region of the plurality of imaging regions. Zhang similarly teaches an imaging device (figure 1) including a pixel array (110) having imaging pixels (102) and dark calibration pixels (103), column 2, lines 37-49. Zhang likewise teaches correcting for a dark current error by subtracting a dark calibration pixel readout from an imaging pixel readout (see column 1, lines 36-44). 
However, Zhang teaches that this method is not always suitable because sometimes the rate of dark current generation is different between the dark current calibration pixels and the imaging pixels (see column 1, lines 45-62, column 3, lines 31-37). In view of this problem, Zhang teaches that control conditions of the particular light-shielding pixel region of the plurality of light-shielding pixel regions are set independently of the control conditions of the particular imaging region of the plurality of imaging regions (For instance, see figure 3. A dark current ratio (DC ratio) between the dark current calibration pixels and the imaging pixels is calculated in a calibration stage during steps 305 and 310, column 4, line 65 through column 5, line 28. An exposure period (i.e. control condition) for the imaging pixels is then determined in step 315, column 5, lines 36-43. After this, an exposure period (i.e. control condition) is independently set for dark current calibration pixels in step 320, column 5, lines 44-56. As shown in figures 4A and 4B, the control conditions are independently set for the light-shielding pixel region and the imaging region by adjusting the timing of a reset signal, column 6, lines 32-67.). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have the control conditions of the particular light-shielding pixel region and the particular imaging region taught by the combination of Shigiya et al. and Niwa be independently set as taught by Zhang for the benefit of enabling more accurate pixel values to be obtained by accounting for non-uniformity of dark currents between the light-shielding pixel region and the imaging region (Zhang, column 4, lines 22-27, column 3, lines 31-42, column 1, lines 59-62). Consider claim 2, and as applied to claim 1 above, Shigiya et al. 
teaches: in the imaging pixel region (14a, 14b) two or more types of the control conditions are set for the plurality of imaging regions (For instance, a global shutter control condition and a row-by-row sequential readout control condition are set as shown in figure 12, paragraphs 0040, 0119 and 0120.); and the image capture element comprises an optical black pixel region (16, figure 10) having a number of the plurality of light-shielding pixel regions (e.g. including regions 18a, 18b and a column OB region on the left side of the pixel unit (10), paragraphs 0056, 0053 and 0054, figure 12) that is greater than or equal to a number of the imaging regions (14a, 14b) within the imaging pixel region (e.g. 3 is greater than 2); a same control condition as a reference origin imaging region is set for each of the plurality of light-shielding pixel regions in the optical black pixel region (For instance, see “exposure time 1” and “exposure time 2” in figure 12.); and two or more types of the control conditions are set for the plurality of light-shielding pixel regions (For instance, a global shutter control condition and a row-by-row sequential readout control condition are set as shown in figure 12, paragraphs 0040, 0119 and 0120.). Consider claim 3, and as applied to claim 2 above, Shigiya et al. teaches: the reference origin imaging region (e.g. 14a) and a reference destination light-shielding pixel region (e.g. 18a) that is referred to by the reference origin imaging region (14a) are arrayed in a prescribed direction (see figure 3). Consider claim 4, and as applied to claim 3 above, Shigiya et al. teaches: the prescribed direction is a read direction of an output signal from a pixel group constituting the reference origin imaging region (14a) and the reference destination light-shielding pixel region (18a, i.e. a vertical direction in figure 3). Consider claim 5, and as applied to claim 3 above, Shigiya et al. 
teaches: the prescribed direction is a direction orthogonal to a read direction of an output signal from a pixel group constituting the reference origin imaging region and the reference destination light-shielding pixel region (For instance, a column OB region on the left side of the pixel unit (10) and the origin imaging region 14a are arranged orthogonally to the vertical direction in which the readout scan is performed, paragraphs 0056, 0053 and 0054, figure 3.). Consider claim 10, and as applied to claim 2 above, Shigiya et al. teaches: a signal processor configured to correct the output signals from the plurality of imaging regions based on the output signals from the plurality of light-shielding pixel regions (“An offset correction process of subtracting output values of the pixels P arranged in the reference regions 18a and 18b from output values of the pixels P arranged in the imaging regions 14a and 14b may be performed inside the imaging device 100 or may be performed outside the imaging device 100. When an offset correction process is performed inside the imaging device 100, the imaging device 100 further includes a signal processing unit that performs the offset correction process.” paragraph 0084). Consider claim 8, Shigiya et al. teaches: An imaging device (figure 16), comprising: the image capture element according to claim 1 (imaging device, 201, paragraph 0166, see claim 1 rationale); and a signal processor (signal processing unit, 208) configured to perform black level correction on output from the plurality of imaging regions based on output from the plurality of light-shielding pixel regions (“The offset correction process to subtract output values of the pixels P arranged in the reference region 18 from output values of the pixels P arranged in the imaging region 14 may be performed in the signal processing unit 208.” Paragraph 0166). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Shigiya et al. 
(US 2019/0191112) in view of Niwa et al. (US 2020/0021769) and Zhang (US 8,698,922), as applied to claim 1 above, and further in view of Joboji et al. (US 8,854,507). Consider claim 6, and as applied to claim 1 above, Shigiya et al. teaches: each of the light-shielding pixel regions has a first optical black pixel group, each pixel of which has a photoelectric converter (i.e. a photoelectric conversion element on which a light-shielding film is arranged, see paragraph 0051). However, the combination of Shigiya et al., Niwa et al. and Zhang does not explicitly teach that each of the light-shielding pixel regions has a second optical black pixel group, each pixel of which does not have the photoelectric conversion element. Joboji et al. similarly teaches an image capture unit (figure 1) comprising an imaging region (P(x,y)) and a first optical black pixel group (Pob(x,y)) wherein each pixel includes a photodiode (PD(x,y), see column 5, lines 30-42, figure 2). However, Joboji et al. additionally teaches a second optical black pixel group (Pog(x,y), figure 1), each pixel of which does not have the photoelectric conversion element (see column 5, lines 43-50, column 7, lines 10-28, figure 3). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have each of the light-shielding pixel regions taught by the combination of Shigiya et al., Niwa et al. and Zhang include a second optical black pixel group, each pixel of which does not have the photoelectric conversion element as taught by Joboji et al. for the benefit of making it possible to correct for both an offset variation and a gain variation of output signals of the imaging area of the solid-state imaging device in real time (Joboji et al., column 4, lines 33-35). 
Allowable Subject Matter

Claims 7 and 11-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Consider claim 7, the prior art of record neither teaches nor reasonably suggests that among a first of the light-shielding pixel regions and a second of the light-shielding pixel regions that are adjacent to each other, a first partial region where the first light-shielding pixel region and the second light-shielding pixel region are adjacent to each other is either one of the first optical black pixel group and the second optical black pixel group, and among the first light-shielding pixel region and the second light-shielding pixel region, a second partial region other than the first partial region is another of the first light-shielding pixel region and the second light-shielding pixel region, in combination with the other elements recited in parent claims 1 and 6.

Consider claim 11, the prior art of record neither teaches nor reasonably suggests that the signal processor is configured to determine a correction method for correcting the output signals from the plurality of imaging regions based on correction information including information relating to the control conditions of the plurality of light-shielding pixel regions and information relating to the control conditions of the plurality of imaging regions, in combination with the other elements recited in parent claims 1, 2 and 10.

Claims 12-14 contain allowable subject matter as depending from claim 11.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALBERT H CUTLER whose telephone number is (571)270-1460. The examiner can normally be reached approximately Mon - Fri 8:00-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran can be reached at (571)272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALBERT H CUTLER/Primary Examiner, Art Unit 2637

Prosecution Timeline

Jan 10, 2022
Application Filed
Nov 23, 2022
Non-Final Rejection — §103
May 25, 2023
Response Filed
Jul 28, 2023
Final Rejection — §103
Dec 29, 2023
Request for Continued Examination
Dec 31, 2023
Response after Non-Final Action
Jan 04, 2024
Non-Final Rejection — §103
Jun 06, 2024
Response Filed
Jul 18, 2024
Final Rejection — §103
Dec 26, 2024
Request for Continued Examination
Jan 07, 2025
Response after Non-Final Action
Jan 15, 2025
Non-Final Rejection — §103
Jul 17, 2025
Response Filed
Jul 22, 2025
Final Rejection — §103
Dec 24, 2025
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Jan 29, 2026
Non-Final Rejection — §103 (current)
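The timeline above reduces to the round count and pendency figures the dashboard reports. A small sketch of that reduction, with dates transcribed from this page; the tool's own aggregation method is not disclosed:

```python
from datetime import date

# Illustrative: derive the OA round count and elapsed prosecution time
# from the timeline above. Dates are transcribed from this page.

filed = date(2022, 1, 10)
current_oa = date(2026, 1, 29)

rejections = [                      # each non-final or final rejection
    date(2022, 11, 23), date(2023, 7, 28), date(2024, 1, 4),
    date(2024, 7, 18), date(2025, 1, 15), date(2025, 7, 22),
    date(2026, 1, 29),
]

elapsed_days = (current_oa - filed).days
print(f"OA rounds so far: {len(rejections)}")                 # 7
print(f"Pendency to current OA: {elapsed_days / 365.25:.1f} years")  # ~4.1
```

Note that this application's pendency already exceeds the examiner's 2y 8m median time to grant, consistent with the "High" PTA risk flagged below.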

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12592997
PERIPHERAL BUS VIDEO COMMUNICATION USING INTERNET PROTOCOL
2y 5m to grant · Granted Mar 31, 2026
Patent 12587765
IMAGING DEVICE AND ELECTRONIC APPARATUS COMPRISING IMAGING DEVICE
2y 5m to grant · Granted Mar 24, 2026
Patent 12587763
COMPARISON CIRCUIT AND IMAGE SENSING DEVICE INCLUDING THE SAME
2y 5m to grant · Granted Mar 24, 2026
Patent 12581765
ACTIVE PIXEL IMAGE SENSOR AND DISPLAY DEVICE
2y 5m to grant · Granted Mar 17, 2026
Patent 12563286
METHOD AND MOBILE DEVICE FOR CAPTURING AN IMAGE OF A FOOT USING AUGMENTED REALITY
2y 5m to grant · Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 79% (99% with interview, +21.3%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 1024 resolved cases by this examiner. Grant probability derived from career allow rate.
