Prosecution Insights
Last updated: April 19, 2026
Application No. 18/364,871

SENSOR DEVICE AND METHOD OF DETERMINING CORRECTION COEFFICIENTS

Non-Final OA — §102, §103
Filed
Aug 03, 2023
Examiner
DAGNEW, MEKONNEN D
Art Unit
2638
Tech Center
2600 — Communications
Assignee
Shanghai Tianma Micro-Electronics Co. Ltd.
OA Round
1 (Non-Final)
83% Grant Probability — Favorable
1-2 OA Rounds
2y 6m To Grant
99% With Interview

Examiner Intelligence

Career Allow Rate: 83% — above average (604 granted / 728 resolved; +21.0% vs TC avg)
Interview Lift: +15.8% (strong), based on resolved cases with interview
Typical Timeline: 2y 6m avg prosecution; 29 currently pending
Career History: 757 total applications across all art units

Statute-Specific Performance

§101: 4.5% (-35.5% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 728 resolved cases

Office Action

Rejections: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 10 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Duran et al. (US 20180343390 A1).

As of Claim 10: Duran teaches a method of determining correction coefficients to be used by a controller of a sensor device in correcting measurement signals of a plurality of pixels of the sensor device (¶¶0146, 0148), each of the plurality of pixels including a photodetector and a pixel circuit configured to output a signal from the photodetector, and the method comprising: acquiring measured signals from the plurality of pixels under a plurality of reference lights having different intensities (¶¶0148-0149, and note that after capturing the video data, the camera device combines first video data of the first subset of video data with second video data of the second subset of video data to generate an HDR frame. The first subset of the video data meets the one or more first predefined criteria, and determining whether the second subset of the video data meets the one or more second predefined criteria includes determining whether video data of the HDR frame meets one or more predefined HDR criteria. In some implementations, the HDR criteria include whether a current exposure ratio equals a minimum exposure ratio, whether a number of pixels in boundary bins (e.g., sigma bins representing the uppermost/lowermost 1%, 2%, or 5% of pixels) meets pixel count criteria, and/or whether an average light intensity is less than a light intensity target); determining a statistic value of the measured signals over the plurality of pixels under each of the plurality of reference lights (¶0151); and determining a correction coefficient for each pixel based on the statistic value of the measured signals over the plurality of pixels and a measured signal of the pixel under each of the plurality of reference lights (¶¶0149, 0153-0154).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over Duran et al. (US 20180343390 A1) in view of Bales (US 20220038110 A1).

As of Claim 1: Duran teaches a sensor device (¶0147 and image sensor 418) comprising: a plurality of pixels (¶0142, and note that the histograms are graphs showing a number of pixels in respective images at each different intensity value found in that image; the light intensity histogram 802 in FIG. 8A shows that the pixels in long exposure image 702 are binned in the medium-to-high light intensity); and a controller configured to correct measurement signals of the plurality of pixels, wherein each of the plurality of pixels includes (¶¶0145, 0206, 0211) a photodetector (¶0147 and image sensor 418); and a pixel circuit configured to output a signal from the photodetector, and wherein the controller is configured to: acquire an unknown measured signal from one of the plurality of pixels (¶¶0046, 0092, 0153); and correct the signal of the one pixel with a correction coefficient which is based on ratios between values obtained from measured signals of the one pixel under a plurality of reference lights having different intensities (¶¶0148-0149, and note that after capturing the video data, the camera device combines first video data of the first subset of video data with second video data of the second subset of video data to generate an HDR frame. The first subset of the video data meets the one or more first predefined criteria, and determining whether the second subset of the video data meets the one or more second predefined criteria includes determining whether video data of the HDR frame meets one or more predefined HDR criteria. In some implementations, the HDR criteria include whether a current exposure ratio equals a minimum exposure ratio, whether a number of pixels in boundary bins (e.g., sigma bins representing the uppermost/lowermost 1%, 2%, or 5% of pixels) meets pixel count criteria, and/or whether an average light intensity is less than a light intensity target) and statistic values obtained from the values obtained from the measured signals of the plurality of pixels under a plurality of reference lights having different intensities (¶¶0142-0143, and note that FIGS. 8A-8B are example light intensity histograms for the images of FIGS. 7A-7B in accordance with some implementations. The histograms are graphs showing a number of pixels in respective images at each different intensity value found in that image. The light intensity histogram 802 in FIG. 8A shows that the pixels in long exposure image 702 are binned in the medium-to-high light intensity bins (also sometimes called sigma bins). The light intensity histogram 804 in FIG. 8B shows that the pixels in the short exposure image 704 are binned in the medium-to-low light intensity bins. Long exposure and short exposure statistics are obtained prior to fusing (e.g., as illustrated in FIG. 5A), including data regarding light intensity per pixel as illustrated in FIGS. 8A-8B. In some implementations, light intensity data for a particular color (e.g., red, green, or blue) is obtained and analyzed to determine whether to disable or exit HDR mode, including, when the camera device is a video camera, while the camera device is operating in an active or live video mode).

Bales is a similar or analogous system to the claimed invention, as evidenced by Bales's teaching that correction coefficients are derived from the error estimate and written into the appropriate correction coefficient registers, which would have prompted a predictable variation of Duran by applying Bales's known principle of correcting the unknown measured signal of the one pixel with a correction coefficient (¶0007, and note that the correction coefficients are measured continuously from unknown input signals; this relies on certain input signal statistics. The correction coefficient registers are set based on a previous estimate (e.g., the correction coefficient registers were set in the initial foreground calibration or in a subsequent background calibration). Background calibration includes a periodic computation of the residual error estimate, resulting in incremental updates to the correction coefficient registers.). In view of the motivations, such as to measure one or more calibration signals, and that the calibration coefficient computation circuits are configured to calculate updated calibration coefficient values based on calibration measurements from the calibration measurement circuits, thereby further improving an accuracy of the respective coefficient values, as disclosed in ¶0016 of Bales, it is necessary that the outputs obtained from the PDs are corrected in accordance with the variations in sensitivity; one of ordinary skill in the art would have implemented the claimed variation of the prior art system of Duran. Therefore, the claimed invention would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

As of Claim 2: Duran in view of Bales further teaches the correction coefficient is given by a linear combination of the ratios between the values obtained from the measured signals of the one pixel under the plurality of reference lights having different intensities (Duran ¶¶0142-0145) and the statistic values obtained from the values obtained from the measured signals of the plurality of pixels under the plurality of reference lights having different intensities (Duran ¶0145, and note that the histograms in FIGS. 8A-8B range from 0 to 255, the target light intensity is 128, the exposure ratio is 8, the short exposure average light intensity is 63, and the long exposure average light intensity is 180. In this example, adjusting the long exposure to correct the average (e.g., by the ratio of target light intensity to long exposure average light intensity (128/180)) results in saturation of pixels from the short exposure. Adjusting the short exposure by the short exposure average light intensity multiplied by the above ratio (63*128/180) results in a short exposure average light intensity of 44.8. Thus, the average pixels in the short exposure would be 44.8 multiplied by the exposure ratio (8), resulting in a value of 358, which is outside of the range of the histogram (indicating that these pixels would be saturated)).

Claims 3-4 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Duran et al. (US 20180343390 A1) in view of Bales (US 20220038110 A1), and further in view of Murakoshi (US 20040061081 A1).

As of Claim 3: Murakoshi is a similar or analogous system to the claimed invention, as evidenced by Murakoshi's teaching that output signal components, which are acquired from the photoelectric conversion devices during the image readout, are corrected with the sensitivity correction signal components, which would have prompted a predictable variation of Duran by applying Murakoshi's known principle that each of the statistic values obtained from measured signals of the plurality of pixels is a mean of values each obtained by subtracting a measured signal of a pixel in a dark state from the measured signal of the pixel (¶0080). In view of the motivations, such as thereby further providing an image having good image quality as disclosed in ¶008 of Murakoshi, it is necessary that the outputs obtained from the PDs are corrected in accordance with the variations in sensitivity; one of ordinary skill in the art would have implemented the claimed variation of the prior art system of Duran. Therefore, the claimed invention would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
As of Claim 4: Duran in view of Bales in view of Murakoshi further teaches the controller is configured to determine the correction coefficient based on the statistic values obtained from measured signals of the plurality of pixels under the plurality of reference lights having different intensities and the values obtained from measured signals of the one pixel under the plurality of reference lights having different intensities (Duran ¶¶0153-0154, and note that the camera device determines (1008) whether the long exposure light intensity data meets one or more first criteria. For example, the camera device determines whether the long exposure light intensity data meets the one or more first criteria using a camera module 444. In some implementations, determining whether the long exposure light intensity data meets the one or more first criteria includes determining whether a threshold number of pixels have respective light intensities above a particular light intensity threshold. In some implementations, determining whether the long exposure light intensity data meets the one or more first criteria includes determining whether an average light intensity for the long exposure is greater than a target light intensity).

As of Claim 7: Duran in view of Bales in view of Murakoshi further teaches the controller is configured to determine the correction coefficient based on the ratios under the plurality of reference lights having different intensities and the unknown measured signal of the one pixel (Duran ¶¶0145, 0153-0155, and note that the histograms in FIGS. 8A-8B range from 0 to 255, the target light intensity is 128, the exposure ratio is 8, the short exposure average light intensity is 63, and the long exposure average light intensity is 180. In this example, adjusting the long exposure to correct the average (e.g., by the ratio of target light intensity to long exposure average light intensity (128/180)) results in saturation of pixels from the short exposure. Adjusting the short exposure by the short exposure average light intensity multiplied by the above ratio (63*128/180) results in a short exposure average light intensity of 44.8. Thus, the average pixels in the short exposure would be 44.8 multiplied by the exposure ratio (8), resulting in a value of 358, which is outside of the range of the histogram (indicating that these pixels would be saturated)).

As of Claim 8: Duran in view of Bales in view of Murakoshi further teaches the correction coefficient f is expressed by the following formula: f = Σm km*<Rm>/Rm, where m represents an identifier of the plurality of reference lights having different intensities (Murakoshi ¶¶0087-0090, and note that in the correction signal component calculating means 36, the read-out stage sensitivity signal components H_n(p) are divided by the corresponding reference signal components H_r(p), and signal components H_n'(p) are obtained from the division processing. The signal components H_n'(p) are represented by the formula H_n'(p) = H_n(p)/H_r(p) and have a profile illustrated in FIG. 5. By way of example, the reciprocals of the signal components H_n'(p) may be taken as the sensitivity correction signal components. However, in this embodiment, unsharp masking processing is performed on the signal components H_n'(p) in order to remove locality change components with respect to the reference light source 28.); km represents a weight assigned to reference light m; Rm represents the value obtained from a measured signal of the one pixel under the reference light m; and <Rm> represents the statistic value over the plurality of pixels under the reference light m (Murakoshi ¶¶0088-0090).

Allowable Subject Matter

Claims 5, 6, and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

As of Claim 5: The prior art of record fails to teach or fairly suggest the limitations of claim 5, in combination with claims 1 and 4, that include "wherein, in determining the correction coefficient, the controller is configured to: calculate the statistic values over the plurality of pixels from values obtained after subtracting signals measured in a dark state from signals measured under the plurality of reference lights having different intensities; obtain first differences by subtracting a signal of the one pixel measured in a dark state from signals of the one pixel measured under the plurality of reference lights having different intensities; and calculate a linear combination of quotients obtained by dividing a statistic value by a first difference for each one of the plurality of reference lights and weight coefficients for each one of the plurality of reference lights, and wherein, in correcting an unknown measured signal of the one pixel, the controller is configured to: calculate a second difference by subtracting the signal of the one pixel measured in a dark state from the unknown measured signal of the one pixel; and calculate a product of the second difference and the linear combination."

As of Claim 6: Claim 6 depends from claim 5 and is objected to as allowable subject matter as well.

As of Claim 9: The prior art of record fails to teach or fairly suggest the limitations of claim 9, in combination with claims 1 and 8, that include "wherein a value So^ obtained by correcting the unknown measured signal is expressed by the following formula: So^ = (Sx - Rd)*f, where Sx represents the unknown measured signal and Rd represents a measured signal of the one pixel in a dark state, wherein the value Rm obtained from a measured signal of the one pixel under the reference light m is expressed by the following formula: Rm = RIm - Rd, where RIm represents the measured signal of the one pixel under the reference light m, and wherein the statistic value <Rm> under the reference light m is a mean of values over the plurality of pixels obtained after subtracting measured signals in a dark state from measured signals under the reference light m for each pixel."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEKONNEN D DAGNEW, whose telephone number is (571) 270-5092. The examiner can normally be reached 8:00 AM - 5:00 PM, M-Th. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached at 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MEKONNEN D DAGNEW/
Primary Examiner, Art Unit 2638
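The correction scheme the claims recite can be sketched in a few lines of Python: subtract the dark-state signal, form per-light statistics over all pixels, build a weighted sum of statistic-to-pixel ratios, then scale the unknown signal. This is an illustrative reconstruction from the claim language only; the function names, example arrays, and equal weights are hypothetical, and the statistic value <Rm> is assumed to be the per-light mean over pixels, as claim 3 describes.

```python
import numpy as np

def correction_coefficients(ref_signals, dark, weights):
    """Per-pixel coefficients f = sum_m k_m * <R_m> / R_m (claims 5 and 8).

    ref_signals: (M, P) array, signals for M reference lights x P pixels
    dark:        (P,)  array, signals measured in a dark state
    weights:     (M,)  array, weight k_m for each reference light
    """
    R = ref_signals - dark                   # R_m = RI_m - R_d (claim 9)
    R_mean = R.mean(axis=1, keepdims=True)   # <R_m>: mean over pixels (claim 3)
    # Linear combination of quotients <R_m>/R_m with weights k_m (claim 5)
    return (weights[:, None] * R_mean / R).sum(axis=0)

def correct(unknown, dark, f):
    """Corrected signal So^ = (Sx - Rd) * f (claim 9)."""
    return (unknown - dark) * f

# Hypothetical example: 4 pixels measured under 2 reference lights.
# Pixel 1 is slightly more sensitive than average, pixel 2 less so.
ref = np.array([[10.0, 12.0, 8.0, 10.0],
                [20.0, 24.0, 16.0, 20.0]])
dark = np.array([1.0, 1.0, 1.0, 1.0])
f = correction_coefficients(ref, dark, weights=np.array([0.5, 0.5]))
corrected = correct(np.array([15.0, 17.8, 12.2, 15.0]), dark, f)
```

Note how a pixel whose dark-subtracted response matches the population mean gets a coefficient of exactly 1, while over- and under-sensitive pixels are scaled down and up respectively, which is the flat-field behavior the claims describe.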

Prosecution Timeline

Aug 03, 2023
Application Filed
Mar 07, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593143
SOLID-STATE IMAGING DEVICE
2y 5m to grant • Granted Mar 31, 2026
Patent 12586142
IMAGE CAPTURING METHOD AND DISPLAY METHOD FOR RECOGNIZING A RELATIONSHIP AMONG A PLURALITY OF IMAGES DISPLAYED ON A DISPLAY SCREEN
2y 5m to grant • Granted Mar 24, 2026
Patent 12585173
LENS BARREL
2y 5m to grant • Granted Mar 24, 2026
Patent 12581022
DATA CREATION METHOD AND DATA CREATION PROGRAM
2y 5m to grant • Granted Mar 17, 2026
Patent 12574662
THRESHOLD VALUE DETERMINATION METHOD, THRESHOLD VALUE DETERMINATION PROGRAM, THRESHOLD VALUE DETERMINATION DEVICE, PHOTON NUMBER IDENTIFICATION SYSTEM, PHOTON NUMBER IDENTIFICATION METHOD, AND PHOTON NUMBER IDENTIFICATION PROCESSING PROGRAM
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
99%
With Interview (+15.8%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 728 resolved cases by this examiner. Grant probability derived from career allow rate.
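The projection figures above reconcile under a simple additive model (an assumption on our part; the tool does not state its method): the with-interview probability appears to be the 83% career allow rate plus the +15.8% interview lift, capped at 100% and rounded for display.

```python
def with_interview(base_rate: float, lift: float) -> float:
    """Hypothetical additive model: base grant probability plus
    interview lift, capped at 1.0 (100%)."""
    return min(base_rate + lift, 1.0)

# 83% career allow rate + 15.8% interview lift = 98.8%, shown as 99%
p = with_interview(0.83, 0.158)
displayed = round(p * 100)
```

If the tool instead applied the lift multiplicatively or conditionally, the displayed number would differ; the additive reading is simply the one consistent with the figures shown on this page.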
