Prosecution Insights
Last updated: April 19, 2026
Application No. 18/294,093

Exposure Control for Image-Capture

Final Rejection — §102, §103
Filed: Jan 31, 2024
Examiner: GARCES-RIVERA, ANGEL L
Art Unit: 2637
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 5m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 82% — above average (510 granted / 625 resolved; +19.6% vs TC avg)
Interview Lift: +10.3% among resolved cases with interview (moderate, ~+10% lift)
Avg Prosecution: 2y 5m (typical timeline); 25 applications currently pending
Total Applications: 650 across all art units (career history)

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 36.3% (-3.7% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 625 resolved cases
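
The panel does not say how these percentages are computed. One plausible reading — an assumption, not documented anywhere on this page — is that each figure is the share of this examiner's resolved cases that drew at least one rejection under that statute. A minimal sketch of that computation over toy records (all names hypothetical):

```python
# Assumed metric: fraction of resolved cases with at least one rejection
# under each statute. Toy data only; not the dashboard's actual pipeline.
from collections import Counter

def statute_rates(cases: list[set[str]]) -> dict[str, float]:
    """Fraction of cases citing each statute at least once."""
    hits = Counter(statute for statutes in cases for statute in statutes)
    return {s: n / len(cases) for s, n in hits.items()}

sample = [{"103"}, {"102", "103"}, {"101"}, set()]  # toy resolved cases
print(statute_rates(sample))  # {'103': 0.5, '102': 0.25, '101': 0.25}
```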

Office Action

Rejection bases: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to the Amendment filed on 11/03/2025. Status of the claims: Claims 1 and 15 have been amended. Claims 16-17 have been newly added. Claims 1-17 are pending in this Office Action.

Response to Arguments

Applicant's arguments are deemed moot since they are directed to the newly added claim limitations, which were not previously presented, rather than to the previously rejected limitations. The newly added limitations are addressed below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 5-6, and 10-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by IDS-provided reference US 2008/0219654 to Border et al. (hereinafter Border).

Regarding independent claim 1, Border teaches a method comprising: determining, based on captured sensor data representative of a scene (auto exposure detector 46, see Fig. 1), a likelihood of exposure-related defects in the scene to be captured by multiple image-capture devices, the likelihood of exposure-related defects including a first likelihood of blur defects and a second likelihood of high-noise defects (the primary capture stage is set for a relatively long exposure so that the digital noise in the image is low, but any motion present either from movement of the camera or from movement of objects in the scene results in motion blur; simultaneously, the secondary capture stage is set for a relatively fast exposure so that digital noise in the image is higher, but the motion blur is less, see par. [0150]); determining, based on the first likelihood, to apply a first exposure time to decrease the blur defect (the secondary capture stage is set for a relatively fast exposure so that digital noise in the image is higher, but the motion blur is less, see par. [0150]); determining, based on the second likelihood, to apply a second exposure time, the second exposure time longer than the first exposure time, to decrease the high-noise defect (the primary capture stage is set for a relatively long exposure so that the digital noise in the image is low, see par. [0150]); causing a first image-capture device of the multiple image-capture devices to capture a first image of the scene using the first exposure time (second image stage 2 with zoom lens 4 and image sensor 14, see Fig. 1, captures using a second exposure time, see par. [0150]) and a second image-capture device of the multiple image-capture devices to capture a second image of the scene using the second exposure time (image stage 1 with zoom lens 2 and image sensor 12, see Fig. 1, captures using a first exposure time, see par. [0150]); and providing the first and second image captures to an image-merging module to create a single image from the first and second image captures (a modified image is then created by replacing portions of the primary image with portions of the secondary image, see par. [0150]).

Regarding claim 2, Border teaches the method described in claim 1, wherein one or more additional image-capture devices are used to capture one or more additional image captures of the scene, and wherein providing the first and second image captures provides the additional image captures to the image-merging module (step 276, continuously capture video, see Fig. 5).

Regarding claim 5, Border teaches the method described in claim 1, wherein the first and second image captures are captured at a same brightness and wherein the brightness is defined by a sensor gain multiplied by an exposure time (the gain of the secondary image is increased so the average pixel values in the secondary image match those of the primary image, see par. [0150]).

Regarding claim 6, Border teaches the method described in claim 1, wherein the sensor data includes non-imaging data collected from a radar system usable to determine movement in the scene to be captured (camera GPS and electronic compass provide camera pointing direction to determine scene movement, see par. [0079]).

Regarding claim 10, Border teaches the method described in claim 1, wherein the sensor data is imaging data collected by the image-capture device (can also use image data to form a range map to enable motion tracking, see par. [0077]).

Regarding claim 11, Border teaches the method described in claim 1, further comprising determining an object of focus based on the sensor data and further comprising using the image-merging module to create the single image of the scene by incorporating the first image capture for the object of focus and incorporating the second image capture for a remaining background portion of the scene (can also use image data to enable object extraction, see par. [0077]; a modified image is then created by replacing portions of the primary image with portions of the secondary image, see pars. [0120, 0150]).

Regarding claim 12, Border teaches the method described in claim 11, wherein the second image capture is incorporated to create a motion scene in the background portion, the motion scene in the background portion being a blurred image-capture indicating motion within the scene (different augmentations or modifications are contemplated with the primary and secondary image, including video or a series of images, hence conveying motion, see pars. [0144-0146]).

Regarding claim 13, Border teaches the method described in claim 1, wherein the first and second image captures are multi-frame image captures, and the single image created by the image-merging module is a multi-frame image, the multi-frame image including multiple single-frame image captures captured in succession (different augmentations or modifications are contemplated with the primary and secondary image, including video or a series of images, hence multi-frame, see pars. [0144-0146]).
Regarding claim 14, Border teaches the method described in claim 1, further comprising displaying the single image created from the image-merging module (comprises a display 70 to display captured images, see Fig. 1 and par. [0127]).

Regarding independent claim 15, the claim is drawn to the apparatus corresponding to the method claimed in claim 1 and is rejected for the same reasons given above.

Regarding claim 16, Border teaches the method described in claim 1, further comprising: determining, based on the first exposure time and the second exposure time, a first sensor gain and a second sensor gain (determines gains for the objects of the different sensors, see par. [0120]), wherein the first sensor gain is greater than the second sensor gain (the gain of the secondary image is increased so the average pixel values in the secondary image match those of the primary image, see par. [0150]); and causing a first image-capture device of the multiple image-capture devices to capture a first image of the scene using the first exposure time and the first sensor gain and a second image-capture device of the multiple image-capture devices to capture a second image of the scene using the second exposure time and the second sensor gain (improve the dynamic range within images by applying gain adjustments to objects as a whole, see par. [0117]).

Regarding independent claim 17, the claim is drawn to the non-transitory computer-readable storage medium used by the corresponding method of claim 1 and is rejected for the same reasons given above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 3-4 are rejected under 35 U.S.C. 103 as being unpatentable over Border in view of US 2021/0374931 to Kumar et al. (hereinafter Kumar).

Regarding claim 3, Border discloses the claimed invention except for "wherein determining the likelihood of the exposure-related defects is determined, at least partially, through machine learning based on previous image captures". However, Kumar teaches "wherein determining the likelihood of the exposure-related defects is determined, at least partially, through machine learning based on previous image captures" (systems, methods, and computer storage media for detecting and classifying an exposure defect in an image using neural networks, see abstract). The references are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned, because they relate to exposure defects in digital cameras. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by Border by incorporating the teachings of Kumar.
One of ordinary skill in the art would have been motivated to make this modification in order to efficiently and accurately detect and classify exposure defects in images using a neural network, as suggested by Kumar (see par. [0003]).

Regarding claim 4, Border discloses the claimed invention except for "wherein determining the first or second exposure time is determined, at least partially, through machine learning based on previous image-captures captured using different exposure times". However, Kumar teaches "wherein determining the first or second exposure time is determined, at least partially, through machine learning based on previous image-captures captured using different exposure times" (the neural network(s) may be trained and used to predict exposure levels (e.g., overexposure, underexposure, good exposure) for each digital image, see par. [0019]). The references are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned, because they relate to exposure determination in digital cameras. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by Border by incorporating the teachings of Kumar. One of ordinary skill in the art would have been motivated to make this modification in order to efficiently and accurately detect and classify exposure levels in images using a neural network, as suggested by Kumar (see par. [0019]).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Border in view of US 2022/0366584 to CHOI et al. (hereinafter CHOI).

Regarding claim 7, Border teaches the method described in claim 1 except for "wherein the sensor data includes non-imaging data collected from a flicker sensor usable to determine a banding defect in the scene to be captured". However, CHOI teaches "wherein the sensor data includes non-imaging data collected from a flicker sensor usable to determine a banding defect in the scene to be captured" (electronic device comprising multiple cameras 221, 222, 223 and 224, and a flicker sensor 310 that collects non-imaging data, see pars. [0003, 0061, 0066] and Fig. 3). The references are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned, because they relate to image-capturing devices with multiple cameras. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by Border by incorporating the teachings of CHOI. One of ordinary skill in the art would have been motivated to make this modification in order to determine the best camera to capture with based on the flicker detection, as suggested by CHOI (see par. [0203]).

Claims 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over Border in view of CHOI as applied to claim 7 above, and further in view of IDS-provided reference US 2021/0029290 to Okuike (hereinafter Okuike).
Regarding claim 8, Border in view of CHOI teaches the method described in claim 7 except for "wherein causing the second image-capture device to capture the second image at the second exposure time causes the second exposure time to be greater than a time associated with a frequency of flickering of light within the scene to be captured, the frequency collected by the flicker sensor". However, Okuike teaches an imaging device comprising two or more imaging devices 10 and 20 (see Fig. 2) "wherein causing the second image-capture device to capture the second image at the second exposure time causes the second exposure time to be greater than a time associated with a frequency of flickering of light within the scene to be captured, the frequency collected by the flicker sensor" (causing the second imaging device 20 to capture at a second exposure at a time associated with the flicker frequency; see lighting from LED and imaging device 20 exposure in Figs. 5-7 and pars. [0132-0134]). The references are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned, because they relate to image-capturing devices with multiple cameras. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by Border in view of CHOI by incorporating the teachings of Okuike. One of ordinary skill in the art would have been motivated to make this modification in order to obtain an appropriate captured image that allows recognition of an object even when a flicker phenomenon is occurring, as suggested by Okuike (see par. [0010]).

Regarding claim 9, Border in view of CHOI and Okuike teaches the method described in claim 8, wherein the second exposure time is at least 8.33 milliseconds and the second image is a band-free image (the second imaging device 20 captures at a second exposure at a time associated with the flicker frequency of 120 Hz, hence 1/120 Hz = 8.33 milliseconds, see pars. [0132-0134]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGEL L GARCES-RIVERA, whose telephone number is (571) 270-7268. The examiner can normally be reached Mon-Fri 9AM-5PM ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sinh Tran, can be reached at 571-727-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANGEL L GARCES-RIVERA/
Examiner, Art Unit 2637

/SINH TRAN/
Supervisory Patent Examiner, Art Unit 2637
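
The claim 1 and claim 5 discussions above reduce to a simple tradeoff: a short exposure suppresses motion blur at the cost of noise, a long exposure suppresses noise at the cost of blur, and claim 5's brightness relation (brightness = sensor gain × exposure time) lets both captures land at the same brightness, with the short-exposure camera getting the larger gain (claim 16). A minimal sketch of that scheme, with all function names, thresholds, and values hypothetical rather than drawn from the application or Border:

```python
# Illustrative sketch of the dual-exposure scheme discussed above. A short
# exposure limits motion blur (at the cost of noise); a long exposure limits
# noise (at the cost of blur); gain is raised on the short frame so both
# captures share one brightness (gain x exposure). All names and thresholds
# are hypothetical, not from the application or Border.

def plan_dual_exposure(blur_likelihood: float,
                       noise_likelihood: float,
                       target_brightness: float):
    """Pick a (short, long) exposure pair and brightness-matching gains."""
    # Shorter exposure as blur risk rises (seconds, illustrative values).
    short_exposure = 1 / 500 if blur_likelihood > 0.5 else 1 / 125
    # Longer exposure as noise risk rises.
    long_exposure = 1 / 15 if noise_likelihood > 0.5 else 1 / 60

    # Claim 5's relation: brightness = sensor_gain * exposure_time, so the
    # short-exposure camera needs proportionally more gain (claim 16: the
    # first gain exceeds the second).
    short_gain = target_brightness / short_exposure
    long_gain = target_brightness / long_exposure
    return (short_exposure, short_gain), (long_exposure, long_gain)


(short_exp, short_gain), (long_exp, long_gain) = plan_dual_exposure(0.8, 0.7, 0.01)
# Equal brightness, different blur/noise tradeoffs:
assert abs(short_exp * short_gain - long_exp * long_gain) < 1e-9
```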
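
The 8.33 ms figure in the claim 9 discussion is just the flicker period of mains lighting: 60 Hz AC flickers at 120 Hz (twice per cycle), and an exposure spanning at least one full 1/120 s period integrates the same light in every sensor row, avoiding banding. A worked check (names illustrative):

```python
# Worked check of the banding arithmetic in the claim 8-9 discussion: an
# exposure covering a whole flicker period integrates equal light per row,
# so no bands appear. Function name is illustrative, not from Okuike.

def min_band_free_exposure(flicker_hz: float) -> float:
    """Shortest exposure (seconds) covering one full flicker period."""
    return 1.0 / flicker_hz

period_s = min_band_free_exposure(120.0)  # 60 Hz mains -> 120 Hz flicker
print(f"{period_s * 1000:.2f} ms")        # 8.33 ms, matching the rejection
```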

Prosecution Timeline

Jan 31, 2024
Application Filed
Jul 02, 2025
Non-Final Rejection — §102, §103
Nov 03, 2025
Response Filed
Feb 07, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12601329 — OPTICAL SYSTEM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12581198 — CONTROL APPARATUS, APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12581186 — IMAGE PICKUP APPARATUS, CONTROL METHOD FOR IMAGE PICKUP APPARATUS, AND STORAGE MEDIUM CAPABLE OF EASILY RETRIEVING DESIRED-STATE IMAGE AND SOUND PORTIONS FROM IMAGE AND SOUND AFTER SPECIFIC SOUND IS GENERATED THROUGH ATTRIBUTE INFORMATION ADDED TO IMAGE AND SOUND
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12542976 — CONTROL DEVICE, IMAGING APPARATUS, CONTROL METHOD, AND CONTROL PROGRAM
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12483798 — ELECTRONIC DEVICE INCLUDING CAMERA AND OPERATION METHOD OF ELECTRONIC DEVICE
Granted Nov 25, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 92% (+10.3%)
Median Time to Grant: 2y 5m
PTA Risk: Moderate
Based on 625 resolved cases by this examiner. Grant probability derived from career allow rate.
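
The headline numbers follow from the career counts shown earlier; the rounding and the additive interview lift sketched here are assumptions about how the dashboard derives them, not documented behavior:

```python
# How the projections reduce to the career counts above. The additive lift
# and display rounding are assumptions, not the dashboard's documented math.

granted, resolved = 510, 625
base_rate = granted / resolved               # 0.816 -> displayed as 82%
interview_lift = 0.103                       # +10.3% among interviewed cases
with_interview = base_rate + interview_lift  # 0.919 -> displayed as 92%
print(f"{base_rate:.0%} base, {with_interview:.0%} with interview")
```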
