Prosecution Insights
Last updated: April 19, 2026
Application No. 18/014,467

APPARATUSES, SYSTEMS, AND METHODS FOR DISCOUNTING AN OBJECT WHILE MANAGING AUTO-EXPOSURE OF IMAGE FRAMES DEPICTING THE OBJECT

Status: Non-Final OA (§103)
Filed: Jan 04, 2023
Examiner: GARCES-RIVERA, ANGEL L
Art Unit: 2637
Tech Center: 2600 — Communications
Assignee: Intuitive Surgical Operations, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 5m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 82% (above average; 510 granted / 625 resolved; +19.6% vs TC avg)
Interview Lift: +10.3% (moderate), based on resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 25 applications currently pending
Career History: 650 total applications across all art units

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 36.3% (-3.7% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 625 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to the Amendment filed on 02/20/2026.

Status of the Claims: Claims 1, 5-6, 8-9, 14, 29, 34, 36, 40-41 and 43 have been amended. Claims 3-4, 7, 11-13, 17, 21-28, 30-33, 35, 37-39, 42 and 44 have been canceled. Claims 1-2, 5-6, 8-10, 14-16, 18-20, 29, 34, 36, 40-41 and 43 are pending in this Office Action.

Response to Arguments

The indicated allowability of claim 5 is withdrawn in view of the newly discovered references to Tsuji and Zobel. Rejections based on the newly cited references follow.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over US 2014/0185875 to Tsuji (hereinafter Tsuji) in view of US 2018/0020156 to Zobel (hereinafter Zobel).

Regarding independent claim 1, Tsuji teaches an apparatus (image pickup apparatus 100, see Fig. 1) comprising: one or more processors (control unit 105 comprising a central processing unit, see Fig. 1 and par. [0024]); and memory storing executable instructions that, when executed by the one or more processors, cause the apparatus to (the control unit 105 develops a program code stored in a Read Only Memory (ROM, not shown) into a work area in a Random Access Memory (RAM, not shown) and sequentially executes it, thereby controlling each functional unit of the image pickup apparatus 100, see par. [0024]): identify, in an image frame captured by an image capture system, a plurality of pixel units included within an object region corresponding to a depiction of an object portrayed in the image frame (Fig. 4 exhibits wherein secondary faces are detected); determine a frame auto-exposure value for the image frame by discounting the plurality of pixel units included within the object region (Fig. 3 exhibits step S307, where auto-exposure is performed based on the main face and not the other faces, see also par. [0033]); and update, based on the frame auto-exposure value, one or more auto-exposure parameters for use by the image capture system to capture an additional image frame (it is clear that the parameters are updated to capture a next image).

Tsuji does not disclose "the identifying the plurality of pixel units included within the object region comprising comparing chrominance characteristics of the pixel units included in the image frame to a chrominance characteristic associated with the object." Tsuji paragraph [0027] states that face detection is performed using "flesh color information," but does not explicitly disclose chrominance information.

Zobel teaches using chrominance information to set an area having chrominance values within a skin tone range as a face area (see par. [0028]). It would have been obvious to a person having ordinary skill in the art before the effective filing date to substitute the face detection method of Tsuji with the face detection method of Zobel to achieve the predictable result of detecting faces in an image. Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 9, Tsuji in view of Zobel teaches the apparatus of claim 1, wherein: the instructions, when executed by the one or more processors, cause the apparatus to determine a color gamut for environmental imagery of a scene depicted in the image frame (face detection unit 109 is executed, which determines flesh color information for the color gamut of the scene, see par. [0027]); the identifying the plurality of pixel units included within the object region is based on one or more local characteristics associated with pixel units included in the image frame (Fig. 3 step S302 performs face detection and Fig. 4 exhibits wherein secondary faces are detected); the one or more local characteristics associated with the pixel units included in the image frame include chrominance characteristics of the pixel units included in the image frame (chrominance information sets an area having chrominance values within a skin tone range as a face area, see Zobel par. [0028]); and the identifying the plurality of pixel units included within the object region further includes determining whether the chrominance characteristics of the pixel units included in the image frame are included within the color gamut for the environmental imagery of the scene (chrominance information sets an area having chrominance values within a skin tone range as a face area, see Zobel par. [0028]).
Claims 1, 5, 8, 14, 18, 20, 29, 34, 36, 40 and 43 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0099632 to Molholm (hereinafter Molholm) in view of US 5,414,487 to Iwasaki (hereinafter Iwasaki) and Zobel.

Regarding independent claim 1, Molholm teaches an apparatus comprising: one or more processors (controller 160 comprising processors, see Fig. 1 and par. [0042]); and memory storing executable instructions that, when executed by the one or more processors (memory storing program instructions executable to implement the operation, see par. [0013]), cause the apparatus to: identify, in an image frame captured by an image capture system, a plurality of pixel units included within an object region corresponding to a depiction of an object portrayed in the image frame (a region of interest (ROI) is identified from gaze information, see par. [0052]).

Molholm's selective auto-exposure system teaches that the ROI in the scene remains as auto-exposed by the camera originally and the rest of the image outside the ROI is compensated to a scene exposure (see par. [0053]). But Molholm fails to clearly specify "determine a frame auto-exposure value for the image frame by discounting the plurality of pixel units included within the object region in the image frame, and update, based on the frame auto-exposure value, one or more auto-exposure parameters for use by the image capture system to capture an additional image frame."

However, Iwasaki teaches a light metering calculation apparatus used in an auto-exposure camera (see abstract) that determines a frame auto-exposure value for the image frame by discounting the plurality of pixel units included within the object region in the image frame (object images are formed by grouping adjacent cells with similar outputs and an exposure amount is determined from the groups; a bright, high-position group is discarded from the exposure object, see column 4 lines 35-55), and updates, based on the frame auto-exposure value, one or more auto-exposure parameters for use by the image capture system to capture an additional image frame (the exposure amount is determined in correspondence with the lower objects, see step S81 of Fig. 4, column 4 lines 52-55 and column 5 lines 35-47).

The references are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned, because they relate to auto-exposure control. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught by Molholm by combining the teachings of Iwasaki. One of ordinary skill in the art would have been motivated to make this modification in order to properly calculate exposure when a plurality of objects is present, as suggested by Iwasaki (see col. 1 lines 49-52).

Molholm in view of Iwasaki fails to clearly specify "the identifying the plurality of pixel units included within the object region comprising comparing chrominance characteristics of the pixel units included in the image frame to a chrominance characteristic associated with the object." However, Zobel teaches using chrominance information to set an area having chrominance values within a skin tone range as a face area (see par. [0028]). It would have been obvious to a person having ordinary skill in the art before the effective filing date to substitute the grouping method of the combination of Molholm in view of Iwasaki with the face detection method of Zobel to achieve the predictable result of detecting faces in an image. Therefore, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Regarding claim 5, Molholm in view of Iwasaki and Zobel teaches the apparatus of claim 1, wherein the identifying the plurality of pixel units included within the object region is further based on one or more other characteristics of the pixel units included in the image frame (gaze information is obtained from a gaze tracking system, which tracks and selects the ROI, see Molholm par. [0052]).

Regarding claim 8, Molholm in view of Iwasaki and Zobel teaches the apparatus of claim 1, wherein the identifying the plurality of pixel units included within the object region is further based on luminance characteristics of the pixel units included in the image frame (a grouping device groups adjacent cells having similar brightness values among a plurality of light-receiving cells, see Iwasaki Fig. 3B and column 4 lines 59-61).

Regarding claim 14, Molholm in view of Iwasaki and Zobel teaches the apparatus of claim 1, wherein: the identifying the plurality of pixel units included within the object region is based on one or more global characteristics associated with the image frame (uses gaze information, see Molholm par. [0052]); and the one or more global characteristics associated with the image frame include an object position characteristic determined based on object tracking data received from an object tracking system that tracks a position of the object within a scene depicted in the image frame (gaze information is obtained from a gaze tracking system, which tracks and selects the ROI, see Molholm par. [0052]).

Regarding claim 18, Molholm in view of Iwasaki and Zobel teaches the apparatus of claim 1, wherein: the instructions, when executed by the one or more processors, cause the apparatus to: determine a frame auto-exposure target for the image frame by discounting the plurality of pixel units included within the object region in the image frame (object images are formed by grouping adjacent cells with similar outputs and an exposure amount is determined from the groups; a bright, high-position group is discarded from the exposure object, see Iwasaki column 4 lines 35-55); and determine, based on the frame auto-exposure value and the frame auto-exposure target, a frame auto-exposure gain (in the Fig. 2 pipeline, the frame from the camera is fed to the sensor gain 252, the ROI stats are also fed, and the auto-exposure information and the ambient lighting information are fed to the exposure compensation 282, see Molholm par. [0046] and Fig. 2); and the updating the one or more auto-exposure parameters includes updating the one or more auto-exposure parameters based on the frame auto-exposure gain (the sensor gain 252 feeds the exposure compensation 272, see Fig. 2 and Molholm par. [0046]).

Regarding claim 20, Molholm in view of Iwasaki and Zobel teaches the apparatus of claim 1, wherein the one or more auto-exposure parameters include one or more of: an exposure time parameter; a shutter aperture parameter (exposure controller 54 drives an aperture 55 and a shutter 56, see Iwasaki column 12 lines 55-59); an illumination intensity parameter; or a luminance gain parameter (uses integration time and gain parameters, see Molholm par. [0046]).

Regarding claims 29 and 34, the claims are drawn to the non-transitory computer-readable storage medium used by the corresponding apparatus in claims 1 and 5 and are rejected for the same reasons used above. Molholm uses a non-transitory computer-readable storage medium as an embodiment, see Molholm par. [0070].

Regarding claims 36, 40 and 43, the claims are drawn to the method used by the corresponding apparatus in claims 1, 5 and 8 and are rejected for the same reasons used above. Molholm uses a method as an embodiment, see Molholm par. [0071].

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Molholm in view of Iwasaki and Zobel as applied to claim 1 above, and further in view of IDS-provided reference US 2005/0057666 to Hu et al. (hereinafter Hu).

Regarding claim 2, Molholm in view of Iwasaki and Zobel teaches the apparatus of claim 1. But Molholm in view of Iwasaki and Zobel fails to clearly specify "wherein: the instructions, when executed by the one or more processors, cause the apparatus to assign, to pixel units included in the image frame, respective weight values indicative of respective confidence levels that the pixel units included in the image frame are also included in the depiction of the object; and the identifying the plurality of pixel units included within the object region is based on the respective weight values assigned to the pixel units included in the image frame."

However, Hu teaches a region-based auto-exposure control apparatus "wherein: the instructions, when executed by the one or more processors, cause the apparatus to assign, to pixel units included in the image frame, respective weight values indicative of respective confidence levels that the pixel units included in the image frame are also included in the depiction of the object (weights are assigned to the pixel tiles of the region of interest, see par. [0021]); and the identifying the plurality of pixel units included within the object region is based on the respective weight values assigned to the pixel units included in the image frame (the higher the weight, the more interest there is in a particular tile, see par. [0022] and Fig. 3)."

The references are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned, because they relate to exposure control. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the above system, as taught by Molholm in view of Iwasaki and Zobel, by incorporating the teachings of Hu.
One of ordinary skill in the art would have been motivated to make this modification in order to perform automatic exposure and gain control while minimizing oscillations as well as providing a good response time, as suggested by Hu (see par. [0006]).

Claims 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Molholm in view of Iwasaki and Zobel as applied to claim 14 above, and further in view of IDS-provided reference US 2014/0046341 to DiCarlo (hereinafter DiCarlo).

Regarding claim 15, Molholm in view of Iwasaki and Zobel teaches the apparatus of claim 14. But Molholm in view of Iwasaki and Zobel fails to clearly specify "wherein: the object comprises an instrument controlled by a robotic arm; and the object tracking system tracks the position of the object based on kinematic data associated with movements of the robotic arm." However, DiCarlo teaches an auto-exposure camera of a surgical robot "wherein: the object comprises an instrument controlled by a robotic arm (end effectors 132 and 130 are controlled by surgical instrument 128, see par. [0026]); and the object tracking system tracks the position of the object based on kinematic data associated with movements of the robotic arm (an ROI is defined by the position of the end effectors 132 and 130, see par. [0030])."

The references are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned, because they relate to exposure control. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the above system, as taught by Molholm in view of Iwasaki and Zobel, by combining it with the system taught by DiCarlo. One of ordinary skill in the art would have been motivated to make this modification in order to continually adjust the ROI when the surgical instruments are moved around the field of view by the surgeon, as suggested by DiCarlo (see par. [0019]).

Regarding claim 16, Molholm in view of Iwasaki and Zobel teaches the apparatus of claim 14. But Molholm in view of Iwasaki fails to clearly specify "wherein the object tracking system tracks the position of the object based on computer vision techniques applied to image frames of an image frame sequence that includes the image frame." However, DiCarlo teaches an auto-exposure camera of a surgical robot "wherein the object tracking system tracks the position of the object based on computer vision techniques applied to image frames of an image frame sequence that includes the image frame (the ROI is continually adjusted when the surgical instruments are moved around the field of view by the surgeon, see DiCarlo par. [0019])."

The references are analogous art because they are from the same field of endeavor and/or are reasonably pertinent to the particular problem with which the applicant was concerned, because they relate to exposure control. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the above system, as taught by Molholm in view of Iwasaki and Zobel, by combining it with the system taught by DiCarlo. One of ordinary skill in the art would have been motivated to make this modification in order to continually adjust the ROI when the surgical instruments are moved around the field of view by the surgeon, as suggested by DiCarlo (see par. [0019]).

Allowable Subject Matter

Claims 6, 10, 19 and 41 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGEL L GARCES-RIVERA, whose telephone number is (571) 270-7268. The examiner can normally be reached Mon-Fri 9AM-5PM ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sinh Tran, can be reached at 571-727-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANGEL L GARCES-RIVERA/
Examiner, Art Unit 2637

/SINH TRAN/
Supervisory Patent Examiner, Art Unit 2637
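The limitation every rejection above turns on (determining a frame auto-exposure value by discounting the pixel units inside an object region) can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the function name, the mean-luminance metric, and the hard binary mask are not taken from the application or the cited references, each of which defines its own metering scheme.

```python
def frame_auto_exposure_value(luma, object_mask):
    """Mean-luminance auto-exposure value for a frame, discounting
    (excluding) pixel units that fall inside the object region.

    `luma` and `object_mask` are same-shaped 2-D lists. Names and the
    mean-luminance metric are hypothetical, not from the application.
    """
    total, count = 0.0, 0
    for row_vals, row_mask in zip(luma, object_mask):
        for value, in_object in zip(row_vals, row_mask):
            if not in_object:  # discount: skip object-region pixels
                total += value
                count += 1
    if count == 0:  # degenerate case: whole frame masked; fall back to all pixels
        count = sum(len(r) for r in luma)
        total = sum(sum(r) for r in luma)
    return total / count

# A 4x4 frame with a bright object (e.g. an instrument) in the top-left 2x2 block.
luma = [[250, 250, 100, 100],
        [250, 250, 100, 100],
        [100, 100, 100, 100],
        [100, 100, 100, 100]]
mask = [[True, True, False, False],
        [True, True, False, False],
        [False, False, False, False],
        [False, False, False, False]]
print(frame_auto_exposure_value(luma, mask))  # 100.0: the bright object no longer skews exposure
```

Without the discount, the frame mean would be 137.5 and the auto-exposure loop would darken the background to compensate for the bright object; discounting the object region keeps the exposure driven by the rest of the scene, which is the effect the claims recite.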

Prosecution Timeline

Jan 04, 2023: Application Filed
Apr 05, 2025: Non-Final Rejection — §103
Jun 24, 2025: Applicant Interview (Telephonic)
Jun 24, 2025: Examiner Interview Summary
Jul 11, 2025: Response Filed
Nov 15, 2025: Non-Final Rejection — §103
Feb 20, 2026: Response Filed
Mar 18, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601329: OPTICAL SYSTEM (2y 5m to grant; granted Apr 14, 2026)
Patent 12581198: CONTROL APPARATUS, APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 17, 2026)
Patent 12581186: IMAGE PICKUP APPARATUS, CONTROL METHOD FOR IMAGE PICKUP APPARATUS, AND STORAGE MEDIUM CAPABLE OF EASILY RETRIEVING DESIRED-STATE IMAGE AND SOUND PORTIONS FROM IMAGE AND SOUND AFTER SPECIFIC SOUND IS GENERATED THROUGH ATTRIBUTE INFORMATION ADDED TO IMAGE AND SOUND (2y 5m to grant; granted Mar 17, 2026)
Patent 12542976: CONTROL DEVICE, IMAGING APPARATUS, CONTROL METHOD, AND CONTROL PROGRAM (2y 5m to grant; granted Feb 03, 2026)
Patent 12483798: ELECTRONIC DEVICE INCLUDING CAMERA AND OPERATION METHOD OF ELECTRONIC DEVICE (2y 5m to grant; granted Nov 25, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 82%
With Interview: 92% (+10.3%)
Median Time to Grant: 2y 5m
PTA Risk: High
Based on 625 resolved cases by this examiner. Grant probability derived from career allow rate.
