Prosecution Insights
Last updated: April 19, 2026
Application No. 18/414,473

COMPOSITE IMAGE GENERATING DEVICE, METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM THEREOF

Final Rejection (§102 / §103)
Filed: Jan 17, 2024
Examiner: PATEL, SHIVANG I
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: HTC Corporation
OA Round: 2 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 4m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (above average; 309 granted / 415 resolved; +12.5% vs TC avg)
Interview Lift: +18.5% on resolved cases with interview
Avg Prosecution: 2y 4m (22 applications currently pending)
Total Applications: 437 (across all art units)

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§103: 57.8% (+17.8% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 415 resolved cases
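The headline figures above follow directly from the raw counts in this report; a minimal sketch reproducing them (the "+12.5% vs TC avg" delta is treated as percentage points, which is an assumption):

```python
# Reproduce the examiner's headline statistics from the raw counts above.
granted = 309            # career grants (from this report)
resolved = 415           # career resolved cases
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")   # ~74%

# Reading "+12.5% vs TC avg" as percentage points implies the TC 2600 average:
tc_avg = allow_rate - 0.125
print(f"Implied TC average: {tc_avg:.1%}")
```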

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments, see pages 15-17, filed 1/13/2026, with respect to the 35 U.S.C. §102 rejection of claims 1-20 have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection and the arguments were directed towards the claims as amended. Applicant has amended the claims and argues that the previously cited references do not teach the amended claim limitations. In response, the examiner points to newly cited Shah to teach the argued claim limitations.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 6-13, 15, 17-23 are rejected under 35 U.S.C.
103 as being unpatentable over Lou in view of Shah et al. (US 9,704,250 B1). Regarding claim 1, Lou discloses a composite image generating device ([0026] generate an HDR image by combining multiple images that captured with different exposure settings), comprising: a transceiver interface ([0052] the image-processing system 100 can include one or more wireless transceivers for wireless communications, such as cellular network communications, 802.11 wi-fi communications, wireless local area network (WLAN) communications, or some combination thereof.); and a processor, being electrically connected to the transceiver interface, and being configured to perform operations ([0052] image-processing device 104 includes the image processor 124 (including the ISP 128 and the host processor 126)) comprising: determining an eye gaze position of a user ([0063] an object detected in the image of the field of view, a semantic analysis of the image of the field of view, a gaze of a viewer); determining a region of interest corresponding to a plurality of real-time images based on the eye gaze position ([0072] Image-processing device (or another system or component) may (e.g., using host processor) determine region of interest of field of view), wherein each of the real-time images corresponds to an exposure value and a resolution ([0078] data 422 may be smaller than the data representative of long-exposure image 206 of FIG. 2.
Because data 422 is smaller than data representative of long-exposure image 206, the required data throughput between image-capture device 404 and image-processing device 406 may be less than the required data throughput of an image processing system to generate HDR image); generating a composite image based on the region of interest and a region of non-interest corresponding to the real-time images ([0075] Image-processing device 406 may generate a composite image (e.g., an HDR image) including pixels from the partial-readout image captured by subset of photodiodes 410 (represented by data 422) and pixels from the full-readout image captured by photodiodes 408 (represented by data 420).), wherein the region of interest and the region of non-interest of the composite image are generated based on the real-time images corresponding to different of the resolutions ([0037] MFNR and/or super-resolution techniques can be applied to a particular region of interest to reduce noise (in the case of MFNR) and/or to increase resolution (in the case of super-resolution) in the region); and transmitting the composite image to a display device for a real-time display operation ([0096] Output device 622 can also be one or more of a number of output mechanisms known to those of skill in the art, such as a display, projector, television, speaker device, etc.), wherein the operation of determining the eye gaze position of the user further comprises the following operations: determining the eye gaze position of the user based on an eye tracking information of the user in a metering image ([0081] the computing device (or one or more components thereof) may determine the region of interest based on a gaze of a viewer. For example, image-processing device 406 may determine region of interest 418 based on a gaze of a viewer.),
wherein the operation of determining the region of interest further comprises the following operations: Shah discloses calculating a brightness value of the metering image at the eye gaze position (col 24, lines 5-10, utilize information such as user gaze direction to activate resources that are likely to be used in order to spread out the need for processing capacity); generating a plurality of target pixel positions corresponding to the brightness value in the metering image (col 8, lines 5-10, converting the RGB values of the captured image to brightness values B, calculating a single brightness value or statistic Bpre (e.g., center-weighted mean, median, or other heuristic) from the brightness values); and determining the region of interest based on the target pixel positions (col 2, lines 25-30, subject matter of interest in each depth plane can be detected automatically based on techniques such as facial detection, object detection, and/or a fill algorithm for determining a region of interest in a scene based on similarities in depth/distance, luminosity, and/or color.). Lou and Shah are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the composite image of Lou to include calculating a brightness value of the metering image at the eye gaze position; generating a plurality of target pixel positions corresponding to the brightness value in the metering image; and determining the region of interest based on the target pixel positions as described by Shah. The motivation for doing so would have been for computing a plurality of depth planes or a depth map for a scene to be captured by one or more cameras of a computing device and independently metering or calculating image statistics for each of the depth planes (Shah, col 1, lines 60-65).
Therefore, it would have been obvious to combine Lou and Shah to obtain the invention as specified in claim 1. Regarding claim 2, Lou discloses wherein the operation of generating the composite image further comprises the following operations: generating a plurality of first region pixel values corresponding to the region of interest based on the real-time images corresponding to a first resolution ([0083] computing device (or one or more components thereof) may receive second data representative of an image of a region of interest within the field of view from a subset of the array of photodiodes); generating a plurality of second region pixel values corresponding to the region of non-interest based on the real-time images corresponding to a second resolution ([0082] receiving the first data from the array of photodiodes of the image sensor (e.g., at block 502) may include receiving the first data from all of the photodiodes of the array of photodiodes); and synthesizing the first region pixel values and the second region pixel values to generate the composite image, wherein the first resolution is higher than the second resolution ([0086] first portion of the composite image corresponding to the region of interest may be associated with a first dynamic range and a second portion of the composite image outside of the region of interest is associated with a second dynamic range). Regarding claim 3, Lou discloses wherein the real-time images corresponding to the second resolution are generated by the following operations: calculating a brightness value of a metering image at the eye gaze position; selecting at least one second real-time image from the real-time images based on the brightness value of the metering image at the eye gaze position and the brightness value of the real-time images corresponding to the region of interest ([0022] Some camera settings can be configured for post-processing of an image, such as alterations to a contrast, brightness, saturation,
sharpness, levels, curves, and colors, among others); and performing a resolution reduction operation on the at least one second real-time image to generate the real-time images corresponding to the second resolution ([0023] Many cameras are equipped with an automatic exposure or “auto exposure” mode, where the exposure settings (e.g., exposure time, lens aperture, etc.) of the camera may be automatically adjusted to match, as closely as possible, the luminance of a scene or subject being photographed.). Regarding claim 5, Lou discloses wherein the real-time images are generated by the following operations: wherein the real-time images are generated by at least one image capturing device based on the exposure value and the resolution corresponding to each of the real-time images ([0024] a dynamic range refers to the range of luminosity between the brightest area and the darkest area of the scene or image frame.). Shah discloses calculating a brightness value of the metering image at the eye gaze position (col 8, lines 5-10, converting the RGB values of the captured image to brightness values B, calculating a single brightness value or statistic Bpre (e.g., center-weighted mean, median, or other heuristic) from the brightness values); and determining the exposure value and the resolution corresponding to each of the real-time images based on the brightness value of the metering image at the eye gaze position (col 8, lines 22-30, The brightness value or statistic Bpre can be calculated in various ways, such as via spot metering, partial area metering, center-weighted average metering, or multi-zone metering.). Lou and Shah are combinable because they are from the same field of invention.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the composite image of Lou to include calculating a brightness value of the metering image at the eye gaze position; and determining the exposure value and the resolution corresponding to each of the real-time images based on the brightness value of the metering image at the eye gaze position as described by Shah. The motivation for doing so would have been for computing a plurality of depth planes or a depth map for a scene to be captured by one or more cameras of a computing device and independently metering or calculating image statistics for each of the depth planes (Shah, col 1, lines 60-65). Therefore, it would have been obvious to combine Lou and Shah to obtain the invention as specified in claim 5. Regarding claim 7, Lou discloses wherein the operation of determining the region of interest further comprises the following operations: identifying a target object corresponding to the eye gaze position in the metering image ([0063] determine or identify the region of interest based on a location of the region of interest within the image of the field of view, a depth within a scene represented by the image of the field of view, a classification of the scene, an object detected in the image of the field of view); generating the target pixel positions corresponding to the target object in the metering image; and determining the region of interest based on the target pixel positions corresponding to the target object and the target pixel positions corresponding to the brightness value. Shah discloses generating the target pixel positions corresponding to the target object in the metering image (col 8, lines 5-10, converting the RGB values of the captured image to brightness values B, calculating a single brightness value or statistic Bpre (e.g., center-weighted mean, median, or other heuristic) from the brightness values); and
determining the region of interest based on the target pixel positions corresponding to the target object and the target pixel positions corresponding to the brightness value (col 2, lines 25-30, subject matter of interest in each depth plane can be detected automatically based on techniques such as facial detection, object detection, and/or a fill algorithm for determining a region of interest in a scene based on similarities in depth/distance, luminosity, and/or color.). Lou and Shah are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify composite image of Lou to include generating the target pixel positions corresponding to the target object in the metering image; and determining the region of interest based on the target pixel positions corresponding to the target object and the target pixel positions corresponding to the brightness value as described by Shah. The motivation for doing so would have been for computing a plurality of depth planes or a depth map for a scene to be captured by one or more cameras of a computing device and independently metering or calculating image statistics for each of the depth planes (Shah, col 1, lines 60-65). Therefore, it would have been obvious to combine Lou and Shah to obtain the invention as specified in claim 7. 
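For orientation, the region-of-interest step recited in claims 1 and 7 (a brightness value at the eye gaze position, target pixel positions sharing that brightness, and an ROI built from those positions) can be sketched as follows. This is an illustrative reconstruction by the editor, not code from Lou or Shah; all identifiers and the brightness band are hypothetical.

```python
import numpy as np

def region_of_interest(metering_image: np.ndarray,
                       gaze_rc: tuple[int, int],
                       band: float = 0.1) -> np.ndarray:
    """Mark target pixel positions whose brightness lies within a band
    around the brightness value at the eye-gaze position."""
    gaze_brightness = metering_image[gaze_rc]        # brightness at gaze
    return np.abs(metering_image - gaze_brightness) <= band

# Toy metering image: a bright patch on a dark background, gaze on the patch.
img = np.zeros((8, 8))
img[2:5, 2:5] = 0.9
roi = region_of_interest(img, (3, 3))
print(int(roi.sum()))   # the 9 pixels of the bright patch form the ROI
```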
Regarding claim 8, Lou discloses wherein the real-time images are generated by a single image capturing device, the single image capturing device corresponds to a plurality of first exposure parameters and a plurality of first resolution parameters, and the real-time images are generated by the following operations ([0065] generate composite image based on image and image may be less than that of an image processing system to generate HDR image based on short-exposure image and long exposure image): generating, by the single image capturing device, the real-time images based on the first exposure parameters and the first resolution parameters ([0065] The data representative of the image 304 may be less than data representative of full-readout images that are used by conventional HDR techniques to generate HDR images.). Regarding claim 9, Lou discloses wherein the real-time images are generated by a plurality of image capturing devices, each of the image capturing devices corresponds to a second exposure parameter and a second resolution parameter, and the real-time images are generated by the following operations ([0065] generate composite image based on image and image may be less than that of an image processing system to generate HDR image based on short-exposure image and long exposure image): generating, by the image capturing devices, the real-time images based on the second exposure parameters and the second resolution parameters ([0065] The data representative of the image 304 may be less than data representative of full-readout images that are used by conventional HDR techniques to generate HDR images.).
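The compositing recited in claims 2-3 (first-region pixel values at a first, higher resolution inside the ROI; second-region pixel values from a resolution-reduced image outside it) can be sketched as below. The block-averaging stand-in for the claimed resolution reduction and all names are the editor's hypothetical choices, not taken from either reference.

```python
import numpy as np

def reduce_resolution(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Toy resolution reduction: block-average, then upsample back to the
    original grid so the two images can be composited pixel-for-pixel."""
    h, w = image.shape
    blocks = image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return blocks.repeat(factor, axis=0).repeat(factor, axis=1)

def composite(full_res: np.ndarray, reduced: np.ndarray,
              roi_mask: np.ndarray) -> np.ndarray:
    """Take ROI pixels from the full-resolution image and non-ROI pixels
    from the resolution-reduced image, then synthesize one frame."""
    return np.where(roi_mask, full_res, reduced)

rng = np.random.default_rng(0)
frame = rng.random((8, 8))                     # stand-in real-time image
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True                          # region of interest
out = composite(frame, reduce_resolution(frame), mask)
print(np.array_equal(out[mask], frame[mask]))  # ROI kept at full resolution
```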
Regarding claim 10, Lou discloses wherein the display device is a head-mounted display, and the head-mounted display is worn by the user ([0091] The computing device can include any suitable device, such as a vehicle or a computing device of a vehicle, a mobile device (e.g., a mobile phone), a desktop computing device, a tablet computing device, a wearable device (e.g., a VR headset, an AR headset, AR glasses)). Regarding claim 11, Lou discloses a composite image generating method, being adapted for use in an electronic apparatus, and the composite image generating ([0026] generate an HDR image by combining multiple images that captured with different exposure settings) method comprises: determining an eye gaze position of a user ([0063] an object detected in the image of the field of view, a semantic analysis of the image of the field of view, a gaze of a viewer); determining a region of interest corresponding to a plurality of real-time images based on the eye gaze position ([0072] Image-processing device (or another system or component) may (e.g., using host processor) determine region of interest of field of view), wherein each of the real-time images corresponds to an exposure value and a resolution ([0078] data 422 may be smaller than the data representative of long-exposure image 206 of FIG. 2.
Because data 422 is smaller than data representative of long-exposure image 206, the required data throughput between image-capture device 404 and image-processing device 406 may be less than the required data throughput of an image processing system to generate HDR image); generating a composite image based on the region of interest and a region of non-interest corresponding to the real-time images ([0075] Image-processing device 406 may generate a composite image (e.g., an HDR image) including pixels from the partial-readout image captured by subset of photodiodes 410 (represented by data 422) and pixels from the full-readout image captured by photodiodes 408 (represented by data 420).), wherein the region of interest and the region of non-interest of the composite image are generated based on the real-time images corresponding to different of the resolutions ([0037] MFNR and/or super-resolution techniques can be applied to a particular region of interest to reduce noise (in the case of MFNR) and/or to increase resolution (in the case of super-resolution) in the region); and transmitting the composite image to a display device for a real-time display operation ([0096] Output device 622 can also be one or more of a number of output mechanisms known to those of skill in the art, such as a display, projector, television, speaker device, etc.), wherein the steps of determining the eye gaze position of the user further comprises the following steps: determining the eye gaze position of the user based on an eye tracking information of the user in a metering image ([0081] the computing device (or one or more components thereof) may determine the region of interest based on a gaze of a viewer. For example, image-processing device 406 may determine region of interest 418 based on a gaze of a viewer.),
wherein the steps of determining the region of interest further comprises the following steps: Shah discloses calculating a brightness value of the metering image at the eye gaze position (col 24, lines 5-10, utilize information such as user gaze direction to activate resources that are likely to be used in order to spread out the need for processing capacity); generating a plurality of target pixel positions corresponding to the brightness value in the metering image (col 8, lines 5-10, converting the RGB values of the captured image to brightness values B, calculating a single brightness value or statistic Bpre (e.g., center-weighted mean, median, or other heuristic) from the brightness values); and determining the region of interest based on the target pixel positions (col 2, lines 25-30, subject matter of interest in each depth plane can be detected automatically based on techniques such as facial detection, object detection, and/or a fill algorithm for determining a region of interest in a scene based on similarities in depth/distance, luminosity, and/or color.). Lou and Shah are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the composite image of Lou to include calculating a brightness value of the metering image at the eye gaze position; generating a plurality of target pixel positions corresponding to the brightness value in the metering image; and determining the region of interest based on the target pixel positions as described by Shah. The motivation for doing so would have been for computing a plurality of depth planes or a depth map for a scene to be captured by one or more cameras of a computing device and independently metering or calculating image statistics for each of the depth planes (Shah, col 1, lines 60-65).
Therefore, it would have been obvious to combine Lou and Shah to obtain the invention as specified in claim 11. Regarding claim 12, Lou discloses wherein the operation of generating the composite image further comprises the following steps: generating a plurality of first region pixel values corresponding to the region of interest based on the real-time images corresponding to a first resolution ([0083] computing device (or one or more components thereof) may receive second data representative of an image of a region of interest within the field of view from a subset of the array of photodiodes); generating a plurality of second region pixel values corresponding to the region of non-interest based on the real-time images corresponding to a second resolution ([0082] receiving the first data from the array of photodiodes of the image sensor (e.g., at block 502) may include receiving the first data from all of the photodiodes of the array of photodiodes); and synthesizing the first region pixel values and the second region pixel values to generate the composite image, wherein the first resolution is higher than the second resolution ([0086] first portion of the composite image corresponding to the region of interest may be associated with a first dynamic range and a second portion of the composite image outside of the region of interest is associated with a second dynamic range). Regarding claim 13, Lou discloses wherein the real-time images corresponding to the second resolution are generated by the following steps: calculating a brightness value of a metering image at the eye gaze position; selecting at least one second real-time image from the real-time images based on the brightness value of the metering image at the eye gaze position and the brightness value of the real-time images corresponding to the region of interest ([0022] Some camera settings can be configured for post-processing of an image, such as alterations to a contrast, brightness, saturation,
sharpness, levels, curves, and colors, among others); and performing a resolution reduction operation on the at least one second real-time image to generate the real-time images corresponding to the second resolution ([0023] Many cameras are equipped with an automatic exposure or “auto exposure” mode, where the exposure settings (e.g., exposure time, lens aperture, etc.) of the camera may be automatically adjusted to match, as closely as possible, the luminance of a scene or subject being photographed.). Regarding claim 15, Lou discloses wherein the real-time images are generated by the following operations: wherein the real-time images are generated by at least one image capturing device based on the exposure value and the resolution corresponding to each of the real-time images ([0024] a dynamic range refers to the range of luminosity between the brightest area and the darkest area of the scene or image frame.). Shah discloses calculating a brightness value of the metering image at the eye gaze position (col 8, lines 5-10, converting the RGB values of the captured image to brightness values B, calculating a single brightness value or statistic Bpre (e.g., center-weighted mean, median, or other heuristic) from the brightness values); and determining the exposure value and the resolution corresponding to each of the real-time images based on the brightness value of the metering image at the eye gaze position (col 8, lines 22-30, The brightness value or statistic Bpre can be calculated in various ways, such as via spot metering, partial area metering, center-weighted average metering, or multi-zone metering.). Lou and Shah are combinable because they are from the same field of invention.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the composite image of Lou to include calculating a brightness value of the metering image at the eye gaze position; and determining the exposure value and the resolution corresponding to each of the real-time images based on the brightness value of the metering image at the eye gaze position as described by Shah. The motivation for doing so would have been for computing a plurality of depth planes or a depth map for a scene to be captured by one or more cameras of a computing device and independently metering or calculating image statistics for each of the depth planes (Shah, col 1, lines 60-65). Therefore, it would have been obvious to combine Lou and Shah to obtain the invention as specified in claim 15. Regarding claim 17, Lou discloses wherein the operation of determining the region of interest further comprises the following operations: identifying a target object corresponding to the eye gaze position in the metering image ([0063] determine or identify the region of interest based on a location of the region of interest within the image of the field of view, a depth within a scene represented by the image of the field of view, a classification of the scene, an object detected in the image of the field of view); generating the target pixel positions corresponding to the target object in the metering image; and determining the region of interest based on the target pixel positions corresponding to the target object and the target pixel positions corresponding to the brightness value. Shah discloses generating the target pixel positions corresponding to the target object in the metering image (col 8, lines 5-10, converting the RGB values of the captured image to brightness values B, calculating a single brightness value or statistic Bpre (e.g., center-weighted mean, median, or other heuristic) from the brightness values); and
determining the region of interest based on the target pixel positions corresponding to the target object and the target pixel positions corresponding to the brightness value (col 2, lines 25-30, subject matter of interest in each depth plane can be detected automatically based on techniques such as facial detection, object detection, and/or a fill algorithm for determining a region of interest in a scene based on similarities in depth/distance, luminosity, and/or color.). Lou and Shah are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify composite image of Lou to include generating the target pixel positions corresponding to the target object in the metering image; and determining the region of interest based on the target pixel positions corresponding to the target object and the target pixel positions corresponding to the brightness value as described by Shah. The motivation for doing so would have been for computing a plurality of depth planes or a depth map for a scene to be captured by one or more cameras of a computing device and independently metering or calculating image statistics for each of the depth planes (Shah, col 1, lines 60-65). Therefore, it would have been obvious to combine Lou and Shah to obtain the invention as specified in claim 17. 
Regarding claim 18, Lou discloses wherein the real-time images are generated by a single image capturing device, the single image capturing device corresponds to a plurality of first exposure parameters and a plurality of first resolution parameters, and the real-time images are generated by the following steps ([0065] generate composite image based on image and image may be less than that of an image processing system to generate HDR image based on short-exposure image and long exposure image): generating, by the single image capturing device, the real-time images based on the first exposure parameters and the first resolution parameters ([0065] The data representative of the image 304 may be less than data representative of full-readout images that are used by conventional HDR techniques to generate HDR images.). Regarding claim 19, Lou discloses wherein the real-time images are generated by a plurality of image capturing devices, each of the image capturing devices corresponds to a second exposure parameter and a second resolution parameter, and the real-time images are generated by the following steps ([0065] generate composite image based on image and image may be less than that of an image processing system to generate HDR image based on short-exposure image and long exposure image): generating, by the image capturing devices, the real-time images based on the second exposure parameters and the second resolution parameters ([0065] The data representative of the image 304 may be less than data representative of full-readout images that are used by conventional HDR techniques to generate HDR images.).
Regarding claim 20, Lou discloses a non-transitory computer readable storage medium, having a computer program stored therein, wherein the computer program comprises a plurality of codes, the computer program executes a composite image generating method after being loaded into an electronic apparatus, the composite image generating method ([0093] a computer-readable or machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The computer-readable or machine-readable storage medium can be non-transitory.) comprises: determining an eye gaze position of a user ([0063] an object detected in the image of the field of view, a semantic analysis of the image of the field of view, a gaze of a viewer); determining a region of interest corresponding to a plurality of real-time images based on the eye gaze position ([0072] Image-processing device (or another system or component) may (e.g., using host processor) determine region of interest of field of view), wherein each of the real-time images corresponds to an exposure value and a resolution ([0078] data 422 may be smaller than the data representative of long-exposure image 206 of FIG. 2.
Because data 422 is smaller than data representative of long-exposure image 206, the required data throughput between image-capture device 404 and image-processing device 406 may be less than the required data throughput of an image processing system to generate HDR image); generating a composite image based on the region of interest and a region of non-interest corresponding to the real-time images ([0075] Image-processing device 406 may generate a composite image (e.g., an HDR image) including pixels from the partial-readout image captured by subset of photodiodes 410 (represented by data 422) and pixels from the full-readout image captured by photodiodes 408 (represented by data 420).), wherein the region of interest and the region of non-interest of the composite image are generated based on the real-time images corresponding to different of the resolutions ([0037] MFNR and/or super-resolution techniques can be applied to a particular region of interest to reduce noise (in the case of MFNR) and/or to increase resolution (in the case of super-resolution) in the region); and transmitting the composite image to a display device for a real-time display operation ([0096] Output device 622 can also be one or more of a number of output mechanisms known to those of skill in the art, such as a display, projector, television, speaker device, etc.), wherein the steps of determining the eye gaze position of the user further comprises the following steps: determining the eye gaze position of the user based on an eye tracking information of the user in a metering image ([0081] the computing device (or one or more components thereof) may determine the region of interest based on a gaze of a viewer. For example, image-processing device 406 may determine region of interest 418 based on a gaze of a viewer.),
and wherein the step of determining the region of interest further comprises steps for which the examiner relies on Shah: Shah discloses calculating a brightness value of the metering image at the eye gaze position (col. 24, lines 5-10, utilize information such as user gaze direction to activate resources that are likely to be used in order to spread out the need for processing capacity); generating a plurality of target pixel positions corresponding to the brightness value in the metering image (col. 8, lines 5-10, converting the RGB values of the captured image to brightness values B, calculating a single brightness value or statistic Bpre (e.g., center-weighted mean, median, or other heuristic) from the brightness values); and determining the region of interest based on the target pixel positions (col. 2, lines 25-30, subject matter of interest in each depth plane can be detected automatically based on techniques such as facial detection, object detection, and/or a fill algorithm for determining a region of interest in a scene based on similarities in depth/distance, luminosity, and/or color.).

Lou and Shah are combinable because they are from the same field of endeavor. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the composite image of Lou to include calculating a brightness value of the metering image at the eye gaze position; generating a plurality of target pixel positions corresponding to the brightness value in the metering image; and determining the region of interest based on the target pixel positions, as described by Shah. The motivation for doing so would have been to compute a plurality of depth planes or a depth map for a scene to be captured by one or more cameras of a computing device and to independently meter or calculate image statistics for each of the depth planes (Shah, col. 1, lines 60-65).
Therefore, it would have been obvious to combine Lou and Shah to obtain the invention as specified in claim 20.

Regarding claim 21, Lou is silent as to the region of interest comprising a plurality of discontinuous areas corresponding to the plurality of target pixel positions having target brightness values within a threshold range of the brightness value at the eye gaze position. Shah discloses this limitation (col. 4, lines 40-45, all of the pixels determined to be within a similarity threshold as the starting pixel(s) are identified, and these pixels can represent a person, object, or region of interest within a depth plane). Lou and Shah are combinable because they are from the same field of endeavor. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the composite image of Lou so that the region of interest comprises a plurality of discontinuous areas corresponding to the plurality of target pixel positions having target brightness values within a threshold range of the brightness value at the eye gaze position, as described by Shah. The motivation for doing so would have been to compute a plurality of depth planes or a depth map for a scene to be captured by one or more cameras of a computing device and to independently meter or calculate image statistics for each of the depth planes (Shah, col. 1, lines 60-65). Therefore, it would have been obvious to combine Lou and Shah to obtain the invention as specified in claim 21.
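As an informal illustration of the limitations at issue in claims 20-21 (metering brightness at the eye gaze position, collecting target pixel positions whose brightness falls within a threshold range of that value, and using the resulting region of interest to composite a high-resolution capture onto a lower-resolution one), a minimal sketch follows. This is not the claimed implementation and not anything disclosed by Lou or Shah; the function name, the 0.10 threshold, and the luma weights are assumptions for illustration only.

```python
import numpy as np

def gaze_roi_composite(high_res, low_res_upscaled, gaze_xy, rel_threshold=0.10):
    """Composite frame: pixels whose brightness is near the gaze-position
    brightness come from the high-resolution image; all other pixels come
    from the (already upscaled) low-resolution image. Inputs are H x W x 3
    RGB arrays with values in [0, 1]."""
    # Per-pixel brightness via a simple luma approximation.
    luma = high_res @ np.array([0.299, 0.587, 0.114])
    gx, gy = gaze_xy
    b_gaze = luma[gy, gx]  # brightness value metered at the eye gaze position
    # Target pixel positions: brightness within a threshold range of b_gaze.
    # Nothing forces these pixels to be contiguous, so the mask may consist
    # of several discontinuous areas.
    roi_mask = np.abs(luma - b_gaze) <= rel_threshold
    composite = np.where(roi_mask[..., None], high_res, low_res_upscaled)
    return composite, roi_mask
```

Because the mask is purely brightness-based, pixels far apart in the frame that happen to share the gaze-position brightness all join the region of interest, which is one way the "plurality of discontinuous areas" recited in claims 21-23 can arise.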
Regarding claim 22, Lou is silent as to the region of interest comprising a plurality of discontinuous areas corresponding to the plurality of target pixel positions having target brightness values within a threshold range of the brightness value at the eye gaze position. Shah discloses this limitation (col. 4, lines 40-45, all of the pixels determined to be within a similarity threshold as the starting pixel(s) are identified, and these pixels can represent a person, object, or region of interest within a depth plane). Lou and Shah are combinable because they are from the same field of endeavor. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the composite image of Lou so that the region of interest comprises a plurality of discontinuous areas corresponding to the plurality of target pixel positions having target brightness values within a threshold range of the brightness value at the eye gaze position, as described by Shah. The motivation for doing so would have been to compute a plurality of depth planes or a depth map for a scene to be captured by one or more cameras of a computing device and to independently meter or calculate image statistics for each of the depth planes (Shah, col. 1, lines 60-65). Therefore, it would have been obvious to combine Lou and Shah to obtain the invention as specified in claim 22.

Regarding claim 23, Lou is likewise silent as to the same limitation, which Shah discloses (col. 4, lines 40-45, as cited above). For the same reasons and with the same motivation (Shah, col. 1, lines 60-65), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lou and Shah to obtain the invention as specified in claim 23.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANG I PATEL, whose telephone number is (571) 272-8964. The examiner can normally be reached M-F, 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHIVANG I PATEL/
Primary Examiner, Art Unit 2615

Prosecution Timeline

Jan 17, 2024
Application Filed
Oct 09, 2025
Non-Final Rejection — §102, §103
Jan 13, 2026
Response Filed
Mar 13, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602847
SYSTEMS AND METHODS FOR LAYERED IMAGE GENERATION
2y 5m to grant Granted Apr 14, 2026
Patent 12599838
APPARATUS AND METHODS FOR RECORDING AND REPORTING ABUSIVE ONLINE INTERACTIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12592004
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12591947
DISTORTION-BASED IMAGE RENDERING
2y 5m to grant Granted Mar 31, 2026
Patent 12584296
Work Machine Display Control System, Work Machine Display System, Work Machine, Work Machine Display Control Method, And Work Machine Display Control Program
2y 5m to grant Granted Mar 24, 2026
Based on the examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
74%
Grant Probability
93%
With Interview (+18.5%)
2y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
