Prosecution Insights
Last updated: April 18, 2026
Application No. 17/813,300

USING ENERGY MODEL TO ENHANCE DEPTH ESTIMATION WITH BRIGHTNESS IMAGE

Status: Non-Final OA §103
Filed: Jul 18, 2022
Examiner: DHOOGE, DEVIN J
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Analog Devices International Unlimited Company
OA Round: 3 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (50 granted / 71 resolved), +8.4% vs TC avg (above average)
Interview Lift: strong, +42.9% for resolved cases with an interview vs. without
Typical Timeline: 3y 5m average prosecution; 48 applications currently pending
Career History: 119 total applications across all art units
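As a quick sanity check on the headline numbers above (and assuming, since the report does not state it, that the "with interview" figure is simply the career allow rate scaled by the interview lift and capped for display), the arithmetic works out as follows:

```python
# Recompute the headline examiner metrics from the raw counts shown above.
# Assumption (not documented in the report): the "with interview" figure is the
# base allow rate scaled multiplicatively by the interview lift, capped at 99%.

granted, resolved = 50, 71
allow_rate = granted / resolved                                 # 0.704 -> shown as 70%
interview_lift = 0.429                                          # +42.9% relative lift
with_interview = min(allow_rate * (1 + interview_lift), 0.99)   # display cap assumed

print(f"career allow rate:   {allow_rate:.1%}")      # 70.4%
print(f"with interview est.: {with_interview:.1%}")  # 99.0% (capped)
```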

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§102: 35.8% (-4.2% vs TC avg)
§112: 5.7% (-34.3% vs TC avg)
Tech Center average shown for comparison is an estimate • Based on career data from 71 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/11/2025 has been entered.

Response to Amendment

This communication is filed in response to the amendment filed on 11/11/2025. Claims 1, 11, 13-14, 16 and 21 are currently amended. Claim 22 is new. Claims 1-5 and 7-22 are pending.

Response to Arguments

Applicant's arguments filed on 11/11/2025 on pages 8-11, under REMARKS, with respect to the 35 U.S.C. 103 rejections of claims 1-5 and 7-21 have been fully considered and are persuasive. The rejections of those claims have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of US 2007/0110319 A1.

Information Disclosure Statement

An information disclosure statement (IDS) was filed on 11/26/2025.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1, 3-5, 16, 18-19, and 21-22 are rejected under 35 U.S.C. § 103 as being obvious over "Image Guided Depth Upsampling using Anisotropic Total Generalized Variation" by FERSTL et al. (hereinafter "FERSTL") in view of US 2016/0330434 A1 to CHEN (hereinafter "CHEN"), in further view of US 2007/0110319 A1 to WYATT et al. (hereinafter "WYATT").
As per claim 1, FERSTL discloses a method, comprising: determining a boundary weight for a target depth pixel of a plurality of depth pixels (a computing system to perform an image processing method of calculating using the provided equations a texture edge (boundary weight) is applied to the depth images using weights and depth order gradients to do so wherein the gradient is produced from the target input image and includes a normal vector to the gradient and the scalars β, γ adjust the magnitude and the sharpness of the tensor and the depth images are further compensated using equation dij showing the difference between bright and dark surface reconstructions; figure 5; page 996, col 1, equation 5; page 998, col 2), with the depth image comprising the plurality of depth pixels (a method to drastically increase the lateral measurement resolution by a novel depth map upsampling approach, as shown in Figure 1 providing increase in both quality and resolution, we add information from a high resolution intensity camera in a variational optimization framework according to depth image equation and weight adjustments to initial pixel values, see figures 3a-3g and figure 3 description describing an image updated based on specific methods to produce a depth image with a new depth value to the target depth pixel creating an upsampled version of the original input target image; figure 3b-f; page 993, col 2), and with the target depth pixel having a first depth value and corresponding to the brightness pixel (wherein each iteration of the image 3B-F includes a recreation of the target depth pixel and has a depth value associated with the target pixel in each brightness image B-F; figure 3b-f; page 993, col 2).

FERSTL fails to disclose depth pixels based on a first gradient magnitude of the target depth pixel in a depth image and a different second gradient magnitude of a brightness pixel in a brightness image; and determining an energy for the target depth pixel based on the boundary weight; determining a second depth value of the target depth pixel by optimizing the energy; and updating the depth image by assigning the second depth value to the target depth pixel.

CHEN discloses determining an energy for the target depth pixel based on the boundary weight (during step s09 the computing system is adapted to register a first fused depth map with a first and second image comprising target pixels and generate 3D point cloud data for each pixel/point in each respective image/feature map generated is generated using higher accuracy, and would include an energy value; figs 2A-B; paragraphs [0032-0034], [0040-0041]); determining a second depth value of the target depth pixel by optimizing the energy (for example as stated in paragraph 0041 the first preset pixel value is set to 5000 the second preset pixel value is 20,000, the first preset depth value is 10 meters the second preset depth value is 0.5 meters; paragraphs [0032-0034], [0040-0041]); and updating the depth image by assigning the second depth value to the target depth pixel (the aforementioned depth values correspond to different image iterations of the same target pixel, assigning a new depth value to the target pixel as desired based on parameter weighting and generating a map with desired accuracy, wherein the second depth value assigned to the target pixel is 0.5 meters; paragraphs [0032-0034], [0040-0041]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to modify FERSTL to have updating the depth image by assigning the second depth value to the target depth pixel of CHEN reference. The Suggestion/motivation for doing so would have been to provide the control method of a depth camera according to this disclosure is capable of deciding whether to generate the second depth map and, in certain cases, how much operational power should be supplied to the light source, so as to prevent unnecessary power consumption while maintaining high accuracy of depth detection of the depth camera as suggested by CHEN paragraph [0046]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHEN with FERSTL to obtain the invention as specified in claim 1. WYATT discloses depth pixels based on a first gradient magnitude of the target depth pixel in a depth image and a different second gradient magnitude of a brightness pixel in a brightness image (the system and corresponding method is adapted to identify and compute depth pixel brightness gradient values of a target depth pixel having a first gradient value in the x direction of the target pixel and a different second gradient value in the y direction of the pixel, further the gradients are seen to be different because in claim 2 it is claimed to find a difference between the first and second gradient values implying a difference exists and the values are different; figs 1 and 4; paragraphs [0008], [0032], [0038-0039], [0041], [0104]; claims 1-2). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to depth pixels based on a first gradient magnitude of the target depth pixel in a depth image and a different second gradient magnitude of a brightness pixel in a brightness image of WYATT reference. The Suggestion/motivation for doing so would have been to provide the ability to determine edge orientation and direction based on the way the brightness gradient maximizes and minimizes as suggested by WYATT at paragraph [0044]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine WYATT with modified FERSTL to obtain the invention as specified in claim 1. As per claim 3, FERSTL in view of WYATT in view of CHEN discloses the method of claim 1. Modified FERSTL further discloses wherein the target depth pixel represents a same locus in an object as the brightness pixel (as seen in figure 1 a-d the location of the object depicted never changes from the image type and displays a brightness and depth image of the same object of the same location wherein the pixels corresponding to the object would also comprise the same location; figure 1 and 3a-g). As per claim 4, FERSTL in view of WYATT in view of CHEN discloses the method of claim 1. 
Modified FERSTL further discloses wherein determining the energy for the target depth pixel based on the boundary weight comprises: determining a spatial error energy for the target depth pixel based on the boundary weight, wherein optimizing the energy comprises optimizing the spatial error energy by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image (the system further includes an anisotropic diffusion tensor, and by including this as one of the terms in the TGV model it can penalize high depth discontinuities at homogeneous regions and allow for sharp depth edges at corresponding texture differences within the image, and further interpolates the depth data to fill in the image and optimize the image in a reasonable manner; page 996, col 1).

As per claim 5, FERSTL in view of WYATT in view of CHEN discloses the method of claim 4. Modified FERSTL further discloses wherein determining the energy for the target depth pixel based on the boundary weight further comprises: determining a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image, wherein the conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel (one can penalize high depth discontinuities at homogeneous regions and allow sharp depth edges at corresponding texture differences, see equation 5, and addressing within the model cases where the intensity pixels indicate homogeneous regions while the depth may indicate a depth edge is considered as a conditional error energy model reflecting an uncertainty in the depth value which is related to the brightness value of the unsampled brightness image seen in the examples provided in figure 3a-g; fig 3a-g; page 996, col 1, equation 5).
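To make the claim language in claims 1, 4 and 5 easier to follow, the recited energy can be read as a per-pixel objective with a boundary-weighted smoothness term (the spatial error energy) and a data term conditioned on brightness (the conditional error energy). The notation below is only an illustrative reconstruction of that structure; it is not the applicant's disclosed formulation, not FERSTL's TGV model, and not drawn from CHEN or WYATT, and the weight function f and trade-off λ are assumptions:

```latex
E(d_p) = \underbrace{\sum_{q \in \mathcal{N}(p)} w_p\,(d_p - d_q)^2}_{\text{spatial error energy}}
       \; + \; \lambda\,\underbrace{U(d_p \mid b_p)}_{\text{conditional error energy}},
\qquad
w_p = f\!\left(\lVert \nabla D \rVert_p \cdot \lVert \nabla B \rVert_p\right)
```

Here d_p is the depth value of the target pixel, b_p the corresponding brightness value, N(p) the adjacent depth pixels, and w_p the boundary weight built from the depth-image and brightness-image gradient magnitudes (claim 21 recites their product); the second depth value is whatever minimizes E (claim 22).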
As per claim 16, FERSTL discloses one or more non-transitory computer-readable storage media storing instructions executable to perform operations, the operations comprising: determining a boundary weight for a target depth pixel of a plurality of depth pixels based on a gradient magnitude of the target depth pixel in a depth image and a gradient magnitude of a brightness pixel in a brightness image (a computing system to perform an image processing method of calculating using the provided equations a texture edge (boundary weight) is applied to the depth images using weights and depth order gradients to do so wherein the gradient is produced from the target input image and includes a normal vector to the gradient and the scalars β, γ adjust the magnitude and the sharpness of the tensor and the depth images are further compensated using equation dij showing the difference between bright and dark surface reconstructions; figure 5; page 996, col 1, equation 5; page 998, col 2), with the depth image comprising the plurality of depth pixels (a method to drastically increase the lateral measurement resolution by a novel depth map upsampling approach, as shown in Figure 1 providing increase in both quality and resolution, we add information from a high resolution intensity camera in a variational optimization framework according to depth image equation and weight adjustments to initial pixel values, see figures 3a-3g and figure 3 description describing an image updated based on specific methods to produce a depth image with a new depth value to the target depth pixel creating an upsampled version of the original input target image; figure 3b-f; page 993, col 2), and with the target depth pixel having a first depth value and corresponding to the brightness pixel (wherein each iteration of the image 3B-F includes a recreation of the target depth pixel and has a depth value associated with the target pixel in each brightness image B-F; figure 3b-f; page 993, col 2).

FERSTL fails to disclose depth pixels based on a first gradient magnitude of the target depth pixel in a depth image and a different second gradient magnitude of a brightness pixel in a brightness image, determining an energy for the target depth pixel based on the boundary weight; determining a second depth value of the target depth pixel by optimizing the energy; and updating the depth image by assigning the second depth value to the target depth pixel.
CHEN discloses determining an energy for the target depth pixel based on the boundary weight (during step s09 the computing system is adapted to register a first fused depth map with a first and second image comprising target pixels and generate 3D point cloud data for each pixel/point in each respective image/feature map generated is generated using higher accuracy, and would include an energy value; figs 2A-B; paragraphs [0032-0034], [0040-0041]); determining a second depth value of the target depth pixel by optimizing the energy (for example as stated in paragraph 0041 the first preset pixel value is set to 5000 the second preset pixel value is 20,000, the first preset depth value is 10 meters the second preset depth value is 0.5 meters; paragraphs [0032-0034], [0040-0041]); and updating the depth image by assigning the second depth value to the target depth pixel (the aforementioned depth values correspond to different image iterations of the same target pixel, assigning a new depth value to the target pixel as desired based on parameter weighting and generating a map with desired accuracy, wherein the second depth value assigned to the target pixel is 0.5 meters; paragraphs [0032-0034], [0040-0041]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to modify FERSTL to have updating the depth image by assigning the second depth value to the target depth pixel of CHEN reference. The Suggestion/motivation for doing so would have been that the control method of a depth camera according to this disclosure is capable of deciding whether to generate the second depth map and, in certain cases, how much operational power should be supplied to the light source, so as to prevent unnecessary power consumption while maintaining high accuracy of depth detection of the depth camera, as suggested by CHEN paragraph [0046]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHEN with FERSTL to obtain the invention as specified in claim 16.

WYATT discloses depth pixels based on a first gradient magnitude of the target depth pixel in a depth image and a different second gradient magnitude of a brightness pixel in a brightness image (the system and corresponding method is adapted to identify and compute depth pixel brightness gradient values of a target depth pixel having a first gradient value in the x direction of the target pixel and a different second gradient value in the y direction of the pixel, further the gradients are seen to be different because in claim 2 it is claimed to find a difference between the first and second gradient values implying a difference exists and the values are different; figs 1 and 4; paragraphs [0008], [0032], [0038-0039], [0041], [0104]; claims 1-2). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to depth pixels based on a first gradient magnitude of the target depth pixel in a depth image and a different second gradient magnitude of a brightness pixel in a brightness image of WYATT reference.
The Suggestion/motivation for doing so would have been to provide the ability to determine edge orientation and direction based on the way the brightness gradient maximizes and minimizes as suggested by WYATT at paragraph [0044]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine WYATT with modified FERSTL to obtain the invention as specified in claim 16. As per claim 18, FERSTL in view of WYATT in view of CHEN discloses the one or more non-transitory computer-readable storage media of claim 16. Modified FERSTL further discloses wherein determining the energy for the target depth pixel based on the boundary weight comprises: determining a spatial error energy for the target depth pixel based on the boundary weight, wherein optimizing the energy comprises optimizing the spatial error energy by reducing a difference between a depth value of the target depth pixel and a depth value of another depth pixel that is adjacent to the target depth pixel in the depth image (the system further includes a anisotropic diffusion tensor and by including this as one of the terms in the TGV model and can penalize high depth discontinuities at homogenous regions and allow for sharp depth edges at corresponding texture differences within the image and further interpolates the depth data to fill in the image and optimize the image in a reasonable manner; page 996, col 1). As per claim 19, FERSTL in view of WYATT in view of CHEN discloses the one or more non-transitory computer-readable storage media of claim 18. Modified FERSTL further discloses wherein determining the energy for the target depth pixel based on the boundary weight further comprises: determining a conditional error energy for the target depth pixel based on a depth value of the target depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image, wherein the conditional error energy indicates a measure of uncertainty in the depth value of the target depth pixel given the brightness value of the brightness pixel (one can penalize high depth discontinuities at homogeneous regions and allow sharp depth edges at corresponding texture differences see equation 5, and addressing within the model cases where the intensity pixels indicate homogeneous regions while the depth may indicate a depth edge is considered as a conditional error energy model reflecting an uncertainty in the depth value which is related to the brightness value of the unsampled brightness image seen in the examples provided in figure 3a-g; fig 3a-g; page 996, col 1, equation 5). 
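For readers tracing the limitations of claims 1, 16, 21 and 22 against the references, the sketch below walks the recited flow end to end: gradient magnitudes from the depth and brightness images, a boundary weight from their product, a per-pixel energy with a boundary-weighted spatial term and a data term, minimization of that energy, and an updated depth image. Every function name and the specific weight and energy forms are illustrative assumptions, not the applicant's disclosed algorithm and not drawn from FERSTL, CHEN, or WYATT.

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def boundary_weight(depth, brightness, beta=1.0):
    """Boundary weight from the product of the depth-image and brightness-image
    gradient magnitudes (the product claim 21 calls a fusion gradient magnitude)."""
    fusion = gradient_magnitude(depth) * gradient_magnitude(brightness)
    return np.exp(-beta * fusion)  # near 1 in smooth regions, small at shared edges

def energy(depth, depth0, w, lam=0.1):
    """Boundary-weighted spatial error term plus a data term that keeps the
    optimized depth close to the originally measured depth."""
    dx = np.diff(depth, axis=1, append=depth[:, -1:])
    dy = np.diff(depth, axis=0, append=depth[-1:, :])
    spatial = np.sum(w * (dx ** 2 + dy ** 2))
    data = lam * np.sum((depth - depth0) ** 2)
    return spatial + data

def enhance_depth(depth0, brightness, iters=200, step=0.1, lam=0.1):
    """Determine second depth values by (approximately) minimizing the energy
    with gradient descent, then return the updated depth image and the weights."""
    w = boundary_weight(depth0, brightness)
    depth = depth0.astype(float).copy()
    for _ in range(iters):
        # approximate energy gradient: boundary-weighted Laplacian plus data term
        lap = (np.roll(depth, 1, 0) + np.roll(depth, -1, 0) +
               np.roll(depth, 1, 1) + np.roll(depth, -1, 1) - 4.0 * depth)
        depth -= step * (-2.0 * w * lap + 2.0 * lam * (depth - depth0))
    return depth, w

# Toy example: noisy depth of a step edge, guided by a clean brightness image.
rng = np.random.default_rng(0)
brightness = np.zeros((32, 32)); brightness[:, 16:] = 1.0
depth0 = 2.0 * brightness + rng.normal(0.0, 0.05, brightness.shape)
enhanced, w = enhance_depth(depth0, brightness)
print("energy before:", round(energy(depth0, depth0, w), 3))
print("energy after: ", round(energy(enhanced, depth0, w), 3))
```

On the toy step-edge input the printed energy decreases after optimization while the brightness-guided weight keeps the edge sharp, which is all the sketch is meant to demonstrate.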
As per claim 21, FERSTL in view of WYATT in view of CHEN discloses the method of claim 1 wherein the determining the boundary weight for the target depth pixel comprises (PL is the pseudoinverse of the depth camera projection matrix CL, the camera center and the 3D point which is back projected using multiplication (product) with a projection matrix which acts as the gradient magnitude matrix, for the specific 3D pixel/point Xi,j of the image which is a brightness image so the pixel represented by point Xi,j is a brightness pixel; sections 3.1-3.2, page 995), resulting in a fusion gradient magnitude for the target depth pixel (the multiplication process would result in a fusion gradient magnitude value for the pixel located at the point Xi,j; sections 3.1-3.2, page 995).

FERSTL fails to disclose and determining, using the fusion gradient magnitude, the boundary weight; and determining a product of the first gradient magnitude of the target depth pixel in the depth image and the second gradient magnitude of the brightness pixel in a brightness image.

CHEN discloses and determining, using the fusion gradient magnitude, the boundary weight (generating (determining) second depth map based on a third image, subjecting the first and the second depth maps to image fusion processing (this process would involve a fusion gradient magnitude), and registering a fused depth map with one of the first and second images to generate 3D point cloud data which would include boundary weight values; abstract; paragraphs [0031-0032], [0039-0040]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to modify FERSTL to have determining a product of the gradient magnitude of the target depth pixel in the depth image and the gradient magnitude of the brightness pixel in a brightness image of CHEN reference. The Suggestion/motivation for doing so would have been that the second depth map is fused with the first depth map through image fusion processing, so as to maintain accuracy of depth detection of the depth camera, as suggested by CHEN paragraph [0032]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine CHEN with FERSTL to obtain the invention as specified in claim 21.

WYATT discloses determining a product of the first gradient magnitude of the target depth pixel in the depth image and the second gradient magnitude of the brightness pixel in a brightness image (the brightness value of the target pixel which is made up of both the x gradient magnitude value acting as the first gradient and the y gradient magnitude value acting as the second gradient are combined as a resultant brightness gradient magnitude value and this value is multiplied by a constant in order to reduce image contrast; figs 1, 4, and 11; paragraphs [0008], [0032], [0038-0039], [0041], [0089]; claims 1-2). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have determining a product of the first gradient magnitude of the target depth pixel in the depth image and the second gradient magnitude of the brightness pixel in a brightness image of WYATT reference.
The Suggestion/motivation for doing so would have been to provide the ability to determine edge orientation and direction based on the way the brightness gradient maximizes and minimizes as suggested by WYATT at paragraph [0044]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine WYATT with modified FERSTL to obtain the invention as specified in claim 21. 6. (Cancelled) As per claim 22, FERSTL in view of WYATT in view of CHEN discloses the method of claim 1. Modified FERSTL further discloses wherein optimizing the energy comprises minimizing the energy (after compensation (optimization technique) di,j = di,j +Δdi,j , the difference between bright and dark surface reconstructions is minimized; page 998, column 2, paragraph 2). Claims 2, 7-9, 11-15, 17, 20 are rejected under 35 § U.S.C. 103 as being obvious over Image Guided Depth Up sampling using Anisotropic Total Generalized Variation to FERSTL et al. (hereinafter “FERSTL”) in view of US 2007/0110319 A1 to WYATT et al. (hereinafter “WYATT”) in view of US 2016/0330434 A1 to CHEN (hereinafter “CHEN”) in further view of US 2021/0356598 A1 to HURWITZ (hereinafter “HURWITZ”). As per claim 2, FERSTL in view of WYATT in view of CHEN discloses the method of claim 1. Modified FERSTL fails to disclose wherein: the brightness image and the depth image represent a same object, the brightness image comprises a plurality of brightness pixels that includes the brightness pixel, and each respective brightness pixel of the plurality of brightness pixels correspond to a respective depth pixel of the plurality of depth pixels. HURWITZ discloses wherein: the brightness image and the depth image represent a same object, the brightness image comprises a plurality of brightness pixels that includes the brightness pixel, and each respective brightness pixel of the plurality of brightness pixels correspond to a respective depth pixel of the plurality of depth pixels (in order to penalize a change in labeling inside homogeneous areas it has been observed that the contrast measure computed from a pair of neighboring pixels is sensitive to both noise and blur the dimension of the neighborhood is slightly increased from two to four pixels. Let (1 , i, j, k) be a quadruplet of aligned consecutive pixels, and including depth, brightness of reflectance, and contrast pixel values; figure 5; paragraphs [0005], [0077]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have the brightness image and the depth image represent a same object of HURWITZ reference. The Suggestion/motivation for doing so would have been to produce a binary edge map of the depth image which has the best performance as suggested by HURWITZ paragraph [0077]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HURWITZ with modified FERSTL to obtain the invention as specified in claim 2. As per claim 7, FERSTL in view of WYATT in view of CHEN discloses the method of claim 1. 
Modified FERSTL fails to disclose wherein the depth image and the brightness image are generated based on image data from a same image sensor. HURWITZ discloses wherein the depth image and the brightness image are generated based on image data from a same image sensor (the generated image from 2D to 3D images are computed and captured via the same image sensor/camera; paragraphs [0034], [0078], [0081]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have wherein the depth image and the brightness image are generated based on image data from a same image sensor of HURWITZ reference. The Suggestion/motivation for doing so would have been to provide using an active image sensor which is a stereo camera to generate a series of disparity maps computed from a passive stereo camera from a single view point as suggested by HURWITZ at paragraph [0034]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HURWITZ with modified FERSTL to obtain the invention as specified in claim 7. As per claim 8, FERSTL in view of WYATT in view of CHEN discloses the method of claim 1. Modified FERSTL fails to disclose further comprising: instructing an illuminator assembly to project modulated light onto a local area including an object; instructing a camera assembly to capture reflected light from at least a portion of the object; and generating the depth image based on a phase shift between the reflected light and the modulated light projected into the local area. HURWITZ discloses further comprising: instructing an illuminator assembly to project modulated light into a local area including an object (a light from any light source is projected towards an object 910 and the light is reflected off of said object; fig 10a; paragraphs [0097], [0122]); instructing a camera assembly to capture reflected light from at least a portion of the object (instructing a camera 500 to capture the reflected light from the object 910; fig 10a; paragraphs [0097], [0122]); and generating the depth image based on a phase shift between the reflected light and the modulated light projected onto the local area (the system including controller 140 for determining a depth image/frame using determined phase relationship between the first laser light and the received reflected light and the determined phase relationship between the second laser light and the received reflected light, phase unwrapping may be performed to arrive at said depth image frame; paragraphs [0070], [0164]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have generating the depth image based on a phase shift between the reflected light and the modulated light projected onto the local area and reflected off the object of HURWITZ reference. The Suggestion/motivation for doing so would have been to use correlated double sampling which is performed to minimize KTC noise as suggested by HURWITZ at paragraphs [0073]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Therefore, it would have been obvious to combine HURWITZ with modified FERSTL to obtain the invention as specified in claim 8. As per claim 9, FERSTL in view of WYATT in view of CHEN discloses the method of claim 8. Modified FERSTL fails to disclose further comprising: generating the brightness image based on brightness of the reflected light. HURWITZ discloses further comprising: generating the brightness image based on brightness of the reflected light (a 2D IR frame may also be determined using the determined active brightness for the first laser light and/or the determined active brightness for the second laser light and with the image acquisition components 130, 140 and 150 reconfigured to control pulsed emission from the laser 110 and determine a depth frame based on a time difference between emission of a pulse and reception of reflected light/brightness image a 2D IR frame may also be determined based on the magnitude of charge accumulated in the imaging pixels of the image sensor 120; paragraph [0069-0072]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have generating the brightness image based on brightness of the reflected light of HURWITZ reference. The Suggestion/motivation for doing so would have been to use correlated double sampling which is performed to minimize KTC noise as suggested by HURWITZ at paragraphs [0073]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HURWITZ with modified FERSTL to obtain the invention as specified in claim 9. As per claim 11, FERSTL discloses each brightness pixel of the plurality of brightness pixels corresponding to a respective depth pixel of the plurality of depth pixels (see figures 3a-3g and figure 3 description describing an image updated based on specific methods to produce a depth image with a new depth value to the target depth pixel creating an unsampled version of the original input target image; figure 3b-f), for each depth pixel of the plurality of depth pixels, determine a respective energy (calculating using the provided equations a texture edge (boundary weight) is applied to the depth images using weights and depth order gradients to do so wherein the gradient is produced from the target input image and includes a normal vector to the gradient and the scalars β, γ adjust the magnitude and the sharpness of the tensor and the depth images are further compensated using equation dij showing the difference between bright and dark surface reconstructions; figure 5; page 996, col 1, equation 5; page 998, col 2). 
FERSTL fails to disclose a system, comprising an illuminator assembly configured to project modulated light into a local area including an object; a camera assembly configured to capture reflected light from at least a portion of the object; and a controller configured to: generate a depth image based on the reflected light, the depth image comprising a plurality of depth pixels having respective depth values, with the depth image representing at least a portion of the object, generate a brightness image based on the reflected light, the brightness image comprising a plurality of brightness pixels and capturing representing at least the portion of the object, based on a first gradient magnitude of the respective depth pixel in the depth image and a different second gradient magnitude of a brightness pixel in the brightness image, determine enhanced depth values of respective ones of the plurality of depth pixels by fusing the depth image with the brightness image based on respective energies of the plurality of depth pixels and generate an enhanced depth image based on the enhanced depth values by, at least in part, replacing a particular depth value of the respective depth values with a particular enhanced depth value of the enhanced depth values, with both the particular depth value and the particular enhanced depth value being associated with a particular depth pixel of the plurality of depth pixels.

HURWITZ discloses a system, comprising an illuminator assembly configured to project modulated light into a local area including an object (a light from any light source is projected towards an object 910 and the light is reflected off of said object; fig 10a; paragraphs [0097], [0122]); a camera assembly configured to capture reflected light from at least a portion of the object (a camera 500 is adapted to capture the reflected light from the object 910; fig 10a; paragraphs [0097], [0122]); and a controller configured to: generate a depth image based on the reflected light, the depth image comprising a plurality of depth pixels having respective depth values (the system including controller 140 for determining a depth image/frame using determined phase relationship between the first laser light and the received reflected light and the determined phase relationship between the second laser light and the received reflected light, phase unwrapping may be performed to arrive at said depth image frame; paragraphs [0070], [0164]), with the depth image representing at least a portion of the object, generate a brightness image based on the reflected light, the brightness image comprising a plurality of brightness pixels and capturing representing at least the portion of the object (a 2D IR frame may also be determined using the determined active brightness for the first laser light and/or the determined active brightness for the second laser light and with the image acquisition components 130, 140 and 150 reconfigured to control pulsed emission from the laser 110 and determine a depth frame based on a time difference between emission of a pulse and reception of reflected light/brightness image a 2D IR frame may also be determined based on the magnitude of charge accumulated in the imaging pixels of the image sensor 120; paragraph [0069-0072]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to modify FERSTL to have a system, comprising an illuminator assembly configured to project modulated light into a local area including an object at a plurality of brightness levels to produce a plurality of images of HURWITZ reference. The Suggestion/motivation for doing so would have been to use correlated double sampling which is performed to minimize KTC noise as suggested by HURWITZ at paragraphs [0073]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HURWITZ with modified FERSTL to obtain the invention as specified in claim 11. WYATT discloses depth pixels based on a first gradient magnitude of the respective depth pixel in the depth image and a different second gradient magnitude of a brightness pixel in the brightness image (the system and corresponding method is adapted to identify and compute depth pixel brightness gradient values of a target depth pixel having a first gradient value in the x direction of the target pixel and a different second gradient value in the y direction of the pixel, further the gradients are seen to be different because in claim 2 it is claimed to find a difference between the first and second gradient values implying a difference exists and the values are different; figs 1 and 4; paragraphs [0008], [0032], [0038-0039], [0041], [0104]; claims 1-2). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to depth pixels based on a first gradient magnitude of the target depth pixel in a depth image and a different second gradient magnitude of a brightness pixel in a brightness image of WYATT reference. The Suggestion/motivation for doing so would have been to provide the ability to determine edge orientation and direction based on the way the brightness gradient maximizes and minimizes as suggested by WYATT at paragraph [0044]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine WYATT with modified FERSTL to obtain the invention as specified in claim 11. 
CHEN discloses determine enhanced depth values of respective ones of the plurality of depth pixels by fusing the depth image with the brightness image based on respective energies of the plurality of depth pixels and generate an enhanced depth image based on the enhanced depth values by, at least in part (during step s09 the computing system is adapted to register a first fused depth map with a first and second image comprising target pixels and generate 3D point cloud data for each pixel/point in each respective image/feature map generated is generated using higher accuracy, and would include an energy value; figs 2A-B; paragraphs [0032-0034], [0040-0041]), replacing a particular depth value of the respective depth values with a particular enhanced depth value of the enhanced depth values (for example as stated in paragraph 0041 the first preset pixel value is set to 5000 the second preset pixel value is 20,000, the first preset depth value is 10 meters the second preset depth value is 0.5 meters; paragraphs [0032-0034], [0040-0041]), with both the particular depth value and the particular enhanced depth value being associated with a particular depth pixel of the plurality of depth pixels (the aforementioned depth values correspond to different image iterations of the same target pixel, assigning a new depth value to the target pixel as desired based on parameter weighting and generating a map with desired accuracy, wherein the second depth value assigned to the target pixel is 0.5 meters; paragraphs [0032-0034], [0040-0041]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have with both the particular depth value and the particular enhanced depth value being associated with a particular depth pixel of the plurality of depth pixels of CHEN reference. The Suggestion/motivation for doing so would have been that the control method of a depth camera according to this disclosure is capable of deciding whether to generate the second depth map and, in certain cases, how much operational power should be supplied to the light source, so as to prevent unnecessary power consumption while maintaining high accuracy of depth detection of the depth camera, as suggested by CHEN paragraph [0046]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to further combine CHEN with modified FERSTL to obtain the invention as specified in claim 11.

As per claim 12, FERSTL in view of WYATT in view of HURWITZ in further view of CHEN discloses the system of claim 11. Modified FERSTL further discloses wherein fusing the depth image with the brightness image based on respective energies of the plurality of depth pixels comprises: for each depth pixel of the plurality of depth pixels, optimizing the respective energy (the upsampling as a convex optimization problem using a novel method for fast depth image upsampling by combining a low resolution depth image with high resolution texture information in a variational energy optimization framework; page 994, col 1). As per claim 13, FERSTL in view of WYATT in view of HURWITZ in further view of CHEN discloses the system of claim 12.
FERSTL fails to disclose wherein the controller is further configured to determine the respective energy based on the first gradient magnitude of the respective depth pixel in the depth image and the second gradient magnitude of the brightness pixel in the brightness image by: determining a spatial error energy for the respective depth pixel based on the first gradient magnitude of the respective depth pixel in the depth image and the second gradient magnitude of the brightness pixel in the brightness image, wherein optimizing the respective energy comprises optimizing the spatial error energy by reducing a difference between a depth value of the respective depth pixel and a depth value of another depth pixel that is adjacent to the respective depth pixel in the depth image. WYATT discloses wherein the controller is further configured to determine the respective energy based on the first gradient magnitude of the respective depth pixel in the depth image and the second gradient magnitude of the brightness pixel in the brightness image by: determining a spatial error energy for the respective depth pixel based on the first gradient magnitude of the respective depth pixel in the depth image and the second gradient magnitude of the brightness pixel in the brightness image (the first and second gradient magnitudes are found in the x and y direction of the target pixel and is used to determine the spatial derivative value of each gradient magnitude and in a direction parallel to the edge direction, it can be assumed that brightness gradient values originating from edges are not included in the image and that only spatial derivative values originating from noise (spatial error) are included in the image; paragraphs [0004], [0008], [0032], [0038-0041], [0082]), wherein optimizing the respective energy comprises optimizing the spatial error energy by reducing a difference between a depth value of the respective depth pixel and a depth value of another depth pixel that is adjacent to the respective depth pixel in the depth image (the depth pixels having the brightness gradient values are selected in the direction which the gradient value maximizes (optimize) by as seen in equation (2) taking a difference of gradient values of the selected target/depth pixel and is adjacent to the gradient/related pixel being used in the equation and this is done to reduce noise/error in the in the image; figs 1-2; paragraphs [0074-0081]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have determining a spatial error energy for the respective depth pixel based on the first gradient magnitude of the respective depth pixel in the depth image and the second gradient magnitude of the brightness pixel in the brightness image of WYATT reference. The Suggestion/motivation for doing so would have been to provide the ability to determine edge orientation and direction based on the way the brightness gradient maximizes and minimizes as suggested by WYATT at paragraph [0044]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine WYATT with modified FERSTL to obtain the invention as specified in claim 13. 
As per claim 14, FERSTL in view of WYATT in view of HURWITZ in further view of CHEN discloses the system of claim 11. Modified FERSTL further discloses wherein the controller is further configured to determine the respective energy based on the first gradient magnitude of the respective depth pixel in the depth image and the second gradient magnitude of the brightness pixel in the brightness image further by: determining a conditional error energy for the respective depth pixel based on a depth value of the respective depth pixel in the depth image and a brightness value of the brightness pixel in the brightness image, wherein the conditional error energy indicates a measure of uncertainty in the depth value of the respective depth pixel given the brightness value of the brightness pixel (can penalize high depth discontinuities at homogeneous regions and allow sharp depth edges at corresponding texture differences see equation 5, and addressing within the model cases where the intensity pixels indicate homogeneous regions while the depth may indicate a depth edge is considered as a conditional error energy model reflecting an uncertainty in the depth value which is related to the brightness value of the unsampled brightness image seen in the examples provided in figure 3a-g; fig 3a-g; page 996, col 1, equation 5). As per claim 15, FERSTL in view of WYATT in view of HURWITZ in further view of CHEN discloses the system of claim 11. Modified FERSTL fails to disclose wherein to generate the depth image and the brightness image based on the reflected light by: generating the depth image based on a phase shift between the reflected light and the modulated light projected into the local area; and generating the brightness image based on brightness of the reflected light. HURWITZ discloses wherein to generate the depth image and the brightness image based on the reflected light the controller is configured to: generate the depth image based on a phase shift between the reflected light and the modulated light projected into the local area (imaging pixels of the image sensor are given phase offsets according to the phase off set equation table provided in paragraph [0065] where the skilled person will readily understand that using DFT to determine the phase relationship between the first laser light and the received reflected laser light, and to determine active brightness, is merely one example and that any other suitable alternative technique may be used; paragraphs [0056], [0063-0065]); and generate the brightness image based on brightness of the reflected light (determine an active brightness 2D IR image frame based on the reflected light; paragraphs [0056-0057], [0068-0069]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have generated depth maps based on reflected light of the object of HURWITZ reference. The Suggestion/motivation for doing so would have been to provide as suggested by HURWITZ at paragraph [0070] that this process may be repeated many times in order to generate a time series of depth frames, which may together form a video of the depth image frames in sequence. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Therefore, it would have been obvious to combine HURWITZ with modified FERSTL to obtain the invention as specified in claim 15. As per claim 17, FERSTL in view of CHEN in view of WYATT discloses the one or more non-transitory computer-readable storage media of claim 16. Modified FERSTL fails to disclose wherein: the brightness image and the depth image represent a same object, the brightness image comprises a plurality of brightness pixels that includes the brightness pixel, and each respective brightness pixel of the plurality of brightness pixels correspond to a respective depth pixel of the plurality of depth pixels. HURWITZ discloses wherein: the brightness image and the depth image represent a same object, the brightness image comprises a plurality of brightness pixels that includes the brightness pixel, and each respective brightness pixel of the plurality of brightness pixels correspond to a respective depth pixel of the plurality of depth pixels (in order to penalize a change in labeling inside homogeneous areas it has been observed that the contrast measure computed from a pair of neighboring pixels is sensitive to both noise and blur the dimension of the neighborhood is slightly increased from two to four pixels. Let (1 , i, j, k) be a quadruplet of aligned consecutive pixels, and including depth, brightness of reflectance, and contrast pixel values; figure 5; paragraphs [0005], [0077]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have the brightness image and the depth image represent a same object of HURWITZ reference. The Suggestion/motivation for doing so would have been to produce a binary edge map of the depth image which has the best performance as suggested by HURWITZ paragraph [0077]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HURWITZ with modified FERSTL to obtain the invention as specified in claim 17. As per claim 20, FERSTL in view of CHEN in view of WYATT discloses the one or more non-transitory computer-readable storage media of claim 16. Modified FERSTL fails to disclose wherein the depth image and the brightness image are generated based on image data from a same image sensor. HURWITZ discloses wherein the depth image and the brightness image are generated based on image data from a same image sensor (the generated image from 2D to 3D images are computed and captured via the same image sensor/camera; paragraphs [0034], [0078], [0081]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have wherein the depth image and the brightness image are generated based on image data from a same image sensor of HURWITZ reference. The Suggestion/motivation for doing so would have been to provide using an active image sensor which is a stereo camera to generate a series of disparity maps computed from a passive stereo camera from a single view point as suggested by HURWITZ at paragraph [0034]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. 
Therefore, it would have been obvious to combine HURWITZ with modified FERSTL to obtain the invention as specified in claim 20. Claim 10 is rejected under 35 § U.S.C. 103 as being obvious over Image Guided Depth Up sampling using Anisotropic Total Generalized Variation to FERSTL et al. (hereinafter “FERSTL”) in view of US 2007/0110319 A1 to WYATT et al. (hereinafter “WYATT”) in view of US 2016/0330434 A1 to CHEN (hereinafter “CHEN”) in further view of US 2021/0356598 A1 to HURWITZ (hereinafter “HURWITZ”) in further view of US 2017/0018114 A1 to STEWART et al (hereinafter “STEWART”). As per claim 10, FERSTL in view of WYATT in view of CHEN in further view of HURWITZ discloses the method of claim 8. Modified FERSTL fails to disclose wherein the reflected light is first reflected light, and the method further comprises: instructing the camera assembly to capture second reflected light from at least the portion of the object; and generating the brightness image based on brightness of the second reflected light, wherein the second reflected light has a different wavelength from the first reflected light. STEWART discloses wherein the reflected light is first reflected light, and the method further comprises: instructing the camera assembly to capture second reflected light from at least the portion of the object (the camera and corresponding probe light from modulated emitter 32 is to apply probe light to subject 16’ at areas 34 and second area 36 and to determine reflectance and corresponding wavelength of the two reflected areas off of subject 16’; fig 1; paragraphs [0017-0018], [0023-0024]); and generating the brightness image based on brightness of the second reflected light, wherein the second reflected light has a different wavelength from the first reflected light (sensor 18 adapted to capture the reflectance of light off of the subject and includes one or more passive filters 22 may be arranged in series with sensor array 14 and configured to limit the wavelength response of the sensor array passive filters reduce noise by excluding photons of wavelengths not intended to be imaged and then a map of the reflected brightness’s from areas 34 and 36 may be generated based on the desired wavelengths allowed through from the filter arrangement; fig 1; paragraphs [0017-0018], [0023-0024], [0038]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention to further modify FERSTL to have wherein the second reflected light has a different wavelength from the first reflected light of STEWART reference. The Suggestion/motivation for doing so would have been to filter out wavelengths of light reflectance not desired for observation as suggested by STEWART at paragraphs [0018-0019]. Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine STEWART with modified FERSTL to obtain the invention as specified in claim 10. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached on (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800- 786-9199 (IN USA OR CANADA) or 571-272-1000. /Devin Dhooge/ USPTO Patent Examiner Art Unit 2677 /Jonathan S Lee/Primary Examiner, Art Unit 2677
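As background for the phase-shift limitations addressed in claims 8, 11 and 15 above, the snippet below shows the textbook continuous-wave time-of-flight relationship between four phase-stepped correlation samples, the recovered phase and active brightness, and depth. It is a generic illustration under standard assumptions (four samples at 0/90/180/270 degrees, a single modulation frequency, no phase unwrapping), not HURWITZ's implementation and not the applicant's disclosed method.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth_and_brightness(c0, c90, c180, c270, f_mod):
    """Depth and active brightness from four phase-stepped ToF correlation samples.

    c0..c270 : correlation samples at 0/90/180/270 degree offsets (scalars or arrays)
    f_mod    : modulation frequency in Hz
    """
    i = c0 - c180                                   # in-phase component
    q = c90 - c270                                  # quadrature component
    phase = np.mod(np.arctan2(q, i), 2 * np.pi)     # phase shift of the reflected light
    depth = C * phase / (4 * np.pi * f_mod)         # half the round-trip path length
    brightness = 0.5 * np.hypot(i, q)               # active-brightness (amplitude) image
    return depth, brightness

# Toy check: a target at 2.0 m with a 40 MHz modulation frequency.
f_mod, d_true, amp = 40e6, 2.0, 100.0
phi = 4 * np.pi * f_mod * d_true / C
samples = [amp * np.cos(phi - off) for off in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
d_est, b_est = tof_depth_and_brightness(*samples, f_mod=f_mod)
print(round(float(d_est), 3), round(float(b_est), 1))   # ~2.0 m, ~100.0
```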

Prosecution Timeline

Jul 18, 2022: Application Filed
Mar 19, 2025: Non-Final Rejection — §103
Jun 23, 2025: Response Filed
Aug 25, 2025: Final Rejection — §103
Nov 11, 2025: Request for Continued Examination
Nov 18, 2025: Response after Non-Final Action
Dec 18, 2025: Non-Final Rejection — §103
Feb 04, 2026: Examiner Interview Summary
Feb 04, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602773: Deep-Learning-based T1-Enhanced Selection of Linear Coefficients (DL-TESLA) for PET/MR Attenuation Correction (2y 5m to grant; granted Apr 14, 2026)
Patent 12579780: HYPERSPECTRAL TARGET DETECTION METHOD OF BINARY-CLASSIFICATION ENCODER NETWORK BASED ON MOMENTUM UPDATE (2y 5m to grant; granted Mar 17, 2026)
Patent 12524982: NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM, VISUALIZATION METHOD AND INFORMATION PROCESSING APPARATUS (2y 5m to grant; granted Jan 13, 2026)
Patent 12517146: IMAGE-BASED DECK VERIFICATION (2y 5m to grant; granted Jan 06, 2026)
Patent 12505673: MULTIMODAL GAME VIDEO SUMMARIZATION WITH METADATA (2y 5m to grant; granted Dec 23, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 99% (+42.9%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 71 resolved cases by this examiner. Grant probability derived from career allow rate.
