Prosecution Insights
Last updated: April 19, 2026
Application No. 18/286,522

Warped Perspective Correction

Final Rejection (§103)
Filed: Oct 11, 2023
Examiner: BUDISALICH, ANDREW STEVEN
Art Unit: 2662
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 78% — above average (36 granted / 46 resolved; +16.3% vs TC avg)
Interview Lift: +8.9% — moderate (resolved cases with interview)
Avg Prosecution: 2y 9m — typical timeline (35 currently pending)
Total Applications: 81 — career history (across all art units)

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 46 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 20 and 22-40 are pending. Claims 1-19 and 21 are canceled.

Response to Arguments

Applicant's arguments, see p. 6, filed 01/30/2026, with respect to the objection to Claim 27 have been fully considered and are persuasive. Therefore, the objection to Claim 27 has been withdrawn.

Applicant's arguments, see pp. 6-8, filed 01/30/2026, with respect to the rejections of Claims 20-39 under 35 U.S.C. 103 have been fully considered but are moot because Applicant's amendments to the independent claims have altered the scope of the claims and therefore necessitated the new grounds of rejection presented below. The Examiner has considered Applicant's arguments with respect to new Claim 40; however, those arguments are also moot in view of the new grounds of rejection, and the new claim is analyzed as presented below. Accordingly, THIS ACTION IS MADE FINAL.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 20, 23, 34-35, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran et al. (US 20160210785 A1) in view of Bellows et al. (US 10965929 B1) and Hosfield et al. (US 20200302688 A1).

Regarding Claim 20, Balachandreswaran teaches "A method comprising: at a device having an image sensor, a display, one or more processors, and non-transitory memory; capturing, using the image sensor, an image of a physical environment" (Balachandreswaran, Abstract and Paras. 19 and 26, teaches a head mounted display comprising a camera array in communication with a processor that executes instructions using computer readable media, i.e., a device having an image sensor, a display, one or more processors, and non-transitory memory, wherein at least one image camera captures at least one physical image stream of the physical environment, i.e., capturing an image of the physical environment using the image sensor); and "obtaining a plurality of initial depths respectively associated with a plurality of pixels of the image of the physical environment" (Balachandreswaran, Para. 36, teaches the processor assigns depth data to each pixel in the physical image stream based on the depth information, i.e., obtaining a plurality of initial depths respectively associated with the plurality of pixels of the image).

However, Balachandreswaran does not explicitly teach "wherein the plurality of initial depths includes a first set of initial depths having a first spatial distribution obtained from a first source and a second set of initial depths having a second spatial distribution, different than the first distribution, obtained from a second source different than the first source; generating a depth map for the image of the physical environment based on the first set of initial depths, the second set of initial depths, and a plurality of confidences of the plurality of initial depths; transforming, using the one or more processors, the image of the physical environment based on the depth map and a difference between a perspective of the image sensor and a perspective of a user; and displaying, on the display, the transformed image".

In an analogous field of endeavor, Bellows teaches "wherein the plurality of initial depths includes a first set of initial depths having a first spatial distribution obtained from a first source and a second set of initial depths having a second spatial distribution, different than the first distribution, obtained from a second source different than the first source" (Bellows, Col. 1 lines 38-65, teaches generating a first depth map based on time-of-flight measurements detected by a depth sensor of a head mounted device and generating a second depth map based on disparity mapping from stereo imagery detected by the stereoscopic camera system of the head mounted device, i.e., the plurality of initial depths includes two different sets of depths with different sources and spatial distributions, the first being the ToF depth sensor source and distribution and the second being the stereoscopic camera system source and distribution resulting in the second depth map); and "generating a depth map for the image of the physical environment based on the first set of initial depths, the second set of initial depths, and a plurality of confidences of the plurality of initial depths" (Bellows, FIG. 1A and Col. 1 lines 38-65, teaches blending the first depth map and the second depth map into a combined depth map based on the confidence values of the respective pixel locations in the first depth map and the second depth map, wherein the depth maps are generated from a sensor and stereoscopic camera system of the head mounted device which outputs the real scene to a video processing device, i.e., generating a depth map for the image of the physical environment based on the first set of depths, the second set of depths, and the confidences of the depths).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran by including the depths comprising two sets of depths having different spatial distributions and sources, in which the depth map is generated from the two sets of depths and the confidences of the depths, as taught by Bellows. One of ordinary skill in the art would be motivated to combine the references to employ confidence-based fusion (Bellows, Abstract, teaches the motivation of combination to be to employ confidence-based fusion for depth mapping).

However, the combination of Balachandreswaran in view of Bellows does not explicitly teach "transforming, using the one or more processors, the image of the physical environment based on the depth map and a difference between a perspective of the image sensor and a perspective of a user; and displaying, on the display, the transformed image".

In an analogous field of endeavor, Hosfield teaches "transforming, using the one or more processors, the image of the physical environment based on the depth map and a difference between a perspective of the image sensor and a perspective of a user" (Hosfield, Paras. 54-58 and 67-69, teaches each source image is distorted based on a difference between the pose of the camera that captured the source image and the pose of the virtual camera relative to a reconstruction of the subject, in which the virtual camera pose may be obtained from a pose detector in the head mounted device which tracks changes in the pose of a viewer's head, wherein generating the distorted images comprises obtaining depth data associated with each source image, and wherein portions of generated polygonal meshes for each source image are re-projected to face the virtual camera based on the pose of the camera for each source image and the pose of the virtual camera, in which a parallax shader corrects for the change in perspective of the source image when mapping to the view of the virtual camera, i.e., transforming the image based on the depth map and a difference between a perspective of the image sensor and the user's perspective); and "displaying, on the display, the transformed image" (Hosfield, Para. 86, teaches the distorted images are combined so as to generate an image of the subject from the viewpoint of the virtual camera, wherein the generated image is output for display at a head mounted device, i.e., displaying the transformed image).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran and Bellows by including the transforming of the image based on a depth map and the difference between a camera and user perspective, and the displaying of the transformed image, as taught by Hosfield. One of ordinary skill in the art would be motivated to combine the references because doing so improves the accuracy of the 3D representation (Hosfield, Para. 48, teaches the motivation of combination to be to improve the accuracy with which the subject can be represented in 3D). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Regarding Claim 23, the combination of Balachandreswaran in view of Bellows and Hosfield teaches "The method of claim 21, wherein the first source includes a laser depth sensor and the second source includes a stereo image sensor" (Hosfield, Paras. 58-60, teaches a source image corresponding to a stereoscopic image and performing a LIDAR scan of the subject from a corresponding viewpoint, i.e., the first source includes a laser depth sensor, being the LIDAR scan, and the second source includes a stereo image sensor, being the stereoscopic image of the subject). The proposed combination, as well as the motivation for combining the Balachandreswaran, Bellows, and Hosfield references, presented in the rejection of Claim 20 applies to Claim 23. Thus, the method recited in Claim 23 is met by Balachandreswaran in view of Bellows and Hosfield.

Claim 34 recites a device with elements corresponding to the steps recited in Claim 20. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Balachandreswaran, Bellows, and Hosfield references, presented in the rejection of Claim 20, apply to this claim. Finally, the combination of the Balachandreswaran, Bellows, and Hosfield references discloses a processor and a memory (for example, see Balachandreswaran, Paragraphs 19 and 26).

Regarding Claim 35, the combination of Balachandreswaran in view of Bellows and Hosfield teaches "The device of claim 34, wherein the device is a head-mounted device (HMD), the image sensor is a forward-facing image sensor, the perspective of the image sensor is from a location of the image sensor, and the perspective of the user is from a location of an eye of a user of the HMD" (Hosfield, FIG. 1 and Paras. 26-29, 32, 53-55, and 86-87, teaches the head mounted device obscures the user's view of the surrounding environment and the user is only able to see the pair of images displayed within the HMD, wherein a front-facing camera may capture images to the front of the HMD, in which the frame of the HMD system defines one or two eye display positions to be positioned in front of a respective eye of the observer, and wherein a camera pose indicating a pose of a camera relative to the subject in the scene for each image is obtained and a virtual camera pose indicating a pose of a virtual camera relative to the subject corresponding to the pose of an HMD is obtained, i.e., the device is an HMD with a forward-facing image sensor, with a perspective of the image sensor from a location of the image sensor and a perspective of the user from a location of the eye of the user of the HMD). The proposed combination, as well as the motivation for combining the Balachandreswaran, Bellows, and Hosfield references, presented in the rejection of Claim 20 applies to Claim 35. Thus, the device recited in Claim 35 is met by Balachandreswaran in view of Bellows and Hosfield.
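For context, the confidence-based depth fusion the rejection attributes to Bellows (blending a ToF depth map and a stereo-disparity depth map using per-pixel confidences) can be sketched as follows. This is a minimal illustration only; the function and parameter names are assumptions and do not come from the Bellows reference or the claims.

```python
import numpy as np

def fuse_depth_maps(depth_tof, conf_tof, depth_stereo, conf_stereo):
    """Blend two depth maps by taking, at each pixel, the depth estimate
    whose source reports the higher confidence (confidence-based fusion)."""
    use_tof = conf_tof >= conf_stereo
    fused = np.where(use_tof, depth_tof, depth_stereo)
    confidence = np.maximum(conf_tof, conf_stereo)
    return fused, confidence
```

A per-pixel argmax over confidences is only one possible blending rule; a weighted average of the two estimates would also fit the general description of fusing two sets of initial depths with their confidences.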
Claim 39 recites a computer-readable storage medium storing a program with instructions corresponding to the steps recited in Claim 20. Therefore, the recited programming instructions of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Balachandreswaran, Bellows, and Hosfield references, presented in the rejection of Claim 20, apply to this claim. Finally, the combination of the Balachandreswaran, Bellows, and Hosfield references discloses a computer readable storage medium (for example, see Balachandreswaran, Paragraphs 19 and 26).

Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, and Zatzarinni et al. (US 20200327686 A1).

Regarding Claim 22, the combination of Balachandreswaran in view of Bellows and Hosfield does not explicitly teach "The method of claim 20, wherein the first set of initial depths has a first average confidence and the second set of initial depths has a second average confidence different than the first average confidence".

In an analogous field of endeavor, Zatzarinni teaches "The method of claim 20, wherein the first set of initial depths has a first average confidence and the second set of initial depths has a second average confidence different than the first average confidence" (Zatzarinni, Para. 74, teaches a first set of confidence values corresponding to a first set of the depth values and a second set of confidence values corresponding to a second set of depth values, wherein the first set of depth values more accurately represents depths of surfaces in the scene than the second set of depth values, i.e., the first set of initial depths has a first average or expected confidence and the second set of depths has a different second average or expected confidence).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, and Hosfield by including the first and second sets of depths having corresponding first and second average confidences, as taught by Zatzarinni. One of ordinary skill in the art would be motivated to combine the references because doing so enhances confidence maps (Zatzarinni, Para. 31, teaches the motivation of combination to be to enhance confidence maps based on subsequent processing). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, and Woodhouse et al. (US 20130106852 A1).

Regarding Claim 24, the combination of Balachandreswaran in view of Bellows and Hosfield does not explicitly teach "The method of claim 20, wherein the plurality of initial depths corresponds to unmerged nodes of a quadtree".

In an analogous field of endeavor, Woodhouse teaches "The method of claim 20, wherein the plurality of initial depths corresponds to unmerged nodes of a quadtree" (Woodhouse, Paras. 26-28, teaches each node of a quadtree may have node data representative of a corresponding group of pixels in the depth image from which the quadtree is compressed, wherein each node includes data indicating an average depth value of the corresponding group of pixels, and wherein if the pixels in the group of pixels indicate a large variety of depth values the corresponding node may have child nodes added to more accurately represent the depth image, and nodes with no child nodes are termed leaf nodes, i.e., initial depths correspond to unmerged nodes of a quadtree).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, and Hosfield by including the depths corresponding to unmerged nodes of a quadtree, as taught by Woodhouse. One of ordinary skill in the art would be motivated to combine the references because doing so preserves detail with less memory (Woodhouse, Para. 30, teaches the motivation of combination to be to use less memory while still preserving detail for areas of the scene with a more pronounced depth profile). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, Woodhouse, and Scholefield et al. ("Quadtree structured image approximation for denoising and interpolation").

Regarding Claim 25, the combination of Balachandreswaran in view of Bellows, Hosfield, and Woodhouse does not explicitly teach "The method of claim 24, wherein a neighborhood of nodes of the quadtree are merged if no nodes in the neighborhood have a depth value or if depth values of merged nodes can be reconstructed within a threshold via interpolation".

In an analogous field of endeavor, Scholefield teaches "The method of claim 24, wherein a neighborhood of nodes of the quadtree are merged if no nodes in the neighborhood have a depth value or if depth values of merged nodes can be reconstructed within a threshold via interpolation" (Scholefield, FIG. 2c and Sections III-A and V, teaches allowing neighboring regions of the tree to jointly represent just one polynomial region, wherein two leaves are joined if their combined cost is less than the sum of their individual costs and the joined representation is used in place of the individual leaves for the rest of the algorithm, and wherein a modified penalty is used so that the cost of a polynomial region is increased by a factor that increases the penalty on regions with fewer known pixels, modifying the penalty in this way allowing successful interpolation, i.e., a neighborhood of nodes of the quadtree are merged if the depth values can be reconstructed via interpolation).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, Hosfield, and Woodhouse by including the merging of nodes in a neighborhood if the depth values of the merged nodes are able to be reconstructed with interpolation, as taught by Scholefield. One of ordinary skill in the art would be motivated to combine the references because doing so reduces computational complexity (Scholefield, Abstract, teaches the motivation of combination to be to reduce the computational complexity required to find a suitable subspace for each node of the quadtree). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claims 26 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, and Xu et al. ("Stereo matching: An outlier confidence approach").
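For context, the quadtree depth representation discussed for Claims 24-25 (leaf nodes covering pixel groups, subdivided only where depth varies, merged where a coarse value suffices) can be sketched as follows. This is an illustrative assumption-laden sketch: it uses a simple depth-range test as a stand-in for the interpolation-based merge criterion of Claim 25, and it assumes a power-of-two tile size; none of the names come from Woodhouse or Scholefield.

```python
import numpy as np

def build_quadtree(depth, x, y, size, threshold=0.05):
    """Recursively subdivide a square depth tile. A tile stays a single
    (merged) leaf when it has no valid depths, or when all its valid
    depths fall within a small range around their mean; otherwise it is
    split into four children (unmerged nodes)."""
    tile = depth[y:y + size, x:x + size]
    valid = tile[~np.isnan(tile)]
    if size == 1 or valid.size == 0 or np.ptp(valid) <= 2 * threshold:
        mean = float(valid.mean()) if valid.size else None
        return {"x": x, "y": y, "size": size, "depth": mean}
    half = size // 2  # assumes size is a power of two
    return {"x": x, "y": y, "size": size, "children": [
        build_quadtree(depth, x, y, half, threshold),
        build_quadtree(depth, x + half, y, half, threshold),
        build_quadtree(depth, x, y + half, half, threshold),
        build_quadtree(depth, x + half, y + half, half, threshold),
    ]}
```

A flat region collapses to one leaf, while a region containing a depth discontinuity subdivides toward single-pixel leaves, which matches the memory/detail trade-off the Woodhouse motivation describes.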
Regarding Claim 26, the combination of Balachandreswaran in view of Bellows and Hosfield does not explicitly teach "The method of claim 20, wherein generating the depth map includes, for a particular element of the depth map corresponding to a particular pixel of the image of the physical environment: determining a data-matching term based on a particular initial depth associated with the particular pixel of the image of the physical environment and a particular confidence of the particular initial depth; determining a smoothness term based on a plurality of initial depths respectively associated with a neighborhood surrounding the particular pixel and a plurality of confidences of the plurality of initial depths respectively associated with the neighborhood surrounding the particular pixel; and determining a weighted sum of the data-matching term and the smoothness term".

In an analogous field of endeavor, Xu teaches "The method of claim 20, wherein generating the depth map includes, for a particular element of the depth map corresponding to a particular pixel of the image of the physical environment: determining a data-matching term based on a particular initial depth associated with the particular pixel of the image of the physical environment and a particular confidence of the particular initial depth" (Xu, Page 777, teaches a data term as a function of the disparity value for a given pixel and the confidence for the given pixel, i.e., determining a data-matching term based on a depth associated with the pixel and a confidence of the depth); "determining a smoothness term based on a plurality of initial depths respectively associated with a neighborhood surrounding the particular pixel and a plurality of confidences of the plurality of initial depths respectively associated with the neighborhood surrounding the particular pixel" (Xu, Page 777, teaches defining a smoothness term that is constructed on the disparity maps and the consistency of disparities between frames, wherein outlier confidences are computed on pixels indicating how confident the pixel is regarded as an outlier and outlier confidence maps are constructed on the input image pair, i.e., a smoothness term based on depths associated with a neighborhood surrounding the pixel and confidences of those depths, by the spatial smoothness being considered within one disparity map and being made consistent with disparities between frames); and "determining a weighted sum of the data-matching term and the smoothness term" (Xu, Page 777, teaches adding the data term to the smoothness term, wherein the data term is a weighted sum of the confidences of the pixel with its disparity value, i.e., a weighted sum of the data-matching term and the smoothness term).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, and Hosfield by including the weighted sum of a data-matching term and a smoothness term, as taught by Xu. One of ordinary skill in the art would be motivated to combine the references because doing so robustly estimates disparities (Xu, Abstract, teaches the motivation of combination to be to robustly estimate the disparities of both the occluded and non-occluded pixels). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claim 36 recites a device with elements corresponding to the steps recited in Claim 26. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Balachandreswaran, Bellows, Hosfield, and Xu references, presented in the rejection of Claim 26, apply to this claim.
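For context, the per-pixel energy structure recited in Claim 26 (a confidence-scaled data-matching term plus a confidence-scaled smoothness term over a neighborhood, combined as a weighted sum) can be illustrated as follows. The function, the quadratic form of each term, and the weights are illustrative assumptions, not taken from the claims or the Xu paper.

```python
def pixel_cost(d, d0, conf, neigh_d0, neigh_conf, w_data=1.0, w_smooth=0.5):
    """Weighted sum of a data-matching term (how far a candidate depth d
    strays from the pixel's own initial depth d0, scaled by its confidence)
    and a smoothness term over the neighborhood's initial depths, each
    scaled by that neighbor's confidence."""
    data_term = conf * (d - d0) ** 2
    smooth_term = sum(c * (d - nd) ** 2 for nd, c in zip(neigh_d0, neigh_conf))
    return w_data * data_term + w_smooth * smooth_term
```

Choosing, per element of the depth map, the candidate depth that optimizes this weighted sum corresponds to the "updated depth" language of Claim 28.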
Finally, the combination of the Balachandreswaran, Bellows, Hosfield, and Xu references discloses a processor and a memory (for example, see Balachandreswaran, Paragraphs 19 and 26).

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, Xu, and Guillot et al. (US 20170031179 A1).

Regarding Claim 27, the combination of Balachandreswaran in view of Bellows, Hosfield, and Xu does not explicitly teach "The method of claim 26, wherein a weight of the data-matching term in the weighted sum is based on a gaze of the user". In an analogous field of endeavor, Guillot teaches "The method of claim 26, wherein a weight of the data-matching term in the weighted sum is based on a gaze of the user" (Guillot, Para. 136, teaches optimizing visual performance by defining the cost or merit function with weight values for various gaze directions and selecting higher values of weight coefficients in one or more gaze directions included within the zone of stabilized optical performance, i.e., the weight of a term is based on a gaze of the user).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, Hosfield, and Xu, wherein the weight coefficients are included in a weighted sum and the weight is applied to a data-matching term, by including the weighting of the term being based on a gaze of the user, as taught by Guillot. One of ordinary skill in the art would be motivated to combine the references because doing so optimizes visual performance (Guillot, Para. 136, teaches the motivation of combination to be to optimize visual performance). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claims 28 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, Xu, and Kim et al. (US 20120268557 A1).

Regarding Claim 28, the combination of Balachandreswaran in view of Bellows, Hosfield, and Xu does not explicitly teach "The method of claim 26, wherein generating the depth map includes, for the particular element of the depth map, determining an updated depth that maximizes the weighted sum". In an analogous field of endeavor, Kim teaches "The method of claim 26, wherein generating the depth map includes, for the particular element of the depth map, determining an updated depth that maximizes the weighted sum" (Kim, Para. 37, teaches the 3D image processing apparatus adjusting the depth map information on the basis of the meta data about the input video signal, wherein the 3D effect of the input video signal is maximized, i.e., generating the depth map for a particular element of the depth map includes determining an updated depth which maximizes the weighted sum as the 3D effect).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, Hosfield, and Xu, wherein the weighted sum drives a 3D effect of the video, by including the determination of an updated depth to maximize the 3D effect, as taught by Kim. One of ordinary skill in the art would be motivated to combine the references because doing so improves picture quality (Kim, Para. 45, teaches the motivation of combination to be to improve picture quality). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Regarding Claim 29, the combination of Balachandreswaran in view of Bellows, Hosfield, Xu, and Kim teaches "The method of claim 28, wherein the data-matching term is a Gaussian function of the updated depth" (Bellows, Col. 10 lines 9-23, teaches a blending function of two depth maps to choose the highest-confidence estimate at each point, wherein the blending function may be neighborhood-oriented with a dilation operation with Gaussian weights so that depth measurements and confidence of neighboring pixels are factored in to reduce noise anomalies in the resulting depth map, i.e., the term is a Gaussian function of the depth, being the Gaussian weighted depth measurements). The proposed combination, as well as the motivation for combining the Balachandreswaran, Bellows, and Hosfield references, presented in the rejection of Claim 20 applies to Claim 29. Thus, the method recited in Claim 29 is met by Balachandreswaran in view of Bellows, Hosfield, Xu, and Kim.

Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, Xu, Kim, and Bronstein et al. (US 20170091917 A1).

Regarding Claim 30, the combination of Balachandreswaran in view of Bellows, Hosfield, Xu, and Kim does not explicitly teach "The method of claim 29, wherein a height and/or a width of the Gaussian function is dependent on the particular confidence of the particular initial depth". In an analogous field of endeavor, Bronstein teaches "The method of claim 29, wherein a height and/or a width of the Gaussian function is dependent on the particular confidence of the particular initial depth" (Bronstein, Para. 107, teaches a reference depth value being computed using an average of central pixels with the high confidence level, wherein a width of the Gaussian function is based on the reference depth value, i.e., the width of the Gaussian function is dependent on the confidence of the depth by the reference depth being selected for high confidence).
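For context, a data-matching term shaped as a Gaussian of the updated depth, with height and width tied to the pixel's confidence as Claims 29-30 recite, might look like the sketch below. The scaling choices (height proportional to confidence, width narrowing as confidence grows) are one plausible reading offered for illustration only; the claims and references do not specify these formulas.

```python
import math

def gaussian_data_term(d, d0, conf, base_sigma=0.1):
    """Gaussian data-matching term that peaks when the updated depth d
    equals the initial depth d0. Height scales with confidence, and width
    narrows as confidence grows, so low-confidence pixels constrain the
    updated depth only weakly."""
    sigma = base_sigma / max(conf, 1e-6)  # higher confidence -> narrower peak
    return conf * math.exp(-((d - d0) ** 2) / (2 * sigma ** 2))
```

Under this form, maximizing the term (per Claim 28's language) pulls the updated depth toward the initial depth in proportion to how trustworthy that initial measurement is.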
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, Hosfield, Xu, and Kim by including the width of the Gaussian function being dependent on the confidence of the depth, as taught by Bronstein. One of ordinary skill in the art would be motivated to combine the references because doing so accounts for noise structure (Bronstein, Para. 2, teaches the motivation of combination to be to account for particular structures of noise). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claim 31 is rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, Xu, and Huang (US 11164284 B1).

Regarding Claim 31, the combination of Balachandreswaran in view of Bellows, Hosfield, and Xu does not explicitly teach "The method of claim 26, wherein the neighborhood surrounding the particular pixel includes the particular pixel and a nearest pixel in each of four directions". In an analogous field of endeavor, Huang teaches "The method of claim 26, wherein the neighborhood surrounding the particular pixel includes the particular pixel and a nearest pixel in each of four directions" (Huang, FIG. 7 and Col. 6 lines 41-64, teaches finding two neighboring pixels on each of the horizontal direction and the vertical direction for the intermediate pixel, i.e., the neighborhood surrounding the particular pixel includes the particular pixel, being the intermediate pixel, and a nearest pixel in each of four directions, being the neighboring pixels in both horizontal and vertical directions).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, Hosfield, and Xu by including the neighborhood including the pixel and a nearest pixel in each of four directions, as taught by Huang. One of ordinary skill in the art would be motivated to combine the references since doing so rapidly performs image warping (Huang, Col. 1 lines 5-10, teaches the motivation of the combination to be rapidly performing an image warping process). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claims 32 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, Xu, and Petrovskaya et al. (US 20160148433 A1).

Regarding Claim 32, the combination of Balachandreswaran in view of Bellows, Hosfield, and Xu does not explicitly teach "The method of claim 26, wherein the smoothness term is based on a weighted average of the plurality of initial depths respectively associated with a neighborhood surrounding the particular pixel."

In an analogous field of endeavor, Petrovskaya teaches "The method of claim 26, wherein the smoothness term is based on a weighted average of the plurality of initial depths respectively associated with a neighborhood surrounding the particular pixel" (Petrovskaya, Para. 271, teaches that the system may seek to compute smooth normals, wherein the system may set the smoothed depth to be a weighted average of depth values in a certain window around a pixel, i.e., the smoothness term is based on a weighted average of depths associated with a neighborhood surrounding the particular pixel, being the pixel window).
It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, Hosfield, and Xu by including the smoothness term being based on a weighted average of depths associated with the surrounding neighborhood of pixels, as taught by Petrovskaya. One of ordinary skill in the art would be motivated to combine the references since doing so improves alignment of depth (Petrovskaya, Para. 104, teaches the motivation of the combination to be maximizing or improving the alignment of the depth data). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claim 37 recites a device with elements corresponding to the steps recited in Claim 32. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim. Additionally, the rationale and motivation to combine the Balachandreswaran, Bellows, Hosfield, Xu, and Petrovskaya references, presented in the rejection of Claim 32, apply to this claim. Finally, the combination of the Balachandreswaran, Bellows, Hosfield, Xu, and Petrovskaya references discloses a processor and a memory (for example, see Balachandreswaran, Paragraphs 19 and 26).

Claims 33 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, Xu, Petrovskaya, and Nobayashi (US 20170223334 A1).

Regarding Claim 33, the combination of Balachandreswaran in view of Bellows, Hosfield, Xu, and Petrovskaya does not explicitly teach "The method of claim 32, wherein a weighting of the weighted average is based on the plurality of confidences of the plurality of initial depths respectively associated with the neighborhood surrounding the particular pixel."
In an analogous field of endeavor, Nobayashi teaches "The method of claim 32, wherein a weighting of the weighted average is based on the plurality of confidences of the plurality of initial depths respectively associated with the neighborhood surrounding the particular pixel" (Nobayashi, Paras. 63 and 65, teaches that the weighted mean value of the depth values of the peripheral pixels of the corrected target pixel is regarded as the corrected depth value, wherein the corrected object depth is calculated by weighted mean processing using the first confidence, which indicates the reliability of the value of the object depth, and wherein the weight coefficient of each pixel is set so that a larger value is assigned as the reliability indicated by the first confidence is higher and as the depth value of the pixel is closer to the depth value of the target point, i.e., the weighting of the weighted average is based on the confidences of the depths associated with the neighborhood of pixels surrounding the particular pixel).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, Hosfield, Xu, and Petrovskaya by including the weighting of the weighted average being based on the confidences of the depths associated with the pixel neighborhood, as taught by Nobayashi. One of ordinary skill in the art would be motivated to combine the references since doing so improves depth value accuracy (Nobayashi, Para. 78, teaches the motivation of the combination to be improving the accuracy of the depth values). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Claim 38 recites a device with elements corresponding to the steps recited in Claim 33. Therefore, the recited elements of this claim are mapped to the proposed combination in the same manner as the corresponding steps in its corresponding method claim.
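Claims 32-33, as mapped onto Petrovskaya and Nobayashi, describe a smoothed or corrected depth computed as a confidence-weighted average of depths in a window around a pixel. A minimal sketch of that computation follows; all names are illustrative and not drawn from either reference.

```python
import numpy as np

def corrected_depth(depth: np.ndarray, conf: np.ndarray, i: int, j: int,
                    radius: int = 1) -> float:
    """Confidence-weighted mean of depths in a (2*radius+1)^2 window around
    pixel (i, j): higher-confidence neighbors contribute more to the result.
    The window is clipped at the image border."""
    h, w = depth.shape
    r0, r1 = max(i - radius, 0), min(i + radius + 1, h)
    c0, c1 = max(j - radius, 0), min(j + radius + 1, w)
    d = depth[r0:r1, c0:c1].astype(float)
    wgt = conf[r0:r1, c0:c1].astype(float)
    # Small epsilon guards against an all-zero-confidence window.
    return float((wgt * d).sum() / (wgt.sum() + 1e-12))
```

With a zero confidence at the center pixel, the corrected value is driven entirely by the neighbors, which is the outlier-suppression behavior the Nobayashi citation describes.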
Additionally, the rationale and motivation to combine the Balachandreswaran, Bellows, Hosfield, Xu, Petrovskaya, and Nobayashi references, presented in the rejection of Claim 33, apply to this claim. Finally, the combination of the Balachandreswaran, Bellows, Hosfield, Xu, Petrovskaya, and Nobayashi references discloses a processor and a memory (for example, see Balachandreswaran, Paragraphs 19 and 26).

Claim 40 is rejected under 35 U.S.C. 103 as being unpatentable over Balachandreswaran in view of Bellows, Hosfield, Lim (US 20170272724 A1), and Ye et al. (US 20200273190 A1).

Regarding Claim 40, the combination of Balachandreswaran in view of Bellows and Hosfield does not explicitly teach "The method of claim 20, wherein the depth map includes depth values for the plurality of initial pixels and one or more additional pixels, wherein a particular depth value for a particular additional pixel is generated based on a particular first depth of the first set of initial depths, a particular second depth of the second set of initial depths, and a particular confidence of the plurality of confidences."

In an analogous field of endeavor, Lim teaches "The method of claim 20, wherein the depth map includes depth values for the plurality of initial pixels and one or more additional pixels" (Lim, Abstract and Para. 7, teaches generating a dense depth map from a sparse depth map, where points are added to the sparse depth map such that the dense depth map has depth consistency, by predicting and acquiring a depth value for each position on an image plane using color information of an original image and mesh information thereof from a sparse depth map acquired by projecting points existing in three-dimensional space onto a two-dimensional image plane, i.e., the depth map, being the dense depth map, includes depth values for the initial pixels from the sparse depth map and the additional pixels, being the points added to the sparse depth map to make the dense depth map).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran, Bellows, and Hosfield by including the depth map including depth values of initial pixels and additional pixels, as taught by Lim. One of ordinary skill in the art would be motivated to combine the references since doing so generates an accurate dense depth map (Lim, Para. 6, teaches the motivation of the combination to be improving the accuracy of a point cloud by generating an accurate, dense depth map with depth consistency).

However, the combination of Balachandreswaran in view of Bellows, Hosfield, and Lim does not explicitly teach "wherein a particular depth value for a particular additional pixel is generated based on a particular first depth of the first set of initial depths, a particular second depth of the second set of initial depths, and a particular confidence of the plurality of confidences."
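The limitation at issue, a depth value for an additional pixel generated from a first depth, a second depth, and a confidence, amounts to a per-pixel, confidence-weighted fusion of two depth sources. A minimal sketch under that reading follows; it is illustrative only, not code from Lim, Ye, or any other cited reference.

```python
import numpy as np

def fuse_pixel(d_first: float, d_second: float, confidence: float) -> float:
    """Interpolate between two depth estimates for one pixel, using the
    confidence (in [0, 1]) assigned to the first source."""
    return confidence * d_first + (1.0 - confidence) * d_second

def densify(sparse: np.ndarray, predicted: np.ndarray, conf: np.ndarray) -> np.ndarray:
    """Fill in a depth map from two sources: where the sparse source has no
    value (NaN), fall back to the predicted map; elsewhere, fuse the two
    sources by the per-pixel confidence in the sparse source."""
    fused = conf * np.nan_to_num(sparse) + (1.0 - conf) * predicted
    return np.where(np.isnan(sparse), predicted, fused)
```

Usage: `fuse_pixel(2.0, 4.0, 0.75)` weights the first source three-to-one, and `densify` applies the same idea map-wide while filling gaps from the second source.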
In an analogous field of endeavor, Ye teaches "wherein a particular depth value for a particular additional pixel is generated based on a particular first depth of the first set of initial depths, a particular second depth of the second set of initial depths, and a particular confidence of the plurality of confidences" (Ye, Claims 1-2, teaches that sparse depth maps and low-resolution depth maps predicted by the CNN are generated, in which the depth sources are fused and the corresponding confidence map is computed for every key frame, wherein dense depth maps are regressed and the dense 3D scene is reconstructed, and wherein the depth reconstruction uses the computed confidence map H for the fused depth map according to the different depth sources, in which H represents the depth accuracy of different pixels, i.e., depth values of additional pixels for the dense depth map are determined based on the different depth sources, being the sparse depth maps and the low-resolution CNN depth maps, and the confidence value of a pixel).

It would have been obvious to one having ordinary skill in the art before the effective filing date to modify the invention of Balachandreswaran in view of Bellows, Hosfield, and Lim by including the depth values of additional pixels being generated based on a depth from the first set of depths, a depth from the second set of depths, and a confidence from the confidence values, as taught by Ye. One of ordinary skill in the art would be motivated to combine the references since doing so improves running speed (Ye, Abstract, teaches the motivation of the combination to be improving the running speed of the algorithm and ensuring real-time dense 3D scene reconstruction). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW STEVEN BUDISALICH, whose telephone number is (703) 756-5568. The examiner can normally be reached Monday-Friday, 8:30am-5:00pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amandeep Saini, can be reached at (571) 272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ANDREW S BUDISALICH/
Examiner, Art Unit 2662

/AMANDEEP SAINI/
Supervisory Patent Examiner, Art Unit 2662

Prosecution Timeline

Oct 11, 2023: Application Filed
Oct 11, 2023: Response after Non-Final Action
Aug 12, 2024: Response after Non-Final Action
Nov 06, 2025: Non-Final Rejection (§103)
Jan 30, 2026: Response Filed
Jan 30, 2026: Examiner Interview Summary
Jan 30, 2026: Applicant Interview (Telephonic)
Mar 04, 2026: Final Rejection (§103, current)

Precedent Cases

Applications with similar technology granted by the same examiner

Patent 12602820: METHOD AND APPARATUS WITH ATTENTION-BASED OBJECT ANALYSIS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597106: METHOD AND APPARATUS FOR IDENTIFYING DEFECT GRADE OF BAD PICTURE, AND STORAGE MEDIUM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592078: VIDEO MONITORING DEVICE, VIDEO MONITORING SYSTEM, VIDEO MONITORING METHOD, AND STORAGE MEDIUM STORING VIDEO MONITORING PROGRAM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586232: METHOD FOR OBJECT DETECTION USING CROPPED IMAGES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12567151: Microscopy System and Method for Instance Segmentation (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
78%
Grant Probability
87%
With Interview (+8.9%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 46 resolved cases by this examiner. Grant probability derived from career allow rate.
