Prosecution Insights
Last updated: April 19, 2026
Application No. 18/532,800

MODIFYING DEPTH MAPS

Non-Final OA §103
Filed: Dec 07, 2023
Examiner: DOTTIN, DARRYL V
Art Unit: 2683
Tech Center: 2600 — Communications
Assignee: Qualcomm Incorporated
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 1m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 79%, above average (411 granted / 521 resolved; +16.9% vs TC avg)
Interview Lift: +13.3% (moderate lift; based on resolved cases with interview)
Avg Prosecution: 2y 1m (fast prosecutor; 20 currently pending)
Total Applications: 541 across all art units (career history)
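The headline numbers above are simple arithmetic on the examiner's career counts; a quick sketch, with purely illustrative variable names (it assumes the "with interview" figure is the career allow rate plus the interview lift, in percentage points):

```python
# Reproducing the dashboard's headline examiner statistics from the raw
# counts shown above. The "+ interview_lift" step is an assumption about
# how the 92% figure is derived, not a documented formula.
granted = 411          # from "411 granted / 521 resolved"
resolved = 521
interview_lift = 13.3  # percentage points, from "Interview Lift"

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.0f}%")                   # 79%
print(f"With interview:    {allow_rate + interview_lift:.0f}%")  # 92%
```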

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 49.5% (+9.5% vs TC avg)
§102: 29.1% (-10.9% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Comparisons are against Tech Center average estimates. Based on career data from 521 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/07/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Status of Claims

Claims 1-20 are pending in this application.

Oath/Declaration

The receipt of the Oath/Declaration is acknowledged.

Drawings

The receipt of the Drawings is acknowledged.

Allowable Subject Matter

6. Claims 2, 7-8 and 17-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

7. The following is a statement of reasons for the indication of allowable subject matter:

Regarding Claim 2: None of the prior art searched, cited, or of record discloses or suggests the apparatus of claim 1, wherein, to modify the depth map based on the plurality of section confidences, the at least one processor is configured to: modify low-confidence depth sections, each of the low-confidence depth sections having a respective section confidence less than a section-confidence threshold; and skip modifying of high-confidence depth sections, each of the high-confidence depth sections having a respective section confidence greater than the section-confidence threshold.

Regarding Claim 7: None of the prior art searched, cited, or of record discloses or suggests the apparatus of claim 6, wherein: the depth map comprises a two-dimensional array of depth values; the one or more depth holes comprise respective points in the two-dimensional array that lack respective depth values; the neighboring depth values comprise depth values in the two-dimensional array that are adjacent to one or more respective depth holes; and, to generate the filling depth values for the one or more depth holes in the depth map, the at least one processor is configured to determine a respective filling depth value for each depth hole based on respective neighboring depth values of each depth hole.

Regarding Claim 8: None of the prior art searched, cited, or of record discloses or suggests the apparatus of claim 7, wherein the filling depth values are generated by applying a two-dimensional filter to the neighboring depth values.

Regarding Claim 17: None of the prior art searched, cited, or of record discloses or suggests the method of claim 16, wherein: the depth map comprises a two-dimensional array of depth values; the one or more depth holes comprise respective points in the two-dimensional array that lack respective depth values; the neighboring depth values comprise depth values in the two-dimensional array that are adjacent to one or more respective depth holes; and, to generate the filling depth values for the one or more depth holes in the depth map, the at least one processor is configured to determine a respective filling depth value for each depth hole based on respective neighboring depth values of each depth hole.
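The hole-filling scheme indicated as allowable in claims 7-8 and 17-18 (a filling depth value for each depth hole, derived from neighboring depth values via a two-dimensional filter) can be sketched as follows. This is a minimal illustration, not the application's implementation: the 3x3 mean filter and the use of 0.0 as the hole marker are our assumptions.

```python
import numpy as np

def fill_depth_holes(depth: np.ndarray, hole_value: float = 0.0) -> np.ndarray:
    """Fill each depth hole with a filling depth value computed from the
    valid neighboring depth values in a 3x3 window (a simple 2-D filter)."""
    out = depth.copy()
    for y, x in np.argwhere(depth == hole_value):
        # Clip the 3x3 window to the array bounds.
        window = depth[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        valid = window[window != hole_value]  # neighboring depth values
        if valid.size:
            out[y, x] = valid.mean()
    return out
```

For example, a single hole surrounded by depth 1.0 is filled with 1.0, while valid pixels pass through unchanged.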
Regarding Claim 18: None of the prior art searched, cited, or of record discloses or suggests the method of claim 17, wherein the filling depth values are generated by applying a two-dimensional filter to the neighboring depth values.

Claim Rejections - 35 USC § 103

8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

9. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

10. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

11. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

12.
Claims 1, 3-6, 9-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Katz (US PG. Pub. 2012/0056982 A1) in view of Dmitriev (US PG. Pub. 2021/0390720 A1).

Referring to Claim 1, Katz teaches an apparatus for modifying depth maps (See Katz, Fig. 1, Sect. [0038], depth camera system 20 of the motion capture system 10), the apparatus comprising: at least one memory (See Katz, Fig. 2, Memory 31); and at least one processor (See Katz, Fig. 2, Processor 32) coupled to the at least one memory (See Katz, Fig. 1, Sect. [0044] lines 1-2, the processor 32 accesses memory 31 to use software 33 which derives a structured light depth map) and configured to: obtain a depth map comprising a plurality of depth values (See Katz, Sect. [0075] lines 9-10, a depth map of the captured frame 510 includes a set of depth values for the captured frame 510); and obtain a plurality of confidence values comprising a respective confidence value for each depth value of the plurality of depth values (See Katz, Sect. [0096], a confidence measure is obtained based on a measure of noise in the depth value, wherein a weight can be provided based on the confidence measure, such that a depth value with a higher confidence measure is assigned a higher weight. In one approach, an initial confidence measure is assigned to each pixel and the confidence measure is increased for each new frame in which the depth value is the same or close to the same, within a tolerance, based on the assumption that the depth of an object will not change quickly from frame to frame).

Katz fails to teach: divide the depth map into a plurality of depth sections; determine, based on the plurality of confidence values, a plurality of section confidences comprising a respective section confidence for each of the plurality of depth sections; and modify the depth map based on the plurality of section confidences.

However, Dmitriev teaches: divide the depth map into a plurality of depth sections (See Dmitriev, Sect. [0071] lines 1-5, dividing, by the partitioning module 104, the pair of images into a plurality of sections according to the segmentation map. For each given section of the plurality of sections, the disparity map generator 108 can be configured to calculate a disparity map for the given section); determine, based on the plurality of confidence values, a plurality of section confidences comprising a respective section confidence for each of the plurality of depth sections (See Dmitriev, Sect. [0138], the generated depth map can be further processed so as to provide an optimized depth output. By way of example, a confidence mask corresponding to the depth map can be generated indicating a confidence level of the depth estimation for each pixel/value in the depth map. For pixels in the depth map that are with low confidence, an optimization process can be performed so as to replace or fix such pixels using pixel data with high confidence (e.g., neighboring pixels)); and modify the depth map based on the plurality of section confidences (See Dmitriev, Sect. [0071] lines 5-15, the depth map generator 112 can be configured to compute a depth map for the given section based on the disparity map. The depth map for each given section is usable to be combined to generate a depth map for the pair of images. The disparity map comprises disparity values, each being indicative of difference of location between a respective pixel in the given section in the first image and a matching pixel thereof in the second image. The matching pixel can be searched in a range defined within the same segment to which the respective pixel belongs).
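Read together, the claimed flow (divide the depth map into sections, score each section's confidence, modify only where confidence is low) might look like the following sketch. The square section size, the mean as the section-confidence statistic, and section-mean replacement as the "modification" are illustrative assumptions, not details from Katz, Dmitriev, or the application:

```python
import numpy as np

def modify_by_section_confidence(depth, conf, section=4, threshold=0.5):
    """Divide the depth map into sections, compute a per-section confidence
    from the per-pixel confidence values, and modify (here: flatten to the
    section mean) only the low-confidence sections."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y in range(0, h, section):
        for x in range(0, w, section):
            sl = (slice(y, y + section), slice(x, x + section))
            if conf[sl].mean() < threshold:
                out[sl] = out[sl].mean()  # low confidence: modify
            # high confidence: skip modifying (the claim 2 distinction)
    return out
```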
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate dividing the depth map into a plurality of depth sections; determining, based on the plurality of confidence values, a plurality of section confidences comprising a respective section confidence for each of the plurality of depth sections; and modifying the depth map based on the plurality of section confidences. The motivation for doing so would have been to provide a computerized method of depth map generation for: i) calculating a disparity map for the given section, the disparity map comprising disparity values each being indicative of difference of location between a respective pixel in the given section in the first image and a matching pixel thereof in the second image, wherein the matching pixel is searched in a range defined within the same segment that the respective pixel belongs to; and ii) computing a depth map for the given section based on the disparity map, wherein the depth map for each given section is usable to be combined to generate a depth map for the pair of images (See Sect. [0006] of the Dmitriev reference). Therefore, it would have been obvious to combine Katz in view of Dmitriev to obtain the invention as specified in claim 1.

Referring to Claim 3, the combination of Katz in view of Dmitriev teaches the apparatus of claim 1 (See Katz, Fig. 1, Sect. [0038], depth camera system 20 of the motion capture system 10), wherein, to modify the depth map, the at least one processor is configured to apply a two-dimensional filter to depth values of the depth map (See Katz, Sect. [0038] lines 3-7, the depth map may comprise a two-dimensional (2-D) pixel area of the captured scene, where each pixel in the 2-D pixel area has an associated depth value which represents a linear distance from the imaging component 22 to the object).

Referring to Claim 4, the combination of Katz in view of Dmitriev teaches the apparatus of claim 1 (See Katz, Fig. 1, Sect. [0038], depth camera system 20 of the motion capture system 10), wherein, to modify the depth map, the at least one processor is configured to apply a weighted-averaging filter to depth values of the depth map (See Katz, Sect. [0099], a weight can be provided based on an accuracy measure, such that a depth value with a higher accuracy measure is assigned a higher weight. A weighted average can be calculated using the formula Wi=exp(-accuracy_i), where accuracy_i is an accuracy measure, and the averaged 3D point is Pavg=sum(Wi*Pi)/sum(Wi). Then, using these weights, point samples that are close in 3-D might be merged using a weighted average. For example, based on the spatial resolution and the baseline distances between the sensors and the illuminator, and between the sensors, we can assign an accuracy measure for each depth sample).

Referring to Claim 5, the combination of Katz in view of Dmitriev teaches the apparatus of claim 1 (See Katz, Fig. 1, Sect. [0038], depth camera system 20 of the motion capture system 10), wherein, to modify the depth map, the at least one processor is configured to apply a smoothing filter to depth values of the depth map (See Katz, Sect. [0123], the depth map may be downsampled to a lower processing resolution, wherein one or more high-variance and/or noisy depth values may be removed and/or smoothed from the depth image so that it can be more easily used and processed with less computing overhead).

Referring to Claim 6, the combination of Katz in view of Dmitriev teaches the apparatus of claim 1 (See Katz, Fig. 1, Sect.
[0038], depth camera system 20 of the motion capture system 10), wherein, to modify the depth map, the at least one processor is configured to generate filling depth values for one or more depth holes in the depth map based on neighboring depth values (See Katz, Fig. 9, Sect. [0123] lines 5-10, depth holes or portions of missing and/or removed depth information may be filled in and/or reconstructed; and/or any other suitable processing may be performed on the received depth information such that the depth information may be used to generate a model such as a skeletal model (see FIG. 9)).

Referring to Claim 9, the combination of Katz in view of Dmitriev teaches the apparatus of claim 1 (See Katz, Fig. 1, Sect. [0038], depth camera system 20 of the motion capture system 10), wherein a section confidence of a depth section is determined based on a statistical measure of confidence values of depth values of the depth section (See Katz, Sect. [0095], a depth value obtained from stereoscopic matching of an image from the sensor S1 to an image from the sensor S2 based on the distance BL1+BL2 in FIG. 6D. In this case, we can assign w1=BL1/(BL1+BL2+BL1+BL2) to a depth value from sensor S1, a weight of w2=BL2/(BL1+BL2+BL1+BL2) to a depth value from sensor S2, and a weight of w3=(BL1+BL2)/(BL1+BL2+BL1+BL2) to a depth value obtained from stereoscopic matching from S1 to S2. To illustrate, if we assume BL1=1 and BL2=2 distance units, w1=1/6, w2=2/6 and w3=3/6. In a further augmentation, a depth value is obtained from stereoscopic matching of an image from the sensor S2 to an image from the sensor S1 in FIG. 6D. In this case, we can assign w1=BL1/(BL1+BL2+BL1+BL2+BL1+BL2) to a depth value from sensor S1, a weight of w2=BL2/(BL1+BL2+BL1+BL2+BL1+BL2) to a depth value from sensor S2, a weight of w3=(BL1+BL2)/(BL1+BL2+BL1+BL2+BL1+BL2) to a depth value obtained from stereoscopic matching from S1 to S2, and a weight of w4=(BL1+BL2)/(BL1+BL2+BL1+BL2+BL1+BL2) to a depth value obtained from stereoscopic matching from S2 to S1. To illustrate, if we assume BL1=1 and BL2=2 distance units, w1=1/9, w2=2/9, w3=3/9 and w4=3/9).

Referring to Claim 10, the combination of Katz in view of Dmitriev teaches the apparatus of claim 9 (See Katz, Fig. 1, Sect. [0038], depth camera system 20 of the motion capture system 10), wherein the statistical measure is based on at least one of: an average of the confidence values (See Katz, Sect. [0100] lines 12-15, by calculating a weighted average of the 3D locations of the points. The weights can be defined by the confidence of the measurements, where confidence measures are based on the correlation score); or a minimum of the confidence values (See Katz, Sect. [0096] lines 15-19, large changes in the depth values can be indicative of a greater amount of noise, resulting in a lower confidence measure).

Referring to Claim 11, the combination of Katz in view of Dmitriev teaches the apparatus of claim 1 (See Katz, Fig. 1, Sect. [0038], depth camera system 20 of the motion capture system 10), wherein the at least one processor is further configured to determine a size for each of the plurality of depth sections based on the plurality of confidence values (See Katz, Sect. [0109], at step 760, for each pixel in the first frame of pixel data, we determine a corresponding point in the illumination frame, and at step 761, we provide a corresponding first structured light depth map. Decision step 762 determines if a refinement of a depth value is indicated. A criterion can be evaluated for each pixel in the first frame of pixel data and, in one approach, can indicate whether refinement of the depth value associated with the pixel is desirable. In one approach, refinement is desirable when the associated depth value is unavailable or unreliable. Unreliability can be based on an accuracy measure and/or confidence measure, for instance. If the confidence measure exceeds a threshold confidence measure, the depth value may be deemed to be reliable. Or, if the accuracy measure exceeds a threshold accuracy measure, the depth value may be deemed to be reliable. In another approach, the confidence measure and the accuracy measure must both exceed respective threshold levels for the depth value to be deemed to be reliable).

Referring to Claim 12, the combination of Katz in view of Dmitriev teaches the apparatus of claim 1 (See Katz, Fig. 1, Sect. [0038], depth camera system 20 of the motion capture system 10), wherein the at least one processor is further configured to determine a size for each of the plurality of depth sections based on a count of low-confidence depth values, wherein the low-confidence depth values correspond to respective confidence values that are less than a confidence threshold (See Katz, Sect. [0094] lines 4-17, a frame size is determined based on a weight to the depth values that is assigned based on the baseline distance between the sensor and the illuminator, so that a lower weight, indicating a lower confidence, is assigned when the baseline distance is less. For each pixel, the depth values are averaged among the two or more depth maps. The unweighted average of a depth value d1 for an ith pixel in the first frame and a depth value d2 for an ith pixel in the second frame is (d1+d2)/2. An example weighted average of a depth value d1 of weight w1 for an ith pixel in the first frame and a depth value d2 of weight w2 for an ith pixel in the second frame is (w1*d1+w2*d2)/(w1+w2)).
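The two weighted-average formulas quoted from Katz ([0099] and [0094]) are compact enough to restate as code; this sketch covers only the quoted arithmetic, with function and variable names of our own choosing:

```python
import math

def merge_points(points, accuracies):
    """Weighted merge of nearby 3-D point samples per Katz [0099]:
    Wi = exp(-accuracy_i), Pavg = sum(Wi * Pi) / sum(Wi)."""
    ws = [math.exp(-a) for a in accuracies]
    total = sum(ws)
    return tuple(
        sum(w * p[d] for w, p in zip(ws, points)) / total
        for d in range(len(points[0]))
    )

def fuse_depths(d1, d2, w1=1.0, w2=1.0):
    """Per-pixel fusion of two depth values per Katz [0094]:
    (w1*d1 + w2*d2) / (w1 + w2); equal weights give (d1 + d2) / 2."""
    return (w1 * d1 + w2 * d2) / (w1 + w2)
```

With equal accuracies, merge_points reduces to the plain centroid; fuse_depths with w1 = w2 reduces to the unweighted average.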
Referring to Claim 13, arguments analogous to claim 1 are applicable herein. The structural elements of "An apparatus for modifying depth maps" in claim 1 perform all of the operations of "A method for modifying depth maps" in claim 13. Thus, "A method for modifying depth maps" in claim 13 is rejected for reasons explicitly taught in the rejection of claim 1.

Referring to Claim 14, arguments analogous to claim 2 are applicable herein. The structural elements of "The apparatus" in claim 2 perform all of the operations of "The method" in claim 14. Thus, "The method" in claim 14 is rejected for reasons explicitly taught in the rejection of claim 2.

Referring to Claim 15, arguments analogous to claim 5 are applicable herein. The structural elements of "The apparatus" in claim 5 perform all of the operations of "The method" in claim 15. Thus, "The method" in claim 15 is rejected for reasons explicitly taught in the rejection of claim 5.

Referring to Claim 16, arguments analogous to claim 6 are applicable herein. The structural elements of "The apparatus" in claim 6 perform all of the operations of "The method" in claim 16. Thus, "The method" in claim 16 is rejected for reasons explicitly taught in the rejection of claim 6.

Referring to Claim 19, arguments analogous to claim 11 are applicable herein. The structural elements of "The apparatus" in claim 11 perform all of the operations of "The method" in claim 19. Thus, "The method" in claim 19 is rejected for reasons explicitly taught in the rejection of claim 11.

Referring to Claim 20, arguments analogous to claim 12 are applicable herein. The structural elements of "The apparatus" in claim 12 perform all of the operations of "The method" in claim 20. Thus, "The method" in claim 20 is rejected for reasons explicitly taught in the rejection of claim 12.

Cited Art

13. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Urella et al. (US PG. PUB. No.
2023/0290070 A1) discloses embodiments of devices and techniques for obtaining a three-dimensional (3D) representation of an area. In one embodiment, a two-dimensional (2D) frame of an array of pixels of the area is obtained, along with a depth frame of the area. The depth frame includes an array of depth estimation values, each corresponding to one or more corresponding pixels in the array of pixels. Furthermore, an array of confidence scores is generated; each confidence score corresponds to one or more corresponding depth estimation values and indicates a confidence level that those depth estimation values are accurate.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARRYL V DOTTIN, whose telephone number is (571) 270-5471. The examiner can normally be reached M-F, 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abderrahim Merouan, can be reached at 571-270-5254. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DARRYL V DOTTIN/
Primary Examiner, Art Unit 2683

Prosecution Timeline

Dec 07, 2023
Application Filed
Jan 13, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602618
ARTIFICIAL VISION PARAMETER LEARNING AND AUTOMATING METHOD FOR IMPROVING VISUAL PROSTHETIC SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12602425
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
2y 5m to grant Granted Apr 14, 2026
Patent 12586181
FUNCTIONAL IMAGING FEATURES FROM COMPUTED TOMOGRAPHY IMAGES
2y 5m to grant Granted Mar 24, 2026
Patent 12586150
EFFICIENT BI-DIRECTIONAL IMAGE SCALING
2y 5m to grant Granted Mar 24, 2026
Patent 12585416
IMAGE PROCESSING APPARATUS, CONTROL METHOD OF IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 92% (+13.3%)
Median Time to Grant: 2y 1m
PTA Risk: Low
Based on 521 resolved cases by this examiner. Grant probability derived from career allow rate.
