Prosecution Insights
Last updated: April 19, 2026
Application No. 18/233,611

ELECTRONIC DEVICE FOR GENERATING DEPTH MAP AND OPERATION METHOD THEREOF

Status: Final Rejection (§103)
Filed: Aug 14, 2023
Examiner: RHIM, WOO CHUL
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (112 granted / 140 resolved) — above average, +18.0% vs TC avg
Interview Lift: +21.4% for resolved cases with interview
Avg Prosecution: 2y 11m (28 applications currently pending)
Total Applications: 168 across all art units
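The headline figures above are simple ratios of the raw counts quoted on this page. A minimal sketch reproducing them; the formulas are the obvious ones and are assumptions about how the dashboard derives its numbers (the 62% TC baseline is backed out of the stated +18.0% delta, not given directly):

```python
# Reproducing the headline examiner statistics from the raw counts above.
# Counts come from this page; the formulas are assumptions about how the
# dashboard derives its figures.
granted, resolved = 112, 140
tc_avg_allow = 0.62  # implied by the stated "+18.0% vs TC avg" (assumption)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")          # 80%
print(f"vs TC avg: {allow_rate - tc_avg_allow:+.1%}")  # +18.0%
```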

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 140 resolved cases
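The four "vs TC avg" deltas above are mutually consistent: each one backs out to the same 40.0% Tech Center baseline. A quick check (the examiner's rates are from the chart; the single 40.0% baseline is inferred from the deltas, not stated on the page):

```python
# Examiner's statute-specific rates (from the chart above) and the Tech
# Center average backed out of the stated deltas -- all four imply 40.0%.
examiner = {"101": 7.4, "103": 47.1, "102": 23.2, "112": 19.0}
tc_avg = 40.0  # inferred baseline (assumption)

for statute, rate in examiner.items():
    print(f"§{statute}: {rate - tc_avg:+.1f}% vs TC avg")
```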

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/21/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendments

The submission dated 01/23/2026 amends claims 1, 6, 11, 15, and 20 and cancels claim 5. Claims 1-4 and 6-20 are pending. In view of the amendment to claim 11, 35 U.S.C. 112(f) is no longer invoked.

Response to Arguments

On pages 13-15 of the submission, the applicant argues that Price’s ambient images are not connected to the recited illumination images. The examiner disagrees because, similar to the illumination images, Price’s ambient images also correspond to the structured light patterns (see, e.g., pars. 62-64, 71-83 and 97-99 and FIG. 8 of Price, which teach obtaining ambient light images corresponding to the structured light patterns). As such, when combined with the teaching of Fang, which teaches obtaining ambient images by capturing a light-emitting device, the combined teaching of Price and Fang reads on “obtaining a plurality of illumination images respectively corresponding to the plurality of patterns by photographing a light-emitting device configured to provide the light.”

On pages 15-16 of the submission, the applicant argues that Fang as applied does not teach obtaining a depth map using images of an object and images of a light source. The examiner disagrees because the current rejection does not rely on Fang to teach that aspect. Instead, as provided below and previously in the last office action, the rejection relies on the combination of Price and Fang to teach obtaining a depth map using images of an object and images of a light source.
Price as applied teaches generating the depth map based on the preprocessed input data including the captured images and the ambient images (see, e.g., pars. 83-92 and 97-101 and FIGS. 6-8 of Price, which teach processing the ambient light images and the pattern images to determine the pixel signatures for the multiple light patterns and generating a depth map based on the determined pixel signatures and the disparity values), and Fang as applied supplements and modifies the above teaching of Price by teaching that the ambient images may be obtained by capturing a light-emitting device (see, e.g., lines 1-8 on page 4 and lines 5-11 and 31-37 on page 5 of Fang). Accordingly, the examiner finds the applicant’s argument unpersuasive. The examiner reminds the applicant that the current rejection is based on a combination of references, and one cannot show nonobviousness by attacking references individually (see MPEP 2145(IV)).

Claim Objections

Claim 20 is objected to because of the following informalities: claim 20 recites in line 13 “one processor to obtain the input data by obtaining the input data by preprocessing the plurality.” The examiner finds the phrase “by obtaining the input data” redundant and not precisely defining or affecting the scope of the claim. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 10-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication No. 2022/0353447 to Price et al. (hereinafter Price) in view of CN Patent Publication No. 11285491B to Fang et al. (hereinafter Fang).

For claim 1, Price as applied teaches a method of generating a depth map corresponding to input data, the method comprising: providing light to a target object in a plurality of patterns that change over time (see, e.g., pars. 46-51 and 95, and FIGS. 2, 3A-F and 8, which teach, over a frame capture time period, projecting multiple light patterns by activating an illuminator); obtaining a plurality of captured images respectively corresponding to the plurality of patterns, by photographing the target object to which the light is provided (see, e.g., pars. 46-55, 64-71 and 96 and FIGS. 2, 3A-F and 8, which teach capturing, with an imaging sensor, pattern images of an environment as the environment is illuminated with different structured light patterns); obtaining the input data by preprocessing the plurality of captured images (see, e.g., pars. 83-92 and 97-100 and FIG. 8, which teach determining the pixel signatures for the multiple light patterns and determining disparity values from the pixel signatures); and generating the depth map based on the input data (see, e.g., pars. 91-92 and 101 and FIGS. 6-8, which teach generating a depth map based on the pixel signatures and the disparity values), wherein the method further comprises obtaining a plurality of illumination images respectively corresponding to the plurality of patterns by photographing a light-emitting device configured to provide the light (see, e.g., pars. 62-64, 71-83 and 97-99 and FIG. 8, which teach obtaining ambient light images corresponding to the structured light patterns), and wherein the obtaining of the input data comprises obtaining the input data by preprocessing the plurality of captured images and the plurality of illumination images (see, e.g., pars. 83-92 and 97-101 and FIG. 8, which teach, before determining the pixel signatures for the multiple light patterns, processing the ambient light images and the pattern images).

Price as applied does not explicitly teach that the ambient light images are obtained by photographing a light-emitting device. Fang in the analogous art teaches capturing an image containing the light-emitting device (see, e.g., lines 1-8 on page 4 and lines 5-11 and 31-37 on page 5 of Fang). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Price to obtain ambient light images as taught by Fang because doing so would allow obtaining the optical characteristics of the ambient light (see, e.g., lines 15-21 on page 3 of Fang).

For claim 11, Price as applied teaches an electronic device for generating a depth map corresponding to input data, the electronic device comprising: a light-emitting device configured to provide light to a target object in a plurality of patterns that change over time (see, e.g., pars. 46-51 and 95, and FIGS. 2, 3A-F and 8, which teach, over a frame capture time period, projecting multiple light patterns by activating an illuminator); a measuring device comprising at least one camera configured to obtain a plurality of captured images respectively corresponding to the plurality of patterns by photographing the target object to which the light is provided (see, e.g., pars. 42, 46-55, 64-71 and 96 and FIGS. 1, 2, 3A-F and 8, which teach capturing, with an imaging sensor, images of an environment as the environment is illuminated with different structured light patterns); a memory storing one or more instructions (see, e.g., pars. 39-40 and FIG. 1); and at least one processor configured to execute the one or more instructions stored in the memory to obtain the input data (see, e.g., pars. 39-40 and FIG. 1) by preprocessing the plurality of captured images (see, e.g., pars. 91-92 and 97-100 and FIG. 8, which teach determining the pixel signatures for the multiple light patterns and determining disparity values from the pixel signatures), and generate the depth map based on the input data (see, e.g., pars. 91-92 and 101 and FIGS. 6-8, which teach generating a depth map based on the pixel signatures and the disparity values), wherein the at least one processor is further configured to execute the one or more instructions to: receive a plurality of illumination images that are previously captured, the plurality of illumination images being obtained by photographing the light-emitting device (see, e.g., pars. 62-64, 71-83 and 97-99 and FIG. 8, which teach obtaining ambient light images corresponding to the structured light patterns), and obtain the input data by preprocessing the plurality of captured images and the plurality of illumination images (see, e.g., pars. 83-92 and 97-101 and FIG. 8, which teach, before determining the pixel signatures for the multiple light patterns, processing the ambient light images and the pattern images).

Price as applied does not explicitly teach that the ambient light images are obtained by photographing a light-emitting device. Fang in the analogous art teaches capturing an image containing the light-emitting device (see, e.g., lines 1-8 on page 4 and lines 5-11 and 31-37 on page 5 of Fang).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Price to obtain ambient light images as taught by Fang because doing so would allow obtaining the optical characteristics of the ambient light (see, e.g., lines 15-21 on page 3 of Fang).

For claim 20, Price as applied teaches a non-transitory computer-readable recording medium storing instructions that, when executed by at least one processor, cause the at least one processor (see, e.g., pars. 39-40, 103-104 and 107 and FIG. 1) to: provide light toward a target object in a plurality of patterns that change over time (see, e.g., pars. 46-51 and 95, and FIGS. 2, 3A-F and 8, which teach, over a frame capture time period, projecting multiple light patterns by activating an illuminator); obtain a plurality of captured images respectively corresponding to the plurality of patterns, by photographing the target object toward which the light is provided (see, e.g., pars. 46-55, 64-71 and 96 and FIGS. 2, 3A-F and 8, which teach capturing, with an imaging sensor, images of an environment as the environment is illuminated with different structured light patterns); obtain input data by preprocessing the plurality of captured images (see, e.g., pars. 91-92 and 97-100 and FIG. 8, which teach determining the pixel signatures for the multiple light patterns and determining disparity values from the pixel signatures); and generate a depth map based on the input data (see, e.g., pars. 91-92 and 101 and FIGS. 6-8, which teach generating a depth map based on the pixel signatures and the disparity values), wherein the instructions, when executed by the at least one processor, further cause the at least one processor to obtain a plurality of illumination images respectively corresponding to the plurality of patterns by photographing a light-emitting device configured to provide the light (see, e.g., pars. 62-64, 71-83 and 97-99 and FIG. 8, which teach obtaining ambient light images corresponding to the structured light patterns), and wherein the instructions, when executed by the at least one processor, cause the at least one processor to obtain the input data by obtaining the input data by preprocessing the plurality of captured images and the plurality of illumination images (see, e.g., pars. 83-92 and 97-101 and FIG. 8, which teach, before determining the pixel signatures for the multiple light patterns, processing the ambient light images and the pattern images).

Price as applied does not explicitly teach that the ambient light images are obtained by photographing a light-emitting device. Fang in the analogous art teaches capturing an image containing the light-emitting device (see, e.g., lines 1-8 on page 4 and lines 5-11 and 31-37 on page 5 of Fang). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Price to obtain ambient light images as taught by Fang because doing so would allow obtaining the optical characteristics of the ambient light (see, e.g., lines 15-21 on page 3 of Fang).

For claims 2 and 12, Price as applied discloses that each of the plurality of patterns comprises a light-emitting region to which the light is provided, and wherein a position of the light-emitting region changes over time (see, e.g., pars. 53-56, and FIGS. 3A, 3F, which show patterns with illuminated vertical stripes that change their positions and sizes over the frame capture time period).

For claims 3 and 13, Price as applied discloses that each of the plurality of patterns comprises a light-emitting region to which the light is provided, and wherein an area of at least a partial region of the light-emitting region changes over time (see, e.g., pars. 53-56, and FIGS. 3A, 3F, which show patterns with illuminated vertical stripes that change their positions and sizes over the frame capture time period).
For claims 4 and 14, Price as applied discloses that an illumination of the light in each of the plurality of patterns changes over time (see, e.g., pars. 53-56 and FIGS. 3A, 3F, which show patterns with illuminated vertical stripes that change their positions and sizes over the frame capture time period).

For claims 6 and 16, Price in view of Fang teaches obtaining characteristic information of the light-emitting device (see, e.g., pars. 54-61 and 88-89 of Price, which teach obtaining operating characteristics of the illuminators), wherein the obtaining of the input data comprises obtaining the input data by preprocessing the plurality of captured images, the plurality of illumination images, and the characteristic information (see, e.g., pars. 83-92 of Price, which teach, before determining the pixel signatures for the multiple light patterns, processing the ambient images and the pattern images that are captured based on the operating parameters).

For claims 10 and 19, Price as applied discloses that the generating of the depth map comprises generating the depth map by providing a depth map generation module with the input data, and wherein the depth map generation module comprises an autoencoder (see, e.g., pars. 38 and 101, which teach that the processor may be used to generate a depth map and may include autoencoder neural networks).

For claim 15, Price as applied teaches that the plurality of illumination images respectively correspond to the plurality of patterns (see, e.g., pars. 62-64, 71-83 and 97-99 and FIGS. 3F, 4A-C, 5A-B, and 8, which teach that the ambient light images correspond to the structured light patterns).

Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Price in view of Fang and further in view of US Patent Application Publication No. 2018/0063403 to Ryu et al. (hereinafter Ryu).
For claims 7 and 17, which Price in view of Fang does not explicitly teach, Ryu in the analogous art teaches that the characteristic information comprises information about a size of the light-emitting device (see, e.g., par. 167 of Ryu, which teaches acquiring information about the light source, such as its size). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Price in view of Fang to consider the size of the light source as taught by Ryu because doing so would yield the predictable result of having more information about the illuminator and hence making a more informed decision with respect to modifying operating characteristics of the illuminator (see MPEP 2143(I)(D) and also pars. 169-170 of Ryu).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Price in view of Fang and US Patent Application Publication No. 2023/0177768 to Wang.

For claim 9, Price in view of Fang teaches comparing an intensity of an ambient illumination of the target object with an intensity of a threshold illumination (see, e.g., pars. 74-75 and FIG. 4A of Price, which teach comparing an intensity of the ambient light measure to a threshold number of photons). Price in view of Fang, however, does not explicitly teach that “the light is provided to the target object based on the intensity of the ambient illumination being less than or equal to the intensity of the threshold illumination” and that “the threshold illumination is a maximum illumination at which the plurality of captured images reflecting changes in the plurality of patterns that change over time are able to be obtained.” Wang in the analogous art teaches being able to obtain an image of an object only when the ambient light luminance of the photographed object is less than the preset luminance threshold, the maximum luminance value (see, e.g., pars. 14-15 and 26-27 of Wang).
It would have been obvious to one of ordinary skill in the art to modify Price in view of Fang to employ the thresholding scheme of Wang because doing so would “not only ensure that a three-dimensional sense and details of the photographed image in a low light environment are not missing, but also avoid a waste of resources” (see, e.g., pars. 15 and 27 of Wang).

Allowable Subject Matter

Claims 8 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. In regard to claims 8 and 18, when considered as a whole, the prior art of record fails to disclose or render obvious, alone or in combination: “providing sub-light to the target object; and obtaining sub-characteristic information of a sub-light-emitting device configured to provide the sub-light, wherein the obtaining of the plurality of captured images comprises obtaining the plurality of captured images by photographing the target object to which the light and the sub-light are provided, and wherein the obtaining of the input data comprises obtaining the input data by preprocessing the plurality of captured images, the plurality of illumination images, the characteristic information, and the sub-characteristic information.”

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WOO RHIM, whose telephone number is (571) 272-6560. The examiner can normally be reached Mon-Fri, 9:30 am - 6:00 pm ET. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WOO C RHIM/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676
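The rejection's mapping of Price describes a structured-light pipeline: project time-varying patterns, capture pattern and ambient frames, fold the frames into per-pixel signatures, derive disparity, and triangulate depth. The sketch below is purely illustrative of that style of pipeline; it is not Price's or the applicant's actual algorithm, and every function name and parameter is hypothetical:

```python
import numpy as np

# Illustrative sketch only -- NOT Price's or the applicant's implementation.
# Pattern frames are compared against ambient frames to form one bit of a
# per-pixel "signature" each; signatures are matched across two views along
# each row to get disparity; depth follows from stereo triangulation.

def pixel_signatures(pattern_frames, ambient_frames, thresh=10):
    """Fold each frame's on/off decision into one integer code per pixel."""
    codes = np.zeros(pattern_frames[0].shape, dtype=np.int64)
    for p, a in zip(pattern_frames, ambient_frames):
        bit = (p.astype(np.int64) - a.astype(np.int64)) > thresh
        codes = (codes << 1) | bit
    return codes

def disparity_from_signatures(left, right, max_disp=32):
    """For each left pixel, find the horizontal shift whose signature matches."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            for d in range(min(max_disp, x) + 1):
                if right[y, x - d] == left[y, x]:
                    disp[y, x] = d
                    break
    return disp

def depth_from_disparity(disp, focal_px=500.0, baseline_m=0.05):
    """Standard triangulation: Z = f * B / d (0 where no match was found)."""
    return np.where(disp > 0, focal_px * baseline_m / np.maximum(disp, 1), 0.0)
```

A Wang-style ambient gate would simply precede capture in this sketch, e.g. skipping projection whenever the measured ambient luminance exceeds the preset threshold.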

Prosecution Timeline

Aug 14, 2023: Application Filed
Oct 21, 2025: Non-Final Rejection (§103)
Jan 23, 2026: Response Filed
Feb 12, 2026: Final Rejection (§103)
Apr 08, 2026: Interview Requested
Apr 15, 2026: Applicant Interview (Telephonic)
Apr 15, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this examiner with similar technology

Patent 12601667
AUTOMATED TURF TESTING APPARATUS AND SYSTEM FOR USING SAME
2y 5m to grant; granted Apr 14, 2026
Patent 12596134
DEVICE, MOVEMENT SPEED ESTIMATION SYSTEM, FEEDING CONTROL SYSTEM, MOVEMENT SPEED ESTIMATION METHOD, AND RECORDING MEDIUM IN WHICH MOVEMENT SPEED ESTIMATION PROGRAM IS STORED
2y 5m to grant; granted Apr 07, 2026
Patent 12591997
ARRANGEMENT DEVICE AND METHOD
2y 5m to grant; granted Mar 31, 2026
Patent 12586169
Mass Image Processing Apparatus and Method
2y 5m to grant; granted Mar 24, 2026
Patent 12579607
DEMOSAICING METHOD AND APPARATUS FOR MOIRE REDUCTION
2y 5m to grant; granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 99% (+21.4%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
