Prosecution Insights
Last updated: April 19, 2026
Application No. 18/144,632

METHOD AND SYSTEM FOR THREE-DIMENSIONAL SCANNING OF ARBITRARY SCENES

Final Rejection §103
Filed: May 08, 2023
Examiner: SINHA, SNIGDHA
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Northwestern University
OA Round: 3 (Final)

Grant Probability: 50% (Moderate)
Expected OA Rounds: 4-5
Time to Grant: 2y 6m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 50% — grants 50% of resolved cases (3 granted / 6 resolved); -12.0% vs TC avg
Interview Lift: +45.8% — strong lift for resolved cases with an interview
Typical Timeline: 2y 6m avg prosecution; 26 currently pending
Career History: 32 total applications across all art units

Statute-Specific Performance

§101: 2.0% (-38.0% vs TC avg)
§103: 65.6% (+25.6% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Tech Center averages are estimates • Based on career data from 6 resolved cases
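Each "vs TC avg" delta above is simply the examiner's statute-specific allow rate minus the Tech Center average; the displayed figures all imply a TC average near 40% (e.g. §103: 65.6% − 25.6%). A minimal sketch of that arithmetic, with the rates hard-coded from the panel above (the dictionary names are illustrative, not from any real API):

```python
# Examiner's statute-specific allow rates, taken from the panel above
examiner_rate = {"101": 0.020, "103": 0.656, "102": 0.162, "112": 0.117}

# Tech Center average implied by each displayed delta (rate minus delta);
# every statute here works out to the same ~40% TC average
tc_average = {"101": 0.400, "103": 0.400, "102": 0.400, "112": 0.400}

# Delta shown in the panel = examiner rate - TC average
deltas = {statute: examiner_rate[statute] - tc_average[statute]
          for statute in examiner_rate}
```

For example, `deltas["103"]` comes out to +0.256, matching the "+25.6% vs TC avg" shown for §103, and `deltas["101"]` comes out to -0.380.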

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, 12, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over von Cramon (US 20210295592) in view of Ando (US 20180352163).

Regarding claim 1, von Cramon teaches a three-dimensional (3D) imaging system comprising: a projector configured to illuminate a scene (Paragraph 4, a light projector is added to illuminate a desired portion of a person's face); a first camera configured to capture first data from the scene during illumination by the projector (Paragraph 87, a first camera may receive the reflected light; images that include subject matter captured…provide a separate two-dimensional dataset); a second camera configured to capture second data from the scene during the illumination by the projector (Paragraph 87, a second or "through-path" camera is provided; images that include subject matter captured…provide a separate two-dimensional dataset); and a processor in communication with the first and second cameras, wherein the processor is configured to generate a 3D image or a 3D video (Paragraph 194, the camera orientation information and data set are used by the image processor 500 to generate a three-dimensional model).

While von Cramon fails to disclose the following, Ando teaches: separating specular components and diffuse components of the scene based on one or more of the first data and the second data (Paragraph 133, the raw image is divided into a diffuse component (a diffuse albedo image), a specular reflection component (a specular albedo image)), and using the diffuse components of the scene as a screen to perform deflectometry on the scene (Paragraph 133, subsequently, in step S402, the image inspection apparatus performs the deflectometry processing; the raw image is divided into a diffuse component (a diffuse albedo image), a specular reflection component (a specular albedo image), and phases (two phases in the X direction and the Y direction) by the deflectometry processing; Paragraph 141, details of the deflectometry processing performed in step S402…diffuse reflection obtained by reflection on a work surface; Paragraph 154 and Figure 14B, a diffuse reflection image showing a diffuse reflection component). Note: Ando teaches that the separated diffuse components are used as a screen to generate the diffuse reflection image.

Ando and von Cramon are both considered analogous to the claimed invention because they are in the same field of image sensing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified von Cramon to incorporate the teachings of Ando and separate specular and diffuse components of a scene and use the diffuse components as a screen to perform deflectometry. Doing so would allow for easily confirming a state of texture on a surface in the scene (Ando, Paragraph 156).

Method claim 12 corresponds to apparatus claim 1. Therefore, claim 12 is rejected for the same reasons as set forth above.
Regarding claim 6, the combination of von Cramon and Ando teaches the system of claim 1, wherein the first camera (von Cramon, Paragraph 87, a first camera) captures a portion of an environment in which the scene is located (Ando, Paragraph 133, the raw image is divided into a diffuse component (a diffuse albedo image), a specular reflection component (a specular albedo image)), and wherein the portion of the environment is used as a screen to perform deflectometry (Ando, Paragraph 141, details of the deflectometry processing performed in step S402…diffuse reflection obtained by reflection on a work surface; Paragraph 154 and Figure 14B, a diffuse reflection image showing a diffuse reflection component).

Ando and von Cramon are both considered analogous to the claimed invention because they are in the same field of image sensing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified von Cramon to incorporate the teachings of Ando and capture a portion of the scene, and use that portion to perform deflectometry. Doing so would allow for saving time and computation power while using the technique of deflectometry to generate three-dimensional models of partial scenes.

Method claim 16 corresponds to apparatus claim 6. Therefore, claim 16 is rejected for the same reasons as set forth above.

Claims 2 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of von Cramon and Ando as applied to claims 1, 6, 12, and 16, and further in view of Dumont (WO 2014133646).

Regarding claim 2, the combination of von Cramon and Ando teaches the imaging system of claim 1.
While the combination fails to disclose the following, Dumont teaches: wherein the projector comprises a laser dot scanner that is configured to scan the scene with a single laser dot (Paragraph 3, laser triangulation, a laser source that projects, for example, a line, dot, or pattern; Paragraph 5, the surface is illuminated by a laser projection produced by a laser source that moves along the surface in conjunction with the first camera).

Dumont and the combination of von Cramon and Ando are both considered analogous to the claimed invention because they are in the same field of image sensing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of von Cramon and Ando to incorporate the teachings of Dumont and use a projector with a laser dot scanner that scans with a single laser dot. Doing so would improve accuracy and precision while scanning.

Method claim 13 corresponds to apparatus claim 2. Therefore, claim 13 is rejected for the same reasons as set forth above.

Claims 3 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of von Cramon, Ando, and Dumont as applied to claims 2 and 13, and further in view of Kaufmann (KR 20200066371).

Regarding claim 3, the combination of von Cramon, Ando, and Dumont teaches the imaging system of claim 2.
While the combination fails to disclose the following, Kaufmann teaches: wherein the first camera comprises a first event camera and the second camera comprises a second event camera, and wherein the processor is configured to identify a correspondence for each event-timestamp generated by the second event camera by comparing a position of the single laser dot on the scene with a pixel position of an event on the second event camera (Page 5, paragraph 3, event camera data is combined…accumulates events over time to reconstruct/estimate intensity values; Page 7, paragraph 9, the pixel event is a timestamp indicating when the event occurred).

Kaufmann and the combination of von Cramon, Ando, and Dumont are both considered analogous to the claimed invention because they are in the same field of image sensing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of von Cramon, Ando, and Dumont to incorporate the teachings of Kaufmann and use pixel location and timestamp information to compare to the current location of the single laser dot. Doing so would allow using past information about each pixel to reconstruct images.

Method claim 14 corresponds to apparatus claim 3. Therefore, claim 14 is rejected for the same reasons as set forth above.

Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of von Cramon, Ando, Dumont, and Kaufmann as applied to claims 3 and 14, and further in view of Qiu (US 20200367970).
Regarding claim 4, the combination of von Cramon, Ando, Dumont, and Kaufmann teaches the imaging system of claim 3, wherein the processor is further configured to trace rays from the position of the single laser dot back to a camera chip of the second event camera that receives the rays from the laser projector (Dumont, paragraph 24, each of the cameras includes an image sensor; Dumont, paragraph 29, the camera records High Definition video including the laser line projected on the object).

While the combination fails to disclose the following, Qiu teaches: wherein the processor is further configured to calculate surface normals of the scene (Paragraph 157, face normals can be calculated…where lighting models depend on the normal of a face/vertex and the camera direction; Paragraph 160, different surface scanners (e.g., depth cameras or lasers) can be used to scan these objects).

Qiu and the combination of von Cramon, Ando, Dumont, and Kaufmann are both considered analogous to the claimed invention because they are in the same field of image sensing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of von Cramon, Ando, Dumont, and Kaufmann to incorporate the teachings of Qiu and use a single laser dot projector in combination with an event camera to calculate surface normals. Doing so would allow for more precisely generating a three-dimensional model using the precision of the laser dot projector and the technique of calculating surface normals.

Method claim 15 corresponds to apparatus claim 4. Therefore, claim 15 is rejected for the same reasons as set forth above.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of von Cramon and Ando as applied to claims 1, 6, 12, and 16, and further in view of Maxwell (US 20100142825).

Regarding claim 5, the combination of von Cramon and Ando teaches the imaging system of claim 1.
While the combination fails to disclose the following, Maxwell teaches: wherein the processor does not have prior information regarding a geometry or a reflectance of the scene (Paragraph 173, a second derivative filter…is used in a novel manner, to identify regions of an image…patches comprising pixels…of uniform material reflectance). Maxwell teaches a technique to calculate initial material reflectance and therefore does not require prior information regarding reflectance.

Maxwell and the combination of von Cramon and Ando are both considered analogous to the claimed invention because they are in the same field of image sensing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of von Cramon and Ando to incorporate the teachings of Maxwell and use the technique of calculating reflectance without requiring the processor to have prior information regarding the reflectance of the scene. Doing so would generate accurate models while saving memory by not storing predetermined data.

Claims 7-8 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of von Cramon and Ando as applied to claims 1, 6, 12, and 16, and further in view of Pinter (US 20200134773).

Regarding claim 7, the combination of von Cramon and Ando teaches the imaging system of claim 6. While the combination fails to disclose the following, Pinter teaches: wherein the processor uses the projector and the first camera to form a deflectometry sub-sensor (Paragraph 205, a monoscopic (single camera) phase measuring deflectometry (PMD) may include a single patterned area light (PAL) source and a single camera; based on the light source pattern type and orientation of the PAL source, a processor may produce two-dimensional and/or three-dimensional quantitative/qualitative results).
Pinter and the combination of von Cramon and Ando are both considered analogous to the claimed invention because they are in the same field of image sensing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of von Cramon and Ando to incorporate the teachings of Pinter and form a deflectometry sub-sensor. Doing so would allow for capturing data used to create the three-dimensional image informed by deflectometry.

Method claim 17 corresponds to apparatus claim 7. Therefore, claim 17 is rejected for the same reasons as set forth above.

Regarding claim 8, the combination of von Cramon and Ando teaches the imaging system of claim 6. While the combination fails to disclose the following, Pinter teaches: wherein the processor uses the projector and the second camera to form a triangulation sub-sensor (Paragraph 295, the machine vision system controller 4405a may further include at least one image sensor (e.g., camera and/or camera optical element) input/output 4424a communicatively connected to a first image sensor (e.g., camera and/or camera optical element) 4425a and a second image sensor (e.g., camera and/or camera optical element)).

Pinter and the combination of von Cramon and Ando are both considered analogous to the claimed invention because they are in the same field of image sensing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of von Cramon and Ando to incorporate the teachings of Pinter and form a triangulation sub-sensor. Doing so would improve accuracy via triangulation when generating the three-dimensional image.

Method claim 18 corresponds to apparatus claim 8. Therefore, claim 18 is rejected for the same reasons as set forth above.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of von Cramon and Ando as applied to claims 1, 6, 12, and 16, and further in view of Schafer (KR 102238573).

Regarding claim 11, the combination of von Cramon and Ando teaches the imaging system of claim 1. While the combination fails to disclose the following, Schafer teaches: wherein the first event camera is configured to produce a timestamp of brightness changes at each pixel being imaged in the scene (Page 5, paragraph 1 (labeled 4), cameras can capture sequences of stereo images of an optical pattern moving over time and develop information about each pixel that represents how the pattern changes over time).

Schafer and the combination of von Cramon and Ando are both considered analogous to the claimed invention because they are in the same field of image sensing. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of von Cramon and Ando to incorporate the teachings of Schafer and track the timestamps of brightness changes at each pixel. Doing so would improve three-dimensional model generation using information about brightness given different angles from the projector.

Response to Arguments

Applicant's arguments filed 20 November 2025 have been fully considered but they are not persuasive. Applicants state that "Ando explicitly states that while considering the 'specular reflection component,' the 'diffuse component is excluded by a difference between reverse phases.' Such exclusion of the 'diffuse component' when determining the 'specular reflection component' teaches away from the claim element 'use the diffuse components of the scene as a screen to perform deflectometry on the scene,' as claimed." The examiner disagrees.
Claim 1 recites that "the processor is configured to separate specular and diffuse components of the scene… to use the diffuse components of the scene as a screen to perform deflectometry…" Ando teaches using the specular components of the scene to determine the specular reflection component as well as using the diffuse components of the scene to determine the diffuse reflection component (Paragraph 133). This does not teach away from what is claimed in claim 1, since deflectometry can be performed on both the specular and diffuse components independently. Additionally, Ando teaches using the diffuse components as a screen by reflecting on a work surface with specular and diffuse components (Paragraph 141, states of the stripe illumination and specular reflection and diffuse reflection obtained by reflection on a work surface are shown in FIG. 10). The specular and diffuse components can be separated, and deflectometry can be performed on each component individually. The work surface described in Ando is a screen as claimed in claim 1, as it is used to perform deflectometry.

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified von Cramon to incorporate the teachings of Ando and separate specular and diffuse components of a scene and use the diffuse components as a screen to perform deflectometry. Therefore, the rationale is proper and the rejection is maintained.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SNIGDHA SINHA whose telephone number is (571) 272-6618. The examiner can normally be reached Mon-Fri, 12pm-8pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SNIGDHA SINHA/
Examiner, Art Unit 2619

/JASON CHAN/
Supervisory Patent Examiner, Art Unit 2619

Prosecution Timeline

May 08, 2023 — Application Filed
Mar 27, 2025 — Non-Final Rejection (§103)
Jul 07, 2025 — Response Filed
Aug 14, 2025 — Non-Final Rejection (§103)
Nov 20, 2025 — Response Filed
Jan 06, 2026 — Final Rejection (§103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567216 — AUGMENTED-REALITY-INTERFACE CONFLATION IDENTIFICATION
2y 5m to grant • Granted Mar 03, 2026

Patent 12406339 — MACHINE LEARNING DATA AUGMENTATION USING DIFFUSION-BASED GENERATIVE MODELS
2y 5m to grant • Granted Sep 02, 2025

Study what changed to get past this examiner. Based on the 2 most recent grants.

Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 50%
With Interview: 96% (+45.8%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
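The headline projections follow directly from the examiner's career data: the 50% grant probability is the career allow rate (3 granted of 6 resolved), and the 96% with-interview figure is that base rate plus the observed +45.8% interview lift, capped at 100%. A minimal sketch of that arithmetic, under the assumption that the lift is additive in percentage points (the helper names are illustrative, not from any real API):

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate: share of this examiner's resolved cases that granted."""
    return granted / resolved

def with_interview(base_rate: float, interview_lift: float) -> float:
    """Apply an additive interview lift (in percentage points), capped at 100%."""
    return min(base_rate + interview_lift, 1.0)

base = grant_probability(3, 6)           # 0.50 -> the 50% shown above
boosted = with_interview(base, 0.458)    # 0.958 -> rounds to the 96% shown above
```

With only 6 resolved cases, these point estimates carry wide uncertainty, which is presumably why the page labels the 50% figure only "Moderate".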
