Prosecution Insights
Last updated: April 19, 2026
Application No. 19/062,827

GENERATING GRAPHICAL REPRESENTATIONS FOR VIEWING 3D DATA AND/OR IMAGE DATA

Status: Non-Final OA, §103
Filed: Feb 25, 2025
Examiner: RICHER, AARON M
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Faro Technologies Inc.
OA Round: 1 (Non-Final)
Grant Probability: 51% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 4y 0m
Grant Probability With Interview: 70%

Examiner Intelligence

Career Allow Rate: 51% (236 granted / 465 resolved cases; -11.2% vs TC avg)
Interview Lift: +19.5% (strong; based on resolved cases with an interview)
Typical Timeline: 4y 0m average prosecution; 28 applications currently pending
Career History: 493 total applications across all art units

Statute-Specific Performance

§101: 9.4% (-30.6% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§112: 19.9% (-20.1% vs TC avg)
TC averages are estimates. Based on career data from 465 resolved cases.
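The four deltas above all back-solve to the same Tech Center baseline, which suggests they are simple differences between the examiner's rate and a single TC average estimate. A minimal sketch of that assumed relationship (variable names are illustrative, not the tool's actual implementation):

```python
# Assumed relationship: delta = examiner rate - TC average estimate,
# so the implied TC average is (rate - delta) for each statute.
rates = {
    "101": (9.4, -30.6),   # (examiner allowance rate %, delta vs TC avg)
    "102": (13.1, -26.9),
    "103": (54.7, 14.7),
    "112": (19.9, -20.1),
}
for statute, (rate, delta) in rates.items():
    implied_tc_avg = round(rate - delta, 1)  # each statute implies 40.0
    print(f"§{statute}: implied TC average ≈ {implied_tc_avg}%")
```

All four statutes imply the same ~40.0% baseline, consistent with the deltas being measured against one TC-wide average estimate.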

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7, 11-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Park (U.S. Publication 2020/0357177) in view of Natori (U.S. Publication 2021/0041230).

As to claim 1, Park discloses a method comprising: receiving three-dimensional (3D) data associated with an environment (fig. 1, element 130; point cloud data is generated and received for display); receiving image data associated with the environment (p. 3, section 0050; images of the environment are received from a camera); and generating a graphical representation based at least in part on at least one of the 3D data and the image data (figs. 2a-2d; p. 3, section 0055; a graphic showing both image data and point cloud data is produced), the graphical representation comprising a first region selectively switchable between a single-sub-region mode (figs. 2a-2d) and another mode (fig. 12; a mode for placing content is shown), wherein, responsive to the single-sub-region mode being enabled, the first region displays at least one of at least a first portion of the 3D data and at least a first portion of the image data (figs. 2a-2d; p. 3, section 0055; a graphic showing both image data and point cloud data is produced).

Park does not disclose, but Natori discloses, a multi-sub-region mode one can switch to from a single-sub-region mode wherein, responsive to the multi-sub-region mode being enabled, the first region comprises at least a first sub-region and a second sub-region, the first sub-region displaying at least one of at least a second portion of the 3D data and at least a second portion of the image data, and the second sub-region displaying at least one of at least a third portion of the 3D data and at least a third portion of the image data (fig. 37; fig. 38b; fig. 39b; fig. 40b; fig. 41b; figs. 42-43; p. 4, section 0095; p. 7, section 0124; a mode is shown where a single sub-region exists, while in a split-screen mode, multiple sub-regions exist; each sub-region displays its portion of 3D data based on point cloud data, as well as texture data, which reads on image data; thus, in a first region screen split in two, the first sub-region's point cloud and texture data would read on the claimed second portion of the 3D data and second portion of the image data, while the second sub-region's point cloud and texture data would read on the claimed third portion of the 3D data and third portion of the image data). The motivation for this is to allow a user to compare results of two or more objects (p. 21, section 0242).
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Park to use a method wherein, responsive to the multi-sub-region mode being enabled, the first region comprises at least a first sub-region and a second sub-region, the first sub-region displaying at least one of at least a second portion of the 3D data and at least a second portion of the image data, and the second sub-region displaying at least one of at least a third portion of the 3D data and at least a third portion of the image data, in order to allow a user to compare results of two or more objects as taught by Natori.

As to claim 2, Park discloses wherein the graphical representation further comprises a second region, wherein the second region displays information associated with the 3D data and the image data (fig. 8; p. 4, sections 0074-0075; a second region with information labels associated with 3D point cloud and image data is shown). Natori also discloses such a region (fig. 43; a region that shows information about each area/sub-region associated with the 3D and image data is displayed).

As to claim 3, Park discloses wherein the 3D data and the image data are temporally linked (fig. 13; p. 5, sections 0088-0094; the 3D point cloud data is generated and then immediately overlaid on currently captured image data, making it the 3D point cloud data that is closest in time, i.e., most closely temporally linked, to the currently captured image data).

As to claim 4, Natori discloses wherein, in an independent linking mode, a field of view of the first sub-region can be changed independently of a field of view of the second sub-region (fig. 43; p. 22, sections 0247-0248; a first area/sub-region FOV can be edited, for example having a further split applied, without affecting other areas; this reads on an independent linking mode since at least one area can be edited independently).
Motivation for the combination of references is given in the rejection to claim 1.

As to claim 5, Natori discloses wherein, in a dependent linking mode, a field of view of the first sub-region changes according to a change to a field of view of the second sub-region (fig. 42; p. 21, section 0246; the split areas can undergo size changes, effected by moving the boundaries between the areas, that affect the other split areas; for example, dragging the boundary between areas 1 and 2 to make the area 1 FOV larger would be a dependent linking mode that would also change the area 2 FOV to be smaller). Motivation for the combination of references is given in the rejection to claim 1.

As to claim 7, Park discloses wherein the 3D data comprises a point cloud (p. 4, section 0070; the 3D data overlaying the image is point cloud data).

As to claim 11, see the rejection to claim 1. Further, Park discloses a system, comprising: a memory comprising computer readable instructions; and a processing device for executing the computer readable instructions, the computer readable instructions controlling the processing device to perform operations (p. 5, section 0113-p. 6, section 0114).

As to claim 12, see the rejection to claim 2. As to claim 13, see the rejection to claim 3. As to claim 14, see the rejection to claim 4. As to claim 15, see the rejection to claim 5.

As to claim 20, see the rejection to claim 1. Further, Park discloses a system, comprising: at least one camera that captures image data associated with an environment (p. 3, section 0050); at least one scanner that captures three-dimensional (3D) data associated with the environment (p. 3, sections 0052-0061; the point cloud data generator and the 3D position determiner together act as a scanner to capture 3D data associated with the camera image in the environment); and a processing system (p. 5, section 0113-p. 6, section 0114).
Park does not explicitly disclose a data store for storing the image data associated with the environment captured with the at least one camera, and the 3D data associated with the environment captured with the at least one scanner, but since this data is used for generating a display after it is captured, it must be stored in some location in the system, and whichever location that is would inherently read on a data store.

Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Natori and further in view of Themelis (U.S. Publication 2021/0243387).

As to claim 6, Park does not disclose, but Themelis discloses, wherein, in a hybrid linking mode, a field of view of the first sub-region can be changed independently of a field of view of the second sub-region and the field of view of the second sub-region changes according to a change to the field of view of the first sub-region (figs. 3a-3c; p. 4, sections 0042-0044; p. 5-6, section 0049; a second field of view, corresponding to the claimed "field of view of the first sub-region," is changed independently without the other field of view having to change first; according to that change, the other field of view is changed such that it reflects this change within an overview as in the figures). The motivation for this is to improve spatial awareness and increase incident awareness of a user (p. 2, sections 0021-0022).

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Park and Natori to use a method wherein, in a hybrid linking mode, a field of view of the first sub-region can be changed independently of a field of view of the second sub-region and the field of view of the second sub-region changes according to a change to the field of view of the first sub-region, in order to improve spatial awareness and increase incident awareness of a user as taught by Themelis.

As to claim 16, see the rejection to claim 6.
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Natori and further in view of Berger (WO 2013/186160).

As to claim 8, Park does not disclose, but Berger discloses, wherein the image data comprises a 360 degree image (p. 1, lines 9-16; p. 8, line 18-p. 9, line 20; the image data displayed with textured 3D point cloud data is a 360 degree panoramic image). The motivation for this is to allow a remote user to engineer and plan structure or installation changes (p. 9, line 34-p. 10, line 2). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Park and Natori to use a 360 degree image along with the 3D data in order to allow a remote user to engineer and plan structure or installation changes as taught by Berger.

As to claim 17, see the rejections to claims 7 and 8.

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Natori and further in view of Baek (U.S. Publication 2021/0072390).

As to claim 9, Park does not disclose, but Baek discloses, wherein the first portion of the 3D data is captured at a first point in time and the at least the first portion of the image data is captured at a second point in time (p. 6, sections 0068-0069; 3D point cloud data captured at one time is displayed with image data captured at a different time). The motivation for this is that it can be difficult to perfectly synchronize acquisition time points, and so it is better to instead allow the different time points and remove interference (p. 1, section 0003).
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Park and Natori to use a method wherein the first portion of the 3D data is captured at a first point in time and the at least the first portion of the image data is captured at a second point in time, in order to avoid the difficulty of trying to perfectly synchronize acquisition time points as taught by Baek.

As to claim 18, see the rejection to claim 9.

Claims 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Park in view of Natori and further in view of Wiemker (U.S. Publication 2014/0354642).

As to claim 10, Park does not disclose, but Wiemker discloses, wherein the at least one of the at least the second portion of the 3D data and the at least the second portion of the image data are captured at a first point in time, and wherein the at least one of the at least the third portion of the 3D data and the at least the third portion of the image data are captured at a second point in time (fig. 6a; p. 1, sections 0003-0005; p. 6, section 0071; p. 7, section 0075; each view shows image data plus color-encoded 3D data; each view with image and 3D data is captured at a different time; for example, one view of image and 3D data, which can read on the claimed second portions of the 3D data and image data, is from a point in time before treatment, while another, which can read on the claimed third portions, is from a point in time after treatment). The motivation for this is to allow a clinician to evaluate how a patient has responded to a particular therapy.
It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify Park and Natori to use a method wherein the at least one of the at least the second portion of the 3D data and the at least the second portion of the image data are captured at a first point in time, and wherein the at least one of the at least the third portion of the 3D data and the at least the third portion of the image data are captured at a second point in time, in order to allow a clinician to evaluate how a patient has responded to a particular therapy as taught by Wiemker.

As to claim 19, see the rejection to claim 10.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AARON M RICHER, whose telephone number is (571) 272-7790. The examiner can normally be reached 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON M RICHER/
Primary Examiner, Art Unit 2617

Prosecution Timeline

Feb 25, 2025
Application Filed
Jan 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586151
Frame Rate Extrapolation
2y 5m to grant; granted Mar 24, 2026
Patent 12579600
SEAMLESS VIDEO IN HETEROGENEOUS CORE INFORMATION HANDLING SYSTEM
2y 5m to grant; granted Mar 17, 2026
Patent 12571669
DETECTING AND GENERATING A RENDERING OF FILL LEVEL AND DISTRIBUTION OF MATERIAL IN RECEIVING VEHICLE(S)
2y 5m to grant; granted Mar 10, 2026
Patent 12555305
Systems And Methods For Generating And/Or Using 3-Dimensional Information With Camera Arrays
2y 5m to grant; granted Feb 17, 2026
Patent 12548233
3D TEXTURING VIA A RENDERING LOSS
2y 5m to grant; granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 51%
With Interview: 70% (+19.5%)
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 465 resolved cases by this examiner. Grant probability is derived from the career allow rate.
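The headline figures above are consistent with simple arithmetic on the examiner's career data. A minimal sketch, assuming the grant probability is the rounded career allow rate and the interview figure adds the observed lift (names are illustrative, not the tool's actual implementation):

```python
# Career data shown above: 236 granted out of 465 resolved cases.
granted, resolved = 236, 465
interview_lift = 0.195           # +19.5% observed interview lift

allow_rate = granted / resolved  # ~0.5075
grant_probability = round(allow_rate * 100)                  # -> 51
with_interview = round((allow_rate + interview_lift) * 100)  # -> 70

print(f"Grant probability: {grant_probability}%")
print(f"With interview:    {with_interview}%")
```

Under these assumptions, 236/465 rounds to the 51% shown, and adding the 19.5% lift reproduces the 70% interview figure.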
