Prosecution Insights
Last updated: April 19, 2026
Application No. 18/359,603

PHOTOGRAMMETRY SYSTEM FOR GENERATING STREET EDGES IN TWO-DIMENSIONAL MAPS

Non-Final OA — §101, §103
Filed: Jul 26, 2023
Examiner: MAZUMDER, TAPAS
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Faro Technologies Inc.
OA Round: 2 (Non-Final)

Grant Probability: 82% (Favorable)
OA Rounds: 2-3
To Grant: 2y 4m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% — above average (342 granted / 418 resolved; +19.8% vs TC avg)
Interview Lift: +16.2% — strong (allow rate with vs. without interview, among resolved cases with interview)
Avg Prosecution: 2y 4m (typical timeline; 16 currently pending)
Total Applications: 434 (career history, across all art units)
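
These headline figures are easy to verify from the raw counts shown above; a minimal sketch in Python, using only the numbers on this page:

```python
# Career allow rate from the raw counts above.
granted, resolved = 342, 418
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")      # -> 81.8%, displayed as 82%

# Pending cases reconcile total applications with resolved ones.
total_apps = 434
print(total_apps - resolved)    # -> 16 currently pending
```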

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 50.3% (+10.3% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 418 resolved cases
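
One observation, offered as an assumption rather than a documented fact: if each "vs TC avg" delta is simply the examiner's rate minus the Tech Center average, then all four statutes imply the same 40.0% baseline, which is consistent with the note above that the Tech Center average is an estimate (possibly a single flat one). A quick check in Python:

```python
# Implied Tech Center averages, assuming delta = examiner_rate - tc_avg.
stats = {"101": (8.8, -31.2), "103": (50.3, +10.3),
         "102": (12.4, -27.6), "112": (16.0, -24.0)}
for statute, (rate, delta) in stats.items():
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")
# Every statute works out to 40.0%, consistent with one flat estimate.
```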

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because claim 19 is directed to a computer program product comprising computer-readable instructions, and a computer program product comprising computer-readable instructions does not fall in the statutory categories of invention: process, machine, manufacture, or composition of matter. Computer readable instructions seem to be software. Claim 20 is also not statutory for the same reason as specified for claim 19.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, 10-12, 15 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Grosgeorge et al. (US Patent Publication 20210407125, "Grosgeorge") in view of Ko et al. (US Patent Publication 2021048722, "Ko").

Regarding claim 10: A system comprising: a memory having computer readable instructions; and one or more processors for executing the computer readable instructions, the computer readable instructions controlling the one or more processors to perform operations comprising ("[0089] Each of the processes, methods, instructions, applications and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computer systems or computer processors comprising computer hardware. The code modules (or 'engines') may be stored on any type of non-transitory computer-readable medium or computer storage device, such as hard drives, solid-state memory, optical disc, and/or the like"); retrieving at least one selected image from a plurality of aerial images of an environment, the at least one selected image comprising surface regions that are concurrently in a three-dimensional (3D) point cloud of the environment (step 202 receives aerial images: "[0018] Referring to FIG. 2, the figure illustrates a block diagram further illustrating the process 200 for determining the center of a ground control point in aerial images. In step 202, the system receives aerial images obtained by an unmanned aerial vehicle." Then step 204 selects an image from the aerial images having a surface region, and the surface regions are concurrently in a 3D point cloud as the aerial images are used to construct a point cloud (step 210): "[0018] … a photogrammetry process is performed on the aerial images using the identified pixel coordinates in the images. The photogrammetry process uses the pixel coordinates, and the associated geo-spatial locations of the ground control points to generate any of a geo-rectified imagery, composite imagery, 3d mesh, point clouds utilizing the aerial image.").

Grosgeorge doesn't expressly teach: detecting areas of the surface regions in the at least one selected image, such that coordinates of the areas of the surface regions are extracted from the at least one selected image; comparing the at least one selected image to the 3D point cloud to align common locations in both the at least one selected image and the 3D point cloud; and displaying an integration of a drawing of the coordinates of the areas of the surface regions in a representation of the 3D point cloud.

However, Ko teaches detecting areas of a surface region in an image, such that coordinates of the area of the surface region are extracted from the image ("[0134] Here, the object of the road area may include at least one of a lane, a road surface marking, and a polygon, and the detecting (S110) may include classifying, when the object in the road area is detected from the aerial image, the detected object by object type." "[0135] In addition, the HD map producing apparatus 100 may extract a 2D coordinate value of the object detected from the aerial image (S120)."); comparing the image to the 3D point cloud to align common locations in both the image and the 3D point cloud ("[0136] In addition, the HD map producing apparatus 100 may calculate a 3D coordinate value corresponding to the 2D coordinate value by projecting the extracted 2D coordinate value onto point cloud data configuring MMS data (S130)."); and displaying an integration of a drawing of the coordinates of the areas of the surface regions in a representation of the 3D point cloud ("[0137] In addition, the HD map producing apparatus 100 may generate an HD map showing the road area of the aerial image in three dimensions based on the calculated 3D coordinate value (S140)." "[0150] In addition, a display unit 230 may display various screens for producing an HD map.").

Grosgeorge and Ko are analogous as both of them are from the field of image processing. Therefore it would have been obvious to an ordinarily skilled person in the art before the effective filing date of the claimed invention to have modified Grosgeorge to have included detecting areas of the surface regions in the at least one selected image, such that coordinates of the areas of the surface regions are extracted from the at least one selected image; comparing the at least one selected image to the 3D point cloud to align common locations in both the at least one selected image and the 3D point cloud; and displaying an integration of a drawing of the coordinates of the areas of the surface regions in a representation of the 3D point cloud, as taught by Ko. The motivation to include the modification is to display a road/street image on a 3D map for better visualization of a map.

Claim 1 is directed to a method and its steps are similar in scope and function to the elements of the device claim 10; therefore claim 1 is rejected with the same rationales as specified in the rejection of claim 10. Claim 19 is directed to a computer program product and its steps are similar in scope and function to the elements of the device claim 10; therefore claim 19 is rejected with the same rationales as specified in the rejection of claim 10.

Regarding claims 2, 11 and 20, Grosgeorge as modified by Ko teaches wherein the 3D point cloud is generated from the plurality of aerial images using photogrammetry (Grosgeorge, "[0018] … In step 210, a photogrammetry process is performed on the aerial images using the identified pixel coordinates in the images. The photogrammetry process uses the pixel coordinates, and the associated geo-spatial locations of the ground control points to generate any of a geo-rectified imagery, composite imagery, 3d mesh, point clouds utilizing the aerial image.").

Regarding claims 3 and 12, Grosgeorge as modified by Ko teaches wherein the at least one selected image is selected from the plurality of aerial images having been used to generate the 3D point cloud (Grosgeorge, step 202 receives aerial images: "[0018] … In step 202, the system receives aerial images obtained by an unmanned aerial vehicle." Then step 204 selects an image from the aerial images having a surface region, and the surface regions are concurrently in a 3D point cloud as the aerial images are used to construct a point cloud (step 210): "[0018] … a photogrammetry process is performed on the aerial images using the identified pixel coordinates in the images …").

Regarding claims 6 and 15, Grosgeorge as modified by Ko teaches wherein the coordinates of the areas of the surface regions are connected by lines to form the drawing of the areas, the lines being formed along edges of the surface regions (Dong, Fig. 4A shows lines formed by connecting boundary coordinates).

Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Grosgeorge as modified by Ko and further in view of Akiyama et al. (US Patent Publication 20190156131, "Akiyama"). Regarding claims 4 and 13, Grosgeorge as modified by Ko doesn't expressly teach wherein the areas of the surface regions are detected using machine learning. Akiyama teaches wherein the areas of the surface regions are detected using machine learning ("[0028] When the road surface detecting section 102 receives the image taken by the camera 20 and subjected to the distortion correction process by the image correcting section 101, it detects a road surface region corresponding to the road surface, from the received (input) image. The road surface detecting section 102, for example, divides the input image into a plurality of observation blocks and performs 2-class detection to identify whether each observation block corresponds to a road surface region or a non-road surface region other than a road surface region, using a detector to which a machine learning method is applied."). Grosgeorge as modified by Ko and Akiyama are analogous as they are from the field of image rendering. Therefore it would have been obvious to an ordinarily skilled person in the art before the effective filing date of the claimed invention to have modified Grosgeorge as modified by Ko to have included the areas of the surface regions to be detected using machine learning, as taught by Akiyama. The motivation to include the modification is to have automation in detecting a surface region in an image.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Grosgeorge as modified by Ko and further in view of Bainbridge et al. (US Patent Publication 20230292647, "Bainbridge"). Regarding claims 5 and 14, Grosgeorge as modified by Ko doesn't expressly teach wherein the plurality of aerial images are orthoimages. Bainbridge teaches wherein the plurality of aerial images are orthoimages ("[0015] The images may be orthoimages and/or form or be used to form an orthomosaic map of the AOI. Where the images form an orthomosaic map, this may be generated by stitching a plurality of overlapping HR drone images." "[0016] An orthoimage is an aerial image that has been geometrically corrected ('orthorectified') such that the scale is spatially uniform across the image."). Grosgeorge as modified by Ko and Bainbridge are analogous as they are from the field of image rendering. Therefore it would have been obvious to an ordinarily skilled person in the art before the effective filing date of the claimed invention to have modified Grosgeorge as modified by Ko to have included the plurality of aerial images to be orthoimages, as taught by Bainbridge. The motivation to include the modification is to have a geometric relationship between points in the image.

Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Grosgeorge as modified by Ko and further in view of Phan et al. (US Patent Publication 2020/0109939, "Phan"). Regarding claims 7 and 16, Grosgeorge as modified by Ko doesn't expressly teach wherein forward projection is utilized to integrate the drawing into the 3D point cloud. However, Phan teaches forward projection is utilized to integrate the drawing into the 3D point cloud ("[0060] At block 705, the server 101 is configured to determine the image coordinates for each point in the initial set selected at block 315. As noted above, image coordinates can be obtained by use of the camera calibration matrix in a process also referred to as forward projection (i.e. projecting a point in three dimensions 'forward' into a captured image, as opposed to back projection, referring to projecting a point in an image 'back' into the point cloud)."). Grosgeorge as modified by Ko and Phan are analogous as they are from the field of image rendering. Therefore it would have been obvious to an ordinarily skilled person in the art before the effective filing date of the claimed invention to have modified Grosgeorge as modified by Ko to have included wherein forward projection is utilized to integrate the drawing into the 3D point cloud, as taught by Phan. The motivation to include the modification is to use a standard method of including a two-dimensional image onto a 3D point cloud.

Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Grosgeorge as modified by Ko and further in view of Phan and Goncalves et al. (US Patent Publication 2004/0167670, "Goncalves"). Regarding claims 8 and 17, Grosgeorge as modified by Ko doesn't expressly teach wherein: back projection is utilized to integrate 3D data of the 3D point cloud into the at least one selected image; and smoothing the 3D representation of a line by finding 3D points for a line in which the 3D points have a minimal squared sum of a back projection error into a plurality of images. However, Phan teaches back projection is utilized to integrate 3D data of the 3D point cloud into the at least one selected image ("[0060] At block 705, the server 101 is configured to determine the image coordinates for each point in the initial set selected at block 315. As noted above, image coordinates can be obtained by use of the camera calibration matrix in a process also referred to as forward projection (i.e. projecting a point in three dimensions 'forward' into a captured image, as opposed to back projection, referring to projecting a point in an image 'back' into the point cloud)."). Goncalves teaches smoothing the 3D representation of a line by finding 3D points for a line in which the 3D points have a minimal squared sum of a back projection error into a plurality of images ("[0147] In one embodiment, the process retrieves the 3-D coordinates for the features of the landmark from a data store, such as from the Feature Table 804 of the landmark database 606. From the 3-D coordinates, the process shifts a hypothetical pose (relative to the landmark pose) and calculates new 2-D image coordinates by projection from the 3-D coordinates and the change in pose. In one embodiment, the relative pose is determined by searching in a six-dimensional 3-D pose space, such as, for example, x, y, z, roll, pitch, and yaw (θ) for a point with a relatively small root mean square (RMS) projection error between the presently-measured feature coordinates and the projected coordinates from the 3-D feature to the image."). Grosgeorge as modified by Ko, Phan, and Goncalves are analogous as they are from the field of image rendering. Therefore it would have been obvious to an ordinarily skilled person in the art before the effective filing date of the claimed invention to have modified Grosgeorge as modified by Ko to have included back projection utilized to integrate the drawing into the 3D point cloud, as taught by Phan, and smoothing the 3D representation of a line by finding 3D points for a line in which the 3D points have a minimal squared sum of a back projection error into a plurality of images, as taught by Goncalves. The motivation to include the modification is to use a standard method of including a 3D image onto a 2D image with minimum projection error.

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Grosgeorge as modified by Ko and further in view of Zapletal et al. (US Patent Publication 20240020924, "Zapletal"). Regarding claims 9 and 18, Grosgeorge as modified by Ko is silent about wherein the drawing of the coordinates of the areas of the surface regions is displayed in an orthographic view of the 3D point cloud. However, Zapletal teaches the drawing of the coordinates of the areas of the surface regions is displayed in an orthographic view of the 3D point cloud ("[0070] The combined land-cover maps 20 and the per-class land-cover maps 21-23 may be displayed as 2D maps, whereas the classified point clouds or meshes 24 may be displayed as 3D maps. The 2D maps may either respect the occlusions by the 3D mesh from orthographic view ('vision related'), or ignore the occlusions by the 3D mesh, thus allowing to see under trees and overhangs of buildings ('ground related'), optionally showing the highest probability through all mesh layers without occlusions from orthographic view."). Grosgeorge as modified by Ko and Zapletal are analogous as they are from the field of image rendering. Therefore it would have been obvious to an ordinarily skilled person in the art before the effective filing date of the claimed invention to have modified Grosgeorge as modified by Ko to have the drawing of the coordinates of the areas of the surface regions displayed in an orthographic view of the 3D point cloud, as taught by Zapletal. The motivation to include the modification is to use a standard method of displaying a two-dimensional surface image over a 3D point cloud.

Response to Arguments

Applicant's arguments, see remarks pages 8-9, filed 12/17/2025, with respect to the rejection of claim 1 under 35 USC 103 have been fully considered and are persuasive. Therefore the rejection has been withdrawn. However, upon further consideration, a new ground of rejection has been made under 35 USC 103 as being unpatentable over Grosgeorge et al. (US Patent Publication 20210407125, "Grosgeorge") in view of Ko et al. (US Patent Publication 2021048722, "Ko"). As the Dong reference is an invalid reference, a new reference is used and the action is made non-final. Applicant's arguments, see remarks page 8, filed 12/17/2025, with respect to the objection and rejection of claim 19 under USC 112(b) have been fully considered and are persuasive. However, a new 35 USC 101 issue has been created.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tapas Mazumder whose telephone number is (571)270-746. The examiner can normally be reached M-F 8:00 AM-5:00 PM PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAPAS MAZUMDER/
Primary Examiner, Art Unit 2615
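
Background for the projection concepts cited in the claims 7-8/16-17 rejections: the sketch below illustrates forward projection (mapping a 3D point to a pixel via a camera calibration matrix, per the Phan quote) and the RMS reprojection error that Goncalves-style least-squares smoothing minimizes. This is an illustrative Python sketch only, not code from any cited reference; the intrinsics K and pose (R, t) are invented example values.

```python
import numpy as np

# Hypothetical pinhole camera intrinsics (calibration matrix K) -- example values only.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical camera pose: rotation R (identity here) and translation t.
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def forward_project(point_3d: np.ndarray) -> np.ndarray:
    """Forward projection: map a 3D world point to 2D pixel coordinates."""
    cam = R @ point_3d + t   # world frame -> camera frame
    uvw = K @ cam            # camera frame -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]  # perspective divide -> (u, v)

def rms_reprojection_error(points_3d, observed_px) -> float:
    """RMS error between observed pixels and forward-projected 3D points.
    Minimizing this over candidate 3D points or poses is the least-squares
    smoothing idea described for the Goncalves reference."""
    residuals = [forward_project(X) - uv for X, uv in zip(points_3d, observed_px)]
    return float(np.sqrt(np.mean([r @ r for r in residuals])))

# Example: a point 10 m ahead of the camera, 1 m right, 0.5 m up.
X = np.array([1.0, -0.5, 10.0])
print(forward_project(X))    # -> [740. 310.]
```

Back projection runs the opposite direction (a pixel plus depth, or a ray intersection, taken "back" into the point cloud); the smoothing described for claims 8 and 17 would search over candidate 3D line points for the set that minimizes this RMS error across multiple images.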

Prosecution Timeline

Jul 26, 2023
Application Filed
Sep 06, 2025
Non-Final Rejection — §101, §103
Dec 17, 2025
Response Filed
Mar 24, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579763 — SIGNALING POSE INFORMATION TO A SPLIT RENDERING SERVER FOR AUGMENTED REALITY COMMUNICATION SESSIONS
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12571648 — GUIDANCE FOR COLLABORATIVE MAP BUILDING AND UPDATING
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12573157 — SEE-THROUGH DISPLAY METHOD AND SEE-THROUGH DISPLAY SYSTEM
Granted Mar 10, 2026 • 2y 5m to grant

Patent 12561916 — INFORMATION PROCESSING APPARATUS
Granted Feb 24, 2026 • 2y 5m to grant

Patent 12555328 — VIDEO PLAYING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Feb 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 82%
With Interview: 98% (+16.2%)
Median Time to Grant: 2y 4m
PTA Risk: Moderate
Based on 418 resolved cases by this examiner. Grant probability derived from career allow rate.
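
The with-interview projection appears to be simple addition of the career allow rate and the interview lift — an assumption on our part, but the arithmetic matches the figures displayed above:

```python
career_allow_rate = 0.82   # examiner's career allow rate
interview_lift = 0.162     # observed lift for interviewed cases
print(f"{career_allow_rate + interview_lift:.0%}")  # -> 98%
```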

Free tier: 3 strategy analyses per month