DETAILED ACTION
This is the second office action on the merits, responsive to the amendment filed 11 December 2025. Claims 1 and 3-22 are currently pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The following addresses applicant’s remarks/amendments dated 11 December 2025.
The amendment is sufficient to overcome the objection to the drawings.
The amendment is sufficient to overcome the objection to the specification.
The amendment is sufficient to overcome the objection to claims 1-11, 15-16, and 19-21.
Claims 1, 7, 9, 11-12, 15-17, and 19-21 were amended. Claim 2 was cancelled. No new claims were added. Accordingly, claims 1 and 3-22 are pending in the application and are addressed below.
Response to Arguments
Applicant's arguments filed 11 December 2025 have been fully considered but they are not persuasive. On pages 13-15, Applicant argues that the combination of Jovanovic and Benhimane does not teach the amended limitations of claim 1.
First, Applicant argues that Jovanovic fails to disclose “creating a 3D patch that is a surface patch having depth information and a predetermined surface dimension.” However, in Fig. 23 Jovanovic teaches that “key points from the 2D images are located in the 3D point cloud.” By locating the key points from the 2D image within the 3D point cloud, Jovanovic teaches creating a patch in the 3D point cloud. Because the patch is created in the 3D point cloud, it contains depth information and a predetermined surface dimension.
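Locating a 2D key point within a 3D point cloud, as shown in Jovanovic’s Fig. 23, amounts to back-projecting the pixel into 3D using its depth and the camera intrinsics. The sketch below illustrates this with a standard pinhole camera model; the intrinsic parameters and pixel values are hypothetical and are not taken from the reference.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a 2D pixel (u, v) with a known depth into 3D camera
    coordinates using a pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical intrinsics for a 640x480 camera.
fx = fy = 500.0
cx, cy = 320.0, 240.0

# A key point at pixel (420, 340) with 2 m depth lands at (0.4, 0.4, 2.0).
p3d = backproject(420.0, 340.0, 2.0, fx, fy, cx, cy)
```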
Second, Applicant argues that Jovanovic fails to disclose “based on a determination that the points in the 3D patch are on a single plane in the environment based on the corresponding 3D coordinates, computing a descriptor for the 3D patch.” However, the reference template consists of coordinates from the environment, which Jovanovic calls the “world coordinates” (Paragraphs [0099]-[0101], Equations 3-4). In Equations 3-4, Jovanovic outlines an example of transforming between camera coordinates and the world coordinates corresponding to a reference template plane. One of ordinary skill in the art would understand that Jovanovic’s coordinate transformation could be used to translate between any coordinates in the 3D point cloud and the environment, and could combine Jovanovic’s coordinate transformation technique with Benhimane’s method of computing a feature descriptor.
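The kind of camera-to-world rigid transformation outlined in Jovanovic’s Equations 3-4 can be illustrated as follows; the rotation and translation values are hypothetical, chosen only to show the round trip between coordinate frames.

```python
import numpy as np

def camera_to_world(points_cam, R, t):
    """Map 3xN camera-frame points to the world frame: X_w = R @ X_c + t."""
    return R @ points_cam + t.reshape(3, 1)

def world_to_camera(points_world, R, t):
    """Inverse rigid transform: X_c = R.T @ (X_w - t)."""
    return R.T @ (points_world - t.reshape(3, 1))

# Hypothetical pose: 90-degree rotation about z, 1 m translation along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])

pts_cam = np.array([[1.0], [0.0], [2.0]])    # one point in camera coordinates
pts_world = camera_to_world(pts_cam, R, t)   # same point in world coordinates
back = world_to_camera(pts_world, R, t)      # round-trips to pts_cam
```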
Thus, the rejection is maintained.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 5-6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 5 is indefinite because claim 5 is dependent upon itself. Thus, it is unclear to which claim claim 5 refers.
Claim 6 is rejected due to its dependency on claim 5.
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claims 5-6 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which they depend, or for failing to include all the limitations of the claim upon which they depend. Claim 5 is rejected due to self-dependency, and claim 6 is rejected due to its dependency on claim 5. Applicant may cancel the claims, amend the claims to place them in proper dependent form, rewrite the claims in independent form, or present a sufficient showing that the dependent claims comply with the statutory requirements.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-4, 7-8, 11-13, 15-17, and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Jovanovic et al., US 20160134860 A1 (“Jovanovic”) in view of Benhimane et al., US 20140293016 A1 (“Benhimane”).
Regarding claim 1, Jovanovic discloses an apparatus comprising: a scanner (Fig. 10, light projector 230, camera 240, Paragraph [0058]) that captures a 3D map of an environment, the 3D map comprising a plurality of 3D point clouds (Fig. 23, 3D point cloud, Paragraph [0093]); a camera (Fig. 10, digital camera 240, Paragraphs [0060]-[0061]) that captures a 2D image corresponding to each 3D point cloud from the plurality of 3D point clouds (Fig. 23, 2D photographic images, Paragraph [0093]); and at least one processor coupled with the scanner and the camera (Fig. 10, processor 200, Paragraph [0058]), the at least one processor performing a method comprising: capturing a frame comprising the 3D point cloud and the 2D image (Fig. 23, 3D point cloud, 2D photographic image, Paragraph [0093]); detecting a key point in the 2D image, wherein the key point can be used as a feature (Fig. 23, Step: Key points identified… using image processing, Paragraph [0093]); creating a 3D patch that is a surface patch having depth information and a predetermined surface dimension (Fig. 23, Step: Key points from the 2D image are located in the 3D point cloud, Paragraph [0093]), wherein the 3D patch comprises points surrounding a 3D position of the key point, the 3D position and the points of the 3D patch are determined from the 3D point cloud (Fig. 23, Step: Key points from the 2D image are located in the 3D point cloud, Paragraph [0093]); based on a determination that the points in the 3D patch are on a single plane in the environment based on the corresponding 3D coordinates (Fig. 27, Equations 3-4, Paragraphs [0086], [0099]-[0101]); and merging the 3D point cloud with the plurality of 3D point clouds based on the registered frame (Fig. 33, Step 5, Paragraph [0113]; See also: Paragraph [0088]).
Jovanovic does not teach: computing a descriptor for the 3D patch; and registering the frame with a second frame by matching the descriptor for the 3D patch with a second descriptor associated with a second 3D patch from the second frame.
However, Benhimane teaches a feature detection method that computes a descriptor for the feature. The descriptor could be a vector describing the gradient direction of neighboring pixels or a vector aligned with the gravity force (Fig. 10a, step S55-S56, Paragraph [0134]; See also: Paragraph [0111]). Benhimane also teaches matching features from a reference image and current image through a feature descriptor. (Fig. 6, reference intensity image, current intensity image, feature point F1, matched feature point F1 (matched), Paragraph [0073], See also: Paragraph [0006]: descriptor is needed to match features).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jovanovic’s process for developing a 3D model of a scene by creating a descriptor for each key point which is used to match key points between frames, which is disclosed by Benhimane. One of ordinary skill in the art would have been motivated to make this modification in order to “[enable] distinguishing similar features at different physical scales”, as suggested by Benhimane (Paragraph [0157]).
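The key-point matching relied upon in the combination can be sketched as nearest-neighbor matching of descriptor vectors between two frames; the descriptor vectors below are hypothetical placeholders rather than Benhimane’s actual gradient- or gravity-aligned descriptors.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Greedy nearest-neighbor matching of descriptor vectors.

    desc_a, desc_b: (N, D) and (M, D) arrays of feature descriptors.
    Returns a list of (i, j) index pairs, one match per descriptor in desc_a.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to each candidate
        matches.append((i, int(np.argmin(dists))))
    return matches

# Hypothetical 3-element descriptors for two frames.
frame1 = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
frame2 = np.array([[0.0, 0.9, 0.1],    # close to frame1[1]
                   [0.95, 0.0, 0.0]])  # close to frame1[0]

pairs = match_descriptors(frame1, frame2)  # [(0, 1), (1, 0)]
```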
Regarding claim 3, Jovanovic, as modified in view of Benhimane, discloses the apparatus of claim 1.
Jovanovic, as modified in view of Benhimane does not teach: wherein the 3D patch is of a predetermined shape.
However, Benhimane teaches detecting a feature within a sampling window defined by a circular set of pixels (Fig. 9, feature F, circular set of pixels, Paragraph [0124]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jovanovic’s process of locating a key point in a 3D point cloud by locating the key point within a circular set of pixels, which is disclosed by Benhimane. One of ordinary skill in the art would have been motivated to make this modification in order to “[enable] distinguishing similar features at different physical scales”, as suggested by Benhimane (Paragraph [0157]).
Regarding claim 4, Jovanovic, as modified in view of Benhimane, discloses the apparatus of claim 1, wherein the 2D image is one of a color image and a grayscale image (Jovanovic, Fig. 10, camera 240, Paragraph [0060]).
Regarding claim 7, Jovanovic, as modified in view of Benhimane, discloses the apparatus of claim 1.
Jovanovic, as modified in view of Benhimane, does not teach: wherein the 3D patch is recolored using images within a predetermined temporal neighborhood.
However, Benhimane discloses that a three-dimensional point cloud, denoted as a reference mesh, may be colored. The reference mesh is updated using current three dimensional point clouds (Fig. 2a-b, steps S2-5, and S8a, Paragraph [0180]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the process for developing a 3D model of a scene, disclosed by Jovanovic and Benhimane, by updating the colors of the three-dimensional point cloud, which is disclosed by Benhimane. One of ordinary skill in the art would have been motivated to make this modification in order to encode intensity in more than one channel in different bit resolutions, as suggested by Benhimane (Paragraph [0024]).
Regarding claim 8, Jovanovic, as modified in view of Benhimane, discloses the apparatus of claim 7.
Jovanovic, as modified in view of Benhimane, does not teach: wherein colors of points in the 3D patch are used to generate the descriptor for the 3D patch.
However, Benhimane teaches that intensity can be encoded in RGB channels and that a feature descriptor can contain at least one parameter based on image intensity information, which would be color for an RGB image. (Paragraph [0024] and [0111]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the process for developing a 3D model of a scene, disclosed by Jovanovic and Benhimane, by including descriptors with intensity information, which is disclosed by Benhimane. One of ordinary skill in the art would have been motivated to make this modification in order to encode intensity in more than one channel in different bit resolutions, as suggested by Benhimane (Paragraph [0024]).
Regarding claim 11, Jovanovic, as modified in view of Benhimane, discloses the apparatus of claim 1, wherein the method further comprises: computing a first quality metric of the 3D patch, and a second quality metric of the second 3D patch (Benhimane, Fig. 10a, steps S55-S56, Paragraph [0134]; See also: Paragraph [0111]: feature descriptor contains at least one parameter based on image intensity; Fig. 6, feature point F1, matched feature point F1 (matched), Paragraph [0073]; See also: Paragraph [0006]: descriptor is needed to match features).
Jovanovic, as modified in view of Benhimane, does not teach: matching the descriptors for the first 3D patch and the second 3D patch in response to a difference between the first quality metric and the second quality metric being within a predetermined threshold.
However, Benhimane teaches comparing a similarity measure to a threshold to determine whether the three-dimensional model is updated. The similarity measure is indicative of the overlap between two three-dimensional models (Fig. 2b, steps S6-S8, Paragraphs [0064]-[0066]; See also: Paragraph [0074]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the process for developing a 3D model of a scene, disclosed by Jovanovic and Benhimane, by using a similarity measure to determine which frames are matched, which is disclosed by Benhimane. One of ordinary skill in the art would have been motivated to make this modification in order to avoid adding a large amount of redundant data, as suggested by Benhimane (Paragraph [0064]).
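The threshold test described for claim 11 reduces to comparing the difference of two quality metrics against a predetermined bound; the metric values and threshold below are hypothetical stand-ins for whatever measure an implementation would use.

```python
def quality_gated_match(q1, q2, threshold):
    """Permit a descriptor match only when the two patch quality
    metrics differ by no more than a predetermined threshold."""
    return abs(q1 - q2) <= threshold

# Hypothetical quality scores for two candidate 3D patches.
ok = quality_gated_match(0.82, 0.79, threshold=0.05)        # accepted
rejected = quality_gated_match(0.82, 0.60, threshold=0.05)  # rejected
```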
Claims 12-13 and 15-16 are method claims corresponding to apparatus claims 1, 4, 7, and 11. Claims 12-13 and 15-16 are rejected for the same reasons.
Claims 17 and 19-21 contain the same claim limitations as claims 1, 7, and 11 and are rejected for the same reasons.
Claims 5-6, 14, 18, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Jovanovic, as modified in view of Benhimane, and further in view of Frank et al., US 20180052233 A1 (“Frank”).
Regarding claim 5, Jovanovic, as modified in view of Benhimane, discloses the apparatus of claim 1.
Jovanovic, as modified in view of Benhimane, does not teach: wherein the method further comprises computing a loop closure, wherein computing the loop closure comprises: capturing the second frame from substantially the same position as the frame; computing a difference in the pose of the scanner based on a difference in orientation of matching 3D patches in the frame and the second frame; and updating the map by adjusting coordinates based on the difference.
However, Frank teaches a loop closure operation that corrects a shift in subsequent scans based on a difference between the starting and ending positions of a scan (Figs. 17-18, starting position 1510, different position 1520, loop closure correction 1530, scan positions 1610, displacement vectors 1810, Paragraphs [0076], [0080]-[0082]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the frame alignment process, disclosed by Jovanovic and Benhimane, by including a loop closure operation, which is disclosed by Frank. One of ordinary skill in the art would have been motivated to make this modification in order to overcome errors and inefficiencies, as suggested by Frank (Paragraph [0078]).
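Frank’s loop closure correction, which distributes the displacement between the starting and ending positions across the intermediate scan positions, can be roughly sketched as a linear error distribution; this interpolation is an illustrative assumption, not Frank’s exact algorithm, and the trajectory values are hypothetical.

```python
import numpy as np

def apply_loop_closure(positions, closure_error):
    """Linearly distribute the loop-closure error across scan positions.

    positions: (N, 2) estimated scan positions along a loop.
    closure_error: 2-vector, (true start - estimated end) at loop closure.
    Each position i is shifted by the fraction i/(N-1) of the error, so
    the start is untouched and the end is moved fully onto the start.
    """
    positions = np.asarray(positions, dtype=float)
    weights = np.linspace(0.0, 1.0, len(positions)).reshape(-1, 1)
    return positions + weights * np.asarray(closure_error)

# Hypothetical drifting trajectory that should return to the origin.
traj = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [0.4, 0.3]])
error = -traj[-1]                      # the end should coincide with the start
corrected = apply_loop_closure(traj, error)
# corrected[-1] is back at the origin; corrected[0] is unchanged.
```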
Regarding claim 6, Jovanovic, as modified in view of Benhimane and Frank, discloses the apparatus of claim 5, wherein an orientation of a 3D patch is compared to a direction of gravity to determine difference in orientation of matching 3D patches (Benhimane, Fig. 10a, step S55-S56, Paragraph [0134]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the process for developing a 3D model of a scene, disclosed by Jovanovic and Benhimane, by creating a descriptor for each key point which is used to match key points between frames, which is disclosed by Benhimane. One of ordinary skill in the art would have been motivated to make this modification in order to “[enable] distinguishing similar features at different physical scales”, as suggested by Benhimane (Paragraph [0157]).
Claim 14 is a method claim corresponding to apparatus claim 5 and is rejected for the same reasons.
Claims 18 and 22 contain the same limitations as claims 5 and 6 and are rejected for the same reasons.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Jovanovic, as modified in view of Benhimane, and further in view of Narang et al., US 20180204338 A1 (“Narang”).
Regarding claim 9, Jovanovic, as modified in view of Benhimane, discloses the apparatus of claim 1, wherein an orientation of the 3D patch is defined (Benhimane, Fig. 10a, steps S55-S56, Paragraph [0134]).
Jovanovic, as modified in view of Benhimane, does not teach: defining the orientation of the patch based on a plane normal to the patch.
However, Narang teaches selecting points that allow an accurate calculation of a normal surface, which allows the system to find better matches when matching 3D scans (Paragraph [0045]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the descriptors disclosed by Jovanovic, as modified in view of Benhimane, by using a surface normal to the patch to describe the orientation of the patch, which is taught by Narang. One of ordinary skill in the art would have been motivated to make this modification in order to “find better matches in the matching step”, as suggested by Narang (Paragraph [0045]).
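Defining a patch orientation from its surface normal, as attributed to Narang, can be illustrated by a least-squares plane fit over the patch points; the sample patch below is hypothetical.

```python
import numpy as np

def patch_normal(points):
    """Estimate the unit surface normal of a 3D patch.

    points: (N, 3) array of patch points. The normal is the singular
    vector associated with the smallest singular value of the
    mean-centered point set (a least-squares plane fit).
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

# Hypothetical patch lying in the z = 0 plane; its normal is +/- z.
patch = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0]])
n = patch_normal(patch)
```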
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Jovanovic, as modified in view of Benhimane, in further view of Dal Mutto et al., US 20200372626 A1 (“Dal Mutto”).
Regarding claim 10, Jovanovic, as modified in view of Benhimane, discloses the apparatus of claim 1.
Jovanovic, as modified in view of Benhimane, does not teach: wherein the key point is detected using an artificial intelligence model.
However, Dal Mutto teaches applying a convolutional neural network to extract feature vectors (Figs. 10A-B, method 1000, Paragraphs [0138]-[0141]).
It would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Jovanovic’s process of locating key points by using a CNN to identify features, which is disclosed by Dal Mutto. One of ordinary skill in the art would have been motivated to make this modification in order to “[increase] the likelihood that two different 3D models of substantially similar objects (or the same objects) will be voxelized from the same perspective”, as suggested by Dal Mutto (Paragraph [0140]).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL N NGUYEN whose telephone number is (571)270-5405. The examiner can normally be reached Monday - Friday 8 am - 5:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yuqing Xiao can be reached at (571) 270-3603. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RACHEL NGUYEN/Examiner, Art Unit 3645
/YUQING XIAO/Supervisory Patent Examiner, Art Unit 3645