Prosecution Insights
Last updated: April 19, 2026
Application No. 18/194,298

GENERATING AND VALIDATING A VIRTUAL 3D REPRESENTATION OF A REAL-WORLD STRUCTURE

Final Rejection — §102, §103, §112
Filed: Mar 31, 2023
Examiner: MCCULLEY, RYAN D
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Hover Inc.
OA Round: 6 (Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 7-8
Time to Grant: 2y 6m
Grant Probability With Interview: 99%

Examiner Intelligence

Grants 70% — above average
Career Allow Rate: 70% (344 granted / 493 resolved; +7.8% vs TC avg)
Interview Lift: +29.7% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 6m average prosecution; 31 currently pending
Career History: 524 total applications across all art units

Statute-Specific Performance

§101: 7.2% (-32.8% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§103: 51.6% (+11.6% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Comparisons are against Tech Center average estimates • Based on career data from 493 resolved cases
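The headline examiner figures above follow from simple arithmetic on the raw counts. As a quick check, here is a minimal sketch assuming the 344-of-493 resolved split shown above (the variable names are illustrative, not from the source):

```python
# Reproduce the dashboard's headline examiner statistics from the raw
# counts shown above: 344 granted out of 493 resolved cases.
granted, resolved = 344, 493

career_allow_rate = granted / resolved * 100   # ~69.8%, displayed as 70%
tc_average = career_allow_rate - 7.8           # implied Tech Center average

print(f"Career allow rate: {career_allow_rate:.1f}%")
print(f"Implied TC 2600 average: {tc_average:.1f}%")
```

The 70% shown in the header is simply this career allow rate rounded to the nearest whole percent.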

Office Action

Grounds of rejection: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to Applicant’s amendment/response filed on 12 September 2025, which has been entered and made of record.

Response to Arguments

Applicant’s arguments have been fully considered but they are moot in view of the new grounds of rejection presented in this Office Action.

Claim Rejections - 35 USC § 112

The previous rejections under 35 U.S.C. 112 are withdrawn in view of the claim amendments. However, new rejections under 35 U.S.C. 112 are introduced in response to the new claims. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 49 and 50 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claims 49 and 50 recite “the metadata comprising the geometrical constraints” in the last line, which lacks proper antecedent basis. Since the earlier-recited metadata is not described as comprising the geometrical constraints, the scope of the claim is unclear.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 49 and 50 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Fathi et al. (US 2018/0053347; hereinafter “Fathi”).

Regarding claim 49, Fathi discloses A method for generating a virtual 3D representation of a real-world structure (“generating verified wireframes corresponding to at least part of a structure,” abstract), the method comprising: receiving a plurality of ground-based lateral images (“The image-capture devices from which the 2D images are generated can be integrated into a device such as a smartphone,” para. 40; “ground-based (mobile) imagery,” para. 47) of a real-world rectangular-shaped sub-structure of a building comprising a plurality of rectangular-shaped sub-structures, wherein the sub-structures are each formed by walls (“typical residential building,” para. 55); identifying one or more landmarks in each image of the plurality of images, wherein each of the one or more landmarks is not a line (“a set of 2D and/or 3D features could be extracted and matched automatically among the different datasets,” para. 47); generating metadata from the one or more identified landmarks in each image of the plurality of images, wherein the metadata comprises a position of the identified landmarks (“The 2D digital images will at least partially overlap with regard to the relevant structural features,” para. 41; “perform automated feature extraction,” para. 90); providing geometric constraints for the one or more identified landmarks, wherein the geometric constraints comprise geometric-properties of the identified landmarks corresponding to the position of the identified landmarks (“different types of constraints could be loaded to the tool based on the current active module including: 1) a geometric constraint that enforces or limits connectivity of an object to other objects … constraints that describe relational connectivity (e.g., a floor object is enclosed by walls, a roof structure must be above ground plane, etc.),” para. 90; “A rule-based model can be implemented once the adjustment is completed so that the verified wireframe can be verified to be geometrically valid according to construction rules and conventions,” para. 93); correlating a first landmark of the one or more identified landmarks in a first image of the plurality of images with a second landmark of the one or more identified landmarks in a second image of the plurality of images (“Point clouds derived from stereographic image capture methodologies,” para. 42; “Corresponding 2D features can be converted into 3D features using visual triangulation techniques,” para. 47); generating the 3D representation based on: the identified landmarks in the plurality of images; the correlations between the first and second identified landmarks; and the metadata comprising the geometrical constraints for the identified landmarks (the final generated 3D wireframe representation, e.g. Verified Wireframe 124 of Fig. 1, is based on the landmarks and constraints as discussed above).

Regarding claim 50, it is rejected using the same citations and rationales described in the rejection of claim 49.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21, 22, 25, 26, 31, 32, 34, 35, 38, 39, 44, 45, 47, 48, 51, 52, and 54-58 are rejected under 35 U.S.C. 103 as being unpatentable over Fathi et al. (US 2018/0053347; hereinafter “Fathi”) in view of Matsunobu et al. (US 2019/0333269; hereinafter “Matsunobu”).

Regarding claim 21, Fathi discloses A method for generating a virtual 3D representation of a real-world structure (“generating verified wireframes corresponding to at least part of a structure,” abstract), the method comprising: receiving a plurality of ground-based lateral images (“The image-capture devices from which the 2D images are generated can be integrated into a device such as a smartphone,” para. 40; “ground-based (mobile) imagery,” para. 47) of a real-world rectangular-shaped sub-structure of a building comprising a plurality of rectangular-shaped sub-structures, wherein the sub-structures are each formed by walls (“typical residential building,” para. 55); identifying one or more landmarks in each image of the plurality of images, wherein each of the one or more landmarks is not a line (“a set of 2D and/or 3D features could be extracted and matched automatically among the different datasets,” para. 47); generating metadata associated with the one or more identified landmarks in each image of the plurality of images, wherein the metadata comprises a geometric constraint of the one or more identified landmarks, wherein the geometric constraint comprises geometric properties including a position of the identified landmarks relative to the image the landmark is associated with (“different types of constraints could be loaded to the tool based on the current active module including: 1) a geometric constraint that enforces or limits connectivity of an object to other objects … constraints that describe relational connectivity (e.g., a floor object is enclosed by walls, a roof structure must be above ground plane, etc.),” para. 90; “A rule-based model can be implemented once the adjustment is completed so that the verified wireframe can be verified to be geometrically valid according to construction rules and conventions,” para. 93); correlating a first landmark of the one or more identified landmarks in a first image of the plurality of images with a second landmark of the one or more identified landmarks in a second image of the plurality of images (“Point clouds derived from stereographic image capture methodologies,” para. 42; “Corresponding 2D features can be converted into 3D features using visual triangulation techniques,” para. 47); generating the 3D representation based on: the identified landmarks in the plurality of images; the correlations between the first and second identified landmarks; and the metadata comprising the geometrical constraints for the identified landmarks (the final generated 3D wireframe representation, e.g. Verified Wireframe 124 of Fig. 1, is based on the landmarks and constraints as discussed above); and validating the 3D representation, wherein validating the 3D representation comprises: performing a comparison of the 3D representation and a reference representation of the real-world structure, wherein the reference representation comprises a digital image of the real-world structure (“projection or overlay of the unverified wireframe ... on or over at least one 2D image representation … verifying that at least the vertices of the unverified wireframe are or are not accurately matched or aligned with the corresponding vertices on the 2D image,” para. 31).

Fathi does not disclose calculating a reprojection error based on the comparison; and determining whether the 3D representation is valid by determining whether a value of the reprojection error is more or is less than a threshold value.
In the same art of generating 3D models, Matsunobu teaches calculating a reprojection error based on the comparison; and determining whether the 3D representation is valid by determining whether a value of the reprojection error is more or is less than a threshold value (“the reliability of the three-dimensional model may utilize, as an index, an error between a reprojection point which is obtained by reprojecting a three-dimensional point in the three-dimensional model onto an imaging plane of the multi-viewpoint image,” para. 29; “selects a model with a low reprojection error,” para. 107; “calculate reliability of each three-dimensional point based on a reprojection error of each three-dimensional point, and use only points having high reliability,” para. 130; a “low” reprojection error is considered a threshold). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Matsunobu to Fathi. The motivation would have been “to improve the accuracy of three-dimensional model” (Matsunobu, para. 26).

Regarding claim 22, the combination of Fathi and Matsunobu renders obvious wherein the metadata further comprises a classification of the one or more identified landmarks (“an edge with a certain edge type cannot be connected to another edge with an edge type that does not conform to the first edge type,” Fathi, para. 90; “a collection of smaller structure or elements (e.g., doors, windows, etc.) associated with a larger structure or element (e.g., the overall dimensions of a building) where information about such collection of smaller and larger structure or elements can be processed,” Fathi, para. 28).

Regarding claim 25, the combination of Fathi and Matsunobu renders obvious generating a surface of the sub-structure based on the identified landmarks (“3D reconstructions of structures or elements comprising planar surfaces,” Fathi, para. 39).

Regarding claim 26, the combination of Fathi and Matsunobu renders obvious generating one or more other surfaces based on the identified landmarks (“3D reconstructions of structures or elements comprising planar surfaces,” Fathi, para. 39).

Regarding claim 31, the combination of Fathi and Matsunobu renders obvious generating a data map of correlations between the first and second identified landmarks (“Point clouds derived from stereographic image capture methodologies,” Fathi, para. 42; “a set of 2D and/or 3D features could be extracted and matched automatically among the different datasets,” Fathi, para. 47).

Regarding claim 32, the combination of Fathi and Matsunobu renders obvious providing a scale for the 3D representation (“project dimensions on top of a wireframe or point cloud or other 3D structure,” Fathi, para. 153).

Regarding claims 34, 35, 38, 39, 44, and 45, they are rejected using the same citations and rationales described in the rejections of claims 21, 22, 25, 26, 31, and 32, respectively.

Regarding claims 47 and 48, they are rejected using the same citations and rationales described in the rejections of claims 21 and 34, respectively, except that claims 47 and 48 replace “a building comprising a plurality of rectangular-shaped sub-structures” with “the real-world structure,” which is rejected using the same citation.

Regarding claim 51, the combination of Fathi and Matsunobu renders obvious generating a plurality of outlines of the real-world structure; and generating the reference representation by combining the plurality of outlines or images depicting the outlines (“With the wireframe projected over the image or point cloud, endpoints and edges can be compared to features of the underlying 2D image or 3D representation,” Fathi, para. 163).
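As an editorial aside, the reprojection-error validation that the rejection attributes to Matsunobu is a standard photogrammetry check: project the reconstructed 3D points back into an image and compare against the observed 2D landmarks. A minimal sketch follows, assuming a simple pinhole camera model and an illustrative 2-pixel threshold (both assumptions of this sketch, not details taken from either cited reference):

```python
import numpy as np

def reproject(points_3d, K, R, t):
    """Project 3D points into an image with intrinsics K and pose (R, t)."""
    cam = R @ points_3d.T + t.reshape(3, 1)  # world -> camera coordinates
    px = K @ cam                             # camera -> homogeneous pixels
    return (px[:2] / px[2]).T                # perspective divide -> (N, 2)

def is_valid_model(points_3d, observed_2d, K, R, t, threshold_px=2.0):
    """Validate a 3D model by mean reprojection error vs. a pixel threshold."""
    errors = np.linalg.norm(reproject(points_3d, K, R, t) - observed_2d, axis=1)
    mean_err = errors.mean()
    return mean_err < threshold_px, mean_err

# Toy example: identity pose and illustrative pinhole intrinsics.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 4.0], [1.0, 0.5, 5.0]])
obs = reproject(pts, K, R, t)                # perfect observations -> zero error
valid, err = is_valid_model(pts, obs, K, R, t)
print(valid, round(err, 3))                  # -> True 0.0
```

In practice the threshold and the choice of mean versus per-point error are design decisions; the cited references describe the idea (claim 53 below turns on how the threshold is chosen) rather than a specific formula.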
Regarding claim 52, the combination of Fathi and Matsunobu renders obvious wherein generating the reference representation comprises generating a wireframe representation of the real-world structure (“The 3D representation can comprise … surface mesh information, computer aided design (CAD) model information or building information modeling (BIM) model incorporating the structure or element of interest,” Fathi, para. 162; “the unverified wireframe is projected on the next 2D image or 3D representation. The comparison of the unverified wireframe can, in some implementations, continue until verification with all of the 2D images and/or 3D representation is complete,” Fathi, para. 164; a “surface mesh” teaches a Broadest Reasonable Interpretation of “wireframe”).

Regarding claim 54, the combination of Fathi and Matsunobu renders obvious wherein calculating the reprojection error comprises calculating reprojection errors for different portions of the 3D representation, wherein determining whether the 3D representation is valid comprises determining whether each of the different portions of the 3D representation is valid (“comparison by a user of the unverified wireframe with a portion or portions of one or more 2D images or point cloud of the structure or element of interest,” Fathi, para. 25; “fully or partially automated or manual wireframe verification and adjustment process … ensure that the verified wireframe is or is not an accurate model of all or part of the structure or element of interest,” Fathi, para. 31; “verification and adjustment of the corresponding portion of the wireframe,” Fathi, para. 64; “the reliability of the three-dimensional model may utilize, as an index, an error between a reprojection point which is obtained by reprojecting a three-dimensional point in the three-dimensional model onto an imaging plane of the multi-viewpoint image,” Matsunobu, para. 29; “an error between a part of the first three-dimensional model and a part of the second three-dimensional model,” Matsunobu, para. 30; see claim 21 for motivation to combine).

Regarding claim 55, the combination of Fathi and Matsunobu renders obvious wherein the reference representation further comprises one or more of: an outline of the real-world structure, a wireframe representation of the real-world structure, and a second 3D representation of the real-world structure (“The 3D representation can comprise … surface mesh information, computer aided design (CAD) model information or building information modeling (BIM) model incorporating the structure or element of interest,” Fathi, para. 162; “the unverified wireframe is projected on the next 2D image or 3D representation. The comparison of the unverified wireframe can, in some implementations, continue until verification with all of the 2D images and/or 3D representation is complete,” Fathi, para. 164).

Regarding claims 56-58, they are rejected using the same citations and rationales described in the rejection of claim 55.

Claims 27 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Fathi and Matsunobu, and further in view of Keane (US 2019/0188337).

Regarding claim 27, the combination of Fathi and Matsunobu does not disclose wherein the surfaces comprise one or more of a side of the sub-structure and a back of the sub-structure, wherein the one or more of the side of the sub-structure and the back of the sub-structure are not visible in the plurality of images. In the same art of 3D reconstruction, Keane teaches wherein the surfaces comprise one or more of a side of the sub-structure and a back of the sub-structure, wherein the one or more of the side of the sub-structure and the back of the sub-structure are not visible in the plurality of images (“the entire external structure including siding, walls ... and/or the roof can be modeled using the techniques discussed herein,” para. 25; “features within images and/or missing features within images may be catalogued (e.g., spatial cataloguing) within one or more databases for retrieval and/or analysis. For example, measurements of features (e.g., shape or size of component(s), shape or size of ridge, eave, rake, or the like, size of building(s), height of tree(s), footprint(s) of man-made or non-man made feature(s)) may be obtained,” para. 90). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Keane to the combination of Fathi and Matsunobu. The motivation would have been “to provide accuracy and/or precision for each model” (Keane, para. 124) and “to be more cost effective” (Keane, para. 3).

Regarding claim 40, it is rejected using the same citations and rationales described in the rejection of claim 27.

Claims 28 and 41 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Fathi and Matsunobu, and further in view of Vicenzotti (US 2017/0316573).

Regarding claim 28, the combination of Fathi and Matsunobu does not disclose correlating the first identified landmark to the second identified landmark according to the metadata of the first and second identified landmarks. In the same art of feature matching, Vicenzotti teaches correlating the first identified landmark to the second identified landmark according to the metadata of the first and second identified landmarks (“detect one or more features or visually ‘interesting’ parts in the paired first and second digital images … Each feature can be given a ‘descriptor’ which allows features in the paired first and second digital images to be compared to see if they match features,” para. 14). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the feature-matching teachings of Vicenzotti to the combination of Fathi and Matsunobu. The motivation would have been “to be more accurate” (Vicenzotti, para. 25).

Regarding claim 41, it is rejected using the same citations and rationales described in the rejection of claim 28.

Claims 33 and 46 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Fathi and Matsunobu, and further in view of Colburn et al. (US 2020/0116493; hereinafter “Colburn”).

Regarding claim 33, the combination of Fathi and Matsunobu does not disclose where the scale is based on measurements of structural aspects within an image of the plurality of images. In the same art of 3D reconstruction, Colburn teaches where the scale is based on measurements of structural aspects within an image of the plurality of images (“measure sizes in images of objects of known size, and use such information to estimate room width, length and/or height,” para. 14). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Colburn to the combination of Fathi and Matsunobu. The motivation would have been for “greater accuracy” (Colburn, para. 15).

Regarding claim 46, it is rejected using the same citations and rationales described in the rejection of claim 33.

Claim 53 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Fathi and Matsunobu, and further in view of Arora et al. (US 2015/0381968; hereinafter “Arora”).

Regarding claim 53, the combination of Fathi and Matsunobu does not disclose setting the threshold value based upon a category of the real-world structure, wherein different threshold values are associated with different categories of real-world structures.
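Again as an editorial aside, the descriptor-based correlation that Vicenzotti is cited for is standard feature-matching practice. A minimal sketch, assuming nearest-neighbour matching of descriptor vectors with a Lowe-style ratio test (the ratio test is a common-practice addition of this sketch, not something the citation specifies):

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b.

    A match is kept only if the best distance is clearly smaller than the
    second-best (the ratio test), which rejects ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors: a[0] has one clear match in b; a[1] is ambiguous
# (two candidates at equal distance) and is rejected by the ratio test.
a = np.array([[1.0, 0.0], [0.5, 0.5]])
b = np.array([[0.0, 1.0], [1.0, 0.1], [0.6, 0.4], [0.4, 0.6]])
print(match_features(a, b))  # -> [(0, 1)]
```

Real pipelines use richer descriptors (SIFT, ORB, learned features); the comparison loop is the same idea Vicenzotti describes as comparing per-feature “descriptors”.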
In the same art of 3D reconstruction, Arora teaches setting the threshold value based upon a category of the real-world structure, wherein different threshold values are associated with different categories of real-world structures (“a cuboid shaped product … The closest points on the original point cloud are compared to the refined point cloud to estimate an error-in-fit, and compare against a threshold to determine whether the object is a cuboid or not,” para. 48; “cylindrically shaped product … The closest points of the original point cloud are compared to the refined point cloud to estimate an error-in-fit and, if the error-in-fit is greater than a threshold, it is declared a bottle,” para. 51; “If the error-in-fit for both cuboid and bottle is higher than their respective thresholds, we deem the object as a generic shape class,” para. 52). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Arora to the combination of Fathi and Matsunobu. The motivation would have been to “efficiently generate a 3D model” (Arora, para. 47) and “enhance the matching accuracy” (Arora, para. 59).

Conclusion

Applicant’s amendment necessitated any new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan McCulley whose telephone number is (571) 270-3754. The examiner can normally be reached Monday through Friday, 8:00am - 4:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN MCCULLEY/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Mar 31, 2023: Application Filed
Apr 03, 2023: Response after Non-Final Action
Jun 21, 2023: Non-Final Rejection — §102, §103, §112
Sep 20, 2023: Examiner Interview Summary
Sep 20, 2023: Applicant Interview (Telephonic)
Sep 28, 2023: Response Filed
Oct 05, 2023: Final Rejection — §102, §103, §112
Jan 10, 2024: Request for Continued Examination
Jan 17, 2024: Response after Non-Final Action
Feb 16, 2024: Non-Final Rejection — §102, §103, §112
May 20, 2024: Applicant Interview (Telephonic)
May 20, 2024: Examiner Interview Summary
May 22, 2024: Response Filed
Sep 04, 2024: Final Rejection — §102, §103, §112
Feb 03, 2025: Request for Continued Examination
Feb 04, 2025: Response after Non-Final Action
Mar 11, 2025: Non-Final Rejection — §102, §103, §112
Aug 22, 2025: Examiner Interview Summary
Aug 22, 2025: Applicant Interview (Telephonic)
Sep 12, 2025: Response Filed
Oct 03, 2025: Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602859: INFORMATION PROCESSING SYSTEM, RAY TRACE METHOD, AND PROGRAM FOR RADIO WAVE PROPAGATION SIMULATION (2y 5m to grant; granted Apr 14, 2026)
Patent 12586290: TEMPORALLY COHERENT VOLUMETRIC VIDEO (2y 5m to grant; granted Mar 24, 2026)
Patent 12555335: SYSTEMS AND METHODS FOR ENHANCING AND DEVELOPING ACCIDENT SCENE VISUALIZATIONS (2y 5m to grant; granted Feb 17, 2026)
Patent 12548241: HIGH-FIDELITY THREE-DIMENSIONAL ASSET ENCODING (2y 5m to grant; granted Feb 10, 2026)
Patent 12541904: ELECTRONIC DEVICE, METHOD FOR PROMPTING FUNCTION SETTING OF ELECTRONIC DEVICE, AND METHOD FOR PLAYING PROMPT FILE (2y 5m to grant; granted Feb 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 70%
With Interview: 99% (+29.7%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 493 resolved cases by this examiner. Grant probability derived from career allow rate.
