Prosecution Insights
Last updated: April 19, 2026
Application No. 18/660,176

PROVIDING ACCESS TO MESH GEOMETRY FROM IMAGES OF OBJECTS

Final Rejection §103
Filed: May 09, 2024
Examiner: CRAWFORD, JACINTA M
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 7m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 88% — above average (709 granted / 805 resolved; +26.1% vs TC avg)
Interview Lift: +9.2% (moderate lift, measured on resolved cases with interview)
Avg Prosecution: 2y 7m typical timeline (29 applications currently pending)
Total Applications: 834 across all art units (career history)

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§103: 55.1% (+15.1% vs TC avg)
§102: 5.2% (-34.8% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 805 resolved cases

Office Action

§103
DETAILED ACTION

This action is in response to communications: Amendment filed February 20, 2026. Claims 1-25 are pending in this case. Claims 1, 4, 5, 8, 10, 11, 14, 15, 18, 20, 21, 23, and 24 have been newly amended. No claims have been newly added or cancelled. This action is made FINAL.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 11-14, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Pearson et al. (US 12,014,433).

As to claim 1, Pearson et al. disclose a method, comprising:

receiving, from a camera associated with an application, an image of an environment (column 7, line 64 through column 8, line 29 notes device 110 running application 112 configured to perform 3D scanning and mapping of the exterior of the object 130 and its associated external features; device 110 may be configured to take images of the exterior of the object 130, e.g. using one or more camera(s) for taking images or video of the object 130, where Figure 2A, column 11, line 25 through column 12, line 35 further notes capturing images of an environment associated with the exterior of an object, e.g. via a drone; column 8, line 45 through column 9, line 10 notes device 120 running application 122 may be configured to scan and map the interior of the object 130 and its associated internal features; device 120 may be configured to take images of the interior of the object 130, e.g. using one or more camera(s) for taking images or video of the object 130, where Figure 2B, column 12, line 36 through column 13, line 62 further notes capturing images of an environment associated with the interior of the object, e.g. via a user device, and where, based on Figure 1, applications 112, 122, and 132 of respective devices 110, 120, and 130 may all be associated with each other and with model generation and display system 100);

determining, based on the image of the environment, a location of an object in the environment (column 8, lines 19-22 notes device 110 may be able to collect positioning data, e.g. through a global positioning system (GPS) chip or sensor, while collecting images or other data associated with the exterior of the object 130; column 9, lines 2-5 notes device 120 may be able to collect positioning data, e.g. through a global positioning system (GPS) chip or sensor, while collecting images or other data associated with the interior of the object 130);

determining a mesh geometry representation of the object, the mesh geometry representation being from a region in a geographic indexing system identified based on the location of the object (column 8, lines 30-44 notes application 112 (of device 110) may be configured to take the images and depth information and generate an exterior mesh; the application 112 may take the collected positioning data from device 110 and associate it with the exterior mesh, e.g. associate parts of the exterior mesh with GPS coordinates, for geo-referencing purposes when generating a combined mesh; application 112 then outputs the exterior mesh, e.g. point cloud, and sends it to the model generation and display system 100; column 9, lines 11-25 notes application 122 (of device 120) may be configured to take the images and depth information and generate an interior mesh; the application 122 may take the collected positioning data from device 120 and associate it with the interior mesh, e.g. associate parts of the interior mesh with GPS coordinates, for geo-referencing purposes when generating a combined mesh; application 122 then outputs the interior mesh, e.g. point cloud, and sends it to the model generation and display system 100; column 9, line 26 through column 10, line 38 notes model generation and display system 100 including components to combine the exterior and interior meshes as a combined mesh using geo-referencing, e.g. GPS coordinates, then further rendering the combined mesh into a 3D model and appending images to different locations of the 3D model, where column 6, lines 47-64 further notes the 3D model can also be a wireframe mesh); and

providing the mesh geometry representation of the object to the application (column 10, lines 39-64 notes model generation and display system 100 in communication with user device 130 of a user; the user device may run application 132, which may be configured to interface with the model generation and display system 100 in order to oversee, control, or direct the generation and/or display of the model of object 130; e.g., through application 132 of user device 130, the user may be able to select the appropriate interior mesh and exterior mesh to be combined, ensure that the meshes are properly oriented and aligned (if needed), correct any obvious mistakes in dimension measurements, review and ensure that captured images are appended to the appropriate locations of the model, and so forth; thus it is considered that the interior and exterior meshes as well as the 3D model may be provided to the application 132 of the user device 130).

As noted above, Pearson et al. describe generating interior and exterior meshes which are further combined into a 3D model, which may also be considered a mesh. Also as noted above, Pearson et al. further describe that, through application 132 of user device 130, the interior and exterior meshes as well as the 3D model may be analyzed and/or manipulated by a user; it is thus obvious that the interior and exterior meshes as well as the 3D model are provided to the application 132 of the user device 130 in order to perform such analysis and/or manipulations, yielding predictable results, without changing the scope of the invention.
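Editor's note: for readers outside prosecution, the claim 1 flow amounts to a geometry lookup keyed by a geographic index: image in, location out, region identified, mesh returned. The minimal TypeScript sketch below illustrates that pattern only; the quantized grid, the function names, and the delivery callback are all illustrative assumptions, not the applicant's or Pearson's actual interfaces.

```typescript
// Hypothetical sketch of the claim 1 flow: image -> location -> indexed region -> mesh.
// Every name and type here is an illustrative assumption, not an API from the record.

interface LatLng { lat: number; lng: number; }

interface MeshGeometry {
  vertices: Float32Array; // packed x,y,z triples
  indices: Uint32Array;   // triangle indices, 3 per face
}

type RegionId = string; // a cell token in a geographic indexing system (S2-style)

// Toy index: quantize lat/lng to a coarse grid cell and store meshes per cell.
const meshStore = new Map<RegionId, Map<string, MeshGeometry>>();

function regionForLocation(loc: LatLng): RegionId {
  return `${Math.floor(loc.lat * 10)}:${Math.floor(loc.lng * 10)}`;
}

// Stub localization; a real system would use GPS or a visual positioning service.
function localizeObject(_image: unknown): LatLng {
  return { lat: 37.422, lng: -122.084 };
}

function provideMeshForImage(
  image: unknown,
  objectId: string,
  deliver: (mesh: MeshGeometry) => void,
): void {
  const location = localizeObject(image);     // determine the object's location
  const region = regionForLocation(location); // identify the indexed region
  const mesh = meshStore.get(region)?.get(objectId);
  if (mesh) deliver(mesh);                    // provide the mesh to the application
}
```

The quantized grid is a stand-in for whatever cell scheme a real system would use; the point is only that the lookup is keyed first by region, then by object.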
As to claim 2, Pearson et al. disclose the mesh geometry representation of the object has a level of detail (LOD) indicative of a number of edges of the mesh geometry representation of the object (e.g. Figures 6A and 6B illustrate the exterior of the virtualized model of the object, e.g. a building, Figure 7 illustrates the interior of the virtualized model of the building, Figure 8 illustrates a particular room of the virtualized model of the building, and Figures 9, 10, 12A, and 12B illustrate a roof view of the virtualized model of the building, where the Figures illustrate a control table, e.g. a touchscreen table, for controlling display of the different views of the virtualized model of the building, and where column 18, lines 5-11 and lines 60-67 notes house toggles 640/740 presented on touch screen table 615/715, which allow the user to select through the options of seeing the whole house, individual floors, individual rooms, and/or the roof, thus considered "a level of detail indicative of a number of edges").

As to claim 3, Pearson et al. disclose the LOD is specified by the application (e.g. as noted in claim 2, the Figures illustrate a control table, e.g. a touchscreen table, for controlling display of the different views of the virtualized model of the building, which, as noted in claim 1, may be performed via user device 130 with application 132).

As to claim 4, Pearson et al. disclose the LOD is represented by a first number and a second number, where the first number indicates the number of edges of the object and the second number indicates a number of semantic features of the object (as noted in claim 2, the Figures illustrate a number of views at different levels of the object, e.g. the building as a whole, the interior of the building, and the roof of the building, and further illustrate a number of semantic features, e.g. labels for different parts of the exterior and/or interior of the building).

As to claim 11, Pearson et al. disclose a computer program product comprising a non-transitory storage medium (Figure 14, main memory 1406, ROM 1408, and/or storage device 1410), the computer program product including code that, when executed by processing circuitry (Figure 14, processor(s) 1404), causes the processing circuitry to perform a method (column 22, lines 44-50 notes software instructions and/or other executable code may be read from a computer readable storage medium and executed by one or more hardware processors and/or any other suitable computing devices), the method comprising the method as outlined in claim 1. Please see the rejection and rationale of claim 1.

Claims 12-14 are similar in scope to claims 2-4, respectively, and are therefore rejected under similar rationale.

As to claim 21, Pearson et al. disclose a system (Figure 14, system), comprising: memory (main memory 1406, ROM 1408, and/or storage device 1410); and processing circuitry coupled to the memory (processor(s) 1404 coupled to memories 1406, 1408, and 1410 via bus 1402), the processing circuitry being configured to perform the method as outlined in claim 1. Please see the rejection and rationale of claim 1.

Claim 22 is similar in scope to claim 2, and is therefore rejected under similar rationale.
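Editor's note: the two-number LOD of claim 4 is easy to picture as a data structure. Below is a minimal sketch assuming a plain record type; beyond the two counts recited in the claim, every field and function name is an invention for illustration.

```typescript
// Hypothetical two-number LOD per claim 4: edge count plus semantic-feature count.
interface LevelOfDetail {
  edgeCount: number;     // first number: edges in the mesh representation
  semanticCount: number; // second number: labeled semantic features (doors, roof, ...)
}

interface DetailedMesh {
  lod: LevelOfDetail;
  semanticLabels: string[]; // e.g. ["roof", "front door", "garage"]
}

// Claim 3 has the application specify the LOD; one plausible reading is that
// the service returns a stored mesh only if it meets the requested detail.
function meetsRequest(mesh: DetailedMesh, requested: LevelOfDetail): boolean {
  return mesh.lod.edgeCount >= requested.edgeCount
      && mesh.lod.semanticCount >= requested.semanticCount;
}
```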
Claims 5-7, 15-17, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Pearson et al. (US 12,014,433) as applied to claims 1, 11, and 21 above, and further in view of Altman et al. (US 2013/0069944).

As to claim 5, Pearson et al. disclose determining the mesh geometry representation of the object (e.g. as noted in claim 1, the exterior and interior meshes as well as the 3D model), but do not disclose the recited façade service steps. Altman et al. disclose that determining the mesh geometry representation of the object (Figures 3 and 4) includes: sending the location of the object to a façade service (step 402, [0088] notes receiving an image and a geo-location tag of the image, where Figure 3, [0068] notes map generation system 300 includes a façade database store 306 for storing facades for building models for use in the overall 3D map system); and receiving, from the façade service, a data structure representing the region, the region representing a section of the Earth, the data structure including the mesh geometry representation of the object (step 406, [0088] notes determining a building model for a physical building corresponding to an object in the image based on the geo-location tag, [0089] notes the 3D building model is representative of a physical building in the real world, e.g. on Earth, and step 408, [0088] further notes mapping, on a region-by-region basis, the image to a stored façade of the building model, where [0070], [0071] notes map generation system 300 further includes a render module 308 for generating a 3D map environment, e.g. mapping the photograph image to a building model corresponding to the physical building, e.g. as a texture).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Pearson et al.'s method of determining a mesh geometry representation of an object, e.g. a building, with Altman et al.'s method of accessing a façade database storing facades for building models and providing the façade to map a building model corresponding to a physical building in an image, such that representations of an object, e.g. buildings, may be accurately depicted, thus enhancing the system (see Background and Disclosure of Invention of Altman et al.).

As to claim 6, Pearson et al. modified with Altman et al. disclose the object includes a building and terrain (Pearson, Figure 2A illustrates capturing an image of a building and terrain, e.g. from an aerial view; modified with Altman, [0032] notes a photograph image can be mapped to a specific building, e.g. stored in the façade database, to render a 3D environment, the 3D environment defined as a 3D map including a virtual representation of physical-world buildings, and can also include 3D models of landscape and terrain).

As to claim 7, Pearson et al. modified with Altman et al. disclose the mesh geometry representation of the building is separate from the mesh geometry representation of the terrain (Pearson, as noted in claim 6, the image captured may be of a building and terrain, where, as noted in claim 1, the mesh geometry representation includes the exterior mesh of the object, e.g. the building, not including any terrain, and thus may be considered separate from the terrain).

Claims 15-17 are similar in scope to claims 5-7, respectively, and are therefore rejected under similar rationale. Claim 23 is similar in scope to claim 5, and is therefore rejected under similar rationale.
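Editor's note: the claim 5 round trip (location out, region data structure back) is an ordinary request/response pattern. A sketch follows, assuming a hypothetical HTTP façade service; the endpoint URL, payload shape, and field names are invented for illustration and do not come from Altman or the application.

```typescript
// Hypothetical façade-service round trip per claim 5. Only the pattern
// (send a location, receive a region data structure containing the mesh)
// tracks the claim; everything concrete here is an assumption.

interface RegionResponse {
  regionId: string; // a section of the Earth in the indexing system
  bounds: { south: number; west: number; north: number; east: number };
  meshes: Record<string, { vertices: number[]; indices: number[] }>;
}

async function fetchRegionForObject(
  lat: number,
  lng: number,
): Promise<RegionResponse> {
  // POST the object's location to the (hypothetical) façade service.
  const res = await fetch("https://facade.example.com/v1/region", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ lat, lng }),
  });
  if (!res.ok) throw new Error(`facade service error: ${res.status}`);
  return (await res.json()) as RegionResponse; // includes the object's mesh
}
```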
Claims 8, 18, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Pearson et al. (US 12,014,433) as applied to claims 1, 11, and 21 above, and further in view of Toh et al. (US 2020/0349350).

As to claim 8, Pearson et al. disclose determining the location of the object (e.g. as noted in claim 1, using a global positioning system (GPS)), but do not disclose the use of a visual positioning system. Toh et al. disclose that determining the location of the object includes: sending the image of the environment to a visual positioning system (VPS); and receiving the location of the object from the VPS ([0056] notes images captured with the camera assembly 212 may also be used by the AR localization engine 224 to determine a location and orientation of the mobile device 110 within a physical space, such as an interior space (e.g., an interior space of a building), based on a representation of that physical space that is received from the memory 260 or an external computing device, where the representation of a physical space may include visual features of the physical space (e.g., features extracted from images of the physical space), location-determination data associated with those features that can be used by a visual positioning system to determine location and/or position within the physical space based on one or more images of the physical space, and/or a three-dimensional model of at least some structures within the physical space).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Pearson et al.'s method of determining a location of the object using a global positioning system (GPS) with Toh et al.'s method of using a visual positioning system (VPS), as GPS may not be available and/or sufficiently accurate in some situations, thus providing a more reliable and accurate system (see [0019] of Toh et al.).

Claims 18 and 24 are similar in scope to claim 8, and are therefore rejected under similar rationale.
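Editor's note: the GPS-to-VPS substitution in claim 8 swaps one localization backend for another behind the same interface, which is exactly why the combination reads as predictable. A sketch under that framing, with hypothetical provider classes; neither Toh's engine internals nor any real VPS API is reproduced here.

```typescript
// Hypothetical localization providers behind one interface, per claim 8.
// GPS is the default; a VPS is consulted with the camera image when GPS
// is unavailable or too coarse. All names here are illustrative.

interface Pose { lat: number; lng: number; accuracyMeters: number; }

interface Localizer {
  locate(image?: Uint8Array): Promise<Pose | null>;
}

class GpsLocalizer implements Localizer {
  async locate(): Promise<Pose | null> {
    // Stub: pretend GPS returned a coarse fix.
    return { lat: 37.422, lng: -122.084, accuracyMeters: 25 };
  }
}

class VpsLocalizer implements Localizer {
  async locate(image?: Uint8Array): Promise<Pose | null> {
    if (!image) return null;
    // Stub: a real VPS would match image features against a mapped space.
    return { lat: 37.4221, lng: -122.0841, accuracyMeters: 1 };
  }
}

async function locateObject(image: Uint8Array): Promise<Pose | null> {
  const gps = await new GpsLocalizer().locate();
  if (gps && gps.accuracyMeters <= 5) return gps;        // GPS good enough
  return (await new VpsLocalizer().locate(image)) ?? gps; // else try VPS
}
```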
Claims 9, 19, and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Pearson et al. (US 12,014,433) as applied to claims 1, 11, and 21 above, and further in view of KUBISCH et al. (US 2014/0168242).

As to claim 9, Pearson et al. disclose providing the mesh geometry representation of the object to the application (e.g. as noted in claim 1, providing the exterior and interior meshes as well as the 3D model), but do not disclose providing a pointer to a buffer. KUBISCH et al. disclose providing, to the application, a pointer to a buffer in which the mesh geometry representation of the object is stored (Figure 3, [0036] notes that, to render a scene, draw commands are implemented, where each draw command includes a set of one or more draw calls, a given draw command typically being associated with a particular graphics object within a graphics scene that is being rendered; [0037] notes that before a scene is rendered, software application 125 has to set up the graphics scene, e.g. via driver program 130, specify the different shader input buffers 308 needed to store different types of shader input data associated with the graphics scene, and define the command buffer 304 for draw commands that are to be executed to render the scene, where the software application 125 also passes the shader input data associated with the graphics scene to the driver program 130 and causes the driver program 130 to store the shader input data associated with the graphics scene in the appropriate shader input buffers 308, and where the driver program 130 further computes pointers to the different shader input buffers 308 and passes the pointers back to the software application 125).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Pearson et al.'s method of providing the mesh geometry representation of the object to the application to include providing a pointer to a buffer to the application as described in KUBISCH et al., such that the application may locate the stored object with readiness and ease for subsequent use, e.g. rendering (see [0037] and [0038] of KUBISCH et al.).

Claims 19 and 25 are similar in scope to claim 9, and are therefore rejected under similar rationale.
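Editor's note: claim 9's "pointer to a buffer" (KUBISCH's driver returning shader-input-buffer pointers) can be sketched in TypeScript as a handle into a shared geometry buffer: the service stores the mesh once and hands back an offset rather than a copy. GPU driver specifics are out of scope, and every name below is an illustrative assumption.

```typescript
// Hypothetical buffer-handle pattern per claim 9: instead of copying mesh
// data to the application, the service stores it once and hands back a
// pointer-like handle (offset + length) into a shared buffer.

interface MeshHandle {
  byteOffset: number; // where the mesh starts in the shared buffer
  byteLength: number; // how many bytes it occupies
}

class MeshBufferPool {
  private buffer = new ArrayBuffer(16 * 1024 * 1024); // 16 MiB pool
  private cursor = 0;

  // Store mesh bytes and return a handle the application keeps.
  store(meshBytes: Uint8Array): MeshHandle {
    if (this.cursor + meshBytes.byteLength > this.buffer.byteLength) {
      throw new Error("mesh buffer pool exhausted");
    }
    const handle = { byteOffset: this.cursor, byteLength: meshBytes.byteLength };
    new Uint8Array(this.buffer, this.cursor).set(meshBytes);
    this.cursor += meshBytes.byteLength;
    return handle;
  }

  // The application dereferences the handle only when it needs the data,
  // e.g. to upload the bytes to a GPU vertex buffer for rendering.
  view(handle: MeshHandle): Uint8Array {
    return new Uint8Array(this.buffer, handle.byteOffset, handle.byteLength);
  }
}
```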
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pearson et al. (US 12,014,433) as applied to claims 1 and 11 above, and further in view of Kennedy et al. (US 11,016,303).

As to claim 10, Pearson et al. disclose receiving images of environments from the camera associated with the application (e.g. as noted in claim 1, receiving images via devices, e.g. a drone, a user device, etc.), but do not disclose the recited toggle. Kennedy et al. disclose providing a toggle that, when activated, stops a receiving of images of environments from the camera associated with the application (Figure 3, column 38-60 notes muting event module 312 receives camera muting events to mute or unmute one or more cameras of headset 100, depending on the current state of the cameras, which provides an indication to the camera mute system 300 to toggle, e.g. activate or deactivate, image capture of the one or more cameras of the headset 100, e.g. a user manually pressing a physical or virtual button on the headset or the headset entering a particular geographic location).

It would have been obvious to one of ordinary skill in the art at the time of the invention to modify Pearson et al.'s method of receiving images of environments from a camera with Kennedy et al.'s method of "camera muting" to toggle, e.g. deactivate, image capture of one or more cameras, to provide a user with the option to address privacy issues with "always on" cameras, e.g. when cameras are used in public settings, thus enhancing the functionality of the system (see Background and Summary of Kennedy et al.).

Claim 20 is similar in scope to claim 10, and is therefore rejected under similar rationale.

Response to Arguments

Applicant's arguments filed February 20, 2026 have been fully considered but they are not persuasive. Applicant amends independent claims 1, 11, and 21 to similarly recite, "…receiving, from a camera associated with an application, an image of an environment; determining, based on the image of the environment, a location of an object in the environment; determining a mesh geometry representation of the object, the mesh geometry representation being from a region in a geographic indexing system identified based on the location of the object; and providing the mesh geometry representation of the object to the application…"

Applicant argues on pages 8-10 of the Amendment that the prior art previously cited, e.g. Levinson, fails to teach or suggest the limitations of independent claims 1, 11, and 21 as now amended. In reply, in light of the amendments to independent claims 1, 11, and 21, the claims are now cited as being taught by newly found reference Pearson et al. (US 12,014,433). Please see the rejection and notes regarding the claims above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACINTA M CRAWFORD, whose telephone number is (571) 270-1539. The examiner can normally be reached 8:30 a.m. to 4:30 p.m. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Y. Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACINTA M CRAWFORD/
Primary Examiner, Art Unit 2617

Prosecution Timeline

May 09, 2024
Application Filed
Nov 15, 2025
Non-Final Rejection — §103
Jan 27, 2026
Interview Requested
Feb 04, 2026
Applicant Interview (Telephonic)
Feb 07, 2026
Examiner Interview Summary
Feb 20, 2026
Response Filed
Mar 09, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602734
GRAPHICS PROCESSORS
2y 5m to grant · Granted Apr 14, 2026
Patent 12602735
GRAPH DATA CALCULATION METHOD AND APPARATUS
2y 5m to grant · Granted Apr 14, 2026
Patent 12602841
HIGH DYNAMIC RANGE VISUALIZATIONS INDICATING RANGES, POINT CURVES, AND PREVIEWS
2y 5m to grant · Granted Apr 14, 2026
Patent 12597180
ARTIFICIAL INTELLIGENCE AUGMENTATION OF GEOGRAPHIC DATA LAYERS
2y 5m to grant · Granted Apr 07, 2026
Patent 12591946
DETECTING ERROR IN SAFETY-CRITICAL GPU BY MONITORING FOR RESPONSE TO AN INSTRUCTION
2y 5m to grant · Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 88%
With Interview: 97% (+9.2%)
Median Time to Grant: 2y 7m
PTA Risk: Moderate
Based on 805 resolved cases by this examiner. Grant probability derived from career allow rate.
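Editor's note: the footnote's derivation is simple to reproduce from the career counts shown above. A minimal sketch, assuming the interview lift is additive in percentage points on top of the career allow rate; the page does not state the exact model, so treat the additivity as an assumption.

```typescript
// Reproducing the headline numbers from the examiner's career counts.
// Assumption: interview lift is added in percentage points (illustrative only).

const granted = 709;
const resolved = 805;

const allowRate = (granted / resolved) * 100;                       // ≈ 88.07%
const interviewLiftPts = 9.2;                                       // reported lift
const withInterview = Math.min(allowRate + interviewLiftPts, 100);  // ≈ 97.3%

console.log(allowRate.toFixed(1));     // "88.1" -> displayed as 88%
console.log(withInterview.toFixed(1)); // "97.3" -> displayed as 97%
```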
