Prosecution Insights
Last updated: April 19, 2026
Application No. 18/135,684

AUGMENTED REALITY ENHANCED BUILDING MODEL VIEWER

Status: Non-Final OA (§103)
Filed: Apr 17, 2023
Examiner: WANG, YUEHAN
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Digs Inc.
OA Round: 3 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 7m
Grant Probability with Interview: 96%

Examiner Intelligence

Career Allow Rate: 83% — grants above average (404 granted / 485 resolved; +21.3% vs TC avg)
Interview Lift: +12.9% — moderate lift, measured across resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 47 applications currently pending
Career History: 532 total applications across all art units

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 69.6% (+29.6% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 485 resolved cases.
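
All four deltas above are consistent with simple subtraction from a single per-statute baseline of 40%, which fits the caption's description of the Tech Center average as an estimate. A minimal sketch of that arithmetic (Python; the variable names, the flat 40% baseline, and the reading of the percentages as this examiner's per-statute rejection shares are assumptions inferred from the displayed values, not anything documented by the tool):

    # Sketch of the apparent "vs TC avg" arithmetic: delta = examiner rate
    # minus a Tech Center baseline. The flat 40% baseline is an assumption
    # inferred from the four displayed data points, not a documented figure.

    examiner_rates = {"§101": 4.3, "§103": 69.6, "§102": 8.3, "§112": 6.6}
    TC_BASELINE_ESTIMATE = 40.0  # assumed per-statute Tech Center 2600 estimate

    for statute, rate in examiner_rates.items():
        delta = rate - TC_BASELINE_ESTIMATE
        print(f"{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")

Run as-is, this reproduces every displayed pair, e.g. "§103: 69.6% (+29.6% vs TC avg)".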

Office Action (§103)

DETAILED ACTION

Response to Amendment

Applicant’s amendments filed on 08 October 2025 have been entered. Claims 1, 7, and 14 have been amended. Claims 5, 11, and 15 have been previously cancelled. Claims 1-4, 6-10, 12-14, and 16-20 are still pending in this application, with claims 1, 7, and 14 being independent.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08 October 2025 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 6, 7, 9, 12-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over GHARPURAY (US 20210117071 A1, from IDS), referred to herein as GHARPURAY, in view of ADKINSON et al. (US 20210295599 A1), referred to herein as ADKINSON; Favale et al. (US 12211161 B2), referred to herein as Favale; and Suomi et al. (US 20200134106 A1), referred to herein as Suomi.

Regarding Claim 1, GHARPURAY teaches a method, comprising:

accessing, at a user device, a digital model of a physical structure (GHARPURAY [0077] Input photos and videos: Input photos and videos may be taken with any camera, including that of a smartphone. Input photos and videos are preferably labeled with room labels (e.g. kitchen, bathroom, dining room). Input photos and videos may also be gathered with other devices; [0010] The present invention simplifies the process of digitizing a 3D space (encoded as a custom file type, TXLD), automatically and intelligently adding objects into the encoded 3D space based on desired “style”, and rendering the updated encoded space to a variety of mediums including but not limited to Virtual Reality, Augmented Reality, and Mixed Reality; [0276] After we have applied textures to the imported 3-D model and fixtures, we iterate over all products and assets, pulling them from the 3D product library and placing them at the positions encoded in the TXLD file);

initiating, at the user device, an augmented reality (AR) session on the user device (GHARPURAY [0010] objects encoded in each TXLD file will be overlaid on the camera view. In the case of mixed reality, users may interact with these objects as they would real objects);

overlaying, when the portion of the physical structure is in view, information from the digital model on a display of the user device using AR objects (GHARPURAY [0010] objects encoded in each TXLD file will be overlaid on the camera view. In the case of mixed reality, users may interact with these objects as they would real objects), at least one of the AR objects tagged to an element of the physical structure in the digital model when the element is within the portion of the physical structure that is in view (GHARPURAY [0010] The present invention simplifies the process of digitizing a 3D space (encoded as a custom file type, TXLD), automatically and intelligently adding objects into the encoded 3D space based on desired “style” … objects encoded in each TXLD file will be overlaid on the camera view. In the case of mixed reality, users may interact with these objects as they would real objects; [0139] Each digital representation of both assets (non-fixtures) and fixtures are tagged with identifiers (names, styles, manufacturer, UPC codes, tag(s), categorie(s), and additional product information) to enable quick and easy retrieval and library management); and

providing an interface to select a type of AR object to be overlaid, the type of object indicating a type of additional information (GHARPURAY [0011] allowing users to optionally modify and customize the digitized space using a dedicated TXLD editor interface; [0180] To make visualizing TXLD files easier, the invention includes a “TXLD File Editor”, a graphical representation allowing users to fine-tune and update the TXLD file as they see fit. FIGS. 7 and 8 are both examples of what this interface may look like. Users will be able to select and edit specific floors, rooms, edges, corners, assets, fixtures, and any other components of a TXLD file. The TXLD file's editor will appear like a flat 2D representation of the space, but will encode all 3D information. A human may optionally use this user-friendly editor to validate TXLD formats generated with automation; [0081] “Manipulate objects”: In a Virtual, Mixed, or Augmented reality environment, “manipulate” refers to adding, removing, updating (altering product variant), re-sizing (selecting an alternate product size), rotating, moving (to a new position), and any other possible changes in a 3-D space).

GHARPURAY teaches a camera view, but does not explicitly teach: when a camera on the user device has at least a portion of the physical structure in view; and wherein the at least one of the AR objects is an AR object of the selected type of AR object, and the at least one of the AR objects is overlaid upon the element and displayed on the user device when additional information of the type of additional information is available about the element, the at least one of the AR objects providing access to the additional information about the element of the physical structure.

ADKINSON teaches (an AR session) when a camera on the user device has at least a portion of the physical structure in view (ADKINSON [0018] These calculated values allows AR objects to be placed within a scene and appear to be part of the scene, viz. the AR object moves through the camera's view similar to other physical objects within the scene as the camera moves; [0034] The 3D model/digital twin may also be updated in real time to accommodate environmental changes, such as objects being moved, new objects/features being exposed due to persons moving about, in, or out of the video frame, etc.).

Favale teaches wherein the at least one of the AR objects is an AR object of the selected type of AR object, and the at least one of the AR objects is overlaid upon the element and displayed on the user device when additional information of the type of additional information is available about the element (Favale col. 11, ln. 15-25: FIG. 2, the virtual object 201-2 could represent a structural support object (e.g., a shelf) and the virtual object 201-1 stacked on top of the structural support object could be a product (e.g., a boxed product). Responsive to receiving a request to save the virtual reset 203-1 (e.g., via selection of a user interface 211 object) the reset modeling subsystem 233 saves the virtual reset 203-1 including the virtual objects 201-1 and 201-2 arranged as instructed via the inputs received via the user interface 211-1 in the data repository 137 and/or, as depicted in FIG. 2; col. 11, ln. 56-64: The AR and/or VR reset rendering subsystem 235 can access the data repository 137 or, as depicted in FIG. 2, the data storage unit 214 of the user computing device 110 to retrieve the stored virtual reset 203-1. The AR and/or VR reset rendering subsystem 235 renders the AR scene 215 so that the user viewing the user interface 211-2 can view the virtual reset 203-1 in the AR scene 215 in an overlay over the physical environment).

Suomi teaches the at least one of the AR objects providing access to the additional information about the element of the physical structure (Suomi [0055] If a user input selecting one or more of possibilities to further filter the drawing content displayed is detected (step 708: yes), the displayed drawing content is filtered in step 709 accordingly. For example, if the user input indicates “no dimensions” in the display 801 illustrated in FIG. 8B, the result would be a display 801 illustrated in FIG. 8C; [0060] The downloaded drawing content data 921 is used to annotate (flow 914) a model 901 displayed with corresponding model annotations 921′. (It should be appreciated that the term annotations in this example covers all possible additional information described in more detail with FIG. 1.)).

ADKINSON discloses systems and methods for creation of a 3D mesh from a video stream or a sequence of frames, which is analogous to the present patent application. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified GHARPURAY to incorporate the teachings of ADKINSON, and to apply the method of allowing an AR object to move through the camera's view, similar to other physical objects within the scene as the camera moves, to the method for generation of a 3D interactive rendition of space. Doing so would support a remote video session with which users can interact via AR objects in real time in the augmented reality enhanced building model viewer.

Favale relates to 3D modeling of objects and arrangements of such objects for virtual and/or augmented reality applications, which is analogous to the present patent application. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified GHARPURAY to incorporate the teachings of Favale, and to apply the user interface for receiving properties information defining the requested virtual object to the method for generation of a 3D interactive rendition of space. Doing so would allow the virtual modeling system to present the virtual object in the user interface by at least showing the area of the image superimposed on the face and showing the properties in real time in the augmented reality enhanced building model viewer.

Suomi relates to computer aided modeling of structures, and especially drawing contents created for engineering drawings, which is analogous to the present patent application. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified GHARPURAY to incorporate the teachings of Suomi, and to apply the filter for displaying user-selected content for PRDC to the method for generation of a 3D interactive rendition of space. Doing so would convey information very precisely, with very little ambiguity, by displaying filtered information from engineering drawings in the augmented reality enhanced building model viewer.

Regarding Claim 3, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the method of claim 1, and further teaches wherein overlaying using AR objects comprises overlaying information that is tagged to a portion of the physical structure that is in view in one or more AR objects (GHARPURAY [0010] The present invention simplifies the process of digitizing a 3D space (encoded as a custom file type, TXLD), automatically and intelligently adding objects into the encoded 3D space based on desired “style” … objects encoded in each TXLD file will be overlaid on the camera view. In the case of mixed reality, users may interact with these objects as they would real objects; [0139] Each digital representation of both assets (non-fixtures) and fixtures are tagged with identifiers (names, styles, manufacturer, UPC codes, tag(s), categorie(s), and additional product information) to enable quick and easy retrieval and library management).

Regarding Claim 6, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the method of claim 1, and further teaches further comprising: receiving, at the user device, user inputted information (GHARPURAY [0169] Depending on the medium used by the user to both gather data and render the space, a specialized process will be utilized to digitize collected inputs. This includes both real-time digitization (in the case of Augmented Reality) and non real-time digitization (in the case of Virtual Reality); [0180] To make visualizing TXLD files easier, the invention includes a “TXLD File Editor”); and tagging, to the digital model, the user inputted information to the portion of the physical structure in view (GHARPURAY [0169] The data used in the digitization are preferably tagged with identifiers, including but not limited to the room in which the photograph or still image or point cloud was obtained, the location in the room from which the photograph or still image or point cloud was obtained, the geographic coordinates of the location at which the photograph or still image or point cloud was obtained, the time of day at which the photograph or still image or point cloud was obtained, and the date on which the photograph or still image or point cloud was obtained).

Regarding Claims 7, 9, and 12, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches a non-transitory computer readable medium (CRM) comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to … (GHARPURAY [0314] Embodiments may also be implemented as instructions stored on a non-transitory machine-readable medium, which may be read and executed by one or more procedures). The metes and bounds of the limitations of these claims substantially correspond to the claims as set forth in claims 1, 3, and 6; thus they are rejected on similar grounds and rationale as their corresponding limitations.

Regarding Claim 13, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the CRM of claim 7, and further teaches wherein the apparatus is a mobile device (GHARPURAY [0206] The digital representations that make up the growing 3D product library will be independently distributed using a system-created marketplace developed for Augmented and/or Mixed Reality (AR/MR) applications which may be deployed via mobile apps, wearable devices, or other compatible hardware).

Regarding Claim 14, GHARPURAY teaches a non-transitory computer-readable medium (CRM) comprising instructions that, when executed by a processor on an apparatus, cause the apparatus to (GHARPURAY [0314] Embodiments may also be implemented as instructions stored on a non-transitory machine-readable medium, which may be read and executed by one or more procedures):

receive, over a network, information from a device about the device's view of a structure (GHARPURAY [0077] Input photos and videos: Input photos and videos may be taken with any camera, including that of a smartphone. Input photos and videos are preferably labeled with room labels (e.g. kitchen, bathroom, dining room). Input photos and videos may also be gathered with other devices; [0010] The present invention simplifies the process of digitizing a 3D space (encoded as a custom file type, TXLD), automatically and intelligently adding objects into the encoded 3D space based on desired “style”, and rendering the updated encoded space to a variety of mediums including but not limited to Virtual Reality, Augmented Reality, and Mixed Reality; [0276] After we have applied textures to the imported 3-D model and fixtures, we iterate over all products and assets, pulling them from the 3D product library and placing them at the positions encoded in the TXLD file; [0290] Headsets may preferably pull scene data over the network, rather than having to connect to a computer and manually download the data).

GHARPURAY does not, but ADKINSON teaches, determine, from the information about the device's view of the structure, a position and orientation of the device within a digital model of the structure (ADKINSON [0018] These APIs may provide depth data and/or a point cloud, which typically includes one or more points that are indicated by an x, y position within the video frame along with a depth (or z-axis). These x, y, and z values can be tied to one or more identified anchor features within the frame, e.g. a corner or edge of an object in-frame, which can be readily identified and tracked for movement between frames. Use of anchor features can allow the detected/calculated x, y, and z values to be adjusted from frame to frame relative to the anchor features as the camera of the capturing device moves in space relative to the anchor features; [0022] Progressive creation of an accurate 3D model that also includes acceptably accurate real-world scaling ideally relies upon not only captured video, but also accurate depth data and camera pose information (e.g., camera orientation in space, movement of the camera in space, camera intrinsics such as lens focal length, lens aberrations, focal point, and aperture settings/depth of field, etc.)).

GHARPURAY in view of ADKINSON further teaches determine, from the position and orientation of the device within the digital model, a portion of the digital model that corresponds to the view of the device (ADKINSON [0018] These calculated values allows AR objects to be placed within a scene and appear to be part of the scene, viz. the AR object moves through the camera's view similar to other physical objects within the scene as the camera moves; [0034] The 3D model/digital twin may also be updated in real time to accommodate environmental changes, such as objects being moved, new objects/features being exposed due to persons moving about, in, or out of the video frame, etc.); and

transmit, over the network to the device, any information that is tagged to one or more elements of the structure that are within the portion of the digital model (GHARPURAY [0300] As virtual reality, mixed reality, and augmented reality technology evolves, more and more 3D models will be easily transmitted over the network; [0097] TXLD file: A custom file system specific to this system. A TXLD file encodes a 3-D space, from its geometry to all of the digital representations, textures, lights, and other elements of the space; [0139] An ever-growing 3D library of products will supply a growing number of “platform agnostic” objects to the system that will be flexible and easy to distribute. For the remainder of this document “products”, used interchangeably with “objects”, include both fixtures and non-fixtures. Each digital representation of both assets (non-fixtures) and fixtures are tagged with identifiers (names, styles, manufacturer, UPC codes, tag(s), categorie(s), and additional product information) to enable quick and easy retrieval and library management. The 3D model itself may be stored using a Cloud storage service or alternate storage mechanism),

wherein the instructions are to further cause the apparatus to transmit, over the network, one or more augmented reality (AR) objects to the device that correspond to the information that is tagged to the one or more elements of the structure that are within the portion of the digital model in response to receiving a selection of a type of AR object, the type of object indicating a type of additional information (GHARPURAY [0010] The present invention simplifies the process of digitizing a 3D space (encoded as a custom file type, TXLD), automatically and intelligently adding objects into the encoded 3D space based on desired “style” … objects encoded in each TXLD file will be overlaid on the camera view. In the case of mixed reality, users may interact with these objects as they would real objects; [0139] Each digital representation of both assets (non-fixtures) and fixtures are tagged with identifiers (names, styles, manufacturer, UPC codes, tag(s), categorie(s), and additional product information) to enable quick and easy retrieval and library management; [0011] allowing users to optionally modify and customize the digitized space using a dedicated TXLD editor interface; [0180] To make visualizing TXLD files easier, the invention includes a “TXLD File Editor”, a graphical representation allowing users to fine-tune and update the TXLD file as they see fit. FIGS. 7 and 8 are both examples of what this interface may look like. Users will be able to select and edit specific floors, rooms, edges, corners, assets, fixtures, and any other components of a TXLD file. The TXLD file's editor will appear like a flat 2D representation of the space, but will encode all 3D information. A human may optionally use this user-friendly editor to validate TXLD formats generated with automation; [0081] “Manipulate objects”: In a Virtual, Mixed, or Augmented reality environment, “manipulate” refers to adding, removing, updating (altering product variant), re-sizing (selecting an alternate product size), rotating, moving (to a new position), and any other possible changes in a 3-D space; ADKINSON [0045] Further, it should be understood that, while the foregoing embodiments are described with respect to a device 102 that may provide a video feed, system 100 and/or method 200 may be adapted to work with other technologies, e.g. waveguides and/or other see-through technologies such as smart glasses or heads-up displays, which may project AR objects onto a view of the real world).

GHARPURAY does not, but Favale teaches, wherein the one or more AR objects are of the selected type of AR object, and the one or more AR objects are overlaid upon one or more elements of the structure and displayed on the device's view of the structure when additional information of the type of additional information is available about the one or more elements (Favale col. 11, ln. 15-25: FIG. 2, the virtual object 201-2 could represent a structural support object (e.g., a shelf) and the virtual object 201-1 stacked on top of the structural support object could be a product (e.g., a boxed product). Responsive to receiving a request to save the virtual reset 203-1 (e.g., via selection of a user interface 211 object) the reset modeling subsystem 233 saves the virtual reset 203-1 including the virtual objects 201-1 and 201-2 arranged as instructed via the inputs received via the user interface 211-1 in the data repository 137 and/or, as depicted in FIG. 2; col. 11, ln. 56-64: The AR and/or VR reset rendering subsystem 235 can access the data repository 137 or, as depicted in FIG. 2, the data storage unit 214 of the user computing device 110 to retrieve the stored virtual reset 203-1. The AR and/or VR reset rendering subsystem 235 renders the AR scene 215 so that the user viewing the user interface 211-2 can view the virtual reset 203-1 in the AR scene 215 in an overlay over the physical environment).

GHARPURAY does not, but Suomi teaches, the one or more AR objects providing access to the additional information about the element of the structure (Suomi [0055] If a user input selecting one or more of possibilities to further filter the drawing content displayed is detected (step 708: yes), the displayed drawing content is filtered in step 709 accordingly. For example, if the user input indicates “no dimensions” in the display 801 illustrated in FIG. 8B, the result would be a display 801 illustrated in FIG. 8C; [0060] The downloaded drawing content data 921 is used to annotate (flow 914) a model 901 displayed with corresponding model annotations 921′. (It should be appreciated that the term annotations in this example covers all possible additional information described in more detail with FIG. 1.)).

ADKINSON discloses systems and methods for creation of a 3D mesh from a video stream or a sequence of frames, which is analogous to the present patent application. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified GHARPURAY to incorporate the teachings of ADKINSON, and to apply the method of allowing an AR object to move through the camera's view, similar to other physical objects within the scene as the camera moves, to the method for generation of a 3D interactive rendition of space. Doing so would support a remote video session with which users can interact via AR objects in real time in the augmented reality enhanced building model viewer.

Favale relates to 3D modeling of objects and arrangements of such objects for virtual and/or augmented reality applications, which is analogous to the present patent application. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified GHARPURAY to incorporate the teachings of Favale, and to apply the user interface for receiving properties information defining the requested virtual object to the method for generation of a 3D interactive rendition of space. Doing so would allow the virtual modeling system to present the virtual object in the user interface by at least showing the area of the image superimposed on the face and showing the properties in real time in the augmented reality enhanced building model viewer.

Suomi relates to computer aided modeling of structures, and especially drawing contents created for engineering drawings, which is analogous to the present patent application. It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified GHARPURAY to incorporate the teachings of Suomi, and to apply the filter for displaying user-selected content for PRDC to the method for generation of a 3D interactive rendition of space. Doing so would convey information very precisely, with very little ambiguity, by displaying filtered information from engineering drawings in the augmented reality enhanced building model viewer.

Regarding Claim 16, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the CRM of claim 14, and further teaches wherein the instructions are to further cause the apparatus to receive, over the network, additional information from the device for tagging to the portion of the digital model (GHARPURAY [0098] Interaction Methods (with platform): Interactions methods may vary by medium (e.g. AR, VR, MR). Users may add and “manipulate” objects. Users may also like specific products in the scene, request or view additional information about the product, click external links for the product).

Regarding Claim 18, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the CRM of claim 14, and further teaches wherein the instructions are to further cause the apparatus to transmit, over the network to the device, a virtual reality view of the portion of the digital model (ADKINSON [0018] These calculated values allows AR objects to be placed within a scene and appear to be part of the scene, viz. the AR object moves through the camera's view similar to other physical objects within the scene as the camera moves; [0034] The 3D model/digital twin may also be updated in real time to accommodate environmental changes, such as objects being moved, new objects/features being exposed due to persons moving about, in, or out of the video frame, etc.; GHARPURAY Abstract: processing the file to create 3D interactive renditions of the space using virtual reality, augmented reality, or mixed reality technologies).

Regarding Claim 19, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the CRM of claim 14, and further teaches wherein the instructions are to further cause the apparatus to transmit, over the network to the device, an updated set of AR objects to the device as the device's view of the structure changes (ADKINSON [0038] the origin may be relocated or shift as the 3D model/digital twin evolves, such as where the 3D model/digital twin is continuously generated and expanded as the video feed progresses. The point of view of the camera may change, such as due to the user of the device providing the video feed moving the device about. While depicted as a single step, it should be understood that in some embodiments, the coordinate space between the 3D model/digital twin and video feed may be continuously reconciled).

Regarding Claim 20, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the CRM of claim 14, and further teaches wherein the apparatus is a remote server (GHARPURAY [0313] at least a portion of the software instructions may be downloaded from a remote server or storage device, over a wired or wireless connection).

Claims 2, 4, 8, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over GHARPURAY (US 20210117071 A1, from IDS) in view of ADKINSON et al. (US 20210295599 A1), Favale et al. (US 12211161 B2), and Suomi et al. (US 20200134106 A1), and further in view of NPL Kahn et al. (Beyond 3D As-built Information Using Mobile AR Enhancing the Building Lifecycle Management, 2012), referred to herein as Kahn.

Regarding Claim 2, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the method of claim 1, but does not teach wherein overlaying using AR objects comprises representing structures that are hidden by a surface by overlaying AR representations of the structures upon the surface to approximate their position beneath the surface. However, Kahn discloses how Augmented Reality could be used as a supportive technology for documentation during operational and maintenance phases of the building lifecycle; therefore, it is analogous to the present patent application. Kahn teaches overlaying using AR objects comprises representing structures that are hidden by a surface by overlaying AR representations of the structures upon the surface to approximate their position beneath the surface (Kahn p. 35, col. 2: Our system superimposes the flow directions directly on the pipes and explains each component’s function additionally (see Fig. 7). The second layer illustrates the current heating condition and the third layer illustrates the (maintenance) status of component parts; p. 36, col. 1: We expect that the combination of BIM data with AR will be particularly evident when it comes to the visualization of hidden objects which are part of the BIM data, but not visible in the real world. With mobile AR it becomes possible to see what was built inside a wall, for example the position of water pipes or electrical cables. Due to the steadily increasing availability of building related data, in the future BIM data will not only be accessible for facility managers but also for building inhabitants or house owners. For this new market, markerless augmented reality applications offer a large potential, as they integrate BIM data with the real world and thereby make hidden information intuitively visible).

It would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified GHARPURAY to incorporate the teachings of Kahn, and to apply the Building Information Model (BIM) to the method for generation of a 3D interactive rendition of space. Doing so would not only hold the 3D building geometry but also encompass pipe/electrical systems as well as semantic building information in the augmented reality enhanced building model viewer.

Regarding Claim 4, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the method of claim 1. However, Kahn teaches wherein overlaying using AR objects comprises highlighting a portion of the physical structure when additional information about the portion of the physical structure is available (Kahn p. 33, col. 1: Fig. 5 visualizes the use of the distributed annotation engine on a mobile tablet PC and on a workstation. Whereas on the workstation the annotation system is a pure VR application, the annotation system on the mobile device uses the mobile AR framework described in section IV to link the pose of the mobile device in the physical building with the corresponding part of the virtual 3D model). The same motivation as for claim 2 applies here.

Regarding Claims 8 and 10, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the CRM of claim 7. The metes and bounds of the limitations of these claims substantially correspond to the claims as set forth in claims 2 and 4; thus they are rejected on similar grounds and rationale as their corresponding limitations.

Regarding Claim 17, GHARPURAY in view of ADKINSON, Favale, and Suomi teaches the CRM of claim 14. However, Kahn teaches wherein the instructions are to further cause the apparatus to transmit, over the network to the device, instructions to highlight one or more objects in view of the device (Kahn p. 33, col. 1: Fig. 5 visualizes the use of the distributed annotation engine on a mobile tablet PC and on a workstation. Whereas on the workstation the annotation system is a pure VR application, the annotation system on the mobile device uses the mobile AR framework described in section IV to link the pose of the mobile device in the physical building with the corresponding part of the virtual 3D model).

Response to Arguments

Applicant’s arguments filed on 08 October 2025 (see page 7) with respect to the §103 rejection of claims 1, 7, and 14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Samantha (Yuehan) Wang, whose telephone number is (571) 270-5011. The examiner can normally be reached Monday-Friday, 8am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571) 272-7440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Samantha (YUEHAN) WANG/
Primary Examiner, Art Unit 2617
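
For readers tracking the claim mapping rather than the citations: claim 1 reduces to a small piece of client-side logic in which AR objects are tagged to elements of the digital model, the user selects a type of AR object, and an object is overlaid only when its tagged element is in view and additional information of the selected type is available for that element. A minimal sketch of that logic (Python; every class, field, and value below is a hypothetical illustration of the claim language, not code from the application or the cited references):

    from dataclasses import dataclass, field

    @dataclass
    class Element:
        """An element of the physical structure as represented in the digital model."""
        name: str
        info: dict[str, str] = field(default_factory=dict)  # info type -> additional information

    @dataclass
    class ARObject:
        """An AR object tagged to a model element; its type indicates a type of additional info."""
        info_type: str
        element: Element

    def overlay(ar_objects: list[ARObject], in_view: set[str], selected_type: str) -> list[ARObject]:
        """Select the AR objects to draw: objects of the selected type whose tagged
        element is within the portion of the structure in view, and for which
        additional information of that type is actually available."""
        return [
            obj for obj in ar_objects
            if obj.info_type == selected_type
            and obj.element.name in in_view
            and selected_type in obj.element.info
        ]

    # Hypothetical usage: only the pipe carries maintenance info, so only it is overlaid.
    pipe = Element("hot-water pipe", info={"maintenance": "last serviced 2024-11-02"})
    beam = Element("support beam")
    for obj in overlay([ARObject("maintenance", pipe), ARObject("maintenance", beam)],
                       in_view={"hot-water pipe", "support beam"},
                       selected_type="maintenance"):
        print(obj.element.name, "->", obj.element.info[obj.info_type])

The §103 dispute above is over which reference supplies each of these conditions (camera-in-view from ADKINSON, type-conditioned overlay from Favale, access to the information from Suomi), not over the logic itself.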

Prosecution Timeline

Apr 17, 2023: Application Filed
Jan 29, 2025: Non-Final Rejection — §103
Jul 03, 2025: Response Filed
Aug 06, 2025: Final Rejection — §103
Oct 08, 2025: Response after Final Action
Nov 06, 2025: Request for Continued Examination
Nov 15, 2025: Response after Non-Final Action
Feb 18, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597178: VECTOR OBJECT PATH SEGMENT EDITING (2y 5m to grant; granted Apr 07, 2026)
Patent 12597506: ENDOSCOPIC EXAMINATION SUPPORT APPARATUS, ENDOSCOPIC EXAMINATION SUPPORT METHOD, AND RECORDING MEDIUM (2y 5m to grant; granted Apr 07, 2026)
Patent 12586286: DIFFERENTIABLE REAL-TIME RADIANCE FIELD RENDERING FOR LARGE SCALE VIEW SYNTHESIS (2y 5m to grant; granted Mar 24, 2026)
Patent 12586261: IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12567182: USING AUGMENTED REALITY TO VISUALIZE OPTIMAL WATER SENSOR PLACEMENT (2y 5m to grant; granted Mar 03, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 96% (+12.9% lift)
Median Time to Grant: 2y 7m
PTA Risk: High
Based on 485 resolved cases by this examiner. Grant probability derived from career allow rate.
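
The headline numbers follow from straightforward arithmetic on the examiner's career data: 404 / 485 ≈ 83.3% allow rate, and adding the +12.9% interview lift gives the ≈96% with-interview figure. A minimal sketch of that derivation (Python; the additive combination is an assumption inferred from the displayed values, not a documented formula of the analytics tool):

    # Career allow rate from this report's figures.
    granted, resolved = 404, 485
    allow_rate = granted / resolved               # 0.8330 -> displayed as 83%

    # Assumption: the with-interview probability is the base rate plus the
    # reported interview lift, combined additively (inferred, not documented).
    interview_lift = 0.129
    with_interview = allow_rate + interview_lift  # 0.9620 -> displayed as 96%

    print(f"Grant probability: {allow_rate:.0%}")     # Grant probability: 83%
    print(f"With interview: {with_interview:.0%}")    # With interview: 96%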
