Prosecution Insights
Last updated: April 19, 2026
Application No. 18/066,135

REPLICATING PHYSICAL ENVIRONMENTS AND GENERATING 3D ASSETS FOR SYNTHETIC SCENE GENERATION

Status: Non-Final Office Action (§102, §103)
Filed: Dec 14, 2022
Examiner: BEUTEL, WILLIAM A
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Nvidia Corporation
OA Round: 5 (Non-Final)

Grant Probability: 70% (Favorable)
Projected OA Rounds: 5-6
Projected Time to Grant: 2y 7m
Grant Probability With Interview: 90%
Examiner Intelligence

Career Allow Rate: 70% (328 granted / 469 resolved; +7.9% vs TC avg) — grants above average
Interview Lift: +20.4% across resolved cases with an interview — a strong lift
Typical Timeline: 2y 7m average prosecution; 28 applications currently pending
Career History: 497 total applications across all art units
Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 469 resolved cases.

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 18, 2026 has been entered.

Response to Arguments

Applicant's arguments filed 2/13/2026 have been fully considered but they are not persuasive in part. Regarding claims 10-15, applicant argues that the claim amendments overcome the rejection made under 35 U.S.C. 102 (see applicant's remarks filed 2/13/2026 with regard to claims rejected under 35 U.S.C. 102). Examiner respectfully disagrees. The amendments recite additional limitations that would not be taught by the cited reference alone under 35 U.S.C. 102 if the claim positively recited the limitations as part of the processor. As the claim is currently drafted, however, the processor is not recited as containing any configuration that performs the recited steps. Instead, the claim merely recites "A processor, comprising one or more circuits to…" This is analogous to merely reciting that the processor is intended to, but not actually configured to, perform the steps. As such, the claim is merely directed to a processor with circuits that can be used to perform machine algorithms like the one recited, but it does not actually contain such configuration. Applicant's arguments are therefore not persuasive. Examiner suggests amending the claim to recite "A processor, comprising one or more circuits configured to:"

Applicant's arguments, see applicant's correspondence filed February 18, 2026, with respect to the rejection(s) of claim(s) 1-9 and 16-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Nussbaum et al.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 10-15 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Eder et al. (US 2021/0279957 A1).
Regarding claim 10, the claim recites the following: "A processor, comprising one or more circuits to: use one or more neural networks to generate, based at least on image data captured of an environment, a three-dimensional (3D) representation of the environment including specified 3D representations of one or more identified objects; classify first individual ones of the one or more objects as identified objects and second individual ones of the one or more objects as unidentified objects; generate, based at least on the image data and classifying the second individual ones of the one or more objects as unidentified objects, one or more surface representations of the unidentified objects; and replace, by updating a number of points of a mesh representation of the environment, at least a portion of the 3D representation corresponding to the one or more identified objects with a portion of a 3D model selected from a repository of 3D models."

In other words, by merely reciting a processor comprising circuits to use/generate/replace, the claim merely recites intended use of the circuits (i.e., the claim is directed to the circuits themselves, and the additional steps are not recited as part of specifically configured circuitry, but merely as what the circuits can be used to perform). As such, the claim is directed to a physical processor comprising one or more circuits that are capable of using a neural network to generate data and capable of replacing data with data; this intended use has no patentable weight (see MPEP 2111). As such, Eder discloses: A processor, comprising: one or more circuits (Eder, ¶243: "A system comprising: at least one programmable processor"). (Also note, however, that the claim is rejected under 35 U.S.C. 103 for the sake of compact prosecution.)

Regarding claim 11, the claim merely states "wherein the one or more circuits are further to analyze segments of the 3D representation of the environment 3D representations to determine an identity of at least one object, and replace at least a portion of segments with the portion of the 3D model selected from the repository of 3D models, wherein the repository corresponds to a plurality of objects from a digital catalog." The claim merely provides what the circuits are intended to be used for, which is merely intended use, and as such Eder teaches the claim as set forth above for claim 10.

Regarding claim 12, the claim merely states "wherein the one or more circuits are further to provide a presentation of the 3D environment using at least a portion of at least one specified 3D representation of the specified 3D representations, wherein one or more aspects of the specified 3D representations are modifiable in the presentation." The claim merely provides what the circuits are intended to be used for, which is merely intended use, and as such Eder teaches the claim as set forth above for claim 10.

Regarding claim 13, the claim merely states "wherein the one or more circuits are further to provide at least one specified 3D representation of the specified 3D representations of the one or more identified objects to a cloud-hosted collaborative content creation platform for multi-dimensional assets." The claim merely provides what the circuits are intended to be used for, which is merely intended use, and as such Eder teaches the claim as set forth above for claim 10.
Regarding claim 14, the claim merely states "wherein at least one specified 3D representation of the specified 3D representations includes model data corresponding to the one or more identified objects and comprised in a repository of model data corresponding to a plurality of objects." In other words, the claim merely states a type of intended data, without any specific limitations directed to the circuitry itself. The claim merely provides what the circuits are intended to be used for, which is merely intended use, and as such Eder teaches the claim as set forth above for claim 10.

Regarding claim 15, the claim states "wherein the repository of model data is comprised in a data store of a cloud-hosted collaborative content creation platform for multi-dimensional assets." The claim, however, is directed to a processor comprising one or more circuits to replace data from a repository. The recited intended structure of the repository is not claimed as part of the processor circuits to which the claim is directed. Accordingly, the claim is merely reciting intended use, and Eder teaches the claim for the same reasons as set forth above for claim 10.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 5-7, and 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eder et al. (US 2021/0279957 A1) in view of Böckem et al. (US 2023/0042369 A1), in further view of Rieffel et al. (US 2010/0214284 A1), and in further view of Nussbaum et al. (US 11,210,851 B1).

Regarding claim 1, Eder discloses:

A computer-implemented method (Eder, Abstract), comprising:

Generating, based at least on image data representative of an environment including one or more objects, a three-dimensional (3D) representation of the environment (Eder, ¶170: generating, via a machine learning model, a 3D model of the location and elements therein, wherein the machine learning model may be configured to receive the plurality of images as input and predict the geometry of the location and the elements therein to form the 3D model – see Fig. 6A, explained in ¶21 as an example image of a scene for which a virtual representation is to be generated);

Analyzing the 3D representation to determine one or more segments corresponding to the one or more objects in the environment (Eder, ¶122: in 3D-model based semantic segmentation, given a 3D model, semantic labels can be inferred by one or more machine learning models (e.g., a neural network) trained for 3D semantic or instance segmentation and 3D object detection and localization, using the 3D model as input; also ¶202);

Classifying, based at least on data for the one or more segments and the image data, first individual ones of the one or more objects as identified objects (Eder, ¶69: machine learning model identifying elements of one or more received images or the 3D model; Figs. 6A to 6C and ¶121: images spatially localized in context of the 3D model, with annotations back-projected to the 3D model, with regional segmentation annotations, labels – e.g. bed, pillow, table – represented on surfaces of the 3D model, where weighted aggregation and voting schemes may be used to disambiguate regions that may appear to share different labels due to effects of noise and errors in the construction of the 3D model; ¶134: generating the visual indicators includes generating, via a machine learning model, a probability map indicating how accurately a particular element is represented in the virtual representation VIR or a 2D representation of the virtual representation VIRs, where a low probability portion indicates the corresponding portion of the virtual representation VIR may need additional data to further improve the virtual representation VIR; ¶159: 3D model may be configured to further semantically identify each of the elements in the room, such as bed, pillows, floor, wall, and window – see Fig. 8F; ¶213: identify inventory items with 3D map; ¶225: semantically trained machine learning model configured to perform semantic or instance segmentation and 3D object detection and localization of each object in an inputted image);

Generating one or more surface representations (Eder, Fig. 8F and ¶159: final 3D model of room constructed, with surface representations of everything in the room – this would include "unidentified objects"; ¶213: the 3D map generated may be incomplete, and therefore "use a machine learning model (e.g., to interpolate between different images, etc.) to make the map more consistent/complete", where block 3204 illustrates a module comprising a machine learning model which reads the 3D map generated from block 3203, identifies inventory items associated with that map, and provides a bounding box around identified objects – i.e., examiner notes that this implies no bounding box around unidentified objects, but the interpolation of the map to make it consistent would still generate surface representations of unidentified objects);

Providing the 3D representation of the environment (Eder, ¶157: once construction of the 3D model is complete, the graphical user interface may enable the user to inspect and review the 3D model; ¶159: constructed final 3D model of room).

Eder does not explicitly disclose replacing at least a portion of the one or more segments corresponding to the identified objects with a stored portion of a 3D model selected from an existing repository of 3D models, wherein the replacing changes a number of points in a mesh representation of the environment.

Böckem discloses:

Replacing at least a portion of the one or more segments corresponding to the identified objects with a stored portion of a 3D model selected from an existing repository of 3D models, wherein the replacing changes a number of points in a mesh representation of the environment (Böckem, ¶18: the process of registering or incorporating a local 3D model (e.g. a 3D point cloud or vector file model such as a mesh) into a translocal 3D model involves replacing data of the translocal 3D model relating to the item represented by the local 3D model by data of the local 3D model, or complementing data of the translocal 3D model relating to the item represented by the local 3D model by data of the local 3D model; ¶21: input data may comprise a 3D terrain and/or city model, e.g. in the form of a 3D point cloud or a 3D vector file model, such as a 3D mesh model; ¶27: a visualization fusion is carried out such that, considered in a current placement of the 3D item visualization, in a section of the 3D environment visualization which corresponds to the 3D item visualization the 3D environment visualization is replaced by the 3D item visualization, and in a surrounding section, which is adjacent to the 3D item visualization and extends away from it, the 3D environment visualization is replaced by a replacement visualization based on synthetic data, the replacement visualization providing a gapless transition between the 3D item visualization and the remainder of the 3D environment visualization; ¶185: plurality of local 3D models; ¶193 and Figs. 7-8: visualization fusion, inclusion of 3D item visualization 2, e.g. a building, into 3D environmental visualization 1 to provide gapless visualization of the included object within the environment in 3D space; Fig. 8 and ¶198: 3D environment visualization 1 without the inserted 3D item visualization, with the bottom showing fusion with the inserted 3D item – note that replacement of the ground mesh with the building changes the number of points); and

Providing the 3D representation of the environment including the stored portion of the selected 3D model and the one or more surface representations (Böckem, Fig. 8 and ¶198: fusion visualization includes the inserted 3D item with other represented 3D surfaces).

Both Eder and Böckem are directed to methods and systems for mapping objects within a real-world environment and reproducing virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, by allowing for the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, using known electronic interfacing and programming techniques. The modification merely substitutes one type of virtual object for modifying a virtual display of an environment for another, yielding predictable results of replacing an object with another type of known virtual object for display. Moreover, the modification results in an improved system and method for modifying a mapped environment of a user by allowing for incorporation of different types of virtual objects into the environment for more diverse and tailored perspectives of the environment, while also using a seamless fusion of data to provide improved visualization of data in augmented reality applications (see e.g. Böckem, ¶2, discussing different applications of data fusion, such as gaming and augmented reality applications).

As the claim differentiates the use of an "existing repository," for the sake of completeness, Rieffel is further relied upon for teaching the stored portion in an existing repository.
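For context on the claim-1 classification step discussed above, a minimal sketch of how segmented objects are commonly split into identified vs. unidentified groups, assuming a detector that emits per-segment labels and confidence scores. This is an illustration of the general technique, not the applicant's or Eder's disclosed method; all names and the 0.8 threshold are hypothetical.

```python
# Segments with a missing or low-confidence label are treated as
# "unidentified" (kept only as reconstructed surface geometry); confident
# detections become candidates for replacement from a 3D-model repository.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneObject:
    segment_id: int
    label: Optional[str]   # e.g. "chair"; None if no label was produced
    confidence: float      # detector score in [0, 1]

def classify_objects(objects, threshold=0.8):
    """Split segmented scene objects into identified vs. unidentified."""
    identified, unidentified = [], []
    for obj in objects:
        if obj.label is not None and obj.confidence >= threshold:
            identified.append(obj)     # eligible for 3D-model replacement
        else:
            unidentified.append(obj)   # keep raw surface representation
    return identified, unidentified
```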
Rieffel teaches:

Replacing at least a portion of the one or more segments corresponding to the identified objects with a stored portion of a 3D model selected from an existing repository of 3D models, and providing the 3D representation of the environment including the stored portion of the selected 3D model and the one or more surface representations (Rieffel, ¶98: the model builder 226 uses the semantic information associated with the feature that indicates that a chair is to be inserted into a model in that pose (i.e., at that location and orientation). If a specific 3D CAD model of a chair is stored in the insertable item database 244, the 3D model renderer 228 may insert the specific 3D CAD model into the model. If a specific 3D CAD model of a chair is not stored in the insertable item database 224, the 3D model renderer 228 may instead insert a placeholder for a chair in the model, which may be a 3D CAD model of a generic chair or may be another visual indicator that a chair should be placed at the location of the feature 112-E).

Eder, Böckem and Rieffel are directed to methods and systems for mapping objects within a real-world environment and reproducing virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, by allowing for the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, and by further utilizing the technique of using existing stored model data as provided by Rieffel, using known electronic interfacing and programming techniques. The modification results in an improved system and method for modifying a mapped environment of a user by allowing for incorporation of different types of virtual objects into the environment by utilizing stored model data as opposed to transient data, allowing model data to be used more widely by other users and over time, and furthermore reducing the resource consumption of regenerating model data every time models are inserted, for faster and more efficient processing.

Eder modified by Böckem and Rieffel does not explicitly teach the classification of second individual ones of the one or more objects as unidentified objects, and generating, based at least on classifying the second individual ones of the one or more objects as unidentified, a representation of the unidentified objects.

Nussbaum discloses:

Classifying second individual ones of the one or more objects as unidentified objects, and generating, based at least on classifying the second individual ones of the one or more objects as unidentified, a representation of the unidentified objects (Nussbaum, [6:39-50]: delineate objects in 3D model; [6:51-57]: identify different classes of objects; [10:17-28]: identified objects may be shaded or otherwise denoted, such as the shading on building 128 and lane markings 120, as shown in FIG. 1, and unidentified virtual objects, such as tree 126, may be a collection of points that, when represented in virtual environment 132, are recognizable as an object to user 10).

Eder, Böckem, Rieffel and Nussbaum are directed to methods and systems for mapping objects within a real-world environment and reproducing virtual objects for display within the environment.
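A hedged sketch of the Rieffel-style fallback the examiner cites (¶98): insert the specific stored CAD model when the repository has one, otherwise insert a generic placeholder for the object class. The repository-as-dict shape and file names are illustrative assumptions, not Rieffel's actual data model.

```python
# Specific model if stored; otherwise a generic class placeholder.
GENERIC_PLACEHOLDERS = {"chair": "generic_chair.obj"}
DEFAULT_PLACEHOLDER = "generic_box.obj"

def select_replacement(label: str, repository: dict) -> str:
    if label in repository:
        return repository[label]          # specific 3D CAD model is stored
    # No specific model: fall back to a placeholder for the class.
    return GENERIC_PLACEHOLDERS.get(label, DEFAULT_PLACEHOLDER)

# Example: a chair with no specific model falls back to the generic chair.
print(select_replacement("chair", {"table": "table_v2.obj"}))  # generic_chair.obj
```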
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, by allowing for the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, utilizing the technique of using existing stored model data as provided by Rieffel, and by further utilizing the technique of classifying objects as identified or unidentified for changing a visual depiction of the object as provided by Nussbaum, using known electronic interfacing and programming techniques. The modification results in an improved 3D modeling and segmentation system by better clarifying to the user what the system is able to process, such that the user can more readily understand how the system is working in the environment, giving the user more information for better judgment about the resulting visualization and data presented.

Regarding claim 5, Eder further discloses: wherein the 3D representation comprises an oriented point cloud (Eder, ¶47: point cloud; ¶75: 3D model in form of point cloud).

Regarding claim 6, Eder further discloses: capturing the image data using at least one image capture device, wherein the image data includes at least two images captured from different points of view with respect to the environment (Eder, Abstract: plurality of images and videos using camera; ¶77: plurality of images captured with camera; ¶85: multi-views of camera images; ¶137: relative camera poses associated with additional images used to construct 3D model).

Regarding claim 7, Eder further discloses: determining a set of keypoints in the at least two images, and registering the at least two images based at least on the set of keypoints for use in generating the 3D representation of the environment (Eder, ¶137: overlapping portions between the additional images and previously captured images may be analyzed to relocalize the user's position within the scene and enable the user to continue capturing RGB-D or metric posed RGB images to further construct the 3D model; ¶160: the registered images and 3D models from each selected virtual representation may be aligned using an optimization algorithm based on overlapping fields of view of the registered images and the geometries of the 3D models, including minimizing distance between corresponding points to align virtual representations; also ¶175: geometric reconstruction using SLAM; ¶199: fusing data where an AI algorithm (e.g., neural network) specifically trained to identify key elements may be used (e.g., walls, ceiling, floor, furniture, wall hangings, appliances, and/or other objects)).
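The claim-7 step (keypoints detected in two views, then used to register the images) corresponds to a conventional feature-matching pipeline. A minimal sketch using standard OpenCV calls, offered for context only; neither the application nor Eder is confirmed to use ORB or homography estimation specifically.

```python
# Detect ORB keypoints in two views, match descriptors, and estimate the
# homography that registers one view to the other (inliers via RANSAC).
import cv2
import numpy as np

def register_views(img_a, img_b):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    src = np.float32([kp_a[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # Homography relating the two views; the inlier mask marks the keypoint
    # correspondences that support the registration.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inliers
```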
Regarding claim 9, Eder further discloses: receiving an instruction to modify one or more aspects of the one or more objects (Eder, ¶¶128-129: user interface for interacting with virtual representation having 3D model of elements in room, including user selecting region of room – Fig. 11B; ¶186: a user may suggest modifications to the virtual representation through the graphical user interface); and generating an updated 3D representation of the environment including the modified one or more aspects (Eder, ¶¶128-130: user interface for modifying aspects of environment – see Figs. 11A-11C; ¶186: user may later accept or decline these tentative modifications to the virtual representation in part or on the whole).

Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eder et al. (US 2021/0279957 A1) in view of Böckem et al. (US 2023/0042369 A1), Rieffel et al. (US 2010/0214284 A1) and Nussbaum et al. (US 11,210,851 B1), in further view of Berger et al. (US 2024/0119678 A1).

Regarding claim 2, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above. Further regarding claim 2, Eder further discloses: transmitting the 3D representation of the environment (Eder, ¶74: a device-to-cloud streaming process can be used with sufficient connectivity between the capturing device and the cloud-based server; ¶¶155-156: virtual representation VIR construction performed offline, where "to enable offline construction, the graphical user interface may be provided with a capability for a user to upload the captured data (e.g., digital media) to a server."; ¶186: user ability to modify).

Eder does not explicitly disclose transmitting the 3D representation of the environment, including the portion of the selected 3D model, as claimed.

Berger discloses: Transmitting the 3D representation of the environment, including the stored portion of the selected 3D model, to a cloud-hosted platform for collaborative content creation of multi-dimensional assets (Berger, ¶24: messaging server system including receiving data from client, which includes media augmentation and overlays; ¶43: augmentation system 208 is able to communicate and exchange data with another augmentation system 208 on another client device 102 and with the server via network 112, where data includes "a session identifier that identifies the shared AR session, a transformation between a first client device 102 and a second client device 102 (e.g., a plurality of client devices 102 include the first and second devices) that is used to align the shared AR session to a common point of origin, a common coordinate frame, functions (e.g., commands to invoke functions) as well as other payload data (e.g., text, audio, video or other multimedia data)"; ¶64: further discloses sharing 3D environment data, where "Data can include data used to establish the common coordinate frame of the shared AR scene, the transformation between the devices, the session identifier, images depicting a body, skeletal joint positions, wrist joint positions, feet, and so forth."; ¶68: augmented reality data applied to image data; ¶121: AR item selection module presents list of AR objects that can be selected; ¶124: AR object having position, dimensions and orientation set based on 3D bounding box of real-world object). Note that the combination of Eder with Böckem, Rieffel and Berger teaches the stored data.

Both Eder and Berger are directed to methods and systems for mapping objects within a real-world environment and reproducing virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, including the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, utilizing the technique of using stored model data as provided by Rieffel, and further utilizing the technique of classifying objects as identified or unidentified for changing a visual depiction of the object as provided by Nussbaum, by providing the data to a collaborative system as provided by Berger, using known electronic interfacing and programming techniques. The modification improves upon a localized system by allowing multiple users to collaborate on the data, allowing easier sharing of data and ideas among remotely located collaborators, where collaboration among multiple people enhances innovation, problem solving, creativity and the general sharing of information.
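To make the claim-2 transmission step concrete, a hedged sketch of uploading a scene payload to a collaboration server over HTTP. The endpoint URL and payload schema are invented placeholders; neither Berger nor the application discloses this interface.

```python
# Serialize the 3D representation (including the repository model IDs of any
# inserted assets) and POST it to a hypothetical collaboration endpoint.
import json
import urllib.request

def upload_scene(mesh_points, model_ids, endpoint="https://example.com/scenes"):
    payload = json.dumps({
        "mesh_points": mesh_points,      # environment mesh as a point list
        "replaced_models": model_ids,    # repository IDs of inserted assets
    }).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    return urllib.request.urlopen(req)   # platform response
```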
Claim(s) 3-4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eder et al. (US 2021/0279957 A1) in view of Böckem et al. (US 2023/0042369 A1), Rieffel et al. (US 2010/0214284 A1) and Nussbaum et al. (US 11,210,851 B1), in further view of Arrasvuori (US 2008/0071559 A1).

Regarding claim 3, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above. Further regarding claim 3, Böckem further discloses: wherein the repository of 3D models corresponds to a plurality of objects (Böckem, ¶18: incorporation of a local 3D model; ¶185: plurality of local 3D models).

Both Eder and Böckem are directed to methods and systems for mapping objects within a real-world environment and reproducing virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, by including the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, using known electronic interfacing and programming techniques. The modification merely substitutes one type of virtual object for modifying a virtual display of an environment for another, yielding predictable results of replacing an object with another type of known virtual object for display. Moreover, the modification results in an improved system and method for modifying a mapped environment of a user by allowing for incorporation of different types of virtual objects, allowing for a more interactive experience with easier operation by presenting a user with options for easier selection and insertion of related objects, while tailoring the results to user preferences.

Arrasvuori discloses: wherein the existing repository of 3D models corresponds to a plurality of objects from a digital catalog (Arrasvuori, ¶38: online shopping service 302 runs on remote server 304 accessible via network and containing 3D models or other graphical representations of tangible products that are being sold, the system handling cataloguing of products).

Eder, Böckem, Rieffel and Arrasvuori are directed to methods and systems for mapping objects within a real-world environment and reproducing virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, including the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, utilizing the technique of using stored model data as provided by Rieffel, and further utilizing the technique of classifying objects as identified or unidentified for changing a visual depiction of the object as provided by Nussbaum, by further incorporating a product catalog for 3D models as provided by Arrasvuori, using known electronic interfacing and programming techniques. The modification results in an improved virtual object replacement system and technique by allowing additional utility and commercialization of the system, and allowing easier remote control over distributed objects based on products available on a remote server, rather than requiring all data to be stored locally with a user.

Regarding claim 4, Arrasvuori further discloses: wherein the existing repository of 3D models is comprised in a data store of a cloud-hosted platform (Arrasvuori, ¶38: online shopping service 302 runs on remote server 304 accessible via network and containing 3D models or other graphical representations of tangible products that are being sold, the system handling cataloguing of products) for collaborative content creation of multi-dimensional assets (the recited limitation is merely recited as an intended use, as a server hosting product models can be used for collaborative content creation; no additional element is recited that provides any functional limitation or structural difference requiring anything more, and as such the limitation does not have patentable weight. Arrasvuori, however, discloses in ¶38 the system handling cataloguing and pricing related to the 3D objects, which is also creation of "content"; also ¶77: operator may be able to create a subset of objects from the database 840 that seem to fit the customer's needs, and send them to the client 802 for review).

Eder, Böckem, Rieffel and Arrasvuori are directed to methods and systems for mapping objects within a real-world environment and reproducing virtual objects for display within the environment.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, including the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, utilizing the technique of using stored model data as provided by Rieffel, and further utilizing the technique of classifying objects as identified or unidentified for changing a visual depiction of the object as provided by Nussbaum, by further incorporating a product catalog for 3D models as provided by Arrasvuori, using known electronic interfacing and programming techniques. The modification results in an improved virtual object replacement system and technique by allowing additional utility and commercialization of the system, and allowing easier remote control over distributed objects based on products available on a remote server, rather than requiring all data to be stored locally with a user.

Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eder et al. (US 2021/0279957 A1) in view of Böckem et al. (US 2023/0042369 A1), Rieffel et al. (US 2010/0214284 A1) and Nussbaum et al. (US 11,210,851 B1), in further view of Szasz et al. (US 11,069,134 B2).

Regarding claim 8, the limitations included from claim 1 are rejected based on the same rationale as claim 1 set forth above. Further regarding claim 8, Eder does not explicitly teach determining that a specific object of the at least one object is unable to be identified or does not have model data available, and generating a mesh representation of the specific object to be included in the 3D representation of the environment.

However, Szasz teaches: determining that a specific object of the one or more objects is unable to be identified or does not have model data available (Szasz, [1:52-59]: "Often the series of 2D images may contain incomplete information about the object. For example, an object being scanned may be placed on a table or on the ground. Methods for generating the 3D model may be unable to reconstruct parts of the object without sufficient information about portions of the object that may be concealed."); and generating a mesh representation of the specific object to be included in the 3D representation of the environment (Szasz, [6:27-43]: "Generating a 3D mesh of a physical object may involve the use of a physical camera to capture multiple images of the physical object. For instance, the camera may be rotated around the physical object being scanned. Based on the generated images, a mesh representation of the physical object may be generated. The mesh representation may be used in many different environments."; "As described herein, generating the 3D mesh may be improved by truncating portions of the mesh which are less useful. As will be described more fully herein, a 3D mesh representation of an object may be initialized to a generic shape (e.g., a sphere) that is subsequently refined based on repeatedly analyzing images of the object from different angles.").

Eder and Szasz are analogous because they both involve the reconstruction of three-dimensional models of real physical objects and locations. Therefore, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to have modified the teachings of Eder of generating a 3D representation of an environment and replacing segments with pre-existing model data, including the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, utilizing the technique of using stored model data as provided by Rieffel, and further utilizing the technique of classifying objects as identified or unidentified for changing a visual depiction of the object as provided by Nussbaum, with the method of determining when an object is unidentifiable or lacks model data and generating a mesh representation for it, as taught by Szasz. This combination would enable all objects, whether they have known models or not, to be accurately represented in the 3D environment, resulting in a more realistic and complete 3D representation of real-world environments, as taught by Szasz.
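Szasz's cited approach initializes an unidentified object's mesh to a generic shape (e.g., a sphere) that is then refined from images. A numpy-only sketch of the initialization step, the refinement loop is not reproduced; the function and its parameters are illustrative assumptions, not Szasz's disclosed implementation.

```python
# Generate UV-sphere vertices as a generic placeholder mesh centered on an
# unidentified object's centroid. Vertex generation only; faces omitted.
import numpy as np

def uv_sphere(center, radius, n_lat=8, n_lon=12):
    """Vertices of a UV sphere placed at the unidentified object's centroid."""
    lat = np.linspace(0.0, np.pi, n_lat)
    lon = np.linspace(0.0, 2.0 * np.pi, n_lon, endpoint=False)
    lat, lon = np.meshgrid(lat, lon, indexing="ij")
    verts = np.stack([
        np.sin(lat) * np.cos(lon),
        np.sin(lat) * np.sin(lon),
        np.cos(lat),
    ], axis=-1).reshape(-1, 3)
    return center + radius * verts

placeholder = uv_sphere(np.array([1.0, 0.5, 0.0]), radius=0.3)
print(placeholder.shape)  # (96, 3): 8 x 12 vertices, ready for refinement
```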
Claim(s) 10 and 12-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eder et al. (US 2021/0279957 A1) in view of Böckem et al. (US 2023/0042369 A1), in further view of Nussbaum et al. (US 11,210,851 B1).

Regarding claim 10, Eder discloses:

A processor (Eder, ¶10: system comprising one or more hardware processors), comprising: one or more circuits (Eder, ¶246: subject matter realized in digital electronic circuitry, integrated circuitry, computer hardware, firmware, software and combinations thereof) to:

use one or more neural networks (Eder, ¶199: an AI algorithm (e.g., neural network) specifically trained to identify key elements may be used (e.g., walls, ceiling, floor, furniture, wall hangings, appliances, and/or other objects)) to generate, based at least on image data captured of an environment, a three-dimensional (3D) representation of the environment including specified 3D representations of one or more identified objects (Eder, ¶170: "Operation S24 (similar to operation S14 of FIG. 2) involves generating, via a machine learning model, a 3D model of the location and elements therein. The machine learning model may be configured to receive the plurality of images as input and predict the geometry of the location and the elements therein to form the 3D model.") and one or more unidentified objects (Eder, Fig. 8F and ¶159: final 3D model of room constructed, with surface representations of everything in the room – this would include "unidentified objects"; ¶213: the 3D map generated may be incomplete, and therefore "use a machine learning model (e.g., to interpolate between different images, etc.) to make the map more consistent/complete", where block 3204 illustrates a module comprising a machine learning model which reads the 3D map generated from block 3203, identifies inventory items associated with that map, and provides a bounding box around identified objects – i.e., examiner notes that this implies no bounding box around unidentified objects, but the interpolation of the map to make it consistent would still generate surface representations of unidentified objects);

generate, based at least on the image data, one or more surface representations of the one or more unidentified objects (Eder, Fig. 8F and ¶159, and ¶213, as cited immediately above); and

replace at least a second portion of the 3D representation corresponding to the one or more unidentified objects with surface representations (Eder, ¶213: the 3D map generated may be incomplete, and therefore "use a machine learning model (e.g., to interpolate between different images, etc.) to make the map more consistent/complete").

Eder does not explicitly disclose replacing at least a first portion of the one or more segments corresponding to the object with a portion of a 3D model selected from a repository of 3D models.

Böckem discloses:

Replacing, by updating a number of points of a mesh representation of the environment, at least a portion of the one or more segments corresponding to the object with a portion of a 3D model selected from a repository of 3D models, wherein the replacing changes a number of points in the mesh representation of the environment (Böckem, ¶18: the process of registering or incorporating a local 3D model (e.g. a 3D point cloud or vector file model such as a mesh) into a translocal 3D model involves replacing data of the translocal 3D model relating to the item represented by the local 3D model by data of the local 3D model, or complementing data of the translocal 3D model relating to the item represented by the local 3D model by data of the local 3D model; ¶21: input data may comprise a 3D terrain and/or city model, e.g. in the form of a 3D point cloud or a 3D vector file model, such as a 3D mesh model; ¶27: a visualization fusion is carried out such that, considered in a current placement of the 3D item visualization, in a section of the 3D environment visualization which corresponds to the 3D item visualization the 3D environment visualization is replaced by the 3D item visualization, and in a surrounding section, which is adjacent to the 3D item visualization and extends away from it, the 3D environment visualization is replaced by a replacement visualization based on synthetic data, the replacement visualization providing a gapless transition between the 3D item visualization and the remainder of the 3D environment visualization; ¶185: plurality of local 3D models; ¶193 and Figs. 7-8: visualization fusion, inclusion of 3D item visualization 2, e.g. a building, into 3D environmental visualization 1 to provide gapless visualization of the included object within the environment in 3D space; Fig. 8 and ¶198: 3D environment visualization 1 without the inserted 3D item visualization, with the bottom showing fusion with the inserted 3D item – note that replacement of the ground mesh with the building changes the number of points).

Both Eder and Böckem are directed to methods and systems for mapping objects within a real-world environment and reproducing virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, by allowing for the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, using known electronic interfacing and programming techniques. The modification merely substitutes one type of virtual object for modifying a virtual display of an environment for another, yielding predictable results of replacing an object with another type of known virtual object for display. Moreover, the modification results in an improved system and method for modifying a mapped environment of a user by allowing for incorporation of different types of virtual objects into the environment for more diverse and tailored perspectives of the environment, while also using a seamless fusion of data to provide improved visualization of data in augmented reality applications (see e.g. Böckem, ¶2, discussing different applications of data fusion, such as gaming and augmented reality applications).
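The "changes a number of points" limitation the examiner reads onto Böckem (¶18, ¶198) amounts to a mesh splice: vertices of the environment mesh in the replaced region are removed and the repository model's vertices are appended, so the total point count changes. A vertex-only sketch under those assumptions (face rebookkeeping omitted; not Böckem's actual implementation):

```python
# Remove environment-mesh vertices inside the identified object's bounding
# box, then append the repository model's vertices in their place.
import numpy as np

def splice_model(env_vertices, bbox_min, bbox_max, model_vertices):
    inside = np.all((env_vertices >= bbox_min) & (env_vertices <= bbox_max),
                    axis=1)
    kept = env_vertices[~inside]              # drop the replaced segment
    out = np.vstack([kept, model_vertices])   # insert the repository model
    # len(out) != len(env_vertices) in general -> the replacement "changes
    # a number of points" in the environment mesh.
    return out
```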
Nussbaum discloses: classify first individual ones of the one or more objects as identified and second individual ones of the one or more objects as unidentified; and generate, based at least on the image data and classifying the second individual ones of the one or more objects as unidentified objects, a representation of the unidentified objects (Nussbaum, [6:39-50]: delineate objects in 3D model; [6:51-57]: identify different classes of objects; [10:17-28]: identified objects may be shaded or otherwise denoted, such as the shading on building 128 and lane markings 120, as shown in FIG. 1, and unidentified virtual objects, such as tree 126, may be a collection of points that, when represented in virtual environment 132, are recognizable as an object to user 10).

Eder, Böckem, and Nussbaum are directed to methods and systems for mapping objects within a real-world environment and reproducing virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, by allowing for the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, and by further utilizing the technique of classifying objects as identified or unidentified for changing a visual depiction of the object as provided by Nussbaum, using known electronic interfacing and programming techniques. The modification results in an improved 3D modeling and segmentation system by better clarifying to the user what the system is able to process, such that the user can more readily understand how the system is working in the environment, giving the user more information for better judgment about the resulting visualization and data presented.

Regarding claim 12, Eder further discloses: wherein the one or more circuits are further to provide a presentation of the 3D environment using at least a portion of at least one specified 3D representation of the specified 3D representations (Eder, ¶144: "FIG. 8A shows the first frame 4710 of the video stream associated with a first portion of a room. Based on the first frame 4710 of the video stream, a 3D model may be constructed (e.g., as discussed at operation S16 of FIG. 2). For example, a 3D model 4720 includes elements 4721 corresponding to e.g., table and walls in the first frame 4710 of the video stream."; ¶186: "In an embodiment, a user may suggest modifications to the virtual representation through the graphical user interface. The graphical user interface enables a mode of operation in which the aforementioned modifications may be stored as tentative changes to the virtual representation. The user may later accept or decline these tentative modifications to the virtual representation in part or on the whole."), wherein one or more aspects of the specified 3D representations are modifiable in the presentation (Eder, ¶186: "The user may later accept or decline these tentative modifications to the virtual representation in part or on the whole. In an embodiment, when more than one user is interacting with the virtual representation, other users may choose to approve or decline these tentative changes in the same way. The tentative changes may be viewed or hidden by a user, and the tentative changes may be displayed not final changes, such as by adjusting the transparency or color of the relevant aspects of the virtual representation."; ¶129: "As shown in FIG. 11B, a user may select a region 5101 of the first portion 5100 of the virtual representation of the room. The selected region 5101 may be defined by a free shaped boundary or a boundary with particular shape (e.g., square, rectangle, etc.) within the virtual representation 5100a. Further, the selected region 5103 may be deleted or otherwise hidden from view to generate a modified virtual representation 5100c.").

Regarding claim 13, Eder further discloses: wherein the one or more circuits are further to provide at least one specified 3D representation of the specified 3D representations of the one or more identified objects to a cloud-hosted collaborative content creation platform for multi-dimensional assets (Eder, ¶74: "Alternatively, a device-to-cloud streaming process can be used with sufficient connectivity between the capturing device and the cloud-based server."; ¶¶155-156: "the updating or reconstruction of the virtual representation VIR may also be performed offline. For example, the video or images of the room (e.g., images 4710, 4730, 4750, 4770, and 4790 of FIGS. 8A-8E) may be stored on one or more servers and the virtual representation may be generated offline....To enable offline construction, the graphical user interface may be provided with a capability for a user to upload the captured data (e.g., digital media) to a server."; ¶186: "In an embodiment, modifications to the virtual representation may be suggested. In an embodiment, a user may suggest modifications to the virtual representation through the graphical user interface. The graphical user interface enables a mode of operation in which the aforementioned modifications may be stored as tentative changes to the virtual representation. The user may later accept or decline these tentative modifications to the virtual representation in part or on the whole. In an embodiment, when more than one user is interacting with the virtual representation, other users may choose to approve or decline these tentative changes in the same way. The tentative changes may be viewed or hidden by a user, and the tentative changes may be displayed not final changes, such as by adjusting the transparency or color of the relevant aspects of the virtual representation.").

Regarding claim 14, Eder further discloses: wherein at least one specified 3D representation of the specified 3D representations (Eder, Fig. 8G: floor region 4805; Fig. 8H: a TV 4810) includes model data (Eder, Fig. 8G: metadata 4806; Fig. 8H: metadata 4812) corresponding to the one or more identified objects and comprised in a repository of model data corresponding to a plurality of objects (Eder, ¶61: "In an embodiment, the metadata may include information about elements of the locations, such as information about a wall, a chair, a bed, a floor, a carpet, a window, or other elements...The metadata may be sourced from a database or uploaded by the user.").

Regarding claim 15, Eder further discloses: wherein the repository of model data is comprised in a data store of a cloud-hosted collaborative content creation platform for multi-dimensional assets (Eder, ¶61: "In an embodiment, the metadata may include information about elements of the locations, such as information about a wall, a chair, a bed, a floor, a carpet, a window, or other elements...The metadata may be sourced from a database or uploaded by the user."; ¶74: "Alternatively, a device-to-cloud streaming process can be used with sufficient connectivity between the capturing device and the cloud-based server."; ¶156: "To enable offline construction, the graphical user interface may be provided with a capability for a user to upload the captured data (e.g., digital media) to a server."; ¶189: "In some embodiments, system 100 may include one or more server 102. The server(s) 102 may be configured to communicate with one or more user computing platforms 104 according to a client/server architecture. The users may access system 100 via user computing platform(s) 104.").

Regarding claim 16, Eder discloses: A system, comprising: one or more processors; and memory including instructions that, when performed by the one or more processors, cause the system to: (Eder, ¶12: "A memory module, which can include a computer-readable storage medium, may include, encode, store, or the like, one or more programs that cause one or more processors to perform one or more of the operations described herein"; ¶243: "A system comprising: at least one programmable processor; and a non-transitory machine-readable medium storing instructions which, when executed by the at least one programmable processor, cause the at least one programmable processor to perform operations")

Generate, based at least on video data captured for a space including one or more objects, a three-dimensional (3D) representation of the space (Eder, Abstract: "The operations includes receiving description data (e.g., a plurality of images and videos) of the location, the description data being generated via at least one of a camera, a user interface; receive metadata associated elements within the location; generating (e.g., offline or in real-time), via a machine learning model and/or a geometric model, a 3-dimensional (3D) model of the location and elements therein"; also see Fig. 6A, ¶21 and ¶170);

Classify the one or more objects as identified objects or unidentified objects (Note: this only requires either classification, e.g., identifying an object as a pillow would classify it as an identified object, and does not require any specific classifying of unidentified objects, such as labeling an object as "unidentified"; the additional limitation of generating the surface representations merely recites what is performed, and does not require something more specific, such as obtaining an object labeled in memory as unidentified and, based on the object being identified as unidentified, generating a surface representation.)
Eder, ¶69: machine learning model identifying elements of one or more received images or the 3D model; Figs. 6A to 6C and ¶121: images spatially localized in context of 3D model, with annotations back-projected to 3D model, with regional segmentation annotations, labels – e.g. bed, pillow, table – represented on surfaces of 3D model, with weighted aggregation and voting schemes that may be used to disambiguate regions that may appear to share different labels due to effects of noise and errors in the construction of the 3D model; ¶134: generating the visual indicators includes generating, via a machine learning model, a probability map indicating how accurately a particular element is represented in the virtual representation VIR or a 2D representation of the virtual representation VIRs, where a low probability portion indicates the corresponding portion of the virtual representation VIR may need additional data to further improve the virtual representation VIR; ¶159: 3D model may be configured to further semantically identify each of the elements in the room, such as bed, pillows, floor, wall, and window – see Fig. 8F; ¶213: identify inventory items with 3D map; ¶225: semantically trained machine learning model configured to perform semantic or instance segmentation and 3D object detection and localization of each object in an inputted image);
Generate one or more surface representations of the unidentified objects in the 3D representation (Eder, Fig. 8F and ¶159: final 3D model of room constructed, with surface representations of everything in the room – this would include "unidentified" objects; ¶213: 3D map generated may be incomplete, and therefore "use a machine learning model (e.g., to interpolate between different images, etc.) to make the map more consistent/complete", where block 3204 illustrates a module comprising a machine learning model which reads the 3D map generated from block 3203, identifies inventory items associated with that map, and provides a bounding box around identified objects – i.e., the examiner notes that this implies no bounding box around unidentified objects, but the interpolation of the map to make it consistent would still generate surface representations of unidentified objects);
Provide an updated 3D representation of the space including respective 3D representations of the identified objects and the one or more surface representations of the unidentified objects (Eder, ¶¶128-130 discloses user interface for modifying aspects of environment – see Figs. 11A-11C; ¶186: user may later accept or decline these tentative modifications to the virtual representation in part or on the whole; ¶144: "FIG. 8A shows the first frame 4710 of the video stream associated with a first portion of a room. Based on the first frame 4710 of the video stream, a 3D model may be constructed (e.g., as discussed at operation S16 of FIG. 2). For example, a 3D model 4720 includes elements 4721 corresponding to e.g., table and walls in the first frame 4710 of the video stream."; Fig. 8F and ¶159: final 3D model of room constructed, with surface representations of everything in the room – this would include "unidentified" objects; ¶213: 3D map generated may be incomplete, and therefore "use a machine learning model (e.g., to interpolate between different images, etc.) to make the map more consistent/complete", where block 3204 illustrates a module comprising a machine learning model which reads the 3D map generated from block 3203 and identifies inventory items associated with that map, and provides a bounding box around identified objects – i.e., the examiner notes that this implies no bounding box around unidentified objects, but the interpolation of the map to make it consistent would still generate surface representations of unidentified objects); and
Modify, in response to a received input, the respective 3D representations of the identified objects in at least one of selection or placement within the 3D representation of the space (Eder, ¶¶128-129: user interface for interacting with virtual representation having 3D model of elements in room, including user selecting region of room – Fig. 11B; ¶186: a user may suggest modifications to the virtual representation through the graphical user interface)
Eder does not explicitly disclose replacing, by updating a number of points of a mesh representation of the space, at least a portion of the identified objects with a portion of a 3D model selected from a repository of 3D models, wherein the replacing changes a number of points in a mesh representation of the environment.
Böckem discloses: Replace at least a portion of the identified objects with a portion of a 3D model selected from a repository of 3D models, wherein the replacing changes a number of points in a mesh representation of the environment (Böckem, ¶18: the process of registering or incorporation of a local 3D model (e.g. a 3D point cloud or vector file model such as a mesh) into a translocal 3D model involves replacing data of the translocal 3D model relating to the item represented by the local 3D model by data of the local 3D model, or complementing data of the translocal 3D model relating to the item represented by the local 3D model by data of the local 3D model; ¶21: input data may comprise a 3D terrain and/or city model, e.g. in the form of a 3D point cloud or a 3D vector file model, e.g. such as a 3D mesh model; ¶22: existing model data; ¶27: a visualization fusion is carried out such that, considered in a current placement of the 3D item visualization, in a section of the 3D environment visualization which corresponds to the 3D item visualization the 3D environment visualization is replaced by the 3D item visualization, and in a surrounding section, which is adjacent to the 3D item visualization and extends away from the 3D item visualization, the 3D environment visualization is replaced by a replacement visualization based on synthetic data, the replacement visualization providing a gapless transition between the 3D item visualization and the remaining of the 3D environment visualization; ¶185: plurality of local 3D models; ¶193 and Figs. 7-8: visualization fusion, inclusion of 3D item visualization 2, e.g. building, into 3D environmental visualization 1 to provide gapless visualization of included object within environment in 3D space; Fig. 8 and ¶198: 3D environment visualization 1 without inserted 3D item visualization and bottom shows fusion with inserted 3D item – note replacement of ground mesh with building changes the number of points; note ¶); and
Providing the updated 3D representation of the environment including 3D representations of the identified objects (Böckem, Fig. 8 and ¶198: fusion visualization includes inserted 3D item with other represented 3D surfaces)
Both Eder and Böckem are directed to methods and systems to map objects within a real-world environment and reproduce virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, by allowing for the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, using known electronic interfacing and programming techniques. The modification merely substitutes one type of virtual object for modifying a virtual display of an environment for another, yielding predictable results of replacing an object with another type of known virtual object for display. Moreover, the modification results in an improved system and method for modifying a mapped environment of a user by allowing for incorporation of different types of virtual objects into the environment for more diverse and tailored perspectives of the environment, while also using a seamless fusion of data to provide improved visualization of data in augmented reality applications (see, e.g., Böckem, ¶2, discussing different applications of data fusion, such as gaming and augmented reality applications).
Nussbaum discloses: Classify first individual ones of the one or more objects as identified and second individual ones of the one or more objects as unidentified; and generate, based at least on the image data and classifying the second individual ones of the one or more objects as unidentified objects, a representation of the unidentified objects (Nussbaum, [6:39-50]: delineate objects in 3D model; [6:51-57]: identify different classes of objects; [10:17-28]: identified objects may be shaded or otherwise denoted, such as the shading on building 128 and lane markings 120, as shown in FIG. 1, and unidentified virtual objects, such as tree 126, may be a collection of points that, when represented in virtual environment 132, are recognizable as an object to user 10)
Eder, Böckem, and Nussbaum are directed to methods and systems to map objects within a real-world environment and reproduce virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, by allowing for the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, and by further utilizing the technique of classifying objects as identified or unidentified for changing a visual depiction of the object as provided by Nussbaum, using known electronic interfacing and programming techniques. The modification results in an improved 3D modeling and segmentation system by better clarifying to the user what the system is able to process, such that the user can more readily understand how the system is working in the environment, giving the user more information for better judgment about the resulting visualization and data presented.
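To make the mapped technique concrete: the combination reads the claims onto a pipeline that classifies detected objects and then swaps identified objects for repository assets. Below is a minimal sketch of such a pipeline, assuming a detector that returns per-object confidence scores; the names (DetectedObject, MODEL_REPOSITORY, update_scene) and the 0.5 confidence cutoff are illustrative inventions for this sketch, not anything recited in the claims or disclosed by Eder, Böckem, or Nussbaum.

```python
# Illustrative sketch only: classify detections as identified/unidentified,
# keep unidentified objects as raw surface points (a recognizable collection
# of points), and replace identified objects with repository meshes, which
# changes the number of points in the scene representation.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str         # semantic label proposed by the detector (e.g., "pillow")
    confidence: float  # detector confidence in that label
    points: list       # raw surface points reconstructed from the captured video

# Hypothetical repository of pre-built 3D assets keyed by semantic label.
# Vertices only, for brevity; a real repository would store full meshes.
MODEL_REPOSITORY = {
    "pillow": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 0.4)],
    "table": [(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)],
}

CONFIDENCE_CUTOFF = 0.5  # assumed threshold separating identified from unidentified

def update_scene(base_mesh: list, detections: list) -> list:
    """Build the updated scene: repository mesh for identified objects,
    raw surface representation for unidentified ones."""
    updated = list(base_mesh)
    for obj in detections:
        if obj.confidence >= CONFIDENCE_CUTOFF and obj.label in MODEL_REPOSITORY:
            updated.extend(MODEL_REPOSITORY[obj.label])  # replacement changes point count
        else:
            updated.extend(obj.points)  # surface representation only
    return updated

if __name__ == "__main__":
    floor = [(0, 0, 0), (5, 0, 0), (5, 5, 0), (0, 5, 0)]
    detections = [
        DetectedObject("pillow", 0.92, [(1, 1, 0.1)] * 8),    # identified
        DetectedObject("unknown", 0.21, [(3, 3, 0.1)] * 20),  # unidentified
    ]
    raw = floor + [p for d in detections for p in d.points]
    updated = update_scene(floor, detections)
    print(f"points before replacement: {len(raw)}, after: {len(updated)}")
```

The printed counts differ only because the identified object's raw surface points were exchanged for a repository mesh with a different vertex count, which is the property the rejection maps onto Böckem's fusion.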
Regarding claim 17, Eder further discloses: wherein the instructions when performed further cause the system to: provide the respective 3D representations of the one or more objects to a cloud-hosted collaborative content creation platform for multi-dimensional assets (Eder, ¶74: "Alternatively, a device-to-cloud streaming process can be used with sufficient connectivity between the capturing device and the cloud-based server."; ¶¶155-156: "The updating or reconstruction of the virtual representation VIR may also be performed offline. For example, the video or images of the room (e.g., images 4710, 4730, 4750, 4770, and 4790 of FIGS. 8A-8E) may be stored on one or more servers and the virtual representation may be generated offline....To enable offline construction, the graphical user interface may be provided with a capability for a user to upload the captured data (e.g., digital media) to a server."; ¶186: "In an embodiment, modifications to the virtual representation may be suggested. In an embodiment, a user may suggest modifications to the virtual representation through the graphical user interface. The graphical user interface enables a mode of operation in which the aforementioned modifications may be stored as tentative changes to the virtual representation. The user may later accept or decline these tentative modifications to the virtual representation in part or on the whole. In an embodiment, when more than one user is interacting with the virtual representation, other users may choose to approve or decline these tentative changes in the same way. The tentative changes may be viewed or hidden by a user, and the tentative changes may be displayed differently than final changes, such as by adjusting the transparency or color of the relevant aspects of the virtual representation.")
Regarding claim 18, Eder further discloses: wherein the respective 3D representations include model data corresponding to the one or more objects (Eder, Fig. 8G, metadata 4806; Fig. 8H, TV 4810) and comprised in a repository of model data corresponding to a plurality of objects (Eder, ¶61: metadata including information about elements and sourced from a database)
Regarding claim 19, Eder further discloses: wherein the repository of model data is comprised in a data store of a cloud-hosted collaborative content creation platform for multi-dimensional assets (Eder, ¶74: "Alternatively, a device-to-cloud streaming process can be used with sufficient connectivity between the capturing device and the cloud-based server."; ¶61: "In an embodiment, the metadata may include information about elements of the locations, such as information about a wall, a chair, a bed, a floor, a carpet, a window, or other elements...The metadata may be sourced from a database or uploaded by the user."; ¶156: "To enable offline construction, the graphical user interface may be provided with a capability for a user to upload the captured data (e.g., digital media) to a server."; ¶189: "In some embodiments, system 100 may include one or more server 102. The server(s) 102 may be configured to communicate with one or more user computing platforms 104 according to a client/server architecture. The users may access system 100 via user computing platform(s) 104.")
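The ¶186 workflow quoted above (suggested modifications staged as tentative changes that any collaborating user may accept or decline) can be sketched as follows. This is a minimal illustration under assumed names (TentativeChange, SharedRepresentation); Eder does not disclose an implementation at this level of detail.

```python
# Illustrative sketch of a staged-edit workflow: changes are suggested,
# held as "pending", and only applied to the shared representation once
# a collaborating user accepts them. All names here are assumptions.
from dataclasses import dataclass

@dataclass
class TentativeChange:
    author: str
    description: str          # e.g., "swap sofa for repository model"
    status: str = "pending"   # pending -> accepted | declined

class SharedRepresentation:
    def __init__(self) -> None:
        self.changes: list[TentativeChange] = []

    def suggest(self, author: str, description: str) -> TentativeChange:
        change = TentativeChange(author, description)
        self.changes.append(change)
        return change

    def review(self, change: TentativeChange, accept: bool) -> None:
        # Any collaborating user may approve or decline a staged change;
        # declined changes remain in the history but are never applied.
        change.status = "accepted" if accept else "declined"

    def applied(self) -> list:
        return [c for c in self.changes if c.status == "accepted"]

if __name__ == "__main__":
    rep = SharedRepresentation()
    change = rep.suggest("user_a", "replace bed with repository model")
    rep.review(change, accept=True)
    print([f"{c.author}: {c.description} ({c.status})" for c in rep.applied()])
```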
Regarding claim 20, Examiner notes the claim recites a list of alternatives, any one of which alone will teach the recited claim limitations. Eder teaches: wherein the system comprises at least one of: a system for performing simulation operations (Eder, ¶178: "In an embodiment, the method may further include generating a floor plan (e.g., as discussed with respect to FIGS. 13A-13F). For example, the floor plan may be generated by specifying points of interest within the virtual representation displayed on a graphical user interface; generating the floor plan using the points of interest as input to a machine learning model or a geometric model; and spatially localizing the floor plan on to the virtual representation."); a system for performing collaborative content creation for 3D assets (Eder, ¶186: "In an embodiment, modifications to the virtual representation may be suggested. In an embodiment, a user may suggest modifications to the virtual representation through the graphical user interface. The graphical user interface enables a mode of operation in which the aforementioned modifications may be stored as tentative changes to the virtual representation. The user may later accept or decline these tentative modifications to the virtual representation in part or on the whole. In an embodiment, when more than one user is interacting with the virtual representation, other users may choose to approve or decline these tentative changes in the same way. The tentative changes may be viewed or hidden by a user, and the tentative changes may be displayed differently than final changes, such as by adjusting the transparency or color of the relevant aspects of the virtual representation."); a system for performing deep learning operations (Eder, ¶213: "One example scenario is using a model, like a 3D convolutional neural network (CNN), on top of a point cloud that specifies a chair which is occluded by an artificial plant in the scene."); a system for performing real-time streaming (Eder, ¶64: "At operation S18, a virtual representation VIR is generated based on the 3D model of the location. In an embodiment, generating the virtual representation VIR includes generating or updating the 3D model based on the real-time video stream of the location."); a system implemented at least partially using cloud computing resources (Eder, ¶74: "Alternatively, a device-to-cloud streaming process can be used with sufficient connectivity between the capturing device and the cloud-based server."; ¶191: "The AI work may be performed in one or more of the cloud, a mobile device, and/or other devices."; ¶83: "Alternatively, the images, video, and associated camera pose information may be streamed to a cloud-based server where the volumetric integration and isosurface extraction is performed. Subsequently the mesh or a rendering thereof can be streamed back to the device. This cloud-based streaming approach can be performed at interactive speeds with sufficient connectivity between the capturing device and the cloud-based server.")
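For the deep-learning alternative, Eder ¶213's example is a 3D convolutional neural network applied on top of a point cloud. A minimal sketch of that kind of operation, assuming PyTorch and an arbitrary toy architecture with a 16x16x16 occupancy grid (none of which comes from Eder), might look like:

```python
# Illustrative sketch: voxelize a point cloud into an occupancy grid and
# run it through a tiny 3D CNN classifier. Architecture and grid size are
# assumptions for illustration, not Eder's disclosed model.
import torch
import torch.nn as nn

GRID = 16  # assumed voxel grid resolution

def voxelize(points: torch.Tensor) -> torch.Tensor:
    """Map (N, 3) points in [0, 1)^3 to a (1, 1, GRID, GRID, GRID) occupancy grid."""
    grid = torch.zeros(1, 1, GRID, GRID, GRID)
    idx = (points.clamp(0, 1 - 1e-6) * GRID).long()
    grid[0, 0, idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

class TinyPointCloudCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),  # 16 -> 8
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),  # 8 -> 4
        )
        self.head = nn.Linear(16 * 4 * 4 * 4, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    cloud = torch.rand(500, 3)  # stand-in for a partially occluded object's points
    logits = TinyPointCloudCNN()(voxelize(cloud))
    print("predicted class:", logits.argmax(dim=1).item())
```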
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Eder et al. (US 2021/0279957 A1) in view of Böckem et al. (US 2023/0042369 A1) and Nussbaum et al. (US 11,210,851 B1), in further view of Arrasvuori (US 2008/0071559 A1).
Regarding claim 11, the limitations included from claim 10 are rejected based on claim 10 as set forth above.
Further regarding claim 11, Eder further discloses: wherein the one or more circuits are further to analyze segments of the 3D representation of the environment to determine an identity of at least one object (Eder, ¶122: In a 3D-model based semantic segmentation, given a 3D model, semantic labels can be inferred by one or more machine learning models (e.g., a neural network) trained for 3D semantic or instance segmentation and 3D object detection and localization; ¶159: the 3D model may be configured to further semantically identify each of the elements in the room; also ¶225: where generating the virtual representation with the semantic information includes identifying elements from the plurality of images or the 3D model by a semantically trained machine learning model, the semantically trained machine learning model configured to perform semantic or instance segmentation and 3D object detection and localization of each object in an inputted image)
Eder modified by Böckem further discloses: replace at least a portion of the segments with the portion of the 3D model selected from the repository of 3D models (Böckem, ¶18: the process of registering or incorporation of a local 3D model (e.g. a 3D point cloud or vector file model such as a mesh) into a translocal 3D model involves replacing data of the translocal 3D model relating to the item represented by the local 3D model by data of the local 3D model, or complementing data of the translocal 3D model relating to the item represented by the local 3D model by data of the local 3D model; ¶21: input data may comprise a 3D terrain and/or city model, e.g. in the form of a 3D point cloud or a 3D vector file model, e.g. such as a 3D mesh model; ¶27: a visualization fusion is carried out such that, considered in a current placement of the 3D item visualization, in a section of the 3D environment visualization which corresponds to the 3D item visualization the 3D environment visualization is replaced by the 3D item visualization, and in a surrounding section, which is adjacent to the 3D item visualization and extends away from the 3D item visualization, the 3D environment visualization is replaced by a replacement visualization based on synthetic data, the replacement visualization providing a gapless transition between the 3D item visualization and the remaining of the 3D environment visualization; ¶185: plurality of local 3D models; ¶193 and Figs. 7-8: visualization fusion, inclusion of 3D item visualization 2, e.g. building, into 3D environmental visualization 1 to provide gapless visualization of included object within environment in 3D space; Fig. 8 and ¶198: 3D environment visualization 1 without inserted 3D item visualization and bottom shows fusion with inserted 3D item)
Both Eder and Böckem are directed to methods and systems to map objects within a real-world environment and reproduce virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, by allowing for the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, using known electronic interfacing and programming techniques. The modification merely substitutes one type of virtual object for modifying a virtual display of an environment for another, yielding predictable results of replacing an object with another type of known virtual object for display. Moreover, the modification results in an improved system and method for modifying a mapped environment of a user by allowing for incorporation of different types of virtual objects into the environment for more diverse and tailored perspectives of the environment, while also using a seamless fusion of data to provide improved visualization of data in augmented reality applications (see, e.g., Böckem, ¶2, discussing different applications of data fusion, such as gaming and augmented reality applications).
Arrasvuori discloses: wherein the repository corresponds to a plurality of objects from a digital catalog (Arrasvuori, ¶38: online shopping service 302 runs on remote server 304 accessible via network and containing 3D models or other graphical representations of tangible products that are being sold, the system handling cataloguing of products)
Eder, Böckem, Nussbaum, and Arrasvuori are directed to methods and systems to map objects within a real-world environment and reproduce virtual objects for display within the environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface for displaying a modifiable real-world environment as a virtual model as provided by Eder, including the modification of the visualization of the environment by fusing a 3D virtual model into a 3D environment representation as provided by Böckem, further utilizing the technique of classifying objects as identified or unidentified for changing a visual depiction of the object as provided by Nussbaum, by further incorporating a product catalog for 3D models as provided by Arrasvuori, using known electronic interfacing and programming techniques. The modification results in an improved virtual object replacement system and technique by allowing for additional utility and commercialization of the system, and by allowing easier remote management of distributed objects based on products available from a remote server, rather than requiring all data to be stored locally with the user.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL, whose telephone number is (571) 272-3132. The examiner can normally be reached Monday-Friday, 9:00 AM - 5:00 PM (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, DANIEL HAJNIK, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WILLIAM A BEUTEL/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Dec 14, 2022
Application Filed
Sep 05, 2024
Non-Final Rejection — §102, §103
Nov 21, 2024
Examiner Interview Summary
Dec 11, 2024
Response Filed
Jan 06, 2025
Final Rejection — §102, §103
Mar 21, 2025
Examiner Interview Summary
Mar 21, 2025
Applicant Interview (Telephonic)
Apr 10, 2025
Request for Continued Examination
Apr 11, 2025
Response after Non-Final Action
Apr 15, 2025
Non-Final Rejection — §102, §103
Jun 25, 2025
Applicant Interview (Telephonic)
Jun 25, 2025
Examiner Interview Summary
Oct 17, 2025
Response Filed
Nov 21, 2025
Final Rejection — §102, §103
Feb 11, 2026
Examiner Interview Summary
Feb 11, 2026
Applicant Interview (Telephonic)
Feb 18, 2026
Request for Continued Examination
Feb 22, 2026
Response after Non-Final Action
Feb 26, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581262
AUGMENTED REALITY INTERACTION METHOD AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 17, 2026
Patent 12572258
APPARATUS AND METHOD WITH IMAGE PROCESSING USER INTERFACE
2y 5m to grant Granted Mar 10, 2026
Patent 12566531
CONFIGURING A 3D MODEL WITHIN A VIRTUAL CONFERENCING SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12561927
MEDIA RESOURCE DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12554384
SYSTEMS AND METHODS FOR IMPROVED CONTENT EDITING AT A COMPUTING DEVICE
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+20.4%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 469 resolved cases by this examiner. Grant probability derived from career allow rate.
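Illustrative check: assuming the interview lift is additive in percentage points (an assumption; the projection model is not stated here), the figures above are mutually consistent, since $0.70 + 0.204 = 0.904 \approx 90\%$.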
