Prosecution Insights
Last updated: April 19, 2026
Application No. 18/646,351

SYSTEMS, METHODS, AND DEVICES TO GENERATE INTERACTIVE VIRTUAL ENVIRONMENTS

Status: Non-Final Office Action (§103)
Filed: Apr 25, 2024
Examiner: PEREN, VINCENT ROBERT
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Obsess Inc.
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 11m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 70% — above average (266 granted / 382 resolved; +7.6% vs Tech Center average)
Interview Lift: strong, +20.2% (allowance among resolved cases with an interview vs. without)
Typical Timeline: 2y 11m average prosecution; 15 applications currently pending
Career History: 397 total applications across all art units
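
The headline examiner figures above follow from simple ratios. The sketch below shows that arithmetic; the 266/382 career totals come from this report, while the with/without-interview case counts are hypothetical placeholders used only to illustrate how a ~+20-point lift would be derived.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved applications that ended in allowance."""
    return granted / resolved

# Career allow rate from the report's totals: 266 granted out of 382 resolved.
career_rate = allow_rate(266, 382)
print(f"Career allow rate: {career_rate:.1%}")  # ~69.6%, displayed as 70%

# Interview lift = allowance rate for resolved cases with an examiner interview
# minus the rate for those without one. The split below is a hypothetical
# placeholder (it only sums to the real 266/382 totals); the report does not
# publish the underlying counts.
with_interview = allow_rate(granted=85, resolved=100)      # 85.0% (placeholder)
without_interview = allow_rate(granted=181, resolved=282)  # ~64.2% (placeholder)
lift = with_interview - without_interview
print(f"Interview lift: {lift:+.1%}")  # roughly +20 percentage points
```

The same relationship shows up in the summary block: a 70% baseline grant probability plus a roughly 20-point interview lift is consistent with the 90% "With Interview" figure.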

Statute-Specific Performance

§101: 8.8% (-31.2% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 26.0% (-14.0% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 382 resolved cases.
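
The "vs TC avg" deltas are simply the examiner's statute-specific rate minus the Tech Center average estimate. The short consistency check below uses only the numbers listed above; the ~40% implied Tech Center figure falls out of those numbers and is not stated anywhere in the report.

```python
# Examiner statute-specific rates and reported deltas vs. the Tech Center
# average estimate (all values in percent, copied from the table above).
stats = {
    "§101": (8.8, -31.2),
    "§103": (46.8, +6.8),
    "§102": (26.0, -14.0),
    "§112": (13.7, -26.3),
}

for statute, (examiner_rate, delta_vs_tc) in stats.items():
    # delta = examiner_rate - tc_avg, so the implied TC average is:
    implied_tc_avg = examiner_rate - delta_vs_tc
    print(f"{statute}: examiner {examiner_rate:.1f}%, implied TC avg {implied_tc_avg:.1f}%")
    # Every row implies the same ~40.0% Tech Center average estimate.
```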

Office Action

§103
DETAILED ACTION

Status of Claims
Claims 1-20 are pending in this application, with claims 1, 11 and 19 being independent.

Notice of AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Obligation Under 37 CFR 1.56 – Joint Inventors
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Drawings
The drawings were received on April 25, 2024. These drawings are acceptable.

Claim Objections
Claim 1 is objected to because of the following informalities: “as the virtual avatar navigable in the computer-generated 3D space” (lines 11-12 of claim 1) is grammatically improper. As one possibility for overcoming the objection, the examiner suggests amending to instead recite: “with the virtual avatar navigable in the computer-generated 3D space”. Appropriate correction is required.
Claim 11 is objected to because of the following informalities: “as the virtual avatar navigable in the computer-generated 3D space” (line 15 of claim 11) is grammatically improper. As one possibility for overcoming the objection, the examiner suggests amending to instead recite: “with the virtual avatar navigable in the computer-generated 3D space”. Appropriate correction is required.
Claim 19 is objected to because of the following informalities: “as the virtual avatar navigable in the computer-generated 3D space” (line 10 of claim 19) is grammatically improper. As one possibility for overcoming the objection, the examiner suggests amending to instead recite: “with the virtual avatar navigable in the computer-generated 3D space”. Appropriate correction is required.

Claim Rejections – 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C.
103 are summarized as follows: Determining the scope and contents of the prior art; Ascertaining the differences between the prior art and the claims at issue; Resolving the level of ordinary skill in the pertinent art; and Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over SKEEN (US 2024/0161178) in view of HELINGER et al. (US 2024/0112431, hereinafter “HELINGER”). Regarding claim 19, SKEEN discloses a method to provide a virtual interactive environment (¶ [0020]: “techniques, devices, and systems for implementing XR storefronts.” ¶ [0020]: “XR storefront service can be implemented as a VR storefront service,” ¶ [0020]: “In an example, a VR storefront service may facilitate configuring VR storefronts and subsequently causing the VR storefronts to be displayed on end user devices.” ¶ [0031]: “the XR storefront service 110 to provide the interactive and immersive experiences to end users (e.g., customers 102, clerks 104, etc.) described herein, such as by rendering XR content (e.g., 3D scenes) to the user's electronic devices 106, 108,”), the method comprising: providing, with an environment generator engine (e.g., ¶ [0031]: “XR storefront service 110”;), a computer-generated three-dimensional (3D) space (¶ [0031]: “3D scenes”) by rendering a 3D model (e.g., ¶ [0033]: “3D model” ¶ [0031]: “rendering XR content (e.g., 3D scenes)”) (¶ [0031]: “The XR storefront service 110 can utilize any suitable type of component(s) to implement XR storefronts, as described herein. In some examples, the WebXR Device application programming interface (API) is utilized by the XR storefront service 110 to provide the interactive and immersive experiences to end users (e.g., customers 102, clerks 104, etc.) described herein, such as by rendering XR content (e.g., 3D scenes) to the user's electronic devices 106, 108, while maintaining compatibility with traditional browsers. In some examples, the A-frame open source library is utilized by the XR storefront service 110. The A-frame open source library uses HyperText Markup Language (HTML) and JavaScript as the central primitive for defining/building XR experiences. The code used by the XR storefront service 110 to implement the XR storefronts can be declarative HTML and/or JavaScript. This declarative software stack allows developers who are unfamiliar with more sophisticated game engines to develop a XR experience for merchants 112 using HTML and/or JavaScript code, and it also allows the end user (e.g., the customer 102, the clerk 104, etc.) to access XR storefronts (e.g., by downloading storefront data 126 used to render content (e.g., 3D scenes) via a browser) without having to download special-purpose applications or programs to access the XR storefronts.” ¶ [0032]: “FIG. 2 shows an example user interface 200 that can be displayed on a merchant device 116 at a time when a merchant 112 is configuring a XR storefront. Although the user interface 200 is shown as a browser, it is to be appreciated that an application may be downloaded from the XR storefront service 110 to a merchant device 116, and that the downloaded application may present a similar user interface. The user interface 200 may provide merchants 112 with a democratized on-ramp to configure their own XR storefronts, meaning that the merchant does not have to be a specialist (e.g., a 3D modeler, a game designer, etc.) to use the user interface 200 for configuring a XR storefront. 
In some examples, the user interface 200 can be used to configure a merchant-specific XR storefront for a particular merchant 112.” ¶ [0033]: “For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 202 to reveal a list of different XR storefronts that the merchant 112(1) can choose from. As mentioned, these predefined XR storefronts in the menu 124 may have been created and tested for compatibility with a XR shopping experience by a development team associated with the XR storefront service 110.” ¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0034]: “In some examples, the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront. For example, the merchant 112(1) may be able to modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc., and/or the merchant 112(1) may be able to combine (e.g., mix-and-match) multiple different XR storefronts to create a unique XR storefront. In some examples, the XR storefront service 110 may offer (e.g., as a premium service) the ability for the merchant 112(1) to create a XR storefront that is a replica of a brick-and-mortar store of the merchant 112(1). For example, a service provider of the XR storefront service 110 may provide 3D scanning hardware to the merchant 112(1), which the merchant 112(1) can use to scan the interior space of an existing brick-and-mortar store, and the resulting scan data can be uploaded to the server(s) 114 and used to create a 3D model for a replica XR storefront. In other examples, the XR storefront service 110 may not provide 3D scanning hardware to the merchant 112(1) and the servers(s) 114 may instead receive scan data from 3D scanning hardware already possessed by the merchant 112(1). In some examples, the service provider may send personnel to a brick-and-mortar location to scan the interior space of an existing brick-and-mortar store as a service for the merchant 112(1). In some examples, this type of service may be provided in combination with a service to scan physical items of the merchant's 112(1) inventory to create 3D models of the items.” ¶ [0043]: “FIG. 3 is an example 3D model 300 of a virtual space corresponding to a XR storefront, according to an implementation of the present subject matter. For example, as a result of the merchant 112(1) configuring a XR storefront (e.g., via the user interface 200 of FIG. 2), the storefront data 126 representing the XR storefront may include the data corresponding to the 3D model 300 shown in FIG. 3. Any suitable component can be used to create the 3D model 300. 
For example, the 3D model 300 may be created using Blender.”) and applying one or more of a lighting source layer (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.”), a reflective material layer (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.” NOTE: The color of a surface in a 3D model represents, at least in part, the reflective properties of the surface.), or a texture layer to the 3D model (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.” NOTE: The color and/or lighting of surfaces in a 3D model represent, at least in part, the texture properties of the surface in the 3D model.) (¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0034]: “In some examples, the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront. For example, the merchant 112(1) may be able to modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc., and/or the merchant 112(1) may be able to combine (e.g., mix-and-match) multiple different XR storefronts to create a unique XR storefront.” ¶ [0043]: “FIG. 3 is an example 3D model 300 of a virtual space corresponding to a XR storefront, according to an implementation of the present subject matter. For example, as a result of the merchant 112(1) configuring a XR storefront (e.g., via the user interface 200 of FIG. 2), the storefront data 126 representing the XR storefront may include the data corresponding to the 3D model 300 shown in FIG. 3. Any suitable component can be used to create the 3D model 300. For example, the 3D model 300 may be created using Blender.” NOTE: As is known by one of ordinary skill in the art, 3D models created and generated using Blender™, as suggested by SKEEN, include applying one or more of a lighting source layer, a reflective material layer, or a texture layer to the 3D model. For instance, paragraph [0030] of HELINGER et al. (US 2024/011243) discloses: “As used herein, “model” or “environment model” may refer to a computational or data-based model of an environment. In some embodiments or cases, “model” and “environment” may be synonymous. In some embodiments or cases, “model” may refer to the data which records or encodes an environment. 
A model may contain information for one or more of the following aspects of an environment: the geometry of the environment (e.g., encoded using points and/or polygons), textures of surfaces (e.g., splat maps or sprites), lighting (e.g., positions and types of light sources), light effects (e.g., reflectivity and transparency of materials or objects), etc. In some embodiments, a plurality or set of points and/or polygons may be described as a mesh. A mesh may have vertices, faces, and/or edges. A mesh may, for example, have textures or surfaces applied to it, or the textures or surfaces may be applied separately to individual faces of the mesh. A mesh may describe part of, or the entirety of, an object or a layer. In some embodiments, a model may include a number of constituent parts, such as objects and layers. Models may be constructed using a number of computer programs and/or computer aided design programs. For example, models might be constructed using one or more of the following commercially available programs or software: Blender, Cinema 4D, LightWave, Maya, Modo, 3ds Max, 3ixam, POV-Ray, RealityCapture, Metashape, and 3DF Zephyr, Unity, Unreal, or AI tools.” Thus, one of ordinary skill in the art would understand that a 3D model of the virtual space generated using Blender™, as taught by SKEEN, would include applying one or more of a lighting source layer, a reflective material layer, or a texture layer to the 3D model, as clearly disclosed by HELINGER et al.); providing a creator user interface (UI) (¶ [0029]: “storefront menu 124 may be displayed on the merchant device 116(1), thereby allowing the merchant 112(1) to select a XR storefront from the menu 124.” ¶ [0029]: “the merchant 112(1) may use the XR storefront service 110 to generate digital representations of items” ¶ [0032]: “FIG. 2 is an example user interface 200 for configuring a XR storefront,” ¶ [0034]: “the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront.”) for mapping one or more product models (¶ [0028]: “3D model data associated with items” ¶ [0034]: “scan physical items of the merchant's 112(1) inventory to create 3D models of the items.” ¶ [0038]: “create or generate digital representations of the selected items from the second step. The digital representations may be 2D representations or 3D representations.” ¶ [0038]: “a 360-degree interactive model of an item (e.g., a product).”) to one or more virtual surfaces of the 3D model (NOTE: As clearly shown in FIG. 3, first digital representation 302(1) of a candle and second digital representation 302(2) of a mug are clearly mapped to positions on surfaces of display tables in the 3D model of the XR storefront shown in FIG. 3. ¶ [0043]: “FIG. 3 also illustrates how digital representations 302 of items showcased in the XR storefront may be positioned within the virtual space. For example, FIG. 3 shows a first digital representation 302(1) of a candle for sale by the merchant 112(1), which is positioned at a first position within the virtual space, and a second digital representation 302(2) of a mug for sale by the merchant 112(1), which is positioned at a second, different position within the virtual space. These respective positions of the digital representations 302 within the virtual space may be based on the positioning indications received from the merchant 112(1) at the fourth step in the example of FIG. 2. 
In other words, the merchant 112(1) may have chosen the positions of the digital representations 302(1) and 302(2) within the virtual space. Accordingly, the digital representations 302(1) and 302(2) may be associated with respective positions within the virtual space based at least in part on the 3D model 300,” ¶ [0029]: “generate a XR storefront (e.g., the XR storefront selected from the menu 124) including the digital representations of the items positioned within a virtual space” ¶ [0040]: “indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items.” ¶ [0043]: “the digital representations 302(1) and 302(2) may be associated with respective positions within the virtual space based at least in part on the 3D model 300,”) ([0023]: “the XR storefront service may provide merchants with an easy-to-use storefront configuration tool (e.g., an Internet-accessible a user interface(s)).” ¶ [0027]: “FIG. 1 also depicts merchant devices 116 (e.g., electronic devices), which may be used by the merchants 112 to access the XR storefront service 110.” ¶ [0028]: “For example, the merchant 112(1) may utilize the server(s) 114 as an ecommerce platform to sell items online, and the catalogue data 122 associated with the merchant 112(1) may specify the merchant's 112(1) items that are available for purchase via the ecommerce platform. Additionally, or alternatively, the catalogue data 122 may be associated with the merchant's 112(1) items that are available for purchase from a brick-and-mortar store. In some examples, the catalogue data 122 may include image data representing images of items and/or 3D model data associated with items to enable user interaction with a 360-degree interactive model of an item (e.g., a product), thereby allowing a customer to view an item (e.g., a product) in 3D.” ¶ [0029]: “Individual merchants 112 may use a merchant device 116 to access the XR storefront service 110 in order to configure a XR storefront. FIG. 1 shows the merchant 112(1) using a merchant device 116(1) to access the XR storefront service 110 (e.g., the server(s) 114) over the network(s) 118 for purposes of configuring a XR storefront. In an example, a merchant 112 may be interested in setting up and configuring a VR storefront in order provide an immersive VR experience for customers, such as the customer 102,” ¶ [0029]: “Configuring a XR storefront can involve various operations, as described herein. In some examples, a merchant 112(1) may access a storefront menu(s) 124 that includes multiple different XR storefronts. That is, a storefront menu 124 may be displayed on the merchant device 116(1), thereby allowing the merchant 112(1) to select a XR storefront from the menu 124. These predefined XR storefronts in the menu 124 may have been created and tested for compatibility with a XR shopping experience by a development team associated with the XR storefront service 110. In some examples, the merchant 112(1) may use the XR storefront service 110 to generate digital representations of items included in its catalogue data 122, and to generate a XR storefront (e.g., the XR storefront selected from the menu 124) including the digital representations of the items positioned within a virtual space corresponding to the XR storefront.” ¶ [0032]: “FIG. 2 shows an example user interface 200 that can be displayed on a merchant device 116 at a time when a merchant 112 is configuring a XR storefront. 
Although the user interface 200 is shown as a browser, it is to be appreciated that an application may be downloaded from the XR storefront service 110 to a merchant device 116, and that the downloaded application may present a similar user interface. The user interface 200 may provide merchants 112 with a democratized on-ramp to configure their own XR storefronts, meaning that the merchant does not have to be a specialist (e.g., a 3D modeler, a game designer, etc.) to use the user interface 200 for configuring a XR storefront. In some examples, the user interface 200 can be used to configure a merchant-specific XR storefront for a particular merchant 112.” ¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0038]: “As illustrated by the encircled number 3 in FIG. 2, a third step may be to create or generate digital representations of the selected items from the second step. The digital representations may be 2D representations or 3D representations. Accordingly, the user interface 200 may provide an option 212 to select from existing images in the merchant's 112(1) catalogue, along with a “select images” button 214, to select the desired images. In some examples, selection of the “select images” button 214 may allow for selecting existing 3D models of items that are being, or that have been, utilized on an ecommerce website of the merchant 112(1) to provide users with the ability to interact with a 360-degree interactive model of an item (e.g., a product). In other words, 3D model data associated with the selected items may be reused or repurposed for creating or generating the digital representations of the selected items for inclusion within the XR storefront.” ¶ [0038]: “The user interface 200 may additionally, or alternatively, provide an option 216 to create 3D models of the selected items, along with an “upload scans” button 218 to upload scan data obtained from scanning the items in a real-world space. For example, a 3D scanning device may obtain scan data of an item from different angles, which is usable to create a 3D model of the item. A service provider of the XR storefront service 110 may provide this 3D scanning hardware and/or software to the merchant 112(1) and/or send personnel to the merchant's 112(1) brick-and-mortar store to obtain the scan data for the items the merchant 112(1) would like to showcase in the XR storefront. Creating 3D digital representations of items to showcase in the XR storefront is particularly useful for items that are more interesting to examine from different angles. The example of FIG. 
2 shows that the merchant 112(1) has selected the option 216 to generate 3D digital representations of the selected items, and, as such, the merchant 112(1) may have uploaded scan data for the items to create the 3D representations.” ¶ [0039]: “In some implementations, the user interface 200 may provide an option for the merchant to provide input that identifies 3D representation of an item. For example, a manufacturer may provide a 3D model of an item at a URL and the user interface 200 may provide an option for the merchant to indicate and enter a URL for the 3D representation of the item.” ¶ [0040]: “As illustrated by the encircled number 4 in FIG. 2, a fourth step may be to indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items. For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 220 to reveal a list of the items that the merchant 112(1) selected at the second step, and the merchant 112(1) may select one of those items. In addition, the merchant 112(1) can interact with (e.g., select) a drop-down menu 222 to reveal a list of predefined locations within the virtual space corresponding to the XR storefront that the merchant 112(1) can choose from to position the selected item within the virtual space. The merchant 112(1) may be able to indicate the positions of any suitable number of the selected items by selecting the “+Add” element.” In some examples, the user interface 200 may present an option to confirm, correct, and/or reject these automatically determined locations of items within the XR storefront.” ¶ [0068] At 1014, in some examples, the digital representations of the items generated at block 1006 may be associated with respective positions within the virtual space corresponding to the XR storefront based at least in part on the 3D model. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may associate the digital representations of the items with the respective positions within the virtual space at block 1014.”); providing an avatar customization UI (e.g. FIG. 2. ¶ [0030]: “The XR storefront service 110 may cause various user interfaces to be presented on a user's electronic device 106, 108,” ¶ [0032]: “FIG. 2 is an example user interface 200 for configuring a XR storefront,” ¶ [0041]: “As illustrated by the encircled number 5 in FIG. 2, a fifth step may be to configure a merchant avatar to represent a user (e.g., the clerk 104 in FIG. 1) associated with the merchant 112(1) within the XR storefront.” ¶ [0080]: : “At 1118, one or more avatars may be configured. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may configure the avatar(s) at block 1118. In some examples, the avatar configured at block 1118 may be a merchant avatar 904 that is to be associated with a clerk 104. In this manner, when a customer 102 accesses the XR storefront at the same time as the clerk 104, the merchant avatar 904 may be displayed within the XR storefront on the electronic device 106 of the customer 102. In some examples, the avatar configured at block 1118 may be a customer avatar 902 that is to be associated with a customer 102.”) for generating, with an avatar customization engine (e.g., ¶ [0031]: “XR storefront service 110”), a virtual avatar (¶ [0041]: “As illustrated by the encircled number 5 in FIG. 
2, a fifth step may be to configure a merchant avatar to represent a user (e.g., the clerk 104 in FIG. 1) associated with the merchant 112(1) within the XR storefront. That is, the merchant 112(1) may employ users, such as the clerk 104, to access the XR storefront to interact with customers within the XR storefront. Accordingly, when a customer 102 accesses the XR storefront at the same time that a clerk 104 is accessing the same XR storefront, the customer 102 may see a merchant avatar within the XR storefront. The fifth step in the example of FIG. 2 is to configure such a merchant avatar. The merchant 112(1) may be able to select from a menu of predefined avatars and/or create a new avatar by selecting features, such as height, weight, hair color, eye color, skin color, or the like. If the merchant 112(1) does not complete the fifth step, a default avatar may be chosen for users (e.g., clerks 104) associated with the merchant 112(1).” ¶ [0058]: “The avatars 902, 904 may be human-like in form such that, when they are configured, the avatars, such as the customer avatars 902, may have similar sizing to the customers associated with those avatars. For example, the avatars 902 may be configured to have the same height, weight, neck size, waist size, chest size, and the like, as their corresponding customers. This may allow for evaluating clothing items within the VR storefronts. For example, a customer may be able to virtually try on clothes in a VR storefront to see if they fit. Such an experience may mimic a real-world shopping experience to drive customer engagement with VR storefronts.” ¶ [0080]: “At 1118, one or more avatars may be configured. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may configure the avatar(s) at block 1118. In some examples, the avatar configured at block 1118 may be a merchant avatar 904 that is to be associated with a clerk 104. In this manner, when a customer 102 accesses the XR storefront at the same time as the clerk 104, the merchant avatar 904 may be displayed within the XR storefront on the electronic device 106 of the customer 102. In some examples, the avatar configured at block 1118 may be a customer avatar 902 that is to be associated with a customer 102. In this manner, when the clerk 104 (and/or another customer) accesses the XR storefront at the same time as the customer 102, the customer avatar 902 may be displayed within the XR storefront on the electronic device 108 of the clerk 104 (and/or the electronic device 106 of the other customer).”) navigable in the computer-generated 3D space (¶ [0044]: “In some examples, the XR storefront service 110 may analyze the 3D model 300 to determine open floor areas that are contiguous with each other and with an entrance and/or an exit of the XR storefront, and the navigation mesh 400 may be applied to, or positioned on, the contiguous open-floor areas so that an avatar is able to enter the XR storefront, traverse the navigation mesh 400, and exit the XR storefront without getting “stuck” on an island of the navigation mesh 400 that is not connected to the entrance and/or the exit. In some examples, the navigation mesh 400 may be modified by a user to expand or shrink the navigation mesh 400, and/or to change a path along which avatars may traverse the XR storefront.” ¶ [0052] FIG. 
9 is an example user interface 900 of a virtual space corresponding to a VR storefront, the virtual space including avatars, such as the avatar 902 and the avatar 904, of other users who are also accessing the VR storefront at the same time as the viewing user, according to an implementation of the present subject matter. As mentioned above, in some examples, the VR storefront service 110 provides multi-user support to enable interactions between users 102, 104 within the VR storefront. For example, a merchant 112 (e.g., a user, such as a clerk 104, associated with the merchant 112) can interact with customers while the customers are accessing the VR storefront, and/or friends can shop together in a VR storefront even though they are located in disparate geographical locations. This provides an interactive experience where the customers can interact with each other and with one or more users (e.g., clerks 104) associated with the merchant 112.” ¶ [0067]: “At 1012, in some examples, a navigation mesh may be applied to the 3D model. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may apply the navigation mesh to the 3D model at block 1012. An example of a navigation mesh 400 applied to a 3D model 300 is depicted in FIG. 4. The navigation mesh 400, when applied to the 3D model, may constrain movement of avatars (e.g., the avatars 902, 904 depicted in FIG. 9) within the virtual space (e.g., 3D virtual space) corresponding to the XR storefront.” ¶ [0081]: “At 1120, data may be stored to save the configured XR storefront (and possibly the configured avatar(s)). In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may store storefront data 126 in the datastore(s) 120 at block 1120, the storefront data 126 representing the XR storefront configured by implementing the preceding blocks of the process 1100. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may store avatar data in the datastore(s) 120 at block 1120, the avatar data representing the avatar(s) (e.g., the merchant avatar(s) 904, the customer avatar(s) 902, etc.) configured at block 1118.”); causing the virtual interactive environment to be presented (¶ [0020]: “The XR storefront service can maintain multiple different XR storefronts associated with multiple different merchants to implement a virtual shopping experience for customers akin to a virtual shopping mall. Customers may request access to a XR storefront using their electronic devices. 
When a customer requests access to a XR storefront, the XR storefront service may be executed to access storefront data representing the XR storefront, and to cause the customer's electronic device to display the XR storefront based at least in part on the storefront data.”), at a display of a user device (¶ [0020]: “cause the customer's electronic device to display the XR storefront” ¶ [0029]: “cause the customer's 102 electronic device 106 to display the XR storefront” ¶ [0030]: “XR storefront is displayed on the electronic device 106(1) of the customer 102 via a user interface 128 in response to the customer 102 requesting access to the XR storefront.”), as the virtual avatar navigable in the computer-generated 3D space (e.g., ¶ [0044]: “navigation mesh 400 may be applied to, or positioned on, the contiguous open-floor areas so that an avatar is able to enter the XR storefront, traverse the navigation mesh 400, and exit the XR storefront without getting “stuck” on an island of the navigation mesh 400 that is not connected to the entrance and/or the exit.” ¶ [0044]: “avatars may traverse the XR storefront.”) (¶ [0044]: “FIG. 4 is an example 3D model 300 of a virtual space corresponding to a XR storefront, the 3D model 300 having applied thereto a navigation mesh 400 to constrain avatar movement within the virtual space, according to an implementation of the present subject matter. A navigation mesh 400 may be applied to a 3D model 300 to constrain the movement of avatars within the virtual space corresponding to the XR storefront. In the example of FIG. 4, the navigation mesh 400 is shown as a shaded area on the floor of the 3D model 300, but this is exemplary. If a user, such as the customer 102 or the clerk 104, accesses the XR storefront associated with the 3D model 300 depicted in FIG. 4 with the navigation mesh 400 applied thereto, the avatar of the user may be allowed to walk in the shaded area of the navigation mesh 400, but may be unable to walk outside of the shaded area of the navigation mesh 400. This navigation mesh 400 may help prevent anomalous events and errors from occurring during a XR experience. In some examples, a determination is made as to where to position the navigation mesh 400 on, in, or with respect to the 3D model 300 and/or the XR storefront. For example, the XR storefront service 110 may determine where digital objects (e.g., tables, chairs, display cases, clothing racks, etc.) are positioned on a floor of the 3D model 300, and, based on the floor locations of, and/or the area of the floor covered by, those digital objects, the XR storefront service 110 may determine how to apply the navigation mesh 400 to the 3D model 300. For example, the XR storefront service 110 may avoid positioning the navigation mesh 400 on the areas of the floor that are covered by digital objects (e.g., tables, chairs, display cases, clothing racks, etc.) of the 3D model 300. In some examples, the XR storefront service 110 may analyze the 3D model 300 to determine open floor areas that are contiguous with each other and with an entrance and/or an exit of the XR storefront, and the navigation mesh 400 may be applied to, or positioned on, the contiguous open-floor areas so that an avatar is able to enter the XR storefront, traverse the navigation mesh 400, and exit the XR storefront without getting “stuck” on an island of the navigation mesh 400 that is not connected to the entrance and/or the exit. 
In some examples, the navigation mesh 400 may be modified by a user to expand or shrink the navigation mesh 400, and/or to change a path along which avatars may traverse the XR storefront.”); receiving user input controlling the virtual avatar and selecting at least one product model of the one or more product models (¶ [0048]: “In some examples, the customer 102 may utilize a user input device of, or associated with, the electronic device 106 to provide user input indicative of browsing and purchase intents within the VR storefront. For example, the customer 102 may use a user input device to “hover” a pointer 702 on the user interface 700 over the digital representation 302(1) of the item to reveal item details 704 associated with the item. In an example where the electronic device 106 is a head-mounted display 106(1) (e.g., a VR headset), the customer 102 may operate a handheld controller by extending his/her arm forward to move the pointer 702 (e.g., a laser control) over the digital representation 302(1) of the item. In another example where the electronic device 106 is a desktop PC 106(N), the customer 102 may operate a mouse to move the pointer 702 over the digital representation 302(1) of the item. When the pointer 702 is hovering over the digital representation 302(1) of the item, item details 704 may be revealed, such as by the user interface 700 presenting the item details 704 in a pop-up window adjacent to the digital representation 302(1) of the item. In the example of FIG. 7, the item details 704 include a description of the item and a price of the item.” ¶ [0056]: “In some examples, the VR storefront service 110 is configured to analyze interactions of users 102, 104 with digital representations of items within the VR storefront and perform actions based on the analysis of those interactions. For example, if an avatar 902 picks up an item in the VR storefront and is examining the item from different angles (e.g., by turning the digital representation of the item in different orientations), the VR storefront service 110 may cause display, on the user electronic device 106, 108, of higher-resolution images of the items and/or a pop-up option that is selectable for the user 102, 104 to access additional imagery (e.g., images, videos, etc.) associated with the item that the user 102, 104 is currently examining from different angles.”); and presenting, at the display, data associated with the at least one product model (¶ [0048]: “In some examples, the customer 102 may utilize a user input device of, or associated with, the electronic device 106 to provide user input indicative of browsing and purchase intents within the VR storefront. For example, the customer 102 may use a user input device to “hover” a pointer 702 on the user interface 700 over the digital representation 302(1) of the item to reveal item details 704 associated with the item. In an example where the electronic device 106 is a head-mounted display 106(1) (e.g., a VR headset), the customer 102 may operate a handheld controller by extending his/her arm forward to move the pointer 702 (e.g., a laser control) over the digital representation 302(1) of the item. In another example where the electronic device 106 is a desktop PC 106(N), the customer 102 may operate a mouse to move the pointer 702 over the digital representation 302(1) of the item. 
When the pointer 702 is hovering over the digital representation 302(1) of the item, item details 704 may be revealed, such as by the user interface 700 presenting the item details 704 in a pop-up window adjacent to the digital representation 302(1) of the item. In the example of FIG. 7, the item details 704 include a description of the item and a price of the item.” ¶ [0056]: “In some examples, the VR storefront service 110 is configured to analyze interactions of users 102, 104 with digital representations of items within the VR storefront and perform actions based on the analysis of those interactions. For example, if an avatar 902 picks up an item in the VR storefront and is examining the item from different angles (e.g., by turning the digital representation of the item in different orientations), the VR storefront service 110 may cause display, on the user electronic device 106, 108, of higher-resolution images of the items and/or a pop-up option that is selectable for the user 102, 104 to access additional imagery (e.g., images, videos, etc.) associated with the item that the user 102, 104 is currently examining from different angles.”). Claims 1-2, 8-9, 11, 13-14, 16, 18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over SKEEN (US 2024/0161178) in view of HELINGER et al. (US 2024/0112431), further in view of ROSS et al. (US 2008/0172635, hereinafter “ROSS”). Regarding claim 1, SKEEN discloses a method to provide a virtual interactive environment (¶ [0020]: “techniques, devices, and systems for implementing XR storefronts.” ¶ [0020]: “XR storefront service can be implemented as a VR storefront service,” ¶ [0020]: “In an example, a VR storefront service may facilitate configuring VR storefronts and subsequently causing the VR storefronts to be displayed on end user devices.” ¶ [0031]: “the XR storefront service 110 to provide the interactive and immersive experiences to end users (e.g., customers 102, clerks 104, etc.) described herein, such as by rendering XR content (e.g., 3D scenes) to the user's electronic devices 106, 108,”), the method comprising: providing, with an environment generator engine (e.g., ¶ [0031]: “XR storefront service 110”;), a computer-generated three-dimensional (3D) space (¶ [0031]: “3D scenes”) by rendering a 3D model (e.g., ¶ [0033]: “3D model” ¶ [0031]: “rendering XR content (e.g., 3D scenes)”) (¶ [0031]: “The XR storefront service 110 can utilize any suitable type of component(s) to implement XR storefronts, as described herein. In some examples, the WebXR Device application programming interface (API) is utilized by the XR storefront service 110 to provide the interactive and immersive experiences to end users (e.g., customers 102, clerks 104, etc.) described herein, such as by rendering XR content (e.g., 3D scenes) to the user's electronic devices 106, 108, while maintaining compatibility with traditional browsers. In some examples, the A-frame open source library is utilized by the XR storefront service 110. The A-frame open source library uses HyperText Markup Language (HTML) and JavaScript as the central primitive for defining/building XR experiences. The code used by the XR storefront service 110 to implement the XR storefronts can be declarative HTML and/or JavaScript. This declarative software stack allows developers who are unfamiliar with more sophisticated game engines to develop a XR experience for merchants 112 using HTML and/or JavaScript code, and it also allows the end user (e.g., the customer 102, the clerk 104, etc.) 
to access XR storefronts (e.g., by downloading storefront data 126 used to render content (e.g., 3D scenes) via a browser) without having to download special-purpose applications or programs to access the XR storefronts.” ¶ [0032]: “FIG. 2 shows an example user interface 200 that can be displayed on a merchant device 116 at a time when a merchant 112 is configuring a XR storefront. Although the user interface 200 is shown as a browser, it is to be appreciated that an application may be downloaded from the XR storefront service 110 to a merchant device 116, and that the downloaded application may present a similar user interface. The user interface 200 may provide merchants 112 with a democratized on-ramp to configure their own XR storefronts, meaning that the merchant does not have to be a specialist (e.g., a 3D modeler, a game designer, etc.) to use the user interface 200 for configuring a XR storefront. In some examples, the user interface 200 can be used to configure a merchant-specific XR storefront for a particular merchant 112.” ¶ [0033]: “For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 202 to reveal a list of different XR storefronts that the merchant 112(1) can choose from. As mentioned, these predefined XR storefronts in the menu 124 may have been created and tested for compatibility with a XR shopping experience by a development team associated with the XR storefront service 110.” ¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0034]: “In some examples, the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront. For example, the merchant 112(1) may be able to modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc., and/or the merchant 112(1) may be able to combine (e.g., mix-and-match) multiple different XR storefronts to create a unique XR storefront. In some examples, the XR storefront service 110 may offer (e.g., as a premium service) the ability for the merchant 112(1) to create a XR storefront that is a replica of a brick-and-mortar store of the merchant 112(1). For example, a service provider of the XR storefront service 110 may provide 3D scanning hardware to the merchant 112(1), which the merchant 112(1) can use to scan the interior space of an existing brick-and-mortar store, and the resulting scan data can be uploaded to the server(s) 114 and used to create a 3D model for a replica XR storefront. In other examples, the XR storefront service 110 may not provide 3D scanning hardware to the merchant 112(1) and the servers(s) 114 may instead receive scan data from 3D scanning hardware already possessed by the merchant 112(1). 
In some examples, the service provider may send personnel to a brick-and-mortar location to scan the interior space of an existing brick-and-mortar store as a service for the merchant 112(1). In some examples, this type of service may be provided in combination with a service to scan physical items of the merchant's 112(1) inventory to create 3D models of the items.” ¶ [0043]: “FIG. 3 is an example 3D model 300 of a virtual space corresponding to a XR storefront, according to an implementation of the present subject matter. For example, as a result of the merchant 112(1) configuring a XR storefront (e.g., via the user interface 200 of FIG. 2), the storefront data 126 representing the XR storefront may include the data corresponding to the 3D model 300 shown in FIG. 3. Any suitable component can be used to create the 3D model 300. For example, the 3D model 300 may be created using Blender.”) and applying one or more of a lighting source layer (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.”), a reflective material layer (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.” NOTE: Colors of a surface in a 3D model represent reflective properties of the surface.), or a texture layer to the 3D model (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.” NOTE: Colors and/or lighting of a surface in a 3D model represent texture properties of the surface in the 3D model.) (¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0034]: “In some examples, the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront. For example, the merchant 112(1) may be able to modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc., and/or the merchant 112(1) may be able to combine (e.g., mix-and-match) multiple different XR storefronts to create a unique XR storefront.” ¶ [0043]: “FIG. 3 is an example 3D model 300 of a virtual space corresponding to a XR storefront, according to an implementation of the present subject matter. For example, as a result of the merchant 112(1) configuring a XR storefront (e.g., via the user interface 200 of FIG. 2), the storefront data 126 representing the XR storefront may include the data corresponding to the 3D model 300 shown in FIG. 3. Any suitable component can be used to create the 3D model 300. 
For example, the 3D model 300 may be created using Blender.” NOTE: As is known by one of ordinary skill in the art, 3D models created and generated using Blender™, as suggested by SKEEN, include applying one or more of a lighting source layer, a reflective material layer, or a texture layer to the 3D model. For instance, paragraph [0030] of HELINGER et al. (US 2024/011243) discloses: “As used herein, “model” or “environment model” may refer to a computational or data-based model of an environment. In some embodiments or cases, “model” and “environment” may be synonymous. In some embodiments or cases, “model” may refer to the data which records or encodes an environment. A model may contain information for one or more of the following aspects of an environment: the geometry of the environment (e.g., encoded using points and/or polygons), textures of surfaces (e.g., splat maps or sprites), lighting (e.g., positions and types of light sources), light effects (e.g., reflectivity and transparency of materials or objects), etc. In some embodiments, a plurality or set of points and/or polygons may be described as a mesh. A mesh may have vertices, faces, and/or edges. A mesh may, for example, have textures or surfaces applied to it, or the textures or surfaces may be applied separately to individual faces of the mesh. A mesh may describe part of, or the entirety of, an object or a layer. In some embodiments, a model may include a number of constituent parts, such as objects and layers. Models may be constructed using a number of computer programs and/or computer aided design programs. For example, models might be constructed using one or more of the following commercially available programs or software: Blender, Cinema 4D, LightWave, Maya, Modo, 3ds Max, 3ixam, POV-Ray, RealityCapture, Metashape, and 3DF Zephyr, Unity, Unreal, or AI tools.” Thus, one of ordinary skill in the art would understand that a 3D model of the virtual space generated using Blender™, as taught by SKEEN, would include applying one or more of a lighting source layer, a reflective material layer, or a texture layer to the 3D model, as clearly disclosed by HELINGER et al.); providing one or more product models (¶ [0028]: “3D model data associated with items” ¶ [0029]: “the merchant 112(1) may use the XR storefront service 110 to generate digital representations of items” ¶ [0034]: “scan physical items of the merchant's 112(1) inventory to create 3D models of the items.” ¶ [0038]: “create or generate digital representations of the selected items from the second step. The digital representations may be 2D representations or 3D representations.” ¶ [0038]: “a 360-degree interactive model of an item (e.g., a product).”) mapped to one or more virtual surfaces of the computer-generated 3D space (NOTE: As clearly shown in FIG. 3, first digital representation 302(1) of a candle and second digital representation 302(2) of a mug are clearly mapped to positions on surfaces of display tables in the 3D space generated by rendering the 3D model of the XR store shown in FIG. 3. ¶ [0043]: “FIG. 3 also illustrates how digital representations 302 of items showcased in the XR storefront may be positioned within the virtual space. For example, FIG. 
3 shows a first digital representation 302(1) of a candle for sale by the merchant 112(1), which is positioned at a first position within the virtual space, and a second digital representation 302(2) of a mug for sale by the merchant 112(1), which is positioned at a second, different position within the virtual space. These respective positions of the digital representations 302 within the virtual space may be based on the positioning indications received from the merchant 112(1) at the fourth step in the example of FIG. 2. In other words, the merchant 112(1) may have chosen the positions of the digital representations 302(1) and 302(2) within the virtual space. Accordingly, the digital representations 302(1) and 302(2) may be associated with respective positions within the virtual space based at least in part on the 3D model 300,” ¶ [0029]: “generate a XR storefront (e.g., the XR storefront selected from the menu 124) including the digital representations of the items positioned within a virtual space” ¶ [0040]: “indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items.” ¶ [0043]: “the digital representations 302(1) and 302(2) may be associated with respective positions within the virtual space based at least in part on the 3D model 300,”) ([0023]: “the XR storefront service may provide merchants with an easy-to-use storefront configuration tool (e.g., an Internet-accessible a user interface(s)).” ¶ [0027]: “FIG. 1 also depicts merchant devices 116 (e.g., electronic devices), which may be used by the merchants 112 to access the XR storefront service 110.” ¶ [0028]: “For example, the merchant 112(1) may utilize the server(s) 114 as an ecommerce platform to sell items online, and the catalogue data 122 associated with the merchant 112(1) may specify the merchant's 112(1) items that are available for purchase via the ecommerce platform. Additionally, or alternatively, the catalogue data 122 may be associated with the merchant's 112(1) items that are available for purchase from a brick-and-mortar store. In some examples, the catalogue data 122 may include image data representing images of items and/or 3D model data associated with items to enable user interaction with a 360-degree interactive model of an item (e.g., a product), thereby allowing a customer to view an item (e.g., a product) in 3D.” ¶ [0029]: “Individual merchants 112 may use a merchant device 116 to access the XR storefront service 110 in order to configure a XR storefront. FIG. 1 shows the merchant 112(1) using a merchant device 116(1) to access the XR storefront service 110 (e.g., the server(s) 114) over the network(s) 118 for purposes of configuring a XR storefront. In an example, a merchant 112 may be interested in setting up and configuring a VR storefront in order provide an immersive VR experience for customers, such as the customer 102,” ¶ [0029]: “Configuring a XR storefront can involve various operations, as described herein. In some examples, a merchant 112(1) may access a storefront menu(s) 124 that includes multiple different XR storefronts. That is, a storefront menu 124 may be displayed on the merchant device 116(1), thereby allowing the merchant 112(1) to select a XR storefront from the menu 124. These predefined XR storefronts in the menu 124 may have been created and tested for compatibility with a XR shopping experience by a development team associated with the XR storefront service 110. 
In some examples, the merchant 112(1) may use the XR storefront service 110 to generate digital representations of items included in its catalogue data 122, and to generate a XR storefront (e.g., the XR storefront selected from the menu 124) including the digital representations of the items positioned within a virtual space corresponding to the XR storefront.” ¶ [0032]: “FIG. 2 shows an example user interface 200 that can be displayed on a merchant device 116 at a time when a merchant 112 is configuring a XR storefront. Although the user interface 200 is shown as a browser, it is to be appreciated that an application may be downloaded from the XR storefront service 110 to a merchant device 116, and that the downloaded application may present a similar user interface. The user interface 200 may provide merchants 112 with a democratized on-ramp to configure their own XR storefronts, meaning that the merchant does not have to be a specialist (e.g., a 3D modeler, a game designer, etc.) to use the user interface 200 for configuring a XR storefront. In some examples, the user interface 200 can be used to configure a merchant-specific XR storefront for a particular merchant 112.” ¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0038]: “As illustrated by the encircled number 3 in FIG. 2, a third step may be to create or generate digital representations of the selected items from the second step. The digital representations may be 2D representations or 3D representations. Accordingly, the user interface 200 may provide an option 212 to select from existing images in the merchant's 112(1) catalogue, along with a “select images” button 214, to select the desired images. In some examples, selection of the “select images” button 214 may allow for selecting existing 3D models of items that are being, or that have been, utilized on an ecommerce website of the merchant 112(1) to provide users with the ability to interact with a 360-degree interactive model of an item (e.g., a product). In other words, 3D model data associated with the selected items may be reused or repurposed for creating or generating the digital representations of the selected items for inclusion within the XR storefront.” ¶ [0038]: “The user interface 200 may additionally, or alternatively, provide an option 216 to create 3D models of the selected items, along with an “upload scans” button 218 to upload scan data obtained from scanning the items in a real-world space. For example, a 3D scanning device may obtain scan data of an item from different angles, which is usable to create a 3D model of the item. 
A service provider of the XR storefront service 110 may provide this 3D scanning hardware and/or software to the merchant 112(1) and/or send personnel to the merchant's 112(1) brick-and-mortar store to obtain the scan data for the items the merchant 112(1) would like to showcase in the XR storefront. Creating 3D digital representations of items to showcase in the XR storefront is particularly useful for items that are more interesting to examine from different angles. The example of FIG. 2 shows that the merchant 112(1) has selected the option 216 to generate 3D digital representations of the selected items, and, as such, the merchant 112(1) may have uploaded scan data for the items to create the 3D representations.” ¶ [0039]: “In some implementations, the user interface 200 may provide an option for the merchant to provide input that identifies 3D representation of an item. For example, a manufacturer may provide a 3D model of an item at a URL and the user interface 200 may provide an option for the merchant to indicate and enter a URL for the 3D representation of the item.” ¶ [0040]: “As illustrated by the encircled number 4 in FIG. 2, a fourth step may be to indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items. For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 220 to reveal a list of the items that the merchant 112(1) selected at the second step, and the merchant 112(1) may select one of those items. In addition, the merchant 112(1) can interact with (e.g., select) a drop-down menu 222 to reveal a list of predefined locations within the virtual space corresponding to the XR storefront that the merchant 112(1) can choose from to position the selected item within the virtual space. The merchant 112(1) may be able to indicate the positions of any suitable number of the selected items by selecting the “+Add” element.” In some examples, the user interface 200 may present an option to confirm, correct, and/or reject these automatically determined locations of items within the XR storefront.” ¶ [0068] At 1014, in some examples, the digital representations of the items generated at block 1006 may be associated with respective positions within the virtual space corresponding to the XR storefront based at least in part on the 3D model. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may associate the digital representations of the items with the respective positions within the virtual space at block 1014.”); generating, with an avatar customization engine (e.g., ¶ [0031]: “XR storefront service 110”; e.g. FIG. 2. ¶ [0030]: “The XR storefront service 110 may cause various user interfaces to be presented on a user's electronic device 106, 108,” ¶ [0032]: “FIG. 2 is an example user interface 200 for configuring a XR storefront,” ¶ [0041]: “As illustrated by the encircled number 5 in FIG. 2, a fifth step may be to configure a merchant avatar to represent a user (e.g., the clerk 104 in FIG. 1) associated with the merchant 112(1) within the XR storefront.” ¶ [0080]: : “At 1118, one or more avatars may be configured. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may configure the avatar(s) at block 1118. In some examples, the avatar configured at block 1118 may be a merchant avatar 904 that is to be associated with a clerk 104. 
In this manner, when a customer 102 accesses the XR storefront at the same time as the clerk 104, the merchant avatar 904 may be displayed within the XR storefront on the electronic device 106 of the customer 102. In some examples, the avatar configured at block 1118 may be a customer avatar 902 that is to be associated with a customer 102.”), a virtual avatar (¶ [0041]: “As illustrated by the encircled number 5 in FIG. 2, a fifth step may be to configure a merchant avatar to represent a user (e.g., the clerk 104 in FIG. 1) associated with the merchant 112(1) within the XR storefront. That is, the merchant 112(1) may employ users, such as the clerk 104, to access the XR storefront to interact with customers within the XR storefront. Accordingly, when a customer 102 accesses the XR storefront at the same time that a clerk 104 is accessing the same XR storefront, the customer 102 may see a merchant avatar within the XR storefront. The fifth step in the example of FIG. 2 is to configure such a merchant avatar. The merchant 112(1) may be able to select from a menu of predefined avatars and/or create a new avatar by selecting features, such as height, weight, hair color, eye color, skin color, or the like. If the merchant 112(1) does not complete the fifth step, a default avatar may be chosen for users (e.g., clerks 104) associated with the merchant 112(1).” ¶ [0058]: “The avatars 902, 904 may be human-like in form such that, when they are configured, the avatars, such as the customer avatars 902, may have similar sizing to the customers associated with those avatars. For example, the avatars 902 may be configured to have the same height, weight, neck size, waist size, chest size, and the like, as their corresponding customers. This may allow for evaluating clothing items within the VR storefronts. For example, a customer may be able to virtually try on clothes in a VR storefront to see if they fit. Such an experience may mimic a real-world shopping experience to drive customer engagement with VR storefronts.” ¶ [0080]: “At 1118, one or more avatars may be configured. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may configure the avatar(s) at block 1118. In some examples, the avatar configured at block 1118 may be a merchant avatar 904 that is to be associated with a clerk 104. In this manner, when a customer 102 accesses the XR storefront at the same time as the clerk 104, the merchant avatar 904 may be displayed within the XR storefront on the electronic device 106 of the customer 102. In some examples, the avatar configured at block 1118 may be a customer avatar 902 that is to be associated with a customer 102. 
In this manner, when the clerk 104 (and/or another customer) accesses the XR storefront at the same time as the customer 102, the customer avatar 902 may be displayed within the XR storefront on the electronic device 108 of the clerk 104 (and/or the electronic device 106 of the other customer).”) navigable in the computer-generated 3D space (¶ [0044]: “In some examples, the XR storefront service 110 may analyze the 3D model 300 to determine open floor areas that are contiguous with each other and with an entrance and/or an exit of the XR storefront, and the navigation mesh 400 may be applied to, or positioned on, the contiguous open-floor areas so that an avatar is able to enter the XR storefront, traverse the navigation mesh 400, and exit the XR storefront without getting “stuck” on an island of the navigation mesh 400 that is not connected to the entrance and/or the exit. In some examples, the navigation mesh 400 may be modified by a user to expand or shrink the navigation mesh 400, and/or to change a path along which avatars may traverse the XR storefront.” ¶ [0052] FIG. 9 is an example user interface 900 of a virtual space corresponding to a VR storefront, the virtual space including avatars, such as the avatar 902 and the avatar 904, of other users who are also accessing the VR storefront at the same time as the viewing user, according to an implementation of the present subject matter. As mentioned above, in some examples, the VR storefront service 110 provides multi-user support to enable interactions between users 102, 104 within the VR storefront. For example, a merchant 112 (e.g., a user, such as a clerk 104, associated with the merchant 112) can interact with customers while the customers are accessing the VR storefront, and/or friends can shop together in a VR storefront even though they are located in disparate geographical locations. This provides an interactive experience where the customers can interact with each other and with one or more users (e.g., clerks 104) associated with the merchant 112.” ¶ [0067]: “At 1012, in some examples, a navigation mesh may be applied to the 3D model. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may apply the navigation mesh to the 3D model at block 1012. An example of a navigation mesh 400 applied to a 3D model 300 is depicted in FIG. 4. The navigation mesh 400, when applied to the 3D model, may constrain movement of avatars (e.g., the avatars 902, 904 depicted in FIG. 9) within the virtual space (e.g., 3D virtual space) corresponding to the XR storefront.” ¶ [0081]: “At 1120, data may be stored to save the configured XR storefront (and possibly the configured avatar(s)). In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may store storefront data 126 in the datastore(s) 120 at block 1120, the storefront data 126 representing the XR storefront configured by implementing the preceding blocks of the process 1100. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may store avatar data in the datastore(s) 120 at block 1120, the avatar data representing the avatar(s) (e.g., the merchant avatar(s) 904, the customer avatar(s) 902, etc.) 
configured at block 1118.”), causing the virtual interactive environment to be presented (¶ [0020]: “The XR storefront service can maintain multiple different XR storefronts associated with multiple different merchants to implement a virtual shopping experience for customers akin to a virtual shopping mall. Customers may request access to a XR storefront using their electronic devices. When a customer requests access to a XR storefront, the XR storefront service may be executed to access storefront data representing the XR storefront, and to cause the customer's electronic device to display the XR storefront based at least in part on the storefront data.”), at one or more displays (¶ [0020]: “cause the customer's electronic device to display the XR storefront” ¶ [0029]: “cause the customer's 102 electronic device 106 to display the XR storefront” ¶ [0030]: “XR storefront is displayed on the electronic device 106(1) of the customer 102 via a user interface 128 in response to the customer 102 requesting access to the XR storefront.”), as the virtual avatar navigable in the computer-generated 3D space (e.g., ¶ [0044]: “navigation mesh 400 may be applied to, or positioned on, the contiguous open-floor areas so that an avatar is able to enter the XR storefront, traverse the navigation mesh 400, and exit the XR storefront without getting “stuck” on an island of the navigation mesh 400 that is not connected to the entrance and/or the exit.” ¶ [0044]: “avatars may traverse the XR storefront.”) (¶ [0044]: “FIG. 4 is an example 3D model 300 of a virtual space corresponding to a XR storefront, the 3D model 300 having applied thereto a navigation mesh 400 to constrain avatar movement within the virtual space, according to an implementation of the present subject matter. A navigation mesh 400 may be applied to a 3D model 300 to constrain the movement of avatars within the virtual space corresponding to the XR storefront. In the example of FIG. 4, the navigation mesh 400 is shown as a shaded area on the floor of the 3D model 300, but this is exemplary. If a user, such as the customer 102 or the clerk 104, accesses the XR storefront associated with the 3D model 300 depicted in FIG. 4 with the navigation mesh 400 applied thereto, the avatar of the user may be allowed to walk in the shaded area of the navigation mesh 400, but may be unable to walk outside of the shaded area of the navigation mesh 400. This navigation mesh 400 may help prevent anomalous events and errors from occurring during a XR experience. In some examples, a determination is made as to where to position the navigation mesh 400 on, in, or with respect to the 3D model 300 and/or the XR storefront. For example, the XR storefront service 110 may determine where digital objects (e.g., tables, chairs, display cases, clothing racks, etc.) are positioned on a floor of the 3D model 300, and, based on the floor locations of, and/or the area of the floor covered by, those digital objects, the XR storefront service 110 may determine how to apply the navigation mesh 400 to the 3D model 300. For example, the XR storefront service 110 may avoid positioning the navigation mesh 400 on the areas of the floor that are covered by digital objects (e.g., tables, chairs, display cases, clothing racks, etc.) of the 3D model 300. 
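A rough sketch of the navigation-mesh constraint just described, assuming the walkable floor area has been reduced to a set of 2D floor triangles, might look as follows; the names NavMesh, isWalkable, and constrainAvatarMove are hypothetical and are not drawn from SKEEN.

    // Illustrative sketch only: a navigation mesh as a set of walkable floor
    // triangles in the (x, z) plane, used to keep avatars on open floor areas.
    type Vec2 = { x: number; z: number };
    type Triangle = [Vec2, Vec2, Vec2];

    class NavMesh {
      constructor(private triangles: Triangle[]) {}

      // Barycentric-sign point-in-triangle test on the floor plane.
      private inTriangle(p: Vec2, [a, b, c]: Triangle): boolean {
        const sign = (p1: Vec2, p2: Vec2, p3: Vec2) =>
          (p1.x - p3.x) * (p2.z - p3.z) - (p2.x - p3.x) * (p1.z - p3.z);
        const d1 = sign(p, a, b);
        const d2 = sign(p, b, c);
        const d3 = sign(p, c, a);
        return !((d1 < 0 || d2 < 0 || d3 < 0) && (d1 > 0 || d2 > 0 || d3 > 0));
      }

      // A position is walkable only if it lies inside some triangle of the mesh.
      isWalkable(p: Vec2): boolean {
        return this.triangles.some((t) => this.inTriangle(p, t));
      }

      // Reject a proposed move that would leave the walkable (shaded) area,
      // keeping the avatar at its current position instead.
      constrainAvatarMove(current: Vec2, proposed: Vec2): Vec2 {
        return this.isWalkable(proposed) ? proposed : current;
      }
    }

A runtime would call constrainAvatarMove on each movement update, so an avatar cannot step onto floor areas occupied by tables, display cases, or other digital objects.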
In some examples, the XR storefront service 110 may analyze the 3D model 300 to determine open floor areas that are contiguous with each other and with an entrance and/or an exit of the XR storefront, and the navigation mesh 400 may be applied to, or positioned on, the contiguous open-floor areas so that an avatar is able to enter the XR storefront, traverse the navigation mesh 400, and exit the XR storefront without getting “stuck” on an island of the navigation mesh 400 that is not connected to the entrance and/or the exit. In some examples, the navigation mesh 400 may be modified by a user to expand or shrink the navigation mesh 400, and/or to change a path along which avatars may traverse the XR storefront.”); receiving user input controlling the virtual avatar to select at least one product model of the one or more product models (¶ [0048]: “In some examples, the customer 102 may utilize a user input device of, or associated with, the electronic device 106 to provide user input indicative of browsing and purchase intents within the VR storefront. For example, the customer 102 may use a user input device to “hover” a pointer 702 on the user interface 700 over the digital representation 302(1) of the item to reveal item details 704 associated with the item. In an example where the electronic device 106 is a head-mounted display 106(1) (e.g., a VR headset), the customer 102 may operate a handheld controller by extending his/her arm forward to move the pointer 702 (e.g., a laser control) over the digital representation 302(1) of the item. In another example where the electronic device 106 is a desktop PC 106(N), the customer 102 may operate a mouse to move the pointer 702 over the digital representation 302(1) of the item. When the pointer 702 is hovering over the digital representation 302(1) of the item, item details 704 may be revealed, such as by the user interface 700 presenting the item details 704 in a pop-up window adjacent to the digital representation 302(1) of the item. In the example of FIG. 7, the item details 704 include a description of the item and a price of the item.” ¶ [0056]: “In some examples, the VR storefront service 110 is configured to analyze interactions of users 102, 104 with digital representations of items within the VR storefront and perform actions based on the analysis of those interactions. For example, if an avatar 902 picks up an item in the VR storefront and is examining the item from different angles (e.g., by turning the digital representation of the item in different orientations), the VR storefront service 110 may cause display, on the user electronic device 106, 108, of higher-resolution images of the items and/or a pop-up option that is selectable for the user 102, 104 to access additional imagery (e.g., images, videos, etc.) associated with the item that the user 102, 104 is currently examining from different angles.”); and presenting, at the one or more display, data associated with the at least one product model (¶ [0048]: “In some examples, the customer 102 may utilize a user input device of, or associated with, the electronic device 106 to provide user input indicative of browsing and purchase intents within the VR storefront. For example, the customer 102 may use a user input device to “hover” a pointer 702 on the user interface 700 over the digital representation 302(1) of the item to reveal item details 704 associated with the item. 
In an example where the electronic device 106 is a head-mounted display 106(1) (e.g., a VR headset), the customer 102 may operate a handheld controller by extending his/her arm forward to move the pointer 702 (e.g., a laser control) over the digital representation 302(1) of the item. In another example where the electronic device 106 is a desktop PC 106(N), the customer 102 may operate a mouse to move the pointer 702 over the digital representation 302(1) of the item. When the pointer 702 is hovering over the digital representation 302(1) of the item, item details 704 may be revealed, such as by the user interface 700 presenting the item details 704 in a pop-up window adjacent to the digital representation 302(1) of the item. In the example of FIG. 7, the item details 704 include a description of the item and a price of the item.” ¶ [0056]: “In some examples, the VR storefront service 110 is configured to analyze interactions of users 102, 104 with digital representations of items within the VR storefront and perform actions based on the analysis of those interactions. For example, if an avatar 902 picks up an item in the VR storefront and is examining the item from different angles (e.g., by turning the digital representation of the item in different orientations), the VR storefront service 110 may cause display, on the user electronic device 106, 108, of higher-resolution images of the items and/or a pop-up option that is selectable for the user 102, 104 to access additional imagery (e.g., images, videos, etc.) associated with the item that the user 102, 104 is currently examining from different angles.”). SKEEN fails to explicitly disclose: the avatar customization engine defining a first set of customization parameters selected to be mutable and a second set of customization parameters selected to be immutable. However, to the extent that SKEEN is not completely explicit in this regard, ROSS clearly teaches: the avatar customization engine (¶ [0010]: “processing means are adapted to generate a visual representation of a virtual user model on the screen based on the parameters for the virtual model of a user stored in the storing means.”) defining a first set of customization parameters selected to be mutable (¶ [0021]: “A user might be interested on the other hand in personalizing a user avatar according to his or her own preferences. In one embodiment of the invention, the storing means are therefore adapted to store parameters for a virtual model of a user that are changeable by a user via the user input means. With such changeable parameters, the user is enabled to assign personal properties to his or her own avatar, like own pictures, sounds and preferences.”) and a second set of customization parameters selected to be immutable (¶ [0020]: “storing means are adapted to store at least fixed parameters for a virtual model of a user, which cannot be changed by a user. These fixed parameters may describe in particular the major properties of a user avatar. The set of fixed parameters is also referred to as avatar template. An avatar template may be defined for instance by a manufacturer, by a network operator, if the electronic device is a mobile terminal, or by another third party.
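The split ROSS describes, fixed template parameters that the user cannot change alongside user-changeable ones, can be pictured with a minimal sketch; AvatarTemplate, AvatarProfile, and AvatarCustomizationEngine are hypothetical names, not ROSS's terminology.

    // Illustrative sketch only: an avatar customization engine that keeps
    // immutable template parameters separate from mutable profile parameters.
    interface AvatarTemplate {
      readonly avatarClass: string;   // fixed by the operator or manufacturer
      readonly rigUri: string;        // fixed skeleton / mesh resource
    }

    interface AvatarProfile {
      height: number;                 // user-adjustable
      hairColor: string;
      eyeColor: string;
      skinColor: string;
    }

    class AvatarCustomizationEngine {
      constructor(
        private readonly template: AvatarTemplate,   // second set: immutable
        private profile: AvatarProfile,              // first set: mutable
      ) {}

      // Only the profile (mutable) parameters accept user edits.
      customize(changes: Partial<AvatarProfile>): void {
        this.profile = { ...this.profile, ...changes };
      }

      // The rendered avatar is always the fixed template plus the evolving profile.
      describe(): AvatarTemplate & AvatarProfile {
        return { ...this.template, ...this.profile };
      }
    }

    const engine = new AvatarCustomizationEngine(
      { avatarClass: "human", rigUri: "rigs/base.glb" },
      { height: 1.75, hairColor: "brown", eyeColor: "green", skinColor: "medium" },
    );
    engine.customize({ hairColor: "black" });   // allowed: mutable profile parameter
    // The template is private and readonly: not changeable through the engine.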
An avatar template may contain necessary parametric information about the basic avatar character and/or behavior tendency and about the technical avatar environment, like available resources, including input and/or output means, sensor information, terminal and network specific data etc.” ¶ [0022]: “Advantageously, an avatar profile is defined, which is always on top of an avatar template. The avatar profile is a collection of changeable and non-changeable parameters, which evolve over time and which define the avatar behavior and capabilities in certain situations. While the template parameters are fixed, the user or other entities, like network operators, can customize the accessible parametric avatar properties of the avatar profile, such as avatar class, skin properties like shape, color, clothing etc.”). Thus, in order to obtain a more versatile method for providing a virtual interactive environment having the cumulative features and/or functionalities taught by SKEEN and ROSS, it would have been obvious to one of ordinary skill in the art to have modified the avatar customization engine in the method/system for providing an interactive virtual environment taught by SKEEN so as to include defining a first set of parameters selected to be mutable and a second set of customization parameters selected to be immutable, as taught by ROSS. Regarding claim 2 (depends on claim 1), SKEEN discloses: presenting an environment creator user interface (UI) (e.g., user interface 200 in FIG. 2. ¶ [0029]: “storefront menu 124 may be displayed on the merchant device 116(1), thereby allowing the merchant 112(1) to select a XR storefront from the menu 124.” ¶ [0029]: “the merchant 112(1) may use the XR storefront service 110 to generate digital representations of items” ¶ [0032]: “FIG. 2 is an example user interface 200 for configuring a XR storefront,” ¶ [0034]: “the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront.”) at the one or more displays (¶ [0032]: “In general, various user interfaces may be displayed on a device 106, 108, 116 that is accessing the XR storefront service 110 over the network(s) 118. FIG. 2 shows an example user interface 200 that can be displayed on a merchant device 116 at a time when a merchant 112 is configuring a XR storefront.”) (¶ [0033]: “The user interface 200 includes one or more interactive elements with which the merchant 112(1) can interact (e.g., select via user input). In the example of FIG. 2, the user interface 200 provides a series of steps for the merchant 112 to complete in order to configure a XR storefront.” ¶ [0040]: “As illustrated by the encircled number 4 in FIG. 2, a fourth step may be to indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items. For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 220 to reveal a list of the items that the merchant 112(1) selected at the second step, and the merchant 112(1) may select one of those items. In addition, the merchant 112(1) can interact with (e.g., select) a drop-down menu 222 to reveal a list of predefined locations within the virtual space corresponding to the XR storefront that the merchant 112(1) can choose from to position the selected item within the virtual space. 
The merchant 112(1) may be able to indicate the positions of any suitable number of the selected items by selecting the “+Add” element.”), wherein providing the one or more product models includes: providing one or more 3D mapping coordinates (e.g., ¶ [0040]: “a list of predefined locations within the virtual space”) corresponding to virtual surfaces of the 3D model (e.g., ¶ [0040]: “the respective positions within a virtual space” NOTE: In other words, the predefined location selected using the “Select Location” drop-down menu 222 in the merchant UI 200 shown in FIG. 2 must correspond to respective 3D mapping coordinates in the virtual space that identify each position/location for positioning an item in the 3D model of the virtual space, such as the 3D coordinates in the virtual space corresponding to the positions of the 3D models of the items 302(1) and 302(2) positioned on the top surfaces of the 3D model tables shown in the three-dimensional (3D) model of a virtual space corresponding to the example XR storefront shown in FIG. 3. By necessity, the positions on the tops of the 3D table models must be indicated using a reference system, which, in computer-graphics 3D modelling, is customarily a system of 3D coordinates. For a 3D model of a virtual space, positions within the 3D model of the virtual space must have 3D coordinates since the virtual space is modelled in 3D.) (¶ [0040]: “As illustrated by the encircled number 4 in FIG. 2, a fourth step may be to indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items. For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 220 to reveal a list of the items that the merchant 112(1) selected at the second step, and the merchant 112(1) may select one of those items. In addition, the merchant 112(1) can interact with (e.g., select) a drop-down menu 222 to reveal a list of predefined locations within the virtual space corresponding to the XR storefront that the merchant 112(1) can choose from to position the selected item within the virtual space. The merchant 112(1) may be able to indicate the positions of any suitable number of the selected items by selecting the “+Add” element.”); and receiving, at the environment creator UI (e.g., FIG. 2), a selection of the one or more 3D mapping coordinates to designate locations for the one or more product models (¶ [0040]: “As illustrated by the encircled number 4 in FIG. 2, a fourth step may be to indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items. For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 220 to reveal a list of the items that the merchant 112(1) selected at the second step, and the merchant 112(1) may select one of those items. In addition, the merchant 112(1) can interact with (e.g., select) a drop-down menu 222 to reveal a list of predefined locations within the virtual space corresponding to the XR storefront that the merchant 112(1) can choose from to position the selected item within the virtual space. The merchant 112(1) may be able to indicate the positions of any suitable number of the selected items by selecting the “+Add” element.” ¶ [0043]: “FIG. 3 is an example 3D model 300 of a virtual space corresponding to a XR storefront, according to an implementation of the present subject matter. 
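The point made in the NOTE above, that a named, predefined location must resolve to 3D coordinates on a virtual surface where the product model is anchored, can be pictured with a short sketch; placementMap and placeProduct are hypothetical names, not drawn from SKEEN.

    // Illustrative sketch only: resolving a merchant-selected, named location
    // to 3D coordinates on a surface of the store model and anchoring a product there.
    type Vec3 = { x: number; y: number; z: number };

    interface ProductModel {
      sku: string;
      meshUri: string;    // e.g. the 360-degree interactive model asset
      position?: Vec3;    // filled in once the product is placed
    }

    // Named locations exposed in the "Select Location" drop-down, each resolving
    // to coordinates on a virtual surface of the store model (e.g. a table top).
    const placementMap: Record<string, Vec3> = {
      "front-table-left":  { x: -1.2, y: 0.9, z: 2.5 },
      "front-table-right": { x: 1.2, y: 0.9, z: 2.5 },
      "rear-display-case": { x: 0.0, y: 1.1, z: -3.0 },
    };

    function placeProduct(product: ProductModel, locationId: string): ProductModel {
      const position = placementMap[locationId];
      if (!position) throw new Error(`Unknown location: ${locationId}`);
      return { ...product, position };
    }

    const candle = placeProduct(
      { sku: "candle-302-1", meshUri: "models/candle.glb" },
      "front-table-left",
    );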
For example, as a result of the merchant 112(1) configuring a XR storefront (e.g., via the user interface 200 of FIG. 2), the storefront data 126 representing the XR storefront may include the data corresponding to the 3D model 300 shown in FIG. 3. Any suitable component can be used to create the 3D model 300. For example, the 3D model 300 may be created using Blender. FIG. 3 also illustrates how digital representations 302 of items showcased in the XR storefront may be positioned within the virtual space. For example, FIG. 3 shows a first digital representation 302(1) of a candle for sale by the merchant 112(1), which is positioned at a first position within the virtual space, and a second digital representation 302(2) of a mug for sale by the merchant 112(1), which is positioned at a second, different position within the virtual space. These respective positions of the digital representations 302 within the virtual space may be based on the positioning indications received from the merchant 112(1) at the fourth step in the example of FIG. 2. In other words, the merchant 112(1) may have chosen the positions of the digital representations 302(1) and 302(2) within the virtual space. Accordingly, the digital representations 302(1) and 302(2) may be associated with respective positions within the virtual space based at least in part on the 3D model 300, and these associations may be included in the storefront data 126 representing the XR storefront.” ¶ [0005]: “FIG. 3 is an example three-dimensional (3D) model of a virtual space corresponding to a XR storefront,”). Regarding claim 8 (depends on claim 1), SKEEN discloses: establishing a multi-user session in the virtual interactive environment (¶ [0021]: “This immersive virtual shopping experience provides a more intuitive browsing experience because it mimics a real-life shopping experience at a brick-and-mortar store. Customers can access XR storefronts remotely (e.g., from the comfort of their own homes) using electronic devices, which alleviates the issues surrounding in-person shopping, as noted above. In some examples, the disclosed XR storefront service provides multi-user support to enable interactions between users within the XR storefronts. For example, a merchant (or a clerk associated therewith) can interact with customers while the customers are accessing a XR storefront of the merchant, and/or friends can shop together in a XR storefront even though they are located in disparate geographical locations.” ¶ [0052]: “FIG. 9 is an example user interface 900 of a virtual space corresponding to a VR storefront, the virtual space including avatars, such as the avatar 902 and the avatar 904, of other users who are also accessing the VR storefront at the same time as the viewing user, according to an implementation of the present subject matter. As mentioned above, in some examples, the VR storefront service 110 provides multi-user support to enable interactions between users 102, 104 within the VR storefront. For example, a merchant 112 (e.g., a user, such as a clerk 104, associated with the merchant 112) can interact with customers while the customers are accessing the VR storefront, and/or friends can shop together in a VR storefront even though they are located in disparate geographical locations. 
This provides an interactive experience where the customers can interact with each other and with one or more users (e.g., clerks 104) associated with the merchant 112.” ¶ [0053]: “At runtime, multiple users may be accessing a common VR storefront. In the example of FIG. 9, the viewing user who is viewing the user interface 900 on his/her electronic device 106 may represent a first customer 102. Because the first customer 102 can see two avatars 902 and 904 within the VR storefront, a second customer 102 associated with the first avatar 902 (customer avatar 902) may be accessing the VR storefront at the same time as the first customer 102, and a clerk 104 associated with the second avatar 904 (merchant avatar 904) may be accessing the VR storefront at the same time as the first and second customers 102. While these users 102, 104 are accessing the VR storefront, their respective electronic device 106, 108 stream device data (e.g., position data) to the server(s) 114. In an example where the users 102, 104 are using head-mounted displays (e.g., VR headsets), this device data (e.g., position data) may be based on the movement of the users 102, 104 within their respective environments, enabled by VR tracking technology. In an example have the users 102, 104 are using desktop PCs, the device data (e.g., position data) may be based on user input provided to a mouse and/or keyboard indicative of the user intent to move about the VR storefront from one position to another. Upon receiving device data from an electronic device 108 associated with the clerk 104, for example, the server(s) 114 may determine, based at least in part on the device data, a position of the merchant avatar 904 within the virtual space corresponding to the VR storefront, and may cause the VR storefront to be displayed on the electronic device 106 of the first customer with the merchant avatar 904 positioned at the determined position within the virtual space. Likewise, upon receiving device data from an electronic device 106 associated with the second customer, the server(s) 114 may determine, based at least in part on the device data, a position of the customer avatar 902 within the virtual space corresponding to the VR storefront, and may cause the VR storefront to be displayed on the electronic device 106 of the first customer with the customer avatar 902 positioned at the determined position within the virtual space. As device data is streamed from the respective electronic devices 106, 108 to the server(s) 114, the server(s) 114 can keep track of the updated positions of the respective users 102, 104 and update the 3D scenes displayed to each user 102, 104, as well as the positions of the avatars within those rendered 3D scenes. The server(s) 114 can use any suitable component(s) to support multi-user interactions, such as Socket.IO, EasyRTC, or the like. In general, the component(s) used by the server(s) 114 may enable users 102, 104 to interact with one another within VR storefronts. 
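Because SKEEN ¶ [0053] names Socket.IO as one suitable component, the position-streaming relay it describes might be sketched as follows; the event name "avatar-position" and the payload shape are hypothetical assumptions.

    // Illustrative sketch only: relaying streamed avatar positions between users
    // in the same storefront session. Server and client would live in separate files.
    interface AvatarUpdate {
      userId: string;
      x: number;
      y: number;
      z: number;
    }

    // --- server (socket.io) ---
    import { Server } from "socket.io";

    const io = new Server(3000);
    io.on("connection", (socket) => {
      // Relay each client's position data to the other connected devices.
      socket.on("avatar-position", (update: AvatarUpdate) => {
        socket.broadcast.emit("avatar-position", update);
      });
    });

    // --- client (socket.io-client) ---
    import { io as connect } from "socket.io-client";

    const socket = connect("http://localhost:3000");
    socket.emit("avatar-position", { userId: "customer-102", x: 0, y: 0, z: 1.5 });
    socket.on("avatar-position", (update: AvatarUpdate) => {
      // e.g. move the corresponding avatar entity to (update.x, update.y, update.z)
    });

Each client streams its own position; the server relays it to the other devices in the session, which update the corresponding avatar in their rendered 3D scenes.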
In this manner, different customers may be able to see each other and interact with one another within the VR storefront, and the customers presently accessing the VR storefront can interact with the merchant (e.g., the clerk(s) 104) who is also accessing the VR storefront.”); and generating an invite link associated with the multi-user session (e.g., ¶ [0088]: “the notification(s) may include a link that, upon selection, causes the electronic device(s) of the other user(s) to access the XR storefront.”) for providing access to the multi-user session for a plurality of users (e.g., ¶ [0088]: “the notification(s) may include a link that, upon selection, causes the electronic device(s) of the other user(s) to access the XR storefront.”) (¶ [0088]: “At 1210, a determination is made as to whether to notify another user about the customer 102 having accessed the XR storefront. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may determine whether to notify another user at block 1210. In some examples, the determination made at block 1210 is whether to notify the merchant(s) 112 associated with the XR storefront (e.g., a clerk(s) 104 associated with the merchant 112 who configured the XR storefront). For example, the merchant 112 may specify, in settings, that clerks 104 should be notified about customers entering the XR storefront. In these examples, the merchant 112 may have “on-call” clerks 104 who are notified whenever a customer enters the XR storefront so that the on-call clerks 104 can access the XR storefront to interact with the customer. In some examples, the determination made at block 1210 is whether to notify another user (e.g., a friend, such as a social contact) associated with the customer 102. Accordingly, the server(s) 114 may determine whether any users (e.g., social contacts) are associated with the customer 102 who is accessing the XR storefront, and, if so those users may be notified. In some examples, the customer 102 may specify, in settings, that social contacts (or a subset thereof) should be notified about the customer 102 entering the XR storefront. In some examples, the determination made at block 1210 is whether the customer 102 has explicitly invited another user (e.g., a friend, such as a social contact) to join him/her in the XR storefront. For example, via a user interface displayed on the electronic device 106 of the customer 102, the customer 102 may select an interactive element (e.g., an “invite friends” button) to invite one or more other users to the XR storefront. In some examples, the determination made at block 1210 is whether the customer 102 has explicitly requested the presence of a clerk 104 within the XR storefront. For example, via a user interface displayed on the electronic device 106 of the customer 102, the customer 102 may select an interactive element (e.g., “call a clerk” button) to request that a clerk 104 access the XR storefront to interact with the customer 102. If it is determined, at block 1210, to notify another user(s), the process 1200 may follow the YES route from block 1210 to block 1212 where a notification(s) may be sent to another user(s) and/or another electronic device(s) 106, 108 of the user(s). This notification(s) may be a notification that the customer 102 has entered the XR storefront, and the notification(s) may include a link that, upon selection, causes the electronic device(s) of the other user(s) to access the XR storefront. 
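The invite link described in ¶ [0088], a link that, upon selection, drops the recipient into the same XR storefront session, might be generated along these lines; the URL format and helper name are hypothetical.

    // Illustrative sketch only: mint a shareable link that carries a
    // session identifier for the multi-user storefront session.
    import { randomUUID } from "node:crypto";

    function createInviteLink(storefrontId: string, sessionId: string = randomUUID()): string {
      const url = new URL(`https://example-storefront.invalid/xr/${storefrontId}`);
      url.searchParams.set("session", sessionId);
      return url.toString();
    }

    // e.g. included in the notification sent at block 1212 of SKEEN's process 1200
    const link = createInviteLink("storefront-112-1");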
Such a notification may be sent via any suitable communication channel, such as electronic mail (email), Short Message Service (SMS) text, an in-app notification (e.g., a notification sent to a mobile application installed on the electronic device(s) of the other user(s)), or the like.”). Regarding claim 9 (depends on claim 8), SKEEN discloses: presenting an option (¶ [0057]: “if the first customer 102 (i.e., the viewing user) purchases an item within the VR storefront, the second customer 102 associated with the customer avatar 902 may see, when viewing the digital representation of the purchased item, that the first customer has purchased the item.”), responsive to the user input selecting the at least one product model in the multi-user session (¶ [0057]: “if the first customer 102 (i.e., the viewing user) purchases an item within the VR storefront,), for the plurality of users to add an item corresponding to the at least one product model to accounts associated with the plurality of users (¶ [0057]: “the shared object state may indicate, to the users 102, 104 accessing the VR storefront, that there are only a certain number of a given item left in stock. In some examples, inventory information (e.g., a number of items remaining in stock) is presented via a user interface to a customer 102 regardless of whether shared object state is synchronized across user electronic devices 106, 108 or not. If an item is out of stock, the VR storefront may still include a digital representation of the out-of-stock item along with an indication that the item is back-ordered so that a customer 102 can purchase the item,”) (¶ [0057]: “In addition to synchronizing interaction data (e.g., audio data) across the user electronic devices 106, 108, shared object state may be synchronized across the user electronic devices 106, 108 as well. For example, if the first customer 102 (i.e., the viewing user) purchases an item within the VR storefront, the second customer 102 associated with the customer avatar 902 may see, when viewing the digital representation of the purchased item, that the first customer has purchased the item. In some examples, if the merchant 112 has a limited inventory of items, the shared object state may indicate, to the users 102, 104 accessing the VR storefront, that there are only a certain number of a given item left in stock. In some examples, inventory information (e.g., a number of items remaining in stock) is presented via a user interface to a customer 102 regardless of whether shared object state is synchronized across user electronic devices 106, 108 or not. If an item is out of stock, the VR storefront may still include a digital representation of the out-of-stock item along with an indication that the item is back-ordered so that a customer 102 can purchase the item, but delivery of the item may take longer than usual. In some examples, out-of-stock items are hidden from view (e.g., concealed, removed from the VR storefront, etc.), or out-of-stock items can be converted into background decor of the VR storefront and rendered unpurchasable such that a customer 102 cannot purchase an out-of-stock item. 
In some examples, when an item goes out-of-stock, the digital representation of the item in the VR storefront is automatically replaced with a digital representation of a different item (e.g., an item that was not included in the VR storefront initially due to space constraints, to avoid cluttering the VR storefront, etc.).” ¶ [0090]: “At 1216, a purchase status of the item may be updated in the datastore(s) 120 to indicate the purchased status of the item. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may update the purchase status of the item at block 1216. In some examples, the purchase status may be updated with respect to the customer 102 to indicate to the customer 102 that he/she has already purchased the item, in case they forget that they had purchased it. In this example, the purchase status of the item for other customers may remain as “unpurchased” or “available for purchase.” In other examples, the object state is synchronized across electronic devices 106, 108 of users who are accessing the XR storefront such that other users would see the updated purchase status, at least in association with the customer 102 who purchased the item.”). Regarding claim 11, SKEEN discloses a system for providing a virtual interactive environment (See FIG. 1. ¶ [0026]: “XR storefront service 110.” ¶ [0061]: “process 1000 can be implemented by a system (e.g., a computing device(s)) including one or more processors and memory storing computer-executable instructions to cause the one or more processors to perform the process 1000.”), the system comprising: one or more processors (¶ [0061]: “a system (e.g., a computing device(s)) including one or more processors and memory storing computer-executable instructions to cause the one or more processors to perform the process 1000.”); and a computer-readable memory device storing instructions that, when executed by the one or more processors (¶ [0061]: “a system (e.g., a computing device(s)) including one or more processors and memory storing computer-executable instructions to cause the one or more processors to perform the process 1000.” ¶ [0072]: “The process 1100 can be implemented by a system (e.g., a computing device(s)) including one or more processors and memory storing computer-executable instructions to cause the one or more processors to perform the process 1100.” ¶ [0083]: “The process 1200 can be implemented by a system (e.g., a computing device(s)) including one or more processors and memory storing computer-executable instructions to cause the one or more processors to perform the process 1200.” ¶ [0094]: “The process 1300 can be implemented by a system (e.g., a computing device(s)) including one or more processors and memory storing computer-executable instructions to cause the one or more processors to perform the process 1300.”), cause the system to: provide an environment generator engine (e.g., ¶ [0031]: “XR storefront service 110”;) for generating a computer-generated three-dimensional (3D) space(¶ [0031]: “3D scenes”) by rendering a 3D model (e.g., ¶ [0033]: “3D model” ¶ [0031]: “rendering XR content (e.g., 3D scenes)”) (¶ [0031]: “The XR storefront service 110 can utilize any suitable type of component(s) to implement XR storefronts, as described herein. In some examples, the WebXR Device application programming interface (API) is utilized by the XR storefront service 110 to provide the interactive and immersive experiences to end users (e.g., customers 102, clerks 104, etc.) 
described herein, such as by rendering XR content (e.g., 3D scenes) to the user's electronic devices 106, 108, while maintaining compatibility with traditional browsers. In some examples, the A-frame open source library is utilized by the XR storefront service 110. The A-frame open source library uses HyperText Markup Language (HTML) and JavaScript as the central primitive for defining/building XR experiences. The code used by the XR storefront service 110 to implement the XR storefronts can be declarative HTML and/or JavaScript. This declarative software stack allows developers who are unfamiliar with more sophisticated game engines to develop a XR experience for merchants 112 using HTML and/or JavaScript code, and it also allows the end user (e.g., the customer 102, the clerk 104, etc.) to access XR storefronts (e.g., by downloading storefront data 126 used to render content (e.g., 3D scenes) via a browser) without having to download special-purpose applications or programs to access the XR storefronts.” ¶ [0032]: “FIG. 2 shows an example user interface 200 that can be displayed on a merchant device 116 at a time when a merchant 112 is configuring a XR storefront. Although the user interface 200 is shown as a browser, it is to be appreciated that an application may be downloaded from the XR storefront service 110 to a merchant device 116, and that the downloaded application may present a similar user interface. The user interface 200 may provide merchants 112 with a democratized on-ramp to configure their own XR storefronts, meaning that the merchant does not have to be a specialist (e.g., a 3D modeler, a game designer, etc.) to use the user interface 200 for configuring a XR storefront. In some examples, the user interface 200 can be used to configure a merchant-specific XR storefront for a particular merchant 112.” ¶ [0033]: “For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 202 to reveal a list of different XR storefronts that the merchant 112(1) can choose from. As mentioned, these predefined XR storefronts in the menu 124 may have been created and tested for compatibility with a XR shopping experience by a development team associated with the XR storefront service 110.” ¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0034]: “In some examples, the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront. For example, the merchant 112(1) may be able to modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc., and/or the merchant 112(1) may be able to combine (e.g., mix-and-match) multiple different XR storefronts to create a unique XR storefront. 
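The kinds of modifications just quoted from ¶ [0034] (changing colors, lighting, and so on) amount to surface customizations of the 3D model; one way to picture them, using the hypothetical names SurfaceCustomization and applySurfaceCustomizations (not an API of Blender or of SKEEN), is the following sketch.

    // Illustrative sketch only: representing lighting, texture, and reflective
    // material customizations applied to surfaces of a store 3D model.
    interface StoreModel {
      meshUri: string;
      lights: { type: "point" | "ambient"; intensity: number }[];
      surfaces: Record<string, { textureUri?: string; color?: string; reflectivity?: number }>;
    }

    type SurfaceCustomization =
      | { kind: "lighting"; light: StoreModel["lights"][number] }
      | { kind: "texture"; surfaceId: string; textureUri: string }
      | { kind: "material"; surfaceId: string; reflectivity: number; color?: string };

    function applySurfaceCustomizations(model: StoreModel, edits: SurfaceCustomization[]): StoreModel {
      const next: StoreModel = { ...model, lights: [...model.lights], surfaces: { ...model.surfaces } };
      for (const edit of edits) {
        if (edit.kind === "lighting") {
          next.lights.push(edit.light);                  // add a lighting source layer
        } else {
          const surface = { ...next.surfaces[edit.surfaceId] };
          if (edit.kind === "texture") surface.textureUri = edit.textureUri;   // texture layer
          if (edit.kind === "material") {
            surface.reflectivity = edit.reflectivity;                          // reflective material layer
            if (edit.color) surface.color = edit.color;
          }
          next.surfaces[edit.surfaceId] = surface;
        }
      }
      return next;
    }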
In some examples, the XR storefront service 110 may offer (e.g., as a premium service) the ability for the merchant 112(1) to create a XR storefront that is a replica of a brick-and-mortar store of the merchant 112(1). For example, a service provider of the XR storefront service 110 may provide 3D scanning hardware to the merchant 112(1), which the merchant 112(1) can use to scan the interior space of an existing brick-and-mortar store, and the resulting scan data can be uploaded to the server(s) 114 and used to create a 3D model for a replica XR storefront. In other examples, the XR storefront service 110 may not provide 3D scanning hardware to the merchant 112(1) and the servers(s) 114 may instead receive scan data from 3D scanning hardware already possessed by the merchant 112(1). In some examples, the service provider may send personnel to a brick-and-mortar location to scan the interior space of an existing brick-and-mortar store as a service for the merchant 112(1). In some examples, this type of service may be provided in combination with a service to scan physical items of the merchant's 112(1) inventory to create 3D models of the items.” ¶ [0043]: “FIG. 3 is an example 3D model 300 of a virtual space corresponding to a XR storefront, according to an implementation of the present subject matter. For example, as a result of the merchant 112(1) configuring a XR storefront (e.g., via the user interface 200 of FIG. 2), the storefront data 126 representing the XR storefront may include the data corresponding to the 3D model 300 shown in FIG. 3. Any suitable component can be used to create the 3D model 300. For example, the 3D model 300 may be created using Blender.”) and applying one or more surface customizations to the 3D model (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.” NOTE: In 3D model rendering, changing the lighting of a surface and/or changing the colors of a surface in a 3D model applies changes to the surface in 3D model.) (¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0034]: “In some examples, the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront. For example, the merchant 112(1) may be able to modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc., and/or the merchant 112(1) may be able to combine (e.g., mix-and-match) multiple different XR storefronts to create a unique XR storefront.” ¶ [0043]: “FIG. 3 is an example 3D model 300 of a virtual space corresponding to a XR storefront, according to an implementation of the present subject matter. For example, as a result of the merchant 112(1) configuring a XR storefront (e.g., via the user interface 200 of FIG. 
2), the storefront data 126 representing the XR storefront may include the data corresponding to the 3D model 300 shown in FIG. 3. Any suitable component can be used to create the 3D model 300. For example, the 3D model 300 may be created using Blender.” NOTE: As is known by one of ordinary skill in the art, 3D models created and generated using Blender™, as suggested by SKEEN, include applying one or more of a lighting source layer, a reflective material layer, or a texture layer to the 3D model. For instance, paragraph [0030] of HELINGER et al. (US 2024/011243) discloses: “As used herein, “model” or “environment model” may refer to a computational or data-based model of an environment. In some embodiments or cases, “model” and “environment” may be synonymous. In some embodiments or cases, “model” may refer to the data which records or encodes an environment. A model may contain information for one or more of the following aspects of an environment: the geometry of the environment (e.g., encoded using points and/or polygons), textures of surfaces (e.g., splat maps or sprites), lighting (e.g., positions and types of light sources), light effects (e.g., reflectivity and transparency of materials or objects), etc. In some embodiments, a plurality or set of points and/or polygons may be described as a mesh. A mesh may have vertices, faces, and/or edges. A mesh may, for example, have textures or surfaces applied to it, or the textures or surfaces may be applied separately to individual faces of the mesh. A mesh may describe part of, or the entirety of, an object or a layer. In some embodiments, a model may include a number of constituent parts, such as objects and layers. Models may be constructed using a number of computer programs and/or computer aided design programs. For example, models might be constructed using one or more of the following commercially available programs or software: Blender, Cinema 4D, LightWave, Maya, Modo, 3ds Max, 3ixam, POV-Ray, RealityCapture, Metashape, and 3DF Zephyr, Unity, Unreal, or AI tools.” Thus, one of ordinary skill in the art would understand that a 3D model of the virtual space generated using Blender™, as taught by SKEEN, would include applying one or more of a lighting source layer, a reflective material layer, or a texture layer to the 3D model, as clearly disclosed by HELINGER et al.); provide one or more product models (¶ [0028]: “3D model data associated with items” ¶ [0029]: “the merchant 112(1) may use the XR storefront service 110 to generate digital representations of items” ¶ [0034]: “scan physical items of the merchant's 112(1) inventory to create 3D models of the items.” ¶ [0038]: “create or generate digital representations of the selected items from the second step. The digital representations may be 2D representations or 3D representations.” ¶ [0038]: “a 360-degree interactive model of an item (e.g., a product).”) mapped to one or more virtual surfaces of the 3D model (NOTE: As clearly shown in FIG. 3, first digital representation 302(1) of a candle and second digital representation 302(2) of a mug are clearly mapped to positions on surfaces of display tables in the 3D space generated by rendering the 3D model(s) of the XR store shown in FIG. 3. ¶ [0043]: “FIG. 3 also illustrates how digital representations 302 of items showcased in the XR storefront may be positioned within the virtual space. For example, FIG. 
3 shows a first digital representation 302(1) of a candle for sale by the merchant 112(1), which is positioned at a first position within the virtual space, and a second digital representation 302(2) of a mug for sale by the merchant 112(1), which is positioned at a second, different position within the virtual space. These respective positions of the digital representations 302 within the virtual space may be based on the positioning indications received from the merchant 112(1) at the fourth step in the example of FIG. 2. In other words, the merchant 112(1) may have chosen the positions of the digital representations 302(1) and 302(2) within the virtual space. Accordingly, the digital representations 302(1) and 302(2) may be associated with respective positions within the virtual space based at least in part on the 3D model 300,” ¶ [0029]: “generate a XR storefront (e.g., the XR storefront selected from the menu 124) including the digital representations of the items positioned within a virtual space” ¶ [0040]: “indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items.” ¶ [0043]: “the digital representations 302(1) and 302(2) may be associated with respective positions within the virtual space based at least in part on the 3D model 300,”) ([0023]: “the XR storefront service may provide merchants with an easy-to-use storefront configuration tool (e.g., an Internet-accessible a user interface(s)).” ¶ [0027]: “FIG. 1 also depicts merchant devices 116 (e.g., electronic devices), which may be used by the merchants 112 to access the XR storefront service 110.” ¶ [0028]: “For example, the merchant 112(1) may utilize the server(s) 114 as an ecommerce platform to sell items online, and the catalogue data 122 associated with the merchant 112(1) may specify the merchant's 112(1) items that are available for purchase via the ecommerce platform. Additionally, or alternatively, the catalogue data 122 may be associated with the merchant's 112(1) items that are available for purchase from a brick-and-mortar store. In some examples, the catalogue data 122 may include image data representing images of items and/or 3D model data associated with items to enable user interaction with a 360-degree interactive model of an item (e.g., a product), thereby allowing a customer to view an item (e.g., a product) in 3D.” ¶ [0029]: “Individual merchants 112 may use a merchant device 116 to access the XR storefront service 110 in order to configure a XR storefront. FIG. 1 shows the merchant 112(1) using a merchant device 116(1) to access the XR storefront service 110 (e.g., the server(s) 114) over the network(s) 118 for purposes of configuring a XR storefront. In an example, a merchant 112 may be interested in setting up and configuring a VR storefront in order provide an immersive VR experience for customers, such as the customer 102,” ¶ [0029]: “Configuring a XR storefront can involve various operations, as described herein. In some examples, a merchant 112(1) may access a storefront menu(s) 124 that includes multiple different XR storefronts. That is, a storefront menu 124 may be displayed on the merchant device 116(1), thereby allowing the merchant 112(1) to select a XR storefront from the menu 124. These predefined XR storefronts in the menu 124 may have been created and tested for compatibility with a XR shopping experience by a development team associated with the XR storefront service 110. 
In some examples, the merchant 112(1) may use the XR storefront service 110 to generate digital representations of items included in its catalogue data 122, and to generate a XR storefront (e.g., the XR storefront selected from the menu 124) including the digital representations of the items positioned within a virtual space corresponding to the XR storefront.” ¶ [0032]: “FIG. 2 shows an example user interface 200 that can be displayed on a merchant device 116 at a time when a merchant 112 is configuring a XR storefront. Although the user interface 200 is shown as a browser, it is to be appreciated that an application may be downloaded from the XR storefront service 110 to a merchant device 116, and that the downloaded application may present a similar user interface. The user interface 200 may provide merchants 112 with a democratized on-ramp to configure their own XR storefronts, meaning that the merchant does not have to be a specialist (e.g., a 3D modeler, a game designer, etc.) to use the user interface 200 for configuring a XR storefront. In some examples, the user interface 200 can be used to configure a merchant-specific XR storefront for a particular merchant 112.” ¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0038]: “As illustrated by the encircled number 3 in FIG. 2, a third step may be to create or generate digital representations of the selected items from the second step. The digital representations may be 2D representations or 3D representations. Accordingly, the user interface 200 may provide an option 212 to select from existing images in the merchant's 112(1) catalogue, along with a “select images” button 214, to select the desired images. In some examples, selection of the “select images” button 214 may allow for selecting existing 3D models of items that are being, or that have been, utilized on an ecommerce website of the merchant 112(1) to provide users with the ability to interact with a 360-degree interactive model of an item (e.g., a product). In other words, 3D model data associated with the selected items may be reused or repurposed for creating or generating the digital representations of the selected items for inclusion within the XR storefront.” ¶ [0038]: “The user interface 200 may additionally, or alternatively, provide an option 216 to create 3D models of the selected items, along with an “upload scans” button 218 to upload scan data obtained from scanning the items in a real-world space. For example, a 3D scanning device may obtain scan data of an item from different angles, which is usable to create a 3D model of the item. 
A service provider of the XR storefront service 110 may provide this 3D scanning hardware and/or software to the merchant 112(1) and/or send personnel to the merchant's 112(1) brick-and-mortar store to obtain the scan data for the items the merchant 112(1) would like to showcase in the XR storefront. Creating 3D digital representations of items to showcase in the XR storefront is particularly useful for items that are more interesting to examine from different angles. The example of FIG. 2 shows that the merchant 112(1) has selected the option 216 to generate 3D digital representations of the selected items, and, as such, the merchant 112(1) may have uploaded scan data for the items to create the 3D representations.” ¶ [0039]: “In some implementations, the user interface 200 may provide an option for the merchant to provide input that identifies 3D representation of an item. For example, a manufacturer may provide a 3D model of an item at a URL and the user interface 200 may provide an option for the merchant to indicate and enter a URL for the 3D representation of the item.” ¶ [0040]: “As illustrated by the encircled number 4 in FIG. 2, a fourth step may be to indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items. For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 220 to reveal a list of the items that the merchant 112(1) selected at the second step, and the merchant 112(1) may select one of those items. In addition, the merchant 112(1) can interact with (e.g., select) a drop-down menu 222 to reveal a list of predefined locations within the virtual space corresponding to the XR storefront that the merchant 112(1) can choose from to position the selected item within the virtual space. The merchant 112(1) may be able to indicate the positions of any suitable number of the selected items by selecting the “+Add” element.” In some examples, the user interface 200 may present an option to confirm, correct, and/or reject these automatically determined locations of items within the XR storefront.” ¶ [0068] At 1014, in some examples, the digital representations of the items generated at block 1006 may be associated with respective positions within the virtual space corresponding to the XR storefront based at least in part on the 3D model. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may associate the digital representations of the items with the respective positions within the virtual space at block 1014.”); provide an avatar customization engine (e.g., ¶ [0031]: “XR storefront service 110”; e.g. FIG. 2. ¶ [0030]: “The XR storefront service 110 may cause various user interfaces to be presented on a user's electronic device 106, 108,” ¶ [0032]: “FIG. 2 is an example user interface 200 for configuring a XR storefront,” ¶ [0041]: “As illustrated by the encircled number 5 in FIG. 2, a fifth step may be to configure a merchant avatar to represent a user (e.g., the clerk 104 in FIG. 1) associated with the merchant 112(1) within the XR storefront.” ¶ [0080]: : “At 1118, one or more avatars may be configured. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may configure the avatar(s) at block 1118. In some examples, the avatar configured at block 1118 may be a merchant avatar 904 that is to be associated with a clerk 104. 
In this manner, when a customer 102 accesses the XR storefront at the same time as the clerk 104, the merchant avatar 904 may be displayed within the XR storefront on the electronic device 106 of the customer 102. In some examples, the avatar configured at block 1118 may be a customer avatar 902 that is to be associated with a customer 102.”) for generating a virtual avatar (¶ [0041]: “As illustrated by the encircled number 5 in FIG. 2, a fifth step may be to configure a merchant avatar to represent a user (e.g., the clerk 104 in FIG. 1) associated with the merchant 112(1) within the XR storefront. That is, the merchant 112(1) may employ users, such as the clerk 104, to access the XR storefront to interact with customers within the XR storefront. Accordingly, when a customer 102 accesses the XR storefront at the same time that a clerk 104 is accessing the same XR storefront, the customer 102 may see a merchant avatar within the XR storefront. The fifth step in the example of FIG. 2 is to configure such a merchant avatar. The merchant 112(1) may be able to select from a menu of predefined avatars and/or create a new avatar by selecting features, such as height, weight, hair color, eye color, skin color, or the like. If the merchant 112(1) does not complete the fifth step, a default avatar may be chosen for users (e.g., clerks 104) associated with the merchant 112(1).” ¶ [0058]: “The avatars 902, 904 may be human-like in form such that, when they are configured, the avatars, such as the customer avatars 902, may have similar sizing to the customers associated with those avatars. For example, the avatars 902 may be configured to have the same height, weight, neck size, waist size, chest size, and the like, as their corresponding customers. This may allow for evaluating clothing items within the VR storefronts. For example, a customer may be able to virtually try on clothes in a VR storefront to see if they fit. Such an experience may mimic a real-world shopping experience to drive customer engagement with VR storefronts.” ¶ [0080]: “At 1118, one or more avatars may be configured. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may configure the avatar(s) at block 1118. In some examples, the avatar configured at block 1118 may be a merchant avatar 904 that is to be associated with a clerk 104. In this manner, when a customer 102 accesses the XR storefront at the same time as the clerk 104, the merchant avatar 904 may be displayed within the XR storefront on the electronic device 106 of the customer 102. In some examples, the avatar configured at block 1118 may be a customer avatar 902 that is to be associated with a customer 102. 
In this manner, when the clerk 104 (and/or another customer) accesses the XR storefront at the same time as the customer 102, the customer avatar 902 may be displayed within the XR storefront on the electronic device 108 of the clerk 104 (and/or the electronic device 106 of the other customer).”) navigable in the computer-generated 3D space (¶ [0044]: “In some examples, the XR storefront service 110 may analyze the 3D model 300 to determine open floor areas that are contiguous with each other and with an entrance and/or an exit of the XR storefront, and the navigation mesh 400 may be applied to, or positioned on, the contiguous open-floor areas so that an avatar is able to enter the XR storefront, traverse the navigation mesh 400, and exit the XR storefront without getting “stuck” on an island of the navigation mesh 400 that is not connected to the entrance and/or the exit. In some examples, the navigation mesh 400 may be modified by a user to expand or shrink the navigation mesh 400, and/or to change a path along which avatars may traverse the XR storefront.” ¶ [0052] FIG. 9 is an example user interface 900 of a virtual space corresponding to a VR storefront, the virtual space including avatars, such as the avatar 902 and the avatar 904, of other users who are also accessing the VR storefront at the same time as the viewing user, according to an implementation of the present subject matter. As mentioned above, in some examples, the VR storefront service 110 provides multi-user support to enable interactions between users 102, 104 within the VR storefront. For example, a merchant 112 (e.g., a user, such as a clerk 104, associated with the merchant 112) can interact with customers while the customers are accessing the VR storefront, and/or friends can shop together in a VR storefront even though they are located in disparate geographical locations. This provides an interactive experience where the customers can interact with each other and with one or more users (e.g., clerks 104) associated with the merchant 112.” ¶ [0067]: “At 1012, in some examples, a navigation mesh may be applied to the 3D model. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may apply the navigation mesh to the 3D model at block 1012. An example of a navigation mesh 400 applied to a 3D model 300 is depicted in FIG. 4. The navigation mesh 400, when applied to the 3D model, may constrain movement of avatars (e.g., the avatars 902, 904 depicted in FIG. 9) within the virtual space (e.g., 3D virtual space) corresponding to the XR storefront.” ¶ [0081]: “At 1120, data may be stored to save the configured XR storefront (and possibly the configured avatar(s)). In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may store storefront data 126 in the datastore(s) 120 at block 1120, the storefront data 126 representing the XR storefront configured by implementing the preceding blocks of the process 1100. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may store avatar data in the datastore(s) 120 at block 1120, the avatar data representing the avatar(s) (e.g., the merchant avatar(s) 904, the customer avatar(s) 902, etc.) 
configured at block 1118.”), cause the virtual interactive environment to be presented (¶ [0020]: “The XR storefront service can maintain multiple different XR storefronts associated with multiple different merchants to implement a virtual shopping experience for customers akin to a virtual shopping mall. Customers may request access to a XR storefront using their electronic devices. When a customer requests access to a XR storefront, the XR storefront service may be executed to access storefront data representing the XR storefront, and to cause the customer's electronic device to display the XR storefront based at least in part on the storefront data.”), at a display of a user device (¶ [0020]: “cause the customer's electronic device to display the XR storefront” ¶ [0029]: “cause the customer's 102 electronic device 106 to display the XR storefront” ¶ [0030]: “XR storefront is displayed on the electronic device 106(1) of the customer 102 via a user interface 128 in response to the customer 102 requesting access to the XR storefront.” ¶ [0178]: “user device 1602 can include a display 1616.” ¶ [0166]: “the user device 1602 may be the same as or similar to the user devices 106, 108 introduced in FIG. 1.”), as the virtual avatar navigable in the computer-generated 3D space (e.g., ¶ [0044]: “navigation mesh 400 may be applied to, or positioned on, the contiguous open-floor areas so that an avatar is able to enter the XR storefront, traverse the navigation mesh 400, and exit the XR storefront without getting “stuck” on an island of the navigation mesh 400 that is not connected to the entrance and/or the exit.” ¶ [0044]: “avatars may traverse the XR storefront.”) (¶ [0044]: “FIG. 4 is an example 3D model 300 of a virtual space corresponding to a XR storefront, the 3D model 300 having applied thereto a navigation mesh 400 to constrain avatar movement within the virtual space, according to an implementation of the present subject matter. A navigation mesh 400 may be applied to a 3D model 300 to constrain the movement of avatars within the virtual space corresponding to the XR storefront. In the example of FIG. 4, the navigation mesh 400 is shown as a shaded area on the floor of the 3D model 300, but this is exemplary. If a user, such as the customer 102 or the clerk 104, accesses the XR storefront associated with the 3D model 300 depicted in FIG. 4 with the navigation mesh 400 applied thereto, the avatar of the user may be allowed to walk in the shaded area of the navigation mesh 400, but may be unable to walk outside of the shaded area of the navigation mesh 400. This navigation mesh 400 may help prevent anomalous events and errors from occurring during a XR experience. In some examples, a determination is made as to where to position the navigation mesh 400 on, in, or with respect to the 3D model 300 and/or the XR storefront. For example, the XR storefront service 110 may determine where digital objects (e.g., tables, chairs, display cases, clothing racks, etc.) are positioned on a floor of the 3D model 300, and, based on the floor locations of, and/or the area of the floor covered by, those digital objects, the XR storefront service 110 may determine how to apply the navigation mesh 400 to the 3D model 300. For example, the XR storefront service 110 may avoid positioning the navigation mesh 400 on the areas of the floor that are covered by digital objects (e.g., tables, chairs, display cases, clothing racks, etc.) of the 3D model 300. 
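As a minimal sketch of the navigation-mesh behavior quoted from SKEEN's ¶ [0044] (the grid-cell representation, entrance coordinates, and function names are illustrative assumptions, not SKEEN's implementation), the walkable floor can be reduced to the cells reachable from the entrance so that isolated "islands" are discarded, and any avatar move that would leave the mesh can be rejected:

from collections import deque

def reachable_cells(walkable, entrance):
    """Keep only the walkable cells connected to the entrance (drops 'islands')."""
    seen, queue = {entrance}, deque([entrance])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def constrain_move(navmesh, current, requested):
    """Allow the move only if the requested cell lies on the navigation mesh."""
    return requested if requested in navmesh else current

# Floor cells left open by tables and display cases, plus the entrance at (0, 0).
walkable = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (5, 5)}   # (5, 5) is an island
navmesh = reachable_cells(walkable, entrance=(0, 0))           # island removed
print(constrain_move(navmesh, current=(2, 1), requested=(2, 2)))  # allowed -> (2, 2)
print(constrain_move(navmesh, current=(2, 1), requested=(3, 1)))  # blocked -> (2, 1)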
In some examples, the XR storefront service 110 may analyze the 3D model 300 to determine open floor areas that are contiguous with each other and with an entrance and/or an exit of the XR storefront, and the navigation mesh 400 may be applied to, or positioned on, the contiguous open-floor areas so that an avatar is able to enter the XR storefront, traverse the navigation mesh 400, and exit the XR storefront without getting “stuck” on an island of the navigation mesh 400 that is not connected to the entrance and/or the exit. In some examples, the navigation mesh 400 may be modified by a user to expand or shrink the navigation mesh 400, and/or to change a path along which avatars may traverse the XR storefront.”); receive user input controlling the virtual avatar and selecting at least one product model of the one or more product models (¶ [0048]: “In some examples, the customer 102 may utilize a user input device of, or associated with, the electronic device 106 to provide user input indicative of browsing and purchase intents within the VR storefront. For example, the customer 102 may use a user input device to “hover” a pointer 702 on the user interface 700 over the digital representation 302(1) of the item to reveal item details 704 associated with the item. In an example where the electronic device 106 is a head-mounted display 106(1) (e.g., a VR headset), the customer 102 may operate a handheld controller by extending his/her arm forward to move the pointer 702 (e.g., a laser control) over the digital representation 302(1) of the item. In another example where the electronic device 106 is a desktop PC 106(N), the customer 102 may operate a mouse to move the pointer 702 over the digital representation 302(1) of the item. When the pointer 702 is hovering over the digital representation 302(1) of the item, item details 704 may be revealed, such as by the user interface 700 presenting the item details 704 in a pop-up window adjacent to the digital representation 302(1) of the item. In the example of FIG. 7, the item details 704 include a description of the item and a price of the item.” ¶ [0056]: “In some examples, the VR storefront service 110 is configured to analyze interactions of users 102, 104 with digital representations of items within the VR storefront and perform actions based on the analysis of those interactions. For example, if an avatar 902 picks up an item in the VR storefront and is examining the item from different angles (e.g., by turning the digital representation of the item in different orientations), the VR storefront service 110 may cause display, on the user electronic device 106, 108, of higher-resolution images of the items and/or a pop-up option that is selectable for the user 102, 104 to access additional imagery (e.g., images, videos, etc.) associated with the item that the user 102, 104 is currently examining from different angles.”); and present, at the display, data associated with the at least one product model (¶ [0048]: “In some examples, the customer 102 may utilize a user input device of, or associated with, the electronic device 106 to provide user input indicative of browsing and purchase intents within the VR storefront. For example, the customer 102 may use a user input device to “hover” a pointer 702 on the user interface 700 over the digital representation 302(1) of the item to reveal item details 704 associated with the item. 
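The hover interaction quoted from ¶ [0048] reduces to a pointer hit test against item bounds; a minimal sketch, assuming hypothetical item records and screen-space bounding boxes, is:

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    description: str
    price: float
    bbox: tuple  # (x_min, y_min, x_max, y_max) in screen coordinates

def item_details_under_pointer(items, pointer):
    """Return details for the first item whose bounding box contains the pointer."""
    px, py = pointer
    for item in items:
        x0, y0, x1, y1 = item.bbox
        if x0 <= px <= x1 and y0 <= py <= y1:
            return {"description": item.description, "price": item.price}
    return None  # nothing hovered; no pop-up shown

items = [
    Item("candle", "Soy wax candle, lavender scent", 14.99, (100, 200, 160, 260)),
    Item("mug", "Stoneware mug, 12 oz", 9.50, (300, 210, 350, 270)),
]
print(item_details_under_pointer(items, pointer=(120, 230)))  # candle details revealed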
In an example where the electronic device 106 is a head-mounted display 106(1) (e.g., a VR headset), the customer 102 may operate a handheld controller by extending his/her arm forward to move the pointer 702 (e.g., a laser control) over the digital representation 302(1) of the item. In another example where the electronic device 106 is a desktop PC 106(N), the customer 102 may operate a mouse to move the pointer 702 over the digital representation 302(1) of the item. When the pointer 702 is hovering over the digital representation 302(1) of the item, item details 704 may be revealed, such as by the user interface 700 presenting the item details 704 in a pop-up window adjacent to the digital representation 302(1) of the item. In the example of FIG. 7, the item details 704 include a description of the item and a price of the item.” ¶ [0056]: “In some examples, the VR storefront service 110 is configured to analyze interactions of users 102, 104 with digital representations of items within the VR storefront and perform actions based on the analysis of those interactions. For example, if an avatar 902 picks up an item in the VR storefront and is examining the item from different angles (e.g., by turning the digital representation of the item in different orientations), the VR storefront service 110 may cause display, on the user electronic device 106, 108, of higher-resolution images of the items and/or a pop-up option that is selectable for the user 102, 104 to access additional imagery (e.g., images, videos, etc.) associated with the item that the user 102, 104 is currently examining from different angles.”). SKEEN fails to explicitly disclose: the avatar customization engine defining a first set of customization parameters selected to be mutable and a second set of customization parameters selected to be immutable. However, whereas SKEEN may not be completely explicit as to, ROSS clearly teaches: the avatar customization engine (¶ [0010]: “processing means are adapted to generate a visual representation of a virtual user model on the screen based on the parameters for the virtual model of a user stored in the storing means.”) defining a first set of customization parameters selected to be mutable (¶ [0021]: “A user might be interested on the other hand in personalizing a user avatar according to his or her own preferences. In one embodiment of the invention, the storing means are therefore adapted to store parameters for a virtual model of a user that are changeable by a user via the user input means. With such changeable parameters, the user is enabled to assign personal properties to his or her own avatar, like own pictures, sounds and preferences.” ) and a second set of customization parameters selected to be immutable (¶ [0020]: “storing means are adapted to store at least fixed parameters for a virtual model of a user, which cannot be changed by a user. These fixed parameters may describe in particular the major properties of a user avatar. The set of fixed parameters is also referred to as avatar template. An avatar template may be defined for instance by a manufacturer, by a network operator, if the electronic device is a mobile terminal, or by another third party. 
An avatar template may contain necessary parametric information about the basic avatar character and/or behavior tendency and about the technical avatar environment, like available resources, including input and/or output means, sensor information, terminal and network specific data etc.” ¶ [0022]: “Advantageously, an avatar profile is defined, which is always on top of an avatar template. The avatar profile is a collection of changeable and non-changeable parameters, which evolve over time and which define the avatar behavior and capabilities in certain situations. While the template parameters are fixed, the user or other entities, like network operators, can customize the accessible parametric avatar properties of the avatar profile, such as avatar class, skin properties like shape, color, clothing etc.”). Thus, in order to obtain a more versatile system for providing a virtual interactive environment having the cumulative features and/or functionalities taught by SKEEN and ROSS, it would have been obvious to one of ordinary skill in the art to have modified the avatar customization engine in the system for providing an interactive virtual environment taught by SKEEN so as to include defining a first set of parameters selected to be mutable and a second set of customization parameters selected to be immutable, as taught by ROSS. Regarding claim 13 (depends on claim 11), SKEEN discloses: the display is a first display (¶ [0020]: “cause the customer's electronic device to display the XR storefront” ¶ [0029]: “cause the customer's 102 electronic device 106 to display the XR storefront” ¶ [0030]: “XR storefront is displayed on the electronic device 106(1) of the customer 102 via a user interface 128 in response to the customer 102 requesting access to the XR storefront.” ¶ [0178]: “user device 1602 can include a display 1616.” ¶ [0166]: “the user device 1602 may be the same as or similar to the user devices 106, 108 introduced in FIG. 1.”); and providing the environment generator engine includes presenting an environment creator user interface (UI) at second display of a merchant device (¶ [0004]: “FIG. 2 is an example user interface for configuring a XR storefront,” ¶ [0029]: “storefront menu 124 may be displayed on the merchant device 116(1), thereby allowing the merchant 112(1) to select a XR storefront from the menu 124.” ¶ [0032]: “FIG. 2 is an example user interface 200 for configuring a XR storefront,” ¶ [0032]: “FIG. 2 shows an example user interface 200 that can be displayed on a merchant device 116 at a time when a merchant 112 is configuring a XR storefront.” ¶ [0034]: “the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront.”), the environment creator UI receiving one or more inputs (¶ [0033]: “The user interface 200 includes one or more interactive elements with which the merchant 112(1) can interact (e.g., select via user input). In the example of FIG. 2, the user interface 200 provides a series of steps for the merchant 112 to complete in order to configure a XR storefront.” ) indicating at least one of: a location of the one or more product models in the computer-generated 3D space (¶ [0043]: “FIG. 3 also illustrates how digital representations 302 of items showcased in the XR storefront may be positioned within the virtual space. For example, FIG. 
3 shows a first digital representation 302(1) of a candle for sale by the merchant 112(1), which is positioned at a first position within the virtual space, and a second digital representation 302(2) of a mug for sale by the merchant 112(1), which is positioned at a second, different position within the virtual space. These respective positions of the digital representations 302 within the virtual space may be based on the positioning indications received from the merchant 112(1) at the fourth step in the example of FIG. 2. In other words, the merchant 112(1) may have chosen the positions of the digital representations 302(1) and 302(2) within the virtual space. Accordingly, the digital representations 302(1) and 302(2) may be associated with respective positions within the virtual space based at least in part on the 3D model 300,”) ([0023]: “the XR storefront service may provide merchants with an easy-to-use storefront configuration tool (e.g., an Internet-accessible a user interface(s)).” ¶ [0027]: “FIG. 1 also depicts merchant devices 116 (e.g., electronic devices), which may be used by the merchants 112 to access the XR storefront service 110.” ¶ [0029]: “Individual merchants 112 may use a merchant device 116 to access the XR storefront service 110 in order to configure a XR storefront. FIG. 1 shows the merchant 112(1) using a merchant device 116(1) to access the XR storefront service 110 (e.g., the server(s) 114) over the network(s) 118 for purposes of configuring a XR storefront. In an example, a merchant 112 may be interested in setting up and configuring a VR storefront in order provide an immersive VR experience for customers, such as the customer 102,” ¶ [0029]: “Configuring a XR storefront can involve various operations, as described herein. In some examples, a merchant 112(1) may access a storefront menu(s) 124 that includes multiple different XR storefronts. That is, a storefront menu 124 may be displayed on the merchant device 116(1), thereby allowing the merchant 112(1) to select a XR storefront from the menu 124. These predefined XR storefronts in the menu 124 may have been created and tested for compatibility with a XR shopping experience by a development team associated with the XR storefront service 110. In some examples, the merchant 112(1) may use the XR storefront service 110 to generate digital representations of items included in its catalogue data 122, and to generate a XR storefront (e.g., the XR storefront selected from the menu 124) including the digital representations of the items positioned within a virtual space corresponding to the XR storefront.” ¶ [0032]: “FIG. 2 shows an example user interface 200 that can be displayed on a merchant device 116 at a time when a merchant 112 is configuring a XR storefront. Although the user interface 200 is shown as a browser, it is to be appreciated that an application may be downloaded from the XR storefront service 110 to a merchant device 116, and that the downloaded application may present a similar user interface. The user interface 200 may provide merchants 112 with a democratized on-ramp to configure their own XR storefronts, meaning that the merchant does not have to be a specialist (e.g., a 3D modeler, a game designer, etc.) to use the user interface 200 for configuring a XR storefront. In some examples, the user interface 200 can be used to configure a merchant-specific XR storefront for a particular merchant 112.” ¶ [0040]: “As illustrated by the encircled number 4 in FIG. 
2, a fourth step may be to indicate the respective positions within a virtual space corresponding to the XR storefront to position the digital representations of the items. For example, the merchant 112(1) can interact with (e.g., select) a drop-down menu 220 to reveal a list of the items that the merchant 112(1) selected at the second step, and the merchant 112(1) may select one of those items. In addition, the merchant 112(1) can interact with (e.g., select) a drop-down menu 222 to reveal a list of predefined locations within the virtual space corresponding to the XR storefront that the merchant 112(1) can choose from to position the selected item within the virtual space. The merchant 112(1) may be able to indicate the positions of any suitable number of the selected items by selecting the “+Add” element.” In some examples, the user interface 200 may present an option to confirm, correct, and/or reject these automatically determined locations of items within the XR storefront.” ¶ [0068] At 1014, in some examples, the digital representations of the items generated at block 1006 may be associated with respective positions within the virtual space corresponding to the XR storefront based at least in part on the 3D model. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may associate the digital representations of the items with the respective positions within the virtual space at block 1014.”); a personalization of at least a portion of the computer-generated 3D space for a particular user (¶ [0077]: “At 1112, customer data associated with a customer 102 may be accessed from the datastore(s) 120. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may access the customer data at block 1112. The customer data accessed at block 1112 may be indicative of customer preferences or predilections (e.g., based on favorited items and/or item categories, past purchases of items, etc.), customer demographics, upcoming events (e.g., birthdays, anniversaries, etc.) associated with the customer (e.g., based on calendar data), or the like. Accessing the customer data 1112 may be done for purposes of personalizing the XR storefront and/or the items showcased therein for the customer 102, in some examples.” ¶ [0078]: “In some examples, the items selected at block 1114 are based at least in part on the customer data accessed at block 1112 (e.g., to generate a personalized XR storefront for the customer 102).” ¶ [0082]: “The process 1100 can additionally, or alternatively, be implemented to configure and store a customer-specific XR storefront. For example, the XR storefront may showcase digital representations of items offered for sale by multiple different merchants, such as the first merchant 112(1) (Merchant A) and the second merchant 112(2) (Merchant B) depicted in FIG. 1. In some examples, the customer data accessed at block 1112 may be used to determine contextual data associated with the customer 102, such as an upcoming event (e.g., a birthday), and the XR storefront and/or items selected for the XR storefront may be configured based at least in part on the contextual data, such as by selecting birthday-related items (e.g., balloons, plates, birthday cakes, etc.), potentially from multiple different merchants in the same XR storefront.” ¶ [0093]: “In some examples, the XR storefront service 110 may be able to access data (e.g., customer data, merchant data, etc.) 
from various channels based on the server(s) 114 providing additional services to merchants 112 and customers 102. For example, purchase data, sales data, etc. may be accessed by the XR storefront service 110 to determine one or more items that a customer 102 purchased (e.g., online, in a brick-and-mortar store, etc.) in the past, and based at least in part on this data, the XR storefront service 110 may be configured to present one or more visual indications (e.g., a color, an outline, a highlight, etc.) in association with one or more items that are similar to the item(s) the customer 102 has purchased in the past. In some examples, such visual indications may be presented in association with items based on customer preferences, other customers (e.g., customers who purchased this item also purchased these items, etc.). In some examples, digital representations of items can be dynamically positioned and/or repositioned within the XR storefront for a particular customer 102 based at least in part on customer data associated with the customer 102, such as customer preferences, purchase history of the customer, etc. In an example, digital representations of items that the XR storefront service 110 determines a customer 102 may be interested in (e.g., based on purchase history of the customer, customer preferences, etc.) may be positioned towards a front and/or an entrance of the XR storefront to prominently feature specific items that the customer 102 is more likely to engage with, as compared to other items. In some examples, an area (e.g., a virtual table) near an entrance of the XR storefront may include digital representations of items specifically catered to the customer 102 based on customer data accessible to the XR storefront service 110, and these items may change/refresh dynamically each time the user enters the XR storefront, and/or the items may be different for different customers such that multiple avatars located at the same location within a XR storefront may see different digital representations of items at the same location.”); a customized message in the computer-generated 3D space (¶ [0082]: “In some examples, once the data is stored at block 1120, and if the XR storefront is a customer-specific XR storefront, a notification (e.g., promotional message) may be provided, or otherwise sent, to the customer 102 indicating that a personalized XR storefront has been created for them.” ¶ [0082]: “In some examples, once the data is stored at block 1120, and if the XR storefront is a customer-specific XR storefront, a notification (e.g., promotional message) may be provided, or otherwise sent, to the customer 102 indicating that a personalized XR storefront has been created for them.” ); or a lighting theme for a portion of the computer-generated 3D space (¶ [0034]: “In some examples, the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront. For example, the merchant 112(1) may be able to modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc., and/or the merchant 112(1) may be able to combine (e.g., mix-and-match) multiple different XR storefronts to create a unique XR storefront.” ¶ [100]: “lighting to use within the XR storefront,”). Regarding claim 14 (depends on claim 13), ROSS further teaches: the first set of customization parameters are mutable (¶ [0021]: “A user might be interested on the other hand in personalizing a user avatar according to his or her own preferences. 
In one embodiment of the invention, the storing means are therefore adapted to store parameters for a virtual model of a user that are changeable by a user via the user input means. With such changeable parameters, the user is enabled to assign personal properties to his or her own avatar, like own pictures, sounds and preferences.”) and the second set of customization parameters are immutable (¶ [0020]: “storing means are adapted to store at least fixed parameters for a virtual model of a user, which cannot be changed by a user. These fixed parameters may describe in particular the major properties of a user avatar. The set of fixed parameters is also referred to as avatar template. An avatar template may be defined for instance by a manufacturer, by a network operator, if the electronic device is a mobile terminal, or by another third party. An avatar template may contain necessary parametric information about the basic avatar character and/or behavior tendency and about the technical avatar environment, like available resources, including input and/or output means, sensor information, terminal and network specific data etc.” ¶ [0022]: “Advantageously, an avatar profile is defined, which is always on top of an avatar template. The avatar profile is a collection of changeable and non-changeable parameters, which evolve over time and which define the avatar behavior and capabilities in certain situations. While the template parameters are fixed, the user or other entities, like network operators, can customize the accessible parametric avatar properties of the avatar profile, such as avatar class, skin properties like shape, color, clothing etc.”) based on an avatar profile template associated with the merchant device (¶ [0020]: “In one embodiment of the invention, the storing means are adapted to store at least fixed parameters for a virtual model of a user, which cannot be changed by a user. These fixed parameters may describe in particular the major properties of a user avatar. The set of fixed parameters is also referred to as avatar template. An avatar template may be defined for instance by a manufacturer, by a network operator, if the electronic device is a mobile terminal, or by another third party. An avatar template may contain necessary parametric information about the basic avatar character and/or behavior tendency and about the technical avatar environment, like available resources, including input and/or output means, sensor information, terminal and network specific data etc.” ¶ [0022]: “Advantageously, an avatar profile is defined, which is always on top of an avatar template. The avatar profile is a collection of changeable and non-changeable parameters, which evolve over time and which define the avatar behavior and capabilities in certain situations. While the template parameters are fixed, the user or other entities, like network operators, can customize the accessible parametric avatar properties of the avatar profile, such as avatar class, skin properties like shape, color, clothing etc. 
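A plausible encoding of the template/profile split described in ROSS's ¶¶ [0020]-[0022] treats template parameters as write-protected and profile parameters as user-editable; the class and parameter names in this sketch are assumptions for illustration, not ROSS's implementation:

class AvatarCustomizer:
    """Template parameters are immutable; everything else in the profile is mutable."""

    def __init__(self, template, profile):
        self._template = dict(template)   # fixed by manufacturer/operator; not user-editable
        self._profile = dict(profile)     # user-editable customizations

    def set_parameter(self, name, value):
        if name in self._template:
            raise PermissionError(f"'{name}' is defined by the avatar template and cannot be changed")
        self._profile[name] = value

    def resolved(self):
        """Profile layered on top of the template, with template values always winning."""
        return {**self._profile, **self._template}

avatar = AvatarCustomizer(
    template={"avatar_class": "clerk", "brand_uniform": "blue"},  # immutable set
    profile={"hair_color": "brown", "height_cm": 175},            # mutable set
)
avatar.set_parameter("hair_color", "black")      # allowed
# avatar.set_parameter("brand_uniform", "red")   # would raise PermissionError
print(avatar.resolved())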
Such an avatar profile thus enables a good compromise between the interests of manufacturers and network operators and the interest of users.” NOTE: In other words, the avatar template defines the fixed parameters, which conversely also defines the avatar changeable parameters which are not included in the avatar template, i.e., the avatar parameters in the template are immutable (or, unchangeable) based on being included in the avatar template, and any avatar parameters not included in the avatar template are mutable (or, changeable) based on not being included in the avatar template.). Regarding claim 16 (depends on claim 11), the combination of SKEEN as understood in view of HELINGER teaches: the one or more surface customizations include at least one of a lighting source layer (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.”), a reflective material layer (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.” NOTE: The color of a surface in a 3D model comprises, at least in part, the reflective properties of the surface.), or a texture layer (e.g., ¶ [0034]: “modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc.” NOTE: The color and/or lighting of a surface in a 3D model represents, at least in part, texture properties of the surface in the 3D model.) (¶ [0033]: “Each XR storefront in the menu 124 may be associated with a corresponding 3D model that is maintained in the datastore(s) 120. After selecting a XR storefront from the menu 124, the merchant 112(1) may be able to preview the selected XR storefront by selecting a “preview” button 204. For example, upon selecting the preview button 204, a window showing the selected XR storefront may be displayed on the merchant device 116(1), such as a pop-up window. In some example, the preview may be a static image of the XR storefront, while in other examples, the merchant 112(1) may be able to interact with the 3D model corresponding to the XR storefront and/or view video demonstrations of what an end user would see during a XR experience within the XR storefront.” ¶ [0034]: “In some examples, the XR storefront service 110 may provide a XR storefront creation tool to create a new XR storefront. For example, the merchant 112(1) may be able to modify an existing XR storefront in the menu, such as by changing colors, virtual objects, lighting, spaces, etc., and/or the merchant 112(1) may be able to combine (e.g., mix-and-match) multiple different XR storefronts to create a unique XR storefront.” ¶ [0043]: “FIG. 3 is an example 3D model 300 of a virtual space corresponding to a XR storefront, according to an implementation of the present subject matter. For example, as a result of the merchant 112(1) configuring a XR storefront (e.g., via the user interface 200 of FIG. 2), the storefront data 126 representing the XR storefront may include the data corresponding to the 3D model 300 shown in FIG. 3. Any suitable component can be used to create the 3D model 300. For example, the 3D model 300 may be created using Blender.” NOTE: As is known by one of ordinary skill in the art, 3D models created and generated using Blender™, as suggested by SKEEN, include applying one or more of a lighting source layer, a reflective material layer, or a texture layer to the 3D model. For instance, paragraph [0030] of HELINGER et al. 
(US 2024/0112431) discloses: “As used herein, “model” or “environment model” may refer to a computational or data-based model of an environment. In some embodiments or cases, “model” and “environment” may be synonymous. In some embodiments or cases, “model” may refer to the data which records or encodes an environment. A model may contain information for one or more of the following aspects of an environment: the geometry of the environment (e.g., encoded using points and/or polygons), textures of surfaces (e.g., splat maps or sprites), lighting (e.g., positions and types of light sources), light effects (e.g., reflectivity and transparency of materials or objects), etc. In some embodiments, a plurality or set of points and/or polygons may be described as a mesh. A mesh may have vertices, faces, and/or edges. A mesh may, for example, have textures or surfaces applied to it, or the textures or surfaces may be applied separately to individual faces of the mesh. A mesh may describe part of, or the entirety of, an object or a layer. In some embodiments, a model may include a number of constituent parts, such as objects and layers. Models may be constructed using a number of computer programs and/or computer aided design programs. For example, models might be constructed using one or more of the following commercially available programs or software: Blender, Cinema 4D, LightWave, Maya, Modo, 3ds Max, 3ixam, POV-Ray, RealityCapture, Metashape, and 3DF Zephyr, Unity, Unreal, or AI tools.” Thus, one of ordinary skill in the art would understand that a 3D model of the virtual space generated using Blender™, as taught by SKEEN, would include applying one or more of a lighting source layer, a reflective material layer, or a texture layer to the 3D model, as clearly disclosed by HELINGER et al.). Regarding claim 18 (depends on claim 11), SKEEN discloses: the instructions, when executed by the one or more processors, cause the system to present a graphical interactive element (e.g., ¶ [0088]: “the notification(s) may include a link that, upon selection, causes the electronic device(s) of the other user(s) to access the XR storefront.”) which, upon receiving a user input (e.g., ¶ [0088]: “the notification(s) may include a link that, upon selection, causes the electronic device(s) of the other user(s) to access the XR storefront.”), establishes a multi-user session for the virtual interactive environment (¶ [0021]: “This immersive virtual shopping experience provides a more intuitive browsing experience because it mimics a real-life shopping experience at a brick-and-mortar store. Customers can access XR storefronts remotely (e.g., from the comfort of their own homes) using electronic devices, which alleviates the issues surrounding in-person shopping, as noted above. In some examples, the disclosed XR storefront service provides multi-user support to enable interactions between users within the XR storefronts. For example, a merchant (or a clerk associated therewith) can interact with customers while the customers are accessing a XR storefront of the merchant, and/or friends can shop together in a XR storefront even though they are located in disparate geographical locations.” ¶ [0052]: “FIG.
9 is an example user interface 900 of a virtual space corresponding to a VR storefront, the virtual space including avatars, such as the avatar 902 and the avatar 904, of other users who are also accessing the VR storefront at the same time as the viewing user, according to an implementation of the present subject matter. As mentioned above, in some examples, the VR storefront service 110 provides multi-user support to enable interactions between users 102, 104 within the VR storefront. For example, a merchant 112 (e.g., a user, such as a clerk 104, associated with the merchant 112) can interact with customers while the customers are accessing the VR storefront, and/or friends can shop together in a VR storefront even though they are located in disparate geographical locations. This provides an interactive experience where the customers can interact with each other and with one or more users (e.g., clerks 104) associated with the merchant 112.” ¶ [0053]: “At runtime, multiple users may be accessing a common VR storefront. In the example of FIG. 9, the viewing user who is viewing the user interface 900 on his/her electronic device 106 may represent a first customer 102. Because the first customer 102 can see two avatars 902 and 904 within the VR storefront, a second customer 102 associated with the first avatar 902 (customer avatar 902) may be accessing the VR storefront at the same time as the first customer 102, and a clerk 104 associated with the second avatar 904 (merchant avatar 904) may be accessing the VR storefront at the same time as the first and second customers 102. While these users 102, 104 are accessing the VR storefront, their respective electronic device 106, 108 stream device data (e.g., position data) to the server(s) 114. In an example where the users 102, 104 are using head-mounted displays (e.g., VR headsets), this device data (e.g., position data) may be based on the movement of the users 102, 104 within their respective environments, enabled by VR tracking technology. In an example have the users 102, 104 are using desktop PCs, the device data (e.g., position data) may be based on user input provided to a mouse and/or keyboard indicative of the user intent to move about the VR storefront from one position to another. Upon receiving device data from an electronic device 108 associated with the clerk 104, for example, the server(s) 114 may determine, based at least in part on the device data, a position of the merchant avatar 904 within the virtual space corresponding to the VR storefront, and may cause the VR storefront to be displayed on the electronic device 106 of the first customer with the merchant avatar 904 positioned at the determined position within the virtual space. Likewise, upon receiving device data from an electronic device 106 associated with the second customer, the server(s) 114 may determine, based at least in part on the device data, a position of the customer avatar 902 within the virtual space corresponding to the VR storefront, and may cause the VR storefront to be displayed on the electronic device 106 of the first customer with the customer avatar 902 positioned at the determined position within the virtual space. As device data is streamed from the respective electronic devices 106, 108 to the server(s) 114, the server(s) 114 can keep track of the updated positions of the respective users 102, 104 and update the 3D scenes displayed to each user 102, 104, as well as the positions of the avatars within those rendered 3D scenes. 
The server(s) 114 can use any suitable component(s) to support multi-user interactions, such as Socket.IO, EasyRTC, or the like. In general, the component(s) used by the server(s) 114 may enable users 102, 104 to interact with one another within VR storefronts. In this manner, different customers may be able to see each other and interact with one another within the VR storefront, and the customers presently accessing the VR storefront can interact with the merchant (e.g., the clerk(s) 104) who is also accessing the VR storefront.” ¶ [0088]: “At 1210, a determination is made as to whether to notify another user about the customer 102 having accessed the XR storefront. In some examples, a computing device(s) (e.g., the server(s) 114, and/or a processor(s) thereof) may determine whether to notify another user at block 1210. In some examples, the determination made at block 1210 is whether to notify the merchant(s) 112 associated with the XR storefront (e.g., a clerk(s) 104 associated with the merchant 112 who configured the XR storefront). For example, the merchant 112 may specify, in settings, that clerks 104 should be notified about customers entering the XR storefront. In these examples, the merchant 112 may have “on-call” clerks 104 who are notified whenever a customer enters the XR storefront so that the on-call clerks 104 can access the XR storefront to interact with the customer. In some examples, the determination made at block 1210 is whether to notify another user (e.g., a friend, such as a social contact) associated with the customer 102. Accordingly, the server(s) 114 may determine whether any users (e.g., social contacts) are associated with the customer 102 who is accessing the XR storefront, and, if so those users may be notified. In some examples, the customer 102 may specify, in settings, that social contacts (or a subset thereof) should be notified about the customer 102 entering the XR storefront. In some examples, the determination made at block 1210 is whether the customer 102 has explicitly invited another user (e.g., a friend, such as a social contact) to join him/her in the XR storefront. For example, via a user interface displayed on the electronic device 106 of the customer 102, the customer 102 may select an interactive element (e.g., an “invite friends” button) to invite one or more other users to the XR storefront. In some examples, the determination made at block 1210 is whether the customer 102 has explicitly requested the presence of a clerk 104 within the XR storefront. For example, via a user interface displayed on the electronic device 106 of the customer 102, the customer 102 may select an interactive element (e.g., “call a clerk” button) to request that a clerk 104 access the XR storefront to interact with the customer 102. If it is determined, at block 1210, to notify another user(s), the process 1200 may follow the YES route from block 1210 to block 1212 where a notification(s) may be sent to another user(s) and/or another electronic device(s) 106, 108 of the user(s). This notification(s) may be a notification that the customer 102 has entered the XR storefront, and the notification(s) may include a link that, upon selection, causes the electronic device(s) of the other user(s) to access the XR storefront. 
Such a notification may be sent via any suitable communication channel, such as electronic mail (email), Short Message Service (SMS) text, an in-app notification (e.g., a notification sent to a mobile application installed on the electronic device(s) of the other user(s)), or the like.”); and the multi-user session includes an audio channel (¶ [0054]: “In some examples, users 102, 104 accessing a VR storefront can utilize user input devices to provide user input indicative of an interaction with an avatar that is displayed within the VR storefront. For example, the first customer 102 in the example of FIG. 9 (i.e., the viewing user) may speak into a microphone of his/her electronic device 106 in order to talk to another user(s) associated with the avatars 902 and/or 904. When this occurs, the server(s) 114 receives interaction data (e.g., audio data) from the electronic device 106 of the first customer 102, indicating an interaction of the first customer 102 with the avatar 902 and/or 904, and the server(s) 114 may send (e.g., stream) this interaction data to the electronic device(s) 106, 108 of the user(s) 102, 104 associated with the avatar(s) 902, 904. In other words, interaction data (e.g., audio data) may be synchronized across the user electronic devices 106, 108 to allow the users 102, 104 to interact with one another within the VR storefront.” ¶ [0054]: “As another example, the two customers 102, who might be friends located in different geographical locations, can interact with one another, such as by speaking. For instance, the second customer 102 associated with the customer avatar 902 might virtually try on a digital representation of a shirt and may say to the first customer 102 (i.e., the viewing user) something like “What do you think of this shirt on me?” Again, this mimics real-life experiences to drive customer engagement with VR storefronts of merchants.”) for: receiving an audio signal as input at a first user device corresponding with primary user (¶ [0054]: “In some examples, users 102, 104 accessing a VR storefront can utilize user input devices to provide user input indicative of an interaction with an avatar that is displayed within the VR storefront. For example, the first customer 102 in the example of FIG. 9 (i.e., the viewing user) may speak into a microphone of his/her electronic device 106 in order to talk to another user(s) associated with the avatars 902 and/or 904. When this occurs, the server(s) 114 receives interaction data (e.g., audio data) from the electronic device 106 of the first customer 102, indicating an interaction of the first customer 102 with the avatar 902 and/or 904, and the server(s) 114 may send (e.g., stream) this interaction data to the electronic device(s) 106, 108 of the user(s) 102, 104 associated with the avatar(s) 902, 904. In other words, interaction data (e.g., audio data) may be synchronized across the user electronic devices 106, 108 to allow the users 102, 104 to interact with one another within the VR storefront.” ¶ [0054]: “As another example, the two customers 102, who might be friends located in different geographical locations, can interact with one another, such as by speaking. 
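The multi-user mechanics quoted from ¶¶ [0053]-[0054] (each device streams position data to the server, which updates the other participants' views, and audio interaction data is forwarded to the other devices) can be sketched as a simple in-memory relay; a deployed system would use a transport such as Socket.IO or EasyRTC as the quotation notes, and the session class below is an assumption for illustration:

class MultiUserSession:
    """In-memory stand-in for a server relaying state between devices in one storefront."""

    def __init__(self):
        self.positions = {}   # user_id -> (x, y, z) within the virtual space
        self.outboxes = {}    # user_id -> messages queued for delivery to that device

    def join(self, user_id):
        self.positions[user_id] = (0.0, 0.0, 0.0)
        self.outboxes[user_id] = []

    def update_position(self, user_id, position):
        """Record the sender's position and fan it out to every other participant."""
        self.positions[user_id] = position
        self._broadcast(sender=user_id, message={"type": "position", "user": user_id, "position": position})

    def send_audio(self, user_id, audio_chunk):
        """Forward an audio chunk from one device to all other devices in the session."""
        self._broadcast(sender=user_id, message={"type": "audio", "user": user_id, "audio": audio_chunk})

    def _broadcast(self, sender, message):
        for uid, outbox in self.outboxes.items():
            if uid != sender:
                outbox.append(message)

session = MultiUserSession()
for uid in ("customer_1", "customer_2", "clerk"):
    session.join(uid)
session.update_position("clerk", (2.0, 0.0, 3.5))
session.send_audio("customer_1", b"...pcm frames...")
print(len(session.outboxes["customer_2"]))  # 2: the position update and the audio chunk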
For instance, the second customer 102 associated with the customer avatar 902 might virtually try on a digital representation of a shirt and may say to the first customer 102 (i.e., the viewing user) something like “What do you think of this shirt on me?” Again, this mimics real-life experiences to drive customer engagement with VR storefronts of merchants.”); and sending the audio signal as output at a plurality of user devices corresponding with a plurality of secondary users (¶ [0054]: “In some examples, users 102, 104 accessing a VR storefront can utilize user input devices to provide user input indicative of an interaction with an avatar that is displayed within the VR storefront. For example, the first customer 102 in the example of FIG. 9 (i.e., the viewing user) may speak into a microphone of his/her electronic device 106 in order to talk to another user(s) associated with the avatars 902 and/or 904. When this occurs, the server(s) 114 receives interaction data (e.g., audio data) from the electronic device 106 of the first customer 102, indicating an interaction of the first customer 102 with the avatar 902 and/or 904, and the server(s) 114 may send (e.g., stream) this interaction data to the electronic device(s) 106, 108 of the user(s) 102, 104 associated with the avatar(s) 902, 904. In other words, interaction data (e.g., audio data) may be synchronized across the user electronic devices 106, 108 to allow the users 102, 104 to interact with one another within the VR storefront.” ¶ [0054]: “As another example, the two customers 102, who might be friends located in different geographical locations, can interact with one another, such as by speaking. For instance, the second customer 102 associated with the customer avatar 902 might virtually try on a digital representation of a shirt and may say to the first customer 102 (i.e., the viewing user) something like “What do you think of this shirt on me?” Again, this mimics real-life experiences to drive customer engagement with VR storefronts of merchants.”). Regarding claim 20 (depends on claim 19), whereas SKEEN may not be completely explicit as to, ROSS clearly teaches: the avatar customization engine (¶ [0010]: “processing means are adapted to generate a visual representation of a virtual user model on the screen based on the parameters for the virtual model of a user stored in the storing means.”) defines a first set of customization parameters selected to be mutable (¶ [0021]: “A user might be interested on the other hand in personalizing a user avatar according to his or her own preferences. In one embodiment of the invention, the storing means are therefore adapted to store parameters for a virtual model of a user that are changeable by a user via the user input means. With such changeable parameters, the user is enabled to assign personal properties to his or her own avatar, like own pictures, sounds and preferences.” ) and a second set of customization parameters selected to be immutable (¶ [0020]: “storing means are adapted to store at least fixed parameters for a virtual model of a user, which cannot be changed by a user. These fixed parameters may describe in particular the major properties of a user avatar. The set of fixed parameters is also referred to as avatar template. An avatar template may be defined for instance by a manufacturer, by a network operator, if the electronic device is a mobile terminal, or by another third party. 
An avatar template may contain necessary parametric information about the basic avatar character and/or behavior tendency and about the technical avatar environment, like available resources, including input and/or output means, sensor information, terminal and network specific data etc.” ¶ [0022]: “Advantageously, an avatar profile is defined, which is always on top of an avatar template. The avatar profile is a collection of changeable and non-changeable parameters, which evolve over time and which define the avatar behavior and capabilities in certain situations. While the template parameters are fixed, the user or other entities, like network operators, can customize the accessible parametric avatar properties of the avatar profile, such as avatar class, skin properties like shape, color, clothing etc.”). Thus, in order to obtain a more versatile method to provide a virtual interactive environment having the cumulative features and/or functionalities taught by SKEEN and ROSS, it would have been obvious to one of ordinary skill in the art to have modified the method/system for providing an interactive virtual environment taught by SKEEN so as to include an avatar customization engine that defines a first set of parameters selected to be mutable and a second set of customization parameters selected to be immutable, as taught by ROSS. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over SKEEN (US 2024/0161178) in view of HELINGER et al. (US 2024/0112431), further in view of ROSS et al. (US 2008/0172635), and still further in view of YAN et al. (US 2022/0058883, hereinafter “YAN”). Regarding claim 3 (depends on claim 2), whereas SKEEN may not be explicit as to, YAN teaches: wherein receiving the selection of the one or more 3D mapping coordinates (¶ [0016]: “This 3D model may comprise a 3D coordinate system characterized by one or more reference axes (for example, an X-axis, y-axis, and/or z-axis).” ¶ [0016]: “a 3D coordinate system may also be developed for the 3D model, such that objects within the 3D model may be associated with particular coordinates in the 3D coordinate system.” ) includes receiving a drag-and-drop input placing the one or more product models at the one or more 3D mapping coordinates (¶ [0030]: “Whether the virtual representation of the object is initially placed within a determined location in the augmented reality display of the environment, or in a default location (for instance, awaiting user input), may depend on a setting selected by the user. For example, the user may select a setting indicating that they desire for the virtual representation of the object to be automatically displayed in an initial location determined by the augmented reality system as the most likely place for the virtual representation of the object or the user may select a setting indicating that they desire the object to be placed in a default location (for example, in the center of the displayed portion of the real environment on the display of the mobile device) so that they may manually drag and drop the virtual representation of the object in a location of their choosing. In some cases, even if the user opts for the default placement and manual drag and drop setting, the system may still assist the user in the placement of the object by automatically moving the object to a particular location after the user completes a drag and drop movement of the virtual representation of the object. 
For example, the user may manually drag and drop the virtual representation of the object from the default placement location within the augmented reality display to a second location that is nearby a wall. The system may then automatically reposition the object in a correct orientation against the wall (for example, if the object is a couch, the couch may automatically be repositioned to be against the wall with the back of the couch being against the wall). Additionally, even if the user selects a setting indicating that they want the augmented reality system to automatically determine the initial location for presentation of the virtual representation of the object, the user may still drag and drop the virtual representation of the object at another location after the virtual representation of the object is initially displayed at the initial location (for example, as depicted in FIG. 2).” ¶ [0040]: “The instructions may also allow the for a 3D model of the real environment 106 to be constructed. That is, the two dimensional images being captured by the camera of the user's mobile device 104 may be converted into a 3D model that may be used for subsequent processing and decision making (for example, where to initially place the virtual representation of the object 110, among other processing decisions that may be associated with the virtual presentation of the object on the user's mobile device display). This 3D model may include defining boundaries of the real environment 106, such as walls, floors, ceilings, etc., and may include any real objects already present in the real environment, such as chairs, couches, plants, doors, and televisions, to name a few examples.” ¶ [0040]: “Based on this information, a 3D coordinate system may also be developed for the 3D model, such that objects within the 3D model may be associated with particular coordinates in the 3D coordinate system.” ¶ [0043]: “In some embodiments, the virtual representation of the object 110 may be associated with a data structure that may include information about the virtual representation of the object 110 within the augmented reality display. For example, the data structure may include information, such as a current orientation of the virtual representation of the object 110 and a location of the virtual representation of the object 110 within the 3D model of the environment 106.” ¶ [0043]: “The location of the virtual representation of the object 110 may be represented in the form of one or more coordinates within the 3D model. The coordinates may be representative of a single point on the object (for example, the center of the object) or multiple points on the object (for example, the center and outer boundaries of the object).” ¶ [0018]: “In some embodiments, during and/or subsequent to the pre-processing of the physical environment, one or more 2D or 3D overlays may be presented to the user through the real-time view of the physical environment. That is, the user may be able to view the real-time view through the display on their mobile device, and an overlay may be displayed within the real-time view of the physical environment as well (that is, the overlay may not actually be present in the real environment, but may rather be generated based on the determined 3D model of the real environment and presented in the real-time view of the real environment in certain locations. For example, one type of overlay may include an indication that a location within the real-time view corresponds to a certain type of surface. 
That is, surfaces within the real-time view that are determined, based on the 3D model, to be floor surfaces may be presented as being highlighted within the real-time view. The overlay may be presented in any other form as well, such as a series of repeated elements (for example, a series of dots rather than a complete highlighting of the surface). The overlay may also identify other elements in the real-time view, such as some or all of the real objects found within the real-time view, specific types of objects found within the real-time view, boundaries between floor surfaces and wall surfaces.”). Thus, in order to obtain a more user friendly and versatile system, it would have been obvious to one of ordinary skill in the art to have modified the system taught by combination of SKEEN and ROSS so as to also incorporate the functionality of receiving a drag-and-drop input placing the one or more product models at the one or more 3D mapping coordinates for selection of the one or more 3D mapping coordinates, as taught by YAN. Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over SKEEN (US 2024/0161178) in view of HELINGER et al. (US 2024/0112431), further in view of ROSS et al. (US 2008/0172635), and still further in view of SINGH et al. (US 2022/0058883, hereinafter “SINGH”). Regarding claim 4 (depends on claim 2), SKEEN discloses: the environment creator UI includes a graphical interactive feature for toggling between a high resolution output and a low resolution output (¶ [0056]: “In some examples, the VR storefront service 110 is configured to analyze interactions of users 102, 104 with digital representations of items within the VR storefront and perform actions based on the analysis of those interactions. For example, if an avatar 902 picks up an item in the VR storefront and is examining the item from different angles (e.g., by turning the digital representation of the item in different orientations), the VR storefront service 110 may cause display, on the user electronic device 106, 108, of higher-resolution images of the items and/or a pop-up option that is selectable for the user 102, 104 to access additional imagery (e.g., images, videos, etc.) associated with the item that the user 102, 104 is currently examining from different angles.”); and Whereas SKEEN and ROSS may not be explicit as to, SINGH teaches: in response to receiving a toggle input at the graphical interactive feature (¶ [0042]: “the 360-degree images 310 may be created based on” … “resolution settings” ): converting the 3D model into a plurality of cube maps corresponding to a plurality of viewpoints (¶ [0036]: “the virtual environment generation sever 230 may generate 360-degree files based on the 3D model as part of generating the 3D environment. Additionally, or alternatively, the virtual environment generation sever 230 may use a cube mapping technique with the 3D model as an input to generate the 3D environment.” ¶ [0042]: “In some implementations, the 3D model 301 may be converted (e.g., by the virtual environment generation sever 230) into a series of equirectangular 360-degree images 310 and/or cube maps 311. An example process for generating the cube maps 311 is described in greater detail below with respect to FIG. 4. As described herein, the 360-degree images 310 may each include a “scene” or portion of the entire virtual 3D environment 322. 
As an illustrative example, the 360-degree images 310 may be created based on the author's selections of a number of scenes through which the environment may be viewed, the locations of the scene mid-points (i.e. the viewpoints) with consideration to the product placement and locations in which the products may be clearly viewed (e.g., in x, y, z coordinates), information identifying a front-facing direction, and resolution settings (e.g., different resolutions of images to be rendered and served based on network conditions). The virtual environment generation sever 230 may process the 3D model 301 based on the above described inputs provided by the author to produce equirectangular 360-degree images 310 (e.g., Computer-Generated Imagery (CGI) from each scene). More specifically, the 360-degree images 310 illustrate the user's perception as if the user were standing at the viewpoint coordinates, flattened out into 2-dimensions, with all the textures and lighting applied. In some embodiments, 360-degree images 310 may be compressed for improved load and/or transmission time. Further, multiple sets of 360-degree images 310 at different resolutions may be generated and stored in the virtual environment generation sever 230.” ¶ [0052]: “Process 400 also may include creating multiple cube maps at different levels of resolution using the 3D model (block 420). For example, the virtual environment generation sever 230 may create multiple cube maps at different levels of resolution using the 3D model (e.g., received at block 410). As an illustrative example, the virtual environment generation sever 230 may create a cube map in the form of six faces of a cube in which the faces respectively represent a front view, top view, a bottom view, a right view, a left view, and a back view. The six faces together form a 360-degree view of a scene of the 3D model. An example of the six cube faces representing a cube map is shown in FIG. 5. For example, referring to FIG. 5, the virtual environment generation sever 230 may create cube faces 502, 504, 506, 508, 510, and 512 which together form a cube of a scene of the 3D model.” ¶ [0055]: “Process 400 also may include storing sets of the multiple square images (block 440). For example, the virtual environment generation sever 230 may store sets of the multiple images such that the images may be provided to the user (e.g., via the user device 210) in order to present the virtual 3D environment at appropriate resolutions. The stored sets of multiple square images may correspond to the cube maps 311 shown in FIG. 3.” ¶ [0036]: “the virtual environment generation sever 230 may generate 360-degree files based on the 3D model as part of generating the 3D environment. Additionally, or alternatively, the virtual environment generation sever 230 may use a cube mapping technique with the 3D model as an input to generate the 3D environment.”); and providing the computer-generated three-dimensional (3D) space by rendering the plurality of cube maps instead of rendering at least a portion of the 3D model (¶ [0055]: “Process 400 also may include storing sets of the multiple square images (block 440). For example, the virtual environment generation sever 230 may store sets of the multiple images such that the images may be provided to the user (e.g., via the user device 210) in order to present the virtual 3D environment at appropriate resolutions. The stored sets of multiple square images may correspond to the cube maps 311 shown in FIG. 3.” ¶ [0057]: “As shown in FIG. 
6, process 600 may include rendering and presenting a virtual 3D environment (block 610). For example, the virtual environment generation sever 230 may render and present the virtual 3D environment (e.g., constructed using the example process shown in FIG. 3).” ¶ [0057]: “In some embodiments, the virtual environment generation sever 230 may present different portions of the virtual 3D environment at different resolutions (e.g., higher resolutions for higher priority portions of the virtual 3D environment).” ¶ [0066]: “the 360-degree scene file may be mapped onto the screen of the user device 210, and the rectangular area currently in view may be rendered”). Thus, in order to obtain a more versatile virtual shopping system that provides 360 degree images of 3D models in different resolutions, it would have been obvious to one of ordinary skill in the art to have modified the virtual shopping system taught by the combination of SKEEN and ROSS so as to, in response to receiving a toggle input at the graphical interactive feature, convert the 3D model into a plurality of cube maps corresponding to a plurality of viewpoints and providing the computer-generated three-dimensional (3D) space by rendering the plurality of cube maps instead of rendering at least a portion of the 3D model, as taught by SINGH. Regarding claim 17 (depends on claim 11), whereas SKEEN and ROSS may not be explicit as to, SINGH teaches: the environment generator engine (e.g., ¶ [0042]: “virtual environment generation sever 230”) generates the computer-generated 3D space by rendering one or more cube maps (¶ [0055]: “Process 400 also may include storing sets of the multiple square images (block 440). For example, the virtual environment generation sever 230 may store sets of the multiple images such that the images may be provided to the user (e.g., via the user device 210) in order to present the virtual 3D environment at appropriate resolutions. The stored sets of multiple square images may correspond to the cube maps 311 shown in FIG. 3.” ¶ [0057]: “As shown in FIG. 6, process 600 may include rendering and presenting a virtual 3D environment (block 610). For example, the virtual environment generation sever 230 may render and present the virtual 3D environment (e.g., constructed using the example process shown in FIG. 3).” ¶ [0066]: “the 360-degree scene file may be mapped onto the screen of the user device 210, and the rectangular area currently in view may be rendered”), based on the 3D model (¶ [0019]: “the 3D model 100 may be used to generate a virtual environment using a cube mapping technique (e.g., instead of, or an addition to, creating the virtual environment using the 360-degree scenery files 101).” ¶ [0036]: “the virtual environment generation sever 230 may generate 360-degree files based on the 3D model as part of generating the 3D environment. Additionally, or alternatively, the virtual environment generation sever 230 may use a cube mapping technique with the 3D model as an input to generate the 3D environment.” ¶ [0050]: “Process 400 illustrates an example process for generating cube maps (e.g., the cube maps 311 shown in FIG. 3) based on a 3D computer-generated model.”), in addition to rendering the 3D model (e.g., ¶ [0019]: “the 3D model 100 may be a computer-generated rendering,” ¶ [0036]: “generate 360-degree files based on the 3D model as part of generating the 3D environment.” ¶ [0053]: “Returning to process block 420 of FIG. 
4, the virtual environment generation sever 230 may form multiple cube maps to capture the entire 3D model.” NOTE: Each face of each cube map is a rendering of a viewpoint (or scene) in the 3D model, and, in order to capture the entire 3D model by forming cube maps of the 3D model, the entire 3D model must be rendered. Or, in other words, in order to generate the scene view images for each face of a cube map, the 3D model must be rendered for each viewpoint (or face) for the cube map.) (¶ [0036]: “the virtual environment generation sever 230 may generate 360-degree files based on the 3D model as part of generating the 3D environment. Additionally, or alternatively, the virtual environment generation sever 230 may use a cube mapping technique with the 3D model as an input to generate the 3D environment.” ¶ [0042]: “In some implementations, the 3D model 301 may be converted (e.g., by the virtual environment generation sever 230) into a series of equirectangular 360-degree images 310 and/or cube maps 311. An example process for generating the cube maps 311 is described in greater detail below with respect to FIG. 4. As described herein, the 360-degree images 310 may each include a “scene” or portion of the entire virtual 3D environment 322. As an illustrative example, the 360-degree images 310 may be created based on the author's selections of a number of scenes through which the environment may be viewed, the locations of the scene mid-points (i.e. the viewpoints) with consideration to the product placement and locations in which the products may be clearly viewed (e.g., in x, y, z coordinates), information identifying a front-facing direction, and resolution settings (e.g., different resolutions of images to be rendered and served based on network conditions). The virtual environment generation sever 230 may process the 3D model 301 based on the above described inputs provided by the author to produce equirectangular 360-degree images 310 (e.g., Computer-Generated Imagery (CGI) from each scene). More specifically, the 360-degree images 310 illustrate the user's perception as if the user were standing at the viewpoint coordinates, flattened out into 2-dimensions, with all the textures and lighting applied. In some embodiments, 360-degree images 310 may be compressed for improved load and/or transmission time. Further, multiple sets of 360-degree images 310 at different resolutions may be generated and stored in the virtual environment generation sever 230.” ¶ [0052]: “Process 400 also may include creating multiple cube maps at different levels of resolution using the 3D model (block 420). For example, the virtual environment generation sever 230 may create multiple cube maps at different levels of resolution using the 3D model (e.g., received at block 410). As an illustrative example, the virtual environment generation sever 230 may create a cube map in the form of six faces of a cube in which the faces respectively represent a front view, top view, a bottom view, a right view, a left view, and a back view. The six faces together form a 360-degree view of a scene of the 3D model. An example of the six cube faces representing a cube map is shown in FIG. 5. For example, referring to FIG. 5, the virtual environment generation sever 230 may create cube faces 502, 504, 506, 508, 510, and 512 which together form a cube of a scene of the 3D model.” ¶ [0055]: “Process 400 also may include storing sets of the multiple square images (block 440). 
For example, the virtual environment generation sever 230 may store sets of the multiple images such that the images may be provided to the user (e.g., via the user device 210) in order to present the virtual 3D environment at appropriate resolutions. The stored sets of multiple square images may correspond to the cube maps 311 shown in FIG. 3.” ¶ [0036]: “the virtual environment generation sever 230 may generate 360-degree files based on the 3D model as part of generating the 3D environment. Additionally, or alternatively, the virtual environment generation sever 230 may use a cube mapping technique with the 3D model as an input to generate the 3D environment.”). Thus, in order to obtain a more versatile virtual shopping system having the cumulative features and/or functionalities taught by SKEEN, ROSS and SINGH (such as on-demand pre-rendered multi-resolution versions of the virtual environment, as taught by SINGH), it would have been obvious to one of ordinary skill in the art to have modified the virtual shopping system taught by the combination of SKEEN and ROSS so as to generate the computer-generated 3D space by rendering one or more cube maps, based on the 3D model, in addition to rendering the 3D model, as taught by SINGH. Claims 6 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over SKEEN (US 2024/0161178) in view of HELINGER et al. (US 2024/0112431), further in view of ROSS et al. (US 2008/0172635), and further in view of Shaw-Weeks (US 2003/0076318). Regarding claim 6: SKEEN as modified does not teach wherein the second set of customization parameters are selected to be immutable based on style parameters associated with a type of merchant entity. Shaw-Weeks teaches the second set of customization parameters are selected to be immutable based on style parameters associated with a type of merchant entity (paragraph 0033, displays a virtual model with the provision of automatic pattern generation and garment manufacture also guarantees that the fabric the client sees on the virtual model is the same as that used by a manufacturer in the creation of the garment). Therefore, it would have been obvious to a person of ordinary skill in the art to have further modified SKEEN to include: wherein the second set of customization parameters are selected to be immutable based on style parameters associated with a type of merchant entity. Doing so would have allowed a user to see exactly what a garment would look like before purchase, ensuring user satisfaction. Regarding claim 7: SKEEN as modified does not teach wherein the second set of customization parameters are selected to be immutable include one or more of a body shape or a degree of realism. Shaw-Weeks teaches wherein the second set of customization parameters are selected to be immutable include one or more of a body shape (paragraph 0033, custom made patterns for made to measure garments, all taking into account the specific body shape and size of the client). Therefore, it would have been obvious to a person of ordinary skill in the art to have further modified SKEEN to include: wherein the second set of customization parameters are selected to be immutable include one or more of a body shape. Doing so would have allowed a user to see exactly what a garment would look like, and to confirm that it will fit the user's particular body shape and size, before purchase, ensuring user satisfaction. Claim 10 is rejected under 35 U.S.C.
103 as being unpatentable over SKEEN (US 2024/0161178) in view of HELINGER et al. (US 2024/0112431), further in view of ROSS et al. (US 2008/0172635), and still further in view of SIDDIQUE et al. (US 2016/0210602, hereinafter “SIDDIQUE”). Regarding claim 10 (depends on claim 9), whereas SKEEN and ROSS may not be entirely explicit as to, SIDDIQUE teaches: presenting an option to at least one user of the plurality of the users (e.g., ¶ [0179]: “ ‘One Switch View’ (OSV) button that allows users to switch between user views just by pressing one button/switch, which may be a hardware button or a software icon/button”) to switch between viewing a primary user navigating the computer-generated 3D space (e.g., ¶ [0179]: “As the selected user (user 2) browses through products/stores, the same content is displayed on user 1's display screen thereby synchronizing the content on the display screens of users 1 and 2.”) or navigating the computer-generated 3D space independent of the primary user (e.g., ¶ [0179]: “User 1 may switch back to her view whenever she wants and continue browsing on her own.”) (¶ [0179]: “Reference is now made to FIGS. 7A-D which illustrate protocols for collaborative interaction in exemplary embodiments. These protocols can be used for a number of applications. These protocols are described next for the modes of operation of a Shopping Trip™. Other applications based on these protocols are described later in this document. A user may initiate a shopping trip at any time. There are four modes of operation of a shopping trip: regular, asynchronous, synchronous and common. In the regular mode, a user can shop for products in the standard way—browse catalogues, select items for review and purchase desired items. Whereas the regular mode of shopping involves a single user, the asynchronous, synchronous and common modes are different options for collaborative shopping available to users. In the asynchronous mode, the user can collaborate with other shoppers in an asynchronous fashion. The asynchronous mode does not require that other shoppers the user wishes to collaboratively shop with, be online. The user can share images, videos, reviews and other links (of products and stores for instance) they wish to show other users (by dragging and dropping content into a share folder in an exemplary embodiment). They can send them offline messages, and itemized lists of products sorted according to ratings, price or some other criteria. Any share or communication or other electronic collaborative operation can be performed without requiring other collaborators to be online, in the asynchronous mode at the time of browsing. The synchronous and common modes require all collaborating members to be online and permit synchronized share, communication and other electronic collaborative operations. In these modes, the users can chat and exchange messages synchronously in real-time. In the synchronous mode, ‘synchronized content sharing’ occurs. Reference is made to FIG. 20 to describe this operation in an exemplary embodiment. Users involved in synchronized collaboration can browse products and stores on their own. ‘Synchronized content sharing’ permits the user to display the products/store view and other content being explored by other users who are part of the shopping trip by selecting the specific user whose browsing content is desired, from a list 244 as shown in FIG. 20.
For example, consider a shopping trip session involving two users—user 1 and user 2, browsing from their respective computing devices and browsers. Suppose user 1 and user 2 are browsing products by selecting “My view” from 244. Suppose user 1 now selects user 2 from the view list 244. As the selected user (user 2) browses through products/stores, the same content is displayed on user 1's display screen thereby synchronizing the content on the display screens of users 1 and 2. User 1 may switch back to her view whenever she wants and continue browsing on her own. Similarly, user 2 can view the content of user 1 by selecting user 1 from the switch view list. In the common mode, users involved in the collaborative shopping trip are simultaneously engaged in browsing products or stores on their display screens. This mode can assume two forms. In the first form, a user is appointed as the ‘head’ from among the members of the same shopping trip. This head navigates/browses products and stores on their display screen and the same view is broadcast and displayed on the screens of all users of the same shopping trip. In the second form, all users can navigate/browse through product, store or other catalogues and virtual environments and the information/content is delivered in the sequence that it is requested (to resolve user conflicts) and the same content is displayed on all user screens simultaneously using the protocol that is described below. In the common mode, all the users are engaged in a shopping trip in a common environment. This environment may be browsed independently by different members of the shopping trip leading to different views of the same environment. The system in FIG. 20 involving synchronous collaboration between users may be integrated with a ‘One Switch View’ (OSV) button that allows users to switch between user views just by pressing one button/switch, which may be a hardware button or a software icon/button. The user whose view is displayed on pressing the switch is the one on the list following the user whose view is currently being displayed, in exemplary embodiment. This OSV button may be integrated with any of the collaborative environments discussed in this document.). Thus, in order to obtain a more versatile virtual shopping system having the cumulative features and/or functionalities taught by SKEEN, ROSS and SIDDIQUE, it would have been obvious to one of ordinary skill in the art to have modified the virtual shopping system taught by the combination of SKEEN and ROSS so as to also include the functionality of presenting an option to at least one user of the plurality of the users to switch between viewing a primary user navigating the computer-generated 3D space or navigating the computer-generated 3D space independent of the primary user, as clearly taught by SIDDIQUE. Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over SKEEN (US 2024/0161178) in view of HELINGER et al. (US 2024/0112431), further in view of ROSS et al. (US 2008/0172635), and still further in view of ARMSTRONG (US 2022/0398652). Regarding claim 12 (depends on claim 11), whereas SKEEN and ROSS are not explicit as to, ARMSTRONG teaches: providing a gamification engine (¶ [0033]: “a rewards program”) for layering a scavenger hunt type game (¶ [0033]: “a scavenger hunt”) or a tile matching type game into the computer-generated 3D space (¶ [0033]: “in the virtual marketplace platform.”) (¶ [0033]: “In some embodiments, there may be a rewards program for the customer virtual avatar.
For example, an account associated with the customer virtual avatar may receive a certain amount of money, currency, points, stars, etc. based on the amount of time the customer virtual avatar has been active, logged in, a member of, etc. the virtual marketplace platform. In some embodiments, the rewards program may award money, currency, points, starts, etc. based on whether the customer virtual avatar has referred the virtual marketplace platform to anyone. In some embodiments, the rewards program may award money, currency, points, starts, etc. based on whether the customer virtual avatar has completed a scavenger hunt where they complete tasks, run errands, etc. in the virtual marketplace platform. In some embodiments, the rewards program may award money, currency, points, starts, etc. based on whether the customer virtual avatar has shared the virtual marketplace platform with other users. Sharing may refer to transmitting a link (URL) to the virtual marketplace platform to other users email accounts.”). Thus, in order to obtain a more versatile virtual shopping system having the cumulative features and/or functionalities taught by SKEEN, ROSS and ARMSTRONG, it would have been obvious to one of ordinary skill in the art to have modified the virtual shopping system taught by the combination of SKEEN and ROSS so as to also include providing a gamification engine for layering a scavenger hunt type game into the computer-generated 3D space, as taught by ARMSTRONG. Allowable Subject Matter Claims 5 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Conclusion At present, it is not apparent to the examiner which part of the application could serve as a basis for new and allowable claims. However, should the applicant nevertheless regard some particular matter as patentable, the examiner encourages applicant to appropriately amend the claims to include such matter and to indicate in the REMARKS the difference(s) between the prior art and the claimed invention as well as the significance thereof. Furthermore, should applicant decide to amend the claims, examiner respectfully requests that the applicant please indicate in the REMARKS from which page(s), line(s) or claim(s) of the originally filed application that any amendments are derived. See MPEP § 2163(II)(A) (There is a strong presumption that an adequate written description of the claimed invention is present in the specification as filed, Wertheim, 541 F.2d at 262, 191 USPQ at 96; however, with respect to newly added or amended claims, applicant should show support in the original disclosure for the new or amended claims.). A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 USC § 133). Relevant Prior Art The following prior art, although not relied upon, is made of record since it is considered pertinent to applicant's disclosure: GOETZINGER et al. 
(US 10,181,218) discloses a system and method for modeling visual and non-visual experiential characteristics of a work space environment, the system comprising at least a first emissive surface useable to view a virtual world (VW) representation, a processor that is programmed to perform the steps of (a) presenting a VW representation via the at least a first emissive surface, the VW representation including an affordance configuration shown in the VW representation, (b) model at least one non-visual experiential characteristic associated with an environment associated with the VW representation and (c) present at least some indication of the non-visual experiential characteristic to the system user. AKAZAWA et al. (US 2002/0113809) discloses generating a variable virtual world in accordance with user's potential needs. A processor causes an image of a first virtual world to be displayed on a display. The first virtual world includes predefined objects and an avatar selected by the user. The avatar is controlled to act in the first virtual world by the user. The processor analyzes the action of the avatar from the position of the avatar relative to the position of the object in the first virtual world to derive a feature of the user and determine a second virtual world including other objects in accordance with the derived feature. The processor causes an image of the second virtual world to be displayed on the display. SIDDIQUE et al. (US 2010/0030578) discloses systems and online methods of collaboration in community environments. The methods and systems are related to an online apparel modeling system that allows users to have three-dimensional models of their physical profile created. Users may purchase various goods and/or services and collaborate with other users in the online environment. ABRAHAM et al. (US 2013/0317950) discloses a system, an apparatus, a computer program product, and a method for customizing a three dimensional virtual store based on user shopping behavior. A planogram associated with a physical store can be identified. The physical store can be a land-based storefront. The physical store can be associated with an inventory. A virtual store comprising of a layout can be created. The virtual store can be a three dimensional environment permitting electronic commerce transactions. The layout of the virtual store and the planogram of the physical store can be identical. The layout can be a position or an orientation of an inventory item associated with the physical store inventory. The virtual store can be customized based on a personalization data. The customization can be an inventory item position and an orientation. The layout of the customized virtual store can be different from the planogram of the physical store. MABREY et al. (US 2014/0095349) discloses providing a social, interactive panoramic shopping experience from pictures. The experience is based upon navigable panoramic spaces from photographs that include the capability for users to interact socially and individually with particular elements within that space. STACEY et al. (US 2015/0332387) discloses a virtual shopping mall contains virtual shop fronts that operate as gateways into virtual rooms containing a graphic user interface (GUI). The GUI supports an online homepage of a vendor previously associated with the virtual shop front. An avatar window in the GUI contains the virtual room and presents discount codes available from the specific vendor. 
A second window in the GUI presents a vendor's real-time online homepage. A personalized avatar is managed by a marketing affiliate that promotes vendor discount codes. DENHAM (US 2016/0292966) discloses a system and method of providing a virtual shopping experience including virtual environment module over a computerized network; wherein a plurality of users are able to navigate an virtual environment each using an avatar by operation of a graphical user interface. The system includes a virtual object module that manages a plurality of virtual objects displayed in the virtual environment; wherein the plurality of virtual objects includes a plurality of user avatars and product avatars associated with a shopping cart module. The system includes a virtual location module that manages the location of the plurality of virtual objects displayed in the virtual environment. The system includes an audio control module that manages associated audio media with the plurality of virtual objects and the virtual locations; wherein the audio media module plays audio media associated with the plurality of virtual objects. BEACH et al. (US 2017/0038916) discloses a system and method for dynamically generating virtual marketplace platforms. The system receives a set of facility data, a set of selections for a set of user interaction objects, and a set of object placement selections. The system receives user data for a user interacting with the virtual facility and one or more objects of a first and second subset of user interaction objects, dynamically arranges the second subset of user interaction objects into a second arrangement, and causes presentation of the virtual facility and the set of user interaction objects to the user. JEPHCOTT (US 2018/0315117) discloses a user friendly interactive electronic on-line retail communications system and process for use in on-line retail shopping. The attractive on-line retail communications process provides for enjoyable on-line retail shopping by generating an interactive avatar to assist the customer in selecting virtual products. Advantageously, the avatar is an advice avatar or a female or male interactive shopping avatar that provides a virtual salesperson. The avatar can provide visual and/or audible information about the virtual products. Desirably, a customer-like avatar with an appearance similar to the customer can be also be provided for wearing or using the virtual products. SINGH et al. (US 2020/0302693) discloses a computer-implemented method including: receiving, by a computing device, user-specific parameters for generating a virtual environment customized for a user, wherein the virtual environment includes scenery and one or more products associated with the user; generating, by the computing device, the virtual environment with the one or more products displayed within the scenery; presenting, by the computing device, the virtual environment; receiving, by the computing device, user inputs for navigating through the virtual environment; updating, by the computing device, the presentation of the virtual environment based on the user inputs for navigating through the virtual environment; receiving, by the computing device, user inputs for selecting the one or more products within the virtual environment; presenting, by the computing device, information regarding the one or more products; receiving, by the computing device, user inputs for purchasing the one or more products; and processing a purchase for the one or more products. 
CULLATHER (US 2021/0366032) discloses combined online game and e-commerce systems and methods. The systems and methods provide an online gaming and shopping experience for the gamer in which the gamer may play various aspects of the game wherein at least one of the aspects of the game is a store or group of stores that offer the option to purchase items and/or services for receipt in the physical world. BERGER et al. (US 2022/0374968) discloses a system comprising a computer-readable storage medium storing programs and methods for performing operations comprising: receiving a request from a client device of a first user to engage in a shared virtual reality shopping experience with a second user; generating, for display on respective client devices of the first and second users, the shared virtual reality shopping experience comprising a plurality of virtual reality items that represent real-world products; receiving, from the client device of the second user, data indicating a selection of a first virtual reality item of the plurality of virtual reality items made by the second user; and modifying a display attribute of the first virtual item in the display of the shared virtual reality shopping experience on the client device of the first user to indicate the selection of the first virtual reality item made by the second user. CHESHIRE et al. (US 2023/0385916) discloses three-dimensional (3D) image modeling systems and methods for automatically generating virtual 3D store environments. The 3D image modeling systems and methods comprise loading, from a computer memory, a product set of 3D imaging assets including product texture images and standard product model(s). A virtual 3D area is generated by a 3D engine inputting a matrix file that depicts one or more 3D products. The 3D engine positions the one or more virtual 3D products based on positioning data in the matrix file. A virtual 3D store environment is generated based on the virtual 3D area and a 3D structural model. The virtual 3D store environment is configured for rendering as a photorealistic environment in virtual 3D space. INSLEY et al. (Insley V, Nunan D. “Gamification and the online retail experience.” International Journal of Retail & Distribution Management. 2014 May 6;42(5):340-51.) discloses gamification of an online retail experience. Karac et al. (Karać J, Stabauer M. “Gamification in E-commerce: A survey based on the Octalysis framework.” In International Conference on HCI in Business, Government, and Organizations 2017 May 13 (pp. 41-54). Cham: Springer International Publishing.) discloses gamification of online shopping. De Canio et al. (De Canio F, Fuentes-Blasco M, Martinelli E. “Engaging shoppers through mobile apps: the role of gamification.” International Journal of Retail & Distribution Management. 2021 Jul 8;49(7):919-40.) discloses gamification of online retail shopping. Lopes et al. (Lopes JM, Gomes S, Lopes P, Silva A, Lourenço D, Esteves D, Cardoso M, Redondo V. “Exploring the role of gamification in the online shopping experience in retail stores: An exploratory study.” Social Sciences. 2023 Apr 15;12(4):235.) discloses gamification of an online shopping experience in retail stores. Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT PEREN who can be reached by telephone at (571) 270-7781, or via email at vincent.peren@uspto.gov. The examiner can normally be reached on Monday-Friday from 10:00 A.M. to 6:00 P.M. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, KING POON, can be reached at telephone number (571). The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/VINCENT PEREN/
Examiner, Art Unit 2617
/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617
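
The claim 19 rejection above maps SKEEN ¶ [0054] onto an audio channel in which one user's microphone input is relayed by the server to every other participant's device. For orientation only, the following is a minimal TypeScript sketch of one way such a relay could be organized; the class, the delivery callback, and every identifier are hypothetical, and transport details (codecs, WebRTC/WebSocket plumbing) are omitted.

```typescript
// Sketch of a session-scoped audio channel: audio captured on the primary
// user's device is fanned out by the server to every other participant,
// in the manner SKEEN ¶ [0054] describes. All names are hypothetical.

interface AudioFrame {
  sessionId: string;
  senderId: string;
  pcm: Float32Array; // one short chunk of captured microphone audio
}

type DeliverFn = (userId: string, frame: AudioFrame) => void;

class MultiUserAudioChannel {
  private participants = new Map<string, Set<string>>(); // sessionId -> userIds

  constructor(private deliver: DeliverFn) {}

  join(sessionId: string, userId: string): void {
    if (!this.participants.has(sessionId)) {
      this.participants.set(sessionId, new Set());
    }
    this.participants.get(sessionId)!.add(userId);
  }

  // Server-side handler: forward the sender's audio to every other participant.
  onAudioFrame(frame: AudioFrame): void {
    const users = this.participants.get(frame.sessionId) ?? new Set<string>();
    for (const userId of users) {
      if (userId !== frame.senderId) {
        this.deliver(userId, frame);
      }
    }
  }
}

// Usage: the primary user's frame reaches every secondary user's device.
const channel = new MultiUserAudioChannel((userId, frame) =>
  console.log(`stream ${frame.pcm.length} samples from ${frame.senderId} to ${userId}`),
);
channel.join("storefront-1", "primary");
channel.join("storefront-1", "secondary-a");
channel.join("storefront-1", "secondary-b");
channel.onAudioFrame({ sessionId: "storefront-1", senderId: "primary", pcm: new Float32Array(960) });
```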
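
For the claim 20 rejection, ROSS ¶¶ [0020]-[0022] describe an avatar built from a fixed template layered under a user-editable profile. A minimal sketch of that split between immutable and mutable customization parameters follows; all type and field names are invented for illustration.

```typescript
// Sketch: an immutable avatar template (fixed by a manufacturer or operator,
// per ROSS ¶ [0020]) plus a mutable, user-editable profile layered on top
// (per ROSS ¶¶ [0021]-[0022]). All identifiers are hypothetical.

type AvatarTemplate = Readonly<{
  avatarClass: string;                        // basic character, fixed by the template author
  rigVersion: string;                         // technical environment constraint
  degreeOfRealism: "stylized" | "photoreal";  // an example of an immutable parameter
}>;

interface AvatarProfile {
  skinColor: string;
  clothingId: string;
  displayName: string;
}

class AvatarCustomizationEngine {
  constructor(
    private readonly template: AvatarTemplate, // immutable set
    private profile: AvatarProfile,            // mutable set
  ) {}

  // Users may only change parameters in the mutable profile.
  updateProfile(patch: Partial<AvatarProfile>): void {
    this.profile = { ...this.profile, ...patch };
  }

  // The rendered avatar is always the template with the profile layered on top.
  resolve(): AvatarTemplate & AvatarProfile {
    return { ...this.profile, ...this.template };
  }
}

// Usage: the template stays fixed while the profile evolves with user input.
const engine = new AvatarCustomizationEngine(
  { avatarClass: "shopper", rigVersion: "1.2", degreeOfRealism: "stylized" },
  { skinColor: "#c68642", clothingId: "tshirt-001", displayName: "Alex" },
);
engine.updateProfile({ clothingId: "jacket-007" });
console.log(engine.resolve());
```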
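
The claim 3 rejection relies on YAN ¶ [0030], where a dropped product can be snapped against a nearby wall after a drag-and-drop placement at 3D mapping coordinates. A simplified sketch of that snapping step, assuming the screen-to-3D coordinate conversion has already happened and using a hypothetical snap threshold:

```typescript
// Sketch: resolve a drag-and-drop input into 3D mapping coordinates and, as
// YAN ¶ [0030] describes, nudge the dropped product flush against the nearest
// wall when it lands close to one. All names are hypothetical.

interface Vec3 { x: number; y: number; z: number; }

interface WallPlane {
  axis: "x" | "z"; // axis-aligned wall for simplicity: a fixed x or z value
  value: number;
}

interface Placement {
  productModelId: string;
  position: Vec3;
}

const SNAP_DISTANCE = 0.5; // metres; hypothetical threshold

function placeByDragAndDrop(
  productModelId: string,
  dropPoint: Vec3,
  walls: WallPlane[],
): Placement {
  const position: Vec3 = { ...dropPoint };

  // Snap the object against a wall if the drop point is near one.
  for (const wall of walls) {
    const current = wall.axis === "x" ? position.x : position.z;
    if (Math.abs(current - wall.value) <= SNAP_DISTANCE) {
      if (wall.axis === "x") position.x = wall.value;
      else position.z = wall.value;
      break;
    }
  }
  return { productModelId, position };
}

// Usage: a couch dropped 0.3 m from the back wall snaps against it.
console.log(
  placeByDragAndDrop("couch-42", { x: 2.1, y: 0, z: 4.7 }, [{ axis: "z", value: 5.0 }]),
);
```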
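
The claim 4 and 17 rejections turn on SINGH's pipeline (¶¶ [0042], [0052], [0055]) that pre-renders each authored viewpoint of the 3D model into cube maps at several resolutions, so the client can serve pre-rendered faces instead of the full model and a toggle simply selects which resolution set is served. A minimal sketch of that kind of pipeline; the renderFace callback stands in for a real renderer, and every name is hypothetical.

```typescript
// Sketch of the pre-rendering step SINGH describes: for each authored
// viewpoint, render the six cube-map faces from the 3D model at several
// resolutions; a quality toggle then picks which pre-rendered set to serve.

type CubeFace = "front" | "back" | "left" | "right" | "top" | "bottom";
const CUBE_FACES: CubeFace[] = ["front", "back", "left", "right", "top", "bottom"];

interface Viewpoint { id: string; position: [number, number, number]; }

interface CubeMap {
  viewpointId: string;
  resolution: number;                  // pixels per face edge
  faces: Record<CubeFace, Uint8Array>; // encoded face images
}

type RenderFaceFn = (viewpoint: Viewpoint, face: CubeFace, resolution: number) => Uint8Array;

function buildCubeMaps(
  viewpoints: Viewpoint[],
  resolutions: number[], // e.g. [512, 2048] for low/high output
  renderFace: RenderFaceFn,
): CubeMap[] {
  const maps: CubeMap[] = [];
  for (const viewpoint of viewpoints) {
    for (const resolution of resolutions) {
      const faces = {} as Record<CubeFace, Uint8Array>;
      for (const face of CUBE_FACES) {
        faces[face] = renderFace(viewpoint, face, resolution);
      }
      maps.push({ viewpointId: viewpoint.id, resolution, faces });
    }
  }
  return maps;
}

// Toggling the output quality selects which pre-rendered set is served.
function selectCubeMap(maps: CubeMap[], viewpointId: string, highRes: boolean): CubeMap | undefined {
  const candidates = maps
    .filter((m) => m.viewpointId === viewpointId)
    .sort((a, b) => a.resolution - b.resolution);
  return highRes ? candidates[candidates.length - 1] : candidates[0];
}

// Usage: pre-render low- and high-resolution sets once, then serve per toggle.
const maps = buildCubeMaps(
  [{ id: "entrance", position: [0, 1.6, 0] }],
  [512, 2048],
  (_viewpoint, _face, resolution) => new Uint8Array(resolution), // stand-in renderer
);
const lowRes = selectCubeMap(maps, "entrance", false);
console.log(lowRes?.resolution); // 512
```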
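
The claim 10 rejection cites SIDDIQUE's "One Switch View" button (¶ [0179]), which lets a participant flip between following the primary user's view and navigating independently. One possible shape for that toggle, with invented names:

```typescript
// Sketch: each participant either follows the primary user's camera or
// navigates the 3D space independently; a single switch flips between the
// two modes, in the spirit of SIDDIQUE ¶ [0179]. Names are hypothetical.

interface CameraState { position: [number, number, number]; yaw: number; }

type ViewMode = "follow-primary" | "independent";

class SessionViewController {
  private mode: ViewMode = "independent";
  private ownCamera: CameraState = { position: [0, 1.6, 0], yaw: 0 };

  constructor(private getPrimaryCamera: () => CameraState) {}

  // One switch: pressing the OSV-style button flips between the two modes.
  toggleView(): ViewMode {
    this.mode = this.mode === "independent" ? "follow-primary" : "independent";
    return this.mode;
  }

  // Local navigation input only takes effect while browsing independently.
  navigate(delta: Partial<CameraState>): void {
    if (this.mode === "independent") {
      this.ownCamera = { ...this.ownCamera, ...delta };
    }
  }

  // The camera actually used to render this participant's frame.
  activeCamera(): CameraState {
    return this.mode === "follow-primary" ? this.getPrimaryCamera() : this.ownCamera;
  }
}

// Usage: a secondary user presses the switch and starts following the primary user.
const view = new SessionViewController((): CameraState => ({ position: [3, 1.6, 7], yaw: 90 }));
view.toggleView();                // now "follow-primary"
console.log(view.activeCamera()); // the primary user's camera
```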
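
The claim 12 rejection cites ARMSTRONG ¶ [0033], where a rewards program grants points for completing scavenger-hunt tasks inside the virtual marketplace. A small illustrative sketch of such a gamification layer, again with hypothetical names:

```typescript
// Sketch: a minimal gamification layer that awards points when a user
// completes scavenger-hunt tasks placed in the 3D space, in the spirit of
// the rewards program ARMSTRONG ¶ [0033] describes. Names are hypothetical.

interface HuntTask { id: string; description: string; points: number; }

class ScavengerHunt {
  private completed = new Set<string>();
  private points = 0;

  constructor(private tasks: HuntTask[]) {}

  // Called when the user finds the task's target somewhere in the 3D space.
  complete(taskId: string): number {
    const task = this.tasks.find((t) => t.id === taskId);
    if (task && !this.completed.has(taskId)) {
      this.completed.add(taskId);
      this.points += task.points;
    }
    return this.points;
  }

  isFinished(): boolean {
    return this.completed.size === this.tasks.length;
  }
}

// Usage: two hidden tasks placed in the storefront.
const hunt = new ScavengerHunt([
  { id: "find-red-jacket", description: "Locate the red jacket display", points: 50 },
  { id: "visit-fitting-room", description: "Enter the virtual fitting room", points: 25 },
]);
console.log(hunt.complete("find-red-jacket")); // 50
```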

Prosecution Timeline

Apr 25, 2024
Application Filed
Mar 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592017
Rendering XR Avatars Based on Acoustical Features
2y 5m to grant • Granted Mar 31, 2026
Patent 12586282
AVATAR COMMUNICATION
2y 5m to grant • Granted Mar 24, 2026
Patent 12555314
THREE-DIMENSIONAL SHADING METHOD, APPARATUS, AND COMPUTING DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Feb 17, 2026
Patent 12555296
ADAPTING SIMULATED CHARACTER INTERACTIONS TO DIFFERENT MORPHOLOGIES AND INTERACTION SCENARIOS
2y 5m to grant • Granted Feb 17, 2026
Patent 12541913
METHOD AND APPARATUS FOR REBUILDING RELIGHTABLE IMPLICIT HUMAN BODY MODEL
2y 5m to grant • Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+20.2%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 382 resolved cases by this examiner. Grant probability derived from career allow rate.
