Prosecution Insights
Last updated: April 19, 2026
Application No. 19/004,722

PRODUCT CONFIGURATOR WITH ON DEMAND RENDER

Status: Non-Final OA (§103)
Filed: Dec 30, 2024
Examiner: TSWEI, YU-JANG
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Marxent Labs LLC
OA Round: 3 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (376 granted / 447 resolved); +22.1% vs Tech Center average (grants above average)
Interview Lift: +17.0% in resolved cases with an interview (a strong lift)
Avg Prosecution: 2y 5m typical timeline; 44 applications currently pending
Career History: 491 total applications across all art units

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 447 resolved cases.
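The headline figures above can be sanity-checked with simple arithmetic. The snippet below recomputes the career allow rate from the granted/resolved counts; the implied Tech Center average is an inference from the reported +22.1% delta, not a figure stated in the source.

```python
# Recompute the examiner's career allow rate from the reported counts.
granted = 376
resolved = 447

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # matches the 84% headline

# Inference only: the reported +22.1% delta implies a Tech Center average
# of roughly career_allow_rate - 0.221.
implied_tc_average = career_allow_rate - 0.221
print(f"Implied TC average: {implied_tc_average:.1%}")
```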

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the Amendment filed on 12/16/2025. Claims 1-26 are pending. Claims 1-20, 23, 26 have been amended.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 8-11, 13, 15-19, and 21-25 are rejected under 35 U.S.C. 103 as being unpatentable over Totty et al. (US 20200302681 A1, hereinafter Totty) in view of Wiedmeyer et al. (US 20190251622 A1, hereinafter Wiedmeyer), further in view of Samson et al. (US 20150324940 A1, hereinafter Samson).
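The disputed limitations center on a "structured data object" that maps customization options to three dimensional model pieces and defines assembly relationships. A minimal sketch of what such an object could look like is given below; the class and field names are hypothetical illustrations chosen for this report, not drawn from the application or any cited reference.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the claimed "structured data object": each
# customization option maps to a 3D model piece, and assembly relationships
# record how pieces join. All names here are illustrative assumptions.

@dataclass
class ModelPiece:
    piece_id: str  # identifies a 3D model asset (e.g., a mesh file)
    option: str    # the customization option this piece realizes

@dataclass
class ProductDataObject:
    product_id: str
    # option name -> model piece
    pieces: dict = field(default_factory=dict)
    # (parent, child) joins defining assembly relationships
    assembly: list = field(default_factory=list)

    def pieces_for(self, selected_options):
        """Resolve selected customization options to their model pieces."""
        return [self.pieces[o] for o in selected_options if o in self.pieces]

# Usage: a cabinet with a door-style option and a handle option.
cabinet = ProductDataObject("cabinet-01")
cabinet.pieces["shaker-door"] = ModelPiece("mesh/door_shaker.glb", "shaker-door")
cabinet.pieces["brass-handle"] = ModelPiece("mesh/handle_brass.glb", "brass-handle")
cabinet.assembly.append(("cabinet-box", "shaker-door"))

selected = cabinet.pieces_for(["shaker-door", "brass-handle"])
print([p.piece_id for p in selected])
```

Under this reading, the examiner maps the option-to-piece entries to Wiedmeyer's "asset" data structures and the assembly relationships to Samson's product assemblies of sub-component parts.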
Regarding Claim 8, Totty teaches a computer system for customizing a product with on demand rendering comprising (Totty, Paragraph [0011], "a variation of a system for virtual interaction with generated three dimensional indoor room imagery"): one or more computer processors (Totty, Paragraph [0099], "the user device 120 (e.g., one or more processors of the user device)"); one or more computer readable storage media (Totty, Paragraph [0099], "is preferably stored by a computer readable medium (e.g., RAM, Flash, etc.) associated with the user device"); and program instructions stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, the program instructions comprising (Totty, Paragraph [0098], "One or more of the engines, algorithms, and/or modules described herein can be executed by the backend platform, the front end platform, and/or a combination thereof"):

program instructions to generate a customized product for display in a three dimensional virtual space by assembling three dimensional model pieces identified from a structured data object that identifies components of the three dimensional model pieces [[ and defines assembly relationships for the product ]] and selected customization options (Totty, Paragraph [0026], "The method functions to provide virtual interaction with a generated three-dimensional visual scene representation of an indoor room"; [0064], "the system can include a virtual object configurator, which functions to enable customization of the object attributes"; [0065], "The interaction modules can be for: a given surface, a set of object pixels or voxels, the entire object, an object component, and/or any other suitable object portion"), wherein the selected customization options [[ correspond to entries in the structured data object that identify ]] three dimensional model pieces correspond to a respective customization option (Totty, Paragraph [0010], "there is a need in the computer vision field to create new and useful systems and methods for portable, convenient, expansive, photorealistic, interactive, and/or 3D-aware indoor model(s) generation"; [0048], "a representation of the virtual object (e.g., the virtual object identifier, the virtual object modified according to the room model, or other representation) can be stored in association with the virtual room model"; [0060], "The virtual object model (VOM) functions to represent the geometry of the physical object corresponding to the virtual object."; [0064], "the system can include a virtual object configurator, which functions to enable customization of the object attributes");

and program instructions to render the customized product as a virtual object in the three dimensional virtual space (Totty, Paragraph [0111], "The method for virtual interaction with a three-dimensional indoor room <read on three dimensional virtual space> includes: generating a virtual room model S100, generating a virtual room visual representation S200"; [0064], "the system can include a virtual object configurator, which functions to enable customization of the object attributes"; [0111], "rendering an updated virtual room visual representation based on the virtual object").

But Totty does not explicitly disclose that the selected customization options correspond to entries in the structured data object that identify three dimensional model pieces corresponding to a respective customization option.

However, Wiedmeyer teaches program instructions to generate a customized product for display in a three dimensional virtual space by assembling three dimensional model pieces identified from a structured data object that identifies components of the three dimensional model pieces [[ and defines assembly relationships for the product ]] and selected customization options (Wiedmeyer, Paragraph [0007], "The platform may dynamically create, store, and retrieve product components, process user inputs (e.g., interactions with the simulated environment, voice commands, etc.) for assembling products using dimensional, spatial, geometric, or image data and rendering visual representations within a virtual simulation"; Paragraph [0008], "The platform may generate the environmental simulation to include the identified relevant products, thereby creating a virtual storefront that is customized to the user"; [0018], "the asset data <read on components> associated with a first selected product of the one or more selected products… corresponding 3D computer graphic model of the first selected product, and create a first asset object of the one or more asset objects"), wherein selected customization options correspond to entries in the structured data object that identify three dimensional model pieces corresponding to a respective customization option (Wiedmeyer, Paragraph [0103], "the device may optionally compare the set of identified products to one or more data structures describing current product promotions"; [0049], "The present disclosure provides systems, device configurations, and processes for assembling three-dimensional products… generates a VR simulation of a customized retail store that contains the identified products"; [0070], an "asset" is a data structure describing a particular physical object that may be placed within the retail store; [0010], "The plurality of asset objects may include a first layout object representing a first layout element <read on entries of structured data>, of the plurality of layout elements"), and program instructions to render the customized product as a virtual object in the three dimensional virtual space (Wiedmeyer, Paragraph [0071], "When the user selects an asset to be added to the virtual environment, the VR platform uses information about the asset to create and visually render the asset within the virtual environment").

Wiedmeyer and Totty are analogous art since both deal with multi-dimensional data in the virtual world. Totty provided a way of virtual interaction with three dimensional indoor room design by customizing the virtual object in the three dimensional model using machine learning. Wiedmeyer provided a way of processing a customized structured object in the three dimensional virtual world. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the customized structured data object taught by Wiedmeyer into the modified invention of Totty, such that when dealing with virtual interaction with a customized object in the 3D virtual space, the system will be able to provide enhanced realism and accuracy, support better customization and configuration, and help obtain better training results during the machine learning process.

But the combination does not explicitly disclose a [[ structured data object that identifies components of the three dimensional model pieces and ]] defines assembly relationships for the product.

However, Samson teaches assembling three dimensional model pieces identified from a structured data object that identifies components of the three dimensional model pieces and defines assembly relationships for the product and selected customization options (Samson, Paragraph [0042], "A product assembly is a single part representation with a single set of engineering features, and may further include a single price or other attribute(s). Each product assembly is formed of a combination <read on assembling> of sub-parts, and represents a sum of all of the engineering features and/or sub-component prices. In one example, a single kitchen cabinet assembly may be formed as a collection <read on identifies components> of its sub-component parts, such as the cabinet box, cabinet"; [0043], "Each product assembly, as a single whole assembly, can be maintained by the common information database 212 <read on structured data object> for retrieval by the design studio 206. Thus, when a single cabinet assembly is accessed by the design studio 206, one overall set of engineering data and one price can be quickly and easily retrieved from the common information database 212 and incorporated into the virtual experience for the user. As a result, the user can quickly see the effect of changing"); wherein the selected customization options correspond to entries in the structured data object that identify three dimensional model pieces corresponding to a respective customization option (Samson, Paragraph [0043], "Each product assembly, as a single whole assembly, can be maintained by the common information database 212 <read on structured data object> for retrieval by the design studio 206. Thus, when a single cabinet assembly is accessed by the design studio 206, one overall set of engineering data and one price can be quickly and easily retrieved from the common information database 212 and incorporated into the virtual experience for the user. As a result, the user can quickly see the effect of changing").

Samson and Totty are analogous art since both deal with virtual presentation of customizable products/objects in a graphical environment and enable users to select among customization options. Totty provided a way of customizing a virtual object in a three dimensional virtual space (e.g., via a virtual object configurator). Samson provided a way of representing selectable customization options as product assemblies formed from sub-component parts that can be retrieved from a shared common information database and incorporated into a virtual experience. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the product-assembly based structured representation taught by Samson into the modified invention of Totty such that the customized product can be assembled from model pieces defined by component/assembly relationships for the selected customization options, improving real-time performance and modularity of customization.

Regarding Claim 9, the combination of Totty, Wiedmeyer and Samson teaches the invention including program instructions stored on the one or more computer readable storage media in claim 8. The combination further teaches: program instructions to generate a two dimensional preview of the virtual object based on the rendering of the customized product [[ as assembled in the three dimensional virtual space ]] (Totty, Paragraph [0060], [0064], "the system can include a virtual object configurator, which functions to enable customization of the object attributes. The virtual object configurator can allow a user to select from and preview different product options, such as sizes, lengths, styles, colors, materials, woods, fabrics, optional accessories (e.g., handles, pulls, doors, legs), artwork frame and matte options, and/or other object attributes"; "An object mask can include two-dimensional and/or three-dimensional binary mask").

Totty does not explicitly disclose, but Wiedmeyer teaches, assembled in the three dimensional virtual space (Wiedmeyer, Paragraph [0007], "The platform may dynamically create, store, and retrieve product components…for assembling products using dimensional, spatial, geometric, or image data and rendering visual representations within a virtual simulation"; [0015], "a corresponding 3D model to produce a plurality of 3D models and to send the plurality of 3D models to the first VR display device"), wherein the rendering is based on parameters determined from dimensions of the three dimensional virtual space (Wiedmeyer, Paragraph [0072], "the visual data for a box of coffee filters may include an indicator that the asset is a 'box' package type, dimensions for the sides of the box, a weight of the box, images of one or more sides of the actual box of coffee filters, and parameters relating to interactivity of the asset. The simulation generator renders the images to a box object, assigns the dimensions of the box to the object, associates the box object with an identifier, and then can place the box object in the simulation"). As explained in the rejection of claim 8, the rationale for combining the structured data object of Wiedmeyer into Totty is provided above.

Regarding Claim 10, the combination of Totty, Wiedmeyer and Samson teaches the invention in claim 9.
The combination further teaches wherein the selected customization options are [[ stored as a structured data object that describes a product in terms of a structure of three dimensional model pieces and assembly relationship ]] that can be joined together to create a three dimensional model of the product (Totty, Paragraph [0064], "the system can include a virtual object configurator, which functions to enable customization of the object attributes"; [0048], "The room-object(s) combination can be revisited by the same user account or shared with other user accounts, modified with changes in virtual objects and/or virtual object poses (e.g., wherein the modifications can be stored in association with the room objects combination, or stored as a new room object combination)").

But Totty does not explicitly disclose wherein each customization option selected has a corresponding three dimensional model piece for the product, nor that the options are stored as a structured data object that describes a product in terms of a structure of three dimensional model pieces.

However, Wiedmeyer teaches stored as a structured data object that describes a product in terms of a structure of three dimensional model pieces [[ and assembly relationship ]] (Wiedmeyer, Paragraph [0103], "the device may optionally compare the set of identified products to one or more data structures describing current product promotions"; [0049], "The present disclosure provides systems, device configurations, and processes for assembling three-dimensional products… generates a VR simulation of a customized retail store that contains the identified products"). As explained in the rejection of claim 8, the rationale for combining the structured data object of Wiedmeyer into Totty is provided above.

The combination does not explicitly disclose, but Samson teaches, the assembly relationship (Samson, Paragraph [0042], "A product assembly is a single part representation with a single set of engineering features, and may further include a single price or other attribute(s). Each product assembly is formed of a combination <read on assembling> of sub-parts, and represents a sum of all of the engineering features and/or sub-component prices. In one example, a single kitchen cabinet assembly may be formed as a collection <read on identifies components> of its sub-component parts, such as the cabinet box, cabinet…"; Samson, Paragraph [0043], "Each product assembly, as a single whole assembly, can be maintained by the common information database 212 <read on structured data object> for retrieval by the design studio 206. Thus, when a single cabinet assembly is accessed by the design studio 206, one overall set of engineering data and one price can be quickly and easily retrieved from the common information database 212 and incorporated into the virtual experience for the user"). As explained in the rejection of claim 8, the rationale for combining the product-assembly based structured representation of Samson into Totty is provided above.

Regarding Claim 11, the combination of Totty, Wiedmeyer and Samson teaches the invention in claim 9. The combination further teaches wherein the program instructions to render the customized product in the three dimensional virtual space comprise:

program instructions to search a database for product documentation and configuration rules associated with the product (Totty, Paragraph [0141], "extracting object features (e.g., text description, style and/or theme, measurements, fabric, type, material, visual identifier, etc.) associated with the object depicted in the image; performing a search within a database based on the object features (e.g., in the catalog on the front end application, in a browser, on the internet, etc.)"; [0052], "the method can be used for design recommendations (e.g., recommendations for virtual objects or virtual object poses). This can be automatically or manually determined based on: design rules or heuristics, pre-designed rooms with similar features");

program instructions to determine compatibility of selected customization options according to the product documentation and the configuration rules (Totty, Paragraph [0092], "the front end application can include a capture user interface for capturing compatible room imagery. The capture user interface can include: capture instructions (e.g., phone movement patterns, user movement patterns, etc.), feedback (e.g., audio, visual, haptic, etc.), and/or other features"; [0047], "If the product (or a sufficiently similar product) is found in a database of 3D models (e.g., based on values of visual features extracted from the image), the product (e.g., virtual object model thereof) can be directly used in a 3D decorating experience"; [0158], "Analyzing the received position can include determining a position is valid by evaluating the position based on a set of guidelines (e.g., design rules <read on configuration rules>)"; [0041], "Thirteenth, the method enables the ability to provide automated design guidance <read on product documentation>, to assist the user with product selection");

responsive to determining compatibility of the selected customization options, [[ program instructions to determine a corresponding structured data object entry that represents a respective customization option and assembly relationship ]] to create a three dimensional model of the product (Totty, Paragraph [0026], "The method functions to provide virtual interaction with a generated three-dimensional visual scene representation of an indoor room"; [0064], "the system can include a virtual object configurator, which functions to enable customization of the object attributes"; [0048], "The room-object(s) combination can be revisited by the same user account or shared with other user accounts, modified with changes in virtual objects and/or virtual object poses (e.g., wherein the modifications can be stored in association with the room objects combination, or stored as a new room object combination)");

and program instructions to render, in real time, a three dimensional model of the customized product to fit in the three dimensional virtual space (Totty, Paragraph [0038], [0055], "the system for virtual interaction within rendered three-dimensional room imagery"; "the method offers a real-time aspect. Once the virtual model is generated, the user can virtually interact with the virtual room in real or near real time. In variants, this can be accomplished by rendering the virtual objects relative to the room image onboard the user device in real or near real time").

But Totty does not explicitly disclose program instructions to determine a corresponding structured data object entry that represents a respective customization option. However, Wiedmeyer teaches program instructions to determine a corresponding structured data object entry that represents a respective customization option and [[ assembly relationship ]] (Wiedmeyer, Paragraph [0007], "The platform may dynamically create, store, and retrieve product components, process user inputs (e.g., interactions with the simulated environment, voice commands, etc.) for assembling products using dimensional, spatial, geometric, or image data and rendering visual representations within a virtual simulation"; Paragraph [0008], "The platform may generate the environmental simulation to include the identified relevant products, thereby creating a virtual storefront that is customized to the user"). As explained in the rejection of claim 8, the rationale for combining the structured data object of Wiedmeyer into Totty is provided above.

The combination does not explicitly disclose, but Samson teaches, the assembly relationship (Samson, Paragraph [0042], "A product assembly is a single part representation with a single set of engineering features, and may further include a single price or other attribute(s). Each product assembly is formed of a combination <read on assembling> of sub-parts, and represents a sum of all of the engineering features and/or sub-component prices. In one example, a single kitchen cabinet assembly may be formed as a collection <read on identifies components> of its sub-component parts, such as the cabinet box, cabinet…"; Paragraph [0043], "Each product assembly, as a single whole assembly, can be maintained by the common information database 212 <read on structured data object> for retrieval by the design studio 206. Thus, when a single cabinet assembly is accessed by the design studio 206, one overall set of engineering data and one price can be quickly and easily retrieved from the common information database 212 and incorporated into the virtual experience for the user"). As explained in the rejection of claim 8, the rationale for combining the product-assembly based structured representation of Samson into Totty is provided above.

Regarding Claim 13, the combination of Totty, Wiedmeyer and Samson teaches the invention including program instructions stored on the one or more computer readable storage media in claim 11.
The combination further teaches program instructions to train a machine learning model based on training data that includes one or more labeled images (Totty, Paragraph [0065], "The interaction modules can be: manually defined, automatically defined (e.g., trained using a set of videos or physics simulators), and/or otherwise defined"; [0082], "rendered at the user device onto the photorealistic image, or otherwise managed. Planar surfaces can be identified and masked as an uneditable region, optionally with an associated depth or label, e.g. 'foreground' or 'background'"), wherein the labeled images include images of one or more products in corresponding physical spaces (Totty, Paragraph [0156], "design rules can include regions tagged with a label associated with the object"; [0029], "allows users to virtually interact with (e.g., design) their own physical interior spaces (e.g., rooms)"; [0034], "holding a phone screen at arm's length), and improved realism and harmonization of virtual objects with physical objects already in the scene").

Regarding Claim 1, it recites limitations similar in scope to the limitations of Claim 8 but as a method, and the combination of Totty, Wiedmeyer and Samson teaches all the limitations of Claim 8. Therefore, it is rejected under the same rationale.

Regarding Claim 2, it recites limitations similar in scope to the limitations of Claim 9 and therefore is rejected under the same rationale.

Regarding Claim 3, it recites limitations similar in scope to the limitations of Claim 10 and therefore is rejected under the same rationale.

Regarding Claim 4, it recites limitations similar in scope to the limitations of Claim 11 and therefore is rejected under the same rationale.

Regarding Claim 6, it recites limitations similar in scope to the limitations of Claim 13 and therefore is rejected under the same rationale.
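Claim 13's machine learning limitation recites training data of labeled images showing products in corresponding physical spaces. The sketch below illustrates the kind of training record such a limitation contemplates; the schema, file paths, and labels are hypothetical, invented for illustration rather than taken from Totty or the application.

```python
from dataclasses import dataclass

# Hypothetical illustration of a labeled training example of the kind claim 13
# recites: an image of a product in a physical space, plus its label and a
# region annotation. All field names and values are illustrative assumptions.

@dataclass
class LabeledImage:
    image_path: str                     # photo of a physical space with the product
    product_label: str                  # which product appears in the image
    region: tuple                       # bounding box (x, y, width, height)

training_data = [
    LabeledImage("rooms/kitchen_014.jpg", "cabinet-01", (120, 80, 340, 260)),
    LabeledImage("rooms/kitchen_022.jpg", "cabinet-01", (64, 40, 280, 300)),
]

# A trainer would consume these records; here we only inspect the labels.
labels = {example.product_label for example in training_data}
print(labels)
```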
Regarding Claim 15, it recites limitations similar in scope to the limitations of Claim 8, and the combination of Totty, Wiedmeyer and Samson teaches all the limitations of Claim 8. Totty further discloses that these features can be implemented on a computer readable storage medium (Totty, Fig. 1, Paragraph [0099], "The remote capturing algorithm is preferably executed by the user device 120 (e.g., one or more processors of the user device), and is preferably stored by a computer readable medium (e.g., RAM, Flash, etc.) associated with the user device, but can be otherwise implemented").

Regarding Claim 16, it recites limitations similar in scope to the limitations of Claim 9 and therefore is rejected under the same rationale.

Regarding Claim 17, it recites limitations similar in scope to the limitations of Claim 10 and therefore is rejected under the same rationale.

Regarding Claim 18, it recites limitations similar in scope to the limitations of Claim 11 and therefore is rejected under the same rationale.

Regarding Claim 19, it recites limitations similar in scope to the limitations of Claim 13 and therefore is rejected under the same rationale.

Regarding Claim 21, the combination of Totty, Wiedmeyer and Samson teaches the invention in claim 1.
The combination further teaches wherein the one or more processors are programmed with computer instructions to generate the customized product, comprising program instructions to: identify, for respective customization options, a corresponding three dimensional model piece based on the structured data object (Wiedmeyer, Paragraph [0103], "the device may optionally compare the set of identified products to one or more data structures describing current product promotions"; [0049], "The present disclosure provides systems, device configurations, and processes for assembling three-dimensional products… generates a VR simulation of a customized retail store that contains the identified products"); and assemble the customized product based on identified three dimensional model pieces that correspond to the respective customization options (Wiedmeyer, Paragraph [0007], "The platform may dynamically create, store, and retrieve product components, process user inputs (e.g., interactions with the simulated environment, voice commands, etc.) for assembling products using dimensional, spatial, geometric, or image data and rendering visual representations within a virtual simulation"; Paragraph [0008], "The platform may generate the environmental simulation to include the identified relevant products, thereby creating a virtual storefront that is customized to the user"). As explained in the rejection of claim 1, the rationale for combining the structured data object of Wiedmeyer into Totty is provided above.

Regarding Claim 22, the combination of Totty, Wiedmeyer and Samson teaches the invention of one or more processors programmed with computer instructions to render the customized product in claim 1. The combination further teaches wherein the one or more processors are programmed with computer instructions to render the customized product based on dimensions of a corresponding physical object relative to dimensions of a physical space corresponding to the three dimensional virtual space, comprising program instructions to (Wiedmeyer, Paragraph [0007], "The platform may dynamically create, store, and retrieve product components, process user inputs (e.g., interactions with the simulated environment, voice commands, etc.) for assembling products using dimensional, spatial, geometric, or image data and rendering visual representations within a virtual simulation"; Paragraph [0008], "The platform may generate the environmental simulation to include the identified relevant products, thereby creating a virtual storefront that is customized to the user"; [0099], "teleportation display that allows the user to navigate wide areas of virtual space while constrained to a smaller physical space"): size the virtual object in the three dimensional virtual space based on dimensions of a corresponding physical object relative to dimensions of a physical space corresponding to the three dimensional virtual space (Wiedmeyer, Paragraph [0018], "The hardware computing devices generate a blank 3D model having a shape identified by the package type, scale the blank 3D model according to the set of dimensions to produce a scaled 3D model, render the one or more images onto the scaled 3D model to produce the corresponding 3D computer graphic model of the first selected product, and create a first asset object of the one or more asset objects"). As explained in the rejection of claim 1, the rationale for combining the structured data object of Wiedmeyer into Totty is provided above.
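The sizing step discussed for claim 22 amounts to proportional scaling: the virtual object's size is the physical object's dimensions relative to the physical space, mapped into the virtual space. A minimal sketch under that reading is below; the function name, units, and example values are assumptions for illustration, not taken from Wiedmeyer.

```python
# Hedged sketch of the claim 22 sizing step: scale an object so its
# proportions relative to the virtual space match the corresponding physical
# object's proportions relative to the physical room. Names and units are
# illustrative assumptions.

def scale_to_space(object_dims_m, physical_space_m, virtual_space_units):
    """Return the object's size in virtual-space units.

    object_dims_m       : (w, h, d) of the physical object in meters
    physical_space_m    : (w, h, d) of the physical room in meters
    virtual_space_units : (w, h, d) of the virtual space in engine units
    """
    return tuple(
        obj / room * virt
        for obj, room, virt in zip(object_dims_m, physical_space_m, virtual_space_units)
    )

# A 1.2 m wide object in a 4 m wide room, rendered into a 10-unit wide
# virtual space, occupies 3 units of width.
print(scale_to_space((1.2, 0.9, 0.6), (4.0, 3.0, 5.0), (10.0, 7.5, 12.5)))
```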
Regarding Claim 23, the combination of Totty, Wiedmeyer and Samson teaches the invention of one or more processors are programmed with computer instructions to render the customized product in claim 1. The combination further teaches enable a user to remove, generate, customize, and replace virtual objects within the three dimensional virtual space (Totty, Paragraph [0064], “the system can include a virtual object configurator, which functions to enable customization of the object attributes. The virtual object configurator can allow a user to select from and preview different product options, such as sizes, lengths, styles, colors, materials, woods, fabrics, optional accessories” [0095], “the front-end application allows users to insert virtual objects into a static photographic image view of a room” “the user can virtually move ( e.g., "teleport") between a fixed set of positions in the scene to view and decorate a scene from different vantage points” [0149], “FIG. 11; wherein a real object can be identified using the visual search, and replaced with virtual alternatives of the same or similar object; matching one or more unique features to a virtual object; and importing the virtual object into the virtual room”), wherein any newly generated virtual object is sized based on dimensions of the three dimensional virtual space to reflect how a corresponding physical object would appear in a physical space (Totty, Paragraph [0060], “The virtual object model (VOM) functions to represent the geometry of the physical object corresponding to the virtual object.” [0064], “the system can include a virtual object configurator, which functions to enable customization of the object attributes”). Regarding Claim 24, the combination of Totty, Wiedmeyer and Samson teaches the invention in claim 1. 
The combination further teaches wherein selected customization options further comprise options for modifying at least one of a color, a component, or a feature of the virtual object (Totty, Paragraphs [0060], [0064], “the system can include a virtual object configurator, which functions to enable customization of the object attributes. The virtual object configurator can allow a user to select from and preview different product options, such as sizes, lengths, styles, colors, materials, woods, fabrics, optional accessories (e.g. handles, pulls, doors, legs), artwork frame and matte options, and/or other object attributes”; “An object mask can include two-dimensional and/or three-dimensional binary mask”; [0166], “Rendering can also include modifying the virtual object based on auxiliary room features such as color, noise, exposure, shadows and highlights (e.g., using respective masks, variable values, etc.)”).

Regarding Claim 25, the combination of Totty, Wiedmeyer and Samson teaches the invention of claim 1, in which one or more processors are programmed with computer instructions to render the customized product.
The combination further teaches determine an intent of a user command received through a user interface (Totty, Paragraph [0092], “The capture user interface can include: capture instructions <read on user command> (e.g., phone movement patterns, user movement patterns, etc.), feedback (e.g., audio, visual, haptic, etc.), and/or other features”); and provide customization options for the product based on the determined intent (Totty, Paragraph [0109], “The system can optionally include one or more recommendation engine(s) 118 that functions to recommend objects”; “The recommendation engine can generate the recommendations based on: … the room's existing furniture (e.g., from the virtual objects identified within the room), the room's inferred purpose <read on determined intent> (e.g., from the room's features and identified furniture), virtual objects placed within the virtual room”).

Claim(s) 7, 14, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Totty et al. (US 20200302681 A1, hereinafter Totty) in view of Wiedmeyer et al. (US 20190251622 A1, hereinafter Wiedmeyer), further in view of Samson et al. (US 20150324940 A1, hereinafter Samson) as applied to Claims 1, 8, 15 above respectively, and further in view of Fukuda et al. (US 20040183926 A1, hereinafter Fukuda).

Regarding Claim 14, the combination of Totty, Wiedmeyer and Samson teaches the invention including program instructions stored on the one or more computer readable storage media in claim 11.
The combination further teaches generate the three dimensional virtual space based on automatically determined dimensions of a physical space depicted in a two dimensional image (Wiedmeyer, Paragraph [0099], “teleportation display that allows the user to navigate wide areas of virtual space while constrained to a smaller physical space”; [0071], “information that may be necessary for generating a three-dimensional visual representation of that asset within the virtual environment”; “The database may include 1 to 10, or possibly more, 2D images of the asset representing views of the produce identified by the unit identifier”; [0072], “the platform generates a blank 3D model of the specified shape type and scales it using the specified dimensions, either from the content library or uploaded by the user”; [0069], “Alternatively, such configurations may be auto-generated by the simulation generator according to the amount and varying types of products to be displayed”), wherein: the virtual object has dimensions sized to fit within corresponding dimensions of the three dimensional virtual space and is adjusted to relative dimensions of other virtual objects depicted in the three dimensional virtual space (Wiedmeyer, Paragraph [0018], “The hardware computing devices generate a blank 3D model having a shape identified by the package type, scale the blank 3D model according to the set of dimensions to produce a scaled 3D model”; [0071], “The content library may include details of the asset such as spatial or geometric data, image data, meta-data, or other such information that may be necessary for generating a three-dimensional visual representation of that asset within the virtual environment”; “The database may include 1 to 10, or possibly more, 2D images of the asset representing views of the produce identified by the unit identifier and corresponding to the front, back, top, bottom, left or right sides of the asset that may be accessible to the user.
The user may also upload additional images, asset dimensions, meta-data or product attributes, or specify a shape type to supplement existing asset information within the content library or to provide additional detail”). As explained in the rejection of claim 1, the rationale for combining the structured data object of Wiedmeyer into Totty is provided above. Totty, however, does not explicitly disclose the other virtual objects depicted in the three dimensional virtual space have dimensions corresponding with dimensions of one or more objects in the physical space, and the three dimensional virtual space has dimensions corresponding with the determined dimensions of the physical space. However, Fukuda teaches the other virtual objects depicted in the three dimensional virtual space have dimensions corresponding with dimensions of one or more objects in the physical space (Fukuda, Paragraph [0035], “The virtual object data storage unit 110 stores data expressing a three-dimensional shape and properties of a virtual object constructed in a real space, that is, data of quantized information such as the dimensions, color of the surface, and feel of the virtual object in a three-dimensional space”), and the three dimensional virtual space has dimensions corresponding with the determined dimensions of the physical space (Fukuda, Paragraph [0081], “the virtual dimensions of the virtual object to be combined with the image of the real space is sL, when the virtual object is combined with the image of the real space, the dimension sb on the screen corresponding to the virtual dimension sL can be found based on the proportional relationship with the dimensions of the marker 150 according to the following equation”). Fukuda and Totty are analogous art since both deal with three dimensional data in augmented reality.
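The Fukuda passage quoted above ([0081]) describes a simple proportionality: a marker of known physical size appears at a known size on screen, and the on-screen dimension sb of a virtual object follows from the same ratio applied to its virtual dimension sL. A minimal sketch under that reading follows; the function name and the sample marker values are hypothetical, and Fukuda's exact equation may include terms not reproduced here.

```python
def on_screen_dimension(sL: float,
                        marker_physical: float,
                        marker_on_screen: float) -> float:
    """Proportionality in the style of Fukuda [0081]: the screen
    dimension sb for a virtual dimension sL follows the marker's
    physical-to-screen ratio (assumes marker and object are roughly
    coplanar, i.e., at the same distance from the camera)."""
    return sL * (marker_on_screen / marker_physical)

# A 10 cm marker spanning 50 px implies 5 px per cm, so a 120 cm
# virtual object spans 600 px on screen.
sb = on_screen_dimension(120.0, 10.0, 50.0)
print(sb)  # 600.0
```

This is the mechanism the examiner relies on for sizing the virtual space to the determined dimensions of the physical space: one known physical reference fixes the scale for everything composited into the image.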
Totty provided a way of virtual interaction with three dimensional indoor room design by customizing the virtual object in the three dimensional model using machine learning in augmented reality. Fukuda provided a way of processing objects in the augmented reality environment by adjusting the object according to dimension differences among the objects. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the dimension-based object adjustment taught by Fukuda into the modified invention of Totty, such that when customizing the object during room modeling in the three dimensional space, the system will be able to dynamically adjust the objects in augmented reality based on the relative dimension differences between the different objects identified, in order to create virtual objects sized relative to the dimensions of the physical environment, which provides a more realistic viewing experience for users of augmented reality devices.

Regarding Claim 7, it recites limitations similar in scope to the limitations of Claim 14 and therefore is rejected under the same rationale. Regarding Claim 20, it recites limitations similar in scope to the limitations of Claim 14 and therefore is rejected under the same rationale.

Claim(s) 5, 12, 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Totty et al. (US 20200302681 A1, hereinafter Totty) in view of Wiedmeyer et al. (US 20190251622 A1, hereinafter Wiedmeyer), further in view of Samson et al. (US 20150324940 A1, hereinafter Samson) as applied to Claims 4, 11 above, and further in view of Levi et al. (US 20240112428 A1, hereinafter Levi).

Regarding Claim 12, the combination of Totty, Wiedmeyer and Samson teaches the invention in claim 11.
The combination further teaches wherein the program instructions stored on the one or more computer readable storage media further comprise: program instructions to determine parameters for presenting the customized product in the three dimensional virtual space using a machine learning model trained on labeled images of products in physical spaces, wherein the parameters include at least one of a field of view, product position, or product size and are used to parameterize rendering of the customized product (Totty, Paragraph [0069], “Examples of room imagery include: photographs (e.g., still images), video files, video frames or sequences, extended field of view (FOV) photos” <read on field of view parameter>; [0064], “The virtual object configurator can allow a user to select from and preview different product options, such as sizes, lengths”; [0061], “The object attributes function to define adjustable parameters for the object”; [0068], “the virtual object can be associated with a virtual object orientation and/or position (e.g., pose) within the virtual room model”; [0110], “the recommendation engine(s) 118 can include machine learning algorithms that can automatically recommend and/or automatically position recommended objects in the room”).
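The claim limitation at issue here, a machine learning model whose outputs (field of view, product position, product size) parameterize the render call, can be sketched as follows. Everything in this sketch is a hypothetical illustration: the `RenderParams` structure, the stub standing in for a trained model, and the render function are not code from any of the cited references.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RenderParams:
    # The three parameter kinds recited in the claim limitation.
    field_of_view_deg: float
    position: Tuple[float, float, float]
    size_scale: float

def predict_render_params(image_features: List[float]) -> RenderParams:
    """Stand-in for a model trained on labeled images of products in
    physical spaces; a real system would run an inference call here.
    The fixed output below is purely illustrative."""
    return RenderParams(field_of_view_deg=60.0,
                        position=(1.2, 0.0, -3.0),
                        size_scale=1.05)

def render_customized_product(product_id: str, params: RenderParams) -> str:
    """Parameterize the (mocked) render call with the predicted values."""
    x, y, z = params.position
    return (f"render {product_id} at ({x}, {y}, {z}) "
            f"scale={params.size_scale} fov={params.field_of_view_deg}")

params = predict_render_params([0.1, 0.4, 0.9])
print(render_customized_product("sofa-123", params))
# render sofa-123 at (1.2, 0.0, -3.0) scale=1.05 fov=60.0
```

The point of the structure is the data flow the claim recites: model output feeds the renderer as parameters, rather than the model producing pixels directly.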
The combination does not explicitly disclose but Levi teaches presenting the customized product in the three dimensional virtual space (Levi, Paragraph [0264], “in educational or training scenarios, different learners may require various levels of instruction or content and docking customized virtual objects can enable adaptive learning experiences that align with an individual's skill levels and learning styles”); using a machine learning model (Levi, Paragraph [0282], “one or more of image processing, computer vision, and machine learning may allow computers to analyze the visual content of an image and determine the type of objects in the image”) trained on labeled images of products in physical spaces (Levi, Paragraph [0282], “features of an image (e.g., shapes, colors, textures, and/or other characteristics of the image) may be extracted and compared with a labeled dataset containing images of several types of objects to identify the objects in an image”), wherein the parameters include at least one of a field of view (Levi, Paragraph [0139], “Second 3D placement requirement 614 may require positioning an associated portion of content in a manner to maintain minimal margins between displayed content and a boundary of the field-of-view of user 100”), product position (Levi, Paragraph [0292], “a location may include the spatial coordinates or position where virtual objects, scenes, or interactions are situated in the extended reality environment in relation to the user's viewpoint and the real world”), or product size (Levi, Paragraph [0077], “The virtual content may be determined based on data from input determination module 312, sensors communication module 314, and other sources (e.g., database 380).
In some embodiments, determining the virtual content may include determining the distance, the size, and the orientation of the virtual objects”) and are used to parameterize rendering of the customized product (Levi, Paragraph [0076], “the input device may be used by virtual content determination module 315 to modify display parameters of the virtual content to match the state of the user”; [0205], “Device settings of an extended reality appliance refers to one or more parameter values…device settings may refer to the configurable options and preferences that users can customize to tailor their extended reality experience to their liking or specific needs”). Levi and Totty are analogous art since both deal with three dimensional data in augmented reality using machine learning. Totty provided a way of virtual interaction with three dimensional indoor room design by customizing the virtual object in the three dimensional model using machine learning in augmented reality. Levi provided a way of processing objects in the augmented reality environment by adjusting the object in the modeling environment according to multiple product parameters and/or field of view. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the object adjustment parameters taught by Levi into the modified invention of Totty, such that when customizing the object during room modeling in the three dimensional space, the system will be able to dynamically adjust the objects in the environment based on different parameters, which enhances the system's flexibility and enables a more precise customized modeling system during the machine learning training process.

Regarding Claim 5, it recites limitations similar in scope to the limitations of Claim 12 and therefore is rejected under the same rationale.
Regarding Claim 26, the combination of Totty, Wiedmeyer, Samson and Levi teaches the invention of claim 5, in which one or more processors are programmed with computer instructions to render the customized product. The combination further teaches generate the three dimensional virtual space corresponding to the physical space (Totty, Paragraph [0067], “virtual objects are processed to yield usable form (e.g. 2D images can be used as 3D artwork, wallpaper, pillows, fabrics, rugs, etc.; multiple photographs and/or room imagery of an object from different angles can be converted into a 3D model, etc.”) such that the customized product is rendered to replace an object depicted in a two dimensional image (Totty, Paragraph [0064], “the system can include a virtual object configurator, which functions to enable customization of the object attributes. The virtual object configurator can allow a user to select from and preview different product options, such as sizes, lengths, styles, colors, materials, woods, fabrics, optional accessories”; [0059], “The virtual object visual representation (VOVR) functions to visually represent the object. Examples of VOVRs that can be used include: a 2D image”; [0149], “FIG. 11; wherein a real object can be identified using the visual search, and replaced with virtual alternatives of the same or similar object; matching one or more unique features to a virtual object; and importing the virtual object into the virtual room”).

Response to Arguments

Applicant's arguments with respect to claims 1, 8, and 15, filed on 12/16/2025, with respect to the rejection under 35 USC § 103 have been considered but are moot in view of the new ground(s) of rejection. The amended limitations are taught by the combination of prior art references Totty, Wiedmeyer and Samson. In regard to Claims 2-7, 9-14, and 16-26, they depend directly or indirectly on independent Claims 1, 8, and 15, respectively. Applicant does not argue anything other than independent Claims 1, 8, and 15.
The limitations in those claims are taught by the combination previously established, as explained above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20220292543 A1, Pop-up retail franchising and complex economic system
US 20130215116 A1, System and Method for Collaborative Shopping, Business and Entertainment
US 20160210602 A1, System and method for collaborative shopping, business and entertainment

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI whose telephone number is (571)272-6669. The examiner can normally be reached 8:30am-5:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YuJang Tswei/
Primary Examiner, Art Unit 2614

Prosecution Timeline

Dec 30, 2024
Application Filed
Mar 14, 2025
Non-Final Rejection — §103
Jun 20, 2025
Response Filed
Jul 15, 2025
Final Rejection — §103
Dec 16, 2025
Request for Continued Examination
Jan 14, 2026
Response after Non-Final Action
Feb 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579805
AUGMENTED, VIRTUAL AND MIXED-REALITY CONTENT SELECTION & DISPLAY FOR TRAVEL
2y 5m to grant Granted Mar 17, 2026
Patent 12579838
Perspective Distortion Correction on Faces
2y 5m to grant Granted Mar 17, 2026
Patent 12567213
COMPUTER VISION AND ARTIFICIAL INTELLIGENCE METHOD TO OPTIMIZE OVERLAY PLACEMENT IN EXTENDED REALITY
2y 5m to grant Granted Mar 03, 2026
Patent 12567189
RELATIONAL LOSS FOR ENHANCING TEXT-BASED STYLE TRANSFER
2y 5m to grant Granted Mar 03, 2026
Patent 12561930
PARAMETRIC EYEBROW REPRESENTATION AND ENROLLMENT FROM IMAGE INPUT
2y 5m to grant Granted Feb 24, 2026
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+17.0%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 447 resolved cases by this examiner. Grant probability derived from career allow rate.
