Prosecution Insights
Last updated: April 17, 2026
Application No. 18/531,968

EXTENDED-REALITY STOREFRONT BUILDER

Final Rejection §103
Filed: Dec 07, 2023
Examiner: GRAY, RYAN M
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 2 (Final)
Grant Probability: 88% (Favorable)
OA Rounds: 3-4
To Grant: 2y 2m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 88% — above average (589 granted / 672 resolved; +25.6% vs TC avg)
Interview Lift: +10.9% — moderate (allowance rate for resolved cases with vs. without an interview)
Avg Prosecution: 2y 2m typical timeline; 18 applications currently pending
Career History: 690 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Comparison baseline is a Tech Center average estimate. Based on career data from 672 resolved cases.
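The headline examiner statistics above can be reproduced from the raw counts the report gives (589 granted of 672 resolved). This is a minimal arithmetic sketch; the per-case data behind the other percentages is not in the report.

```python
# Recompute the dashboard's headline metrics from the reported counts.
granted = 589
resolved = 672

career_allow_rate = granted / resolved        # fraction of resolved cases allowed
display_pct = round(career_allow_rate * 100)  # the dashboard's "88%"

vs_tc_avg = 25.6                              # percentage points above TC average, as reported
implied_tc_avg = display_pct - vs_tc_avg      # ~62.4% implied Tech Center average
```

The +10.9% interview lift cannot be recomputed the same way, since the with/without-interview case counts are not reported.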

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments and Remarks

Applicant's arguments filed 11/7/25 have been fully considered as follows: Applicant's argument is persuasive. Denham is cited below to address the amended subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Use of italics indicates a limitation that is not explicitly disclosed by the reference alone.

Claim(s) 1, 6, 9-12, 16, 19, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over High (US 2017/0116667) in view of Singh (US 2025/0078146) and Denham (US 2016/0292966).

Claim 1

High discloses a computer-implemented method for generating a no-code, three-dimensional extended-reality environment, the method comprising:

generating a floorplan of the extended-reality environment based on the selected template (High, ¶ 31: “The store layout models may include different types of display cases, shelves, and fixtures, different decoration and/or color schemes, etc. The store layout models may further include a floor plan and layout templates for stores and sections of a store. The models and layouts may be provided to the user device 332 to be rendered for projection display and/or may be at least partially rendered at the central computer system 310.”);

wherein the extended-reality environment is configured to be displayed on a head-mounted display configured to be worn by a buyer (High, ¶ 20: “the virtual store may be projected via a head-mounted display”), and wherein the extended-reality environment includes at least one selected from the group of: a virtual simulation of a physical environment (High, ¶ 11: “a virtual shopping space that offers an in-store shopping experience to customers through 3D projection virtual simulation”), or an augmented reality display of a physical environment (High, ¶ 18, 20: “In some embodiments, the virtual store may be projected via a head-mounted display, an augmented reality display…In some embodiments, the projection display device may display computer generated images that augments, overlays, partially obstructs, and/or fully obstructs the user's view of the physical space in front of the user.”);

customizing the extended-reality environment by generating a virtual customizable asset to be displayed in the floorplan of the extended-reality environment, wherein the virtual customizable asset is added to the floorplan of the extended-reality environment (High, ¶ 23: “In some embodiments, the display of the virtual store may be customized to different customers. In some embodiments, an arrangement of the plurality of interactive virtual items, an arrangement of sections of the virtual store, a display of in-store promotions, a virtual store decoration, a virtual store color scheme, and a virtual store lighting may be customized based on a user profile. For example, if a customer selects a vegan preference the store may be customized to only display non-animal products. In another example, if a customer never buys anything from the hardware department, the hardware department may be removed from or rearranged to the edge of that user's customized virtual store. In another example, the items and/or sections may be arranged such that items that are often purchased by the customer are spatially prioritized for easy access by the user (e.g. brought closer to the front of the virtual store, displayed on an eye-level shelf, etc.). In yet another example, the virtual store's appearance, decoration, and in-store promotions may also be modified based on user's demographic, preference, and/or shopping history information.”); and

upon receiving an indication of a selected product from a buyer, presenting a plurality of display options associated with the selected product, wherein the plurality of display options includes at least one selected from the group of (High, ¶ 17: “For example, specific motions (e.g. swipe down, draw a circle, etc.) may be associated with action commands such as “add an item to basket” and “check out and pay.” In some embodiments, the projection display device 120 may display a menu for the user to select commands and options. In some embodiments, the virtual store may include a menu overlay display and the user motions may correspond to menu navigation and selections.”):

displaying a manipulable three-dimensional model of the selected product (High, ¶ 16, 17: “projects a display of a three dimensional (3D) virtual space…the user's hand in the physical space and allow the user to manipulate the location and/or orientation of the virtual object with hand motion (e.g. pick up, turn around, etc.)”);

displaying additional information related to the selected product (High, ¶ 25: “when a user selects an item by either touching it, picking it up, and/or pointing to it in the virtual store, the user may be presented with a menu of options such as “more information,” “add to basket,” “purchase now,” etc.”);

displaying the product being used by an avatar in the virtual three-dimensional store to enable visualization of the selected product (High, ¶ 11, 24: “In some embodiments, the virtual store may further provide “try-on” functions that allow a customer to virtually overlay products with the customer and/or customer's physical environment such as the customer's home prior to purchasing the product. The try-on function may be provided for products with a visual aesthetic factor such as apparels, jewelry, furniture, and home decoration, etc.…If the user elects to try on an item, the system may project a visual representation of the item at scale either into the user's physical environment or onto an avatar of the user.”);

displaying the product in the virtual simulation of a physical environment to enable visualization of the selected product (High, ¶ 16: “The projection display device is configured to overlay an image of a product over the user's view of the physical space. For example, a display of furniture may overlay the customer's view of his/her living room.”); and

displaying a payment platform wherein the user is able to purchase the selected product without leaving the extended-reality environment (High, ¶ 11, 17: “The virtual store may also be configured to allow customers to add items to a purchase list and submit payments within the virtual environment…For example, specific motions (e.g. swipe down, draw a circle, etc.) may be associated with action commands such as “add an item to basket” and “check out and pay.”).
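The claim 1 flow mapped above (a seller selects a template, a floorplan is generated from it, and a customizable asset is added to that floorplan) can be sketched minimally in code. Every name below is invented for illustration; none of it comes from the application or the cited references.

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    name: str
    default_fixtures: list

@dataclass
class Floorplan:
    template: Template
    assets: list = field(default_factory=list)

    def place_asset(self, asset_id: str, x: float, y: float) -> None:
        # Record a customizable asset and its position on the plan
        # (the "added to the floorplan" step of claim 1).
        self.assets.append({"id": asset_id, "x": x, "y": y})

def generate_floorplan(selected: Template) -> Floorplan:
    # Claim 1 step: generate the environment's floorplan from the selected template.
    return Floorplan(template=selected)

plan = generate_floorplan(Template("boutique", ["shelf", "display_case"]))
plan.place_asset("promo_banner", x=2.5, y=0.0)
```

In a real builder the placed assets would then be rendered into the head-mounted display's XR scene; this sketch only models the data flow.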
High does not explicitly disclose, but Singh makes obvious using a drag and drop selection mechanism (Singh: “The inputs to the environment creator UI 124 can also provide the product data (e.g., product details, specifications, price information, videos, photos, shipping information, review, and so forth), the product models or images, and/or product locations mapped to the 3D model 112 (e.g., via a drag-and-drop feature).”). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use drag and drop. The claimed invention differs from the prior art because different UI techniques are used to customize the virtual layout. The functions of drag and drop were well known in the art, and one of ordinary skill in the art would have recognized that they may be substituted for other interaction techniques to achieve the same purpose. One of ordinary skill in the art could have made the substitution and the results would have been predictable because High considers customization and drag and drop is a known means for moving UI elements.

High does not explicitly disclose, but Denham makes obvious displaying, to a seller, a plurality of templates, wherein each of the templates is representative of an interior of a virtual three-dimensional store; receiving, from the seller, an indication of a selected template of the plurality of templates (Denham, ¶ 104: “The illustrated store creation module 50 includes a store template module 87 in communication with the modules and components of the store creation module 50. The store template module manages and stores store templates for merchants to use to create and maintain their virtual store within the virtual environment.”). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to allow the seller to select templates. One of ordinary skill in the art would have had motivation to allow the merchant, who is in the best position to know the preferred design, to select the desired layout or aesthetics. One of ordinary skill in the art would have had a reasonable expectation of success because High considers the use of pre-made templates.

Claim 6

High does not explicitly disclose, but Singh makes obvious wherein the indication of a selected product is made by an ocular movement detected by the head-mounted display (Singh, ¶ 41: “For example, inward facing cameras of XR device 210 may be used to detect the user's gaze on a product. A product selection may be automatically made if the user's gaze is held on the product for a threshold period of time”). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider gaze control. One of ordinary skill in the art would have had motivation to allow hands-free operation, and gaze control functions as an alternative input means. One of ordinary skill in the art would have had a reasonable expectation of success because High considers eye trackers as an input device in an HMD context.

Claim 9

High does not explicitly disclose, but Singh makes obvious further comprising: upon receiving the indication of a selection of a display option of the plurality of display options, rendering a display based on the display option without obscuring any other displays that were previously rendered (e.g., using a layout; “FIG. 1B depicts an illustrative scenario 150 of providing product recommendations in an optimized virtual store 155 (e.g., in an optimized view of the virtual reality store), in accordance with some embodiments of the disclosure. In an embodiment, the scenario 150 comprises user 101, view field 102, view boundaries 103, optimized virtual store 155, product of interest 110, and associated products 160, 162, 164, 166, 168, and 170 which are recommended to user 101. Continuing from the example in FIG. 1A, user 101 previously selected product of interest 110 (e.g., bathroom vanity) within virtual store 105 (e.g., in its default view). A product use (e.g., vanity renovation project) was identified based on the product of interest 110. Associated products were identified based on the product use. In the example, the associated products include required products for the product use (e.g., faucets 160, pipe wrenches 162, and supply lines 164) and optional products for the product use (e.g., putty 166, drain stoppers 168, and sealant tape 170). As illustrated in FIG. 1B, upon user 101 selecting the product of interest 110, optimized virtual store 155 may be automatically displayed within the view field 102, replacing the default view of virtual store 105. In an embodiment, optimized virtual store 155 comprises a layout that is optimized for assisting the user 101 in shopping for products to accomplish the product use.”).

[Image: Singh, FIG. 1B]

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider non-obscuring content. One of ordinary skill in the art would have had motivation to avoid covering content a user may still be interested in. One of ordinary skill in the art would have had a reasonable expectation of success because window layouts function in the same context when providing more than one object on the display.

Claim 10

High discloses further comprising customizing the extended-reality environment by generating a virtual interactive activity that is an activity associated with a product in the extended-reality environment (e.g., product interaction; try-on; High, ¶ 22: “In some embodiments, an arrangement of the plurality of interactive virtual items, an arrangement of sections of the virtual store, a display of in-store promotions, a virtual store decoration, a virtual store color scheme, and a virtual store lighting may be customized based on a user profile.”).

Claim 11

High does not explicitly disclose, but Applicant's admitted prior art makes obvious wherein generating a floorplan of the extended-reality environment based on the selected template causes a reduction in greenhouse gas emissions compared to providing a physical environment based on the floorplan (Specification, ¶ 175: “According to a 2020 MIT Lab study Retail Carbon Footprints: Measuring Impacts from Real Estate and Technology, approximately 0.749 kgCO2e/item is generated for e-commerce, versus 1.181 kgCO2e/item for in-store retail in a base case.”). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider energy savings. One of ordinary skill in the art would have had motivation for commercial, marketing, or other purposes. One of ordinary skill in the art would have had a reasonable expectation of success because on average a digital store would necessarily have reduced emissions (see, e.g., Retail Carbon Footprints).

Claim 12

The same teachings and rationales in claim 1 are applicable to claim 12, with High further disclosing a corresponding system for generating an extended-reality environment, the system comprising: a head-mounted display configured to be worn by a user; a non-transitory computer memory; a processor configured to execute computer-readable instructions stored in the non-transitory computer memory, wherein when the computer-readable instructions are executed the processor is configured to (e.g., Fig. 1).

Claim 16

The same teachings and rationales in claim 6 are applicable to claim 16.
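The gaze-based selection Singh is cited for in claims 6 and 16 (a product counts as selected once the wearer's gaze is held on it past a threshold) can be sketched as a dwell-time check. The 1.5-second threshold and the sample format below are invented for illustration; the cited art gives no concrete values.

```python
from typing import Optional

DWELL_THRESHOLD_S = 1.5  # assumed dwell threshold; not specified in the art

def select_by_gaze(samples: list, threshold: float = DWELL_THRESHOLD_S) -> Optional[str]:
    """samples: (timestamp_s, product_id) pairs for whatever the gaze rests on.
    Returns the first product gazed at continuously for at least `threshold`."""
    current, start = None, 0.0
    for t, product in samples:
        if product != current:
            # Gaze moved to a new product: restart the dwell timer.
            current, start = product, t
        elif t - start >= threshold:
            return current
    return None

# Gaze flicks past the lamp, then holds on the vanity long enough to select it.
gaze = [(0.0, "lamp"), (0.4, "vanity"), (1.0, "vanity"), (2.0, "vanity")]
selected = select_by_gaze(gaze)
```

A real head-mounted display would feed this from its inward-facing eye-tracking cameras; the list of timestamped samples merely stands in for that stream.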
Claim 19

The same teachings and rationales in claim 11 are applicable to claim 19.

Claim 20

Examiner's Interpretation: Machine readable media can encompass forms of signal transmission media that fall outside of the four statutory categories of invention. MPEP 2106; citing In re Nuijten, 500 F.3d 1346, 84 USPQ2d 1495 (Fed. Cir. 2007). A claim whose BRI covers both statutory and non-statutory embodiments embraces subject matter that is not eligible for patent protection and therefore is directed to non-statutory subject matter. MPEP 2106. Claim 20 recites a non-transitory computer readable medium… Because the use of “non-transitory” explicitly excludes the above ineligible subject matter, the broadest reasonable interpretation of the claimed medium in view of Applicant's specification covers only eligible subject matter.

Claim Mapping: The same teachings and rationales in claim 1 are applicable to claim 20, with High disclosing a non-transitory computer readable medium with instructions stored thereon that, when executed by a processor of a computing device, cause the computing device to perform operations comprising (¶ 41).

Claim(s) 5, 7, 8, 15, 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over High (US 2017/0116667) in view of Singh (US 2025/0078146), Denham (US 2016/0292966) and Berger (US Patent 11,580,592).

Claim 5

High does not explicitly disclose, but Berger makes obvious further comprising generating an avatar configured to be displayed within the extended-reality environment and that is controllable by the user, wherein the avatar shares visual characteristics with an avatar associated with the user on another platform (Berger: “For example, external resources that include full-scale external applications (e.g., a third-party or external application 109) are provided with access to a first type of user data (e.g., only two-dimensional avatars of users with or without different avatar characteristics). As another example, external resources that include small-scale versions of external applications (e.g., web-based versions of third-party applications) are provided with access to a second type of user data (e.g., payment information, two-dimensional avatars of users, three-dimensional avatars of users, and avatars with various avatar characteristics). Avatar characteristics include different ways to customize a look and feel of an avatar, such as different poses, facial features, clothing, and so forth.”). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use a user's visual characteristics. One of ordinary skill in the art would have had motivation to increase personalization of a user experience, such as in a try-on application. One of ordinary skill in the art would have had a reasonable expectation of success because High uses both avatars of the user and a try-on interactive experience.

Claim 7

High does not explicitly disclose, but Berger makes obvious wherein the head-mounted display is located in a physical store and the extended-reality environment is customized to augment a shopping experience in the physical store (Berger: “The AR storefront includes one or more AR items that are overlaid on real-world objects depicted in the camera feed of the client device 102 of the first user. The AR items include visual attributes that correspond to the physical layout of the real-world objects depicted in the real-world environment in the camera feed. For example, as shown in FIG. 6, the user interface 600 is presented on the client device 102 of a first user in which one or more AR items that are overlaid on real-world objects depicted in the camera feed of the client device 102 of the first user”).

[Image: Berger, FIG. 6]

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider a physical overlay.
One of ordinary skill in the art would have had motivation to provide detail in context (such as through AR). One of ordinary skill in the art would have had a reasonable expectation of success because use of AR as an alternative is suggested by High in other contexts (such as a store booth, kiosk, etc.) and could be readily applied to a physical store.

Claim 8

High does not explicitly disclose, but Berger makes obvious further comprising: recording a quantity of the product available for purchase; and updating the quantity when a purchase is made on the payment platform (Berger: “The shared shopping experience system 224 can select how many AR items to present based on the inventory of the store. For example, the shared shopping experience system 224 can determine that there are 30 pants available in the store inventory. In response, the shared shopping experience system 224 can display up to a maximum of 7 AR items corresponding to pants in a first stack of AR items. As another example, the shared shopping experience system 224 can determine that there are 6 shirts available in the store inventory”). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to track inventory. Inventory management was known in the prior art, but is not disclosed by High. One of ordinary skill in the art could have incorporated inventory management because it would be a well understood task in a shopping context, and the results would have been predictable because of the recognized need to track inventory in a commercial context.

Claim 15

The same teachings and rationales in claim 5 are applicable to claim 15.

Claim 17

The same teachings and rationales in claim 8 are applicable to claim 17.

Claim(s) 2, 4, 14, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over High (US 2017/0116667) in view of Singh (US 2025/0078146), Denham (US 2016/0292966) and Scaff (US 2025/0054044).

Claim 2

Examiner's Interpretation: Claim 2 is interpreted in the alternative due to the language “selected from the group of.”

Claim Mapping: High does not disclose, but Scaff makes obvious wherein the virtual customizable asset is at least one selected from the group of: a picture of a product, wall art, video content, or background music (¶ 52, cited for each of these alternatives: “Broadly speaking, the online marketplace 114 is configured to generate listings for items and to expose those listings (e.g., publish them) to one or more computing devices, including the computing device 102. For example, the online marketplace 114 may generate listings for items for sale and expose those listings to computing devices, such that the users of the computing devices can interact with the listings via user interfaces to initiate transactions (e.g., purchases, add to wish lists, share, and so on) in relation to the respective item or items of the listings. In accordance with the described techniques, the online marketplace 114 is configured to generate listings for one or more types of physical goods or property (e.g., clothing and/or clothing accessories, collectibles, furniture, decorative items, textiles, luxury items, electronics, real property, physical computer-readable storage having one or more video games stored thereon, and so on), services (e.g., babysitting, dog walking, house cleaning, and so on), digital items (e.g., digital images, digital music, digital videos) that can be downloaded via the network(s) 108, and blockchain backed assets (e.g., non-fungible tokens (NFTs)), to name just a few.”), a three-dimensional model of a product created by a machine learning algorithm (¶ 76, 77: “For example, a three-dimensional avatar of the user wearing the seed clothing item and the complementary clothing items may be generated and displayed via a user interface… In one or more implementations, the prompt 138 for the generative artificial intelligence 122 excludes a clothing type of the seed clothing item 136. For example, if the seed clothing item is a red shirt, then the prompt 138 asks the generative artificial intelligence 122 to locate clothing items other than shirts which complement the seed clothing item 136.”), the machine learning algorithm including: extracting, from a particular two-dimensional image of a product, a feature vector describing the particular two-dimensional image of a product (¶ 20: “one or more implementations, a user provides an image of a clothing item to the automated outfit curation system, e.g., by uploading or capturing an image of the clothing item. In other implementations, a request is received to curate an outfit based on a clothing item that is listed on the online marketplace. For example, the user may select a clothing item that is available on the online marketplace, and then request curation of an outfit that includes the selected clothing item.”); and generating, using a machine learning algorithm, a particular three-dimensional model of a product based on the feature vector, wherein the machine learning algorithm is trained, based on input training features, to produce a corresponding three-dimensional model of a product (¶ 62, 70: “additionally or alternatively, the prompt building logic 124 may form a prompt 138 using a partial prompt that is preconfigured, but that is a different type of information from human-readable text, such as a partial feature vector, which is not human understandable text. In such implementations, the clothing attributes 140 also may be extracted and indicated using information that is different from human-readable text. In one or more variations, for instance, the clothing attributes 140 may be expressed in a feature vector format such that they can be combined with a partial feature vector to form a prompt which is a feature vector. The prompt 138 may be formatted in a variety of ways for input to the generative artificial intelligence 122 without departing from the spirit or scope of the techniques described herein… For example, a three-dimensional avatar of the user wearing the seed clothing item and the complementary clothing items may be generated and displayed via a user interface”).

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use the claimed customization features in designing a virtual store. One of ordinary skill in the art would have had motivation to automate manual capture, editing, or generation of virtual products. One of ordinary skill in the art would have had a reasonable expectation of success because High's manually generated products could be improved upon.
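The machine-learning pipeline recited in claim 2 (extract a feature vector from a 2-D product image, then generate a 3-D model from that vector) can be sketched with placeholders. The hash-based "encoder" and dict "model" below are stand-ins invented for illustration; a real system would use trained networks, which neither this sketch nor the cited passages specify.

```python
import hashlib

def extract_feature_vector(image_bytes: bytes, dims: int = 4) -> list:
    # Placeholder "encoder": derive a deterministic vector from the image
    # bytes. A trained image encoder would replace this.
    digest = hashlib.sha256(image_bytes).digest()
    return [b / 255.0 for b in digest[:dims]]

def generate_3d_model(features: list) -> dict:
    # Placeholder "generator": a trained generative model conditioned on the
    # feature vector would replace this toy mesh construction.
    return {"vertices": [(f, f, f) for f in features], "source": "generated"}

vec = extract_feature_vector(b"fake-product-photo")
model = generate_3d_model(vec)
```

The point of the sketch is only the data flow the claim recites: image in, feature vector out, feature vector in, 3-D model out.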
Claim 4 High does not explicitly disclose, but Scaff discloses wherein the three-dimensional model of a product is generated by: obtaining a particular two-dimensional image of a product via a GUI; and displaying, via the GUI, a particular three-dimensional model of a product generated by one or more generative machine learning algorithms, wherein the particular three-dimensional model of a product is generated by the one or more generative machine learning algorithms in response to the particular two-dimensional image of a product, wherein the one or more generative machine learning algorithms are trained to generate, in response to input training data, a corresponding three-dimensional model of a product (¶ 62, 70: “additionally or alternatively, the prompt building logic 124 may form a prompt 138 using a partial prompt that is preconfigured, but that is a different type of information from human-readable text, such as a partial feature vector, which is not human understandable text. In such implementations, the clothing attributes 140 also may be extracted and indicated using information that is different from human-readable text. In one or more variations, for instance, the clothing attributes 140 may be expressed in a feature vector format such that they can be combined with a partial feature vector to form a prompt which is a feature vector. 
The prompt 138 may be formatted in a variety of ways for input to the generative artificial intelligence 122 without departing from the spirit or scope of the techniques described herein… For example, a three-dimensional avatar of the user wearing the seed clothing item and the complementary clothing items may be generated and displayed via a user interface”), and wherein the corresponding three-dimensional model of a product comprises a newly generated portion of content that shares one or more properties with the input training data and thereby extends the input training data (¶ 65: “Alternatively or in addition, the generative artificial intelligence 122 is trained or otherwise programmed so that it incorporates the seed clothing item 136 into the outfits that it outputs.”) Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use generative technologies. One of ordinary skill in the art would have motivation to automate manual capture, editing or generation of virtual products. One of ordinary skill in the art would have had a reasonable expectation of success because High could be improved from manually generated products. Claim 14 The same teachings and rationales in claim 4 are appliable to claim 14. Claim 18 The same teachings and rationales in claim 2 are appliable to claim 18. Claim(s) 3, 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over High (US 2017/0116667) in view of Singh (US 2025/0078146), Denham (US 2016/0292966) and Uphadhya, FloorGAN: Generative Network for Automated Floor Layout Generation Claim 3 High discloses wherein the floorplan is generated by: obtaining particular floorplan design instructions via a graphical user interface (GUI) (e.g. CAD; editor; native AR/VR interface; ¶ 31: “The store and item model database 322 may contain various store layout models, display shelf models, and/or 3D models of individual items offered for sale. 
The 3D models of items offered for sale may be a computer aided design (“CAD”) model and/or a 3D scan of the actual item. The store layout models may include different types of display cases, shelves, and fixtures, different decoration and/or color schemes, etc. The store layout models may further include a floor plan and layout templates for stores and sections of a store. The models and layouts may be provided to the user device 332 to be rendered for projection display and/or may be at least partially rendered at the central computer system 310.”)

High does not disclose, but Uphadhya makes obvious, displaying, via the GUI, a particular floorplan generated by one or more generative machine learning algorithms, wherein the particular floorplan is generated by the one or more generative machine learning algorithms in response to the particular floorplan design instructions, wherein the one or more generative machine learning algorithms are trained to generate, in response to input training data, a corresponding floorplan, and wherein the corresponding floorplan comprises a newly generated portion of content that shares one or more properties with the input training data and thereby extends the input training data (Fig. 2; abstract: “In this work, we propose a generative adversarial network, FloorGAN, to synthesize floor plans guided by user constraints. Our approach considers user inputs in the form of room types, and spatial relationships and generates layout designs that satisfy these requirements. We evaluate our approach on the dataset, RPLAN, consisting of 80,000 vector-graphics floor plans of residential buildings designed by professional architects. We perform both qualitative and quantitative analysis along three metrics - Realism, Diversity, and Compatibility to evaluate the generated layout designs. We compare our approach with the existing baselines and outperform on all these metrics.
The layout designs generated by our approach are more realistic and of better quality.”)

[Image: media_image3.png (405 × 679, greyscale)]

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use generative technologies. One of ordinary skill in the art would have had motivation to improve the design process (“Floor plan design is an iterative, time-consuming and trial and error based process between designers and users, which requires significant expertise and experience. CAD based sketching tools help architects and designers in designing plan layouts but it requires expertise and vast knowledge of CAD tools”) (Page 140). One of ordinary skill in the art would have had a reasonable expectation of success because High uses designed templates and CAD tools, which could be improved.

Claim 13

The same teachings and rationales in claim 3 are applicable to claim 13.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY whose telephone number is (571)272-4582. The examiner can normally be reached on Monday through Friday, 9:00am-5:30pm (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached on (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN M GRAY/
Primary Examiner, Art Unit 2611
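The FloorGAN ground turns on a generator conditioned on user constraints (room types and spatial relationships). The sketch below illustrates only that conditioning idea: flattening the two constraint kinds into one vector and concatenating it with noise, as conditional GAN generators typically consume. It is a hypothetical illustration, not code from the FloorGAN paper or any cited reference, and every name in it (ROOM_TYPES, conditioning_vector, generator_input) is invented here.

```python
import random

# Hypothetical room vocabulary -- illustrative only; FloorGAN's actual
# encoding is not reproduced in the office action.
ROOM_TYPES = ["living", "bedroom", "kitchen", "bathroom", "balcony"]

def one_hot(room):
    """Encode a single room type as a one-hot vector over ROOM_TYPES."""
    return [1.0 if r == room else 0.0 for r in ROOM_TYPES]

def conditioning_vector(rooms, adjacency):
    """Flatten user constraints (room types + spatial relationships)
    into one fixed-length vector a conditional generator could consume."""
    n = len(rooms)
    # Room-type block: concatenated one-hot encodings.
    vec = [x for room in rooms for x in one_hot(room)]
    # Spatial-relationship block: flattened symmetric adjacency matrix.
    adj = [[0.0] * n for _ in range(n)]
    for i, j in adjacency:
        adj[i][j] = adj[j][i] = 1.0
    vec.extend(x for row in adj for x in row)
    return vec

def generator_input(cond, noise_dim=8):
    """A conditional GAN generator typically consumes the constraint
    vector concatenated with a random noise vector."""
    noise = [random.gauss(0.0, 1.0) for _ in range(noise_dim)]
    return cond + noise

cond = conditioning_vector(["living", "bedroom", "kitchen"],
                           adjacency=[(0, 1), (0, 2)])
z = generator_input(cond)
# 3 rooms x 5 types = 15 one-hot entries, 3 x 3 adjacency = 9 entries,
# plus 8 noise dimensions: 32 generator inputs in total.
```

In a trained system, a generator network would map such a vector to a layout while a discriminator scores (layout, constraints) pairs; that "guided by user constraints" behavior is what the rejection maps onto the "particular floorplan design instructions" limitation.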

Prosecution Timeline

Dec 07, 2023
Application Filed
Aug 08, 2025
Non-Final Rejection — §103
Oct 03, 2025
Interview Requested
Oct 10, 2025
Examiner Interview Summary
Oct 10, 2025
Applicant Interview (Telephonic)
Nov 07, 2025
Response Filed
Jan 22, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597216
ARTIFICIAL INTELLIGENCE VIRTUAL MAKEUP METHOD AND DEVICE USING MULTI-ANGLE IMAGE RECOGNITION
2y 5m to grant Granted Apr 07, 2026
Patent 12586252
METHOD FOR ENCODING THREE-DIMENSIONAL VOLUMETRIC DATA
2y 5m to grant Granted Mar 24, 2026
Patent 12572892
SYSTEMS AND METHODS FOR VISUALIZATION OF UTILITY LINES
2y 5m to grant Granted Mar 10, 2026
Patent 12561928
SYSTEMS AND METHODS FOR CALCULATING OPTICAL MEASUREMENTS AND RENDERING RESULTS
2y 5m to grant Granted Feb 24, 2026
Patent 12542946
REMOTE PRESENTATION WITH AUGMENTED REALITY CONTENT SYNCHRONIZED WITH SEPARATELY DISPLAYED VIDEO CONTENT
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
88%
Grant Probability
98%
With Interview (+10.9%)
2y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 672 resolved cases by this examiner. Grant probability derived from career allow rate.
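The headline figures above follow from the examiner's reported career numbers (589 granted of 672 resolved). A minimal sketch of that arithmetic is below; note the dashboard does not disclose its exact formula for the interview-adjusted figure, so treating the +10.9% lift as a relative (multiplicative) increase on the rounded career rate is an assumption made here because it reproduces the displayed 98%.

```python
# Examiner career figures as reported in the dashboard above.
granted, resolved = 589, 672

# Career allow rate: 589 / 672 is about 87.6%, displayed rounded as 88%.
allow_rate_pct = round(granted / resolved * 100)

# Interview lift of +10.9%. Assumption: applied as a relative
# (multiplicative) lift on the rounded career rate, which matches
# the displayed "98% With Interview" figure.
lift = 0.109
with_interview_pct = round(allow_rate_pct * (1 + lift))

print(allow_rate_pct, with_interview_pct)  # 88 98
```

An additive reading (88% plus 10.9 percentage points, capped near 99%) would land in the same neighborhood, so the displayed figures are consistent with either convention.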
