Prosecution Insights
Last updated: April 19, 2026
Application No. 18/591,612

GOODS AND SERVICES CONTENT SELECTION BASED ON ENVIRONMENT SCANS

Status: Final Rejection (§103)
Filed: Feb 29, 2024
Examiner: GRAY, RYAN M
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Adeia Guides Inc.
OA Round: 2 (Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 2m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 88% (589 granted / 672 resolved; +25.6% vs TC avg; above average)
Interview Lift: +10.9% (moderate) among resolved cases with an interview
Avg Prosecution: 2y 2m typical timeline; 18 applications currently pending
Total Applications: 690 across all art units
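
The headline figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of that arithmetic in Python; the ResolvedCase record and the relative-lift definition are illustrative assumptions, not the product's actual schema or formula:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool          # did the application issue as a patent?
    had_interview: bool    # was an examiner interview held?

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Relative allow-rate improvement for cases with an interview versus
    cases without one (the assumed definition behind the +10.9% figure)."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) / allow_rate(without_iv) - 1.0

# Career allow rate from the card above: 589 grants out of 672 resolved cases.
print(f"{589 / 672:.1%}")  # -> 87.6%, displayed as 88%
```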

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Deltas are measured against a Tech Center average estimate, based on career data from 672 resolved cases.
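
The chart data likewise reduces to per-statute shares compared against a Tech Center baseline. A sketch under the same caveats (illustrative names; how the baseline is stored is assumed):

```python
from collections import Counter

def statute_shares(statutes: list[str]) -> dict[str, float]:
    """Fraction of rejection events citing each statute ('101', '102', ...)."""
    counts = Counter(statutes)
    total = sum(counts.values())
    return {s: n / total for s, n in counts.items()}

def delta_vs_baseline(shares: dict[str, float],
                      baseline: dict[str, float]) -> dict[str, float]:
    """Percentage-point difference against the Tech Center estimate."""
    return {s: share - baseline.get(s, 0.0) for s, share in shares.items()}
```

For example, the displayed +28.4% for §103 is consistent with the 68.4% share measured against a 40% Tech Center estimate.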

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments and Remarks

Applicant's arguments filed 11/19/25 have been fully considered as follows: Applicant's arguments with respect to Miller are persuasive. Ghadar is cited below to address the amended subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Use of italics indicates a limitation is not explicitly disclosed by the reference alone.

Claims 1, 3-8, 12-17, 19, and 56 are rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 2021/0183128) in view of Maschmeyer (US 2023/0401802) and Ghadar (US 2019/0295151).

Claim 1

Miller discloses a method comprising: extracting environment information from received mapping data of an environment, wherein the environment information includes available space (¶ 75: "Once the dimensions of the desired work areas are established, the user can scan around the room via the mobile device to see the measured work areas where, e.g., new cabinets are to be installed. (Note that the work areas/dimensions shown outside of AR viewer are for illustrative purposes only—i.e., they would only be seen within the AR environment 902 shown on the device screen.) Accordingly, the AR environment is configured to measure and render a set of work areas within a room, which may be captured and stored using panoramic imagery."), a catalog of one or more recognized objects in the environment (¶ 63: "In step 405, the tagged Photo Image is identified and processed by Image Recognition software. The tagged Photo Image is then stored in the Image Recognition RDB 193 (step 410). This builds the Image Recognition RDB 193 for future improvement and 'smart' system functionality via Image Recognition software capabilities."), and placement of the one or more recognized objects in the environment (¶ 66: "Each image of a training data set may include metadata tags assigned by a user, one or more properties identified by Image Recognition Software, and/or third-party data associated with a given image. In response to executing a ML algorithm on visual data selected by a user, Application 105 may display one or more design components in an augmented reality (AR) or virtual reality (VR) environment via Mobile Computing Device 100—e.g., Application 105 may display one or more design components in Design View of step 259. In response to executing a ML algorithm on visual data selected by a user, Application 105 may generate a three-dimensional (3D) digital representation of a design component based on one or more properties of the design component and a pre-built 3D model scaffold."); accessing a storage of supplemental content items (Fig. 9B; ¶ 76: "Shown along the bottom bar of the application are a set of design component options 908 that the user can drag and drop. The design components 908 may for example be selected, ranked and displayed using a ML or other algorithm. For example, the options 908 may be selected based on the theme of the room or other rooms in the house, the likely return on investment, user inputs, available stock, dimensions, etc."); selecting, based at least in part on the environment information and the user profile, a supplemental content item (¶ 69: "The ML algorithm identifies two design components of Room B: a leather couch and a mahogany coffee table. The ML algorithm utilizes a training data set to determine complementary components for Room A based on the identified design components of Room B. Application 105 displays one or more complementary design components suggested to include in Room A for the user to engage in an AR environment via the mobile computing device."); and generating, for display on a display device, the supplemental content item (¶ 69, quoted above).

Miller does not explicitly disclose, but Maschmeyer makes obvious, identifying, from a profile data source, a user profile (Maschmeyer, ¶ 72: "For example, collections of 3D objects 24 selected by users or identified automatically by the object viewer tool 12 can be fed to the machine learning tool 28 to train one or more machine learning models that permit the object viewer tool 12 to determine characteristics that can be tagged in the 3D models 24 and/or collections 26. The characteristics modeled and tagged in this way can be based on product type or other characteristics such as gender, style, category, theme, etc. The machine learning tool 28 can therefore be leveraged by the object viewer tool 12 to determine suitable replacement objects 24 to suggest to a user based on an input requesting replacement of the objects 24 as discussed further below."). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider a user profile such as user interest. One of ordinary skill in the art would have had motivation to provide relevant products to the user based on interest, purchasing behavior, etc., and would have had a reasonable expectation of success because Miller also considers providing suggestions to the user in the same context.

Miller and Maschmeyer do not explicitly disclose, but Ghadar discloses, comparing the one or more recognized objects in the environment indicated in the catalog to objects associated with the stored supplemental content items to identify a subset of the objects associated with the stored supplemental content items that do not correspond to the one or more recognized objects in the environment (Ghadar, Fig. 3; ¶ 18: "In some embodiments, feature vectors of the items in the electronic catalog, or in certain product categories of the electronic catalog, are compared to feature vectors of the other objects in the identified example set, and the visually similar items are determined based on a metric such as euclidean distance. Thus, the items can be provided to the user as recommended products that may be aesthetically compatible with their query object."). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider the comparison as claimed. One of ordinary skill in the art would have had motivation to provide relevant products to the user based on interest, purchasing behavior, etc., and would have had a reasonable expectation of success because Miller also considers providing suggestions to the user in the same context.

Claim 3

Miller discloses further comprising receiving via a first device the received mapping data, wherein the first device and the display device are different devices (¶ 47: "A photo image may be captured by various devices including but not limited to a camera device intrinsic to the System Mobile Device 100, another mobile device such as a cellular phone or a computing tablet, a camera device independent of the disclosed system hardware, or a scanning device which is independent of the disclosed system hardware.").

Claim 4

Miller discloses wherein the selected supplemental content item is an image or video of an object or service related to the supplemental content item (Fig. 9C; ¶ 77: "A kitchen cabinet design component 910 is selected (i.e., dragged and dropped) by the user and rendered on the wall that was previously empty as shown in FIG. 9B. During the drag and drop process, the component 910 may be initially displayed and manipulated by the user (location and orientation) to be established proximate the workspace on the wall and then automatically scaled and rendered in an updated 3D rendering as shown.").

Claim 5

Miller discloses further comprising receiving style information of the environment, and wherein selecting the supplemental content item is further based on the style information of the environment (¶ 66: "The ML algorithm may identify home improvement design solutions based on one or more attributes of visual data selected by the user—such as, e.g., calculated dimensions of an area to be renovated, identified design components, internet browser metadata, color schemes, styles, or user feedback.").

Claim 6

Miller discloses further comprising receiving historical information of the environment, and wherein selecting the supplemental content item is further based on the historical information of the environment (¶ 72: "Collecting property data in step 601 may include obtaining historical data sets associated with real estate listings and attributes of each listing such as, for example, region demographic 601A, consumer profile 601B, transaction 601C, and property image 601D. Image recognition in step 603 includes executing a function on property image 601D of each item in collected property data 601 to identify components 605. Subsequently, compile training data 607 includes assembling the identified components 605, region demographic 601A, consumer profile 601B, transaction 601C and property image 601D into a training data set 607 to train a machine learning algorithm. The compiled training data 607 trains machine learning (ML) algorithm 609 to generate a ML algorithm capable of deriving properties and suggestions for future home improvement projects. Finally, deploy ML algorithm 611 includes enabling the Application 105 to programmatically access the ML algorithm to execute functions on a user's visual data.").

Claim 7

Miller discloses wherein the received mapping data of the environment includes a three-dimensional scan of the environment (¶¶ 9, 52, 75: "For example, the Application 105 may employ Point Cloud (PC) data to render and display one or more three-dimensional (3D) models of design components in Design View of step 259. ... Data received and processed by the System Calibration Server is then stored in the System Calibrate RDB according to the calibration process used: either Frame of Reference; Point of Reference; or 3D Scan. ... enabled mobile computing device 900 collects visual data by scanning the room. As the user scans the room, the user is able to calculate dimensions of various aspects (i.e., work areas 904, 906) of the room.").

Claim 8

Miller does not disclose, but Maschmeyer makes obvious, wherein the user profile further includes indicated interest in an object, and wherein selecting the supplemental content item is further based on the indicated interest in an object (Maschmeyer, ¶ 72, quoted above for claim 1). The same motivation and reasonable-expectation rationale given for claim 1 applies.

Claims 12-17, 19

The same teachings and rationales in claims 1, 2, 3, 4, 5, 6, and 8 are applicable to claims 12, 13, 14, 15, 16, 17, and 19, respectively.

Claim 56

Miller and Maschmeyer do not explicitly disclose, but Ghadar discloses, wherein the subset is a first subset, the method further comprising: comparing the one or more recognized objects in the environment indicated in the catalog to objects associated with the stored supplemental content items to identify a second subset of the objects associated with the stored supplemental content items that correspond to the one or more recognized objects in the environment; and refraining from recommending at least one supplemental content item associated with the second subset of objects (e.g., displaying the best set out of the total potential set; "The database is searched to identify an example set of compatible items 308a, 308b, 308c, 310 from an image of a styled room 306 that includes a corresponding object 312 having a similar feature vector as the query object 304, which means that the example set includes a corresponding object visually similar to the query object 304. In some embodiments, the measure of similarity is determined using a K-Nearest Neighbor technique with euclidean distance. Since the other objects 308a, 308b, 308c in the identified example set of objects are known to be considered aesthetically compatible with the corresponding [object], these other objects 308a, 308b, 308c are likely to be aesthetically compatible with the query object 304 and can be used to recommend items to the user [that] are compatible with the query objects."). The same motivation and reasonable-expectation rationale given for claim 1 applies.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 2021/0183128) in view of Maschmeyer (US 2023/0401802), Ghadar (US 2019/0295151), and Seroussi (US 2025/0037395).

Claim 2

Miller discloses wherein selecting the supplemental content item comprises (¶¶ 75-76: "As the user scans the room, the user is able to calculate dimensions of various aspects (i.e., work areas 904, 906) of the room, e.g., the dimensions on the wall where cabinets are to be installed. Dimensions may be calculated by pointing the camera at a location and inserting shapes (e.g., a rectangle) onto a surface or selecting points in the AR environment using the touch enabled interface. ... For example, the options 908 may be selected based on the theme of the room or other rooms in the house, the likely return on investment, user inputs, available stock, dimensions, etc."). Miller does not explicitly disclose, but Seroussi discloses, determining, based at least in part on the available space and the placement of the one or more recognized objects (¶ 107: "Next, at 305 one or more candidate designable area within the real-world environment are defined as an output based on at least the sense engine module output. The output (candidate designable area) is analyzed at method operation 306 for its spatial attributes (e.g., size, dimensions, location, orientations, proximity to other objects, etc.) relative the real-world environment scene data of operation 302 and other designable areas based on the data from 304. Based on the analysis it may be determined what product category (one or more) is suitable for the designable area (e.g., if the designable area is on the wall, a product category may be shelves, artwork, etc.)."). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider available space. One of ordinary skill in the art would have had motivation to select appropriately sized objects for display, and would have had a reasonable expectation of success because Miller also considers placement of objects in a defined space.

Claims 9-11 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Miller (US 2021/0183128) in view of Maschmeyer (US 2023/0401802), Ghadar (US 2019/0295151), and Kawamae (US 2022/0114792).

Claim 9

Miller does not disclose, but Kawamae makes obvious, further comprising extracting environment information from a received mapping data of a second environment, wherein selecting the supplemental content item is further based on the environment information of the second environment (¶ 8: "a mixed reality display device is configured to be connected to other mixed reality display devices worn by other experiencing persons and to cause the experiencing persons to share the reality video and the video of the virtual object with each other ... FIGS. 18A and 18B are sequence charts of the MR network system, illustrating data transmission and reception between each of the HMDs 1a, 1b, and 1c (each of the MR experiencing persons 2a, 2b, and 2c) and the application server 52."). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider a second environment such as a collaborative user environment. One of ordinary skill in the art would have had motivation to allow multiple users to view an environment and make edits, and would have had a reasonable expectation of success because Miller considers augmented reality editing of a space.

Claim 10

Miller does not disclose, but Kawamae makes obvious, wherein the selecting, based on the environment information and the user profile, a supplemental content item is further based on a field of view of the display device, and wherein the environment extends beyond the field of view (Fig. 1A; ¶ 44: "The camera images the real space, and the experiencing person visually recognizes an MR video by the MR video displayed on the display. Imaging angle of view of cameras built in the HMDs 1a, 1b, and 1c are indicated by reference signs 3a, 3b, and 3c, respectively. An access point (AP) 5 such as a wireless LAN is provided for communication between the HMDs 1a, 1b, and 1c. Since the positions of the experiencing persons (HMD) and the line-of-sight directions are different, the real spaces visually recognized by the experiencing persons are different. It is possible to experience a common MR video by communicating between the HMDs."). Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider objects outside of the field of view. One of ordinary skill in the art would have had motivation to allow multiple users to view an environment and make edits, and would have had a reasonable expectation of success because Miller considers augmented reality editing of a space and can pan the camera to different locations.

Claim 11

Miller does not disclose, but Kawamae makes obvious, wherein the supplemental content item is a representation of an object; wherein the generating, for display on a display device, the supplemental content item further comprises generating for display the representation of the object in the field of view; and wherein the selecting of the supplemental content item is further based on determining that recognized objects in the environment outside of the field of view do not correspond to the object associated with the supplemental content item (Fig. 1A; ¶ 44, quoted above for claim 10). The same motivation and reasonable-expectation rationale given for claim 10 applies.

Claim 20

The same teachings and rationales in claim 9 are applicable to claim 20.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY, whose telephone number is (571) 272-4582. The examiner can normally be reached Monday through Friday, 9:00am-5:30pm (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN M GRAY/
Primary Examiner, Art Unit 2611
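The Ghadar passages the rejection leans on describe a concrete recommendation pipeline: represent catalog items and recognized room objects as feature vectors, rank candidates by Euclidean distance to a query object, and (per claim 56) refrain from recommending items that duplicate objects already in the room. A minimal sketch of that logic, assuming precomputed feature vectors; the threshold and function names are illustrative, not taken from Ghadar or the claims:

```python
import math

Vector = list[float]

def euclidean(a: Vector, b: Vector) -> float:
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend(
    catalog: dict[str, Vector],       # supplemental content items -> feature vectors
    room_objects: dict[str, Vector],  # recognized objects from the environment scan
    query: Vector,                    # object the user is styling around
    k: int = 5,
    duplicate_threshold: float = 0.1,
) -> list[str]:
    """Rank catalog items by visual similarity to the query (nearest
    neighbours under Euclidean distance), then drop any item close enough
    to a recognized room object to count as already present (claim 56's
    second subset), so only novel items are recommended."""
    def is_duplicate(vec: Vector) -> bool:
        return any(euclidean(vec, obj) < duplicate_threshold
                   for obj in room_objects.values())

    ranked = sorted(catalog.items(), key=lambda kv: euclidean(kv[1], query))
    return [name for name, vec in ranked if not is_duplicate(vec)][:k]
```

Thresholded distance stands in here for whatever "corresponds to" test the claims contemplate; any embedding model that maps visually similar objects to nearby vectors would slot into the same structure.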

Prosecution Timeline

Feb 29, 2024: Application Filed
Aug 23, 2025: Non-Final Rejection (§103)
Nov 19, 2025: Response Filed
Feb 18, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by the same examiner in similar technology

Patent 12597216: ARTIFICIAL INTELLIGENCE VIRTUAL MAKEUP METHOD AND DEVICE USING MULTI-ANGLE IMAGE RECOGNITION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586252: METHOD FOR ENCODING THREE-DIMENSIONAL VOLUMETRIC DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572892: SYSTEMS AND METHODS FOR VISUALIZATION OF UTILITY LINES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561928: SYSTEMS AND METHODS FOR CALCULATING OPTICAL MEASUREMENTS AND RENDERING RESULTS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12542946: REMOTE PRESENTATION WITH AUGMENTED REALITY CONTENT SYNCHRONIZED WITH SEPARATELY DISPLAYED VIDEO CONTENT (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 88%
With Interview: 98% (+10.9%)
Median Time to Grant: 2y 2m
PTA Risk: Moderate
Based on 672 resolved cases by this examiner. Grant probability derived from career allow rate.
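
The with-interview figure is consistent with applying the relative interview lift to the base probability; a one-line check of that reading (an inference from the displayed numbers, not a documented formula):

```python
base_probability = 0.88   # career-derived grant probability
interview_lift = 0.109    # relative lift from the examiner card
print(f"{min(base_probability * (1 + interview_lift), 1.0):.0%}")  # -> 98%
```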
