Prosecution Insights
Last updated: April 19, 2026
Application No. 18/733,228

DEVICES, METHODS AND GRAPHICAL USER INTERFACES FOR PREVIEW OF COMPUTER-GENERATED VIRTUAL OBJECTS FOR EXTENDED REALITY APPLICATIONS

Non-Final OA: §103, §112
Filed: Jun 04, 2024
Examiner: GUO, XILIN
Art Unit: 2616
Tech Center: 2600 (Communications)
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (above average)
374 granted / 456 resolved; +20.0% vs TC avg
Interview Lift: +17.4% (strong; allowance rate among resolved cases with an interview vs without)
Avg Prosecution (typical timeline): 2y 5m; 18 applications currently pending
Total Applications: 474 (career history, across all art units)
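The headline numbers above are simple ratios over the examiner's career counts. A minimal sketch (Python) reproducing them; the with/without-interview rates below are hypothetical, since only the +17.4% delta is reported:

```python
# Career allow rate: grants as a share of resolved cases (from the card above).
granted, resolved = 374, 456
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 82.0%

# Interview lift: allowance rate with an interview minus the rate without.
# The two rates below are hypothetical; only the delta is reported above.
def interview_lift(rate_with: float, rate_without: float) -> float:
    return rate_with - rate_without

print(f"Interview lift: {interview_lift(0.95, 0.776):+.1%}")  # +17.4%
```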

Statute-Specific Performance

§101: 7.6% (-32.4% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 456 resolved cases.
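Each statute card pairs the examiner's rate with its delta against the Tech Center average, so the implied TC baseline can be recovered as rate minus delta. A quick check (Python; figures copied from the cards above):

```python
# (examiner rate %, delta vs TC average %) per statute, from the cards above.
stats = {
    "101": (7.6, -32.4),
    "103": (56.3, +16.3),
    "102": (12.8, -27.2),
    "112": (19.0, -21.0),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"§{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}%")
```

All four cards imply the same 40.0% baseline, which may indicate the dashboard compares against a single TC-wide figure rather than per-statute averages.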

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Dependent claim 15 depends upon independent claim 6. Independent claim 6 recites features for obtaining, from an operating system, three-dimensional size information associated with a software application for a three-dimensional environment that has been granted to the software application by the operating system; obtaining, from the software application, three-dimensional size and location information associated with a virtual object of the software application; and then displaying a representation of the virtual object and a representation of a bounding volume for the software application concurrently with the representation of the virtual object. Dependent claim 15 recites “the one or more programs including instructions for displaying a visual indicator indicating a corresponding size of the virtual object in the three-dimensional environment relative to a size of a physical object that is associated with the virtual object”. 
However, the claim simply describes “a visual indicator indicating a size of the virtual object relative to a size of a physical object”, and no claim sets forth any elements involving “a physical object”. The issue is that a person of ordinary skill in the art reading the specification would not be able to understand where “a physical object” is located or how to determine the “physical object” associated with “the virtual object”. Therefore, the examiner deems the claim indefinite, as it fails to particularly point out and distinctly claim what Applicant regards as the invention. Accordingly, the claim is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-4, 6, 10-12, 16 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lal et al. (U.S. Patent Application Publication 2024/0096029 A1) in view of BECKER et al. (U.S. Patent Application Publication 2023/0377300 A1).

Regarding claim 1, Lal discloses a method comprising: at an electronic device (Paragraph [0026], FIG. 2 is an illustrative block diagram showing example system 200 configured to display media content. Although FIG. 
2 shows system 200 as including a number and configuration of individual components, in some examples, any number of the components of system 200 may be combined and/or integrated as one device, e.g., as user device 102 (as shown in FIG. 1)) in communication with a display (Paragraph [0030], computing device 202 may receive the displays generated by the remote server and may display the content of the displays locally via display 224) and one or more input devices (Paragraphs [0030]-[0031], computing device 202 may receive inputs from the user via input interface 226 and transmit those inputs to the remote server for processing and generating the corresponding displays ... User input interface 226 may be any suitable user interface, such as a remote control, trackball, keypad, keyboard, touchscreen, touchpad, stylus input, joystick ...): obtaining, from an operating system (Paragraph [0022], FIG. 1 illustrates an overview of a system 100 for selecting a 3D object for display in an XR environment), three-dimensional size information associated with a software application for a three-dimensional environment (FIG. 3; paragraph [0034], a game publisher may designate certain areas in the game for placement of a 3D object, such as in a menu or in between levels in the game (e.g., see FIG. 5 showing game area 502 comprising multiple spaces 504, in which 3D objects may be placed). In such cases, the game publisher may indicate the size and shape of the space(s) available for 3D object placement) that has been granted to the software application by the operating system (Paragraph [0031], a user device may send instructions, e.g., to initiate an XR experience and allow a user to view and interact with 3D objects in an XR environment, to control circuitry 218 using user input interface 226; paragraph [0034], ... the XR environment may be a 3D gaming environment, or a virtual, augmented or mixed reality. 
Irrespective of the type of environment, control circuitry determines a space appropriate for a 3D object, provided by a third party, to be placed. For example, at 304, control circuitry determines whether a predefined space in a game is open for placement of a 3D object by a third party. In some examples, a game publisher may designate certain areas in the game for placement of a 3D object); obtaining, from the software application, three-dimensional size and location information associated with a virtual object of the software application (Paragraphs [0034]-[0035], 3D object is placed above table 402 ... In some examples, e.g., the AR/MR environment, where the opportunity to place 3D objects varies depends on user location, the likelihood of user interaction may be based on a user parameter, such as height, gaze direction, reach, etc. As such, one or more user parameters may be used to help determine whether a user is likely to interact with a designated space; paragraph [0045], the 3D object provider may store an object on a content distributed network in its native size. The 3D object provider may set a minimum and a maximum size for the display of its 3D object in the XR environment); and displaying, via the display and in accordance with the three-dimensional size information associated with the software application, a representation of a bounding volume for the software application (Paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 
4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402) concurrently with the representation of the virtual object (Paragraphs [0041]-[0042], taking the example shown in FIG. 4, 3D object providers are able to access the 3D space parameters relating to the volume boundary above the surface of table 402 ... 3D object provider may have various versions of a 3D object. In the example shown in FIG. 4, data structure 406 includes various versions of a 3D object representing a pair of headphones 408. For example, the various versions of headphones 408 and laptop 410 may have different characteristics relating to quality, whether the object is moveable, interactive, and/or scalable. Importantly, the data structure 406 includes a compute budget for each version, e.g., as a result of its quality, moveability, level of interactives, and/or scalability ...).

It is noted that Lal discloses a system and method for selecting a 3D object for display in an extended reality environment. Lal does not specifically disclose “displaying, via the display and in accordance with the three-dimensional size and location information associated with the virtual object, a representation of the virtual object” before displaying the bounding volume. In addition, BECKER discloses (Paragraph [0017], generating the three-dimensional representation of the three-dimensional object includes displaying a first object capture user interface for identifying a target physical object ... ) “displaying, via the display and in accordance with the three-dimensional size and location information associated with the virtual object, a representation of the virtual object” without displaying the bounding volume (Paragraph [0028], as illustrated in FIG. 3, preview user interface 300 displays image 302, image 304, image 306, image 308, image 310, image 312, image 314, and image 316. 
Each of images 302-316 includes a portion of the example three-dimensional object 320 (e.g., tool table); paragraph [0041], as illustrated in FIG. 6, user interface 600 presents a third representation 602 of three-dimensional object 320 (e.g., tool table) during the process to generate a three-dimensional mesh reconstruction of three-dimensional object 320 (e.g., tool table) ...; paragraph [0042], and as illustrated in FIG. 7, user interface 700 presents a second representation 702 of three-dimensional object 320 (e.g., tool table) ...).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of BECKER, and to apply the method for generating a three-dimensional virtual representation of the physical object taught by BECKER, to provide the method for displaying the 3D object in the user interface before displaying the bounding box. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of BECKER to obtain the invention as specified in the claim.

Regarding claim 3, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 1), and Lal further discloses wherein the method further comprises: determining, based on the size and location information associated with the virtual object and based on the size information associated with the software application, that the virtual object fits within the bounding volume (FIG. 3; paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. 
In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402; paragraph [0044], selects one or more 3D objects for placement into the space. For example, depending on the 3D object parameters relating to the object, or object variant, control circuitry selects one or more 3D objects to best fit the space); and in accordance with the determination that the virtual object fits within the bounding volume, displaying the representation of the virtual object within the displayed bounding volume (Paragraph [0044], in the example shown in FIG. 4, the exchange server selects a bid relating to “Headphone ID C” and “Laptop ID B”. Selection of these two objects is based on multiple factors. For example, the 3D space parameters may have specified a total compute budget of 8 (taken as an arbitrary number for the sake of example). The exchange server may have received bids from the provider of the headphones for placement of “Headphone ID B” (compute budget 3) and “Headphone ID C” (compute budget 2) in space 400, and a bid from the provider of the laptop for placement of “Laptop ID B” (compute budget 6). In such a case, exchange server selects the combination of 3D objects having, in combination, 3D object parameters, that best match the 3D space parameters, defined at 318. As shown in FIG. 4, headphones 408 and laptop 410 are within bounding volume space 400). 
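The limitation at issue in claims 3 and 10, determining from the two sets of size and location information that the virtual object fits within the bounding volume, amounts to an axis-aligned box containment test. A minimal sketch (Python; the Box type and its field names are illustrative, not drawn from either reference or the application):

```python
from dataclasses import dataclass

@dataclass
class Box:
    # Axis-aligned box: minimum corner (x, y, z) plus extents (w, h, d).
    x: float
    y: float
    z: float
    w: float
    h: float
    d: float

def fits_within(obj: Box, bounds: Box) -> bool:
    """True if the object's box lies entirely inside the bounding volume."""
    return (obj.x >= bounds.x and obj.y >= bounds.y and obj.z >= bounds.z
            and obj.x + obj.w <= bounds.x + bounds.w
            and obj.y + obj.h <= bounds.y + bounds.h
            and obj.z + obj.d <= bounds.z + bounds.d)

granted = Box(0.0, 0.0, 0.0, 1.0, 0.8, 1.0)  # volume granted to the app
chair = Box(0.2, 0.0, 0.2, 0.5, 0.3, 0.4)    # virtual object's box
print(fits_within(chair, granted))  # True
```

The check fails as soon as any face of the object's box crosses the granted volume, which loosely mirrors the "volume boundary" idea quoted from Lal above.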
Regarding claim 4, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 1), and Lal further discloses wherein the method further comprises: while displaying the representation of the virtual object, and while displaying the bounding volume (FIG. 3; paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402; paragraph [0044], selects one or more 3D objects for placement into the space. For example, depending on the 3D object parameters relating to the object, or object variant, control circuitry selects one or more 3D objects to best fit the space), receiving, via the one or more input devices (FIG. 2; paragraph [0029], based on the processed instructions, control circuitry 218 may determine what action to perform when input is received from user input interface 226). However, Lal does not specifically disclose an input corresponding to a selection to remove the bounding volume from being displayed; and in response to receiving the input, ceasing display of the bounding volume. In addition, BECKER discloses (Paragraph [0017], generating the three-dimensional representation of the three-dimensional object includes displaying a first object capture user interface for identifying a target physical object ... 
In some examples, in response to determining that a physical object is centered within the virtual reticle (and optionally, in response to detecting a selection of an initiation affordance), the electronic device displays an animation that transforms the virtual reticle into a three-dimensional virtual bounding shape around the physical object (e.g., a bounding box)) an input corresponding to a selection to remove the bounding volume from being displayed (FIG. 5; paragraph [0040], user interface element 506 (e.g., a user selectable button) can be selectable to request initiation of a process to generate a second representation (e.g., mesh/model reconstruction) of three-dimensional object 320 different than the point cloud representation ... In some examples, initiation of the process to generate the three-dimensional model can cause the user interface to cease displaying bounding box 504); and in response to receiving the input, ceasing display of the bounding volume (Paragraph [0040], in some examples, initiation of the process to generate the three-dimensional model can cause the user interface to cease displaying bounding box 504).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of BECKER, and to apply the method for generating a three-dimensional virtual representation of the physical object taught by BECKER, to provide user interaction for removing the bounding volume during the 3D object display. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of BECKER to obtain the invention as specified in the claim.

Regarding claim 6, Lal discloses an electronic device (Paragraph [0026], FIG. 
2 is an illustrative block diagram showing example system 200 configured to display media content. Although FIG. 2 shows system 200 as including a number and configuration of individual components, in some examples, any number of the components of system 200 may be combined and/or integrated as one device, e.g., as user device 102 (as shown in FIG. 1)) that is in communication with a display (Paragraph [0030], computing device 202 may receive the displays generated by the remote server and may display the content of the displays locally via display 224) and one or more input devices (Paragraphs [0030]-[0031], computing device 202 may receive inputs from the user via input interface 226 and transmit those inputs to the remote server for processing and generating the corresponding displays ... User input interface 226 may be any suitable user interface, such as a remote control, trackball, keypad, keyboard, touchscreen, touchpad, stylus input, joystick ...), the electronic device comprising: one or more processors (Paragraph [0027], control circuitry 218 may be based on any suitable processing circuitry such as processing circuitry 230. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors ...); memory (Paragraph [0027], control circuitry 218 includes storage 228); and one or more programs, wherein the one or more programs are stored in the memory (Paragraph [0028], the application may be implemented as software or a set of executable instructions that may be stored in storage 228) and configured to be executed by the one or more processors (Paragraph [0028], control circuitry 218 may be instructed by the application to perform the functions discussed herein. In some implementations, any action performed by control circuitry 218 may be based on instructions received from the application. 
For example, the application may be implemented as software or a set of executable instructions that may be stored in storage 228 and executed by control circuitry 218), the one or more programs including instructions for: obtaining, from an operating system (Paragraph [0022], FIG. 1 illustrates an overview of a system 100 for selecting a 3D object for display in an XR environment), three-dimensional size information associated with a software application for a three-dimensional environment (FIG. 3; paragraph [0034], a game publisher may designate certain areas in the game for placement of a 3D object, such as in a menu or in between levels in the game (e.g., see FIG. 5 showing game area 502 comprising multiple spaces 504, in which 3D objects may be placed). In such cases, the game publisher may indicate the size and shape of the space(s) available for 3D object placement) that has been granted to the software application by the operating system (Paragraph [0031], a user device may send instructions, e.g., to initiate an XR experience and allow a user to view and interact with 3D objects in an XR environment, to control circuitry 218 using user input interface 226; paragraph [0034], ... the XR environment may be a 3D gaming environment, or a virtual, augmented or mixed reality. Irrespective of the type of environment, control circuitry determines a space appropriate for a 3D object, provided by a third party, to be placed. For example, at 304, control circuitry determines whether a predefined space in a game is open for placement of a 3D object by a third party. In some examples, a game publisher may designate certain areas in the game for placement of a 3D object); obtaining, from the software application, three-dimensional size and location information associated with a virtual object of the software application (Paragraphs [0034]-[0035], 3D object is placed above table 402 ... 
In some examples, e.g., the AR/MR environment, where the opportunity to place 3D objects varies depends on user location, the likelihood of user interaction may be based on a user parameter, such as height, gaze direction, reach, etc. As such, one or more user parameters may be used to help determine whether a user is likely to interact with a designated space; paragraph [0045], the 3D object provider may store an object on a content distributed network in its native size. The 3D object provider may set a minimum and a maximum size for the display of its 3D object in the XR environment); and displaying, via the display and in accordance with the three-dimensional size information associated with the software application, a representation of a bounding volume for the software application (Paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402) concurrently with the representation of the virtual object (Paragraphs [0041]-[0042], taking the example shown in FIG. 4, 3D object providers are able to access the 3D space parameters relating to the volume boundary above the surface of table 402 ... 3D object provider may have various versions of a 3D object. In the example shown in FIG. 4, data structure 406 includes various versions of a 3D object representing a pair of headphones 408. 
For example, the various versions of headphones 408 and laptop 410 may have different characteristics relating to quality, whether the object is moveable, interactive, and/or scalable. Importantly, the data structure 406 includes a compute budget for each version, e.g., as a result of its quality, moveability, level of interactives, and/or scalability ...).

It is noted that Lal discloses a system and method for selecting a 3D object for display in an extended reality environment. Lal does not specifically disclose “displaying, via the display and in accordance with the three-dimensional size and location information associated with the virtual object, a representation of the virtual object” before displaying the bounding volume. In addition, BECKER discloses (Paragraph [0017], generating the three-dimensional representation of the three-dimensional object includes displaying a first object capture user interface for identifying a target physical object ... ) “displaying, via the display and in accordance with the three-dimensional size and location information associated with the virtual object, a representation of the virtual object” without displaying the bounding volume (Paragraph [0028], as illustrated in FIG. 3, preview user interface 300 displays image 302, image 304, image 306, image 308, image 310, image 312, image 314, and image 316. Each of images 302-316 includes a portion of the example three-dimensional object 320 (e.g., tool table); paragraph [0041], as illustrated in FIG. 6, user interface 600 presents a third representation 602 of three-dimensional object 320 (e.g., tool table) during the process to generate a three-dimensional mesh reconstruction of three-dimensional object 320 (e.g., tool table) ...; paragraph [0042], and as illustrated in FIG. 7, user interface 700 presents a second representation 702 of three-dimensional object 320 (e.g., tool table) ...). 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of BECKER, and to apply the method for generating a three-dimensional virtual representation of the physical object taught by BECKER, to provide the method for displaying the 3D object in the user interface before displaying the bounding box. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of BECKER to obtain the invention as specified in the claim.

Regarding claim 10, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 6), and Lal further discloses the one or more programs including instructions for: determining, based on the size and location information associated with the virtual object and based on the size information associated with the software application, that the virtual object fits within the bounding volume (FIG. 3; paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 
4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402; paragraph [0044], selects one or more 3D objects for placement into the space. For example, depending on the 3D object parameters relating to the object, or object variant, control circuitry selects one or more 3D objects to best fit the space); and in accordance with the determination that the virtual object fits within the bounding volume, displaying the representation of the virtual object within the displayed bounding volume (Paragraph [0044], in the example shown in FIG. 4, the exchange server selects a bid relating to “Headphone ID C” and “Laptop ID B”. Selection of these two objects is based on multiple factors. For example, the 3D space parameters may have specified a total compute budget of 8 (taken as an arbitrary number for the sake of example). The exchange server may have received bids from the provider of the headphones for placement of “Headphone ID B” (compute budget 3) and “Headphone ID C” (compute budget 2) in space 400, and a bid from the provider of the laptop for placement of “Laptop ID B” (compute budget 6). In such a case, exchange server selects the combination of 3D objects having, in combination, 3D object parameters, that best match the 3D space parameters, defined at 318. As shown in FIG. 4, headphones 408 and laptop 410 are within bounding volume space 400).

Regarding claim 11, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 6), and Lal further discloses the one or more programs including instructions for: while displaying the representation of the virtual object, and while displaying the bounding volume (FIG. 3; paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. 
In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402; paragraph [0044], selects one or more 3D objects for placement into the space. For example, depending on the 3D object parameters relating to the object, or object variant, control circuitry selects one or more 3D objects to best fit the space), receiving, via the one or more input devices (FIG. 2; paragraph [0029], based on the processed instructions, control circuitry 218 may determine what action to perform when input is received from user input interface 226). However, Lal does not specifically disclose an input corresponding to a selection to remove the bounding volume from being displayed; and in response to receiving the input, ceasing display of the bounding volume. In addition, BECKER discloses (Paragraph [0017], generating the three-dimensional representation of the three-dimensional object includes displaying a first object capture user interface for identifying a target physical object ... 
In some examples, in response to determining that a physical object is centered within the virtual reticle (and optionally, in response to detecting a selection of an initiation affordance), the electronic device displays an animation that transforms the virtual reticle into a three-dimensional virtual bounding shape around the physical object (e.g., a bounding box)) an input corresponding to a selection to remove the bounding volume from being displayed (FIG. 5; paragraph [0040], user interface element 506 (e.g., a user selectable button) can be selectable to request initiation of a process to generate a second representation (e.g., mesh/model reconstruction) of three-dimensional object 320 different than the point cloud representation ... In some examples, initiation of the process to generate the three-dimensional model can cause the user interface to cease displaying bounding box 504); and in response to receiving the input, ceasing display of the bounding volume (Paragraph [0040], in some examples, initiation of the process to generate the three-dimensional model can cause the user interface to cease displaying bounding box 504).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of BECKER, and to apply the method for generating a three-dimensional virtual representation of the physical object taught by BECKER, to provide user interaction for removing the bounding volume during the 3D object display. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of BECKER to obtain the invention as specified in the claim. 
Regarding claim 12, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 6), and Lal further discloses wherein the size information associated with the software application includes information associated with a width, a height, and a depth of the software application (FIG. 3; paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402). Regarding claim 16, Lal discloses a non-transitory computer readable storage medium storing one or more programs (Paragraph [0026], FIG. 2 is an illustrative block diagram showing example system 200 configured to display media content; paragraph [0027], control circuitry 218 includes storage 228; paragraph [0028], the application may be implemented as software or a set of executable instructions that may be stored in storage 228), the one or more programs comprising instructions, which when executed by one or more processors (Paragraph [0027], control circuitry 218 may be based on any suitable processing circuitry such as processing circuitry 230. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors ...; paragraph [0028], control circuitry 218 may be instructed by the application to perform the functions discussed herein.
In some implementations, any action performed by control circuitry 218 may be based on instructions received from the application. For example, the application may be implemented as software or a set of executable instructions that may be stored in storage 228 and executed by control circuitry 218) of an electronic device (Paragraph [0026], FIG. 2 shows system 200 as including a number and configuration of individual components, in some examples, any number of the components of system 200 may be combined and/or integrated as one device, e.g., as user device 102 (as shown in FIG. 1)), cause the electronic device to: obtain, from an operating system (Paragraph [0022], FIG. 1 illustrates an overview of a system 100 for selecting a 3D object for display in an XR environment), three-dimensional size information associated with a software application for a three-dimensional environment (FIG. 3; paragraph [0034], a game publisher may designate certain areas in the game for placement of a 3D object, such as in a menu or in between levels in the game (e.g., see FIG. 5 showing game area 502 comprising multiple spaces 504, in which 3D objects may be placed). In such cases, the game publisher may indicate the size and shape of the space(s) available for 3D object placement) that has been granted to the software application by the operating system (Paragraph [0031], a user device may send instructions, e.g., to initiate an XR experience and allow a user to view and interact with 3D objects in an XR environment, to control circuitry 218 using user input interface 226; paragraph [0034], ... the XR environment may be a 3D gaming environment, or a virtual, augmented or mixed reality. Irrespective of the type of environment, control circuitry determines a space appropriate for a 3D object, provided by a third party, to be placed. For example, at 304, control circuitry determines whether a predefined space in a game is open for placement of a 3D object by a third party.
In some examples, a game publisher may designate certain areas in the game for placement of a 3D object); obtain, from the software application, three-dimensional size and location information associated with a virtual object of the software application (Paragraphs [0034]-[0035], 3D object is placed above table 402 ... In some examples, e.g., the AR/MR environment, where the opportunity to place 3D objects varies depending on user location, the likelihood of user interaction may be based on a user parameter, such as height, gaze direction, reach, etc. As such, one or more user parameters may be used to help determine whether a user is likely to interact with a designated space; paragraph [0045], the 3D object provider may store an object on a content distributed network in its native size. The 3D object provider may set a minimum and a maximum size for the display of its 3D object in the XR environment); and display, via the display and in accordance with the three-dimensional size information associated with the software application, a representation of a bounding volume for the software application (Paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG.
4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402) concurrently with the representation of the virtual object (Paragraphs [0041]-[0042], taking the example shown in FIG. 4, 3D object providers are able to access the 3D space parameters relating to the volume boundary above the surface of table 402 ... 3D object provider may have various versions of a 3D object. In the example shown in FIG. 4, data structure 406 includes various versions of a 3D object representing a pair of headphones 408. For example, the various versions of headphones 408 and laptop 410 may have different characteristics relating to quality, whether the object is moveable, interactive, and/or scalable. Importantly, the data structure 406 includes a compute budget for each version, e.g., as a result of its quality, moveability, level of interactives, and/or scalability ...). It is noted that Lal discloses a system and method for selecting a 3D object for display in an extended reality environment. Lal does not specifically disclose “display, via the display and in accordance with the three-dimensional size and location information associated with the virtual object, a representation of the virtual object” before displaying the bounding volume. In addition, BECKER discloses (Paragraph [0017], generating the three-dimensional representation of the three-dimensional object includes displaying a first object capture user interface for identifying a target physical object ... ) “display, via the display and in accordance with the three-dimensional size and location information associated with the virtual object, a representation of the virtual object” without displaying the bounding volume (Paragraph [0028], as illustrated in FIG. 3, preview user interface 300 displays image 302, image 304, image 306, image 308, image 310, image 312, image 314, and image 316.
Each of images 302-316 includes a portion of the example three-dimensional object 320 (e.g., tool table); paragraph [0041], as illustrated in FIG. 6, user interface 600 presents a third representation 602 of three-dimensional object 320 (e.g., tool table) during the process to generate a three-dimensional mesh reconstruction of three-dimensional object 320 (e.g., tool table) ...; paragraph [0042], and as illustrated in FIG. 7, user interface 700 presents a second representation 702 of three-dimensional object 320 (e.g., tool table) ...). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of BECKER, and applying the method for generating a three-dimensional virtual representation of the physical object taught by BECKER to provide the method for displaying the 3D object in the user interface before displaying the bounding box. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of BECKER to obtain the invention as specified in the claim. Regarding claim 18, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 16), and Lal further discloses the one or more programs including instructions for: determining, based on the size and location information associated with the virtual object and based on the size information associated with the software application, that the virtual object fits within the bounding volume (FIG. 3; paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404.
In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402; paragraph [0044], elects one or more 3D objects for placement into the space. For example, depending on the 3D object parameters relating to the object, or object variant, control circuitry selects one or more 3D objects to best fit the space); and in accordance with the determination that the virtual object fits within the bounding volume, displaying the representation of the virtual object within the displayed bounding volume (Paragraph [0044], in the example shown in FIG. 4, the exchange server selects a bid relating to “Headphone ID C” and “Laptop ID B”. Selection of these two objects is based on multiple factors. For example, the 3D space parameters may have specified a total compute budget of 8 (taken as an arbitrary number for the sake of example). The exchange server may have received bids from the provider of the headphones for placement of “Headphone ID B” (compute budget 3) and “Headphone ID C” (compute budget 2) in space 400, and a bid from the provider of the laptop for placement of “Laptop ID B” (compute budget 6). In such a case, exchange server selects the combination of 3D objects having, in combination, 3D object parameters, that best match the 3D space parameters, defined at 318. As shown in FIG. 4, headphones 408 and laptop 410 are displayed within bounding volume space 400).
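The claim 18 fit determination compares the object's three-dimensional size and location against the granted bounding volume (the L × W × H space above the table in Lal's FIG. 4). A minimal sketch under a hypothetical axis-aligned-box representation of both the object and the bounding volume:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned volume: origin (x, y, z) plus width, height, depth."""
    x: float
    y: float
    z: float
    w: float
    h: float
    d: float

def fits_within(obj: Box, bound: Box) -> bool:
    # The object fits if its extent lies inside the bounding volume on
    # every axis (e.g., within the L x W x H space above the table).
    return (obj.x >= bound.x and obj.x + obj.w <= bound.x + bound.w
        and obj.y >= bound.y and obj.y + obj.h <= bound.y + bound.h
        and obj.z >= bound.z and obj.z + obj.d <= bound.z + bound.d)
```

Under the claim's logic, a `True` result would gate displaying the object's representation inside the displayed bounding volume.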
Regarding claim 19, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 16), and Lal further discloses the one or more programs including instructions for: while displaying the representation of the virtual object, and while displaying the bounding volume (FIG. 3; paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402; paragraph [0044], elects one or more 3D objects for placement into the space. For example, depending on the 3D object parameters relating to the object, or object variant, control circuitry selects one or more 3D objects to best fit the space), receiving, via the one or more input devices (FIG. 2; paragraph [0029], based on the processed instructions, control circuitry 218 may determine what action to perform when input is received from user input interface 226). However, Lal does not specifically disclose receiving an input corresponding to a selection to remove the bounding volume from being displayed; and in response to receiving the input, ceasing display of the bounding volume. In addition, BECKER discloses (Paragraph [0017], generating the three-dimensional representation of the three-dimensional object includes displaying a first object capture user interface for identifying a target physical object ...
In some examples, in response to determining that a physical object is centered within the virtual reticle (and optionally, in response to detecting a selection of an initiation affordance), the electronic device displays an animation that transforms the virtual reticle into a three-dimensional virtual bounding shape around the physical object (e.g., a bounding box)) receiving an input corresponding to a selection to remove the bounding volume from being displayed (FIG. 5; paragraph [0040], user interface element 506 (e.g., a user selectable button) can be selectable to request initiation of a process to generate a second representation (e.g., mesh/model reconstruction) of three-dimensional object 320 different than the point cloud representation ... In some examples, initiation of the process to generate the three-dimensional model can cause the user interface to cease displaying bounding box 504); and in response to receiving the input, ceasing display of the bounding volume (Paragraph [0040], in some examples, initiation of the process to generate the three-dimensional model can cause the user interface to cease displaying bounding box 504). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of BECKER, and applying the method for generating a three-dimensional virtual representation of the physical object taught by BECKER to provide user interaction for removing the bounding volume during the 3D object display. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of BECKER to obtain the invention as specified in the claim. Claims 2, 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lal et al (U.S.
Patent Application Publication 2024/0096029 A1) in view of BECKER et al (U.S. Patent Application Publication 2023/0377300 A1) and further in view of TAKAHASHI et al (U.S. Patent Application Publication 2022/0318447 A1). Regarding claim 2, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 1). However, Lal does not specifically disclose wherein the method further comprises: determining, based on the obtained three-dimensional size and location information associated with the virtual object and based on the three-dimensional size information associated with the software application that a portion of the virtual object is outside of the bounding volume; and in accordance with the determination that at least a portion of the virtual object is outside of the bounding volume, displaying the representation of the virtual object with the displayed bounding volume such that the portion of the representation of the virtual object is outside of the bounding volume. In addition, BECKER discloses (Paragraph [0017], generating the three-dimensional representation of the three-dimensional object includes displaying a first object capture user interface for identifying a target physical object ... In some examples, in response to determining that a physical object is centered within the virtual reticle (and optionally, in response to detecting a selection of an initiation affordance), the electronic device displays an animation that transforms the virtual reticle into a three-dimensional virtual bounding shape around the physical object (e.g., a bounding box)) wherein the method further comprises: determining, based on the obtained three-dimensional size and location information associated with the virtual object and based on the three-dimensional size information associated with the software application (See claim 1) that a portion of the virtual object is outside of the bounding volume (Paragraphs [0037]-[0039], For example, and as illustrated in FIG.
5, user interface 500 presents a second point representation 502 (e.g., a finalized point cloud) ... user interface 500 can include a bounding box 504 around the second point representation 502 (e.g., around the point cloud) ... prior to generating the three-dimensional model of the three-dimensional object, the user can interact with bounding box 504 to crop the portions of the second point representation 502 of three-dimensional object to be included in the three-dimensional model. For example, as shown in FIGS. 6-7, portions of the point representation outside the bounding box 504 of FIG. 5 are excluded from the three-dimensional (mesh) model); and in accordance with the determination that at least a portion of the virtual object is outside of the bounding volume, excluding one or more portions of the representation of the virtual object (Paragraph [0039], the user can interact with bounding box 504 to crop the portions of the second point representation 502 of three-dimensional object to be included in the three-dimensional model. For example, as shown in FIGS. 6-7, portions of the point representation outside the bounding box 504 of FIG. 5 are excluded from the three-dimensional (mesh) model). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of BECKER, and applying the method for generating a three-dimensional virtual representation of the physical object taught by BECKER to provide the method for indicating a portion of the 3D object outside a bounding box prior to presenting the 3D object within the bounding box of the software application.
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of BECKER to obtain the invention as specified in the claim. However, Lal does not specifically disclose displaying the representation of the virtual object with the displayed bounding volume such that the portion of the representation of the virtual object is outside of the bounding volume. In addition, TAKAHASHI discloses (Abstract, an information processing apparatus includes a processor configured to display a relationship object indicating a positional relationship of at least two or more measurement spots in a molded product at a position corresponding to the relationship on a three-dimensional model of the molded product; FIGS. 2 and 3 show display unit) displaying the representation of the virtual object (Paragraph [0036], objects representing the positional relationship of at least two or more measurement spots in a molded product are overlappingly displayed on a three-dimensional model of the molded product ... the relationship objects are displayed on the three-dimensional model of the molded product in such a manner that objects of three-dimensional arrows are used; paragraph [0068], FIG. 5 shows a display example of a relationship object. A molded product 30 represented by a cube is shown in FIG. 5. As for the molded product 30 ... the positional relationship between the face 31A and the face 31B is expressed by a relationship object 33. FIG. 5 shows the relationship object 33 within 3D volume of the molded product 30) with the displayed bounding volume (Paragraph [0033], FIG.
25 shows the processing of the information processing apparatus 60; paragraph [0185], according to the information processing apparatus 60 according to the exemplary embodiment of the invention, in a case where the three-dimensional model of the entire molded product 30 displaying the relationship objects 33 are displayed ... Additionally, by displaying the relationship objects 33 together with the three-dimensional model at the corresponding positions obtained by dividing the bounding box of the three-dimensional model of the molded product in a grid pattern ...) such that the portion of the representation of the virtual object is outside of the bounding volume (Paragraph [0039], the three-dimensional arrow objects are displayed as parts of the relationship objects ... three-dimensional arrow objects having outward arrows may be displayed in a case where the relative distances are far away; paragraph [0097], as shown in FIG. 10A, the relationship object 33 is displayed with an arrow object faces a direction opposite to the first measurement spot 32A from the second inspection target 31B. This is to make it possible to visually grasp at a glance that the positional relationship between the first measurement spot 32A and the second measurement spot 32B is too far away than the design information). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal in view of BECKER to incorporate the teachings of TAKAHASHI, and applying the information processing apparatus taught by TAKAHASHI to display part of the 3D object outside the 3D volume space on the display device.
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal in view of BECKER according to the relied-upon teachings of TAKAHASHI to obtain the invention as specified in the claim. Regarding claim 7, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 6). However, Lal does not specifically disclose the one or more programs including instructions for: determining, based on the obtained three-dimensional size and location information associated with the virtual object and based on the three-dimensional size information associated with the software application that a portion of the virtual object is outside of the bounding volume; and in accordance with the determination that at least a portion of the virtual object is outside of the bounding volume, displaying the representation of the virtual object with the displayed bounding volume such that the portion of the representation of the virtual object is outside of the bounding volume. In addition, BECKER discloses (Paragraph [0017], generating the three-dimensional representation of the three-dimensional object includes displaying a first object capture user interface for identifying a target physical object ...
In some examples, in response to determining that a physical object is centered within the virtual reticle (and optionally, in response to detecting a selection of an initiation affordance), the electronic device displays an animation that transforms the virtual reticle into a three-dimensional virtual bounding shape around the physical object (e.g., a bounding box)) the one or more programs including instructions for: determining, based on the obtained three-dimensional size and location information associated with the virtual object and based on the three-dimensional size information associated with the software application (See claim 6) that a portion of the virtual object is outside of the bounding volume (Paragraphs [0037]-[0039], For example, and as illustrated in FIG. 5, user interface 500 presents a second point representation 502 (e.g., a finalized point cloud) ... user interface 500 can include a bounding box 504 around the second point representation 502 (e.g., around the point cloud) ... prior to generating the three-dimensional model of the three-dimensional object, the user can interact with bounding box 504 to crop the portions of the second point representation 502 of three-dimensional object to be included in the three-dimensional model. For example, as shown in FIGS. 6-7, portions of the point representation outside the bounding box 504 of FIG. 5 are excluded from the three-dimensional (mesh) model); and in accordance with the determination that at least a portion of the virtual object is outside of the bounding volume, excluding one or more portions of the representation of the virtual object (Paragraph [0039], the user can interact with bounding box 504 to crop the portions of the second point representation 502 of three-dimensional object to be included in the three-dimensional model. For example, as shown in FIGS. 6-7, portions of the point representation outside the bounding box 504 of FIG. 5 are excluded from the three-dimensional (mesh) model).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of BECKER, and applying the method for generating a three-dimensional virtual representation of the physical object taught by BECKER to provide the method for indicating a portion of the 3D object outside a bounding box prior to presenting the 3D object within the bounding box of the software application. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of BECKER to obtain the invention as specified in the claim. However, Lal does not specifically disclose displaying the representation of the virtual object with the displayed bounding volume such that the portion of the representation of the virtual object is outside of the bounding volume. In addition, TAKAHASHI discloses (Abstract, an information processing apparatus includes a processor configured to display a relationship object indicating a positional relationship of at least two or more measurement spots in a molded product at a position corresponding to the relationship on a three-dimensional model of the molded product; FIGS. 2 and 3 show display unit) displaying the representation of the virtual object (Paragraph [0036], objects representing the positional relationship of at least two or more measurement spots in a molded product are overlappingly displayed on a three-dimensional model of the molded product ... the relationship objects are displayed on the three-dimensional model of the molded product in such a manner that objects of three-dimensional arrows are used; paragraph [0068], FIG. 5 shows a display example of a relationship object. A molded product 30 represented by a cube is shown in FIG. 5.
As for the molded product 30 ... the positional relationship between the face 31A and the face 31B is expressed by a relationship object 33. FIG. 5 shows the relationship object 33 within 3D volume of the molded product 30) with the displayed bounding volume (Paragraph [0033], FIG. 25 shows the processing of the information processing apparatus 60; paragraph [0185], according to the information processing apparatus 60 according to the exemplary embodiment of the invention, in a case where the three-dimensional model of the entire molded product 30 displaying the relationship objects 33 are displayed ... Additionally, by displaying the relationship objects 33 together with the three-dimensional model at the corresponding positions obtained by dividing the bounding box of the three-dimensional model of the molded product in a grid pattern ...) such that the portion of the representation of the virtual object is outside of the bounding volume (Paragraph [0039], the three-dimensional arrow objects are displayed as parts of the relationship objects ... three-dimensional arrow objects having outward arrows may be displayed in a case where the relative distances are far away; paragraph [0097], as shown in FIG. 10A, the relationship object 33 is displayed with an arrow object faces a direction opposite to the first measurement spot 32A from the second inspection target 31B. This is to make it possible to visually grasp at a glance that the positional relationship between the first measurement spot 32A and the second measurement spot 32B is too far away than the design information). 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal in view of BECKER to incorporate the teachings of TAKAHASHI, and applying the information processing apparatus taught by TAKAHASHI to display part of the 3D object outside the 3D volume space on the display device. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal in view of BECKER according to the relied-upon teachings of TAKAHASHI to obtain the invention as specified in the claim. Regarding claim 17, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 16). However, Lal does not specifically disclose the one or more programs including instructions for: determining, based on the obtained three-dimensional size and location information associated with the virtual object and based on the three-dimensional size information associated with the software application that a portion of the virtual object is outside of the bounding volume; and in accordance with the determination that at least a portion of the virtual object is outside of the bounding volume, displaying the representation of the virtual object with the displayed bounding volume such that the portion of the representation of the virtual object is outside of the bounding volume. In addition, BECKER discloses (Paragraph [0017], generating the three-dimensional representation of the three-dimensional object includes displaying a first object capture user interface for identifying a target physical object ...
In some examples, in response to determining that a physical object is centered within the virtual reticle (and optionally, in response to detecting a selection of an initiation affordance), the electronic device displays an animation that transforms the virtual reticle into a three-dimensional virtual bounding shape around the physical object (e.g., a bounding box)) the one or more programs including instructions for: determining, based on the obtained three-dimensional size and location information associated with the virtual object and based on the three-dimensional size information associated with the software application (See claim 16) that a portion of the virtual object is outside of the bounding volume (Paragraphs [0037]-[0039], For example, and as illustrated in FIG. 5, user interface 500 presents a second point representation 502 (e.g., a finalized point cloud) ... user interface 500 can include a bounding box 504 around the second point representation 502 (e.g., around the point cloud) ... prior to generating the three-dimensional model of the three-dimensional object, the user can interact with bounding box 504 to crop the portions of the second point representation 502 of three-dimensional object to be included in the three-dimensional model. For example, as shown in FIGS. 6-7, portions of the point representation outside the bounding box 504 of FIG. 5 are excluded from the three-dimensional (mesh) model); and in accordance with the determination that at least a portion of the virtual object is outside of the bounding volume, excluding one or more portions of the representation of the virtual object (Paragraph [0039], the user can interact with bounding box 504 to crop the portions of the second point representation 502 of three-dimensional object to be included in the three-dimensional model. For example, as shown in FIGS. 6-7, portions of the point representation outside the bounding box 504 of FIG. 5 are excluded from the three-dimensional (mesh) model).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of BECKER, applying the method for generating a three-dimensional virtual representation of the physical object taught by BECKER to provide a method for indicating a portion of the 3D object outside a bounding box prior to presenting the 3D object within the bounding box of the software application. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of BECKER to obtain the invention as specified in the claim. However, Lal does not specifically disclose displaying the representation of the virtual object with the displayed bounding volume such that the portion of the representation of the virtual object is outside of the bounding volume. In addition, TAKAHASHI discloses (Abstract, an information processing apparatus includes a processor configured to display a relationship object indicating a positional relationship of at least two or more measurement spots in a molded product at a position corresponding to the relationship on a three-dimensional model of the molded product; FIGS. 2 and 3 show display unit) displaying the representation of the virtual object (Paragraph [0036], objects representing the positional relationship of at least two or more measurement spots in a molded product are overlappingly displayed on a three-dimensional model of the molded product ... the relationship objects are displayed on the three-dimensional model of the molded product in such a manner that objects of three-dimensional arrows are used; paragraph [0068], FIG. 5 shows a display example of a relationship object. A molded product 30 represented by a cube is shown in FIG. 5. 
As for the molded product 30 ... the positional relationship between the face 31A and the face 31B is expressed by a relationship object 33. FIG. 5 shows the relationship object 33 within 3D volume of the molded product 30) with the displayed bounding volume (Paragraph [0033], FIG. 25 shows the processing of the information processing apparatus 60; paragraph [0185], according to the information processing apparatus 60 according to the exemplary embodiment of the invention, in a case where the three-dimensional model of the entire molded product 30 displaying the relationship objects 33 are displayed ... Additionally, by displaying the relationship objects 33 together with the three-dimensional model at the corresponding positions obtained by dividing the bounding box of the three-dimensional model of the molded product in a grid pattern ...) such that the portion of the representation of the virtual object is outside of the bounding volume (Paragraph [0039], the three-dimensional arrow objects are displayed as parts of the relationship objects ... three-dimensional arrow objects having outward arrows may be displayed in a case where the relative distances are far away; paragraph [0097], as shown in FIG. 10A, the relationship object 33 is displayed with an arrow object faces a direction opposite to the first measurement spot 32A from the second inspection target 31B. This is to make it possible to visually grasp at a glance that the positional relationship between the first measurement spot 32A and the second measurement spot 32B is too far away than the design information). 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal in view of BECKER to incorporate the teachings of TAKAHASHI, applying the information processing apparatus taught by TAKAHASHI to display a part of the 3D object outside the 3D volume on the display device. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal in view of BECKER according to the relied-upon teachings of TAKAHASHI to obtain the invention as specified in the claim. Claims 5, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lal et al (U.S. Patent Application Publication 2024/0096029 A1) in view of BECKER et al (U.S. Patent Application Publication 2023/0377300 A1) and further in view of Chapman et al (U.S. Patent No. 12,094,072 B2). Regarding claim 5, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 1). However, Lal does not specifically disclose wherein displaying the bounding volume concurrently with the representation of the virtual object includes displaying a first representation of the bounding volume concurrently with a first representation of the virtual object from a first viewpoint and displaying a second representation of the bounding volume concurrently with a second representation of the virtual object from a second viewpoint, different from the first viewpoint. In addition, Chapman discloses (FIG. 1; Col 7, lines 20-59, one embodiment in which 3D dioramas are associated with downloadable applications from an “app store” and integrated into a rendered 3D environment, a computing system 100 such as a spatial computing system includes a computing device 110 ... 
Computing device 110, such as a smartphone, touchscreen laptop computer, wearable computing device or spatial computing device such as a head worn spatial computing or VR/AR device (generally, computing device 110) includes a display 112 and one or more input devices 114 ...) wherein displaying the bounding volume concurrently with the representation of the virtual object (FIG. 3; Col 9, lines 8-45, portal 120 also includes or accesses diorama integrator 170. Diorama integrator 170 receives data of assets 152 of 3D diorama 150 associated with a selected application 140, data of objects 162 of rendered 3D environment 160. Diorama integrator 170 composites or merges 3D diorama assets 152 and rendered 3D environment objects 162 based on data thereof to generate a composite view 172 of the 3D diorama 150 and rendered 3D environment 160 ... At 304, diorama integrator 170 accesses 3D diorama assets 152 associated with application 140, and at 306, determines a bounding structure 180 such as a bounding box (generally, bounding box 180) in which 3D diorama 150 is to be contained in rendered 3D environment 160. Bounding box 180 provides a virtual separation of or boundary between 3D diorama assets 152 and rendered 3D environment objects 162 such that 3D diorama assets 152 do not extend beyond bounding box 180 ... At 308, 3D diorama 150 is composited or merged with rendered 3D environment 160 to generate composite view 172. Composite view 172 includes 3D diorama 150 contained within bounding box 180, which is contained within rendered 3D environment 160. Composite view 172 is then presented to user through display of 112 of computing device 110 at 310 ...) includes displaying a first representation of the bounding volume concurrently with a first representation of the virtual object from a first viewpoint (Col 9, lines 38-45, at 308, 3D diorama 150 is composited or merged with rendered 3D environment 160 to generate composite view 172. 
Composite view 172 includes 3D diorama 150 contained within bounding box 180, which is contained within rendered 3D environment 160. Composite view 172 is then presented to user through display of 112 of computing device 110 at 310 ...) and displaying a second representation of the bounding volume concurrently with a second representation of the virtual object from a second viewpoint, different from the first viewpoint (FIG. 4; Col 10, lines 20-35, at 402, diorama integrator 170 receives user input to rotate 3D diorama 150 based on an input or control received through a portal 120 page, or is triggered to execute rotation or programmed animation. At 404, diorama integrator 170 determines the requested or programmed degree of 3D diorama 150 movement or rotation 153 (e.g., rotate 3D diorama 30 degrees). For example, if portal 120 is a web portal, user may use their mouse, touchpad, or a controller of a VR display device to initiate rotation 153 of presentation of 3D diorama 150. 3D diorama 150 can be rotated 153 in a manner that simulates movement of a viewpoint around 3D diorama 150, as if a camera is panning around the scene of the 3D diorama 150 presented within rendered 3D environment 160. Claim 1 of Chapman describes “rotating a viewpoint of the composite view as if a camera viewpoint is panning around the composite view from a first rotational viewpoint to a second rotational viewpoint different from the first rotational viewpoint”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of Chapman, applying the method for presenting a three-dimensional (3D) diorama in a computer-generated environment taught by Chapman to present the 3D object with the 3D volume from different viewpoints by rotating the composite view. 
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of Chapman to obtain the invention as specified in the claim. Regarding claim 14, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 6). However, Lal does not specifically disclose wherein displaying the bounding volume concurrently with the representation of the virtual object includes displaying a first representation of the bounding volume concurrently with a first representation of the virtual object from a first viewpoint and displaying a second representation of the bounding volume concurrently with a second representation of the virtual object from a second viewpoint, different from the first viewpoint. In addition, Chapman discloses (FIG. 1; Col 7, lines 20-59, one embodiment in which 3D dioramas are associated with downloadable applications from an “app store” and integrated into a rendered 3D environment, a computing system 100 such as a spatial computing system includes a computing device 110 ... Computing device 110, such as a smartphone, touchscreen laptop computer, wearable computing device or spatial computing device such as a head worn spatial computing or VR/AR device (generally, computing device 110) includes a display 112 and one or more input devices 114 ...) wherein displaying the bounding volume concurrently with the representation of the virtual object (FIG. 3; Col 9, lines 8-45, portal 120 also includes or accesses diorama integrator 170. Diorama integrator 170 receives data of assets 152 of 3D diorama 150 associated with a selected application 140, data of objects 162 of rendered 3D environment 160. Diorama integrator 170 composites or merges 3D diorama assets 152 and rendered 3D environment objects 162 based on data thereof to generate a composite view 172 of the 3D diorama 150 and rendered 3D environment 160 ... 
At 304, diorama integrator 170 accesses 3D diorama assets 152 associated with application 140, and at 306, determines a bounding structure 180 such as a bounding box (generally, bounding box 180) in which 3D diorama 150 is to be contained in rendered 3D environment 160. Bounding box 180 provides a virtual separation of or boundary between 3D diorama assets 152 and rendered 3D environment objects 162 such that 3D diorama assets 152 do not extend beyond bounding box 180 ... At 308, 3D diorama 150 is composited or merged with rendered 3D environment 160 to generate composite view 172. Composite view 172 includes 3D diorama 150 contained within bounding box 180, which is contained within rendered 3D environment 160. Composite view 172 is then presented to user through display of 112 of computing device 110 at 310 ...) includes displaying a first representation of the bounding volume concurrently with a first representation of the virtual object from a first viewpoint (Col 9, lines 38-45, at 308, 3D diorama 150 is composited or merged with rendered 3D environment 160 to generate composite view 172. Composite view 172 includes 3D diorama 150 contained within bounding box 180, which is contained within rendered 3D environment 160. Composite view 172 is then presented to user through display of 112 of computing device 110 at 310 ...) and displaying a second representation of the bounding volume concurrently with a second representation of the virtual object from a second viewpoint, different from the first viewpoint (FIG. 4; Col 10, lines 20-35, at 402, diorama integrator 170 receives user input to rotate 3D diorama 150 based an input or control received through a portal 120 page, or is triggered to execute rotation or programmed animation. At 404, diorama integrator 170 determines the requested or programmed degree of 3D diorama 150 movement or rotation 153 (e.g., rotate 3D diorama 30 degrees). 
For example, if portal 120 is a web portal, user may use their mouse, touchpad, or a controller of a VR display device to initiate rotation 153 of presentation of 3D diorama 150. 3D diorama 150 can be rotated 153 in a manner that simulates movement of a viewpoint around 3D diorama 150, as if a camera is panning around the scene of the 3D diorama 150 presented within rendered 3D environment 160. Claim 1 of Chapman describes “rotating a viewpoint of the composite view as if a camera viewpoint is panning around the composite view from a first rotational viewpoint to a second rotational viewpoint different from the first rotational viewpoint”). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of Chapman, applying the method for presenting a three-dimensional (3D) diorama in a computer-generated environment taught by Chapman to present the 3D object with the 3D volume from different viewpoints by rotating the composite view. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of Chapman to obtain the invention as specified in the claim. Regarding claim 20, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 16). However, Lal does not specifically disclose wherein displaying the bounding volume concurrently with the representation of the virtual object includes displaying a first representation of the bounding volume concurrently with a first representation of the virtual object from a first viewpoint and displaying a second representation of the bounding volume concurrently with a second representation of the virtual object from a second viewpoint, different from the first viewpoint. 
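The viewpoint change relied on from Chapman (a camera panning around the composite view, e.g., a requested 30-degree rotation) amounts to rotating the camera position about a vertical axis through the bounding volume's center. The sketch below is a hypothetical illustration only; `pan_viewpoint` and its parameters are assumptions for explanation, not Chapman's implementation.

```python
import math

# Hypothetical sketch: pan a camera around the center of a bounding
# volume by a requested angle about the vertical (y) axis, producing a
# second viewpoint of the same object and bounding volume.

def pan_viewpoint(camera, center, degrees):
    """Rotate the camera position about the y axis through `center`."""
    theta = math.radians(degrees)
    dx, dy, dz = (camera[i] - center[i] for i in range(3))
    # Standard rotation of the offset vector in the x-z plane.
    return (center[0] + dx * math.cos(theta) + dz * math.sin(theta),
            center[1] + dy,
            center[2] - dx * math.sin(theta) + dz * math.cos(theta))

# Example: a camera 2 units in front of the origin, panned 90 degrees,
# ends up 2 units to the side at the same height.
new_cam = pan_viewpoint((0.0, 0.0, 2.0), (0.0, 0.0, 0.0), 90)
```

The object and its bounding volume stay fixed; only the viewpoint moves, which matches the "camera panning around the composite view" framing in the citation.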
In addition, Chapman discloses (FIG. 1; Col 7, lines 20-59, one embodiment in which 3D dioramas are associated with downloadable applications from an “app store” and integrated into a rendered 3D environment, a computing system 100 such as a spatial computing system includes a computing device 110 ... Computing device 110, such as a smartphone, touchscreen laptop computer, wearable computing device or spatial computing device such as a head worn spatial computing or VR/AR device (generally, computing device 110) includes a display 112 and one or more input devices 114 ...) wherein displaying the bounding volume concurrently with the representation of the virtual object (FIG. 3; Col 9, lines 8-45, portal 120 also includes or accesses diorama integrator 170. Diorama integrator 170 receives data of assets 152 of 3D diorama 150 associated with a selected application 140, data of objects 162 of rendered 3D environment 160. Diorama integrator 170 composites or merges 3D diorama assets 152 and rendered 3D environment objects 162 based on data thereof to generate a composite view 172 of the 3D diorama 150 and rendered 3D environment 160 ... At 304, diorama integrator 170 accesses 3D diorama assets 152 associated with application 140, and at 306, determines a bounding structure 180 such as a bounding box (generally, bounding box 180) in which 3D diorama 150 is to be contained in rendered 3D environment 160. Bounding box 180 provides a virtual separation of or boundary between 3D diorama assets 152 and rendered 3D environment objects 162 such that 3D diorama assets 152 do not extend beyond bounding box 180 ... At 308, 3D diorama 150 is composited or merged with rendered 3D environment 160 to generate composite view 172. Composite view 172 includes 3D diorama 150 contained within bounding box 180, which is contained within rendered 3D environment 160. Composite view 172 is then presented to user through display of 112 of computing device 110 at 310 ...) 
includes displaying a first representation of the bounding volume concurrently with a first representation of the virtual object from a first viewpoint (Col 9, lines 38-45, at 308, 3D diorama 150 is composited or merged with rendered 3D environment 160 to generate composite view 172. Composite view 172 includes 3D diorama 150 contained within bounding box 180, which is contained within rendered 3D environment 160. Composite view 172 is then presented to user through display of 112 of computing device 110 at 310 ...) and displaying a second representation of the bounding volume concurrently with a second representation of the virtual object from a second viewpoint, different from the first viewpoint (FIG. 4; Col 10, lines 20-35, at 402, diorama integrator 170 receives user input to rotate 3D diorama 150 based an input or control received through a portal 120 page, or is triggered to execute rotation or programmed animation. At 404, diorama integrator 170 determines the requested or programmed degree of 3D diorama 150 movement or rotation 153 (e.g., rotate 3D diorama 30 degrees). For example, if portal 120 is a web portal, user may use their mouse, touchpad, or a controller of a VR display device to initiate rotation 153 of presentation of 3D diorama 150. 3D diorama 150 can be rotated 153 in a manner that simulates movement of a viewpoint around 3D diorama 150, as if a camera is panning around the scene of the 3D diorama 150 presented within rendered 3D environment 160. Claim 1 of Chapman describes “rotating a viewpoint of the composite view as if a camera viewpoint is panning around the composite view from a first rotational viewpoint to a second rotational viewpoint different from the first rotational viewpoint”). 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of Chapman, applying the method for presenting a three-dimensional (3D) diorama in a computer-generated environment taught by Chapman to present the 3D object with the 3D volume from different viewpoints by rotating the composite view. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of Chapman to obtain the invention as specified in the claim. Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Lal et al (U.S. Patent Application Publication 2024/0096029 A1) in view of BECKER et al (U.S. Patent Application Publication 2023/0377300 A1), in view of TAKAHASHI et al (U.S. Patent Application Publication 2022/0318447 A1), and further in view of Gonzalez Martin et al (U.S. Patent Application Publication 2022/0342384 A1). Regarding claim 8, the combination of Lal in view of BECKER in view of TAKAHASHI discloses everything claimed as applied above (see claim 7). 
In addition, the combination of Lal in view of BECKER in view of TAKAHASHI discloses the one or more programs including instructions for: while displaying the representation of the virtual object such that the portion of the representation of the virtual object is outside of the bounding volume (see claim 7). However, Lal does not specifically disclose receiving, via the one or more input devices, an indication to apply one or more visual attributes at the portion of the representation of the virtual object that is outside of the bounding volume; and in response to the received input corresponding to a request to apply the one or more visual attributes, displaying one or more visual attributes at the portion of the representation of the virtual object that is outside of the bounding volume. In addition, Gonzalez Martin discloses (Paragraph [0020], FIG. 1 illustrates an arrangement in which a voxel representation generation engine 104 receives an input representation 102 of a 3D object, and produces, based on the input representation 102 of the 3D object, a voxel representation 106 of the 3D object; paragraph [0023], the input representation 102 includes an arrangement of meshes 108, such as polygonal meshes, that define surfaces of the 3D object to be built based on the input representation 102. Some of the meshes 108 are to be combined (e.g., by a union operation, an intersection operation, a difference operation, etc.) using 3D Boolean operations, as represented by a list of 3D Boolean operations 110. The list of 3D Boolean operations can be expressed by using descriptions of the 3D Boolean operations as part of input job information relating to building of the 3D object ...) receiving, via the one or more input devices, an indication to apply one or more visual attributes at the portion of the representation of the virtual object that is outside of the bounding volume (Paragraph [0030], FIG. 
3 is a flow diagram of an example classification process 300 of classifying child nodes of a parent node when constructing an octree of nodes (e.g., the octree 107 of FIG. 1). In some examples, the classification of child nodes includes classifying each child node as black, white, or gray. A respective child node is assigned a corresponding color depending upon the following factors: 1) whether the respective child node represents a volume that is completely within, completely outside, or partially within and partially outside of a corresponding mesh 108 that is associated with a 3D Boolean operation (e.g., included in the list of 3D Boolean operations 110 in FIG. 1), and 2) the type of the 3D Boolean operation (e.g., union operation, intersection operation, or difference operation)); and in response to the received input corresponding to a request to apply the one or more visual attributes, displaying one or more visual attributes at the portion of the representation of the virtual object that is outside of the bounding volume (Paragraphs [0038]-[0039], the tasks of classifying child node k (at 302) are further depicted in a process 400 of FIG. 4. A child node represents a volume in a 3D object that is combined within a bounding box having 8 corners. In other examples, other bounding structures containing a volume of a child node can have a different bounding structure ... Assuming the example where the bounding box of the volume represented by child node k has 8 corners represented by 8 respective node points, the process 400 determines (at 402) for each of the 8 node points whether the node point is within a mesh 108 associated with the 3D Boolean operation. The process 400 assigns either the color black or the color white to a node point of a bounding box of child node k depending on the following factors: 1) the type of the 3D Boolean operation associated with the mesh 108, and 2) whether the node point is within or outside the mesh 108; paragraph [0052], ... 
The machine-readable instructions indicate (e.g., assign a node the color white) that the respective sub-volume is part of the 3D object responsive to determining that the points of the respective sub-volume are outside the structure and the Boolean operation being a difference operation ...). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal in view of BECKER in view of TAKAHASHI to incorporate the teachings of Gonzalez Martin, applying the method for building a three-dimensional (3D) object taught by Gonzalez Martin to process each portion of the 3D object and assign a different color to each portion based on whether the portion of the 3D object is within or outside the bounding box of the software application. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal in view of BECKER in view of TAKAHASHI according to the relied-upon teachings of Gonzalez Martin to obtain the invention as specified in the claim. Regarding claim 9, the combination of Lal in view of BECKER in view of TAKAHASHI in view of Gonzalez Martin discloses everything claimed as applied above (see claim 7). In addition, Gonzalez Martin discloses wherein the one or more visual attributes are configured to visually distinguish the portion of the representation of the virtual object that is outside of the bounding volume from one or more portions of the representation of the virtual object that are inside of the bounding volume (Paragraphs [0038]-[0039], the tasks of classifying child node k (at 302) are further depicted in a process 400 of FIG. 4. A child node represents a volume in a 3D object that is combined within a bounding box having 8 corners. 
In other examples, other bounding structures containing a volume of a child node can have a different bounding structure ... Assuming the example where the bounding box of the volume represented by child node k has 8 corners represented by 8 respective node points, the process 400 determines (at 402) for each of the 8 node points whether the node point is within a mesh 108 associated with the 3D Boolean operation. The process 400 assigns either the color black or the color white to a node point of a bounding box of child node k depending on the following factors: 1) the type of the 3D Boolean operation associated with the mesh 108, and 2) whether the node point is within or outside the mesh 108; paragraph [0052], ... The machine-readable instructions indicate (e.g., assign a node the color white) that the respective sub-volume is part of the 3D object responsive to determining that the points of the respective sub-volume are outside the structure and the Boolean operation being a difference operation ...). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal in view of BECKER in view of TAKAHASHI to incorporate the teachings of Gonzalez Martin, applying the method for building a three-dimensional (3D) object taught by Gonzalez Martin to process each portion of the 3D object and assign a different color to each portion based on whether the portion of the 3D object is within or outside the bounding box of the software application. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal in view of BECKER in view of TAKAHASHI according to the relied-upon teachings of Gonzalez Martin to obtain the invention as specified in the claim. Claim 13 is rejected under 35 U.S.C. 
103 as being unpatentable over Lal et al (U.S. Patent Application Publication 2024/0096029 A1) in view of BECKER et al (U.S. Patent Application Publication 2023/0377300 A1) and further in view of Sayers et al (U.S. Patent Application Publication 2023/0064797 A1). Regarding claim 13, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 12), and Lal further discloses wherein the size information associated with the width, height, and depth of the software application (FIG. 3; paragraph [0034], In the example shown in FIG. 4, user 402 is nearby table 404. In such a case, control circuitry may be configured to determine, e.g., by virtue of one or more image processing techniques, that a surface of the table 404 is free from obstruction, and is thus open for placement of a 3D object. In particular, control circuitry may determine a volume of the space free from obstruction, e.g., a volume boundary, defined by the surface area of the table and a free height above the table, e.g., a height free from obstruction. In the example shown in FIG. 4, the volume of the space is defined by the length L and the width W of the table, and the height H above the table clear from obstruction, such as by light 404 above table 402). However, Lal does not specifically disclose the size information associated with the width, height, and depth are displayed concurrently with the bounding volume. In addition, Sayers discloses (Abstract, systems and methods are described herein to display 3D part models with nonlinearly scaled bounding boxes and/or font sizes ...) the size information associated with the width, height, and depth are displayed concurrently with the bounding volume (Paragraph [0048], FIG. 6 illustrates an example graphical user interface rendered within a browser window 600 to display a two-dimensional perspective view 610 of a relatively large 3D part model which is printable on a 3D printer. 
As illustrated, a bounding box 605 is illustrated with display dimensions that are nonlinearly calculated based on the print dimensions of the 3D part model. The part model is near the maximum possible printable size, and so is consuming much of the usable display. The print dimensions (illustrated as height 621, length 622, and depth 623) are displayed in a normal or average font size). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of Sayers, applying the method for displaying a 3D part with a bounding box taught by Sayers to display the size information associated with the width, height, and depth concurrently with the bounding volume. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of Sayers to obtain the invention as specified in the claim. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Lal et al (U.S. Patent Application Publication 2024/0096029 A1) in view of BECKER et al (U.S. Patent Application Publication 2023/0377300 A1) and further in view of McIntyre, Jr et al (U.S. Patent 10,846,940 B1). Regarding claim 15, the combination of Lal in view of BECKER discloses everything claimed as applied above (see claim 6). However, Lal does not specifically disclose the one or more programs including instructions for displaying a visual indicator indicating a corresponding size of the virtual object in the three-dimensional environment relative to a size of a physical object that is associated with the virtual object. 
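For context, an indicator of the kind recited in claim 15 could be derived by comparing the virtual object's bounding dimensions to those of the associated physical object and formatting the ratio for display. The sketch below is purely hypothetical; `relative_size_label`, the averaging of per-axis ratios, and the label format are illustrative assumptions, not taken from the application or from McIntyre, Jr.

```python
# Hypothetical sketch: derive the text of a visual indicator expressing a
# virtual object's size relative to an associated physical object.

def relative_size_label(virtual_dims, physical_dims):
    """Compare (width, height, depth) tuples and format an indicator."""
    ratios = [v / p for v, p in zip(virtual_dims, physical_dims)]
    scale = sum(ratios) / len(ratios)  # average per-axis scale factor
    return f"{scale:.0%} of actual size"

# Example: a virtual chair previewed at half the physical chair's size.
label = relative_size_label((0.25, 0.45, 0.25), (0.5, 0.9, 0.5))
```

Other summaries (e.g., the maximum per-axis ratio) would be equally plausible; the point is only that the indicator conveys size relative to the physical object rather than an absolute measurement.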
In addition, McIntyre, Jr discloses (Abstract, systems and methods providing for determining physical location of a device of a user of an augmented reality environment corresponding to a physical space ...) the one or more programs including instructions (Col 5, lines 60-67 to Col 6, lines 1-35; FIG. 2A illustrates an example of an augmented reality module 201 ... The processor 202 may be configured to execute computer-readable instructions stored in a memory to perform various functions related to augmented reality and mapping of environments in the vicinity of the user ... the memory 203 may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices ...) for displaying a visual indicator indicating a corresponding size of the virtual object in the three-dimensional environment relative to a size of a physical object (Col 8, lines 12-14, the user, physical objects captured by the camera module 204) that is associated with the virtual object (Col 8, lines 25-62, SLAM module 212 may perform mapping operations to construct a map of the environment, while, at the same time, determining the location of the participant and their device based on the map being built. ... The SLAM module 212 may use one or more camera modules 205 to track a set of points through successive camera frames. The tracked set of points may be used to triangulate their 3D position as well as simultaneously calculate the camera pose as it observes those points at their estimated position ... Overlay module 213 may determine how 3D virtual objects are to be placed and rendered at the positioning, scale/size and orientation specified with relation to the 3D model of the environment. Overlay module 213 may determine the shape or appearance, including texture and color, of the virtual 3D object that is to be placed at the specified location. 
For example, in the case of crowd sourcing the mapping of an environment, the overlay module 213 may superimpose indicators of the location and position of other participants within the environment).

Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the method and system for selecting a 3D object for display in an extended reality environment taught by Lal to incorporate the teachings of McIntyre, Jr., applying the method for determining a physical location of a device of a first user in an augmented reality environment corresponding to a physical space taught by McIntyre, Jr. to display an indicator indicating the size of the virtual object in the three-dimensional environment relative to a size of a physical object. Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify Lal according to the relied-upon teachings of McIntyre, Jr. to obtain the invention as specified in the claim.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Xilin Guo, whose telephone number is (571) 272-5786. The examiner can normally be reached Monday - Friday, 9:00 AM-5:30 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XILIN GUO/
Primary Examiner, Art Unit 2616

Prosecution Timeline

Jun 04, 2024
Application Filed
Jan 29, 2026
Non-Final Rejection — §103, §112
Apr 01, 2026
Applicant Interview (Telephonic)
Apr 01, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602855
LIVE MODEL PROMPTING AND REAL-TIME OUTPUT OF PHOTOREAL SYNTHETIC CONTENT
2y 5m to grant Granted Apr 14, 2026
Patent 12597403
DISPLAY DEVICE FOR A VEHICLE
2y 5m to grant Granted Apr 07, 2026
Patent 12579712
ASSET CREATION USING GENERATIVE ARTIFICIAL INTELLIGENCE
2y 5m to grant Granted Mar 17, 2026
Patent 12579766
SYSTEM AND METHOD FOR RAPID OUTFIT VISUALIZATION
2y 5m to grant Granted Mar 17, 2026
Patent 12573121
Automated Generation and Presentation of Sign Language Avatars for Video Content
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+17.4%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 456 resolved cases by this examiner. Grant probability derived from career allow rate.
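The projection figures above follow from simple arithmetic: the 82% grant probability is the examiner's career allow rate (374 granted of 456 resolved), and the 99% with-interview figure is that base rate plus the +17.4% interview lift, capped below 100%. A minimal sketch of that calculation, assuming this derivation and with function names that are illustrative rather than the tool's actual API:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def with_interview(base_pct: float, lift_pct: float, cap: float = 99.0) -> float:
    """Grant probability after adding the interview lift, capped (assumed cap)."""
    return min(base_pct + lift_pct, cap)

# Figures from this examiner's record
base = allow_rate(374, 456)
print(round(base))                          # 82
print(round(with_interview(base, 17.4)))    # 99
```

Note the cap is an assumption to match the displayed 99%; uncapped, 82.0% + 17.4% would read 99.4%.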
