Prosecution Insights
Last updated: April 19, 2026
Application No. 18/676,171

Devices, Methods, and Graphical User Interfaces for Viewing and Interacting with Three-Dimensional Environments

Status: Non-Final OA (§103)
Filed: May 28, 2024
Examiner: ROBINSON, TERRELL M
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 2y 3m
With Interview: 90%

Examiner Intelligence

Grants 83% — above average

Career Allow Rate: 83% (403 granted / 486 resolved; +20.9% vs TC avg)
Interview Lift: +7.5% (a moderate ~+8% lift, measured over resolved cases with interview)
Typical timeline: 2y 3m avg prosecution (27 currently pending)
Career history: 513 total applications, across all art units
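The headline figures above follow from the raw counts in this report. A minimal sketch of the arithmetic (variable names are illustrative, not from any particular API):

```python
# Back-of-the-envelope check of the examiner metrics shown above,
# using only the numbers quoted in this report.

granted, resolved = 403, 486

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # ≈ 82.9%, displayed as 83%

# "+20.9% vs TC avg" implies a Tech Center 2600 baseline of roughly:
tc_avg = career_allow_rate - 0.209
print(f"Implied TC 2600 average: {tc_avg:.1%}")       # ≈ 62.0%

# The 83% -> 90% predicted grant probabilities at the top imply a
# +7.0% absolute jump; the +7.5% interview lift is the separate,
# historical figure computed over resolved cases with an interview.
print(f"Predicted interview delta: {0.90 - 0.83:+.1%}")
```

This also shows why the card reads "83%": the underlying rate is 82.9%, rounded for display.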

Statute-Specific Performance

§101: 7.0% (-33.0% vs TC avg)
§103: 54.5% (+14.5% vs TC avg)
§102: 11.7% (-28.3% vs TC avg)
§112: 17.2% (-22.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 486 resolved cases
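Each statute-specific rate above is paired with a delta "vs TC avg", so the Tech Center baseline can be back-calculated as rate minus delta. A small sketch, using only the figures listed:

```python
# Back-calculate the implied Tech Center baseline for each statute
# from the (rate, delta-vs-TC-avg) pairs shown above.

rows = {
    "§101": (0.070, -0.330),
    "§103": (0.545, +0.145),
    "§102": (0.117, -0.283),
    "§112": (0.172, -0.228),
}

for statute, (rate, delta) in rows.items():
    baseline = rate - delta  # delta = rate - baseline, so baseline = rate - delta
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {baseline:.1%}")
```

All four pairs imply the same ~40.0% baseline, consistent with the footnote that the Tech Center average is a single estimated reference line.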

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Claims 5, 6, 17, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if the claims are rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

In regards to dependent claim 5, none of the cited prior art alone or in combination provides motivation to teach “wherein displaying the first simulated shadow at a second shadow position in the three-dimensional environment includes: when the changing of the orientation of the first user interface object from the first orientation to the second orientation includes a first magnitude of rotation of the first user interface object, displaying a first magnitude of change in spatial relationship between the first simulated shadow and the first user interface object; and when the changing of the orientation of the first user interface object from the first orientation to the second orientation includes a second magnitude of rotation of the first user interface object, displaying a second magnitude of change in spatial relationship between the first simulated shadow and the first user interface object, wherein the second magnitude of rotation is different from the first magnitude of rotation, and the second magnitude of change in spatial relationship is different from the first magnitude of change in spatial relationship.” The references only teach techniques for various user interactions with a virtual object and rendering virtual shadows to replicate realistic lighting and appearance features; they fail to explicitly disclose the concept of displaying a magnitude of rotation to quantify the amount of change in the distance between first and second orientations of a user interface object and representing the corresponding simulation of the object's shadow, in conjunction with the features of claim 1 from which it depends, for the purpose of representing appearances of a graphical user interface as a result of changes in orientation due to user interactions. In addition, there is no teaching, suggestion, or motivation found in the current references, and none that can be inferred from the examiner's own knowledge, with respect to the current limitation.

In regards to dependent claim 17, none of the cited prior art alone or in combination provides motivation to teach “detecting an input corresponding to a request to move the first user interface object relative to the three-dimensional environment; and in response to detecting the input corresponding to the request to move the first user interface object relative to the three-dimensional environment: moving the first user interface object relative to the three-dimensional environment; and visually deemphasizing the first simulated shadow corresponding to the first user interface object as a distance between the first user interface object and a respective plane in the three-dimensional environment changes, wherein visually deemphasizing the first simulated shadow includes changing one or more visual properties of the first simulated shadow.” The references only teach techniques for various user interactions with a virtual object, rendering virtual shadows to replicate realistic lighting and appearance features, and emphasizing and deemphasizing user interface objects; they fail to explicitly disclose performing this deemphasizing feature specifically in relation to a user input that affects just the simulated shadows which correspond to a user interface object in relation to changes between distance and a plane of the virtual environment, in conjunction with the features of claim 1 from which it depends, for the purpose of representing appearances of a graphical user interface as a result of changes in orientation due to user interactions. In addition, there is no teaching, suggestion, or motivation found in the current references, and none that can be inferred from the examiner's own knowledge, with respect to the current limitation.

In regards to dependent claims 6 and 18, these claims depend from objected-to base claims 5 and 17, and thus are objected to based on the same rationale as provided above.

As allowable subject matter has been indicated, applicant's reply must either comply with all formal requirements or specifically traverse each requirement not complied with. See 37 CFR 1.111(b) and MPEP § 707.07(a).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 4, 7-10, 14, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Faulkner (US 2021/0097776 A1, hereinafter referenced “Faulkner”) in view of Krivoruchko (US 2023/0092874 A1, hereinafter referenced “Kriv”).

In regards to claim 1, Faulkner discloses a method (Faulkner, Abstract), comprising:

- at a computer system that is in communication with a display generation component and one or more sensors (Faulkner, para [0042]; Reference discloses as shown in FIG. 1, the CGR experience is provided to the user via an operating environment 100 that includes a computer system 101…includes a display generation component 120 (e.g., a head-mounted device (HMD))…one or more sensors 190 (e.g., image sensors, light sensors, depth sensors, tactile sensors, orientation sensors, proximity sensors, temperature sensors, location sensors, motion sensors, velocity sensors, etc.), and optionally one or more peripheral devices 195 (e.g., home appliances, wearable devices, etc.));

- while a first view of a three-dimensional environment is visible via the display generation component, displaying a first user interface object with a first orientation and at a first object position in the three-dimensional environment (Faulkner, para [0038] and [0041]; Reference at [0038] discloses a user places a display generation component of the computer system in a predefined position relative to the user (e.g., putting a display in front of his/her eyes, or putting a head-mounted device on his/her head), the user's view of the real world is blocked by the display generation component, and the content presented by the display generation component dominates the user's view (i.e. first view of 3D environment is visible via the display generation component). Para [0041] discloses FIGS. 7C-7F are block diagrams illustrating methods for generating a computer-generated three-dimensional environment (e.g., including simulating visual interplay between physical and virtual objects), in accordance with some embodiments (i.e. displaying a first user interface object with a default first orientation and at a first object position in the three-dimensional environment));

- and displaying a first simulated shadow, corresponding to the first user interface object, at a first shadow position in the three-dimensional environment (Faulkner, para [0128]; Reference discloses in the physical environment, the computer system also generates a simulated shadow 7338 at a location in the three-dimensional environment (e.g., on floor representation 7338′) that corresponds to a location of a real shadow (e.g., on floor 7308) that would have been cast by furniture 7310 if illuminated by a real light source with the same location and characteristics of the virtual object 7332 (e.g., a real window on side wall 7306));

- wherein the first simulated shadow at the first shadow position has a first spatial relationship to the first user interface object (Faulkner, para [0129] and [0130]; Reference at [0129] discloses as a result, for many positions on the side wall representation 7306′, the luminance and color values of the corresponding positions on the first virtual object 7332 have changed (e.g., from those values shown in FIG. 7E to those shown in FIG. 7F)… In some embodiments, due to the size change of the first virtual object or movement of the first virtual object, for some positions on the side wall representation 7306′, the luminance and colors corresponding to those positions will be change because the first virtual object have now expanded or moved to that position. Para [0130] discloses in FIG. 7F, the shadow 7338 cast on floor representation 7308′ also appears less dark as compared to shadow 7308 in FIG. 7E due to the reduced amount of illumination from the reduced size of the first virtual object 7332. The first simulated shadow tied to movement of the virtual object is interpreted as wherein the first simulated shadow at the first shadow position has a first spatial relationship to the first user interface object);

- detecting a user input directed to the first user interface object (Faulkner, para [0140]; Reference discloses in response to the first predefined gesture, the computer system also optionally adds another virtual element (e.g., virtual object 7404) to the three-dimensional environment, without replacing any whole class of physical elements. The virtual object 7404 is optionally a user interface object, such as a menu (e.g., menu of application, documents, etc.), a control (e.g., display brightness control, display focus control, etc.), or other objects (e.g., a virtual assistant, a document, media item, etc.) that can be manipulated by user inputs or provides information or feedback in the three-dimensional environment);

- and displaying the first simulated shadow at a second shadow position in the three-dimensional environment, wherein: the second shadow position in the first view of the three-dimensional environment is different from the first shadow position in the first view of the three-dimensional environment (Faulkner, para [0197]; Reference discloses in some embodiments, detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment includes detecting a movement (e.g., movement of the gaze input, or movement of the finger before the tap of the finger) in the predefined input, and wherein displaying the second visual indication (e.g., a glowing or shadowy overlay (e.g., in the shape of the virtual object)) at the destination location includes updating a location of the second visual indication based on the movement of the predefined input (e.g., the location of the glowing or shadowy overlay (e.g., in the shape of the virtual object) is continuously and dynamically changed in accordance with the movement of the gaze input and/or the location of the finger before the tap of the input)));

- the first simulated shadow at the second shadow position has a second spatial relationship to the first user interface object (Faulkner, para [0196]; Reference discloses while displaying the virtual object with the first visual indication that the virtual object has transitioned into the reconfiguration mode, the computer system detects a predefined input specifying a destination location for the virtual object in the three-dimensional environment (e.g., detecting the predefined input includes detecting movement of a user's gaze from the first spatial location to the second spatial location, or detecting a tap input by a finger of the hand…while the user's gaze is focused on the second spatial location in the three-dimensional space). In response to detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment, the computer system displays a second visual indication (e.g., a glowing or shadowy overlay (e.g., in the shape of the virtual object)) (i.e. shadowy overlay moves to second position based on user input indicating spatial relationship in 3D environment) at the destination location before moving the virtual object from the first spatial location to the destination location (e.g., the second spatial location or a location different from the second spatial location));

- and the second spatial relationship is different from the first spatial relationship (Faulkner, para [0196]; Reference discloses while displaying the virtual object with the first visual indication that the virtual object has transitioned into the reconfiguration mode, the computer system detects a predefined input specifying a destination location for the virtual object in the three-dimensional environment (e.g., detecting the predefined input includes detecting movement of a user's gaze from the first spatial location to the second spatial location, or detecting a tap input by a finger of the hand…while the user's gaze is focused on the second spatial location in the three-dimensional space). In response to detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment, the computer system displays a second visual indication (e.g., a glowing or shadowy overlay) at the destination location before moving the virtual object from the first spatial location to the destination location (e.g., the second spatial location or a location different from the second spatial location) (i.e. second spatial relationship is different from the first spatial relationship)).

Faulkner does not explicitly disclose but Kriv teaches:

- and in response to detecting the user input, changing an orientation of the first user interface object from the first orientation to a second orientation, including: displaying the first user interface object with the second orientation (Kriv, para [0121]; Reference discloses after the user has moved more than the threshold amount (i.e. in response to detecting user input) (e.g., by more than a threshold distance, by more than a threshold amount of change in orientation and/or position, and/or for a longer period of time than the first predefined time period), the user interface object 7104-4 is updated to be displayed at a position in the three-dimensional environment, different from its initial position (e.g., before the user began movement) (i.e. changing an orientation of the first user interface object from the first orientation to a second orientation, including: displaying the first user interface object with the second orientation based on orientation of user)).

Faulkner and Kriv are combinable because they are in the same field of endeavor regarding virtual object interactions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner to include the graphical user interface user attention features of Kriv in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether a user satisfies an attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment or not and modifying the user interface's appearance, providing features that allow for more efficient feedback for reducing complexity in manipulating virtual objects, applicable to improving virtual and augmented reality rendering
systems such as those taught in Faulkner.

In regards to claim 2, Faulkner in view of Kriv teach the method of claim 1. Faulkner further discloses:

- including displaying the first simulated shadow at the second shadow position in the three-dimensional environment without changing a size of the first simulated shadow (Faulkner, para [0197]; Reference discloses in some embodiments, detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment includes detecting a movement (e.g., movement of the gaze input, or movement of the finger before the tap of the finger) in the predefined input, and wherein displaying the second visual indication (e.g., a glowing or shadowy overlay (e.g., in the shape of the virtual object)) at the destination location includes updating a location of the second visual indication based on the movement of the predefined input (e.g., the location of the glowing or shadowy overlay (e.g., in the shape of the virtual object) is continuously and dynamically changed in accordance with the movement of the gaze input and/or the location of the finger before the tap of the input). The simulated shadow is moved to the second position but its size is not indicated as being changed in this operation, thus interpreted as displaying the first simulated shadow at the second shadow position in the three-dimensional environment without changing a size of the first simulated shadow).

In regards to claim 4, Faulkner in view of Kriv teach the method of claim 1. Faulkner further discloses:

- wherein displaying the first simulated shadow at a second shadow position in the three-dimensional environment includes: when the changing of the orientation of the first user interface object from the first orientation to the second orientation includes rotation of the first user interface object in a first direction, repositioning the first simulated shadow in a third direction corresponding to the first direction (Faulkner, para [0036] and para [0202]; Reference at [0036] discloses while in the reconfiguration mode, the object is moved from one location to another location in the computer-generated environment in response to a first respective gesture that is configured to trigger a first type of interaction with the virtual object (e.g., to activate, navigate within, or rotate the virtual object) when the virtual object is not in the reconfiguration mode. Para [0202] discloses while the virtual object is in the reconfiguration mode, the computer system detects a fourth hand movement after detecting the second hand movement and moving the virtual object in accordance with the second movement. In response to detecting the fourth hand movement: in accordance with a determination that the fourth hand movement meets the first gesture criteria, the computer system moves the virtual object from the second spatial location to a third spatial location in accordance with the fourth hand movement);

- and when the changing of the orientation of the first user interface object from the first orientation to the second orientation includes rotation of the first user interface object in a second direction that is different from the first direction, repositioning the first simulated shadow in a fourth direction corresponding to the second direction, wherein the fourth direction is different from the third direction (Faulkner, para [0036], [0196], and [0202]; Reference at [0036] discloses while in the reconfiguration mode, the object is moved from one location to another location in the computer-generated environment in response to a first respective gesture that is configured to trigger a first type of interaction with the virtual object (e.g., to activate, navigate within, or rotate the virtual object) when the virtual object is not in the reconfiguration mode. Para [0202] discloses while the virtual object is in the reconfiguration mode, the computer system detects a fourth hand movement after detecting the second hand movement and moving the virtual object in accordance with the second movement. In response to detecting the fourth hand movement: in accordance with a determination that the fourth hand movement meets the first gesture criteria, the computer system moves the virtual object from the second spatial location to a third spatial location in accordance with the fourth hand movement. The user input for moving the object to a different position based on an interaction such as rotation is interpreted as accounting for rotation of the first user interface object in a first direction, and for rotation in a second direction that is different from the first direction, repositioning the first simulated shadow in a fourth direction corresponding to the second direction. With regards to the shadows being moved, the reference details the correspondence between virtual object and shadow movement in para [0196], which details in response to detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment, the computer system displays a second visual indication (e.g., a glowing or shadowy overlay (e.g., in the shape of the virtual object)) at the destination location before moving the virtual object from the first spatial location to the destination location (e.g., the second spatial location or a location different from the second spatial location)).

In regards to claim 7, Faulkner in view of Kriv teach the method of claim 1.
Faulkner further discloses:

- including: in response to detecting the user input: moving the first user interface object from the first object position to a second object position that is different from the first object position (Faulkner, para [0190]; Reference discloses in response to detecting the second hand movement performed by the user: in accordance with a determination that the second hand movement meets the first gesture criteria (e.g., the first hand movement is a pinch and drag gesture (e.g., movement of the pinching fingers is resulted from the whole hand moving laterally)), the computer system moves (8010) the virtual object from the first spatial location to a second spatial location (e.g., without performing the first operation) in accordance with the second hand movement).

In regards to claim 8, Faulkner in view of Kriv teach the method of claim 7. Faulkner does not explicitly disclose but Kriv teaches:

- wherein changing the orientation of the first user interface object from the first orientation to the second orientation is performed automatically in accordance with the moving of the first user interface object from the first object position to the second object position (Kriv, para [0104]; Reference discloses movement of the user's head and/or torso, and/or the movement of the display generation component or other location sensing elements of the computer system (e.g., due to the user holding the display generation component or wearing the HMD), etc., relative to the physical environment cause corresponding movement of the viewpoint (e.g., with corresponding movement direction, movement distance, movement speed, and/or change in orientation) relative to the three-dimensional environment, resulting in corresponding change in the currently displayed view of the three-dimensional environment. In some embodiments, when a virtual object has a preset spatial relationship relative to the viewpoint (e.g., is anchored or fixed to the viewpoint), movement of the viewpoint relative to the three-dimensional environment would cause movement of the virtual object relative to the three-dimensional environment while the position of the virtual object in the field of view is maintained).

Faulkner and Kriv are combinable because they are in the same field of endeavor regarding virtual object interactions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner to include the graphical user interface user attention features of Kriv in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether a user satisfies an attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment or not and modifying the user interface's appearance, providing features that allow for more efficient feedback for reducing complexity in manipulating virtual objects, applicable to improving virtual and augmented reality rendering systems such as those taught in Faulkner.

In regards to claim 9, Faulkner in view of Kriv teach the method of claim 1. Faulkner does not explicitly disclose but Kriv teaches:

- including: displaying the first user interface object with the first orientation in accordance with displaying the first user interface object at the first object position at a first distance from a viewpoint of a user; and changing the orientation of the first user interface object from the first orientation to the second orientation in accordance with displaying the first user interface object at a second distance from the viewpoint of the user, wherein the second distance is different from the first distance (Kriv, para [0121]; Reference discloses after the user has moved more than the threshold amount (i.e. in response to detecting user input) (e.g., by more than a threshold distance, by more than a threshold amount of change in orientation and/or position, and/or for a longer period of time than the first predefined time period), the user interface object 7104-4 is updated to be displayed at a position in the three-dimensional environment, different from its initial position (e.g., before the user began movement) (i.e. changing an orientation of the first user interface object from the first orientation to a second orientation, including: displaying the first user interface object with the second orientation based on orientation of user)).

Faulkner and Kriv are combinable because they are in the same field of endeavor regarding virtual object interactions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner to include the graphical user interface user attention features of Kriv in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether a user satisfies an attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment or not and modifying the user interface's appearance, providing features that allow for more efficient feedback for reducing complexity in manipulating virtual objects, applicable to improving virtual and augmented reality rendering systems such as those taught in Faulkner.

In regards to claim 10, Faulkner in view of Kriv teach the method of claim 1. Faulkner further discloses:

- wherein: while the first simulated shadow has the first spatial relationship to the first user interface object with the first orientation, the first simulated shadow is displayed at a respective location relative to the first user interface object; and while the first simulated shadow has the second spatial relationship to the first user interface object with the second orientation, the first simulated shadow is displayed in front of the respective location relative to the first user interface object (Faulkner, para [0197]; Reference discloses in some embodiments, detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment includes detecting a movement (e.g., movement of the gaze input, or movement of the finger before the tap of the finger) in the predefined input, and wherein displaying the second visual indication (e.g., a glowing or shadowy overlay (e.g., in the shape of the virtual object)) at the destination location includes updating a location of the second visual indication based on the movement of the predefined input (e.g., the location of the glowing or shadowy overlay (e.g., in the shape of the virtual object) is continuously and dynamically changed in accordance with the movement of the gaze input and/or the location of the finger before the tap of the input))).

In regards to claim 14, Faulkner in view of Kriv teach the method of claim 1. Faulkner further discloses:

- wherein the user input directed to the first user interface object includes an air gesture (Faulkner, para [0148]; Reference discloses in some embodiments, the content or appearance of the virtual elements 7402 and 7406 (e.g., virtual windows or virtual screens) change in response to additional gesture inputs (e.g., horizontal swipe of the hand in the air, or swipe in a predefined direction around a finger)).

In regards to claim 19,
Faulkner discloses a non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more sensors (Faulkner, para [0014]), the one or more programs including instructions for: -while a first view of a three-dimensional environment is visible via the display generation component, displaying a first user interface object with a first orientation and at a first object position in the three-dimensional environment (Faulkner, para [0038] and [0041]; Reference at [0038] discloses a user places a display generation component of the computer system in a predefined position relative to the user (e.g., putting a display in front of his/her eyes, or putting a head-mounted device on his/her head), the user's view of the real world is blocked by the display generation component, and the content presented by the display generation component dominates the user's view (i.e. first view of 3D environment is visible via the display generation component). Para [0041] discloses FIGS. 7C-7F are block diagrams illustrating methods for generating a computer-generated three-dimensional environment (e.g., including simulating visual interplay between physical and virtual objects), in accordance with some embodiments (i.e. 
displaying a first user interface object with a default first orientation and at a first object position in the three-dimensional environment)), -and displaying a first simulated shadow, corresponding to the first user interface object, at a first shadow position in the three-dimensional environment (Faulkner, para [0128]; Reference discloses in the physical environment, the computer system also generates a simulated shadow 7338 at a location in the three-dimensional environment (e.g., on floor representation 7338′) that corresponds to a location of a real shadow (e.g., on floor 7308) that would have been cast by furniture 7310 if illuminated by a real light source with the same location and characteristics of the virtual object 7332 (e.g., a real window on side wall 7306)), -wherein the first simulated shadow at the first shadow position has a first spatial relationship to the first user interface object (Faulkner, para [0129] and [0130]; Reference at [0129] discloses as a result, for many positions on the side wall representation 7306′, the luminance and color values of the corresponding positions on the first virtual object 7332 have changed (e.g., from those values shown in FIG. 7E to those shown in FIG. 7F… In some embodiments, due to the size change of the first virtual object or movement of the first virtual object, for some positions on the side wall representation 7306′, the luminance and colors corresponding to those positions will change because the first virtual object has now expanded or moved to that position. Para [0130] discloses in FIG. 7F, the shadow 7338 cast on floor representation 7308′ also appears less dark as compared to shadow 7308 in FIG. 7E due to the reduced amount of illumination from the reduced size of the first virtual object 7332. 
The first simulated shadow tied to movement of the virtual object interpreted as wherein the first simulated shadow at the first shadow position has a first spatial relationship to the first user interface object); -detecting a user input directed to the first user interface object (Faulkner, para [0140]; Reference discloses in response to the first predefined gesture, the computer system also optionally adds another virtual element (e.g., virtual object 7404) to the three-dimensional environment, without replacing any whole class of physical elements. The virtual object 7404 is optionally a user interface object, such as a menu (e.g., menu of application, documents, etc.), a control (e.g., display brightness control, display focus control, etc.), or other objects (e.g., a virtual assistant, a document, media item, etc.) that can be manipulated by user inputs or provides information or feedback in the three-dimensional environment); -and displaying the first simulated shadow at a second shadow position in the three-dimensional environment, wherein: the second shadow position in the first view of the three-dimensional environment is different from the first shadow position in the first view of the three-dimensional environment (Faulkner, para [0197]; Reference discloses In some embodiments, detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment includes detecting a movement (e.g., movement of the gaze input, or movement of the finger before the tap of the finger) in the predefined input, and wherein displaying the second visual indication (e.g., a glowing or shadowy overlay (e.g., in the shape of the virtual object)) at the destination location includes updating a location of the second visual indication based on the movement of the predefined input (e.g., the location of the glowing or shadowy overlay (e.g., in the shape of the virtual object) is continuously and dynamically changed in 
accordance with the movement of the gaze input and/or the location of the finger before the tap of the input)); -the first simulated shadow at the second shadow position has a second spatial relationship to the first user interface object (Faulkner, para [0196]; Reference discloses while displaying the virtual object with the first visual indication that the virtual object has transitioned into the reconfiguration mode, the computer system detects a predefined input specifying a destination location for the virtual object in the three-dimensional environment (e.g., detecting the predefined input includes detecting movement of a user's gaze from the first spatial location to the second spatial location, or detecting a tap input by a finger of the hand…while the user's gaze is focused on the second spatial location in the three-dimensional space). In response to detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment, the computer system displays a second visual indication (e.g., a glowing or shadowy overlay (e.g., in the shape of the virtual object)) (i.e. 
shadowy overlay moves to second position based on user input indicating spatial relationship in 3D environment) at the destination location before moving the virtual object from the first spatial location to the destination location (e.g., the second spatial location or a location different from the second spatial location)); -and the second spatial relationship is different from the first spatial relationship (Faulkner, para [0196]; Reference discloses while displaying the virtual object with the first visual indication that the virtual object has transitioned into the reconfiguration mode, the computer system detects a predefined input specifying a destination location for the virtual object in the three-dimensional environment (e.g., detecting the predefined input includes detecting movement of a user's gaze from the first spatial location to the second spatial location, or detecting a tap input by a finger of the hand…while the user's gaze is focused on the second spatial location in the three-dimensional space). In response to detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment, the computer system displays a second visual indication (e.g., a glowing or shadowy overlay) at the destination location before moving the virtual object from the first spatial location to the destination location (e.g., the second spatial location or a location different from the second spatial location) (i.e. second spatial relationship is different from the first spatial relationship)). Faulkner does not explicitly disclose but Kriv teaches -and in response to detecting the user input, changing an orientation of the first user interface object from the first orientation to a second orientation, including: displaying the first user interface object with the second orientation (Kriv, para [0121]; Reference discloses after the user has moved more than the threshold amount (i.e. 
in response to detecting user input) (e.g., by more than a threshold distance, by more than a threshold amount of change in orientation and/or position, and/or for a longer period of time than the first predefined time period), the user interface object 7104-4 is updated to be displayed at a position in the three-dimensional environment, different from its initial position (e.g., before the user began movement) (i.e. changing an orientation of the first user interface object from the first orientation to a second orientation, including: displaying the first user interface object with the second orientation based on orientation of user)); Faulkner and Kriv are combinable because they are in the same field of endeavor regarding virtual object interactions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner to include the graphical user interface user attention features of Kriv in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether a user satisfies attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment or not and modifying the user interface's appearance, providing features that allow for more efficient feedback for reducing complexity in manipulating virtual objects, applicable to improving virtual and augmented reality rendering 
systems such as those taught in Faulkner. In regards to claim 20. Faulkner discloses a computer system that is in communication with a display generation component and one or more sensors, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions (Faulkner, para [0014]) for: -while a first view of a three-dimensional environment is visible via the display generation component, displaying a first user interface object with a first orientation and at a first object position in the three-dimensional environment (Faulkner, para [0038] and [0041]; Reference at [0038] discloses a user places a display generation component of the computer system in a predefined position relative to the user (e.g., putting a display in front of his/her eyes, or putting a head-mounted device on his/her head), the user's view of the real world is blocked by the display generation component, and the content presented by the display generation component dominates the user's view (i.e. first view of 3D environment is visible via the display generation component). Para [0041] discloses FIGS. 7C-7F are block diagrams illustrating methods for generating a computer-generated three-dimensional environment (e.g., including simulating visual interplay between physical and virtual objects), in accordance with some embodiments (i.e. 
displaying a first user interface object with a default first orientation and at a first object position in the three-dimensional environment)), -and displaying a first simulated shadow, corresponding to the first user interface object, at a first shadow position in the three-dimensional environment (Faulkner, para [0128]; Reference discloses in the physical environment, the computer system also generates a simulated shadow 7338 at a location in the three-dimensional environment (e.g., on floor representation 7338′) that corresponds to a location of a real shadow (e.g., on floor 7308) that would have been cast by furniture 7310 if illuminated by a real light source with the same location and characteristics of the virtual object 7332 (e.g., a real window on side wall 7306)), -wherein the first simulated shadow at the first shadow position has a first spatial relationship to the first user interface object (Faulkner, para [0129] and [0130]; Reference at [0129] discloses as a result, for many positions on the side wall representation 7306′, the luminance and color values of the corresponding positions on the first virtual object 7332 have changed (e.g., from those values shown in FIG. 7E to those shown in FIG. 7F… In some embodiments, due to the size change of the first virtual object or movement of the first virtual object, for some positions on the side wall representation 7306′, the luminance and colors corresponding to those positions will change because the first virtual object has now expanded or moved to that position. Para [0130] discloses in FIG. 7F, the shadow 7338 cast on floor representation 7308′ also appears less dark as compared to shadow 7308 in FIG. 7E due to the reduced amount of illumination from the reduced size of the first virtual object 7332. 
The first simulated shadow tied to movement of the virtual object interpreted as wherein the first simulated shadow at the first shadow position has a first spatial relationship to the first user interface object); -detecting a user input directed to the first user interface object (Faulkner, para [0140]; Reference discloses in response to the first predefined gesture, the computer system also optionally adds another virtual element (e.g., virtual object 7404) to the three-dimensional environment, without replacing any whole class of physical elements. The virtual object 7404 is optionally a user interface object, such as a menu (e.g., menu of application, documents, etc.), a control (e.g., display brightness control, display focus control, etc.), or other objects (e.g., a virtual assistant, a document, media item, etc.) that can be manipulated by user inputs or provides information or feedback in the three-dimensional environment); -and displaying the first simulated shadow at a second shadow position in the three-dimensional environment, wherein: the second shadow position in the first view of the three-dimensional environment is different from the first shadow position in the first view of the three-dimensional environment (Faulkner, para [0197]; Reference discloses In some embodiments, detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment includes detecting a movement (e.g., movement of the gaze input, or movement of the finger before the tap of the finger) in the predefined input, and wherein displaying the second visual indication (e.g., a glowing or shadowy overlay (e.g., in the shape of the virtual object)) at the destination location includes updating a location of the second visual indication based on the movement of the predefined input (e.g., the location of the glowing or shadowy overlay (e.g., in the shape of the virtual object) is continuously and dynamically changed in 
accordance with the movement of the gaze input and/or the location of the finger before the tap of the input)); -the first simulated shadow at the second shadow position has a second spatial relationship to the first user interface object (Faulkner, para [0196]; Reference discloses while displaying the virtual object with the first visual indication that the virtual object has transitioned into the reconfiguration mode, the computer system detects a predefined input specifying a destination location for the virtual object in the three-dimensional environment (e.g., detecting the predefined input includes detecting movement of a user's gaze from the first spatial location to the second spatial location, or detecting a tap input by a finger of the hand…while the user's gaze is focused on the second spatial location in the three-dimensional space). In response to detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment, the computer system displays a second visual indication (e.g., a glowing or shadowy overlay (e.g., in the shape of the virtual object)) (i.e. 
shadowy overlay moves to second position based on user input indicating spatial relationship in 3D environment) at the destination location before moving the virtual object from the first spatial location to the destination location (e.g., the second spatial location or a location different from the second spatial location)); -and the second spatial relationship is different from the first spatial relationship (Faulkner, para [0196]; Reference discloses while displaying the virtual object with the first visual indication that the virtual object has transitioned into the reconfiguration mode, the computer system detects a predefined input specifying a destination location for the virtual object in the three-dimensional environment (e.g., detecting the predefined input includes detecting movement of a user's gaze from the first spatial location to the second spatial location, or detecting a tap input by a finger of the hand…while the user's gaze is focused on the second spatial location in the three-dimensional space). In response to detecting the predefined input specifying the destination location for the virtual object in the three-dimensional environment, the computer system displays a second visual indication (e.g., a glowing or shadowy overlay) at the destination location before moving the virtual object from the first spatial location to the destination location (e.g., the second spatial location or a location different from the second spatial location) (i.e. second spatial relationship is different from the first spatial relationship)). Faulkner does not explicitly disclose but Kriv teaches -and in response to detecting the user input, changing an orientation of the first user interface object from the first orientation to a second orientation, including: displaying the first user interface object with the second orientation (Kriv, para [0121]; Reference discloses after the user has moved more than the threshold amount (i.e. 
in response to detecting user input) (e.g., by more than a threshold distance, by more than a threshold amount of change in orientation and/or position, and/or for a longer period of time than the first predefined time period), the user interface object 7104-4 is updated to be displayed at a position in the three-dimensional environment, different from its initial position (e.g., before the user began movement) (i.e. changing an orientation of the first user interface object from the first orientation to a second orientation, including: displaying the first user interface object with the second orientation based on orientation of user)); Faulkner and Kriv are combinable because they are in the same field of endeavor regarding virtual object interactions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner to include the graphical user interface user attention features of Kriv in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether a user satisfies attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment or not and modifying the user interface's appearance, providing features that allow for more efficient feedback for reducing complexity in manipulating virtual objects, applicable to improving virtual and augmented reality rendering 
systems such as those taught in Faulkner. Claims 3, 11-13, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Faulkner (US 2021/0097776 A1) in view of Krivoruchko (US 2023/0092874 A1) as applied to claim 1 above, and further in view of Wan (US 2023/0114043 A1, hereinafter referenced “Wan”). In regards to claim 3. Faulkner in view of Kriv teach the method of claim 1. Faulkner further discloses -including: in response to detecting the user input: displaying the first user interface object at a second object position in the first view of the three-dimensional environment (Faulkner, Fig.8; Reference at step 8010 discloses in accordance with a determination that the second hand movement meets the first gesture criteria, move the virtual object from the first spatial location to a second spatial location); Faulkner and Kriv do not explicitly disclose but Wan teaches -wherein: a degree of change between the first object position and the second object position of the first user interface object is different from a degree of change between the first shadow position and the second shadow position of the first simulated shadow (Wan, para [0333] and [0342]; Reference at para [0333] discloses displaying a user interface object that simulates an object made of a transparent or partially transparent material while decoupling the amount of shadow that the object is displayed as casting on other objects provides the user with information about the dimensions and position of the user interface object in the three-dimensional environment. 
Para [0342] discloses in some embodiments, the appearance of the shadow indicates a simulated distance between the first user interface object (e.g., a first point in simulated three-dimensional space in or on the first user interface object) and the background content (e.g., a second point in simulated three-dimensional space in or on the background content), and while the first user interface object is a first distance from the background content, the shadow is displayed with a first appearance, whereas while the first user interface object is a second distance from the background content (e.g., in response to a user input to move the first user interface object in the simulated three-dimensional space and relative to the background content) the shadow is displayed with a second appearance that is different from the first appearance. The teaching of decoupling the amount of shadow from the object, together with the description of the shadow's appearance differing based on distance and movement, indicates the ability to produce the situation where the degree of change between the first object position and the second object position of the first user interface object is different from a degree of change between the first shadow position and the second shadow position of the first simulated shadow). Faulkner and Kriv are combinable because they are in the same field of endeavor regarding virtual object interactions. 
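The decoupled object-versus-shadow movement mapped to Wan for claim 3 can be illustrated with a minimal sketch. The function name and the damping factor are hypothetical illustrations and appear in none of the cited references:

```python
# Hypothetical sketch: a user-driven move displaces the object by `delta`,
# while the simulated shadow is displaced by a damped amount, so the two
# degrees of positional change intentionally differ.
def move_object_and_shadow(object_pos, shadow_pos, delta, shadow_damping=0.5):
    """Return new (object_pos, shadow_pos) after a user-driven move.

    The shadow tracks the object but with a smaller degree of change,
    decoupling the shadow's displacement from the object's displacement.
    """
    new_object = tuple(p + d for p, d in zip(object_pos, delta))
    new_shadow = tuple(p + d * shadow_damping for p, d in zip(shadow_pos, delta))
    return new_object, new_shadow

obj, shadow = move_object_and_shadow(
    (0.0, 0.0, 0.0), (0.0, -1.0, 0.0), (2.0, 0.0, 0.0)
)
# Object displaces 2.0 units on the x-axis; the shadow displaces only 1.0 unit.
```

Under these assumptions, the degree of change of the object position (2.0) differs from the degree of change of the shadow position (1.0), which is the situation the rejection reads onto Wan's decoupled-shadow teaching.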
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner to include the graphical user interface user attention features of Kriv in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether a user satisfies attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment or not and modifying the user interface's appearance, providing features that allow for more efficient feedback for reducing complexity in manipulating virtual objects, applicable to improving virtual and augmented reality rendering systems such as those taught in Faulkner. Faulkner and Wan are also combinable because they are in the same field of endeavor regarding virtual object interactions. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner, in view of the graphical user interface user attention features of Kriv, to include the graphical user interface appearance features of Wan in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether a user satisfies attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment or not and modifying the user interface's appearance. Further incorporating the graphical user interface appearance features of Wan allows for a system in which, in response to changes in the appearance of the physical environment, the appearance of the computer-generated user interface element is updated at a first time based on a graphical composition of the appearance of one or more portions of the physical environment at different times prior to the first time, allowing for more intuitive and efficient user experiences, applicable to improving virtual and augmented reality rendering systems such as those taught in Faulkner and Kriv. In regards to claim 11. Faulkner in view of Kriv teach the method of claim 1. 
Faulkner and Kriv do not explicitly disclose but Wan teaches -wherein the first simulated shadow displayed at the first shadow position has a respective thickness, and the first simulated shadow displayed at the second shadow position has the respective thickness (Wan, para [0342]; Reference discloses the computer system detects an input interacting with the first user interface object (e.g., to move the first user interface object in simulated three-dimensional space); and, in response to detecting the input interacting with the first user interface object, changes an appearance of the shadow that corresponds to the first user interface object (e.g., to indicate a change in thickness, distance, or height of the first user interface object relative to the background)). Faulkner and Wan are also combinable because they are in the same field of endeavor regarding virtual object interactions. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner, in view of the graphical user interface user attention features of Kriv, to include the graphical user interface appearance features of Wan in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether a user satisfies attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment or not and modifying the 
user interface's appearance. Further incorporating the graphical user interface appearance features of Wan allows for a system in which, in response to changes in the appearance of the physical environment, the appearance of the computer-generated user interface element is updated at a first time based on a graphical composition of the appearance of one or more portions of the physical environment at different times prior to the first time, allowing for more intuitive and efficient user experiences, applicable to improving virtual and augmented reality rendering systems such as those taught in Faulkner and Kriv. In regards to claim 12. Faulkner in view of Kriv teach the method of claim 1. Faulkner and Kriv do not explicitly disclose but Wan teaches -including, in response to detecting the user input, changing an appearance of the first simulated shadow while maintaining display of the first user interface object with the first orientation and at the first object position (Wan, para [0342]; Reference discloses the computer system detects an input interacting with the first user interface object (e.g., to move the first user interface object in simulated three-dimensional space); and, in response to detecting the input interacting with the first user interface object, changes an appearance of the shadow that corresponds to the first user interface object (e.g., to indicate a change in thickness, distance, or height of the first user interface object relative to the background)). Faulkner and Wan are also combinable because they are in the same field of endeavor regarding virtual object interactions. 
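The claim 12 behavior mapped to Wan, changing only the simulated shadow's appearance while the object keeps its orientation and position, could be sketched as follows. The field names and the opacity formula are hypothetical illustrations, not drawn from any cited reference:

```python
from dataclasses import dataclass, replace

# Hypothetical sketch: an input dims the shadow (suggesting the object is
# "lifting" away from the background) while the object's own position and
# orientation are left untouched.
@dataclass(frozen=True)
class Scene:
    object_pos: tuple       # object position in the 3D environment
    object_orientation: float  # orientation in degrees
    shadow_opacity: float      # 0.0 (invisible) .. 1.0 (fully dark)

def on_input(scene, simulated_lift):
    """Change only the shadow's appearance in response to an input."""
    new_opacity = max(0.0, scene.shadow_opacity - 0.1 * simulated_lift)
    return replace(scene, shadow_opacity=new_opacity)

s0 = Scene(object_pos=(0.0, 1.0, 0.0), object_orientation=0.0, shadow_opacity=0.8)
s1 = on_input(s0, simulated_lift=3)
# The shadow's opacity changes; the object's position and orientation do not.
```

The frozen dataclass plus `dataclasses.replace` makes the invariant explicit: only the shadow field can differ between `s0` and `s1`.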
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner, in view of the graphical user interface user attention features of Kriv, to include the graphical user interface appearance features of Wan in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether a user satisfies attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment or not and modifying the user interface's appearance. Further incorporating the graphical user interface appearance features of Wan allows for a system in which, in response to changes in the appearance of the physical environment, the appearance of the computer-generated user interface element is updated at a first time based on a graphical composition of the appearance of one or more portions of the physical environment at different times prior to the first time, allowing for more intuitive and efficient user experiences, applicable to improving virtual and augmented reality rendering systems such as those taught in Faulkner and Kriv. In regards to claim 13. Faulkner in view of Kriv in further view of Wan teach the method of claim 12. 
Faulkner further discloses "including: detecting an end of the user input; and in response to detecting the end of the user input" (Faulkner, para [0174]; the reference discloses, for example, that the visual representation is a glowing ellipsoid with a first distribution of luminance values and a first distribution of color values across the different portions of the visual representation. The computer system, in accordance with the first set of values of the first display property, modifies the visual appearance of a first physical surface 7312 of physical object 7310 or its representation 7312′ in the three-dimensional environment, as well as the visual appearance of a first virtual surface of virtual object 7404 in the three-dimensional environment). Faulkner and Kriv do not explicitly disclose, but Wan teaches, "at least partially reversing" (Wan, para [0228]; the reference discloses that the transformation of icon 7060 in response to completion of the selection gesture at least partially reverses the transformation of icon 7060 in response to the initiation of the selection gesture (e.g., initial movement of the two fingers toward each other before making contact)). Faulkner and Wan are also combinable because they are in the same field of endeavor regarding virtual object interactions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner, in view of the graphical user interface user attention features of Kriv, to include the graphical user interface appearance features of Wan in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object, as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether or not a user satisfies an attention criterion with respect to a first user interface object displayed in a first view of a three-dimensional environment and modifying the user interface's appearance. Further incorporating the graphical user interface appearance features of Wan allows for a response to changes in the appearance of the physical environment, whereby the appearance of the computer-generated user interface element is updated at a first time based on a graphical composition of the appearance of one or more portions of the physical environment at different times prior to the first time, allowing for more intuitive and efficient user experiences, applicable to improving virtual and augmented reality rendering systems such as those taught in Faulkner and Kriv.

In regards to claim 15: Faulkner in view of Kriv teaches the method of claim 1.
Faulkner and Kriv do not explicitly disclose, but Wan teaches, "wherein: prior to detecting the user input directed to the first user interface object, displaying the first simulated shadow at the first shadow position includes: while attention of a user is not directed to the first user interface object, displaying the first simulated shadow with a first appearance; and while the attention of the user is directed to the first user interface object, displaying the first simulated shadow with a second appearance that is different from the first appearance" (Wan, para [0233] and [0236]; the reference at [0233] discloses, as shown in FIG. 7Q, that regions 7080-b, 7080-c, and 7080-d are flat and flush with the surface of user interface element 7080 (e.g., while user 7002 is not indicating readiness to interact with, such as by not directing attention to, any of regions 7080-b, 7080-c, and 7080-d)…The appearances of regions 7080-b, 7080-c, and 7080-d in FIG. 7Q are optionally default appearances (e.g., based on default settings or values for one or more visual properties, such as brightness or darkness, opacity or transparency, size, thickness, amount of specular reflection, degree of blurring, and/or degree of separation from user interface element 7080) (i.e., a first appearance). Para [0236] discloses, as shown in FIG. 7R, that in response to user 7002 directing attention to region 7080-b, the visual appearance of region 7080-b is changed (e.g., to an appearance indicative of selection of region 7080-b for further interaction): a thickness of region 7080-b is increased; an opacity of region 7080-b is decreased (such that region 7080-b becomes more transparent); and region 7080-b is lifted away from the surface of region 7080-a and away from the surface of user interface element 7080. This concept can be applied to shadowing, as discussed further in para [0236], which discloses that, accordingly, a shadow is displayed as being cast onto the surface of user interface element 7080 by the lifted region 7080-b (e.g., due to the increased thickness of region 7080-b and/or the separation of region 7080-b from the surface of user interface element 7080)). Faulkner and Wan are also combinable because they are in the same field of endeavor regarding virtual object interactions.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner, in view of the graphical user interface user attention features of Kriv, to include the graphical user interface appearance features of Wan in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object, as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether or not a user satisfies an attention criterion with respect to a first user interface object displayed in a first view of a three-dimensional environment and modifying the user interface's appearance.
Further incorporating the graphical user interface appearance features of Wan allows for a response to changes in the appearance of the physical environment, whereby the appearance of the computer-generated user interface element is updated at a first time based on a graphical composition of the appearance of one or more portions of the physical environment at different times prior to the first time, allowing for more intuitive and efficient user experiences, applicable to improving virtual and augmented reality rendering systems such as those taught in Faulkner and Kriv.

In regards to claim 16: Faulkner in view of Kriv teaches the method of claim 1. Faulkner and Kriv do not explicitly disclose, but Wan teaches, "including: prior to detecting the user input directed to the first user interface object, displaying the first simulated shadow at the first shadow position includes displaying the first simulated shadow with a respective appearance; and while detecting the user input directed to the first user interface object, changing one or more visual properties of the respective appearance of the first simulated shadow, including displaying the first simulated shadow with the changed one or more visual properties of the respective appearance as the first simulated shadow is moved from the first shadow position to the second shadow position" (Wan, para [0342]; the reference discloses that, in some embodiments, the appearance of the shadow indicates a simulated distance between the first user interface object (e.g., a first point in simulated three-dimensional space in or on the first user interface object) and the background content (e.g., a second point in simulated three-dimensional space in or on the background content), and while the first user interface object is a first distance from the background content, the shadow is displayed with a first appearance, whereas while the first user interface object is a second distance from the background content (e.g., in response to a user input to move the first user interface object in the simulated three-dimensional space and relative to the background content) the shadow is displayed with a second appearance that is different from the first appearance). Faulkner and Wan are also combinable because they are in the same field of endeavor regarding virtual object interactions.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the 3D environment interaction system of Faulkner, in view of the graphical user interface user attention features of Kriv, to include the graphical user interface appearance features of Wan in order to provide the user with a system generating a first visual effect at a second location of a three-dimensional scene, including modifying a visual appearance of a first portion of a first physical surface in the three-dimensional scene in accordance with the first value for the first display property that corresponds to the first portion of a first virtual object and replicating the action for a second portion of the object, as taught by Faulkner, while incorporating the graphical user interface user attention features of Kriv to allow for use of various configurations for detecting whether or not a user satisfies an attention criterion with respect to a first user interface object displayed in a first view of a three-dimensional environment and modifying the user interface's appearance.
Further incorporating the graphical user interface appearance features of Wan allows for a response to changes in the appearance of the physical environment, whereby the appearance of the computer-generated user interface element is updated at a first time based on a graphical composition of the appearance of one or more portions of the physical environment at different times prior to the first time, allowing for more intuitive and efficient user experiences, applicable to improving virtual and augmented reality rendering systems such as those taught in Faulkner and Kriv.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: see the Notice of References Cited (PTO-892). Any inquiry concerning this communication or earlier communications from the examiner should be directed to TERRELL M ROBINSON, whose telephone number is (571) 270-3526. The examiner can normally be reached 8am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, KENT CHANG, can be reached at 571-272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TERRELL M ROBINSON/
Primary Examiner, Art Unit 2614

Prosecution Timeline

May 28, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602852
DYNAMIC GRAPHIC EDITING METHOD AND DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12572196
MANAGING AN INDUSTRIAL ENVIRONMENT HAVING MACHINERY OPERATED BY REMOTE WORKERS AND PHYSICALLY PRESENT WORKERS
2y 5m to grant Granted Mar 10, 2026
Patent 12573124
PROGRESSIVE REAL-TIME DIFFUSION OF LAYERED CONTENT FILES WITH ANIMATED FEATURES
2y 5m to grant Granted Mar 10, 2026
Patent 12573111
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING METHOD FOR APPROPRIATE DISPLAY OF PRESENTER AND PRESENTATION ITEM
2y 5m to grant Granted Mar 10, 2026
Patent 12561904
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD FOR CORRECTING COMPUTER GRAPHICS IMAGE IN MIXED REALITY
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
90%
With Interview (+7.5%)
2y 3m
Median Time to Grant
Low
PTA Risk
Based on 486 resolved cases by this examiner. Grant probability derived from career allow rate.
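The panel's headline numbers follow directly from the examiner's career counts shown above. As a minimal sketch, assuming the tool simply divides grants by resolved cases and adds the observed interview lift (the exact formula is not documented here), the displayed 83% and 90% figures can be reproduced:

```python
# Reproduce the panel's figures from the examiner's career counts
# (403 granted / 486 resolved) and the +7.5% interview lift.
# Assumption: the tool rounds a simple career allow rate; the real
# model may weight cases differently.

granted, resolved = 403, 486
interview_lift = 7.5  # percentage points, per the panel

allow_rate = granted / resolved * 100          # career allow rate
with_interview = allow_rate + interview_lift   # lift applied additively

print(f"Grant probability: {allow_rate:.0f}%")    # 83%
print(f"With interview:    {with_interview:.0f}%")  # 90%
```

Note that the "90% With Interview" figure matches the baseline plus lift only after rounding (82.9 + 7.5 = 90.4), consistent with the panel presenting whole-percent values.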
