Prosecution Insights
Last updated: April 19, 2026
Application No. 18/473,155

METHODS FOR DISPLAYING OBJECTS RELATIVE TO VIRTUAL SURFACES

Final Rejection §103
Filed: Sep 22, 2023
Examiner: LIU, ZHENGXI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Apple Inc.
OA Round: 2 (Final)
Grant Probability: 64% (Moderate)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 225 granted / 354 resolved; +1.6% vs TC avg)
Interview Lift: +40.1% for resolved cases with interview vs without (a strong lift)
Avg Prosecution (typical timeline): 3y 4m
Career history: 385 total applications across all art units; 31 currently pending
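The headline allowance figure can be checked directly from the counts above (a quick arithmetic sketch; only the 225/354 counts come from this page):

```python
# Career allowance rate implied by the counts shown above.
granted = 225
resolved = 354
allow_rate = granted / resolved

print(f"{allow_rate:.1%}")  # 63.6%, which rounds to the displayed 64%
```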

Statute-Specific Performance

Statute   Rate     vs TC avg
§101      13.2%    -26.8%
§103      61.3%    +21.3%
§102      5.1%     -34.9%
§112      15.7%    -24.3%
Black line = Tech Center average estimate • Based on career data from 354 resolved cases
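Since each statute is reported as a rate plus a delta against the Tech Center average, the implied TC baseline can be recovered by subtraction (a small sketch using only the numbers listed above):

```python
# rate - delta = implied Tech Center average for each statute.
performance = {
    "§101": (13.2, -26.8),
    "§103": (61.3, +21.3),
    "§102": (5.1, -34.9),
    "§112": (15.7, -24.3),
}

for statute, (rate, delta) in performance.items():
    tc_avg = rate - delta
    print(f"{statute}: {rate}% ({delta:+}% vs TC avg of {tc_avg:.1f}%)")
# Each implied TC average works out to 40.0%, consistent with a single
# estimated baseline across statutes.
```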

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-29 are pending. Claims 1-3, 9-12, 21, 23, and 27-29 have been amended. No claim has been cancelled or added.

Response to Arguments

Applicant's arguments (Remarks pp. 12-13) are moot in view of the Examiner's new grounds of rejection based on new references that address the amended limitations.

Applicant did not respond to the objections regarding the claim limitation "while." The Examiner requested clarification. Without input from Applicant, and in order to make the record clear, the Examiner provides a BRI for the limitation, illustrated through the following example. Claim 1 recites:

    while displaying a virtual content container that includes one or more three-dimensional virtual objects in a three-dimensional environment, detecting . . . ,

[BRI on the record] The Examiner reads the limitation as:

    displaying a virtual content container that includes one or more three-dimensional virtual objects in a three-dimensional environment; while displaying [[a]]the virtual content container

Claim Objections

Claim 1 is objected to because of the following informalities: the claim recites "while," and the Examiner requests clarification from Applicant's representative. Claim 1 recites:

    while displaying a virtual content container . . ., detecting . . .; in response to detecting . . . : . . . displaying . . . .

Here, it is unclear whether "while," similar to "if," requires a contingent limitation. If a reference never displays a virtual container, is the "while" limitation satisfied? Claims 7, 12-13, 15-19, 21-22, 24, 26, and 28-29 also recite "while." These claims are also objected to, and Applicant's clarification is required.
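The Examiner's "while"/"if" question is essentially about contingent limitations: must the displaying step actually occur? The two readings can be contrasted in a purely illustrative boolean sketch (the function names and logic are ours, not the Office Action's):

```python
def contingent_reading(container_displayed: bool, input_detected: bool) -> bool:
    # "while" read like "if": the detecting step is only required when the
    # container happens to be displayed. A system that never displays a
    # virtual container satisfies the limitation vacuously.
    if not container_displayed:
        return True
    return input_detected

def examiner_bri(container_displayed: bool, input_detected: bool) -> bool:
    # The BRI on the record: displaying the container is an affirmative
    # step, and the input must be detected while it is displayed.
    return container_displayed and input_detected

# A reference that never displays a virtual container:
print(contingent_reading(False, False))  # True  (limitation vacuously met)
print(examiner_bri(False, False))        # False (affirmative step missing)
```

The divergence on the never-displays case is exactly the ambiguity the objection asks Applicant to resolve.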
For purposes of compact prosecution, the Examiner recommends the following amendments to Claim 1, as an example, to address the issue:

    1. A method comprising: at a computer system in communication with a display generation component and one or more input devices: displaying a virtual content container that includes one or more three-dimensional virtual objects in a three-dimensional environment; while displaying [[a]]the virtual content container in response to detecting the first input directed to the first three-dimensional virtual object: in accordance with a determination that the respective location in the first three-dimensional environment satisfies one or more criteria, displaying a virtual surface within the three-dimensional environment concurrently with the first three-dimensional virtual object, wherein the virtual surface was not displayed within the three-dimensional environment prior to detecting the first input directed to the first three-dimensional virtual object.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 9-12, 16, 22, 24-26, and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over Co-Spaces ("MANUAL FOR THE DESIGN OF DIDACTIC UNITS IN AUGMENTED REALITY USING THE COSPACES EDU APPLICATION") in view of West et al. (US 20190018498 A1), Cullum et al. (US 20180114370 A1), and Stauber et al. (US 20200111267 A1).

Regarding Claim 1, Co-Spaces teaches A method comprising: at a computer system in communication with a display generation component and one or more input devices: while displaying a virtual content container that includes one or more three-dimensional virtual objects ([media_image1.png] Co-Spaces states, "If you have clicked on the Catalogue category, a menu will open with the library of 3D elements that the CoSpaces EDU application has by default. These elements are grouped in different subcategories to facilitate your search: Characters, Animals, Dwellings, Nature, Transport, Articles, Instruction, and Special." Co-Spaces p. 45.
The claimed "virtual content container" is mapped to the menu interface containing libraries of 3D objects.), detecting, via the one or more input devices, a first input directed to a first three-dimensional virtual object of the one or more three-dimensional virtual objects, wherein the first input corresponds to a request to move the first three-dimensional virtual object out of the virtual content container and to a respective location in the three-dimensional environment (Co-Spaces p. 48: [media_image2.png] "Click on the element you want to place in the plane, and without releasing, drag it to the space to place it in the desired part of the plane."); and in response to detecting the first input directed to the first three-dimensional virtual object: in accordance with a determination that the respective location in the first three-dimensional environment satisfies one or more criteria (the claimed "one or more criteria" is mapped to criteria that include (i) the placement location is outside of the virtual content container, and/or (ii) the placement location is on a supporting surface, e.g., a ground/floor plane; "Click on the element you want to place in the plane, and without releasing, drag it to the space to place it in the desired part of the plane." Co-Spaces p. 48.), displaying a virtual surface within the three-dimensional environment concurrently with the first three-dimensional virtual object (Co-Spaces p. 48's figure shows that a character has been placed in a 3D scene.), wherein the virtual surface was not displayed within the three-dimensional environment prior to detecting the first input directed to the first three-dimensional virtual object ([media_image3.png] As shown in the figure (p. 48), any surface of the bounding box around the inserted character could be the "virtual surface" added to the scene with the addition of the character.
The blue circle may also correspond to the added "virtual surface." The bounding box does not exist in the scene before the insertion of the virtual character. Compare figures on pp. 45, 48.).

Co-Spaces does not explicitly disclose: in accordance with satisfying the one or more criteria, displaying the virtual surface; or displaying the virtual content container in a three-dimensional environment.

West teaches the virtual content container in a three-dimensional environment ([media_image4.png] "FIG. 5A shows the UI element 408 before the transition, whereby the UI element 408 is anchored in world space and has a rectangular shape (e.g., in accordance with the display style instructions related to the anchored position for the UI element)." West ¶ 37.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine West's in-the-world interface element with CoSpaces. One of ordinary skill in the art would be motivated to provide a more immersive and convenient experience for the user when the interface elements are also part of the virtual, augmented, or mixed world. See West ¶ 35.

CoSpaces in view of West does not explicitly disclose: in accordance with satisfying the one or more criteria, displaying the virtual surface.

Cullum teaches: in accordance with satisfying the one or more criteria, grouping the first three-dimensional virtual object and the second three-dimensional virtual object to manipulate as a group of objects ("For example, an object may be added to a group when the object is moved within a threshold distance of an existing group. Similarly, a group can be formed when two objects are moved next to each other." Cullum ¶ 21. "In order to identify group members, a recursive process can evaluate objects in the 3-D space according to a set of criteria. The criteria can group objects that touch or are within a threshold distance.
The threshold distance can be absolute or relative to an object's size. An absolute threshold could be measured along the closest distance between the exterior of two objects. Any suitable unit of measure within the 3-D space could be used, for example pixels." Cullum ¶ 21. "The control can automatically group objects together for common manipulation." Cullum Abstract. After CoSpaces in view of West is combined with Cullum, when the first three-dimensional virtual object based on CoSpaces in view of West ([media_image3.png]) is placed close enough to another object, these objects would become a group of objects to be manipulated as a single object.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Cullum's grouping of objects based on the distance between objects with CoSpaces in view of West. One of ordinary skill in the art would be motivated to conveniently manipulate a group of objects. "The control can automatically group objects together for common manipulation." Cullum Abstract.

CoSpaces in view of West and Cullum does not explicitly disclose displaying the virtual surface in response to grouping the first three-dimensional virtual object and the second three-dimensional virtual object.

Stauber teaches displaying the virtual surface in response to grouping the first three-dimensional virtual object and the second three-dimensional virtual object ([media_image5.png] Here, the expanded bounding box for the group of objects is similar to CoSpaces' bounding box ([media_image3.png]), where any surface of the bounding box around the inserted character (the group of objects after the combination of Stauber) could be the "virtual surface" added to the scene with the addition of the character.
The blue circle may also correspond to the added "virtual surface," and the blue circle could be expanded to encircle the group of objects after the combination of Stauber.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Stauber's composite bounding box with CoSpaces in view of West and Cullum. One of ordinary skill in the art would be motivated to conveniently indicate that a group of objects has been selected and/or to clearly indicate the members of a group visually.

Claim 28 is substantially similar to Claim 1. Claim 1's rejection analyses are applied to Claim 28. In addition, Claim 28 recites "A computer system that is in communication with a display generation component and one or more input devices, the computer system comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: . . ." (both CoSpaces and West disclose applications/inventions related to computer graphics).

Claim 29 is substantially similar to Claim 1. Claim 1's rejection analyses are applied to Claim 29. In addition, Claim 29 recites "A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, cause the computer system to perform a method comprising: . . ." (both CoSpaces and West disclose applications/inventions related to computer graphics).
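Cullum's ¶ 21 criteria, as quoted in the Claim 1 analysis, amount to clustering objects whose pairwise distance falls within a threshold (transitively, like the recursive process Cullum describes). A minimal sketch; the point coordinates and threshold value here are hypothetical, not taken from Cullum:

```python
from itertools import combinations

def group_objects(positions, threshold):
    """Union objects whose pairwise Euclidean distance is within the
    threshold, so that touching/nearby objects end up in one group."""
    parent = list(range(len(positions)))

    def find(i):
        # Find the representative of i's group, with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a, b in combinations(range(len(positions)), 2):
        dist = sum((pa - pb) ** 2 for pa, pb in zip(positions[a], positions[b])) ** 0.5
        if dist <= threshold:
            parent[find(a)] = find(b)  # merge the two groups

    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Objects 0 and 1 sit within the threshold of each other; object 2 is far away.
print(group_objects([(0, 0, 0), (1, 0, 0), (10, 10, 10)], threshold=2.0))
# → [[0, 1], [2]]
```

Under the combination as articulated above, a grouping event like this is what triggers Stauber's expanded composite bounding box (the mapped "virtual surface").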
Regarding Claim 3, Co-Spaces further teaches The method of claim 1, wherein the one or more criteria include a criterion that is satisfied when the respective location is outside of the virtual content container ([media_image1.png] "Click on the element you want to place in the plane, and without releasing, drag it to the space to place it in the desired part of the plane." Co-Spaces p. 48. The claimed "virtual content container" is mapped to the menu interface containing libraries of 3D objects. The respective location in the plane in the virtual space is outside of the virtual content container, as shown in the figure.).

Regarding Claim 4, Co-Spaces further teaches The method of claim 1, wherein the one or more criteria include a criterion that is not satisfied when the respective location is located within the virtual content container ("Click on the element you want to place in the plane, and without releasing, drag it to the space to place it in the desired part of the plane." Co-Spaces p. 48. The desired part of the plane is located outside of the virtual content container.).

Regarding Claim 5, Co-Spaces further teaches The method of claim 1. Co-Spaces does not explicitly disclose wherein the one or more criteria include a criterion that is not satisfied if the respective location includes a user interface of a respective application other than an application associated with the virtual content container (Co-Spaces p. 48: [media_image2.png] "Click on the element you want to place in the plane, and without releasing, drag it to the space to place it in the desired part of the plane." Co-Spaces p. 48. The claimed "virtual content container" is mapped to [media_image6.png]. The claimed "user interface of a respective application" is mapped to [media_image7.png] or just [media_image8.png].).
Regarding Claim 9, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1, wherein the virtual content container is located at a first location within the three-dimensional environment (Co-Spaces p. 48: [media_image2.png]; after the combination with West, the container in the Co-Spaces figure has a location within the three-dimensional environment), in response to detecting the first input directed to the first three-dimensional virtual object and in accordance with the determination that the respective location of the three-dimensional environment satisfies the one or more criteria (a virtual 3D character is placed at the "respective location" in the 3D environment; the claimed "one or more criteria" is mapped to criteria that include (i) the placement location is outside of the virtual content container, and/or (ii) the placement location is on a supporting surface, e.g., a ground/floor plane; "Click on the element you want to place in the plane, and without releasing, drag it to the space to place it in the desired part of the plane." Co-Spaces p. 48), the virtual surface has a second location within the three-dimensional environment ([media_image3.png] As shown in the figure, any surface of the bounding box around the inserted character could be the "virtual surface" added to the scene with the addition of the character. The blue circle may also correspond to the added "virtual surface." Those surfaces are placed at the "second location."), wherein a spatial relationship between the virtual surface and the virtual content container is a predefined spatial relationship ("Click on the element you want to place in the plane, and without releasing, drag it to the space to place it in the desired part of the plane." Co-Spaces p. 48. As shown in the figure above, the "spatial relationship" is that the virtual surface is outside of the container.).
Regarding Claim 10, CoSpaces further teaches The method of claim 1, wherein before detecting the first input, the virtual content container includes the first three-dimensional virtual object and a third virtual object ([media_image1.png] Co-Spaces states, "If you have clicked on the Catalogue category, a menu will open with the library of 3D elements that the CoSpaces EDU application has by default. These elements are grouped in different subcategories to facilitate your search: Characters, Animals, Dwellings, Nature, Transport, Articles, Instruction, and Special." Co-Spaces p. 45.).

Regarding Claim 11, Co-Spaces further teaches The method of claim 10, wherein the first three-dimensional virtual object is a first type of virtual object, and the third virtual object is a second type of virtual object, different from the first type of virtual object (Co-Spaces states, "If you have clicked on the Catalogue category, a menu will open with the library of 3D elements that the CoSpaces EDU application has by default. These elements are grouped in different subcategories to facilitate your search: Characters, Animals, Dwellings, Nature, Transport, Articles, Instruction, and Special." Co-Spaces p. 45.).

Regarding Claim 12, Co-Spaces further teaches The method of claim 1, the method further comprising: while concurrently displaying the virtual surface and the first three-dimensional virtual object at the respective location (Co-Spaces p. 48), detecting, via the one or more input devices, a second input directed to a third virtual object in the three-dimensional environment, wherein the second input corresponds to a request to move the third virtual object from a first location to the respective location in the three-dimensional environment (Co-Spaces p. 48); and in response to detecting the second input directed to the third virtual object, displaying the virtual surface concurrently with the first three-dimensional virtual object and the third virtual object at the respective location ("Click on the element you want to place in the plane, and without releasing, drag it to the space to place it in the desired part of the plane." Co-Spaces p. 48. A user may repeat the process to place multiple virtual objects, including the "third virtual object," into the scene. The same "respective location in the three-dimensional environment" may correspond to the floor plane in CoSpaces' figure.).

Regarding Claim 16, CoSpaces in view of West, Cullum, and Stauber teaches The method of claim 1, the method further comprising: while concurrently displaying the first three-dimensional virtual object and the virtual surface at the respective location, detecting, via the one or more input devices, a second input directed to the virtual content container, wherein the second input corresponds to a request to move the virtual content container away from a current location in the three-dimensional environment (West discloses moving an interface object, e.g., a menu, to a new location in the 3D environment, stating: "In accordance with an embodiment, FIG. 4C shows the method 300 outlined in FIG. 3 at a point during process 306 after process 316 has determined that the HMD velocity is less than the threshold velocity, whereby the user has slowed (or stopped) the HMD movement below the threshold velocity and the UI element 408 is anchored (e.g., by the MR-UI module 210) to a new location and displayed again in the view frustum to the user." [media_image9.png] [media_image10.png]); and in response to detecting the second input directed to the virtual content container, moving the virtual content container away from the current location in the three-dimensional environment (West figs. 4A, 4C) in accordance with the second input while maintaining the virtual surface and the first three-dimensional virtual object at the respective location (the world space remains the same, and the virtual surface and the virtual character remain at the same location in the world space, because turning one's head to look at the world does not change the locations of objects in the world). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine West's placement strategy for the user interface object with CoSpaces. One of ordinary skill in the art would be motivated to keep the menu objects in sight on the HMD display while a person turns his/her head.

Regarding Claim 22, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1, wherein while displaying a plurality of virtual objects concurrently with the virtual surface: in accordance with a determination that an orientation of the virtual surface is a first orientation relative to the three-dimensional environment (Co-Spaces p. 48: [media_image3.png] Here, the bottom surface of the bounding box, mapped to the "virtual surface," is at the first orientation.
The "plurality of virtual objects" includes the virtual character's head, legs, torso, and arms.), orientations of the plurality of virtual objects relative to the three-dimensional environment are a first set of orientations based on the first orientation (the virtual character's head, legs, torso, and arms are based on the orientation of the surface on which the character stands); and in accordance with a determination that the orientation of the virtual surface is a second orientation, different from the first orientation, relative to the three-dimensional environment, the orientations of the plurality of virtual objects relative to the three-dimensional environment are a second set of orientations, different from the first set of orientations, based on the second orientation (Co-Spaces p. 49: [media_image11.png] Here, the bounding box, as well as the virtual character, may rotate in the 3D virtual environment, and the virtual character's head, legs, torso, and arms are rotated accordingly.).

Regarding Claim 24, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1, further comprising: while displaying the virtual surface in the three-dimensional environment, wherein a first portion of the virtual surface has a first orientation relative to a viewpoint of a user of the computer system and a first spatial arrangement relative to the three-dimensional environment (CoSpaces p. 48: [media_image3.png] Here, the claimed "first portion of the virtual surface" could be the center of the bottom square of the bounding box.), detecting, via the one or more input devices, a second input corresponding to a request to modify a spatial arrangement of the virtual surface relative to the three-dimensional environment from the first spatial arrangement to a second spatial arrangement, different from the first spatial arrangement (Co-Spaces p. 49: [media_image11.png] The figure shows that the virtual character can be rotated freely. If the character undergoes a simple rotation while standing upright (e.g., turns left 90 degrees), the "spatial arrangement" of the bottom square of the bounding box relative to the three-dimensional environment is changed.); and in response to detecting the second input, moving the virtual surface in the three-dimensional environment to have the second spatial arrangement relative to the three-dimensional environment while maintaining the first portion of the virtual surface having the first orientation relative to the viewpoint of the user (meanwhile, the "first portion," mapped to the center of the bottom square of the bounding box, remains the same, thereby maintaining the orientation relative to the viewpoint of the user).

Regarding Claim 25, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 24, wherein moving the virtual surface in the three-dimensional environment to have the second spatial arrangement relative to the three-dimensional environment includes maintaining a relative orientation of a second portion of the virtual surface relative to a frame of reference, other than the viewpoint of the user, in the three-dimensional environment (Co-Spaces p. 49: [media_image11.png] The figure shows that the virtual character can be rotated freely. If the character undergoes a simple rotation while standing upright (e.g., turns left 90 degrees), the "spatial arrangement" of the bottom square of the bounding box relative to the three-dimensional environment is changed. The "frame of reference" could be mapped to the ground plane. The center point of the bottom square of the bounding box remains at the same location in the ground plane, thereby maintaining a relative orientation with respect to the ground plane.).
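The Claim 24/25 mappings above rest on a simple geometric fact: a rotation about a vertical axis through the center of the bounding box's bottom square moves the corners (changing the square's "spatial arrangement") while leaving that center point fixed. A small sketch with hypothetical coordinates:

```python
import math

def rotate_y(point, degrees, center=(0.0, 0.0, 0.0)):
    """Rotate a 3D point about a vertical (y) axis through `center`."""
    theta = math.radians(degrees)
    x, y, z = (p - c for p, c in zip(point, center))
    xr = x * math.cos(theta) + z * math.sin(theta)
    zr = -x * math.sin(theta) + z * math.cos(theta)
    return (xr + center[0], y + center[1], zr + center[2])

# Corners of the bounding box's bottom square, centered on the origin.
corners = [(1, 0, 1), (1, 0, -1), (-1, 0, 1), (-1, 0, -1)]

# The corners move (the square's spatial arrangement changes) ...
print([rotate_y(c, 90) for c in corners])

# ... but the center of the bottom square stays fixed on the ground plane.
print(rotate_y((0, 0, 0), 90))
```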
Regarding Claim 26, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1, further comprising while detecting the first input, displaying, in the virtual content container, a representation of the first three-dimensional virtual object ([media_image2.png]).

Claims 2, 6-7, 13, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Co-Spaces in view of West, Cullum, and Stauber as applied to Claim 1, and in further view of Horita et al. (US 20220157029 A1).

Regarding Claim 2, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1. However, Co-Spaces in view of West, Cullum, and Stauber does not explicitly disclose the method further comprising: in response to detecting the first input directed to the first three-dimensional virtual object: in accordance with a determination that the respective location in the three-dimensional environment does not satisfy one or more criteria, displaying the first three-dimensional virtual object at the respective location without displaying the virtual surface within the three-dimensional environment.

Horita teaches the method further comprising: in response to detecting the first input directed to the first three-dimensional virtual object: in accordance with a determination that the respective location in the three-dimensional environment does not satisfy one or more criteria, displaying the first three-dimensional virtual object at the respective location without displaying the virtual surface within the three-dimensional environment (

(a) Horita figs. 4-7: [media_image12.png] [media_image13.png] [media_image14.png] [media_image15.png] Horita teaches a display like that in fig. 4 when a plane in a scene has not been identified for the virtual object to land on, stating: "However, in this non-limiting example, until the plane has been detected in the captured image, the player object PO is displayed on the display screen of the display unit 35 with the player object PO overlaid on the captured image. For example, as illustrated in FIG. 4, the player object is overlaid on the captured image and displayed on the display unit 35 such that the player object PO is displayed at a center of the display screen of the display unit 35, facing front." Horita ¶ 63. In fig. 4, the virtual object (a dog) appears to be supported by a virtual square surface while the virtual object is in mid air, when a surface in the scene has not yet been identified for the virtual object to land on. Horita teaches that the virtual square surface is not displayed once the plane on which to land the virtual object is detected and the position of the virtual object is fixated, stating: "The player object PO is displayed in different display forms before and after position fixation in the captured image. For example, comparison of FIGS. 4-6 with FIG. 7 clearly indicates that before position fixation, the player object PO is displayed in the display form in which a label image M is added. Meanwhile, after position fixation, the player object PO is displayed in the display form in which the label image M is not added. Thus, the player object PO is displayed in the different display forms before and after position fixation, which can notify the user of whether or not the player object PO is in the position-fixed state." Horita ¶ 71. When Co-Spaces in view of West is combined with Horita, the claimed "one or more criteria" is mapped to whether a plane cannot be detected on which to land the virtual object; the "one or more criteria" is not satisfied when a plane is detected on which to land the virtual object.
For example, when a virtual dog is moved to a mid-air location in a scene, it could be in (i) mid air over a floor; (ii) mid air over a tree or other plants; or (iii) mid air over a bottomless abyss (e.g., the areas between floating mountains in scenes of Avatar I). Depending on the virtual dog's respective location with respect to the determined environment, the virtual dog may land in case (i) and may not land in cases (ii) and (iii). When the virtual dog lands, the plane is not shown; when the virtual dog does not land, the plane is shown. When the plane on which to land the virtual object is detected and the virtual object is fixated on the detected plane, the square surface as shown in fig. 4 is not displayed.

(b) An alternative mapping could be provided in light of the changed ground of rejection for Claim 1. The virtual surface has been mapped as follows: any surface of the bounding box around the inserted character (the group of objects after the combination of Stauber) could be the "virtual surface" added to the scene with the addition of the character. Stauber: [media_image5.png] The blue circle may also correspond to the added "virtual surface," and the blue circle could be expanded to encircle the group of objects after the combination of Stauber. [media_image3.png] When the respective location is not close enough to another object, and a group of objects is not formed, the virtual surface related to an expanded group bounding box or expanded blue circle will not be displayed as a result.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Horita's in-air labeled surface with CoSpaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would be motivated to clearly identify virtual objects that are in the air and/or the bottom surface of the virtual object.
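The floor/tree/abyss example in the Claim 2 analysis reduces to a simple predicate: the placeholder plane (Horita's label image M) stays visible exactly when no suitable landing plane has been detected. A toy sketch; the surface category labels are illustrative, not Horita's:

```python
def placeholder_plane_shown(detected_surface):
    """True when the in-air label/plane should be displayed: no suitable
    landing plane has been detected beneath the object, so its position
    is not yet fixed (per Horita ¶¶ 63, 71 as characterized above)."""
    landable = detected_surface == "floor"  # only a floor plane supports landing here
    return not landable

# (i) over a floor, (ii) over a tree, (iii) over a bottomless abyss (None):
for surface in ("floor", "tree", None):
    print(surface, placeholder_plane_shown(surface))
```

Only the floor case lands the object and hides the plane; the other two keep the placeholder visible, matching the criterion as mapped.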
After CoSpaces in view of West, Cullum, and Stauber is combined with Horita, the virtual square shown in Horita Fig. 4 is similar to the bottom surface of CoSpaces’ bounding box. Regarding Claim 6, CoSpaces in view of West, Cullum, and Stauber teaches The method of claim 1, wherein an orientation of the virtual surface is automatically selected by the computer system to be within a threshold of being parallel to an orientation corresponding to a floor (Co-Spaces p. 48: [Co-Spaces p. 48 figure, media_image3]. Here, the virtual surface (the bottom surface of the bounding box, or the blue circle) is placed on the floor and is therefore within a threshold of being parallel to an orientation corresponding to a floor.). CoSpaces in view of West, Cullum, and Stauber does not explicitly disclose the floor of a physical environment of a user of the computer system. Horita teaches the floor of a physical environment of a user of the computer system (“In FIG. 4, in a game process of this non-limiting example, an overlay image in which an image (virtual space image) of a player object PO existing in a three-dimensional virtual space is overlaid on an image of the real world currently captured by the imaging unit 38, is displayed on the display screen of the display unit 35.” Horita ¶ 63. [Horita Fig. 4, media_image12]. Here, the floor on which a virtual object lands is the floor of a physical environment.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Horita’s augmented images of real scenes with CoSpaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would be motivated to create augmented-reality images that combine virtual images and real images, which may be entertaining/informative to a viewer.
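The Claim 6 limitation of being "within a threshold of being parallel" to the floor can be illustrated as a comparison of unit normals. A hypothetical sketch; the 5° threshold and all names are illustrative values of mine, not taken from the record.

```python
import math


def within_parallel_threshold(surface_normal, floor_normal, threshold_deg=5.0):
    """Check whether a virtual surface is within a small angular threshold of
    being parallel to the floor, by comparing the angle between their normals.
    threshold_deg is an illustrative value, not from the record.
    """
    dot = sum(a * b for a, b in zip(surface_normal, floor_normal))
    norm = math.sqrt(sum(a * a for a in surface_normal)) * \
        math.sqrt(sum(b * b for b in floor_normal))
    # Clamp to guard against floating-point drift before acos.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= threshold_deg
```

A surface resting flat on the floor shares the floor's normal (angle 0°) and trivially passes; a vertical surface (normals 90° apart) does not.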
Regarding Claim 7, Co-Spaces in view of West, Cullum, Stauber, and Horita teaches The method of claim 1, further comprising: while concurrently displaying the virtual surface and the first three-dimensional virtual object, displaying, on the virtual surface, a virtual shadow of the first three-dimensional virtual object at a location on the virtual surface corresponding to a position of the first three-dimensional virtual object relative to the virtual surface ([Horita Fig. 6, media_image16]. “As illustrated in FIG. 6, when the player object PO is displayed on the display unit 35, the player object PO can be disposed on the shadow object S, i.e., the virtual reference plane set in the virtual space image, and can be overlaid and displayed on the captured image.” Horita ¶ 65. The claimed “virtual shadow” is mapped to the disclosed “shadow object S.” The claimed “virtual surface” is mapped to the surface that “M” represents.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Horita’s in-air labeled surface with CoSpaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would be motivated to strengthen the 3D effect of virtual objects in a scene, so that the 3D effect appears stronger and potentially more appealing. Regarding Claim 13, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1, the method further comprising: while concurrently displaying the first three-dimensional virtual object and the virtual surface at the respective location, detecting, via the one or more input devices, a second input directed to the first three-dimensional virtual object, wherein the second input corresponds to a request to move the first three-dimensional virtual object away from the respective location (“Click once on the element to display the menu shown in the image.
You can move the element by pressing the button indicated in the image and dragging the element, without releasing, along the x, y, and z axes with your finger.” CoSpaces p. 49. [CoSpaces p. 49 figure, media_image17].); and Co-Spaces in view of West, Cullum, and Stauber does not explicitly disclose in response to detecting the second input directed to the first three-dimensional virtual object: in accordance with a determination that one or more second criteria are satisfied, moving the first three-dimensional virtual object away from the respective location and ceasing display of the virtual surface in the three-dimensional environment. Horita teaches in response to detecting the second input directed to the first three-dimensional virtual object: in accordance with a determination that one or more second criteria are satisfied, moving the first three-dimensional virtual object away from the respective location and ceasing display of the virtual surface in the three-dimensional environment ((a) [Horita Figs. 4-7, media_image12 through media_image15]. Horita teaches that the virtual square surface is displayed when the system has not found a plane for the virtual object to land on (Fig. 4), and that the same virtual square surface is removed when the plane is found and the position is fixed (Fig. 7), stating “The player object PO is displayed in different display forms before and after position fixation in the captured image. For example, comparison of FIGS. 4-6 with FIG. 7 clearly indicates that before position fixation, the player object PO is displayed in the display form in which a label image M is added. Meanwhile, after position fixation, the player object PO is displayed in the display form in which the label image M is not added.
Thus, the player object PO is displayed in the different display forms before and after position fixation, which can notify the user of whether or not the player object PO is in the position-fixed state.” Horita ¶ 71. Horita teaches a display like that of Fig. 4 when a plane in a scene has not been identified for the virtual object (the dog) to land on, stating “However, in this non-limiting example, until the plane has been detected in the captured image, the player object PO is displayed on the display screen of the display unit 35 with the player object PO overlaid on the captured image. For example, as illustrated in FIG. 4, the player object is overlaid on the captured image and displayed on the display unit 35 such that the player object PO is displayed at a center of the display screen of the display unit 35, facing front.” Horita ¶ 63. After Co-Spaces in view of West, Cullum, and Stauber is combined with Horita, when CoSpaces p. 48’s virtual character is placed in the air and has no place to land, an image similar to Horita Fig. 4 is displayed. Afterwards, a user may move the virtual character to a new location according to Co-Spaces p. 49; when the virtual character is placed at the new location and finds a plane to land on, an image similar to Horita Fig. 7 is displayed. (b) An alternative mapping could be provided in light of the changed ground of rejection for Claim 1. The virtual surface has been mapped to any surface of the bounding box around the inserted character; the bounding box around the group of objects formed after the combination with Stauber could likewise be the “virtual surface” added to the scene with the addition of the character. Stauber: [Stauber figure, media_image5]. The blue circle may also correspond to the added “virtual surface,” and the blue circle could be expanded to encircle the group of objects after the combination with Stauber.
[Co-Spaces p. 48 figure, media_image3]. After moving the first three-dimensional virtual object away from the respective location, which causes the breakup of a group, the virtual surface related to an expanded group bounding box or expanded blue circle ceases to be displayed.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Horita’s in-air labeled surface with CoSpaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would be motivated to clearly identify virtual objects that are in the air and/or the bottom surface of the virtual object. Regarding Claim 21, Co-Spaces in view of West, Cullum, Stauber, and Horita teaches The method of claim 1, wherein the first three-dimensional virtual object is a first type of virtual object, and the virtual content container includes a third virtual object that is a second type of virtual object, different from the first type of virtual object (Co-Spaces states, “If you have clicked on the Catalogue category, a menu will open with the library of 3D elements that the CoSpaces EDU application has by default. These elements are grouped in different subcategories to facilitate your search: Characters, Animals, Dwellings, Nature, Transport, Articles, Instruction, and Special.” Co-Spaces p. 45.), the method further comprising: while displaying the virtual content container that includes the third virtual object in the three-dimensional environment, detecting, via the one or more input devices, a second input directed to the third virtual object, wherein the second input corresponds to a request to move the third virtual object out of the virtual content container and to a second respective location in the three-dimensional environment (CoSpaces p.
48: [CoSpaces p. 48 figure, media_image2]. The same selection and placement process applies to other types of objects, e.g., a dog.); and in response to detecting the second input: in accordance with a determination that the second respective location in the three-dimensional environment satisfies one or more criteria, displaying the third virtual object at the second respective location without displaying a virtual surface within the three-dimensional environment ([Horita Fig. 7, media_image15]. Horita teaches that the virtual square surface is not displayed when the plane on which the virtual object is to land is found and the position is fixed (Fig. 7), stating “The player object PO is displayed in different display forms before and after position fixation in the captured image. For example, comparison of FIGS. 4-6 with FIG. 7 clearly indicates that before position fixation, the player object PO is displayed in the display form in which a label image M is added. Meanwhile, after position fixation, the player object PO is displayed in the display form in which the label image M is not added. Thus, the player object PO is displayed in the different display forms before and after position fixation, which can notify the user of whether or not the player object PO is in the position-fixed state.” Horita ¶ 71.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Horita’s in-air labeled surface with CoSpaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would be motivated to clearly identify virtual objects that are not in the air and/or whose position is fixed. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Co-Spaces in view of West, Cullum, and Stauber as applied to Claim 1, in further view of Feit et al. (US 20160098972 A1).
Regarding Claim 8, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1. However, Co-Spaces in view of West, Cullum, and Stauber does not explicitly disclose wherein displaying the virtual surface within the three-dimensional environment in response to detecting the first input directed to the first three-dimensional virtual object includes: displaying the virtual surface at a first level of visual prominence; and after displaying the virtual surface at the first level of visual prominence, displaying the virtual surface at a second level of visual prominence, greater than the first level of visual prominence. Feit teaches wherein displaying the virtual surface within the three-dimensional environment in response to detecting the first input directed to the first three-dimensional virtual object includes: displaying the virtual surface at a first level of visual prominence; and after displaying the virtual surface at the first level of visual prominence, displaying the virtual surface at a second level of visual prominence, greater than the first level of visual prominence (“. . . Transitions, such as fade-ins, fly-ins, or other animations will be described in greater detail herein.” Feit ¶ 47. “Similarly, the rendering component 170 may fade-in one or more graphic elements (e.g., controls 522 and 524) from 0% opacity to 30% opacity in 20 frames, providing a 1.5% change in opacity per frame, for example.” Feit ¶ 71. The claimed “first level of visual prominence” is mapped to the lower opacity. The claimed “second level of visual prominence” is mapped to the higher opacity.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Feit’s fade-in with CoSpaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would be motivated to smooth the visual transition and/or to bring attention to a virtual object, which helps to maintain a user’s visual comfort.
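Feit's fade-in example is a fixed-step opacity ramp: 0% to 30% over 20 frames works out to a 1.5% change per frame. A short sketch reproducing that arithmetic; the function name is mine, not Feit's.

```python
def fade_in_opacities(start=0.0, end=30.0, frames=20):
    """Per-frame opacity schedule matching Feit's example (Feit ¶ 71):
    0% to 30% opacity over 20 frames, i.e. (30 - 0) / 20 = 1.5% per frame.
    Returns the opacity after each of the `frames` steps.
    """
    step = (end - start) / frames  # 1.5 percent per frame for the defaults
    return [start + step * i for i in range(1, frames + 1)]


schedule = fade_in_opacities()
print(schedule[0], schedule[-1])  # first frame at 1.5%, last frame at 30.0%
```

The first listed opacity (1.5%) is the mapped "first level of visual prominence"; any later, higher value such as the final 30% is the "second level."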
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Co-Spaces in view of West, Cullum, and Stauber as applied to Claim 1, in further view of Bennett et al. (US 20190278432 A1). Regarding Claim 14, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1. Co-Spaces in view of West, Cullum, and Stauber does not explicitly disclose wherein the virtual surface is world-locked. Bennett teaches wherein the virtual surface is world-locked (“While in the various modes, additional rules may be implemented to control whether or not the user's head movement will affect the view frame. For example, a head lock command and a world lock command may be provided. In response to a head lock command, the system can maintain the view frame regardless of head position of the user; and in response to a world lock command, the system can maintain the container space within the view frame while permitting head motion tracking within the container space. Conceptually, a virtual object may be locked to a reference location in the real-world environment such that the location of the virtual object remains constant relative to the real-world environment regardless of the position or angle from which it is viewed. When a head lock command is instantiated, a virtual object may be locked to the position and direction of a user's head such that the location of the virtual object in a user's field of view appears fixed regardless of a direction in which the user faces.” Bennett ¶ 28.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bennett’s world-locked virtual object with CoSpaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would recognize that the virtual objects can remain constant relative to the virtual environment, as is expected in a physical environment. The position of a chair in a study should not change merely because the viewpoint changes.
This would provide a more relatable immersive experience. Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Co-Spaces in view of West, Cullum, and Stauber as applied to Claim 1, in further view of Energin et al. (US 20180122043 A1). Regarding Claim 15, CoSpaces in view of West, Cullum, and Stauber teaches The method of claim 1, the method further comprising: while concurrently displaying the first three-dimensional virtual object and the virtual surface at the respective location, detecting, via the one or more input devices, a second input directed to the virtual content container, wherein the second input corresponds to a request to move the virtual content container away from a current location in the three-dimensional environment (West discloses moving an interface object, e.g., a menu, to a new location in the 3D environment, stating “In accordance with an embodiment, FIG. 4C shows the method 300 outlined in FIG. 3 at a point during process 306 after process 316 has determined that the HMD velocity is less than the threshold velocity, whereby the user has slowed (or stopped) the HMD movement below the threshold velocity and the UI element 408 is anchored (e.g., by the MR-UI module 210) to a new location and displayed again in the view frustum to the user.” [West figures, media_image9 and media_image10].); and in response to detecting the second input directed to the virtual content container: moving the virtual content container away from the current location in the three-dimensional environment (West figs. 4A, C) in accordance with the second input. However, Co-Spaces in view of West, Cullum, and Stauber does not explicitly disclose moving the virtual surface and the first three-dimensional virtual object away from the respective location in accordance with the second input when a user turns his/her head and, with it, the head-mounted display.
Energin teaches moving the virtual surface and the first three-dimensional virtual object away from the respective location in accordance with the second input when a user turns his/her head and, with it, the head-mounted display (“A body-locked augmented-reality object may appear to move along with a perspective of the user as a pose (e.g., six degrees of freedom (DOF): x, y, z, yaw, pitch, roll) of the virtual-reality computing system 800 changes. As such, a body-locked, augmented-reality object may appear to occupy the same portion of the near-eye display 802 and may appear to be at the same distance from the user, even as the user moves in the physical space.” Energin ¶ 51.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Energin’s body-locked virtual object with CoSpaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would recognize that the virtual objects can remain the focus of the user's attention. For example, when the user is still working on the virtual object/character, the user may prefer that the virtual object/character appear to move along with the user's perspective. Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Co-Spaces in view of West, Cullum, and Stauber as applied to Claim 1, in further view of Lin et al. (US 20240012530 A1). Regarding Claim 17, Co-Spaces further teaches The method of claim 1, the method further comprising: while concurrently displaying the first three-dimensional virtual object and the virtual surface at the respective location, detecting, via the one or more input devices, a second input directed to the virtual content container (Co-Spaces pp. 48-49).
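The world-locked (Bennett, Claim 14) and body-locked (Energin, Claim 15) behaviors contrasted above can be sketched as two rendering rules: a world-locked object's view-space position shifts as the head moves, while a body-locked object's does not. A hypothetical 2-D illustration; the names and simplified math are mine, not from the references.

```python
def rendered_position(anchor, head_pose, mode):
    """View-space position of a virtual object under two locking modes.

    "world": the object is locked to a real-world reference location, so its
             position relative to the viewer changes as the head moves
             (cf. Bennett ¶ 28).
    "body":  the object keeps a fixed offset from the viewer and follows the
             head, appearing at the same spot in the display (cf. Energin ¶ 51).
    """
    ax, ay = anchor
    hx, hy = head_pose
    if mode == "world":
        return (ax - hx, ay - hy)  # shifts with head motion
    if mode == "body":
        return (ax, ay)            # constant in the viewer's frame
    raise ValueError(f"unknown mode: {mode}")
```

With the head at (1, 0) and an anchor at (3, 0), the world-locked object renders at (2, 0) while the body-locked object stays at (3, 0), regardless of head motion.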
However, Co-Spaces in view of West, Cullum, and Stauber does not explicitly disclose wherein the second input corresponds to a request to cease display of the virtual content container; and in response to detecting the second input directed to the virtual content container, ceasing display of the virtual content container and ceasing display of the virtual surface in the three-dimensional environment. Lin teaches wherein the second input corresponds to a request to cease display of the virtual content container; and in response to detecting the second input directed to the virtual content container, ceasing display of the virtual content container and ceasing display of the virtual surface in the three-dimensional environment (“In an optional embodiment of the present disclosure, the playing module 06 is configured to hide all virtual objects in the virtual scene interface and switch to a video playing interface in response to the playing instruction of the video, to play the video in the video playing interface.” Lin ¶ 168. Lin further suggests hiding all virtual objects and switching to video playing based on a user's input, stating “In this embodiment, the terminal can play the video based on various methods, and the user can directly play the video based on the virtual scene interface, so that the user interaction experience is optimized.” Lin ¶ 108. Lin further teaches user switching instructions, stating “In this embodiment, in the case of playing the video through cutscenes, the terminal can switch the cutscenes corresponding to different virtual cameras based on the user's switching instruction.” Lin ¶ 106. However, Lin does not explicitly disclose that hiding all virtual objects and switching to the video playing are based on a user’s instructions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lin’s teaching about a user’s input/instructions with Lin’s hiding of all virtual objects. One of ordinary skill in the art would recognize that a user is thereby given more control over a software system, for viewing preference and/or debugging purposes. After Co-Spaces in view of West, Cullum, and Stauber is combined with Lin, the “virtual content container” and “virtual surface” will be hidden, ceasing to be displayed, upon a user’s input to hide all virtual objects.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lin’s teaching with Co-Spaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would recognize that a user is thereby given more control over a software system, for viewing preference and/or debugging purposes. In addition, when all the virtual objects are hidden, a user can see the physical objects better in a mixed-reality environment. Regarding Claim 18, Co-Spaces in view of West, Cullum, Stauber, and Lin teaches The method of claim 17, the method further comprising: after detecting the second input and while the virtual content container and the virtual surface are not displayed in the three-dimensional environment (see the analysis for the previous claim), detecting, via the one or more input devices, a third input corresponding to a request to display the virtual content container in the three-dimensional environment; and in response to detecting the third input, displaying the virtual content container and the virtual surface in the three-dimensional environment (Lin teaches unhiding virtual objects, stating “Just one cutscene may be displayed in the virtual scene interface at this time to play the video.
After detecting that the video playing is completed, the terminal can switch back to the virtual scene interface and restore the virtual objects in the virtual scene.” Lin ¶ 106. Lin further suggests hiding all virtual objects and switching to video playing based on a user's input, stating “In this embodiment, the terminal can play the video based on various methods, and the user can directly play the video based on the virtual scene interface, so that the user interaction experience is optimized.” Lin ¶ 108. Lin further teaches user switching instructions, stating “In this embodiment, in the case of playing the video through cutscenes, the terminal can switch the cutscenes corresponding to different virtual cameras based on the user's switching instruction.” Lin ¶ 106. However, Lin does not explicitly disclose that unhiding all virtual objects is based on a user’s instructions/input. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lin’s teaching about a user’s input/instructions with Lin’s unhiding of all virtual objects. One of ordinary skill in the art would recognize that a user is thereby given more control over a software system, for viewing preference and/or debugging purposes. After Co-Spaces in view of West, Cullum, and Stauber is combined with Lin, the “virtual content container” and “virtual surface” will be restored upon a user’s input to restore all virtual objects.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lin’s teaching with Co-Spaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would recognize that a user is thereby given more control over a software system, for viewing preference and/or debugging purposes. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Co-Spaces in view of West, Cullum, and Stauber as applied to Claim 1, in further view of Grinstein et al.
(US 6714201 B1). Regarding Claim 19, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1. Co-Spaces in view of West, Cullum, and Stauber does not explicitly disclose wherein the virtual surface is displayed concurrently with a selectable option that is selectable to cease display of the virtual surface in the three-dimensional environment, the method further comprising: while displaying the virtual surface and the selectable option in the three-dimensional environment, detecting, via the one or more input devices, a second input corresponding to selection of the selectable option; and in response to detecting the second input, ceasing display of the virtual surface in the three-dimensional environment. Grinstein teaches wherein the virtual surface is displayed concurrently with a selectable option that is selectable to cease display of the virtual surface in the three-dimensional environment (Grinstein discloses “The remove-bounding-box option 562 would remove the display of the bounding box 554 from the currently selected node.” Grinstein col. 56 lines 55-58.), the method further comprising: while displaying the virtual surface and the selectable option in the three-dimensional environment, detecting, via the one or more input devices, a second input corresponding to selection of the selectable option (Grinstein discloses “The remove-bounding-box option 562 would remove the display of the bounding box 554 from the currently selected node.” Grinstein col. 56 lines 55-58.); and in response to detecting the second input, ceasing display of the virtual surface in the three-dimensional environment (Grinstein discloses “The remove-bounding-box option 562 would remove the display of the bounding box 554 from the currently selected node.” Grinstein col. 56 lines 55-58. After Co-Spaces in view of West, Cullum, and Stauber is combined with Grinstein, Co-Spaces’ bounding box, the bottom surface of which is mapped to the “virtual surface,” is removed.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Grinstein’s teaching related to the bounding box with Co-Spaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would be motivated to remove the highlight from an object that is no longer selected, so that there is less visual distraction. Grinstein states, “The button 521 hides the bounding box which indicates the currently selected portion, if any, of a graphic model in the scene view window 503.” Grinstein col. 56 lines 55-58. Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Co-Spaces in view of West, Cullum, Stauber, and Grinstein as applied to Claim 19, in further view of Peebler et al. (US 20200073521 A1). Regarding Claim 20, Co-Spaces in view of West, Cullum, Stauber, and Grinstein teaches The method of claim 19, wherein when the second input is detected, the first three-dimensional virtual object is displayed concurrently with the virtual surface at the respective location in the three-dimensional environment (CoSpaces pp. 48-49), the method further comprising. Co-Spaces in view of West, Cullum, Stauber, and Grinstein does not explicitly disclose in response to detecting the second input, moving the first three-dimensional virtual object away from the respective location and to the virtual content container. Peebler teaches in response to detecting the second input, moving the first three-dimensional virtual object away from the respective location and to the virtual content container (“As shown in FIG. 1C, in response to receiving the user input 60a in FIG. 1B, the selected kits area 40 indicates that the user has selected the multipede virtual object kit in order to form a new virtual object kit. In the example of FIG.
1C, an appearance of the multipede kit affordance 20a is changed (e.g., the multipede kit affordance 20a is grayed-out) in order to indicate that the user has selected the multipede virtual object kit to create a new virtual object kit. In some implementations, the selected kits area 40 provides an option to remove (e.g., unselect) the multipede virtual object kit (e.g., by left-swiping the representation of the multipede virtual object kit shown in the selected kits area 40).” Peebler ¶ 33. [Peebler FIG. 1C, media_image18]. Here, when a virtual item is unselected, the item is removed from the selected area, and the virtual item in the catalog is no longer grayed out, indicating that the item is again selectable and creating the appearance that the virtual item has returned from the selected area to the catalog (the unselected area). After Co-Spaces in view of West, Cullum, Stauber, and Grinstein is combined with Peebler, when the virtual character shown in Co-Spaces is unselected according to Grinstein and Peebler, the bounding box of the virtual character will be removed according to Grinstein, and the virtual character will further be returned to the catalog according to Peebler.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Peebler’s teaching of making an unselected item selectable with Co-Spaces in view of West, Cullum, Stauber, and Grinstein. One of ordinary skill in the art would be motivated to allow a user to select an item again in the future. Peebler ¶ 33. This setup is useful when a limited number of instances of a virtual object is allowed, for example, when the virtual object is a purchased instance of virtual furniture, or when the virtual character corresponds to a specific person. Claim 23 is rejected under 35 U.S.C.
103 as being unpatentable over Co-Spaces in view of West, Cullum, and Stauber as applied to Claim 22, in further view of BIRAN et al. (US 20230009683 A1). Regarding Claim 23, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 22, wherein: displaying the virtual surface within the three-dimensional environment in response to detecting the first input directed to the first three-dimensional virtual object includes: in accordance with a determination that an orientation of the first three-dimensional virtual object at the respective location in response to the first input being detected is a first respective orientation relative to the three-dimensional environment, displaying the virtual surface with a second respective orientation relative to the three-dimensional environment that is based on the first respective orientation (Co-Spaces p. 48: [Co-Spaces p. 48 figure, media_image3]. Here, the virtual character, mapped to the “first three-dimensional virtual object,” is at an upright orientation, and the bottom surface of the bounding box, mapped to the “virtual surface,” is based on the upright orientation.); and in accordance with a determination that the orientation of the first three-dimensional virtual object at the respective location in response to the first input being detected is a third respective orientation, different from the first respective orientation, relative to the three-dimensional environment, displaying the virtual surface with a fourth respective orientation, different from the second respective orientation, relative to the three-dimensional environment that is based on the third respective orientation (Co-Spaces p. 49: [Co-Spaces p. 49 figure, media_image11]. Here, when the character is placed at a different orientation, the bottom surface of the bounding box is based on the changed orientation.).
However, if the claimed “first input,” under BRI, cannot include multiple commands, e.g., moving the virtual character and then rotating the virtual character, Co-Spaces in view of West, Cullum, and Stauber is insufficient.

Biran teaches that the orientation of the virtual character is based on the surface the virtual character is placed upon (“. . . the readjustment of the simulation graphical elements (or virtual objects) on the digital terrain model may also use information representative of the slope of the terrain at the position of the virtual object, in order to readjust the orientation of the virtual object in the scene.” Biran ¶ 87. After the combination with Biran, when the virtual character is moved into the 3D scene as shown in the figure on Co-Spaces p. 48, the orientation of the virtual character and its bounding box depends on the orientation of the surface that the character is placed upon.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Biran’s teaching on how to place a virtual object with Co-Spaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would be motivated to make the virtual image more realistic and, therefore, more visually appealing and/or comfortable to a viewer.

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Co-Spaces in view of West, Cullum, and Stauber as applied to Claim 1, and further in view of Yamamoto et al. (WO 2018106299 A1).

Regarding Claim 27, Co-Spaces in view of West, Cullum, and Stauber teaches The method of claim 1. Co-Spaces in view of West, Cullum, and Stauber does not explicitly disclose further comprising: in response to detecting an end of the first input and in accordance with the determination that the respective location in the three-dimensional environment satisfies the one or more criteria, displaying the first three-dimensional virtual object as being included in the virtual surface.
Yamamoto teaches further comprising: in response to detecting an end of the first input and in accordance with the determination that the respective location in the three-dimensional environment satisfies the one or more criteria, displaying the first three-dimensional virtual object as being included in the virtual surface (

[Images: media_image19.png, media_image20.png, media_image21.png]

“In the example shown in FIGs. 4A-4H, the surface contour of the virtual surface 600 is defined by a substantially flat plane, simply for ease of discussion and illustration.” Yamamoto ¶ 26. Virtual object 700A becomes included in virtual surface 600. “As described above, the virtual objects 700 are in the far virtual field, and outside of the virtual reach of the user. To select one of the virtual objects 700 for annotation, or drawing, on the virtual surface 600, the user may select the virtual object 700 by, for example, orienting the handheld electronic device 200 toward a particular virtual object 700A to be selected.” Yamamoto ¶ 39.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Yamamoto’s teaching on a virtual surface with Co-Spaces in view of West, Cullum, and Stauber. One of ordinary skill in the art would be motivated to allow a user to easily make annotations about a virtual object. Yamamoto states, “To select one of the virtual objects 700 for annotation, or drawing, on the virtual surface 600, the user may select the virtual object 700 by, for example, orienting the handheld electronic device 200 toward a particular virtual object 700A to be selected.” Yamamoto ¶ 39.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Mendes et al.
(“Mid-Air Interactions Above Stereoscopic Interactive Tables”), which appears to teach the virtual surface as claimed:

[Image: media_image22.png]

Kim et al. (“Virtual object sizes for efficient and convenient mid-air manipulation”), which appears to teach the virtual surface as claimed:

[Image: media_image23.png]

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENGXI LIU whose telephone number is (571) 270-7509. The examiner can normally be reached M-F, 9 AM - 5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at 571-272-7794.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZHENGXI LIU/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Sep 22, 2023
Application Filed
Jul 09, 2025
Non-Final Rejection — §103
Sep 09, 2025
Examiner Interview Summary
Sep 09, 2025
Applicant Interview (Telephonic)
Oct 09, 2025
Response Filed
Dec 30, 2025
Final Rejection — §103
Mar 19, 2026
Applicant Interview (Telephonic)
Mar 19, 2026
Examiner Interview Summary
Apr 01, 2026
Request for Continued Examination
Apr 03, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602865
METHODS FOR DEPTH CONFLICT MITIGATION IN A THREE-DIMENSIONAL ENVIRONMENT
2y 5m to grant • Granted Apr 14, 2026
Patent 12599463
COLOR MANAGEMENT PROCESS FOR CUSTOMIZED DENTAL RESTORATIONS
2y 5m to grant • Granted Apr 14, 2026
Patent 12597402
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM FOR APPLICATION WINDOW HAVING FIRST DISPLAY MODE AND SECOND DISPLAY MODE
2y 5m to grant • Granted Apr 07, 2026
Patent 12567193
PARTICLE RENDERING METHOD AND APPARATUS
2y 5m to grant • Granted Mar 03, 2026
Patent 12561929
METHOD AND ELECTRONIC DEVICE FOR PROVIDING INFORMATION RELATED TO PLACING OBJECT IN SPACE
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner, based on the five most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
64%
Grant Probability
99%
With Interview (+40.1%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 354 resolved cases by this examiner. Grant probability derived from career allow rate.
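The headline grant probability above is the examiner's career allow rate, i.e., granted cases divided by resolved cases. A minimal sketch of that arithmetic, using the counts shown in the Examiner Intelligence panel (225 granted / 354 resolved); the variable names and rounding to a whole percent are illustrative assumptions about how the dashboard displays the figure:

```python
# Illustrative sketch: derive the career allow rate from the counts shown above.
granted = 225    # granted cases (from the panel above)
resolved = 354   # total resolved cases

career_allow_rate = granted / resolved
print(f"{career_allow_rate:.1%}")  # prints 63.6%, displayed rounded as 64%
```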
