Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because
Reference character “242”, in FIG. 2, has been used to designate both a “Data Obtaining Unit” and a “Tracking Unit”.
Reference character “246”, in FIG. 2, has been used to designate both a “Coordination Unit” and a “Data Transmission Unit”.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description:
FIG. 1B and/or FIG. 1C, reference character “1-138” is mentioned in the specification as a “Second Proximal End” but is not in the drawings.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description:
FIG. 1E, reference character “1-302” is not mentioned within the specification (Paragraph 62 of the specification refers to it as “1-306”).
FIGS. 7C-7N, reference character “738” is not mentioned within the specification.
FIG. 7F, reference character “734E” is not mentioned within the specification; it is most likely intended to be “736B”.
The drawings are objected to because of the following informalities:
FIG. 3, CGR Display(s) 312, CGR Experience Module 340, CGR Presenting Unit 344 and CGR Map Generating Unit 346 are referred to as XR Display(s) 312, XR Experience Module 340, XR Presenting Unit 344 and XR Map Generating Unit 346, respectively, within the specification; these should be changed so that the names in the specification and the drawings match each other.
FIG. 6, there should be an arrow going from Step 650 to Step 660.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The disclosure is objected to because of the following informalities:
Page 2, Paragraph 6, Line 13, the word “compoenent” is spelled incorrectly and should be “component”
Page 23, Paragraph 55, Line 6, first button is incorrectly labeled “1-126” and should be “1-128”
Page 31, Paragraph 83, Line 4, the word “display” should be replaced with “displaying”, so that it then reads as “... and creation for displaying a user avatar ...”
Page 37, Paragraph 104, Line 9, distal ends are incorrectly labeled “11.1.2-102,11.1.2-104” and should be “11.1.2-116, 11.1.2-118”
Page 38, Paragraph 108, Line 6, “display” should be “optical” and should read as “... of which the optical module 11.3.2-100 is a part ... “
Page 77, Paragraph 205, Line 2, the word “virtaul” is spelled incorrectly and should be “virtual”
Page 77, Paragraph 205, Line 9, “virtual object 755” should be “virtual shadow 755” and should read as “... accordingly initiates display of virtual shadow 755.”
Page 77, Paragraph 205, Lines 9-10, “virtual shadow 728” should be “virtual shadow 755” and should read as “... virtual shadow 755 in Figure 7E is displayed ...”
Page 85, Paragraph 224, Line 6, the word “optoinally” is spelled incorrectly and should be “optionally”
Page 90, Paragraph 238, Line 3, reference element number “314” for “input devices” shares the same reference number as “Image Sensors” and should therefore use a different element number (such as 125 or 150) to reference the “input devices”
Page 123, Paragraph 286, Line 16, the word “object” is missing in between “virtual” and “is” and should read as “... the first virtual object is less than ...”
Page 123, Paragraph 286, Line 22, the period “.” after the word “a” and before “higher” should be removed, so that it then reads as “... the first virtual object at a[[.]] higher speed.”
Page 128, Paragraph 296, Line 3, reference element number “314” for “input devices” shares the same reference number as “Image Sensors” and should therefore use a different element number (such as 125 or 150) to reference the “input devices”
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-16, 19-24, 27-34 and 39-42 are rejected under 35 U.S.C. 103 as being unpatentable over Yan et al. (U.S. Patent No. 11,210,863 B1), hereinafter Yan, in view of Berliner et al. (Pub. No.: US 2022/0254120 A1), hereinafter Berliner.
Regarding claim 1, Yan discloses a method (FIG. 3C and Col 23, Lines 17-20 teach that FIG. 3C illustrates an example flow diagram 340 for a process for augmented reality object placement, in accordance with one or more example embodiments of the present disclosure.) comprising:
at a computer system (FIG. 4 and Col. 23, Lines 33-55 teach that FIG. 4 illustrates an example system 400 for augmented reality object placement, in accordance with one or more example embodiments of the present disclosure. Referring to FIG. 4, the system 400 may include one or more devices 404, which may include, for example, mobile phones, tablets, and/or any other number and/or types of devices. The one or more devices 404 may include an application 418, which may include an augmented reality module 420 and/or a see and understand module 421. The augmented reality module 420 may be responsible for performing any of the operations described herein, such as presenting the augmented reality display to the user 402, including, for example, the virtual representation of the object, any overlays, the real-time view of the physical environment, etc. The see and understand module 421 may be responsible for performing any of the operations described herein, such as pre-processing or making any determinations with respect to placement of a 3D representation of an object, for example. The one or more devices 404 may also include at least one or more processor(s) 422, memory 424, and/or a camera 426. The one or more devices may also include any other elements, such as described with respect to the computing element 500 of FIG. 5.) in communication with a display generation component and one or more input devices (Col. 25, Line 63 through Col. 26, Line 8 teach that the computing element (e.g., computer system) 500 may include a hardware processor 502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 504 and a static memory 506, some or all of which may communicate with each other via an interlink (e.g., bus) 508. 
The computing element 500 may further include a power management device 532, a graphics display device 510, an alphanumeric input device 512 (e.g., a keyboard), and a user interface (UI) navigation device 514 (e.g., a mouse). In an example, the graphics display device 510, alphanumeric input device 512, and UI navigation device 514 may be a touch screen display.):
displaying, via the display generation component, a first virtual object at a first location and a second virtual object at a second location, different from the first location, in a three-dimensional environment and wherein the first location is greater than a threshold distance from the second location (Col. 24, Lines 4-15 teach that this real-time view may be presented on a device 404 through an augmented reality display 413 shown on a display of the device. The augmented reality display 413 may include a virtual representation of the object 410 that is being virtually displayed within the real-time view of the environment 408, and may also include the objects that are physically located within the environment 406. That is, a user 402 may be able to preview what an item looks like in a real environment 406 using the presentation of the item as a virtual representation of the object 410 in the augmented reality display 413 on the device 404. Additionally, Col. 12, Line 65 through Col. 13, Line 14 teach that the environment 106 is depicted as including one or more example objects that might occupy the environment 106, such as an end table 107, a plant 108, and a door 113. The environment may also include one or more walls 109 and a floor 111. It should be noted that the objects depicted in the figure are merely exemplary, and any other types and/or combination of objects may similarly exist in the environment 106. The mobile device 104 may include a display 115, which may be capable of displaying an augmented reality view of the environment 106 in which the user 102 is located (as shown in scenes 112, 120, 130, and 140 of the use case 100, as described below). The augmented reality view of the environment 106 may be capable of displaying a virtual representation of the object 110 within the augmented reality view of the environment 106 without the virtual representation of the object 110 being physically present in the environment 106. Lastly, Col. 8, Lines 56-65 teach that the comparison of the location of the one or more points associated with the wall space and the one or more points associated with the virtual representation of the object may be used to determine a distance between the wall space and the object. If it is determined that the distance is greater than a threshold distance, then it may be determined that the virtual representation of the object is not located against the wall, and the location of the virtual representation of the object may be updated to be within the threshold distance from the wall space.). However, Yan fails to disclose wherein the second virtual object is a container that is able to contain the first virtual object.
Berliner discloses wherein the second virtual object is a container that is able to contain the first virtual object (Paragraph 186 teaches that a virtual object may refer to a visual representation rendered by a computing device and configured to represent an object. A virtual object may include, for example, an inanimate virtual object, an animate virtual object, virtual furniture, a virtual decorative object, a virtual widget, a virtual screen, or any other type of virtual representation.). Since Yan teaches a method for displaying virtual objects within a 3-Dimensional environment, with the capability of moving different types of objects around within that 3-Dimensional environment, and Berliner teaches moving around different types of virtual objects within a 3-Dimensional environment, including objects such as a virtual widget or virtual screen that can act as a container for other virtual objects and data, it would have been obvious to a person having ordinary skill in the art to have combined these features so that the selection of movable virtual objects within the 3-Dimensional environment could also include container-type objects.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yan to incorporate the teachings of Berliner, so that the combined features would provide the user with additional movable virtual objects and improve the overall display features by including container-type virtual objects.
Furthermore, Yan in view of Berliner disclose while displaying, via the display generation component, the first virtual object at the first location and the second virtual object at the second location, detecting, via the one or more input devices, a first input corresponding to a request to move the first virtual object away from the first location in the three-dimensional environment (Col. 22, Lines 38-46 of Yan teach that as such, at block 330, the augmented reality system may determine an input from the user indicating a movement of the virtual representation of the object from the default location to a second location. The input may be in the form of the user interacting with a touch screen of the mobile device to drag the virtual representation of the object from the default location to a second location within the augmented reality display of the environment.);
and in response to detecting the first input:
moving the first virtual object in the three-dimensional environment in accordance with the first input (Col. 9, Lines 55-59 of Yan teaches that for example, the user may manually drag and drop the virtual representation of the object from the default placement location within the augmented reality display to a second location that is nearby a wall.), including:
in accordance with a determination that the first input corresponds to movement of the first virtual object to a third location in the three-dimensional environment that is within the threshold distance of the second location of the second virtual object, displaying the first virtual object with an orientation in the three-dimensional environment that is based on an orientation of the second virtual object (Col. 19, Lines 1-17 of Yan teach that in some embodiments, the third location 250 may include a portion of the wall that is closest to the second location 248 that includes free floor space. In some cases, the virtual representation of the object 210 may only be repositioned in the third location 250 if the virtual representation of the object 210 is dragged to within a threshold distance from a position on the wall where there is adequate space for the virtual representation of the object (and/or based on the application of one or more criteria). Thus, if the virtual representation of the object is within a threshold distance of a suitable location for the virtual representation of the object, then the augmented reality system may automatically place the virtual representation of the object in that new location. If there is more than one location within the threshold distance, then the criteria can be utilized to determine the location to which the virtual representation of the object is automatically moved. Additionally, Col. 16, Lines 19-24 of Yan teach that for example, the location information in the data structure may be used to determine the distance between the virtual representation of the object 110 and a wall, and the orientation information may be used to determine the orientation of the virtual representation of the object 110 with respect to the wall.);
and in accordance with a determination that the first input corresponds to movement of the first virtual object to a fourth location in the three-dimensional environment that is further than the threshold distance from the second location of the second virtual object, displaying the first virtual object with an orientation that is independent of an orientation of the second virtual object (Col. 20, Lines 40-47 of Yan teach that still referring to FIG. 2, the use case 200 may proceed with a third scene 230. The third scene 230 of the use case 200 may illustrate the ability of the augmented reality display to keep the virtual representation of the object 210 positioned in a particular location (for the example the fourth location 250) even if the user 202 pans the camera of the mobile device 204 to face a different direction within the environment 206.).
Regarding claim 2, Yan in view of Berliner disclose everything claimed as applied above (see claim 1), in addition, Yan in view of Berliner disclose while detecting the first input and in accordance with a determination that the first virtual object is further than the threshold distance from any virtual object that is a container that is able to contain the first virtual object, displaying the first virtual object with an orientation that is based on a viewpoint of a user of the computer system (Col. 10, Lines 9-19 of Yan teach that for example, if the user swipes towards a wall from the location of the virtual representation of the object, then the virtual representation of the object may be repositioned against the wall. The virtual representation of the object may also be automatically re-oriented so that it is in a logical orientation against the wall. For example, if the initial orientation of a couch is with the front of the couch facing towards the wall with the couch being in the center of the augmented reality display, and the user swipes towards a wall, then the couch may be reoriented so that the backside of the couch is against the wall.).
Regarding claim 3, Yan in view of Berliner disclose everything claimed as applied above (see claim 1), in addition, Yan in view of Berliner disclose displaying, via the display generation component, a third virtual object at a third location, different from the second location of the second virtual object, in the three-dimensional environment, wherein the third virtual object is a second container that is able to contain the first virtual object (Paragraph 570 of Berliner teaches that some disclosed embodiments may include receiving interplanar input signals for causing the virtual object to move to a third location on the second virtual plane, while the virtual object is in the second location. Additionally, paragraph 571 of Berliner teaches that some disclosed embodiments may include causing the wearable extended reality appliance to virtually display an interplanar movement of the virtual object from the second location to the third location, in response to the interplanar input signals. Interplanar movement may refer to movement of a virtual object from the plane it is located on to a different virtual plane. A non-limiting example of interplanar movement may include movement from a virtual plane closer to a wearable extended reality appliance to a different virtual plane further away from the wearable extended reality appliance.);
while moving the first virtual object in the three-dimensional environment in accordance with the first input, detecting that the first virtual object is within the threshold distance of the third location of the third virtual object (Col. 19, Lines 1-9 of Yan teach that in some embodiments, the third location 250 may include a portion of the wall that is closest to the second location 248 that includes free floor space. In some cases, the virtual representation of the object 210 may only be repositioned in the third location 250 if the virtual representation of the object 210 is dragged to within a threshold distance from a position on the wall where there is adequate space for the virtual representation of the object (and/or based on the application of one or more criteria).);
and in response to detecting that the first virtual object is within the threshold distance of the third location of the third virtual object, displaying the first virtual object with an orientation that is based on an orientation of the third virtual object (Paragraph 153 of Berliner teaches that modifying a perspective of a scene may include, for example, changing one or more of an orientation, an angle, a size, a direction, a position, an aspect, a parameter of a spatial transformation, a spatial transformation, or any other visual characteristic of the scene or of a part of the scene. Accordingly, a second perspective of the scene presented may include, for example, one or more of a second orientation, a second angle, a second size, a second direction, a second position, a second aspect, a second spatial transformation, or a modification of any other characteristic of the presentation of the scene (or of a part of the scene) in the extended reality environment that is different from the first perspective of the scene.).
Regarding claim 4, Yan in view of Berliner disclose everything claimed as applied above (see claim 1), in addition, Yan in view of Berliner disclose wherein the first virtual object is spatially arranged at a distance apart from the second virtual object while the first virtual object is within the threshold distance of the second virtual object (Paragraph 145 of Berliner teaches that in one example, controlling perspective may include changing a first spatial transformation of a first portion of the scene and changing a second spatial transformation of a second portion of the scene (the second spatial transformation may differ from the first spatial transformation, the change to the second spatial transformation may differ from the change to the first spatial transformation, and/or the second portion of the scene may differ from the first portion of the scene). Additionally, paragraph 164 of Berliner teaches that docking the plurality of virtual objects to a physical space may include maintaining a size, orientation, distance, or any other spatial attribute of the plurality of virtual objects with reference to a physical space. This type of selective position control may allow a user to move within the physical space while maintaining a size, orientation, distance, or any other spatial attribute of the plurality of virtual objects maintained as the user moves.).
Regarding claim 5, Yan in view of Berliner disclose everything claimed as applied above (see claim 1), in addition, Yan in view of Berliner disclose while moving the first virtual object in the three-dimensional environment in accordance with the first input (Col. 9, Lines 55-59 of Yan teach that for example, the user may manually drag and drop the virtual representation of the object from the default placement location within the augmented reality display to a second location that is nearby a wall.):
in accordance with a determination that the first virtual object is within the threshold distance of the second location of the second virtual object, moving the first virtual object in a linear path in accordance with at least a portion of the first input (Paragraph 451 of Berliner teaches that in some examples, the first two-dimensional input may be based on a movement along a surface (e.g., a two-dimensional movement) as captured by the surface input device, and the movement may have a direction that, when mapped onto the first virtual plane, may point toward the first virtual object located on the first virtual plane. The direction of the movement may indicate an intent to select the first virtual object located on the first virtual plane. In some examples, the movement may be a straight movement or near-straight movement.);
and in accordance with a determination that the first virtual object is further than the threshold distance from the second location of the second virtual object, moving the first virtual object along a curved path in accordance with the at least the portion of the first input (Paragraph 451 of Berliner teaches that in some examples, the movement may be a curved movement or a freeform movement. The movement may have a direction (e.g., a direction of moving at a particular time instance, an overall direction, or an average direction) that, when mapped onto the first virtual plane, may point toward the first virtual object located on the first virtual plane.).
Regarding claim 6, Yan in view of Berliner disclose everything claimed as applied above (see claim 1), in addition, Yan in view of Berliner disclose while moving the first virtual object in the three-dimensional environment in accordance with the first input, detecting that at least a portion of the first input corresponds to moving the first virtual object away from the second virtual object (Col. 11, Lines 21-26 of Yan teach that for example, if the virtual representation of the object is dragged to a location in the center of a room away from any walls, then the virtual representation of the object may remain in that location in the center of the room rather than being repositioned to a second location, such as against a wall.);
and in response to detecting the at least the portion of the first input corresponding to moving the first virtual object away from the second virtual object, moving the first virtual object along a curved path away from the second virtual object (Paragraph 327 of Berliner teaches that in some embodiments, the adjustable extended reality display parameter may include changing a virtual distance of the virtually displayed content on the wearable extended reality appliance. For example, a rule may indicate that the virtually displayed content appears to be moved farther away from the user (i.e., made to appear smaller) or appear to be moved closer to the user (i.e., made to appear larger). Additionally, paragraph 461 of Berliner teaches that additionally or alternatively, the movement may have a direction that, when mapped onto the second virtual plane, may point toward the second virtual object that appears on the physical surface. The direction of the movement may indicate an intent to select the second virtual object that appears on the physical surface. In some examples, the movement may be a straight movement or near-straight movement. In some examples, the movement may be a curved movement or a freeform movement. The direction of the movement may include, for example, a direction of moving at a particular time instance, an overall direction, or an average direction.).
Regarding claim 7, Yan in view of Berliner disclose everything claimed as applied above (see claim 1), in addition, Yan in view of Berliner disclose while moving the first virtual object in the three-dimensional environment in accordance with the first input, detecting that the first virtual object reaches a location that is within the threshold distance of the second virtual object in the three-dimensional environment (Paragraph 114 of Berliner teaches that the motion sensor 373 may include one or more motion sensors configured to measure motion of input unit 202 or motion of objects in the environment of input unit 202. Specifically, the motion sensors may perform at least one of the following: detect motion of objects in the environment of input unit 202.);
and in response to detecting that the first virtual object reaches the location that is within the threshold distance of the second virtual object in the three-dimensional environment, moving the first virtual object to a respective location corresponding to the second virtual object without detecting input for moving the first virtual object to the respective location corresponding to the second virtual object (Col. 8, Line 66 through Col. 9, Line 3 of Yan teach that in some embodiments, once it is determined that the virtual representation of the object is within the threshold distance of the wall space, the orientation of the virtual representation of the object may be determined relative to the orientation of the wall space. Additionally, Col. 10, Line 66 through Col. 11, Line 7 of Yan teach that in some embodiments, once potential secondary locations are identified within an augmented reality display, if any part of the virtual representation of the object is determined to be within the threshold distance (or moved to within a threshold distance) of the secondary location, then the virtual representation of the object may be automatically repositioned to the secondary location at an appropriate orientation (for example, as described above with respect to the initial placement of the virtual representation of the object).).
Regarding claim 8, Yan in view of Berliner disclose everything claimed as applied above (see claim 7), in addition, Yan in view of Berliner disclose wherein moving the first virtual object to the respective location corresponding to the second virtual object includes changing a size of the first virtual object to display the first virtual object with a size at the respective location that is based on a size of the second virtual object (Paragraph 146 of Berliner teaches that changing a size of the scene may include, for example, enlarging, augmenting, broadening, expanding, extending, growing, inflating, lengthening, magnifying, swelling, widening, amplifying, dilating, elongating, spreading, stretching, shrinking, decreasing, diminishing, lessening, narrowing, reducing, shortening, compressing, or constricting one or more dimensions of any component, components, or combination of components in the scene. Additionally, paragraph 162 of Berliner teaches that in some embodiments, one or more virtual objects may be configured such that in the first perspective of the scene, one or more of the virtual objects may be displayed at the same distance or at variable virtual distances from the wearable extended reality appliance; in the second perspective of the scene, at least one of the virtual objects is displayed with a size that differs from others of the plurality of virtual objects; and in the third perspective of the scene, the size of the at least one virtual object reverts to a size of the at least one virtual object prior to presentation in the second perspective of the scene.).
Regarding claim 9, Yan in view of Berliner disclose everything claimed as applied above (see claim 7), in addition, Yan in view of Berliner disclose wherein moving the first virtual object to the respective location corresponding to the second virtual object includes moving the first virtual object closer to the second virtual object (Paragraph 327 of Berliner teaches that in some embodiments, the adjustable extended reality display parameter may include changing a virtual distance of the virtually displayed content on the wearable extended reality appliance. For example, a rule may indicate that the virtually displayed content appears to be moved farther away from the user (i.e., made to appear smaller) or appear to be moved closer to the user (i.e., made to appear larger).).
Regarding claim 10, Yan in view of Berliner disclose everything claimed as applied above (see claim 7), in addition, Yan in view of Berliner disclose wherein moving the first virtual object to the respective location corresponding to the second virtual object includes changing an orientation of the first virtual object to display the first virtual object with an orientation in the three-dimensional environment that is based on the orientation of the second virtual object (Col. 8, Line 66 through Col. 9, Line 3 of Yan teach that in some embodiments, once it is determined that the virtual representation of the object is within the threshold distance of the wall space, the orientation of the virtual representation of the object may be determined relative to the orientation of the wall space.).
Regarding claim 11, Yan in view of Berliner disclose everything claimed as applied above (see claim 7), in addition, Yan in view of Berliner disclose while displaying the first virtual object at the respective location corresponding to the second virtual object, detecting, via the one or more input devices, a second input corresponding to a request to move the first virtual object away from the second virtual object in the three-dimensional environment (Paragraph 165 of Berliner teaches that for example, in response to first input signals, the virtual object may be moved in a first direction and a first distance on the virtual surface, and in response to second input signals, the virtual object may be moved in a second direction and a second distance on the virtual surface.);
and in response to detecting the second input:
in accordance with a determination that the second input corresponds to movement of the first virtual object to a distance greater than the threshold distance and less than a second threshold distance from the second virtual object, moving the first virtual object to a location corresponding to the second virtual object where the orientation of the first virtual object is at least partially based on the orientation of the second virtual object (Col. 16, Lines 38-52 of Yan teach that in some embodiments, the virtual representation of the object 110 may be automatically positioned in the second location 132 within the augmented reality display of the environment 106 based on the pre-processing performed in connection with the second scene 120 and/or third scene 130 of the use case 100. For example, since the item 116 described in this particular use case 100 is a couch, the virtual representation of the object 110 may be automatically placed in a particular orientation (for example, facing forwards away from a wall) with a backside of the virtual representation of the object 110 affixed, that is, positioned adjacent, to a wall 109 of the augmented reality display and a bottom portion of the virtual representation of the object 110 affixed to a floor 111 of the augmented reality display.);
and in accordance with a determination that the second input corresponds to movement of the first virtual object to a distance greater than the second threshold distance from the second virtual object, moving the first virtual object in the three-dimensional environment to a location that does not correspond to the second virtual object where the orientation of the first virtual object is not based on the orientation of the second virtual object (Col. 16 Lines 1-10 of Yan teach that for example, the data structure may include information, such as a current orientation of the virtual representation of the object 110 and a location of the virtual representation of the object 110 within the 3D model of the environment 106. The orientation data may indicate the orientation of the virtual representation of the object 110 with respect to a reference orientation or the orientation of the virtual representation of the object 110 with respect to one or more other objects within the real environment 106. Additionally, paragraph 411 of Berliner teaches that some disclosed embodiments may further include selecting the first display mode when the determined value of the at least one parameter is greater than a threshold. Alternatively, other embodiments may further include selecting the second display mode when the determined value of at least one parameter is less than the threshold.).
Regarding claim 12, Yan in view of Berliner disclose everything claimed as applied above (see claim 7), in addition, Yan in view of Berliner disclose in accordance with a determination that the location of the first virtual object relative to the second virtual object when it reaches the threshold distance from the second virtual object is a first respective location, the respective location corresponding to the second virtual object is a second respective location (Col. 18, Lines 45-55 of Yan teach that still referring to FIG. 2, the use case 200 may begin with a first scene 212. The first scene 212 of the use case 200 may involve the user 202 dragging the virtual representation of the object 210 from a first location 232 to a second location 248 within the augmented reality display of the environment 106 (or otherwise swiping in the general direction of the second location 248). Once the user 202 has dragged the virtual representation of the object 210 to the second location 248 within the augmented reality display of the environment 206, the use case 200 may proceed to a second scene 220.);
and in accordance with a determination that the location of the first virtual object relative to the second virtual object when it reaches the threshold distance from the second virtual object is a third respective location, different from the first respective location, the respective location corresponding to the second virtual object is a fourth respective location, different from the second respective location (Col. 19, Lines 3-17 of Yan teach that in some cases, the virtual representation of the object 210 may only be repositioned in the third location 250 if the virtual representation of the object 210 is dragged to within a threshold distance from a position on the wall where there is adequate space for the virtual representation of the object (and/or based on the application of one or more criteria). Thus, if the virtual representation of the object is within a threshold distance of a suitable location for the virtual representation of the object, then the augmented reality system may automatically place the virtual representation of the object in that new location. If there is more than one location within the threshold distance, then the criteria can be utilized to determine the location to which the virtual representation of the object is automatically moved. Additionally, Col. 20, Lines 40-47 of Yan teach that still referring to FIG. 2, the use case 200 may proceed with a third scene 230. The third scene 230 of the use case 200 may illustrate the ability of the augmented reality display to keep the virtual representation of the object 210 positioned in a particular location (for example, the fourth location 250) even if the user 202 pans the camera of the mobile device 204 to face a different direction within the environment 206.);
Regarding claim 13, Yan in view of Berliner disclose everything claimed as applied above (see claim 12), in addition, Yan in view of Berliner disclose wherein the respective location corresponding to the second virtual object is based on a projection, to the second virtual object, of a line between a viewpoint of a user of the computer system and the location of the first virtual object relative to the second virtual object when the first virtual object reaches the threshold distance from the second virtual object (Paragraph 81 of Berliner teaches that in another embodiment, such digital signals may include one or more projections of the virtual content, for example, in a format ready for presentation (e.g., image, video, etc.). For example, each such projection may correspond to a particular orientation or a particular angle. In another embodiment, the digital signals may include a representation of virtual content, for example, by encoding objects in a three-dimensional array of voxels, in a polygon mesh, or in any other format in which virtual content may be presented.).
Regarding claim 14, Yan in view of Berliner disclose everything claimed as applied above (see claim 12), in addition, Yan in view of Berliner disclose wherein the respective location corresponding to the second virtual object is based on a perpendicular projection from a surface of the second virtual object to the location of the first virtual object relative to the second virtual object when it reaches the threshold distance from the second virtual object (Paragraph 165 of Berliner teaches that for example, a virtual surface may be determined based on the physical surface (for example, parallel to the physical surface, overlapping with the physical surface, perpendicular to the physical surface, at a selected orientation with respect to the virtual surface, etc.), and the virtual object may be moved on the virtual surface. In one example, the direction and/or the distance of the motion of the virtual object on the virtual surface may be selected based on the first input signals. Additionally, paragraph 173 of Berliner teaches that in some embodiments, the at least one processor may be configured to, in response to the first input signals and the second input signals, cause changes in the display of the selected virtual object to be presented via the wearable extended reality appliance. Changes in the display of the selected virtual object may include changing a perspective position relative to the object, changing a distance from the object, changing an angle of the object, and changing a size of the object.).
Regarding claim 15, Yan in view of Berliner disclose everything claimed as applied above (see claim 12), in addition, Yan in view of Berliner disclose wherein the respective location corresponding to the second virtual object is based on a respective angle of a viewpoint of a user relative to the second virtual object (Paragraph 300 of Berliner teaches that for example, the parameters of the extended reality display that may be adjusted may include: picture settings, such as brightness, contrast, sharpness, or display mode (e.g., a game mode with predefined settings); color settings, such as color component levels or other color adjustment settings; a position of the virtual content relative to a location of the user's head; a size, a location, a shape, or an angle of the virtual content within the user's field of view as defined by the wearable extended reality appliance. Additionally, paragraph 368 of Berliner teaches that for example, a viewing angle of the virtual objects may be adjusted corresponding to the user's change in pose.).
Regarding claim 16, Yan in view of Berliner disclose everything claimed as applied above (see claim 15), in addition, Yan in view of Berliner disclose while moving the first virtual object in the three-dimensional environment in accordance with the first input and in response to detecting that the first virtual object reaches the location that is within the threshold distance of the second virtual object in the three-dimensional environment (Paragraph 114 of Berliner teaches that motion sensor 373 may include one or more motion sensors configured to measure motion of input unit 202 or motion of objects in the environment of input unit 202. Specifically, the motion sensors may perform at least one of the following: detect motion of objects in the environment of input unit 202 ….):
in accordance with a determination that the respective angle of the viewpoint of a user of the computer system relative to the second virtual object is within a first range of angles, displaying the first virtual object at a respective location corresponding to the second virtual object based on a projection, to the second virtual object, of a line between a viewpoint of the user and the location of the first virtual object relative to the second virtual object when the first virtual object reaches the threshold distance from the second virtual object (Paragraph 200 of Berliner teaches that in some examples, the horizontal range and/or vertical range of the portion of the field of view associated with the display system may be 45-90 degrees. In other implementations, the portion of the field of view associated with the display system may be increased to the field of view or the visual field in humans. Additionally, paragraph 462 of Berliner teaches that based on determining whether the projected movement direction from the current location of the virtual cursor is toward the first virtual object (and/or an object, a location, an area, a line, or a surface associated with the first virtual object) or whether the projected movement direction from the current location of the virtual cursor is toward the second virtual object (and/or an object, a location, an area, a line, or a surface associated with the second virtual object), it may be determined whether the two-dimensional input based on the movement associated with the surface input device is reflective of an intent to select the first virtual object or is reflective of an intent to select the second virtual object.);
and in accordance with a determination that the respective angle of the viewpoint of the user relative to the second virtual object is within a second range of angles, different from the first range of angles, displaying the first virtual object at a respective location corresponding to the second virtual object based on a perpendicular projection from a surface of the second virtual object to the location of the first virtual object relative to the second virtual object when the first virtual object reaches the threshold distance from the second virtual object (Paragraph 325 of Berliner teaches that in some embodiments, the adjustable extended reality display parameter may include changing an angle of at least one virtual screen associated with the virtually displayed content. For example, a rule may indicate that the viewing angle of one or more virtual screens is changed to effectively angle the one or more virtual screens at least partially out of the user's field of view so that the user may be able to better view the physical environment.).
Regarding claim 19, Yan in view of Berliner disclose everything claimed as applied above (see claim 1), in addition, Yan in view of Berliner disclose while displaying, via the display generation component, the first virtual object in the three-dimensional environment, detecting, via the one or more input devices, a second input corresponding to a request to move the first virtual object away from the second virtual object (Paragraph 155 of Berliner teaches that some disclosed embodiments may further involve receiving via the touch sensor, second input signals caused by a second multi-finger interaction with the touch sensor. Second input signals may be similar to the first input signal described above and may include signals that provide information regarding one or more parameters that may be determined based on a touch interaction with a touch sensor. Additionally, Paragraph 165 of Berliner teaches that for example, in response to first input signals, the virtual object may be moved in a first direction and a first distance on the virtual surface, and in response to second input signals, the virtual object may be moved in a second direction and a second distance on the virtual surface.);
and in response to detecting the second input:
in accordance with a determination that the first virtual object was within the threshold distance of the second virtual object when the second input was detected, moving the first virtual object away from the second virtual object by a first distance (Paragraph 575 of Berliner teaches that in response to interplanar input signals, the wearable extended reality appliance may magnify, shrink, rotate, or otherwise modify the extended reality display of an animate virtual object, a virtual computer screen, and/or a virtual weather widget. This modification may reflect the difference between the first distance of two feet associated with the first virtual plane and the second distance of one foot associated with the second virtual plane. Additionally, Col. 19, Lines 42-63 of Yan teach that in some embodiments, potential options for the one or more third locations in which the virtual representation of the object 210 may be automatically repositioned may be determined prior to the virtual representation of the object 210 being dragged by the user 202. Thus, when the user 202 drags the virtual representation of the object 210 to a second location (for example, second location 248 depicted in FIG. 2), it may be determined which third location the second location is closest to, and the virtual representation of the object 210 may then be automatically repositioned at that third location if it is within the threshold distance of that third location. In some cases, the virtual representation of the object 210 may be repositioned to that closest third location even if it is not within a given threshold distance of the third location. In some cases, the user 202 may swipe in a general direction rather than drag and drop the virtual representation of the object 210 to a second location 248. In these cases, it may be determined which of the potential third locations is closest to the direction of the swipe, and the virtual representation of the object 210 may be automatically repositioned to that third location based on the direction of the swipe.);
and in accordance with a determination that the first virtual object was further than the threshold distance from the second virtual object when the second input was detected, moving the first virtual object away from the second virtual object by a second distance, greater than the first distance (Paragraph 583 of Berliner teaches that the non-limiting examples of determining whether to move a virtual object may include a user or computer setting a threshold or limit. For example, a threshold movement of a physical object may require a displacement. In another example, the threshold movement of the physical object may require a rotation. When the identified movement is not equal to or greater than the threshold movement, the virtual object may not be moved to a differing virtual plane. Alternatively, when the identified movement is greater than the threshold movement, the virtual object may be moved to the differing virtual plane. Additionally, Col. 32, Line 34 through Col. 33, Line 3 of Yan teach determining a first distance value and a second distance value for the 3D model of the object, the first distance value indicating a first distance from the 3D model of the object at the first location to the first wall space, the second distance value indicating a second distance from the 3D model of the object at the second location to the second wall space.).
Regarding claim 20, Yan in view of Berliner disclose everything claimed as applied above (see claim 19), in addition, Yan in view of Berliner disclose wherein the request to move the first virtual object away from the second virtual object includes movement in a depth direction relative to the second virtual object (Col. 5, Lines 1-8 of Yan teach that based on this information, a 3D coordinate system may also be developed for the 3D model, such that objects within the 3D model may be associated with particular coordinates in the 3D coordinate system. This depth information may also be used to determine the distance between various objects in the 3D model. Techniques such as simultaneous localization and mapping (SLAM) may also be implemented in constructing the 3D model.).
Regarding claim 21, Yan in view of Berliner disclose everything claimed as applied above (see claim 19), in addition, Yan in view of Berliner disclose wherein the request to move the first virtual object away from the second virtual object includes movement in a lateral direction relative to the second virtual object (Paragraph 388 of Berliner teaches that for example, two virtual objects may be positioned on a horizontal virtual plane (e.g., a surface of a desk) and a different virtual object (e.g., a virtual display screen) may be positioned on a vertical virtual plane in front of a user.).
Regarding claim 22, Yan in view of Berliner disclose everything claimed as applied above (see claim 19), in addition, Yan in view of Berliner disclose in response to detecting the second input and in accordance with the determination that the first virtual object was within the threshold distance of the second virtual object when the second input was detected:
in accordance with a determination that the second input corresponds to movement of the first virtual object in a depth direction relative to the second virtual object, moving the first virtual object away from a respective location corresponding to the second virtual object by a first respective distance (Paragraph 167 of Berliner teaches that a three-dimensional change to the scene may include a change in the scene having three measurements, such as length, width, and depth. For example, a first perspective of the scene may include a cube. In this example, the first input signals from the touch sensor may reflect an input comprising the user moving their thumb and forefinger in opposing directions, in a sliding motion. The second display signals in this example may be configured to modify the cube by increasing or decreasing the depth of the cube.);
and in accordance with a determination that the second input corresponds to movement of the first virtual object in a lateral direction relative to the second virtual object, moving the first virtual object away from the respective location corresponding to the second virtual object by a second respective distance, greater than the first respective distance (FIG. 50A and paragraph 554 of Berliner teaches that as also illustrated in FIG. 50A, the first distance 5008 associated with first virtual plane 5001 is greater than the second distance 5010 associated with second virtual plane 5003. Additionally, paragraph 573 of Berliner teaches that in some embodiments, a first curvature of the first virtual plane may be substantially identical to a second curvature of the second virtual plane, and may further include, in response to the interplanar input signals, modifying a display of the virtual object to reflect a difference between the first distance and the second distance.).
Regarding claim 23, Yan in view of Berliner disclose everything claimed as applied above (see claim 19), in addition, Yan in view of Berliner disclose wherein moving the first virtual object away from the second virtual object by the first distance includes moving the first virtual object based on a first simulated spring between the second virtual object and the first virtual object, and a second simulated spring between the first virtual object and a location in the three-dimensional environment corresponding to the second input, wherein the location corresponding to the second input changes in accordance with the second input (Paragraph 152 of Berliner teaches that the first input signals may include signals that provide information regarding a user's touch based on the user's interaction with a touch sensor. The first input signals may include information regarding one or more parameters that may be determined based on a touch interaction with a touch sensor. For example, the first input signals may include information regarding pressure, force, strain, position, motion, velocity, acceleration, temperature, occupancy, or any other physical or mechanical characteristic. Additionally, paragraph 269 of Berliner teaches that in other embodiments, the highlighting may relate to any physical indicator perceivable by the user which may serve to emphasize, or otherwise differentiate, the group of objects in the first virtual region by, for example, providing physical feedback (e.g., haptic feedback) to the user in response to a kinesics input.).
Regarding claim 24, Yan in view of Berliner disclose everything claimed as applied above (see claim 19), in addition, Yan in view of Berliner disclose in response to detecting the second input (Paragraph 165 of Berliner teaches that for example, in response to first input signals, the virtual object may be moved in a first direction and a first distance on the virtual surface, and in response to second input signals, the virtual object may be moved in a second direction and a second distance on the virtual surface.):
in accordance with a determination that the second input corresponds to movement of the first virtual object to a location that is less than a second threshold distance away from the second virtual object, moving the first virtual object to a respective location corresponding to the second virtual object (Paragraph 553 of Berliner teaches that the first virtual plane may be positioned at a first distance from the wearable extended reality appliance and the second virtual plane may be positioned at a different second distance from the wearable extended reality appliance. In other embodiments, the first distance may be greater than, equal to, or less than the second distance. For example, a first virtual plane formed by the x-axis and the y-axis may be associated with a first distance of three feet from the wearable extended reality appliance. Additionally, a second virtual plane formed by the x-axis and the y-axis different from the first virtual plane may be associated with a second distance of one foot from the wearable extended reality appliance.);
and in accordance with a determination that the second input corresponds to movement of the first virtual object to a location that is greater than the second threshold distance away from the second virtual object, moving a third virtual object to a respective location in the three-dimensional environment that does not correspond to the second virtual object (Paragraph 487 of Berliner teaches that some disclosed embodiments may include, while the virtual cursor is displayed on the second virtual plane, receiving a third two-dimensional input via the surface input device. The third two-dimensional input may be reflective of an intent to select a third virtual object located on a third virtual plane corresponding to another physical surface. The third virtual plane may, for example, overlie the other physical surface (e.g., different from the physical surface which the second virtual plane may overlie). In some examples, the third virtual plane (or an extension of the third virtual plane) may traverse the second virtual plane (or an extension of the second virtual plane). In some examples, the third two-dimensional input may be based on a movement associated with the surface input device (e.g., a movement of a computer mouse on a surface, or a movement of a finger of a user on a touchpad). The movement associated with the surface input device may have a direction that, when mapped onto the second virtual plane, may point toward the third virtual object located on the third virtual plane, may point toward the third virtual plane, and/or may point toward a line of intersection between the second and third virtual planes.).
Regarding claim 27, Yan in view of Berliner disclose everything claimed as applied above (see claim 24), in addition, Yan in view of Berliner disclose wherein the second threshold distance is different from the threshold distance (Paragraph 292 of Berliner teaches that in some embodiments, the second threshold for forgoing triggering the functionality associated with the particular virtual object may be greater than the first threshold for highlighting the group of virtual objects in the first virtual region.).
Regarding claim 28, Yan in view of Berliner disclose everything claimed as applied above (see claim 24), in addition, Yan in view of Berliner disclose further comprising while receiving the second input and before the second input corresponds to movement of the first virtual object to the location that is greater than the second threshold distance away from the second virtual object, moving the first virtual object by a first amount in accordance with the second input (Col. 9, Lines 3-9 of Yan teach that it should be noted that while it is described that the distance between the wall space and the virtual representation of the object is determined before the orientation of the virtual representation of the object relative to the orientation of the wall space, these operations may be performed in any order other than what is described herein. Additionally, paragraph 165 of Berliner teaches that for example, in response to first input signals, the virtual object may be moved in a first direction and a first distance on the virtual surface, and in response to second input signals, the virtual object may be moved in a second direction and a second distance on the virtual surface.).
Regarding claim 29, Yan in view of Berliner disclose everything claimed as applied above (see claim 28), in addition, Yan in view of Berliner disclose further comprising while receiving the second input and in response to the second input corresponding to movement of the first virtual object to the location that is greater than the second threshold distance away from the second virtual object, moving the first virtual object with a second amount of velocity, greater than a first amount of velocity, in accordance with the second input, wherein the first virtual object is moved with the first amount of velocity while receiving the second input and before the first virtual object reaches the second threshold distance away from the second virtual object (Paragraph 114 of Berliner teaches that motion sensor 373 may include one or more motion sensors configured to measure motion of input unit 202 or motion of objects in the environment of input unit 202. Specifically, the motion sensors may perform at least one of the following: detect motion of objects in the environment of input unit 202, measure the velocity of objects in the environment of input unit 202, measure the acceleration of objects in the environment of input unit 202, detect the motion of input unit 202, measure the velocity of input unit 202, measure the acceleration of input unit 202, etc. Additionally, paragraph 411 of Berliner teaches that some disclosed embodiments may further include selecting the first display mode when the determined value of the at least one parameter is greater than a threshold. Alternatively, other embodiments may further include selecting the second display mode when the determined value of at least one parameter is less than the threshold. A threshold may refer to a reference or limit value, or level, or a range of reference or limit values or levels. In operation, when a determined value of at least one parameter exceeds the threshold (or is below it, depending on a particular use case), the at least one processor may select a first display mode and, when the determined value of at least one parameter is less than the threshold (or above it, depending on the particular use case), the at least one processor may select a second display mode. The value of the threshold may be predetermined or may be dynamically selected based on various considerations. Some non-limiting examples of the threshold may include a positive displacement of a wearable extended reality appliance (e.g., smart glasses) along the x-axis by ten meters, a velocity of 10 meters per second along the x-axis, an acceleration of 1 meter per second squared along the x-axis, and a positive direction along the x-axis. When the at least one parameter is a positive displacement of the wearable extended reality appliance along the x-axis by two meters, a velocity of 1 meter per second along the x-axis, an acceleration of 0.1 meters per second squared along the x-axis, and a negative direction along the x-axis then the steps select a first display mode. Alternatively, the steps do not select the first display mode.).
Regarding claim 30, Yan in view of Berliner disclose everything claimed as applied above (see claim 19), in addition, Yan in view of Berliner disclose while receiving the second input and before the second input corresponds to movement of the first virtual object to a location that is greater than a second threshold distance away from the second virtual object, changing an orientation of the first virtual object so that it is not oriented based on the second virtual object (Paragraph 143 of Berliner teaches that when using a wearable extended reality appliance, there may be a desire to change a perspective view, such as by zooming-in on an object, or by changing a virtual orientation of an object. Additionally, paragraph 173 of Berliner teaches that changes in the display of the selected virtual object may include changing a perspective position relative to the object, changing a distance from the object, changing an angle of the object, and changing a size of the object. For example, the virtual object may be a tree within a scene of a forest. The user may increase a size of the tree by inputting first input signals through the touch sensor by increasing a distance between their thumb and forefinger. The user may then rotate the tree by inputting second input signals through the touch sensor by moving their thumb and forefinger together in a clockwise or counterclockwise direction.).
Regarding claim 31, Yan in view of Berliner disclose everything claimed as applied above (see claim 30), in addition, Yan in view of Berliner disclose while receiving the second input and in response to the second input corresponding to movement of the first virtual object to the location that is greater than the second threshold distance away from the second virtual object, changing the orientation of the first virtual object with a rate of change towards not being oriented based on the second virtual object (Paragraph 410 of Berliner teaches that acceleration may refer to a rate of change of a velocity of an object with respect to time. Some non-limiting examples of acceleration may be 2 meters per second squared, 5 meters per second squared, or 1 meter per second squared. Direction may refer to a course along which someone or something moves. For example, the at least one parameter may include a positive displacement of a wearable extended reality appliance (e.g., smart glasses) along the x-axis by two meters, a velocity of 1 meter per second along the x-axis, an acceleration of 0.1 meters per second squared along the x-axis, and a positive direction along the x-axis.).
Regarding claim 32, Yan in view of Berliner disclose everything claimed as applied above (see claim 19), in addition, Yan in view of Berliner disclose wherein moving the first virtual object away from the second virtual object by the first distance in accordance with the second input includes changing an appearance of the first virtual object to indicate that the first virtual object is moving away from the second virtual object (Paragraph 82 of Berliner teaches that the rendered visual presentation may change to reflect changes to a status object or changes in the viewing angle of the object, for example, in a way that mimics changes in the appearance of physical objects. Additionally, paragraph 162 of Berliner teaches that when the one or more virtual objects are displayed at variable virtual distances from the wearable extended reality appliance, a user of the wearable extended reality appliance may view each or some of the one or more virtual objects at a different amount of space away from the user. When at least one of the virtual objects is displayed with a size that differs from others of the plurality of virtual objects, a user of the wearable extended reality appliance may view the at least one of the virtual objects as being larger or smaller than the others of the plurality of virtual objects.).
Regarding claim 33, Yan in view of Berliner disclose everything claimed as applied above (see claim 19), in addition, Yan in view of Berliner disclose while moving the first virtual object away from the second virtual object in accordance with the second input, detecting termination of the second input (Paragraph 103 of Berliner teaches that in one example, virtual content communication module 316 may use data from input determination module to identify a trigger (e.g., the trigger may include a gesture of the user) and to transfer content from the virtual display to a physical display (e.g., TV) or to a virtual display of a different user.);
in response to detecting the termination of the second input, in accordance with a determination that a current location of the first virtual object is not within the threshold distance of any virtual object that is a container that is able to contain the first virtual object, displaying a third virtual object at the current location of the first virtual object, wherein the third virtual object contains the first virtual object at the current location of the first virtual object, and the third virtual object was not displayed in the three-dimensional environment prior to detecting the termination of the second input (FIG. 17 and paragraph 217 of Berliner teach that with reference to FIG. 17, a hand gesture 1710, 1712, 1714 may indicate a drag of virtual widget 114E to table 102. At least one processor may detect hand gesture 1710, 1712, 1714, for example, based on image data captured by at least one image sensor having field of view 1310, and may determine, based on hand gesture 1710, 1712, 1714, an indication to dock the specific virtual object (e.g., virtual widget 114E) with the specific physical object (e.g., table 102). The specific virtual object may be docked at a particular point or area on, or approximate to, the specific physical object. The particular point or area may be specified by a user.).
Regarding claim 34, Yan in view of Berliner disclose everything claimed as applied above (see claim 19), in addition, Yan in view of Berliner disclose wherein the three-dimensional environment further includes a third virtual object, wherein the third virtual object is a container that is able to contain the first virtual object (Paragraph 487 of Berliner teaches that in some examples, the movement associated with the surface input device may have a direction that, when mapped onto the third virtual plane, may point toward the third virtual object located on the third virtual plane. The direction of the movement may indicate an intent to select the third virtual object located on the third virtual plane.), the method further comprising:
while moving the first virtual object away from the second virtual object in accordance with the second input, in accordance with a determination that the first virtual object is within the threshold distance of the third virtual object, displaying the first virtual object at a respective location corresponding to the third virtual object, including displaying the first virtual object with an orientation in the three-dimensional environment that is based on an orientation of the third virtual object (Paragraph 158 of Berliner teaches that accordingly, a third perspective of the scene presented may include one or more of a third orientation, a third angle, a third size, a third direction, a third position, a third aspect, a third spatial transformation, or a modification of any other characteristic of the presentation of the scene (or of a part of the scene) in the extended reality environment that is different from the second perspective of the scene. Additionally, paragraph 240 teaches that some disclosed embodiments may involve displaying a plurality of dispersed virtual objects across a plurality of virtual regions. The virtual objects may relate to virtual content including any type of data representation that may be visually displayed to the user, and which the user may interact with, in an extended reality environment via an extended reality appliance. Virtual objects may be considered dispersed if they are not all located in precisely the same location. Lastly, paragraph 366 teaches that in some embodiments, different virtual screens may be moved to different portions of the extended reality environment.).
Regarding claim 39, Yan in view of Berliner disclose everything claimed as applied above (see claim 1), in addition, Yan in view of Berliner disclose the second virtual object includes content at a plurality of different heights relative to a reference plane for the second virtual object (Paragraph 567 of Berliner teaches that in one embodiment, the intraplanar movement may involve moving the virtual object along two orthogonal axes, and wherein a movement along a first axis may include changing dimensions of the virtual object, and a movement along a second axis may exclude changing dimensions of the virtual object. This embodiment may happen when the virtual plane is curved. An orthogonal axis may refer to an axis that is at a right angle to one or more other axes. A non-limiting example of an orthogonal axis may include an x, y, and z axis in a three-dimensional space. Changing dimensions may refer to altering or modifying a measurable extent of a virtual object. Non-limiting examples of dimensions may include length, width, depth, height, radius, angular span, or extent. For example, an intraplanar movement in a curved virtual plane may involve a horizontal translation of a virtual object along the x-axis. Further, the intraplanar movement may also involve a vertical translation of the virtual object along the y-axis. In some examples, the intraplanar movement along both the x-axis and the y-axis may change the dimensions of the virtual object. In other examples, the intraplanar movement along both the x-axis and the y-axis may not change the dimensions of the virtual object. In further examples, only one of the intraplanar movements along both the x-axis and the y-axis may change a dimension of the virtual object.
For example, intraplanar movement along one axis, for example, the y-axis may cause an increase in the height of an animate virtual object, whereas a movement along another axis, for example, the x-axis may not change the dimensions of the animate virtual object.);
the method further comprises, while moving the first virtual object in the three-dimensional environment in accordance with the first input, displaying the first virtual object with a distance relative to the reference plane for the second virtual object that is selected based on a height of content over which the first virtual object is displayed (Paragraph 568 of Berliner teaches that by way of example, FIG. 50D illustrates scene 5000 wherein virtual object 5004 moves by horizontal translation 5095 in the form of intraplanar movement along the x-axis. Also illustrated in FIG. 50D, virtual object 5004 moves by vertical translation 5097 in the form of intraplanar movement along the y-axis. As illustrated in FIG. 50D, the intraplanar movement along the y-axis may cause a change in height dimension 5082 of virtual object 5004, whereas a movement along the x-axis may not cause a change in the dimensions of virtual object 5004.), including:
in accordance with a determination that the first virtual object is over first content with a first height relative to the reference plane, the distance between the first virtual object and the reference plane is a first distance (Paragraph 569 of Berliner teaches that in one example, when the curvature of a first virtual plane is equal to a curvature of a reference virtual plane, the virtual object's height or width may be increased by a certain amount. But, when the curvature of the first virtual plane is greater than the curvature of the reference virtual plane, the virtual object's height or width may be increased by more than the certain amount.);
and in accordance with a determination that the first virtual object is over second content with a second height, different from the first height, relative to the reference plane, the distance between the first virtual object and the reference plane is a second distance, different from the first distance (Paragraph 569 of Berliner teaches that some disclosed embodiments may further include changing the dimensions of the virtual object based on a curvature of the first virtual plane. A curvature may refer to the degree to which a curved surface deviates from a flat plane. Non-limiting examples of changing dimensions of a virtual object based on a curvature may include increasing the height, width, or depth of the virtual object.).
Regarding claim 40, Yan in view of Berliner disclose everything claimed as applied above (see claim 39), in addition, Yan in view of Berliner disclose wherein displaying the first virtual object with a distance relative to the reference plane for the second virtual object that is selected based on a height of content over which the first virtual object is displayed, includes adjusting a distance of the first virtual object from the reference plane based on a height of a first type of content in the second virtual object to a greater degree than adjusting a distance of the first virtual object from the reference plane based on a height of a second type of content in the second virtual object (Paragraph 569 of Berliner teaches that for example, the height of an animate virtual object may be increased by 110% based on the curvature of 10° of the first virtual plane, and the height of the animate virtual object may be increased by 115% based on the curvature of 12° of the first virtual plane.).
Regarding claim 41, the system steps correspond to and are rejected similarly to the method steps of claim 1 (see claim 1 above). In addition, Yan discloses one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors (Col. 29, Lines 25-32 teach that the program module(s), applications, or the like disclosed herein may include one or more software components including, for example, software objects, methods, data structures, or the like. Each such software component may include computer-executable instructions that, responsive to execution, cause at least a portion of the functionality described herein (e.g., one or more operations of the illustrative methods described herein) to be performed).
Regarding claim 42, the non-transitory computer readable storage medium corresponds to and is rejected similarly to the method steps of claim 1 (see claim 1 above) and the system steps of claim 41 (see claim 41 above). In addition, Yan discloses a non-transitory computer readable storage medium storing one or more programs (Col. 26, Lines 45-51 teach that various embodiments may be implemented fully or partially in software and/or firmware. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. Those instructions may then be read and executed by one or more processors to enable performance of the operations described herein.).
Claims 17-18 and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Berliner, as applied to claims 7 and 24 above, and further in view of Schwarz et al. (Pub. No.: US 2018/0286126 A1), hereinafter Schwarz.
Regarding claim 17, Yan in view of Berliner disclose everything claimed as applied above (see claim 7), however, Yan in view of Berliner fail to disclose wherein the threshold distance is based on a respective bounding volume associated with the first virtual object, wherein the respective bounding volume is based on one or more dimensions of the first virtual object.
Schwarz discloses wherein the threshold distance is based on a respective bounding volume associated with the first virtual object, wherein the respective bounding volume is based on one or more dimensions of the first virtual object (FIG. 4 and paragraph 48 teach that with reference to FIG. 4, in some examples a virtual object may be displayed within a virtual bounding box or other virtual container.). Since Yan in view of Berliner teach method steps for determining different threshold distances around virtual objects, including distances in respect to a 3-Dimensional environment, including height, width and depth for potential volume of a virtual object, and Schwarz teaches applying a virtual bounding box around a virtual object to assist in determining a virtual object's volume, it would have been obvious to a person having ordinary skill in the art to have combined the features together so that bounding boxes could be applied to the different virtual objects in order to help determine each object's respective bounding volume.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yan in view of Berliner to incorporate the teachings of Schwarz, so that the combined features together would allow for bounding volume boxes to be incorporated in helping to determine a virtual object’s volume and dimensions, which would help improve the overall speed, computational efficiency and accuracy of the overall 3-D modeling and positioning of the virtual object within the 3-D environment.
Regarding claim 18, Yan in view of Berliner and Schwarz disclose everything claimed as applied above (see claim 17), in addition, Yan in view of Berliner and Schwarz disclose wherein in accordance with a determination that the first virtual object is a two-dimensional object, the respective bounding volume has a depth dimension that is independent of a size of the first virtual object (Paragraph 24 of Schwarz teaches that the virtual content may include one or more visual elements in the form of virtual objects 30, such as three-dimensional (3D) holographic objects and two-dimensional (2D) virtual images, that are generated and displayed to appear located within a real world physical environment 32 viewed through the device. Additionally, paragraph 27 of Schwarz teaches that the HMD device 18 may include one or more sensors and related systems that receive physical environment data from the physical environment 32. For example, the HMD device 18 may include a depth sensor system 38 that generates depth image data. The depth sensor system 38 may include one or more depth cameras that capture image data 26 from the physical environment 32. Lastly, paragraph 57 of Schwarz teaches that with reference again to FIG. 4, where a bounding box 400 is displayed with the motorcycle 244, a distance from one or more sides of the bounding box to the surface may be determined. In one example, a distance from the center of each side of the bounding box 400 to the surface may be determined.).
Regarding claim 25, Yan in view of Berliner disclose everything claimed as applied above (see claim 24), however, Yan in view of Berliner fail to disclose wherein the second threshold distance is based on a respective bounding volume associated with the first virtual object, wherein the respective bounding volume is based on one or more dimensions of the first virtual object.
Schwarz discloses wherein the second threshold distance is based on a respective bounding volume associated with the first virtual object, wherein the respective bounding volume is based on one or more dimensions of the first virtual object (Paragraph 57 teaches that with reference again to FIG. 4, where a bounding box 400 is displayed with the motorcycle 244, a distance from one or more sides of the bounding box to the surface may be determined. In one example, a distance from the center of each side of the bounding box 400 to the surface may be determined. The minimum of these 6 distances may be compared to the predetermined distance 500 to determine if the motorcycle 244 is within the predetermined distance.). Since Yan in view of Berliner teach method steps for determining different threshold distances around virtual objects, including distances in respect to a 3-Dimensional environment, including height, width and depth for potential volume of a virtual object, and Schwarz teaches applying a virtual bounding box around a virtual object to assist in determining a virtual object's volume, it would have been obvious to a person having ordinary skill in the art to have combined the features together so that bounding boxes could be applied to the different virtual objects in order to help determine each object's respective bounding volume.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yan in view of Berliner to incorporate the teachings of Schwarz, so that the combined features together would allow for bounding volume boxes to be incorporated in helping to determine a virtual object’s volume and dimensions, which would help improve the overall speed, computational efficiency and accuracy of the overall 3-D modeling and positioning of the virtual object within the 3-D environment.
Regarding claim 26, Yan in view of Berliner and Schwarz discloses everything claimed as applied above (see claim 25), in addition, Yan in view of Berliner and Schwarz disclose wherein in accordance with a determination that the first virtual object is a two-dimensional object, the respective bounding volume associated with the first virtual object has a depth dimension that is independent of a size of the first virtual object.
Claims 35-38 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Berliner, as applied to claim 1 above, and further in view of Dascola et al. (Pub. No.: US 2019/0228589 A1), hereinafter Dascola.
Regarding claim 35, Yan in view of Berliner disclose everything claimed as applied above (see claim 1), in addition, Yan in view of Berliner disclose in response to detecting the first input, and in accordance with the determination that the first input corresponds to movement of the first virtual object to the third location that is within the threshold distance of the second location of the second virtual object, displaying, via the display generation component, a virtual shadow (FIG. 22 and paragraph 270 of Berliner teach that in the example illustrated in FIG. 22, the visual appearance of each virtual object of the group of virtual objects 2212 in the first virtual region 2214 are shown to be emphasized, or otherwise differentiated, from the second subset of dispersed virtual objects 2216 in the second virtual region 2218 due to an added shadow around the frame of the virtual objects 2212.). However, Yan in view of Berliner fail to disclose a virtual shadow of the first virtual object.
Dascola discloses a virtual shadow of the first virtual object (FIG. 11D and paragraph 363 teach that in FIG. 11D, a shadow 11006 of virtual object 11002 is displayed.). Since Yan in view of Berliner teach applying shadow effects to groupings of virtual objects to help distinguish them from other virtual objects and Dascola teaches applying a shadow to an individual first virtual object, it would have been obvious to a person having ordinary skill in the art to have combined the features together so that, in addition to adding shadow effects to the different groupings of virtual objects, a shadow effect could be added individually to a first virtual object if needed.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yan in view of Berliner to incorporate the teachings of Dascola, so that the combined features together would allow for (according to paragraph 524 of Dascola) visual feedback to the user by incorporating a shadow on an individual virtual object. Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient (e.g., by helping the user determine the proper direction for a swipe input to cause rotation about the first axis or the second axis), which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently.
Regarding claim 36, Yan in view of Berliner and Dascola disclose everything claimed as applied above (see claim 35), in addition, Yan in view of Berliner and Dascola disclose wherein displaying the virtual shadow of the first virtual object includes:
in accordance with a determination that the third location is a first distance away from the second location, displaying the virtual shadow of the first virtual object with a first visual property having a first value (Paragraph 524 of Dascola teaches that in some embodiments, the shadow shifts and changes shape to indicate a current orientation of the virtual object relative to an invisible ground plane in the staging user interface that supports a predefined bottom side of the virtual object. In some embodiments, the surface of the virtual three-dimensional object appears to reflects light from a simulated light source located in a predefined direction in a virtual space represented in the staging user interface. Varying a shape of a shadow in accordance with rotation of a virtual object provides visual feedback (e.g., indicating a virtual plane (e.g., a stage of a staging view) relative to which the virtual object is oriented).),
and in accordance with a determination that the third location is a second distance away from the second location, different from the first distance, displaying the virtual shadow of the first virtual object with the first visual property having a second value, different from the first value (Paragraph 540 of Dascola teaches that in some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., showing different sizes, positions, perspectives, reflections, shadows, etc.) in accordance with the values of the respective movement parameter of the input.).
Regarding claim 37, Yan in view of Berliner and Dascola disclose everything claimed as applied above (see claim 35), in addition, Yan in view of Berliner and Dascola disclose wherein displaying the virtual shadow of the first virtual object in response to moving the first virtual object to within the threshold distance of the second location includes displaying an animation of the virtual shadow appearing over a time period, including changing a visual property of the virtual shadow over the time period (Paragraph 385 of Dascola teaches that FIGS. 13B-13C illustrate input to rotate virtual object 11002 about the y-axis indicated in FIG. 13A. In FIG. 13B, an input by contact 13002 is detected at a location that corresponds to virtual object 11002. The input moves by a distance d.sub.1 along a path indicated by arrow 13004. As the input moves along the path, the virtual object 11002 rotates about the y-axis (e.g., by 35 degrees) to a position indicated in FIG. 13B. In the staging user interface 6010, shadow 13006 that corresponds to virtual object 11002 is displayed. From FIG. 13B to FIG. 13C, shadow 13006 changes in accordance with the changed position of virtual object 11002. Additionally, paragraph 549 of Dascola teaches that in some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., showing different sizes, positions, perspectives, reflections, shadows, etc.) in accordance with the values of the respective movement parameter of the input.).
Regarding claim 38, Yan in view of Berliner and Dascola disclose everything claimed as applied above (see claim 35), in addition, Yan in view of Berliner and Dascola disclose wherein displaying the virtual shadow of the first virtual object in response to moving the first virtual object outside of the threshold distance of the second location includes displaying an animation of the virtual shadow disappearing over a time period, including changing a visual property of the virtual shadow over the time period (Paragraph 525 of Dascola teaches that in some embodiments, while rotating the virtual three-dimensional object in the first user interface region (18024): in accordance with a determination that the virtual three-dimensional object is displayed with a second perspective that reveals a predefined bottom of the virtual three-dimensional object, the device forgoes display of the representation of the shadow with the representation of the second perspective of the virtual three-dimensional object. For example, the device does not display the shadow of the virtual object when the virtual object is being viewed from below (e.g., as described with regard to FIGS. 13G-13I). Forgoing display of a shadow of a virtual object in accordance with a determination that the bottom of the virtual object is displayed provides visual feedback (e.g., indicating that the object has rotated to a position that no longer corresponds to a virtual plane (e.g., a stage of a staging view)). Providing improved visual feedback to the user enhances the operability of the device and makes the user-device interface more efficient, which, additionally, reduces power usage and improves battery life of the device by enabling the user to use the device more quickly and efficiently. 
Additionally, paragraph 549 of Dascola teaches that in some embodiments, the appearance of the user interface object is changed dynamically and continuously (e.g., showing different sizes, positions, perspectives, reflections, shadows, etc.) in accordance with the values of the respective movement parameter of the input.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Syed (Pub. No.: US 2021/0375049 A1) teaches methods and systems for anchoring objects in augmented or virtual reality.
Ngo et al. (U.S. Patent No.: 10,481,755 B1) teaches systems and methods for presenting virtual content in an interactive space.
Fujimaki (Pub. No: US 2019/0285895 A1) teaches a head-mounted display apparatus configured to display distance-specific images and objects.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to George Renze whose telephone number is (703)756-5811. The examiner can normally be reached Monday-Friday 9:00am - 6:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/G.R./Examiner, Art Unit 2613
/XIAO M WU/Supervisory Patent Examiner, Art Unit 2613