DETAILED ACTION
This Office Action is sent in response to Applicant's Response received 02/13/2026 for Application No. 17934104. Claims 30-48 and 166-169 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/13/2026 was filed before the mailing date of a final action, in accordance with one of the provisions set forth in 37 CFR 1.97. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS is being considered by the examiner.
Response to Arguments
Applicant's summary of the telephonic interview conducted 01/28/2026 is acknowledged.
In view of Applicant's amendments, the objection to claim 169 has been withdrawn.
Applicant's arguments with respect to the 103 rejection of claim 30 have been fully considered but are not persuasive in view of the new and updated citations to Ramsby in view of Agarwal applied in the current rejection of record to address the newly amended limitations.
In response to Applicant's argument that the references fail to show certain features of Applicant’s invention, it is noted that the features upon which Applicant relies (i.e., where a particular location relative to the first object is "displayed underneath object 906a (e.g., first object) and/or slightly in front of object 906a" [pg. 13:2], where a first control user interface does not "exist independently of the location of the chair" and may not "be applied to any virtual object the user chooses to target" [pgs. 14:4-15:1]) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). In this case, the Office Action cites the combination of the augmented reality object and user interface control element displayed in an augmented reality environment as disclosed in Ramsby with the display of an input element remaining at a particular location on a displayed virtual object as disclosed in Agarwal to teach the newly amended limitation "a first control user interface displayed at a particular location relative to the first object".
With respect to Applicant's assertion that "arguments based on measurement of the drawing features are of little value" [pgs. 15:4-16:1], "the description of the article pictured can be relied on, in combination with the drawings, for what they would reasonably teach one of ordinary skill in the art." In re Wright, 569 F.2d 1124, 1127-28, 193 USPQ 332, 335-36 (CCPA 1977) [see MPEP 2125(II)]. In this case, Agarwal discloses an input element selecting a virtual object and moving both the input element and virtual object back and away from a user's view of a user interface displaying the input element and virtual object in paragraphs 0039, 0041, and 0050 and Figures 4-7. By describing that an input element remains on a virtual object while moving the virtual object [para 0039, 0044] and showing the continued selection in Figures 4-7, the drawings in combination with the description of Agarwal disclose "a first control user interface displayed at a particular location relative to the first object".
Additionally, the claim does not require specific dimensions or scaling with respect to the first object and the first control user interface, only that the "relative size" of the two elements as compared to each other be "different" before and after the movement away from the respective viewpoint. In this case, Agarwal discloses an input element selecting a virtual object before a movement input as described in paragraph 0039 and Figure 4 and an input element remaining on the virtual object during a movement input as described in paragraphs 0041-0042 and 0044 and Figures 5-7. Based on the visual difference in the displayed input element overlapping a relatively larger amount of the displayed virtual object after moving the virtual object as shown in Figure 7 when compared to the relatively smaller amount of the displayed virtual object being overlapped before moving the virtual object as shown in Figures 4-6, one of ordinary skill in the art would reasonably conclude that Agarwal discloses "a relative size of the first object as compared to the first control user interface is different than a relative size of the first object as compared to the first control user interface before moving the first object and the first control user interface away from the respective viewpoint".
Claim 30 remains rejected under Ramsby in view of Agarwal.
Claims 47 and 48 recite similar limitations to those recited in claim 30 and remain rejected on a similar basis as claim 30, as stated above.
Dependent claims 31-46 and 166-169 remain rejected at least based on their dependence from independent claim 30.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 30-35, 44-45, 47-48, 166-167, and 169 are rejected under 35 U.S.C. 103 as being unpatentable over Ramsby et al. (US 20150254905 A1) in view of Agarwal et al. (US 20220221976 A1).
As to claim 30, Ramsby discloses a method comprising: at an electronic device in communication with a display generation component and one or more input devices [Fig. 11, para 0075, 0084-0085, system includes display and input device]:
displaying, via the display generation component, a three-dimensional environment that includes a first object at a first location in the three-dimensional environment and a first control user interface associated with the first object and displayed at a particular location …, wherein the first object has a first size in the three-dimensional environment and occupies a first amount of a field of view from a respective viewpoint, and wherein the first control user interface has a first control size in the three-dimensional environment [Fig. 3, para 0032-0033, 0038-0039, 0045-0046, display displays augmented reality environment with x, y, z dimensions (read: three-dimensional) including augmented reality object (read: first object) at location (read: first location) with first apparent size (read: first size) and first display size comprising proportion (read: first amount) of display viewed (read: field of view) by user (read: respective viewpoint) and user interface control element (read: first control user interface) with display size (read: first control size) within (read: particular location) environment including augmented reality object];
while displaying the three-dimensional environment that includes the first object at the first location in the three-dimensional environment and the first control user interface associated with and displayed at the particular location …, receiving, via the one or more input devices, a first input corresponding to a request to move the first object away from the first location in the three- dimensional environment [Figs. 3, 5, para 0039-0040, 0045-0047, 0085, determine user command (read: first input) by input device to set depth of displayed object in environment including object and control element within environment, where display depth is increased from (read: move away from) previous display depth to user]; and
in response to receiving the first input: in accordance with a determination that the first input corresponds to a request to move the first object away from the respective viewpoint [Fig. 3, para 0039-0040, 0046-0048, determine user command to set object at depth increased from previous depth viewed by user]:
moving the first object … away from the respective viewpoint, including moving the first object from the first location to a second location in the three-dimensional environment in accordance with the first input, wherein the second location is further than the first location from the respective viewpoint … [Fig. 3, para 0021-0022, 0033-0034, 0038-0040, increase object depth to user, where increasing depth sets object location (read: second location) away from (read: further) previous location in environment]; and
scaling the first object such that when the first object is located at the second location, the first object has a second size, larger than the first size, in the three-dimensional environment and occupies a second amount of the field of view from the respective viewpoint … [Fig. 3, para 0039-0040, 0047-0048, 0052, display object at increased depth at location and with apparent size in environment according to scaling function, where scaling function maintaining object display size (read: second amount) on display viewed by user displays object with an apparent size (read: second size) increased from previous apparent size]; and
a relative size of the first object as compared to the first control user interface is different than a relative size of the first object as compared to the first control user interface … [Figs. 3, 7-8, para 0046-0048, 0052, 0054, display object with apparent size and control element with display size according to different scaling functions].
However, Ramsby does not specifically disclose a first control user interface displayed at a particular location relative to the first object; moving the first object and the first control user interface away from the respective viewpoint, including wherein the first control user interface is displayed at the particular location relative to the first object; wherein the second amount is smaller than the first amount; and a relative size of the first object as compared to the first control user interface is different than a relative size of the first object as compared to the first control user interface before moving the first object and the first control user interface away from the respective viewpoint.
Agarwal discloses:
a first control user interface associated with the first object and displayed at a particular location relative to the first object [Fig. 4, para 0039, display input element (read: first control user interface) at position on virtual object (read: first object)];
the first control user interface associated with the first object and displayed at a particular location relative to the first object [Fig. 4, para 0039, display input element at position on virtual object];
moving the first object and the first control user interface away from the respective viewpoint, including moving the first object from the first location to a second location in the three-dimensional environment in accordance with the first input, wherein the first control user interface is displayed at the particular location relative to the first object [Figs. 5-7, para 0041-0044, move virtual object and input element back as viewed by user (read: respective viewpoint) with input element remaining in same position on virtual object];
wherein the second amount is smaller than the first amount [Figs. 5-7, para 0041-0044, move virtual object back, note Figure 7 shows virtual object smaller than initial position]; and
a relative size of the first object as compared to the first control user interface is different than a relative size of the first object as compared to the first control user interface before moving the first object and the first control user interface away from the respective viewpoint [Figs. 4-7, para 0039-0044, display input element on virtual object during initial selection and display input element remaining at same position with respect to virtual object after moving virtual object and input element back toward the wall, note relative amount of input element overlapping virtual object as shown in Figure 7 has increased and is different than relative amount of input element overlapping virtual object as shown in Figure 4].
Ramsby and Agarwal are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the displayed first control user interface, first object, moving the first object, and a relative size of the first object as compared to the first control user interface as disclosed by Ramsby with a first control user interface displayed at a particular location relative to the first object, moving the first object and the first control user interface, and a relative size of the first object as compared to the first control user interface before moving the first object and the first control user interface as disclosed by Agarwal with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby as described above to allow ease of object viewing and interaction [Ramsby, para 0061] and help align virtual objects [Agarwal, para 0016, 0042].
As to claim 31, Ramsby discloses the method of claim 30, further comprising: … in accordance with the determination that the first input corresponds to the request to move the first object away from the respective viewpoint, continuously scaling the first object to increasing sizes as the first object moves further from the respective viewpoint [Figs. 3, 5, 6B, para 0039-0040, 0046-0048, 0052, determine object apparent size according to scaling function and set object depth, where scaling function maintaining object display size includes linearly increasing apparent size with linearly increasing depth].
However, Ramsby does not specifically disclose while receiving the first input and in accordance with the determination that the first input corresponds to the request to move the first object…, continuously scaling the first object.
Agarwal discloses while receiving the first input and in accordance with the determination that the first input corresponds to the request to move the first object, continuously scaling the first object [Figs. 6-7, para 0042-0044, continuing moving input element to move virtual object continuously moves virtual object back].
Ramsby and Agarwal are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify continuously scaling the first object in response to an input to move the object as disclosed by Ramsby with continuously scaling an object while receiving an input and in response to the input to move the object as disclosed by Agarwal with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby as described above to help align virtual objects [Agarwal, para 0016, 0042].
As to claim 32, Ramsby discloses the method of claim 30, wherein the first object is an object of a first type [Fig. 3, para 0038, 0045-0046, object includes object type], and the three-dimensional environment further includes a second object that is an object of a second type, different from the first type [Figs. 4-5, para 0041, 0045-0046, display augmented reality object (read: second object) with different object type and different scaling function], the method further comprising:
while displaying the three-dimensional environment that includes the second object at a third location in the three-dimensional environment, wherein the second object has a third size in the three-dimensional environment and occupies a third amount of the field of view from the respective viewpoint [Fig. 4, para 0032-0033, 0041-0042, 0045, display displays augmented reality object at third depth at location (read: third location) in environment with third apparent size (read: third size) and third display size (read: third amount) comprising portion of display viewed by user], receiving, via the one or more input devices, a second input corresponding to a request to move the second object away from the third location in the three-dimensional environment [Figs. 4-5, para 0042-0043, 0045-0047, 0085, determine user command (read: second input) by input device to set depth of displayed object, where display depth is increased from (read: move away from) previous display depth to user]; and
in response to receiving the second input and in accordance with a determination that the second input corresponds to a request to move the second object away from the respective viewpoint [Fig. 4, para 0042-0043, 0046-0048, determine user command to set object at depth increased from previous depth viewed by user]: moving the second object away from the respective viewpoint from the third location to a fourth location in the three-dimensional environment in accordance with the second input, wherein the fourth location is further than the third location from the respective viewpoint [Fig. 4, para 0021-0022, 0033-0034, 0041-0043, increase object depth to user, where increasing depth sets object location (read: fourth location) away from (read: further) previous location in environment], without scaling the second object such that when the second object is located at the fourth location, the second object has the third size in the three-dimensional environment and occupies a fourth amount, less than the third amount, of the field of view from the respective viewpoint [Fig. 4, para 0042-0043, 0047-0048, 0051, display object at increased depth at location and apparent size according to scaling function, where apparent size of object at increased depth is equal to apparent object size at previous depth at location and scaling function scales object display size to display size (read: fourth amount) with decreased proportion of display viewed by user, note a same apparent object size falls under the broadest reasonable interpretation of "without scaling the second object" as consistent with Applicant's specification (00211)].
As to claim 33, Ramsby discloses the method of claim 32, wherein:
the second object is displayed with a [] user interface … associated with the second object [Figs. 4, 10, para 0071, display object with line (read: user interface) used to render object];
when the second object is displayed at the third location, the [] user interface is displayed at the third location and has a fourth size in the three-dimensional environment [Figs. 1, 4, 10, para 0041-0042, 0071-0072, display object and line at third depth at near location, where line is displayed with thickness at a display size with an apparent size (read: fourth size) in environment], and
when the second object is displayed at the fourth location, the [] user interface is displayed at the fourth location and has a fifth size, greater than the fourth size, in the three-dimensional environment [Figs. 3, 10, para 0040-0041, 0050, 0071-0072, display object and line at increased depth at farther location in environment, where line is displayed with maintained thickness at a same display size, note scaling function maintaining display size at an increased depth increases an apparent size of an object].
However, Ramsby does not specifically disclose a control user interface for controlling one or more operations associated with the second object; and wherein "the [] user interface" is "the control user interface".
Agarwal discloses a control user interface for controlling one or more operations associated with the second object; and the control user interface [Figs. 4-7, para 0039, display input element (read: control user interface), note the limitation "for controlling one or more operations associated with the second object" is not being given patentable weight as the term "for" recites an intended use and does not require the function to be performed (see MPEP 2111.04); nevertheless, note the input element selects and moves the virtual object].
Ramsby and Agarwal are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the user interface associated with the second object as disclosed by Ramsby with the control user interface controlling an object as disclosed by Agarwal with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby as described above to help align virtual objects [Agarwal, para 0016, 0042].
As to claim 34, Ramsby discloses the method of claim 30, further comprising:
while displaying the three-dimensional environment that includes the first object at the first location in the three-dimensional environment, the first object having the first size in the three-dimensional environment, wherein the respective viewpoint is a first viewpoint [Fig. 3, para 0032-0033, 0038, 0045, display environment including object at location with first apparent size and viewed by user (read: first viewpoint)], detecting movement of a viewpoint of a user from the first viewpoint to a second viewpoint that changes a distance between the viewpoint of the user and the first object [para 0023-0024, 0030, 0061-0062, detect user movement changing perspective relative to object, note user distance to object may change]; and
in response to detecting the movement of the viewpoint from the first viewpoint to the second viewpoint, updating display of the three-dimensional environment to be from the second viewpoint without scaling a size of the first object at the first location in the three-dimensional environment [Fig. 5, para 0023-0024, 0048, 0061-0063, update displayed environment as user moves and changes perspective while maintaining (read: without scaling) object angular size at same object position in environment].
As to claim 35, Ramsby discloses the method of claim 34, wherein the first object is an object of a first type [Fig. 3, para 0038, 0045-0046, object includes object type], and the three-dimensional environment further includes a second object that is an object of a second type, different from the first type [Figs. 4-5, para 0041, 0045-0046, display augmented reality object (read: second object) with different object type and different scaling function], the method further comprising:
while displaying the three-dimensional environment that includes the second object at a third location in the three-dimensional environment, wherein the second object has a third size in the three-dimensional environment and the viewpoint of the user is the first viewpoint [Fig. 4, para 0032-0033, 0041-0042, 0045, display displays augmented reality object at third depth at location (read: third location) in environment with third apparent size (read: third size) viewed by user], detecting movement of the viewpoint from the first viewpoint to the second viewpoint that changes a distance between the viewpoint of the user and the second object [para 0023-0024, 0030, 0061-0062, detect user movement changing perspective relative to object, note user changing perspective includes changing user distance to object]; and
in response to detecting the movement of the viewpoint:
updating display of the three-dimensional environment to be from the second viewpoint [Fig. 5, para 0023-0024, 0048, 0061-0063, update displayed environment as user moves and changes perspective]; and
scaling a size of the second object at the third location to be a fourth size, different from the third size, in the three-dimensional environment [Fig. 5, para 0023-0024, 0048, 0061-0063, scale apparent size of object at same position to increase (read: fourth size) in environment based on user change in perspective and distance].
As to claim 44, Ramsby discloses the method of claim 30, wherein the three-dimensional environment further includes … a third location in the three-dimensional environment [Figs. 6E, 10, para 0056, 0072, far range distance in environment], the method further comprising:
… a request to move the first object through the third location and further from the respective viewpoint than the third location [para 0047-0048, 0056-0057, determine command to display object at depth according to scaling function, where scaling function includes depths in a scaling range (read: through third location) and far range depths (read: further than third location) as distance to user]:
moving the first object away from the respective viewpoint from the first location to the third location … while scaling the first object in the three-dimensional environment based on a distance between the respective viewpoint and the first object [Figs. 5, 6E, para 0047-0048, 0056-0057, display object at increased depth to user in environment according to scaling function, where scaling function scales display size of object with object distance within scaling range of depths]; and
after the first object reaches the third location, maintaining display of the first object … without scaling the first object … [Figs. 5, 6E, para 0047-0048, 0056-0057, display object at depth in environment according to scaling function, where scaling function maintains display size (read: without scaling) of object at depth range farther than scaling range of depths].
However, Ramsby does not specifically disclose a second object at a third location in the three-dimensional environment; while receiving the first input: in accordance with a determination that the first input corresponds to a request to move the first object through the third location and further from the respective viewpoint than the third location: moving the first object away from the respective viewpoint from the first location to the third location in accordance with the first input; and after the first object reaches the third location, maintaining display of the first object at the third location without scaling the first object while continuing to receive the first input.
Agarwal discloses:
a second object at a third location in the three-dimensional environment [Figs. 3-4, para 0039, 0041, 0049, display wall surface (read: second object) at coordinates (read: third location) within environment]; and
while receiving the first input: in accordance with a determination that the first input corresponds to a request to move the first object through the third location and further from the respective viewpoint than the third location [Figs. 4-6, para 0039-0042, detect user input (read: first input) remaining on virtual object (read: first object) while dragging object until intersecting a wall and continuing to drag past (read: further from) wall as viewed by user]:
moving the first object away from the respective viewpoint from the first location to the third location in accordance with the first input [Figs. 4-5, para 0039-0041, move virtual object from floor (read: first location) to wall while user input remains]; and
after the first object reaches the third location, maintaining display of the first object at the third location without scaling the first object while continuing to receive the first input [Figs. 5-6, para 0041-0042, obstruct movement of virtual object at wall while user input continues moving past wall, note obstructing object movement maintains object display at wall and object display size does not change (read: without scaling)].
Ramsby and Agarwal are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify moving of the first object away from the respective viewpoint to different locations and maintaining display of the first object without scaling the first object as disclosed by Ramsby with moving an object to a location of another object, farther away from the location, and maintaining display of the first object without scaling while continuing to receive a move input as disclosed by Agarwal with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby as described above to allow quick object alignment [Agarwal, para 0042].
As to claim 45, Ramsby discloses the method of claim 30, wherein scaling the first object is in accordance with a determination that the second amount of the field of view from the respective viewpoint occupied by the first object at the second size is greater than a threshold amount of the field of view [Figs. 5, 6B, 6E, para 0046-0048, 0052, 0056, display object with apparent size according to scaling function, where scaling function displays object with display size greater than small constant size (read: threshold amount) on display], the method further comprising:
while displaying the first object at a respective size in the three-dimensional environment, wherein the first object occupies a first respective amount of the field of view from the respective viewpoint [Figs. 5, 6B, 6E, para 0045-0046, 0048, 0056, display object with apparent size (read: size) and display size in environment, where display size is image (read: amount) viewed on display by user], receiving, via the one or more input devices, a second input corresponding to a request to move the first object away from the respective viewpoint [Fig. 5, para 0043, 0047, 0085, determine user command (read: second input) by input device to set depth of displayed object, where display depth is increased from (read: move away from) previous display depth to user]; and
in response to receiving the second input: in accordance with a determination that the first respective amount of the field of view from the respective viewpoint is less than the threshold amount of the field of view, moving the first object away from the respective viewpoint in accordance with the second input without scaling a size of the first object in the three-dimensional environment [Fig. 6E, para 0047-0048, 0056, when object display size on display viewed by user reaches small constant size, set display object at increased depth in environment with constant (read: without scaling) display size in environment].
As to claim 47, with Ramsby and Agarwal combined at least for the reasons above, Ramsby discloses an electronic device [Fig. 11, para 0075-0079, system], comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions [Fig. 11, para 0075-0079, system includes processor, storage holding instructions executable by processor] for performing limitations substantially similar to those recited in claim 30, and claim 47 is rejected under a similar rationale.
As to claim 48, Ramsby, in the combination with Agarwal set forth above, discloses a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform a method [Fig. 11, para 0075-0079, storage holds instructions executable by system processor to implement methods] comprising limitations substantially similar to those recited in claim 30, and the claim is rejected under similar rationale.
As to claim 166, Ramsby discloses the method of claim 30, wherein the first input is directed to … move the first object in the three-dimensional environment [para 0045-0047, 0085, determine user command by input device to set depth of displayed object in environment including object].
However, Ramsby does not specifically disclose wherein the first input is directed to the first control user interface to move the first object.
Agarwal discloses wherein the first input is directed to the first control user interface to move the first object [Figs. 4-7, para 0039, 0041-0042, drag input with input element moves virtual object].
Ramsby and Agarwal are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the first input to move the first object as disclosed by Ramsby with a first input directed to a first control user interface to move a first object as disclosed by Agarwal with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby as described above to provide additional user functionality [Agarwal, para 0033].
As to claim 167, Ramsby discloses the method of claim 30.
However, Ramsby does not specifically disclose wherein the first control user interface includes a selectable option that is selectable to cease display of the first object.
Agarwal discloses: wherein the first control user interface includes a selectable option that is selectable to cease display of the first object [Fig. 4, para 0038, graphical user interface tools include icon which may be selected to remove an object from space].
Ramsby and Agarwal are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the first control user interface as disclosed by Ramsby with a control user interface including an option to cease display of a first object as disclosed by Agarwal with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby as described above to provide additional user functionality [Agarwal, para 0033].
As to claim 169, Ramsby discloses the method of claim 30, wherein in response to receiving the first input and in accordance with the determination that the first input corresponds to a request to move the first object away from the respective viewpoint [Fig. 3, para 0039-0040, 0046-0048, determine user command to set object at depth increased from previous depth viewed by user], the first object is scaled in a first manner … [Fig. 3, para 0039-0040, 0047-0048, 0052, display object at increased depth at location and with apparent size in environment according to scaling function].
However, Ramsby does not specifically disclose wherein the first object is scaled in a first manner and the first control user interface is not scaled in the first manner.
Agarwal discloses wherein in response to receiving the first input and in accordance with the determination that the first input corresponds to a request to move the first object away from the respective viewpoint, the first object is scaled in a first manner and the first control user interface is not scaled in the first manner [Figs. 6-7, para 0042-0044, move virtual object as user continuously moves input element moving virtual object, where Figures 6 and 7 show virtual object at a smaller size based on distance to user and input element that does not change size].
Ramsby and Agarwal are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the request to move the first object away including scaling the first object as disclosed by Ramsby with moving a first object including scaling the first object and not scaling a first control user interface as disclosed by Agarwal with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby as described above to help align virtual objects [Agarwal, para 0016, 0042].
Claims 36-43, 46, and 168 are rejected under 35 U.S.C. 103 as being unpatentable over Ramsby and Agarwal in view of Dascola et al. (US 20190228589 A1).
As to claim 36, Ramsby discloses the method of claim 30, further comprising:
while displaying the three-dimensional environment that includes the first object at the first location in the three-dimensional environment [Fig. 3, para 0032-0033, 0038, 0045, display environment including object at location], detecting movement of a viewpoint of a user in the three-dimensional environment from a first viewpoint to a second viewpoint that changes a distance between the viewpoint and the first object [para 0023-0024, 0030, 0061-0062, detect user movement changing viewing perspective (read: viewpoint) of user (read: first viewpoint) relative to object, note user changing perspective includes changing user distance to object];
in response to detecting the movement of the viewpoint, updating display of the three-dimensional environment to be from the second viewpoint without scaling a size of the first object at the first location in the three-dimensional environment [Fig. 5, para 0023-0024, 0048, 0061-0063, update displayed environment as user moves and changes perspective while maintaining (read: without scaling) object angular size at same object position in environment].
However, Ramsby and Agarwal do not specifically disclose while displaying the first object at the first location in the three-dimensional environment from the second viewpoint, receiving, via the one or more input devices, a second input corresponding to a request to move the first object away from the first location in the three-dimensional environment to a third location in the three-dimensional environment that is further from the second viewpoint than the first location; and while detecting the second input and before moving the first object away from the first location, scaling a size of the first object to be a third size, different from the first size, based on a distance between the first object and the second viewpoint when a beginning of the second input is detected.
Dascola discloses:
while displaying the first object at the first location in the three-dimensional environment from the second viewpoint [Figs. 5M, 8D, para 0228-0229, 0286-0287, display chair object (read: first object) at location (read: first location) in space (read: three-dimensional environment) captured in camera view (read: second viewpoint)], receiving, via the one or more input devices, a second input corresponding to a request to move the first object away from the first location in the three-dimensional environment to a third location in the three-dimensional environment that is further from the second viewpoint than the first location [Figs. 5N-5P, 8D, para 0228-0229, 0287-0288, detect contact (read: second input) with screen (read: input device) at chair object to move object from location viewed by camera to further position (read: second location) and further toward table surface location (read: third location)]; and
while detecting the second input and before moving the first object away from the first location [Figs. 5C, 5N-5P, para 0226, 0228-0229, 0287-0288, detect user input including contact at virtual object before movement and subsequent movement to move object further from view of user device], scaling a size of the first object to be a third size, different from the first size, based on a distance between the first object and the second viewpoint when a beginning of the second input is detected [Figs. 5C-5F, 5N-5P, para 0227-0229, 0287-0288, adjust size (read: first size) of virtual object as displayed to size (read: third size) based on distance from object to view of user device camera (read: second viewpoint) as input is maintained on object and before (read: beginning) contact movement].
Ramsby, Agarwal, and Dascola are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the display of the three-dimensional environment including the first object at a location as disclosed by Ramsby and Agarwal with receiving a request to move an object to another location further from a location and scaling an object before moving the object as disclosed by Dascola with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby and Agarwal as described above to enhance device operability [Dascola, para 0287].
As to claim 37, Ramsby discloses the method of claim 30, wherein the three-dimensional environment further includes a second object at a third location in the three-dimensional environment [para 0022-0023, 0063, environment includes surface (read: second object) at position (read: third location) relative to environment], the method further comprising:
in response to receiving the first input:
in accordance with a determination that the first input corresponds to a request to move the first object to a fourth location in the three-dimensional environment, the fourth location a first distance from the respective viewpoint [Fig. 3, para 0024, 0033-0034, 0038-0040, 0047, determine user command to set object with depth (read: first distance) to user at location (read: fourth location) in environment], displaying the first object at the fourth location in the three-dimensional environment, wherein the first object has a third size in the three-dimensional environment [Fig. 3, para 0039-0040, 0047-0048, 0052, display object at location with apparent size (read: third size) in environment]; and
in accordance with a determination that the first input satisfies one or more criteria, including a respective criterion that is satisfied when the first input corresponds to a request to move the first object to the third location in the three-dimensional environment [para 0022-0024, 0063, determine user command to set (read: criterion) object upon surface at position relative to environment], … displaying the first object at the third location in the three-dimensional environment, wherein the first object has a fourth size, different from the third size, in the three-dimensional environment [para 0022-0024, 0063, display object against surface at position relative to environment with target size (read: fourth size)].
However, Ramsby and Agarwal do not specifically disclose a request to move the first object to the third location in the three-dimensional environment, the third location the first distance from the respective viewpoint, and wherein the first object has a fourth size, different from the third size, in the three-dimensional environment.
Dascola discloses:
a request to move the first object to the third location in the three-dimensional environment, the third location the first distance from the respective viewpoint [Figs. 5AJ-5AM, 8D, para 0237, 0288-0289, 0292, detect user contact moving lamp object (read: first object) to table surface location (read: third location) at a distance (read: first distance) from camera (read: respective viewpoint)],
displaying the first object at the third location in the three-dimensional environment, wherein the first object has a fourth size, different from the third size, in the three-dimensional environment [Figs. 5AL-5AM, 8D, para 0237, 0288-0289, display object at surface location in space with object size (read: fourth size) different from object size (read: third size) prior to contact releasing object over table surface location].
Ramsby, Agarwal, and Dascola are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the request to move an object to a location with a size with respect to another object at a location and distance from the respective viewpoint as disclosed by Ramsby and Agarwal with a request to move an object to a location of another object with a location and distance from a viewpoint as disclosed by Dascola with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby and Agarwal as described above to enhance device operability [Dascola, para 0288-0289].
As to claim 38, Ramsby discloses the method of claim 37, wherein the fourth size of the first object is based on a size of the second object [para 0022-0024, 0063, display object at target size based upon target surface].
As to claim 39, Ramsby discloses the method of claim 38.
However, Ramsby and Agarwal do not specifically disclose further comprising: while the first object is at the third location in the three-dimensional environment and has the fourth size that is based on the size of the second object, receiving, via the one or more input devices, a second input corresponding to a request to move the first object away from the third location in the three-dimensional environment; and in response to receiving the second input, displaying the first object at a fifth size, wherein the fifth size is not based on the size of the second object.
Dascola discloses further comprising:
while the first object is at the third location in the three-dimensional environment and has the fourth size that is based on the size of the second object, receiving, via the one or more input devices, a second input corresponding to a request to move the first object away from the third location in the three-dimensional environment [Figs. 5AM-5AP, 8D, para 0237-0238, 0288-0289, detect contact with touch screen (read: input device) at lamp object with size on table surface location with size in space, where contact drags lamp object toward edge (read: away from third location) in space]; and
in response to receiving the second input, displaying the first object at a fifth size, wherein the fifth size is not based on the size of the second object [Figs. 5AM-5AP, 8D, para 0237-0238, 0288-0289, display object with size (read: fifth size) during move, where object size during movement is in response to contact drag and not table surface].
Ramsby, Agarwal, and Dascola are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the first object and the third location in the three-dimensional environment as disclosed by Ramsby and Agarwal with a request to move an object away from a location of another object in an environment and displaying the object at another size not based on the other object as disclosed by Dascola with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby and Agarwal as described above to enhance device operability [Dascola, para 0288-0289].
As to claim 40, Ramsby discloses the method of claim 37, wherein the respective criterion is satisfied when the first input corresponds to a request to move the first object to any location within a volume in the three-dimensional environment that includes the third location [para 0022-0024, 0063, determine user command to set object upon surface at position in space (read: volume) relative to environment including surface].
As to claim 41, Ramsby discloses the method of claim 37.
However, Ramsby and Agarwal do not specifically disclose further comprising: while receiving the first input, and in accordance with a determination that the first object has moved to the third location in accordance with the first input and that the one or more criteria are satisfied, changing an appearance of the first object to indicate that the second object is a valid drop target for the first object.
Dascola discloses further comprising: while receiving the first input, and in accordance with a determination that the first object has moved to the third location in accordance with the first input and that the one or more criteria are satisfied, changing an appearance of the first object to indicate that the second object is a valid drop target for the first object [Figs. 5AJ-5AM, 8D, para 0237, 0288-0289, 0292, change visual appearance of surface to indicate a drop-off location (read: valid drop target) of object as contact moves object over surface including over table surface location, note broadest reasonable interpretation of appearance of the first object includes any outward aspect related to the object].
Ramsby, Agarwal, and Dascola are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the first object as disclosed by Ramsby and Agarwal with changing an appearance of an object indicating another object as a valid drop target while receiving input and determining movement of the object and one or more satisfied criteria as disclosed by Dascola with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby and Agarwal as described above to enhance device operability [Dascola, para 0288-0289].
As to claim 42, Ramsby discloses the method of claim 37.
However, Ramsby and Agarwal do not specifically disclose wherein the one or more criteria include a criterion that is satisfied when the second object is a valid drop target for the first object, and not satisfied when the second object is not a valid drop target for the first object, the method further comprising: in response to receiving the first input: in accordance with a determination that the respective criterion is satisfied but the first input does not satisfy the one or more criteria because the second object is not a valid drop target for the first object, displaying the first object at the fourth location in the three-dimensional environment, wherein the first object has the third size in the three-dimensional environment.
Dascola discloses:
wherein the one or more criteria include a criterion that is satisfied when the second object is a valid drop target for the first object, and not satisfied when the second object is not a valid drop target for the first object [Figs. 5AI-5AL, 8D, para 0237, 0287-0289, 0292, determine contact moving object to location over table surface or determine contact moving object to location over floor surface and not table surface], the method further comprising:
in response to receiving the first input: in accordance with a determination that the respective criterion is satisfied but the first input does not satisfy the one or more criteria because the second object is not a valid drop target for the first object [Figs. 5AI-5AJ, 8D, para 0237, 0287-0289, 0292, determine contact moving object to location over floor surface], displaying the first object at the fourth location in the three-dimensional environment, wherein the first object has the third size in the three-dimensional environment [Figs. 5AI-5AJ, 8D, para 0237, 0287-0289, 0292, display object over floor surface at path location (read: fourth location) in environment with object size (read: third size) in environment].
Ramsby, Agarwal, and Dascola are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify displaying the first object with a size as disclosed by Ramsby and Agarwal with criteria including criterion satisfying and not satisfying another object as a valid drop target for an object and displaying an object in an environment when another object is not a valid drop target for the object as disclosed by Dascola with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby and Agarwal as described above to enhance device operability [Dascola, para 0288-0289].
As to claim 43, Ramsby discloses the method of claim 37, further comprising: in response to receiving the first input: in accordance with the determination that the first input satisfies the one or more criteria, updating an orientation of the first object relative to the respective viewpoint based on an orientation of the second object relative to the respective viewpoint [para 0022-0024, 0063, continuously update object size (read: orientation, note broadest reasonable interpretation of orientation includes any arrangement) as viewed by user with object upon surface as viewed by user in response to user command to set object upon surface].
As to claim 46, Ramsby discloses the method of claim 30, wherein the first input corresponds to the request to move the first object away from the respective viewpoint [Fig. 3, para 0039-0040, 0046-0048, determine user command to set object at depth increased from previous depth viewed by user], the method further comprising:
in response to receiving a first portion of the first input … [Fig. 3, para 0039-0040, 0046-0048, determine user command to set object at depth increased from previous depth viewed by user]: in accordance with a determination that the first size of the first object satisfies one or more criteria, including a criterion that is satisfied when the first size is not based on a current distance between the first object and the respective viewpoint [Figs. 5, 6B-6C, para 0046-0048, 0052-0053, determine object as classified into object types with scaling functions (read: criteria), where object as determined object type with scaling function (read: criterion) includes determining object at apparent size regardless of (read: does not correspond) object depth (read: current distance) to user], scaling the first object to have a third size … that is based on the current distance between the first object and the respective viewpoint [Figs. 5, 6C, para 0046-0048, 0053, scaling function displays object with apparent size scaled as function of object depth to user once object reaches threshold range].
However, Ramsby and Agarwal do not specifically disclose in response to receiving a first portion of the first input and before moving the first object away from the respective viewpoint: … scaling the first object to have a third size, different from the first size, that is based on the current distance between the first object and the respective viewpoint.
Dascola discloses: in response to receiving a first portion of the first input and before moving the first object away from the respective viewpoint [Figs. 5C, 5N-5P, para 0226, 0228-0229, 0287-0288, detect user input including contact at (read: first movement) virtual object before movement and subsequent movement to move object further from view of user device]: scaling the first object to have a third size, different from the first size, that is based on the current distance between the first object and the respective viewpoint [Figs. 5C-5F, 5N-5P, para 0227-0229, 0287-0288, adjust size (read: first size) of virtual object as displayed to size (read: third size) based on distance from object to view of user device camera (read: viewpoint)].
Ramsby, Agarwal, and Dascola are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify scaling the first object in response to first input to move the first object away from the respective viewpoint as disclosed by Ramsby and Agarwal with scaling an object to have another size different from a size in response to receiving a first portion of input and before moving the object as disclosed by Dascola with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby and Agarwal as described above to provide improved user feedback [Dascola, para 0287].
As to claim 168, Ramsby discloses the method of claim 30.
However, Ramsby and Agarwal do not specifically disclose wherein the first control user interface includes a selectable option that is selectable to share the first object with a user other than a user of the electronic device.
Dascola discloses wherein the first control user interface includes a selectable option that is selectable to share the first object with a user other than a user of the electronic device [para 0534, displayed menu (read: first control user interface) includes options to share virtual object with another user or device].
Ramsby, Agarwal, and Dascola are analogous art to the claimed invention being from a similar field of endeavor of extended reality display systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the first control user interface as disclosed by Ramsby and Agarwal with a control user interface including an option to share a first object as disclosed by Dascola with a reasonable expectation of success.
One of ordinary skill in the art would be motivated to modify Ramsby and Agarwal as described above to increase user efficiency [Dascola, para 0534].
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Cho et al. (WO 2021133053 A1) and Paul et al. (US 10762716 B1) generally disclose scaling fixed and dynamic objects in a virtual environment according to object type.
Faaborg et al. (US 20170256096 A1) generally discloses scaling objects according to valid drop targets.
Schwarz et al. (US 20180286126 A1) generally discloses performing different movement functions between a first object and an associated first control user interface element, where the first control user interface element is used to move, cease display of, and share the first object.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINDA HUYNH whose telephone number is (571)272-5240. The examiner can normally be reached M-F between 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LINDA HUYNH/Primary Examiner, Art Unit 2172