DETAILED ACTION
This action is responsive to the filing of 2/23/2024. Claims 1-20 are pending and have been considered below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 8, 10, 13-15, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tang (US 2020/0225830).
Claims 1, 13, and 20: Tang discloses a method of interacting in a virtual scene (par. 16, user manipulation of virtual objects in a virtual reality (VR) or an augmented reality (AR) environment), comprising:
determining an interaction mode associated with a virtual scene (par. 21, a virtual object 48 at least partially within a field of view 50 … determine that one or more of the control points 52 associated with the virtual object 48 are further than a predetermined threshold distance 54 from the user; par. 22, invoke a far interaction mode for the virtual object 48); and
in response to detecting a predetermined gesture (par. 22, in the far interaction mode, the user may be able to perform a predetermined gesture, such as pointing at the virtual object 48, pinching, swiping, and so forth, for example, in order to select the virtual object) associated with a virtual avatar in the virtual scene (the virtual hand of the avatar; FIG. 4A, a virtual ray 60 is shown as originating from a hand of the user. The virtual ray 60 may extend into the VR or AR environment to a predetermined distance. As the user moves her hand, the virtual ray 60 may move as directed by the user's hand), presenting a set of controls associated with the virtual scene in proximity to a hand of the virtual avatar (Figs. 5C, 5F, buttons / context menu near the user's virtual hand), the set of controls being determined at least based on the interaction mode (par. 22, in the far interaction mode the available interactions with the virtual object 48 may be limited, and in some situations may only include selection, movement, resizing, and display of a far context menu; par. 29, The user, therefore, may invoke the near interaction mode as described above and generate display of the scrolling mechanism in proximity to the user's hand.)
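For clarity of the record, the distance-threshold mode selection and near-hand control presentation cited above can be pictured with the following minimal sketch (illustrative only; the threshold value, data structures, control names, and function names are assumptions for exposition, not Tang's implementation):

    # Illustrative sketch only; all names and values are hypothetical, not Tang's code.
    from dataclasses import dataclass
    from enum import Enum, auto
    import math

    class InteractionMode(Enum):
        NEAR = auto()
        FAR = auto()

    @dataclass
    class Vec3:
        x: float
        y: float
        z: float

        def dist(self, other: "Vec3") -> float:
            return math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))

    THRESHOLD = 1.5  # assumed predetermined threshold distance, in meters

    def determine_interaction_mode(control_points: list[Vec3], user_pos: Vec3) -> InteractionMode:
        """Invoke the far mode when any control point exceeds the threshold (cf. Tang pars. 21-22)."""
        if any(p.dist(user_pos) > THRESHOLD for p in control_points):
            return InteractionMode.FAR
        return InteractionMode.NEAR

    def controls_for(mode: InteractionMode) -> list[str]:
        """Far mode limits the available interactions (cf. Tang par. 22)."""
        if mode is InteractionMode.FAR:
            return ["select", "move", "resize", "far_context_menu"]
        return ["select", "move", "resize", "scroll", "rotate", "near_context_menu"]

    def on_gesture(gesture: str, mode: InteractionMode, hand_pos: Vec3):
        """On a predetermined gesture, present the mode-dependent controls near the hand."""
        if gesture in {"pinch", "point", "swipe"}:
            return {"anchor": hand_pos, "controls": controls_for(mode)}
        return None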
Claims 2 and 14: Tang discloses the method of claim 1, wherein determining the interaction mode associated with the virtual scene comprises: determining the interaction mode associated with the virtual scene based on scene information of the virtual scene (par. 21, a virtual object 48 at least partially within a field of view 50 … determine that one or more of the control points 52 associated with the virtual object 48 are further than a predetermined threshold distance 54 from the user; par. 22, invoke a far interaction mode for the virtual object 48).
Claims 3 and 15: Tang discloses the method of claim 2, wherein determining the interaction mode associated with the virtual scene based on the scene information of the virtual scene comprises: determining a target interaction mode corresponding to the scene information from a set of predetermined interaction modes associated with the virtual scene (Abstract, The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.)
Claim 8: Tang discloses the method of claim 1, wherein the set of controls are only visible to a user corresponding to the virtual avatar (Fig. 2; Fig. 5A-5F.)
Claim 10: Tang discloses the method of claim 1, further comprising: in response to detecting the hand of the virtual avatar is directed toward a predetermined direction (par. 26, in the far interaction mode prior to receiving the trigger input 56 from the user, generate a virtual ray 60 from a hand of the user), presenting an entrance control; and detecting the predetermined gesture for the entrance control (par. 27, in response to the virtual ray intersecting the virtual object 48 as a result of movement of the hand of the user, the processor 12 may be configured to generate a virtual handle 74.)
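For clarity of the record, the hand-originated ray and entrance-control behavior cited above can be pictured with the following minimal sketch (illustrative only; the ray length, spherical target, and function names are assumptions, not Tang's geometry):

    # Illustrative sketch only; names and geometry are hypothetical, not Tang's implementation.
    import numpy as np

    RAY_LENGTH = 5.0  # assumed predetermined ray distance

    def ray_hits_sphere(origin, direction, center, radius=0.25):
        """Return True if a ray of length RAY_LENGTH from the hand intersects a spherical target."""
        d = np.asarray(direction, dtype=float)
        d = d / np.linalg.norm(d)
        oc = np.asarray(center, dtype=float) - np.asarray(origin, dtype=float)
        t = float(np.dot(oc, d))  # projection of the target onto the ray
        if t < 0.0 or t > RAY_LENGTH:
            return False
        closest = np.asarray(origin, dtype=float) + t * d
        return float(np.linalg.norm(np.asarray(center, dtype=float) - closest)) <= radius

    def update(hand_origin, hand_direction, object_center, gesture):
        """Present an entrance control when the hand's ray reaches the object; detect the gesture for it."""
        entrance_visible = ray_hits_sphere(hand_origin, hand_direction, object_center)
        activated = entrance_visible and gesture == "pinch"
        return {"entrance_control_visible": entrance_visible, "activated": activated}

    # Example: hand pointing along +z toward an object 2 m away.
    print(update((0, 0, 0), (0, 0, 1), (0, 0, 2), "pinch"))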
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4-5 and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Tang in view of Yang (US 2025/0030816).
Claims 4 and 16: Tang discloses the method of claim 1. However, Tang does not explicitly disclose: wherein the virtual scene comprises a virtual meeting scene and the interaction mode indicates a meeting mode of the virtual meeting scene.
Yang discloses a similar method for modes in virtual meetings, including:
wherein the virtual scene comprises a virtual meeting scene and the interaction mode indicates a meeting mode of the virtual meeting scene (par. 46, side-by-side conference mode, the conference system may construct a virtual conference space 400A by laterally combining sub-virtual spaces corresponding to the physical conference spaces where the participants 410 and 420 are located.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tang and Yang so as to tailor the user interface for a conference / meeting.
Claims 5 and 17: Tang and Yang disclose the method of claim 4, wherein determining the meeting mode comprises: determining the meeting mode based on meeting configuration information associated with the virtual meeting scene (Yang par. 61, the conference system may also determine the conference mode according to configuration information associated with the video conference); or determining the meeting mode based on a number of virtual avatars in the virtual meeting scene and a layout of the virtual meeting scene.
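For clarity of the record, determining a meeting mode from configuration information, or from the avatar count and layout, can be pictured with the following minimal sketch (the mode names follow Yang's examples; the selection rules and function names are assumptions for exposition):

    # Illustrative sketch only; the selection logic is assumed, not Yang's implementation.
    def determine_meeting_mode(config: dict | None = None,
                               num_avatars: int = 0,
                               layout: str = "lateral") -> str:
        # Prefer an explicitly configured mode (cf. Yang par. 61).
        if config and config.get("conference_mode"):
            return config["conference_mode"]
        # Otherwise fall back on the number of avatars and the scene layout.
        if num_avatars <= 2:
            return "face_to_face"
        if layout == "lateral":
            return "side_by_side"
        return "round_table"

    print(determine_meeting_mode({"conference_mode": "round_table"}))  # -> round_table
    print(determine_meeting_mode(num_avatars=2))                       # -> face_to_face
    print(determine_meeting_mode(num_avatars=5, layout="circular"))    # -> round_table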
Claims 6-7, 9, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Tang in view of Yang, and further in view of Stovicek (US 2011/0202879).
Claims 6 and 18: Tang and Yang disclose the method of claim 4. However, Tang and Yang do not explicitly disclose: further comprising: obtaining mode preference information, the mode preference information indicating a frequency of a corresponding control used in different meeting modes; and determining the set of controls based on the mode preference information and the meeting mode.
Stovicek discloses a similar method for context menus in various contexts, including: obtaining mode preference information, the mode preference information indicating a frequency of a corresponding control used in different meeting modes; and determining the set of controls based on the mode preference information and the meeting mode (par. 62, each graphical context short menu 500 can include menu items that are predefined, programmer preferences, selected or built by the user, the most commonly used commands in the context, or the user's most frequently used commands in the context. Context can mean based on the application, function selected, or screen context.)
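For clarity of the record, frequency-based selection of per-mode controls of the kind Stovicek describes can be pictured with the following minimal sketch (the preference store, control names, and interface are hypothetical, not Stovicek's implementation):

    # Illustrative sketch only; names are hypothetical.
    from collections import Counter, defaultdict

    class ModePreferences:
        """Tracks how often each control is used in each meeting mode (cf. Stovicek par. 62)."""
        def __init__(self):
            self._usage = defaultdict(Counter)

        def record_use(self, mode: str, control: str) -> None:
            self._usage[mode][control] += 1

        def top_controls(self, mode: str, available: list[str], k: int = 4) -> list[str]:
            """Return the k most frequently used controls for the mode, then the remainder."""
            ranked = [c for c, _ in self._usage[mode].most_common() if c in available]
            rest = [c for c in available if c not in ranked]
            return (ranked + rest)[:k]

    prefs = ModePreferences()
    for _ in range(5):
        prefs.record_use("round_table", "mute")
    prefs.record_use("round_table", "share_screen")
    print(prefs.top_controls("round_table", ["mute", "share_screen", "raise_hand", "leave", "chat"]))
    # -> ['mute', 'share_screen', 'raise_hand', 'leave'] (most used first)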
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tang, Yang, and Stovicek so as to bring to the front those commands that would be most likely selected again by the user in that context.
Claims 7 and 19: Tang and Yang disclose the method of claim 4. However, Tang and Yang do not explicitly disclose: wherein the set of controls are further determined based on an identity of a meeting participant corresponding to the virtual avatar.
Stovicek discloses a similar method for context menus in various contexts, including: wherein the set of controls are further determined based on an identity of a meeting participant corresponding to the virtual avatar (Stovicek par. 62, user's most frequently used commands in the context.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tang, Yang, and Stovicek so as to bring to the front those commands that would be most likely selected again by the user in that context.
Claim 9: Tang and Yang disclose the method of claim 4, wherein the meeting mode comprises a first meeting mode (Yang par. 37, support different types of conference modes; e.g. Face-to-Face, Round Table, Side-by-Side) and the set of controls comprise a first set of controls, the method further comprising: determining that the virtual meeting scene is switched from the first meeting mode to a second meeting mode (Yang, par. 62, the conference system initially detects only two participants, starts the face-to-face conference mode, and may automatically switch to the round table conference mode after detecting that a new participant has joined the video conference.)
However, Tang and Yang do not explicitly disclose: in response to detecting the predetermined gesture associated with the virtual avatar, presenting a second set of controls associated with the virtual meeting scene in proximity to the hand of the virtual avatar, the second set of controls being different from the first set of controls.
Stovicek discloses a similar method for context menus in various contexts, including: in response to detecting the predetermined gesture associated with the virtual avatar, presenting a second set of controls associated with the virtual meeting scene in proximity to the hand of the virtual avatar, the second set of controls being different from the first set of controls (Stovicek, par. 59, The graphical context short menu 402 can include menu options based on the context that the menu was selected; par. 62, user's most frequently used commands in the context.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tang, Yang, and Stovicek so as to bring to the front those commands that would be most likely selected again by the user in that context.
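For clarity of the record, switching meeting modes on participant count and presenting a different, mode-dependent control set near the hand can be pictured with the following minimal sketch (the switch rule and control sets are assumptions drawn loosely from Yang par. 62 and Stovicek par. 62, not either reference's code):

    # Illustrative sketch only; all names and rules are hypothetical.
    def select_mode(num_participants: int) -> str:
        # Yang par. 62: start face-to-face with two participants, switch when more join.
        return "face_to_face" if num_participants <= 2 else "round_table"

    CONTROL_SETS = {
        "face_to_face": ["mute", "share_screen", "chat"],
        "round_table":  ["mute", "raise_hand", "next_speaker", "chat"],
    }

    class MeetingScene:
        def __init__(self, num_participants: int):
            self.mode = select_mode(num_participants)

        def on_participant_joined(self, num_participants: int) -> bool:
            """Switch modes when the participant count changes; report whether a switch occurred."""
            new_mode = select_mode(num_participants)
            switched = new_mode != self.mode
            self.mode = new_mode
            return switched

        def on_gesture(self, gesture: str, hand_pos: tuple):
            """Present the current mode's control set near the hand on the predetermined gesture."""
            if gesture == "pinch":
                return {"anchor": hand_pos, "controls": CONTROL_SETS[self.mode]}
            return None

    scene = MeetingScene(num_participants=2)
    print(scene.on_gesture("pinch", (0.1, 1.2, 0.4)))  # first set of controls
    scene.on_participant_joined(3)                      # switches to round_table
    print(scene.on_gesture("pinch", (0.1, 1.2, 0.4)))  # second, different set of controls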
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Tang in view of Dascola (US 2023/0106627).
Claim 11: Tang discloses the method of claim 1, further comprising: in response to detecting a first gesture for a target control in the set of controls, causing the target control to be activated (par. 29, fine movements/selections with respect to a virtual interaction object 58 (context menu 68) that is in proximity to the user's hand.)
However, Tang does not explicitly disclose: in response to detecting a second gesture, ceasing activating the target control.
Dascola discloses a similar method for manipulating a graphical user interface, including: in response to detecting a second gesture, ceasing activating the target control (par. 217, the user maintains the pinch gesture while performing the drag input, and releases the pinch gesture (e.g., opens their two or more fingers) to end the drag gesture.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Tang and Dascola so as not to clutter the user's view with extraneous controls that they do not presently need.
Claim 12: Tang and Dascola disclose the method of claim 11, wherein the first gesture comprises a finger pinch gesture (Tang par. 26, a predetermined gesture in the form of a pinch gesture as the trigger input 56) and the second gesture comprises a finger release gesture (Dascola par. 217, the user maintains the pinch gesture while performing the drag input, and releases the pinch gesture (e.g., opens their two or more fingers) to end the drag gesture.)
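For clarity of the record, activating a target control on a finger pinch and ceasing activation on a finger release can be pictured with the following minimal sketch (gesture names and the activation model are assumptions for exposition, not the references' code):

    # Illustrative sketch only; names are hypothetical.
    class TargetControl:
        def __init__(self, name: str):
            self.name = name
            self.active = False

        def handle_gesture(self, gesture: str) -> None:
            # First gesture (finger pinch) activates the control (cf. Tang par. 26).
            if gesture == "pinch":
                self.active = True
            # Second gesture (finger release) ceases activating it (cf. Dascola par. 217).
            elif gesture == "release":
                self.active = False

    slider = TargetControl("scroll")
    slider.handle_gesture("pinch")
    print(slider.active)    # True while the pinch is held
    slider.handle_gesture("release")
    print(slider.active)    # False once the fingers open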
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Shi (US 2025/0039961), which discloses a plurality of collaboration modes in response to a user's operation.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREY BELOUSOV, whose telephone number is (571) 270-1695 and whose email address is Andrew.belousov@uspto.gov. The examiner can normally be reached Monday-Friday, EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler, can be reached at telephone number 571-272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/Andrey Belousov/
Primary Examiner
Art Unit 2145
11/26/2025