DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/21/2025 has been considered by the examiner.
Response to Arguments
Applicant’s arguments with respect to claims 1-5, 7-15, and 17-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4, 6-12, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Trehan (US Publication Number 2023/0372778 A1) in view of Nakade et al. (US Publication Number 2022/0044450 A1, hereinafter “Nakade”).
(1) regarding claim 1:
As shown in fig. 3, Trehan disclosed an avatar control method, applicable to an avatar control system, configured to control an avatar, which corresponds to a user in a real-world environment, in an immersive content (para. [0006], note that the method may further include generating, in a metaverse or a virtual environment, an avatar corresponding to the user of the real-world environment based on the at least one of the pose and the movement identified. The avatar and the associated virtual environment may be customizable based on user's physical appearance, user preferences, or user performance), and comprising:
detecting if the user makes a first target pose (para. [0023], note that identify a pose and/or a movement of the user 102 while performing the one or more activities);
when the user makes the first target pose, (para. [0027], note that the user 102 may see themselves as a digital representation called an avatar 108. The immersive fitness device 100 enables assessing functions, poses, gestures, orientation, movements, and the like of the user 102 in the real-world environment);
detecting if the user makes a second target pose different from the first target pose (para. [0030], note that when the user 102 performs a pushups activity in the real-world, the AI model may track movements of the user 102 performing the pushups activity); and
when the user makes the second target pose, controlling the avatar to appear at the target position indicated by the guide object (para. [0030], note that based on this tracking, the AI model may determine a type of an activity the user 102 is performing. As an example, if the user 102 is performing the pushups activity and is in the pushups performing position, while doing up-down motion, then the AI model may determine the type of activity as the pushups activity and the user 102 is performing the pushups activity).
Trehan disclosed most of the subject matter as described above except for specifically teaching displaying a guide object with a trajectory extending from a current position where the avatar currently is in the immersive content to a target position indicated by the guide object along a facing direction of the user.
However, Nakade teaches displaying a guide object with a trajectory extending from a current position where the avatar currently is in the immersive content to a target position indicated by the guide object along a facing direction of the user (as shown in fig. 5, para. [0007], note that a sensor detects a position and a direction of a user carrying the video display device; a trajectory information acquisition unit that acquires information of a migration trajectory of the user from a detection result of the sensor; a storage unit that stores trajectory information of the user acquired by the trajectory information acquisition unit and information of an avatar which is a virtual image indicating the user; a display unit that displays the migration trajectory of the user with the avatar; and a control unit that controls the trajectory information acquisition unit and the display unit. Also see para. [0076], note that FIG. 7C illustrates a starting point/end point position coordinate table 730 that stores position information when trajectory collection starts and when trajectory collection ends. This table is required when the display position (absolute position) of the avatar is calculated using the difference trajectory information storage table 720 illustrated in FIG. 7B.).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Trehan to display a guide object with a trajectory extending from a current position where the avatar currently is in the immersive content to a target position indicated by the guide object along a facing direction of the user, as taught by Nakade. The suggestion/motivation for doing so would have been to provide a video display device that guides a return route along a route through which a user actually moved (para. [0006]). Therefore, it would have been obvious to combine Trehan with Nakade to obtain the invention as specified in claim 1.
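For clarity of the claim 1 control flow mapped above, the following is a minimal illustrative sketch in Python; every function, class, and variable name is hypothetical and is not drawn from Trehan, Nakade, or the application as filed.

```python
from dataclasses import dataclass

# Hypothetical sketch of the claim 1 flow; names are illustrative only and do
# not appear in the cited references or the claims.

@dataclass
class Guide:
    start: tuple   # current position of the avatar in the immersive content
    target: tuple  # target position indicated by the guide object

def make_guide(current, facing, length=3.0):
    # Trajectory extending from the current position along the user's facing direction.
    return Guide(start=current,
                 target=tuple(c + f * length for c, f in zip(current, facing)))

def control_avatar(first_pose_made, second_pose_made, avatar_pos, facing):
    # Detect if the user makes the first target pose.
    if not first_pose_made:
        return avatar_pos, None
    # When the first target pose is made, display the guide object.
    guide = make_guide(avatar_pos, facing)
    # Detect if the user makes the second, different target pose.
    if second_pose_made:
        # When the second target pose is made, the avatar appears at the target.
        return guide.target, guide
    return avatar_pos, guide

# Example: the user faces +x and makes both poses; the avatar moves 3 units along +x.
print(control_avatar(True, True, (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
```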
(2) regarding claim 2:
Trehan further disclosed the avatar control method of claim 1, further comprising: when the user makes the first target pose, determining a movement mode of the avatar according to the first target pose, wherein the avatar is controlled to appear at the target position according to the movement mode (para. [0031], note that based on monitoring the user's activity, the AI model may then assign an AI assisted activity trainer i.e., a virtual expert with expertise derived from a human expert who may provide feedbacks to the users, based on user's performance).
(3) regarding claim 4:
Trehan further disclosed the avatar control method of claim 1, wherein controlling the avatar to appear at the target position comprises:
fixing a distal end of the guide object at the target position in the immersive content (para. [0036], note that as illustrated in present FIG. 1F, the AI-assisted virtual expert 112 is assisting the set of avatars (for example, avatar 108, avatar 110, avatar 114, and avatar 116) and instructing each of the avatars or their corresponding real-world users as ‘keep your elbow straight’ during the performance of push-up in the group activity); and
moving the avatar to the target position in a movement mode determined according to the first target pose (para. [0037], note that each of the set of avatars and their corresponding real-world users may be assigned with a personalized AI-assistive virtual expert that may provide personalized feedback to each of the set of avatars or users while engaging in a shared activity with the other users).
(4) regarding claim 7:
Trehan further disclosed the avatar control method of claim 1, further comprising: setting a speed parameter according to the first target pose, wherein the speed parameter is configured to indicate a speed at which the guide object is extended (para. [0003], note that the metaverse may include a virtual gym where the users may interact and perform activities individually or within a group to keep themself healthy and fit. These activities are controlled through a range of input devices such as controllers, keyboards, mice, virtual controllers, and gesture systems. Currently, there exist some mirror related systems that use external mechanical resistance measurement (such as, speed, distance, and weight lifted) for tracking a user's physical activity in real-world and projecting them into the metaverse).
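As an illustrative aside on the claim 7 limitation, the sketch below shows one way a speed parameter set according to the first target pose could govern how fast the guide object extends; the pose names and values are hypothetical and are not taken from Trehan.

```python
# Hypothetical mapping from a first target pose to a guide-extension speed
# (units per second); the pose names and numbers are illustrative only.
POSE_SPEED = {"lean_forward": 2.0, "arm_raise": 0.5}

def extend_guide(current_length, first_target_pose, dt):
    # The speed parameter indicates the speed at which the guide object extends.
    speed = POSE_SPEED.get(first_target_pose, 1.0)
    return current_length + speed * dt

print(extend_guide(0.0, "lean_forward", dt=0.1))  # guide grows by 0.2 this frame
```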
(5) regarding claim 8:
Trehan further disclosed the avatar control method of claim 1, further comprising: obtaining the facing direction according to a head movement of the user, wherein the guide object extends along the facing direction (para. [0032], note that FIG. 1E, the AI-assisted virtual expert 112 is assisting and providing an instruction ‘keep your back straight’ to the user or the corresponding avatar 110 during the performance of crunches, thereby helping the user or the corresponding avatar 110 to perform the crunches correctly, facing the user avatar).
(6) regarding claim 9:
Trehan further disclosed the avatar control method of claim 1, wherein detecting if the user makes the first target pose comprises: recognizing at least one of an upper body movement and a lower body movement of the user (para. [0020], note that the user 102 performing the one or more activities may be, for example, but are not limited to, high knees, leg raises, crunches, jumping jacks, lateral squats, lunges, squats, burpees, overhead triceps, push-ups, dumbbell squat press, core scissors, elbow knee, a band lateral raise, a band lateral stretch, a hook, an uppercut, boxing, kettlebell, deadlift, dead bug, squat thrusters, yoga, or high-intensity interval training (HIIT). For the sake of explanation, in a current scenario, the user 102 may be performing a high knees activity and a leg raises activity to keep himself healthy and fit, as depicted via FIGS. 1A and 1C respectively.).
(7) regarding claim 10:
Trehan further disclosed the avatar control method of claim 1, further comprising: when the user does not make the second target pose within a preset time, stopping displaying the guide object in the immersive content (para. [0078], note that the disclosed method and system may provide the AI-assistive expert that may provide on demand personalized feedback to each avatar or user performing activities. This feedback may be based on the user's current performance, previous activities, and the activities of the group. The group activities may appear to be in real-time; however, in actuality they may be performed at a different time and just experienced in the metaverse at the time a user is active).
The proposed rejection of claims 1-2, 4, and 6-9 renders obvious the system claims 11-12, 14, and 16-19 and non-transitory computer-readable medium claim 20 because these steps occur in the operation of the proposed rejection as discussed above. Thus, arguments similar to those presented above for claims 1-2, 4, and 6-9 are equally applicable to claims 11-12, 14, and 16-20.
Claims 3, 5, 13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Trehan and Nakade, and further in view of Khorshid (US Publication Number 2024/0045704 A1).
(1) regarding claim 3:
Trehan further disclosed the avatar control method of claim 2, wherein determining the movement mode of the avatar according to the first target pose comprises: when the first target pose is a dynamic moving pose, setting the movement mode of the avatar to be a route mode (para. [0049], note that once the pose and the movement of the user 204 is identified, the immersive fitness device 202 may further generate an avatar corresponding to the user 204 into the metaverse or the virtual environment).
Trehan disclosed most of the subject matter as described above except for specifically teaching wherein determining the movement mode of the avatar according to the first target pose comprises: when the first target pose is a static moving pose, setting the movement mode of the avatar to be a teleport mode.
However, Khorshid teaches wherein determining the movement mode of the avatar according to the first target pose comprises: when the first target pose is a static moving pose, setting the movement mode of the avatar to be a teleport mode (para. [0177], note that if the user is engaged with their VR headset and is interested in a certain musician, the XR assistant avatar may let the user know that a virtual concert by the musician is about to begin and help the user teleport to the correct stage platform to watch).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Trehan and Nakade such that determining the movement mode of the avatar according to the first target pose comprises: when the first target pose is a static moving pose, setting the movement mode of the avatar to be a teleport mode, as taught by Khorshid. The suggestion/motivation for doing so would have been to enable the user to interact with the assistant system via user inputs of various modalities (e.g., audio, voice, text, image, video, gesture, motion, location, orientation) in stateful and multi-turn conversations to receive assistance from the assistant system (para. [0005]). Therefore, it would have been obvious to combine Trehan and Nakade with Khorshid to obtain the invention as specified in claim 3.
(2) regarding claim 5:
Trehan further disclosed the avatar control method of claim 4, wherein moving the avatar to the target position in the movement mode comprises: when the movement mode is a route mode, moving the avatar from the current position to an intermediate position, and then moving the avatar from the intermediate position to the position indicated by the guide object, wherein the intermediate position is between the current position and the target position indicated by the guide object (para. [0058], note that the at least one of the pose and the movement identified may be rendered in the avatar using an XR technique. The XR technique may be utilized to enable the user to see the one or more activities performed by the avatar via a display (such as the display 212) in a way that is immersive and interactive. The one or more activities may include high knees, leg raises, crunches, jumping jacks, lateral squats, lunges, squats, burpees, overhead triceps, push-ups, dumbbell squat press, core scissors, elbow knee, a band lateral raise, a band lateral stretch, a hook, an uppercut, boxing, kettlebell, deadlift, dead bug, squat thrusters, yoga, or high-intensity interval training (HIIT).).
Trehan disclosed most of the subject matter as described above except for specifically teaching when the movement mode is a teleport mode, teleporting the avatar immediately from the current position where the avatar currently is to the target position indicated by the guide object.
However, Khorshid disclosed when the movement mode is a teleport mode, teleporting the avatar immediately from the current position where the avatar currently is to the target position indicated by the guide object (para. [0177], note that if the user is engaged with their VR headset and is interested in a certain musician, the XR assistant avatar may let the user know that a virtual concert by the musician is about to begin and help the user teleport to the correct stage platform to watch).
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to teleport the avatar immediately from the current position where the avatar currently is to the target position indicated by the guide object when the movement mode is a teleport mode, as taught by Khorshid. The suggestion/motivation for doing so would have been to enable the user to interact with the assistant system via user inputs of various modalities (e.g., audio, voice, text, image, video, gesture, motion, location, orientation) in stateful and multi-turn conversations to receive assistance from the assistant system (para. [0005]). Therefore, it would have been obvious to combine Trehan and Nakade with Khorshid to obtain the invention as specified in claim 5.
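For clarity of the movement-mode limitations of claims 3 and 5 discussed above, the following is a minimal illustrative sketch; the mode names and path representation are hypothetical paraphrases of the claim language and are not taken from the cited references.

```python
# Hypothetical sketch of the claims 3 and 5 movement modes; all names are illustrative.

def movement_mode_for(first_target_pose_is_dynamic):
    # Claim 3: a dynamic moving pose selects the route mode;
    # a static moving pose selects the teleport mode.
    return "route" if first_target_pose_is_dynamic else "teleport"

def move_avatar(mode, current, intermediate, target):
    # Claim 5: route mode passes through an intermediate position between the
    # current and target positions; teleport mode relocates the avatar immediately.
    if mode == "route":
        return [current, intermediate, target]
    if mode == "teleport":
        return [current, target]
    raise ValueError(f"unknown movement mode: {mode}")

print(move_avatar(movement_mode_for(False), (0, 0), (1, 0), (2, 0)))  # teleport path
```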
The proposed rejection of claims 3 and 5 renders obvious the system claims 13 and 15 because these steps occur in the operation of the proposed rejection as discussed above. Thus, arguments similar to those presented above for claims 3 and 5 are equally applicable to claims 13 and 15.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Brudy et al. (US Publication Number 2023/0267667 A1) disclosed a method for analyzing human motion data that includes receiving a set of motion data that indicates one or more movements of a first person within a real-world environment; generating a virtual avatar corresponding to the first person based on the set of motion data; determining a position of the virtual avatar within an extended reality (ER) scene based on the one or more movements; and displaying the virtual avatar in the ER scene according to the determined position.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hilina K Demeter whose telephone number is (571) 270-1676.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Y. Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HILINA K DEMETER/Primary Examiner, Art Unit 2617