DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 2, 4, 5, and 7-10 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 6, 7, and 8 of U.S. Patent No. 12,086,915 B2. Although the claims at issue are not identical, they are not patentably distinct from each other for the reasons set forth below.
Claim 1 is determined to be obvious in light of claim 1 of Application No. 17/792,978 (now U.S. Patent No. 12,086,915 B2) for having similar limitations, as shown in the comparison below.
Instant application claim 1
17/792,978 claim 1
1. An avatar display device that is connected to an active terminal that inputs movement instructions for an
active avatar active in a virtual space,
displays video data of the virtual space on the active terminal,
and supports communication between the active avatar and a passive avatar in the virtual space, comprising:
a storage device that stores passive permission data including a permission flag indicating whether the passive
avatar accepts or rejects a movement from the active avatar to the passive avatar that realizes communication between the active avatar and the passive avatar;
an output unit that outputs data for displaying video data on the active terminal that indicates that the passive
avatar does not accept the movement of the active avatar when the movement from the active avatar to the passive
avatar is detected and the permission flag in the passive permission data indicates rejection.
1. An avatar display device that is connected to an active terminal inputting an action instruction of an active avatar acting in a virtual space and a passive terminal inputting an action instruction of a passive avatar acting in the virtual space,
and that displays moving image data of the virtual space on the active terminal and the passive terminal,
the device comprising:
a storage device storing passive permission data including a permission flag indicating permission or denial of the passive avatar in relation to an action from the active avatar with respect to the passive avatar,
wherein the avatar display device is configured to output data for displaying the moving image data in which the passive avatar accepts the action of the active avatar on the active terminal and the passive terminal when the action from the active avatar with respect to the passive avatar is detected,
and the permission flag indicates permission in the passive permission data,
wherein the action of the active avatar prompts a corresponding response from the passive avatar so that the active avatar and the passive avatar act cooperatively,
and wherein at least one of the action of the active avatar and the corresponding response from the passive avatar is adapted based on a current state of the passive avatar relative to the active avatar.
Claims 2 and 4 are determined to be obvious in light of claim 4 of Application No. 17/792,978 (now U.S. Patent No. 12,086,915 B2) for having similar limitations, as shown below.
Instant application claims 2 and 4
17/792,978 claim 4
2. The avatar display device according to claim 1,
wherein the video data that does not accept the action is video data in which the active avatar passes through the passive avatar.
4. The avatar display device according to claim 1, wherein the video data that does not accept the action is video data that displays a message indicating that the passive avatar rejects the action from the active avatar.
4. The avatar display device according to claim 1,
wherein when the action from the active avatar with respect to the passive avatar is detected,
and the permission flag for the action of the passive permission data indicates denial, in the moving image data,
it is indicated that the passive avatar does not accept the action of the active avatar.
Claim 5 is determined to be obvious in light of claim 4 of Application No. 17/792,978 (now U.S. Patent No. 12,086,915 B2) for having similar limitations, as shown below.
Instant application claim 5
17/792,978 claim 4
5. The avatar display device according to claim 1,
which is connected to a passive terminal that inputs movement instructions for the passive avatar active in the virtual space,
and the output unit further outputs data for displaying on the passive terminal video data in which the passive avatar does not accept the movement of the active avatar.
4. The avatar display device according to claim 1,
wherein when the action from the active avatar with respect to the passive avatar is detected,
and the permission flag for the action of the passive permission data indicates denial,
in the moving image data,
it is indicated that the passive avatar does not accept the action of the active avatar.
Claim 7 is determined to be obvious in light of claim 4 of Application No. 17/792,978 (now U.S. Patent No. 12,086,915 B2) for having similar limitations, as shown below.
Instant application claim 7
17/792,978 claim 4
7. The avatar display device according to claim 1, wherein the passive permission data associates,
for each part of the active avatar,
a permission flag indicating whether the passive avatar permits the action from the active avatar to the passive avatar in the virtual space, and when the action from the active avatar to a specific part of the passive avatar is detected and the permission flag corresponding to the specific part in the passive permission data indicates denial,
the video data indicates that the passive avatar does not accept the action of the active avatar.
4. The avatar display device according to claim 1,
wherein when the action from the active avatar with respect to the passive avatar is detected,
and the permission flag for the action of
the passive permission data indicates denial,
in the moving image data,
it is indicated that the passive avatar does not accept the action of the active avatar.
Claims 8, 9, and 10 recite limitations similar in scope to those of claim 1, but as a system, a method, and a display program, respectively. They are determined to be obvious in light of claims 6, 7, and 8 of Application No. 17/792,978 (now U.S. Patent No. 12,086,915 B2), respectively, for the same reasons described above for claim 1.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 10 is directed to “An avatar display program for use in an avatar display device,” which does not fall within at least one of the four categories of patent eligible subject matter recited in 35 U.S.C. 101 (process, machine, manufacture, or composition of matter). Computer programs claimed per se, i.e., the descriptions or expressions of the programs, are not physical "things." They are neither computer components nor statutory processes, as they are not "acts" being performed. Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer which permit the computer program's functionality to be realized. In contrast, a claimed non-transitory computer-readable medium encoded with a computer program is a computer element which defines structural and functional interrelationships between the computer program and the rest of the computer which permit the computer program's functionality to be realized, and is thus statutory. See In re Lowry, 32 F.3d 1579, 1583-84, 32 USPQ2d 1031, 1035 (Fed. Cir. 1994). Thus, claim 10 is rejected under 35 U.S.C. 101 because, giving the claim its broadest reasonable interpretation, the claimed “avatar display program to be used in an avatar display device” is directed to a computer program per se and therefore encompasses non-statutory subject matter.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. (FP 7.30.05)
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.
Such claim limitation(s) is/are: “output unit” in Claims 1, 5, 6, 8, and 10.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. (FP 7.30.06)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-10 are rejected under 35 U.S.C. 103 as being unpatentable over Fajt et al. (US 20190379765 A1, hereinafter Fajt765) in view of Fajt et al. (US 20190217192 A1, hereinafter Fajt192).
Regarding Claim 1, Fajt765 teaches An avatar display device (Fajt765, Paragraph [0037], "Fig. 1, a display 100 of a head-mounted display device is illustrated, showing a view of a shared virtual environment 102 presented to a user via the head-mounted display device" <read on avatar display device>),
that is connected to an active terminal (Fajt765, Paragraph [0038], "An endpoint computing device 208 is connected to a head-mounted display device <read on active terminal> 206 worn by the user 80 via a cable"), that inputs movement instructions for an active avatar active in a virtual space (Fajt765, Paragraph [0038], "the endpoint computing device 208 may use the detected positions and/or motions of the handheld controller devices 210, 212 to control the hands of the avatar 122, 124 within the shared virtual environment 102 <read on virtual space>. The endpoint computing device 208 may use the detected positions and/or motions of the head-mounted display device 206 to move the avatar associated with the endpoint system within the shared virtual environment 102"), displays video data of the virtual space on the active terminal (Fajt765, Paragraph [0063], "the environment presentation engine 504 generates presentations of objects in the shared virtual environment <read on virtual space> to the user. In some embodiments, the environment presentation engine 504 may generate at least one video feed <read on video data> that includes the presentation of the objects, and provides the at least one video feed to the head-mounted display device 514 <read on active terminal> to be displayed"), and supports communication between the active avatar and a passive avatar in the virtual space (Fajt765, Paragraph [0037], "The shared virtual environment 102 <read on virtual space> is a virtual room in which two or more users may interact with each other <read on supports communication> and/or with objects within the shared virtual environment through avatars"; "the user can also see a left hand 122 and a right hand 124 that correspond to the user's own avatar" <read on active avatar>, "A first avatar has a head 110, a torso 104, a left hand 106 and a right hand 108. 
A second avatar also has a head 120, a left hand 116, a right hand 118, and a torso 114" <read on passive avatar>), comprising: a storage device (Fajt765, Paragraph [0072], Fig. 6, "the computing device 600 includes at least one processor 602 and a system memory 604 connected by a communication bus 606"; "the computing device 600 also includes a storage medium 608" <read on storage device>), [[ that stores passive permission data including a permission flag indicating whether the passive avatar accepts or rejects a movement from the active avatar to the passive avatar that realizes communication between the active avatar and the passive avatar]]; an output unit (Fajt765, Paragraph [0063], "the environment presentation engine 604 <read on output unit> generates presentations of objects in the shared virtual environment to the user. In some embodiments, the environment presentation engine 604 may generate at least one video feed that includes the presentation of the objects, and provides the at least one video feed to the head-mounted display device 514 to be displayed"), [[ that outputs data for displaying video data on the active terminal that indicates that the passive avatar does not accept the movement of the active avatar when the movement from the active avatar to the passive avatar is detected and the permission flag in the passive permission data indicates rejection ]].
But Fajt765 does not explicitly disclose that stores passive permission data including a permission flag indicating whether the passive avatar accepts or rejects a movement from the active avatar to the passive avatar that realizes communication between the active avatar and the passive avatar; or that outputs data for displaying video data on the active terminal that indicates that the passive avatar does not accept the movement of the active avatar when the movement from the active avatar to the passive avatar is detected and the permission flag in the passive permission data indicates rejection.
However, Fajt192 teaches that stores passive permission data (Fajt192, Paragraph [0042], "the object data engine 554 is configured to manage object data within the object data store 560 <read on storage device>. The object data may include, but is not limited to... an owner of the object and one or more scripts defining behavior of the object"; Paragraph [0079], "an object data engine 554 of an environment information server 408 retrieves a set of permission conditions from an object data store 560 <read on stores permission data>"), including a permission flag indicating whether the passive avatar accepts or rejects a movement from the active avatar to the passive avatar (Fajt192, Paragraph [0079], "In some embodiments, a permission condition <read on permission flag> includes a proximity condition. The proximity condition is a threshold of a virtual distance between two objects in the shared virtual environment 300 that should be satisfied for the proximity condition to be met <read on indicating whether accepts or rejects a movement>"; Paragraph [0026], "the permission condition may include a threshold for a virtual distance between the second avatar 310 <read on passive avatar> and the owned object 314, a virtual distance between the second avatar 310 and the first avatar 308 <read on active avatar, movement from active avatar to passive avatar>"), that realizes communication between the active avatar and the passive avatar (Fajt192, Paragraph [0026], "Because the first endpoint system 302 has determined that a permission condition has been met, it is allowed to experience the owned object 314 in an interactive mode <read on realizes communication>. 
As illustrated, the interactive mode includes the permission to move and edit 309 the owned object 314"), that outputs data for displaying video data on the active terminal (Fajt192, Paragraph [0050], "the environment presentation engine 604 may generate at least one video feed <read on video data> that includes the presentation of the objects, and provides the at least one video feed to the head-mounted display device 614 <read on active terminal> to be displayed"), that indicates that the passive avatar does not accept the movement of the active avatar (Fajt192, Paragraph [0075]-[0076], "the object behavior engine 601 configures the owned object 314 to be presented within the shared virtual environment in a limited-interaction mode <read on indicates that the passive avatar does not accept the movement>. In some embodiments, the limited-interaction mode may allow the owned object 314 to be viewed by the user of the first endpoint system 302, but would ignore attempts of the user of the first endpoint system 302 to move, edit, or otherwise interact with the owned object 314"), when the movement from the active avatar to the passive avatar is detected (Fajt192, Paragraph [0074], "the object behavior engine 601 checks for a presence of an owning user of the owned object 314 within the shared virtual environment 300 <read on detects movement from active avatar to passive avatar>"; Paragraph [0082], "the object behavior engine 601 tests one or more proximity conditions of the set of permission conditions <read on detects movement>"),
and the permission flag in the passive permission data indicates rejection (Fajt192, Paragraph [0075], "Otherwise, the result of decision block 1008 is NO <read on permission flag indicates rejection>, and the method 1000 proceeds to block 1010, where the object behavior engine 601 configures the owned object 314 to be presented within the shared virtual environment in a limited-interaction mode"; Paragraph [0085], "If none of the proximity conditions were satisfied <read on permission flag indicates rejection>, then the result of decision block 1026 is NO, and the method 1000 proceeds to block 1030, where the object behavior engine 601 continues to cause the owned object 314 to be presented within the shared virtual environment 300 in the limited-interaction mode").
Fajt192 and Fajt765 are analogous since both of them are dealing with managing avatar interactions and permissions in shared virtual environments with head-mounted display devices. Fajt765 provided a way of allowing users to control avatars in a shared virtual environment through endpoint computing devices connected to head-mounted display devices, where multiple avatars can interact and communicate with each other through collaborative gestures. Fajt192 provided a way of managing permissions for interactions in a shared virtual environment based on permission conditions including proximity thresholds, where the system presents objects in either an interactive mode when permission conditions are met or a limited-interaction mode when permission conditions indicate rejection. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the permission flag system for controlling avatar-to-avatar interactions taught by Fajt192 into the invention of Fajt765 such that, when an active avatar attempts to move toward or interact with a passive avatar in the shared virtual environment, the system checks the permission data associated with the passive avatar to determine whether to allow or reject the interaction, and displays appropriate video data indicating rejection when the permission flag indicates that the passive avatar does not accept the movement. The motivation is to provide enhanced privacy and control for users in shared virtual environments by allowing each user's avatar to have configurable permission settings that determine which types of interactions from other avatars are accepted or rejected, thereby improving user experience and safety in social virtual reality applications where users may not want unwanted interactions or approaches from other avatars.
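For illustration only, the permission-check flow described in the rationale above (detect an action from the active avatar toward the passive avatar, consult a stored permission flag, and select the video data to output) can be sketched as follows. This sketch is not code from either reference; all names (PassivePermissionData, handle_action, the avatar identifiers) are hypothetical.

```python
# Illustrative sketch of the combined teaching; all names are hypothetical.
from dataclasses import dataclass


@dataclass
class PassivePermissionData:
    """Stored per passive avatar; True means accept, False means reject."""
    permission_flag: bool


def handle_action(store: dict, active_id: str, passive_id: str) -> str:
    """On detecting an action from the active avatar toward the passive
    avatar, consult the stored permission flag and choose the video data."""
    permission = store[passive_id]
    if permission.permission_flag:
        # Flag indicates acceptance: render the cooperative interaction.
        return "video: passive avatar accepts the action"
    # Flag indicates rejection: render video data showing non-acceptance.
    return "video: passive avatar does not accept the action"


# A rejecting flag on avatar_B yields the non-acceptance video data.
store = {"avatar_B": PassivePermissionData(permission_flag=False)}
print(handle_action(store, "avatar_A", "avatar_B"))
```

The point of the sketch is only that the flag lookup, not the action itself, determines which presentation is generated, mirroring the interactive versus limited-interaction modes of Fajt192.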
Regarding Claim 2, the combination of Fajt765 and Fajt192 teaches the invention of Claim 1.
The combination further teaches wherein the video data that does not accept the action is video data (Fajt765, Paragraph [0063], "the environment presentation engine 504 may generate at least one video feed that includes the presentation of the objects, and provides the at least one video feed to the head-mounted display device 514 to be displayed"), [[ in which the active avatar passes through the passive avatar ]].
But Fajt765 does not explicitly disclose the active avatar passes through the passive avatar.
However, Fajt192 teaches in which the active avatar passes through the passive avatar (Fajt192, Paragraph [0076], "the physics engine 610 may ignore simulated forces applied to the owned object 614 <read on passive avatar> by objects under the control of the first endpoint system 302 <read on active avatar>").
Fajt192 and Fajt765 are analogous since both of them are dealing with avatar interactions and permission-based behaviors in shared virtual environments. Fajt765 provided a way of detecting collisions between avatars and generating video presentations based on collision detection and gesture states in a shared virtual environment. Fajt192 provided a way of managing permission for interacting with virtual objects based on virtual proximity, where the physics engine can ignore simulated forces when permission conditions are not met, allowing objects to pass through each other without collision response. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the physics behavior of ignoring collisions taught by Fajt192 into the invention of Fajt765 such that, when the passive avatar does not accept the action from the active avatar, the video data would show the active avatar passing through the passive avatar without collision response, as this provides a clear visual indication that the interaction is not permitted.
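For illustration only, the contrast drawn above between a normal collision response and the pass-through behavior when forces are ignored can be sketched in one dimension. This is not code from either reference; the function and its unit-radius geometry are hypothetical.

```python
# Illustrative sketch; all names and geometry are hypothetical.
def step_collision(x_active: float, x_passive: float, permission: bool) -> float:
    """Return the active avatar's new 1-D position after a collision check
    against the passive avatar, modeled as unit-radius bodies."""
    overlapping = abs(x_active - x_passive) < 1.0
    if overlapping and permission:
        # Collision response: push the active avatar back to the contact boundary.
        direction = 1.0 if x_active >= x_passive else -1.0
        return x_passive + direction * 1.0
    # Permission denied (or no overlap): simulated forces are ignored, so the
    # position is unchanged and continued movement passes through.
    return x_active
```

With permission, an overlapping active avatar is repelled to the boundary; with the flag denying the action, its position is left untouched, which over successive frames renders as passing through the passive avatar.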
Regarding Claim 3, the combination of Fajt765 and Fajt192 teaches the invention of Claim 1.
The combination further teaches wherein the video data that does not accept the movement is video data in which (Fajt765, Paragraph [0063], "the environment presentation engine 504 may generate at least one video feed that includes the presentation of the objects, and provides the at least one video feed to the head-mounted display device 514 to be displayed"), the active avatar and the passive avatar move in a repelling manner (Fajt765, Paragraph [0064], "the physics engine 510 provides a real-time simulation of physical behavior of the objects in the shared virtual environment. A physics engine 510 may provide the simulation by conducting collision detection/collision response actions <read on repelling manner>, rigid body and/or soft body dynamics"; Paragraph [0090], "Upon detection of a collision, the objects may be represented as colliding or deforming in some way to provide a more realistic representation").
Regarding Claim 4, the combination of Fajt765 and Fajt192 teaches the invention of Claim 1.
The combination further teaches wherein the video data that does not accept the action is video data that (Fajt765, Paragraph [0063], "the environment presentation engine 504 may generate at least one video feed that includes the presentation of the objects, and provides the at least one video feed to the head-mounted display device 514 to be displayed") displays a message indicating that the passive avatar rejects the action from the active avatar (Fajt765, Paragraph [0063], "the environment presentation engine 504 may generate at least one video feed that includes the presentation of the objects, and provides the at least one video feed to the head-mounted display device 514 to be displayed. The environment presentation engine 504 may also generate at least one audio feed to be presented"; Paragraph [0010], "the functions may include special visual or audio effects").
Regarding Claim 5, the combination of Fajt765 and Fajt192 teaches the invention of Claim 1.
The combination further teaches which is connected to a passive terminal (Fajt765, Paragraph [0044], "a first endpoint system 302, a second endpoint system 304 <read on passive terminal>, and a third endpoint system 306") that inputs movement instructions for the passive avatar active in the virtual space (Fajt765, Paragraph [0044], "a first endpoint system 302, a second endpoint system 304, and a third endpoint system 306" participating in the shared virtual environment; Paragraph [0038], "the endpoint computing device 208 may use the detected positions and/or motions of the handheld controller devices 210, 212 to control the hands of the avatar <read on inputs movement instructions for the passive avatar> 122, 124 within the shared virtual environment 102"), and the output unit further outputs data for displaying on the passive terminal [[ video data in which the passive avatar does not accept the movement of the active avatar ]] (Fajt765, Paragraph [0089], "the object authority engine 506 of the first endpoint system 302 generates first location change notifications... and transmits them (e.g., via the communication relay server 310) along with a representation of the first gesture state to one or more other endpoint systems"; Paragraph [0080], "environment presentation engines 504 of the other endpoint systems 500 present the one or more local objects").
But Fajt765 does not explicitly disclose video data in which the passive avatar does not accept the movement of the active avatar.
However, Fajt192 teaches video data in which the passive avatar does not accept the movement of the active avatar (Fajt192, Paragraph [0005], "In response to determining that the distance does not satisfy the at least one proximity condition, the first endpoint system presents the owned object within the shared virtual environment in a limited-interaction mode"; Paragraph [0033], "the first endpoint system 402 can send a single notification to the communication relay server 410, and the communication relay server 410 sends it to the other endpoint systems"; Paragraph [0067], "environment presentation engines 604 of the other endpoint systems 600 present the one or more local objects. The presentations on the other endpoint systems 600 use the initial status notifications to determine where to present the objects").
Fajt192 and Fajt765 are analogous art since both deal with multiple endpoint systems exchanging notifications about avatar states and interactions in a shared virtual environment. Fajt765 provides a way of transmitting location change notifications and gesture states between endpoint systems via a communication relay server, where each endpoint system presents the avatars and their interactions. Fajt192 provides a way of distributing notifications to all participating endpoint systems, including the limited-interaction mode applied when permission is denied, so that each system can properly render the interaction denial state. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to transmit the video data showing the passive avatar not accepting the action to the passive terminal (second endpoint system) for display using the multi-endpoint architecture taught by Fajt765, as both Fajt765 and Fajt192 expressly teach that each endpoint system receives notifications about avatar states and generates presentations; thus, when permission is denied as taught by Fajt192, the system would naturally transmit the denial state to all participating endpoint systems, including the passive terminal.
Regarding Claim 6, the combination of Fajt765 and Fajt192 teaches the invention of Claim 1.
The combination further teaches wherein the output unit detects the action from the active avatar to the passive avatar when the active avatar approaches the passive avatar [[and the distance between the active avatar and the passive avatar is equal to or less than a certain value]] (Fajt765, Paragraph [0114], "an object behavior engine 501 of the first endpoint system 302 determines that the first avatar and the second avatar are in virtual proximity"; Paragraph [0090], "a physics engine 510 of the first endpoint system 302 detects a collision between"; Paragraph [0109], "the hand of the first avatar and the hand of the second avatar based at least in part on the first and second location change notifications").
But Fajt765 does not explicitly disclose the distance between the active avatar and the passive avatar is equal to or less than a certain value.
However, Fajt192 teaches the distance between the active avatar and the passive avatar is equal to or less than a certain value (Fajt192, Paragraph [0079], "the proximity condition is a threshold of a virtual distance between two objects in the shared virtual environment 300 that should be satisfied for the proximity condition to be met"; "a proximity condition may indicate that a virtual distance between the avatar of the owning user 310 and the avatar of the first user 308 <read on the distance between the active avatar and the passive avatar> within the shared virtual environment 300 should be less than a threshold virtual distance <read on equal to or less than a certain value>"; Paragraph [0082], "the object behavior engine 601 tests one or more proximity conditions of the set of permission conditions").
Fajt192 and Fajt765 are analogous art since both deal with detecting interactions between avatars based on virtual proximity and distance in shared virtual environments. Fajt192 provides a way of using distance thresholds between avatars to determine whether interaction conditions are met, and of testing proximity conditions to determine permissions. Fajt765 provides a way of detecting interactions and determining virtual proximity between avatars in the shared virtual environment. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to detect the action from the active avatar to the passive avatar based on the distance between them being equal to or less than a certain value, as Fajt192 expressly teaches using distance thresholds between avatars to determine whether interaction conditions are met, and this is a predictable combination of the proximity detection techniques taught in both references.
Regarding Claim 7, the combination of Fajt765 and Fajt192 teaches the invention of Claim 1.
The combination further teaches wherein the passive permission data associates, for each part of the active avatar, [[a permission flag indicating whether the passive avatar permits the action from the active avatar to the passive avatar in the virtual space]] (Fajt765, Paragraph [0037], "a first avatar has a head 110, a torso 104, a left hand 106 and a right hand 108. A second avatar also has a head 120, a left hand 116, a right hand 118, and a torso 114 <read on each part of the active avatar>"; Paragraph [0080], "the first endpoint system 302 may initially have object authority over a head object and two hand objects that are associated with the avatar"), and when the action from the active avatar to a specific part of the passive avatar is detected [[and the permission flag corresponding to the specific part in the passive permission data indicates denial, the video data indicates that the passive avatar does not accept the action of the active avatar]] (Fajt765, Paragraph [0090], "a physics engine 510 of the first endpoint system 302 detects a collision between the hand of the first avatar and the hand of the second avatar <read on action to a specific part> based at least in part on the first and second location change notifications").
But Fajt765 does not explicitly disclose a permission flag indicating whether the passive avatar permits the action from the active avatar to the passive avatar in the virtual space, or that, when the permission flag corresponding to the specific part in the passive permission data indicates denial, the video data indicates that the passive avatar does not accept the action of the active avatar.
However, Fajt192 teaches a permission flag indicating whether the passive avatar permits the action from the active avatar to the passive avatar in the virtual space (Fajt192, Paragraph [0079]-[0080], "a permission condition includes a proximity condition... a permission condition may also include a permission <read on permission flag> to be applied if the proximity condition is met"; Paragraph [0076], "the object data may indicate that the limited-interaction mode includes the ability to collide with the owned object 314, but not to change its position"),
and that, when the permission flag corresponding to the specific part in the passive permission data indicates denial, the video data indicates that the passive avatar does not accept the action of the active avatar (Fajt192, Paragraph [0005], "In response to determining that the distance does not satisfy the at least one proximity condition, the first endpoint system presents the owned object within the shared virtual environment in a limited-interaction mode"; Paragraph [0076], "the physics engine 610 may ignore simulated forces applied to the owned object 614 by objects under the control of the first endpoint system 302").
Fajt192 and Fajt765 are analogous art since both deal with avatar interactions, permission systems, and detecting actions to specific parts of avatars in shared virtual environments. Fajt765 provides a way of tracking and managing individual avatar parts (head, hands, torso) as separate objects, with collision detection for specific parts. Fajt192 provides permission systems with configurable permission conditions that determine whether interactions are allowed based on various factors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to associate permission flags with each body part of the avatar and to deny actions based on part-specific permissions, as Fajt765 expressly teaches tracking and managing individual avatar parts as separate objects and Fajt192 expressly teaches permission systems with configurable permission conditions, and extending the permission system to be part-specific is a predictable refinement that applies known permission techniques to the already-separated body parts taught by Fajt765.
Regarding Claim 8, it recites limitations similar in scope to the limitations of Claim 1, but as a system. As shown in the rejection above, the combination of Fajt765 and Fajt192 discloses the limitations of Claim 1. Additionally, Fajt765 discloses a system as specified in Fig. 5, Element 514 (head-mounted display device); Paragraph [0007] ("the first avatar. The second avatar"); Fig. 6, Element 608 (storage memory); and Paragraph [0076] (output devices), which map to an active terminal, an avatar display device, a passive avatar, a storage device, and an output unit. Thus, Claim 8 is met by Fajt765 according to the mapping presented in the rejection of Claim 1, given that the device corresponds to the system.
Regarding Claim 9, it recites limitations similar in scope to the limitations of Claim 1, but as a method, and the combination of Fajt765 and Fajt192 teaches all the limitations of Claim 1. Therefore, it is rejected under the same rationale.
Regarding Claim 10, it recites limitations similar in scope to the limitations of Claim 1, and the combination of Fajt765 and Fajt192 teaches all the limitations of Claim 1. Additionally, Fajt765 discloses that these features can be implemented on a computer-readable storage medium (Fajt765, Fig. 6; Paragraph [0048], "The engines can be stored in any type of computer-readable medium or computer storage device and be stored on and executed by one or more general purpose computers, thus creating a special purpose computer configured to provide the engine").
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20070050716 A1 System and method for enabling users to interact in a virtual space
US 20080215994 A1 Virtual world avatar control, interactivity and communication interactive messaging
US 20080221998 A1 Participant interaction with entertainment in real and virtual environments
US 20090259648 A1 Automated avatar creation and interaction in a virtual world
US 20090259948 A1 Surrogate avatar control in a virtual universe
US 20130257876 A1 Systems and Methods for Providing An Interactive Avatar
US 20200074742 A1 User Interface Security in a Virtual Reality Environment
US 20220095008 A1 Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, video distribution method, and video distribution program
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI whose telephone number is (571)272-6669. The examiner can normally be reached 8:30am-5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached on (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YuJang Tswei/Primary Examiner, Art Unit 2614