DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on 08/15/2023. It is noted, however, that applicant has not filed a certified copy of Japanese Patent Application No. 2023-132208 as required by 37 CFR 1.55.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 6-8, 11, and 13-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Akihiko Shirai (Pat. Pub. US-20220414962-A1, hereinafter “Shirai”).
Regarding claim 1, Shirai teaches [a]n information processing method performed by an information processing device including a memory that stores a program and at least one processor that executes the program “The functions and features described herein may also be executed by various distributed components of a system. For example, one or more processors may execute these system functions, wherein the processors are distributed across multiple components communicating in a network” ¶ [0268],
wherein the processor adjusts settings related to a display form “rendering process” ¶ [0078] of an object “predetermined object” ¶ [0084] in a virtual space “Predetermined objects are any objects of various objects that can be placed in a virtual space and are typically one or some of the various objects that can be placed in the virtual space. Alternatively, predetermined objects may be all the various objects that can be placed in a virtual space” ¶ [0084], based on avatar information on an avatar “user avatar” ¶ [0069], when the avatar satisfying a predetermined condition is located within a reference range “predetermined distance” ¶ [0078] including a location of the object “predetermined object” ¶ [0078] in the virtual space in which the object is placed “ … a rendering process implements an animation rendering function of rendering an animation of a combination of a predetermined object with a user object when the distance between the predetermined object and the user object is shorter than or equal to a first predetermined distance” ¶ [0078] where a rendering process is triggered on an object once the user avatar comes within the predetermined distance.
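For illustration only, the distance-gated rendering this mapping relies on (Shirai, ¶¶ [0078], [0086]) can be expressed as a minimal sketch; the function and parameter names below are assumptions for illustration and are not drawn from Shirai's disclosure.

    import math

    def update_display_form(avatar_pos, object_pos, first_predetermined_distance):
        # Spatial distance between representative points (cf. Shirai ¶ [0086]).
        d = math.dist(avatar_pos, object_pos)
        # Within the reference range, the rendering process switches to the
        # animation mode; otherwise the object is rendered in a normal mode.
        return "animation_mode" if d <= first_predetermined_distance else "normal_mode"

    # Example: an avatar 0.5 units from the object, with a 1.0-unit reference range.
    print(update_display_form((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), 1.0))  # animation_mode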
Regarding claim 6, Shirai teaches [t]he information processing method according to claim 1, wherein:
the avatar information includes information on at least one of an appearance “In the example shown in FIG. 8, rendering information 800 of the user avatar M1 associates face part ID, hairstyle part ID, costume part ID, and the like with each avatar ID. Part information concerned with an appearance, such as face part ID, hairstyle part ID, and costume part ID, includes parameters for featuring the user avatar M1 and may be selected by an associated user” ¶ [0111], an attribute “As described above, when the position/orientation information of one user avatar M1 includes information indicating the position and orientation of each of a plurality of parts of the user avatar M1, the avatar rendering section 2581 may express the position and orientation of each of the plurality of parts of the user avatar M1 in accordance with those pieces of information” ¶ [0128], and a possessed object of the avatar “… the hand of a user or the hand of a user object refers to a part corresponding to a palm, fingers, and a thumb beyond the wrist of the user or user object” ¶ [0087]; and
the processor adjusts the settings so that the object is displayed in the display form according to at least one of the appearance, the attribute, and the possessed object of the avatar “The distance between a user object and a predetermined object is a distance in a virtual space (for example, a spatial distance) and may be calculated in accordance with position information of the predetermined object and position information of the user object. In this case, the position information of the predetermined object may be position information to be used at the time of placing the predetermined object in a virtual space and may be, for example, position information of a representative point (for example, a barycenter or a centroid) of the predetermined object. The position information of a user object may be position information of the hand of the user object. In this case, the position information of the hand of a user object may be position information of the tip end part of the hand of the user object” ¶ [0086] where the rendering process and animation can be triggered by the user object coming within the predetermined distance of the predetermined object. Additionally, “An animation concerned with a combination of a predetermined object with a user object is any animation and is, for example, an animation that represents various movements of the hand, such as touching the predetermined object, holding the predetermined object, gripping the predetermined object, picking up the predetermined object, lifting the predetermined object, and pinching the predetermined object” ¶ [0091].
Shirai, Fig. 11, the user avatar (M1).
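A minimal sketch of how a display form could be selected from avatar information (appearance, attribute, or possessed object), in the spirit of the passages cited above (¶¶ [0111], [0162]); the dictionary keys and animation names are hypothetical.

    def choose_display_form(avatar_info):
        # A possessed object can drive the animation choice (cf. the "sword" example, ¶ [0162]).
        if avatar_info.get("possessed_object") == "sword":
            return "draw_sword_animation"
        # Appearance parts such as costume may otherwise select the form (¶ [0111]).
        if "costume_part_id" in avatar_info:
            return "costume_matched_form"
        return "default_form"

    print(choose_display_form({"possessed_object": "sword"}))  # draw_sword_animation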
Regarding claim 7, Shirai teaches [t]he information processing method according to claim 1, wherein the processor selects and decides the appearance “various animations” ¶ [0104] of the object to be displayed from among the preregistered multiple types of appearances “variations of various animations” ¶ [0104] that differ from each other in the adjustment of the settings “FIG. 6 is a view illustrating an example of a state machine suitably usable in the present embodiment. In the example shown in FIG. 6, states ST600 to ST625 are shown as examples. The state ST600 corresponds to a default state (“Default”) that is formed after a user avatar enters a virtual space (after entry), and the state ST610 corresponds to any state (“Any State”) that indicates any state among the states ST620 to ST625. The states ST620 to ST625 are states accompanied by rendering an animation at the time of transition” ¶ [0103]. Additionally, “[b]y implementing changes of various animations using such a state machine, it is possible to easily increase variations of various animations. When, for example, an animation concerned with one predetermined object is newly added, it is possible to add the animation without influence on the existing animations by using a description in accordance with, for example, a transition from any state (“Any State”)” ¶ [0104]. For example, “When, for example, the predetermined object is an object imitating “sword”, animation data of drawing the sword to cut and animation data of holding the sword over the head may be prepared. In this case, a predetermined condition concerned with animation data of drawing the sword to cut may be satisfied when a user input at the last predetermined timing is an input to issue instructions for “cutting”. Similarly, a predetermined condition concerned with animation data of holding the sword over the head may be satisfied when a user input at the last predetermined timing is an input to issue instructions for “holding”” ¶ [0162].
Shirai, Fig. 6, various appearance states of different objects (ST620-ST625).
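A minimal sketch of a Fig. 6-style state machine with an “Any State” entry (¶¶ [0103]-[0104]); the dict-based transition table and trigger names are assumptions for illustration, not Shirai's implementation.

    TRANSITIONS = {
        # (current state, trigger) -> next state
        ("Default", "approach_sword"): "ST620",
        ("ST620", "cut_input"): "ST621",
        # An "Any State" row lets a newly added animation be reached from any
        # existing state without touching the existing entries (cf. ¶ [0104]).
        ("Any State", "hold_input"): "ST622",
    }

    def next_state(current, trigger):
        return TRANSITIONS.get((current, trigger),
                               TRANSITIONS.get(("Any State", trigger), current))

    print(next_state("ST621", "hold_input"))  # ST622, reached via the "Any State" entry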
Regarding claim 8, Shirai teaches [t]he information processing method according to claim 1, wherein the processor adjusts the settings so that the display form of the object is a display form corresponding to a changed location when the location of the object in the virtual space is changed “An animation may also express a change in the movement, deformation, or the like of a predetermined object. This is because, depending on the type of a predetermined object, giving the movement of a predetermined object by handling the predetermined object with hand can be higher in consistency with reality” ¶ [0094].
Regarding claim 11, Shirai teaches [t]he information processing method according to claim 1, wherein the processor returns the settings to a state (Fig. 6, item ST600 “Default”) before the adjustment when a predetermined restoration condition “release condition” is satisfied after the settings are adjusted “In step S2518, the server apparatus 10 determines whether the release condition (see the object information 1800 in FIG. 18) concerned with the k-th predetermined object is satisfied. When the release condition concerned with an exclusively used state and the release condition concerned with a shared state are associated with the k-th predetermined object (see, for example, predetermined object ID “B01” in FIG. 18), the release condition concerned with the exclusively used state is used. When the determination result is “YES”, the predetermined cancellation condition is satisfied, and the process proceeds to step S2520; otherwise, the process in the current cycle ends …” ¶ [0239]. Additionally, “In step S2522, the server apparatus 10 transmits an animation stop command to the terminal apparatus 20 rendering an animation for the k-th predetermined object. In this case, the associated terminal apparatus 20 stops the animation mode and renders various objects (including the user avatar M1) in the normal mode.” ¶ [0241].
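A minimal sketch of the release flow of steps S2518-S2522 (¶¶ [0239]-[0241]), assuming simple dictionary-and-callback objects that are hypothetical: when the release condition is satisfied, the animation stops and rendering returns to the normal (default) mode.

    def process_release(obj, terminal):
        # Prefer the exclusive-use release condition when both conditions are
        # associated with the object (cf. predetermined object ID "B01", FIG. 18).
        condition = obj.get("exclusive_release") or obj.get("shared_release") or (lambda: False)
        if condition():  # S2518: is the release condition satisfied?
            terminal["mode"] = "normal_mode"  # S2520/S2522: stop the animation and
            return True                       # return to the default state (ST600)
        return False  # otherwise the process in the current cycle ends

    # Example with a trivially satisfied exclusive-use release condition:
    print(process_release({"exclusive_release": lambda: True}, {"mode": "animation_mode"}))  # True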
Regarding claim 13, Shirai teaches [a]n information processing system, comprising:
an information processing device including a first communication circuit “FIG. 27 is a block diagram of processing circuitry that performs computer-based operations in accordance with the present disclosure. FIG. 27 illustrates processing circuitry 300 of server apparatus 10 and/or terminal apparatus 20” (Shirai, ¶ [0255]); the server additionally contains a communication section (Fig. 1, item 11), a first memory that stores a first program “The process data and instructions may be stored in memory 302” (Shirai, ¶ [0257]), and at least one first processor that executes the first program “A processing circuit includes a particularly programmed processor, for example, processor (CPU) 301, as shown in FIG. 27. A processing circuit also includes devices such as an application specific integrated circuit (ASIC) and conventional circuit components arranged to perform the recited functions” (Shirai, ¶ [0259]); and
a terminal device including a display “As shown in FIG. 7, the terminal apparatus 20 includes an object information storage section 240, the animation data storage section 244, a server data acquisition section 250, a user input acquisition section 252, an avatar movement processing section 254, and a rendering section 258” (Shirai, ¶ [0108]) where the rendering section includes a method of displaying, a second communication circuit “The configuration of the terminal apparatus 20 will be described. As shown in FIG. 1, the terminal apparatus 20 includes the terminal communication section 21 …” (Shirai, ¶ [0064]), a second memory that stores a second program “… the terminal storage section 22 …” (Shirai, ¶ [0064]), and at least one second processor that executes the second program “… and the terminal control section 25” (Shirai, ¶ [0064]), wherein the first processor causes information on a virtual space in which an object is placed to be transmitted by the first communication circuit “As shown in FIG. 16, the server apparatus 10 includes an object information storage section 140, the distance state storage section 144, the animation information storage section 146, a positional relationship calculation section 150 (an example of an information generating section), an interference detecting section 154, an object management section 156, an animation control section 157, and a prohibiting section 158” (Shirai, ¶ [0145]), all of which is communicated to the terminal device over the network;
wherein the second processor displays the virtual space on the display “the server apparatus 10 transmits a rendering command including a data ID concerned with associated first animation data to the associated terminal apparatus 20 in order to render an animation” (Shirai, ¶ [0214]) where the rendered space is displayed, based on the information on the virtual space in which the object received via the second communication circuit is placed “… the associated terminal apparatus 20 renders an animation in accordance with the first animation data associated with the data ID included in the rendering command” (Shirai, ¶ [0214]);
wherein the first processor adjusts settings related to a display form of the object in the virtual space “… the server apparatus 10 transmits a rendering command including a data ID concerned with associated second animation data to the associated terminal apparatus 20 in order to render an animation concerned with a combination of the k-th predetermined object with the user avatar M1 that has transitioned to the proximity state (in FIG. 23, indicated by “animation in the proximity state”)” (Shirai, ¶ [0202]) where the display form is changed for the object animation to proceed and the rendering instructions based on the animation are transmitted, based on avatar information on an avatar “… the server apparatus 10 determines whether there is a transition from a non-proximity state to a proximity state in relation to the positional relationship information concerned with the k-th predetermined object for any one user avatar M1” (Shirai, ¶ [0201]), when the avatar satisfying a predetermined condition is located within a reference range including a location of the object in the virtual space in which the object is placed “In step S2318, the server apparatus 10 determines whether the distance (object-to-object distance) between any one user avatar M1 and the k-th predetermined object is shorter than or equal to the second predetermined distance d2 in accordance with the positional relationship information 1900 (see FIG. 19)” (Shirai, ¶ [0201]);
wherein the first processor causes the first communication circuit to transmit the settings related to the display form of the object after the adjustment “In step S2320, the server apparatus 10 transmits a rendering command including a data ID concerned with associated second animation data to the associated terminal apparatus 20 in order to render an animation concerned with a combination of the k-th predetermined object with the user avatar M1 that has transitioned to the proximity state (in FIG. 23, indicated by “animation in the proximity state”)” (Shirai, ¶ [0202]); and
wherein the second processor causes the display to display the object “The terminal control section 25 launches a virtual reality application in response to user's operation. The terminal control section 25 cooperates with the server apparatus 10 to execute various processes concerned with virtual reality. For example, the terminal control section 25 causes the display section 23 to display a virtual space image” (Shirai, ¶ [0075]), based on the settings related to the display form of the object after the adjustment received via the second communication circuit “server apparatus 10 executes an adjustment process on the predetermined object in the exclusively used state by one user avatar M1 (hereinafter, also referred to as “second adjustment process”)” (Shirai, ¶ [0206]). It is also noted that the server apparatus and terminal apparatus may have interchangeable capabilities.
Shirai, Fig. 1, showing the server apparatus (item 10), which communicates over a network (item 3) with the terminal apparatus (item 20) to display to the viewer via the display section (item 23).
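A minimal sketch of the server-to-terminal rendering-command flow that this claim 13 mapping traces (¶¶ [0202], [0214]); the classes and the data-ID format are hypothetical stand-ins, not Shirai's apparatus.

    class TerminalApparatus:
        def receive(self, data_id):
            # Render the animation associated with the data ID carried by the
            # rendering command, then show it on the display section (cf. ¶ [0214]).
            print(f"rendering animation {data_id} on the display")

    class ServerApparatus:
        def on_proximity(self, object_id, avatar_id, terminal):
            # After adjusting the display-form settings, transmit them via the
            # first communication circuit as a rendering command (cf. ¶ [0202]).
            terminal.receive(f"anim:{object_id}:{avatar_id}")

    ServerApparatus().on_proximity("B01", "M1", TerminalApparatus())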
Regarding claim 14, Shirai teaches [a]n information processing device comprising a processing unit “FIG. 27 is a block diagram of processing circuitry that performs computer-based operations in accordance with the present disclosure. FIG. 27 illustrates processing circuitry 300 of server apparatus 10 and/or terminal apparatus 20 …” (Shirai, ¶ [0255]), wherein the processing unit adjusts settings related to a display form of an object in a virtual space “the server apparatus 10 transmits a rendering command including a data ID concerned with associated second animation data to the associated terminal apparatus 20” (Shirai, ¶ [0202]), based on avatar information on an avatar “… the server apparatus 10 determines whether there is a transition from a non-proximity state to a proximity state in relation to the positional relationship information concerned with the k-th predetermined object for any one user avatar M1” (Shirai, ¶ [0201]), when the avatar satisfying a predetermined condition is located within a reference range including a location of the object in the virtual space in which the object is placed “In step S2318, the server apparatus 10 determines whether the distance (object-to-object distance) between any one user avatar M1 and the k-th predetermined object is shorter than or equal to the second predetermined distance d2 in accordance with the positional relationship information 1900 (see FIG. 19)” (Shirai, ¶ [0201]).
Regarding claim 15, Shirai teaches [a] computer-readable recording medium that records a program for causing a computer to perform processing of adjusting settings related to a display form of an object in a virtual space “the instructions may be stored on CDs, DVDs, in FLASH memory, RAM, ROM, PROM, EPROM, EEPROM, hard disk or any other non-transitory computer readable medium of an information processing device with which the processing circuitry 300 communicates, such as a server or computer.” (Shirai, ¶ [0257]), based on avatar information on an avatar “… the server apparatus 10 determines whether there is a transition from a non-proximity state to a proximity state in relation to the positional relationship information concerned with the k-th predetermined object for any one user avatar M1” (Shirai, ¶ [0201]), when the avatar satisfying a predetermined condition is located within a reference range including a location of the object in the virtual space in which the object is placed “In step S2318, the server apparatus 10 determines whether the distance (object-to-object distance) between any one user avatar M1 and the k-th predetermined object is shorter than or equal to the second predetermined distance d2 in accordance with the positional relationship information 1900 (see FIG. 19)” (Shirai, ¶ [0201]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Shirai in view of Akihiko Shirai (Pat. Pub. US-20230162433-A1, hereinafter “Shirai 2”).
Regarding claim 2, Shirai teaches [t]he information processing method according to claim 1,
Shirai does not teach wherein the processor determines that the predetermined condition is satisfied when the avatar possesses a predetermined corresponding object that corresponds to the object.
Shirai 2 teaches wherein the processor determines that the predetermined condition is satisfied when the avatar possesses a predetermined corresponding object that corresponds to the object “When the second predetermined condition is satisfied, the server device 10 generates an item acquisition event for the user avatar(s) M1 that satisfies the second predetermined condition (step S245). After that, the administrative user generates a gathering instruction input via the terminal device 20-3 as appropriate (step S246). In this case, for example, each user avatar M1 is forcibly moved to a corresponding predetermined gathering place according to the gathering instruction input (step S247). As described above, instead of forcibly moving, based on a response input (approval input) from each user, each user avatar M1 may be forcibly moved to a corresponding predetermined gathering place. Then, when a mission completion condition is satisfied (step S248), the server device 10 terminates the mission (step S249). The mission completion condition is arbitrary, and may be set based on, for example, elapsed time” (Shirai 2, ¶ [0276]) where the acquisition of a predetermined object satisfies a condition and causes a corresponding operation.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of displaying a virtual object once the avatar is close taught by Shirai with the use of an action requiring a predetermined object taught by Shirai 2 to produce a virtual object display that is based on a predetermined condition such as when the user possesses a certain object within the virtual space. The motivation to do so would be to only allow certain viewers to interact with, or visualize, a specific object, scene, or action.
Regarding claim 3, Shirai teaches [t]he information processing method according to claim 1,
Shirai does not explicitly teach wherein the processor determines that the predetermined condition is satisfied when the avatar is performing a shooting operation to capture the virtual space.
Shirai 2 teaches wherein the processor determines that the predetermined condition is satisfied when the avatar is performing a shooting operation to capture the virtual space “the predetermined event is the image acquisition event, and the preparation processing includes processing for activating the operation button B602 (see FIG. 6) (making the operation button B602 operable), which is the shutter button, as an interface for generating an image acquisition event. Preparation processing is performed on the terminal image for the user for which it is determined that the first predetermined condition is satisfied” (Shirai 2, ¶ [0160]) where the user is acquiring an image causing the predetermined event to be satisfied and unlocking the corresponding operation.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of displaying a virtual object once the avatar is close taught by Shirai with the use of displaying the object when the user is taking a photo taught by Shirai 2 to produce a virtual object display that is based on a predetermined condition such as when taking a photo. The motivation to do so would be to allow for easier image capturing in the virtual space, where the desired object to photograph will be present.
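A minimal sketch of the two predetermined-condition checks these combinations rely on — possession of a corresponding object (claim 2) and an active shooting operation (claim 3, cf. the shutter button B602 in Shirai 2, ¶ [0160]); the field names are hypothetical and not drawn from either reference.

    def predetermined_condition_met(avatar):
        # Claim 2: the avatar possesses a predetermined corresponding object.
        has_corresponding_object = "corresponding_object" in avatar.get("inventory", [])
        # Claim 3: the avatar is performing a shooting operation (shutter active).
        is_shooting = avatar.get("shutter_button_active", False)
        return has_corresponding_object or is_shooting

    print(predetermined_condition_met({"inventory": ["corresponding_object"]}))  # True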
Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Shirai in view of Masashi Watanabe (渡邊 匡志) (Pat. Pub. JP-7324470-B2, hereinafter “Watanabe”).
Regarding claim 4, Shirai teaches [t]he information processing method according to claim 1, wherein the processor adjusts the settings for an orientation as the display form of the object “an animation relates to a combination of the predetermined object with the hand part M2 of the user avatar M1. In this case, the animation rendering section 2584 may perform rendering such that the animation of the hand part M2 is naturally coupled to the wrist joint of the user avatar M1 in accordance with the position and orientation of the wrist of the user avatar M1” (Shirai, ¶ [0139]), based on the position of the avatar identified according to the avatar information “ … a rendering process implements an animation rendering function of rendering an animation of a combination of a predetermined object with a user object when the distance between the predetermined object and the user object is shorter than or equal to a first predetermined distance” (Shirai, ¶ [0078]).
Shirai does not explicitly teach where a predetermined region of the object faces the avatar.
Watanabe teaches so that a predetermined region of the object faces the avatar “the local coordinate system may be set with the orientation indicated by the position/orientation information relating to the representative point of the user avatar M1 as the front direction. Alternatively, as shown in FIG. 14, when the dialogue partner user avatar M1 (in FIG. 14, the user avatar M1 with the username “user B”) is identified, the local coordinate system faces the dialogue partner user avatar M1. The direction may be set as the front direction” (Watanabe, Page 19) where a dialogue partner is an object, and the two avatars face each other.
Watanabe, Fig. 14, user avatars (item M1) facing each other.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of displaying a virtual object once the avatar is close taught by Shirai with the use of objects facing the user avatar taught by Watanabe to produce a virtual object display that presents a predetermined region to the user once a predetermined condition such as proximity is satisfied. The motivation to do so would be to allow for multiple users to view a presenting face of the virtual object.
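A minimal sketch of the claim 4 combination — turning an object's predetermined front region toward the avatar, in the spirit of Watanabe's front-direction local coordinate system (FIG. 14); the yaw convention and function name are assumptions.

    import math

    def yaw_toward(object_pos, avatar_pos):
        # Yaw angle (radians) about the vertical axis that points the object's
        # predetermined front region at the avatar's position.
        dx = avatar_pos[0] - object_pos[0]
        dz = avatar_pos[2] - object_pos[2]
        return math.atan2(dx, dz)

    print(yaw_toward((0.0, 0.0, 0.0), (1.0, 0.0, 1.0)))  # ~0.785 rad (45 degrees)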
Regarding claim 5, Shirai teaches [t]he information processing method according to claim 1, wherein the processor adjusts the settings for the orientation as the display form of the object “an animation relates to a combination of the predetermined object with the hand part M2 of the user avatar M1. In this case, the animation rendering section 2584 may perform rendering such that the animation of the hand part M2 is naturally coupled to the wrist joint of the user avatar M1 in accordance with the position and orientation of the wrist of the user avatar M1” (Shirai, ¶ [0139]).
Shirai does not explicitly teach that the predetermined region of the object faces a representative point of a plurality of avatars, each avatar identical to the avatar, when the plurality of avatars satisfying the predetermined condition is located within the reference range.
Watanabe teaches that the predetermined region of the object faces a representative point of a plurality of avatars “… the user avatar M1 has one or more specific parts whose orientation can be changed, the position/orientation information of the user avatar M1 may include information representing the orientation of the one or more specific parts … for example, the face, upper body, eyes, and the like” (Watanabe, Page 10), each avatar identical to the avatar “… the virtual space can also function as a place for interaction between users via the user avatar M1. … a plurality of users can make an appointment in advance and receive content in a specific space 70 at a predetermined time. In this case, a plurality of users can interact with each other by providing content” (Watanabe, Page 9) where the various user avatars can be objects, and a specific space can be accessed to allow for the plurality of avatars to face each other at a predetermined region (front facing), when the plurality of avatars satisfying the predetermined condition is located within the reference range “Each value of various camera parameters of the camera 60 may be calculated. That is, the sitting parameter calculation unit 1701 calculates various camera parameters of the virtual camera 60 associated with the one user avatar M1 based on the part orientation operation input (particularly, face part orientation operation input) associated with the one user avatar M1. Thereby, for example, the user can interact with a plurality of friend avatars based on a terminal image that can selectively display a plurality of friend avatars in front of the user avatar M1 when the user avatar M1 is seated” (Watanabe, Page 19) where multiple characters within the same specific space may be seated and face each other.
Watanabe, Fig. 13, user avatar (item M1) facing forward once sat.
Watanabe, Fig. 14, multiple user avatars (item M1) facing each other.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of displaying a virtual object once the avatar is close taught by Shirai with the use of multiple avatars taught by Watanabe to produce a virtual object display that can face multiple users once a predetermined condition such as proximity is satisfied. The motivation to do so would be to allow for multiple users to view a presenting face of the virtual object.
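For claim 5, a minimal sketch of a representative point for a plurality of avatars within the reference range, toward which the predetermined region could then be oriented (e.g., with the yaw helper sketched above); the centroid choice is an assumption, not Watanabe's stated method.

    def representative_point(avatar_positions):
        # Centroid of the avatars' positions; the object's front region can then
        # be yawed toward this single point.
        n = len(avatar_positions)
        return tuple(sum(p[i] for p in avatar_positions) / n for i in range(3))

    print(representative_point([(0.0, 0.0, 0.0), (2.0, 0.0, 2.0)]))  # (1.0, 0.0, 1.0)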
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Shirai in view of Chao Shen (Pat. Pub. WO-2019095360-A1, hereinafter “Shen”).
Regarding claim 9, Shirai teaches [t]he information processing method according to claim 1.
Shirai does not explicitly teach wherein the object is a watch object.
Shen teaches wherein the object is a watch object “The virtual watch 1003 is a complete object entity in the virtual space, while the menu 1004 is also a complete object entity in the virtual space. The virtual watch 1003 is used to implement the time display logic of the watch, the display rendering of the watch, and the menu 1004 is used to implement the display and hiding logic of the menu and the logic for interacting with the menu” (Shen, Page 14).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of displaying a virtual object once the avatar is close taught by Shirai with the virtual watch object taught by Shen to produce a proximity-based display of a watch object in the virtual space. The motivation to do so would be to allow the user to know the real-world time without taking off the helmet (Shen, Page 13).
Regarding claim 10, Shirai in view of Shen teaches [t]he information processing method according to claim 9, wherein the processor adjusts the settings so that the object displays analog time “The virtual watch can be either digital or pointer-style. The time is displayed on the virtual watch so that the character associated with the client 101 can know the real world time without taking off the helmet” (Shen, Page 13) when the distance between the object and the avatar is greater than or equal to a reference distance “ … a rendering process implements an animation rendering function of rendering an animation of a combination of a predetermined object with a user object when the distance between the predetermined object and the user object is shorter than or equal to a first predetermined distance” (Shirai, ¶ [0078]).
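A minimal sketch of the combined teaching for claim 10 — Shen's pointer-style/digital watch (Page 13) gated by Shirai's distance threshold (¶ [0078]); the parameter name and form labels are hypothetical.

    import math

    def watch_display_form(avatar_pos, watch_pos, reference_distance):
        # At or beyond the reference distance, show analog hands; when the
        # avatar is closer, switch to a digital readout.
        d = math.dist(avatar_pos, watch_pos)
        return "analog" if d >= reference_distance else "digital"

    print(watch_display_form((0.0, 0.0, 0.0), (3.0, 0.0, 0.0), 2.0))  # analog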
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Shirai in view of Honda Yasuaki (Pat. Pub. WO-2019095360-A1, hereinafter “Yasuaki”).
Regarding claim 12, Shirai teaches [t]he information processing method according to claim 1,
Shirai does not teach wherein the processor displays the object in a global display mode, in which the settings are adjusted so that the display forms of the object seen from the plurality of avatars are the same, or in a local display mode, in which the settings related to the display form of the object are adjusted separately and independently for each of the plurality of avatars, based on the relationship between each of the plurality of avatars and the object, and switches between the global display mode and the local display mode according to the operation of the avatar on a switching object provided in the virtual space.
Yasuaki teaches wherein the processor displays the object in a global display mode “The shared server terminal 11 is arranged in the virtual reality space, and serves as an update object that constitutes the virtual reality space. For example, an avatar representing each user is managed, so that a plurality of users can share the same virtual reality space” (Yasuaki, Page 2) where multiple users sharing the virtual space is considered global display, in which the settings are adjusted so that the display forms of the object seen from the plurality of avatars are the same “More specifically, the host A may, for example, read the streets of Tokyo in a three-dimensional virtual reality space, the streets of New York in a three-dimensional virtual reality space, or a three-dimensional virtual reality space in another predetermined area (hereinafter referred to as appropriate). 3D image data for providing a virtual reality space). Note that this three-dimensional image data includes only basic objects whose basic state does not change (even if it changes, it changes autonomously, such as a ferris wheel, a neon sign, etc.). There is no static data. For example, objects commonly used by a plurality of users, such as buildings and roads, are basic objects” (Yasuaki, Page 2). Additionally, “When the switch button 201 is clicked, the browser (the client terminal 13) connects to a predetermined bureau (the shared server terminal 11). When a browser is connected to a bureau, the bureau sends updates about the … browser, and connects updates about the already connected browser... Thereby, as described above, the same world is shared between a plurality of users (browser).” (Yasuaki, Page 28), or in a local display mode “a non-shared mode” (Yasuaki, Page 7), in which the settings related to the display form of the object are adjusted separately and independently for each of the plurality of avatars “… in which the three-dimensional virtual reality space is not shared with other users” (Yasuaki, Page 7), based on the relationship between each of the plurality of avatars and the object “At this stage, the client terminal 13 and the shared server terminal 11 are not connected (the link is not established), so that the client terminal 13 does not receive the update information, and therefore, the virtual reality space of only the basic object, that is, for example, A virtual reality space such as a building, such as a building, is displayed (updated objects such as avatars of other users are not displayed).” (Yasuaki, Page 17), and switches between the global display mode and the local display mode according to the operation of the avatar on a switching object provided in the virtual space “if it is determined that the switching button 201 has been clicked, the process proceeds to step S202, and the operation mode is set to the non-shared mode. That is, the display of the switching button 201 is changed as shown in FIG. 30, and the multi-user window is closed” (Yasuaki, Page 29). Additionally, “the three-dimensional virtual system is operated in response to the operation of the switching unit. The operation mode is switched to a shared mode in which the real space is shared with other users or a non-shared mode in which the real space is not shared with other users. Therefore, according to the user's request, The virtual reality space can be shared or not shared with other users.” (Yasuaki, Page 30).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the method of displaying a virtual object once the avatar is close taught by Shirai with the use of multiple avatars who can toggle a global or local display taught by Yasuaki to produce a virtual object display that can be viewed individually or by a group. The motivation to do so would be to allow for a user to experience the display on their own or with other users “… so that a plurality of users can share the same virtual reality space” (Yasuaki, Page 2).
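A minimal sketch of Yasuaki's switching-object behavior (Pages 28-29): clicking the switch button toggles between the shared (global) mode, in which the browser connects to the shared server and receives update information about other avatars, and the non-shared (local) mode, in which only basic objects are displayed. The class and attribute names are hypothetical.

    class Browser:
        def __init__(self):
            self.mode = "non_shared"          # default: local display, basic objects only
            self.connected_to_bureau = False

        def click_switch_button(self):
            if self.mode == "non_shared":
                self.mode = "shared"              # global display: same world shared
                self.connected_to_bureau = True   # receive update objects (other avatars)
            else:
                self.mode = "non_shared"
                self.connected_to_bureau = False  # update objects no longer displayed
            return self.mode

    b = Browser()
    print(b.click_switch_button())  # shared
    print(b.click_switch_button())  # non_shared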
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAIDEN ALEXANDER USSERY whose telephone number is (571)272-1192. The examiner can normally be reached Monday - Friday, 7:30 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CAIDEN ALEXANDER USSERY/Examiner, Art Unit 2611
/TAMMY PAIGE GODDARD/Supervisory Patent Examiner, Art Unit 2611