Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Tarama et al. (US Pub 2013/0194175 A1) in view of Nakashima (US Pub 2019/0333261 A1).
As to claim 1, Tarama discloses a system including one or more computers, the system comprising: one or more memories storing instructions; and one or more processors capable of executing the instructions causing the system to (Fig. 11, Fig. 12, ¶0096-0098):
execute control of a movement of an avatar in a virtual space (Fig. 1-Fig. 4, Fig. 7, ¶0077, “a character 32 that performs an action corresponding to the action of the user is displayed on a menu screen 30. The position of each body part of the character 32 changes based on a change in three-dimensional coordinates of each body part of the user. That is, the character 32 can be interpreted as an image for showing the user (presenting the user with) the positions of the respective body parts of the user which are being detected by the game device 20. The user can grasp that the positions of the respective body parts are being normally detected by looking at the character 32.”),
wherein, as the control of the movement, restriction of a movement of a movement subject based on detection data of a user is executed according to either a movement restriction mode for restricting a movement of the movement subject or an input restriction mode for restricting input from a user (Abstract, “A movement restriction unit restricts a movement of a movement subject within a display screen or a virtual space in a case where a numerical value indicating a magnitude of a displacement between a position of a hand or the like of a user and a reference position or a reference direction is smaller than the first reference value.”).
Tarama does not disclose an avatar.
Nakashima teaches restriction of a movement of an avatar based on detection data of a user corresponding to the avatar (¶0302, “a hand object serving as a virtual hand corresponding to a hand of the user 5. In at least one aspect, the control module 510 moves the hand object in the virtual space 11 so that the hand object moves in association with a motion of the hand of the user 5 in the real space based on output of the motion sensor 420. In at least one aspect, the operation object may correspond to a hand part of an avatar object.” ¶0353, “The avatar objects 6A, 6C, and 6D are all arranged in front of the stage object 1532. In the virtual space 11A, the avatar object 6B moves in accordance with the motion of the user 5B (second user), thereby performing a live performance. The avatar object 6A views the performance made by the avatar object 6B in the virtual space 11A.” ¶0360, “The user 5B moves his/her body to cause the avatar object 6B to carry out a performance. The computer 200B detects a motion of the user 5B, based on output from various motion sensors that the user 5B wears.” ¶0510, “The processor 210A may perform charging-related processing in accordance with how many times the avatar object 6A moved (the number of times of movement). The processor 210A registers charging information in accordance with the number of times of movement with the tag object 2733A, for example. The processor 210A calculates a higher charged amount for a larger number of times of movement and registers charging information indicating the calculated charged amount with the tag object 2733A, for example. In this case, the processor 210A may register optimum charging information in accordance with how many times the avatar object 6A moved with the tag object 2733A. The processor 210A may further prevent the avatar object 6A from moving in the second virtual space 2711A with no restriction and hampering the live performance made by the avatar object 6B.” ¶0516, “the position to which the processor 210A allows the avatar object 6A to move is limited to the one within a specific range in the second virtual space 2711A. The processor 210A does not allow the avatar object 6A to move onto the stage object 1532 in principle, for example.” ¶0517, “This enables the processor 210A to prevent the avatar object 6A from moving during the live performance arbitrarily without the user 5B's permission. In other words, the user 5B may willfully specify the timing at which the avatar object 6A is allowed to move. This also enables the user 5B to control the staging of the live performance in a manner more preferable for the user 5B.”).
Tarama and Nakashima are considered to be analogous art because both pertain to virtual spaces. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Tarama with the features of “restriction of a movement of an avatar based on detection data of a user corresponding to the avatar” as taught by Nakashima. The suggestion/motivation would have been to prevent the avatar object 6A from moving in the second virtual space with no restriction and hampering the live performance made by the avatar object 6B (Nakashima, ¶0510).
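The limitation as mapped above can be pictured with a minimal sketch in Python, assuming hypothetical names, coordinates, and a hypothetical threshold value (neither reference discloses code): the avatar follows the user's detected hand, except that movement is restricted when the magnitude of the displacement from a reference position is smaller than a first reference value (cf. Tarama, Abstract), with the moved subject being an avatar as taught by Nakashima.

    from dataclasses import dataclass
    import math

    FIRST_REFERENCE_VALUE = 0.15  # hypothetical threshold; Tarama leaves the value unspecified

    @dataclass
    class Avatar:
        x: float = 0.0
        y: float = 0.0

    def update_avatar(avatar: Avatar, hand: tuple, reference: tuple) -> None:
        # Movement restriction: a displacement magnitude smaller than the
        # first reference value does not move the avatar (cf. Tarama, Abstract).
        if math.dist(hand, reference) < FIRST_REFERENCE_VALUE:
            return
        # Otherwise the avatar follows the detection data of the user
        # (an avatar corresponding to the user, as in Nakashima).
        avatar.x, avatar.y = hand

In this sketch, update_avatar(Avatar(), (0.05, 0.0), (0.0, 0.0)) leaves the avatar in place, while a displacement at or above the reference value moves it.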
As to claim 2, claim 1 is incorporated and the combination of Tarama and Nakashima discloses the instructions further cause the system to receive a first setting regarding a movement of an avatar that is to be restricted in the movement restriction mode (Nakashima, ¶0510, “The processor 210A may further prevent the avatar object 6A from moving in the second virtual space 2711A with no restriction and hampering the live performance made by the avatar object 6B.” ¶0516, “the position to which the processor 210A allows the avatar object 6A to move is limited to the one within a specific range in the second virtual space 2711A. The processor 210A does not allow the avatar object 6A to move onto the stage object 1532 in principle, for example.” ¶0517, “This enables the processor 210A to prevent the avatar object 6A from moving during the live performance arbitrarily without the user 5B's permission. In other words, the user 5B may willfully specify the timing at which the avatar object 6A is allowed to move. This also enables the user 5B to control the staging of the live performance in a manner more preferable for the user 5B.”).
As to claim 3, claim 2 is incorporated and the combination of Tarama and Nakashima discloses in the first setting, a movement of an avatar that is to be restricted is set by designating a region in which movement of an avatar is to be restricted, and wherein, as the control of the movement, a movement of a region of an avatar is restricted based on the first setting (Nakashima, ¶0510, “The processor 210A may further prevent the avatar object 6A from moving in the second virtual space 2711A with no restriction and hampering the live performance made by the avatar object 6B.” ¶0516, “the position to which the processor 210A allows the avatar object 6A to move is limited to the one within a specific range in the second virtual space 2711A. The processor 210A does not allow the avatar object 6A to move onto the stage object 1532 in principle, for example.” ¶0517, “This enables the processor 210A to prevent the avatar object 6A from moving during the live performance arbitrarily without the user 5B's permission. In other words, the user 5B may willfully specify the timing at which the avatar object 6A is allowed to move. This also enables the user 5B to control the staging of the live performance in a manner more preferable for the user 5B.”).
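The region-based first setting of claim 3 can likewise be sketched, assuming hypothetical bounds and names: a designated spatial region, standing in for the stage object onto which Nakashima does not allow the avatar to move (¶0516), blocks moves into it.

    from dataclasses import dataclass

    @dataclass
    class Region:
        x_min: float
        x_max: float
        y_min: float
        y_max: float

        def contains(self, x: float, y: float) -> bool:
            return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    # First setting: the designated region in which avatar movement is restricted
    # (hypothetical bounds standing in for the stage object of Nakashima, ¶0516).
    STAGE = Region(-2.0, 2.0, 4.0, 6.0)

    def try_move(current: tuple, target: tuple) -> tuple:
        # A move into the restricted region is refused; the avatar stays put.
        return current if STAGE.contains(*target) else target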
As to claim 4, claim 2 is incorporated and the combination of Tarama and Nakashima discloses in the first setting, a movement of an avatar that is to be restricted is set by designating a movement shape of an avatar which is to be restricted, and wherein, as the control of the movement, a movement of an avatar is restricted, based on the first setting such that the avatar is restricted from forming a shape matching or partially matching the designated movement shape of the avatar (Nakashima, ¶0510, “The processor 210A may further prevent the avatar object 6A from moving in the second virtual space 2711A with no restriction and hampering the live performance made by the avatar object 6B.” Fig. 28, ¶0518, “The processor 210A may detect a first event in the second virtual space 2711A. The first event is, for example, making any action by the avatar object 6B. Examples of the action of the avatar object 6B include the moving of the avatar object 6B to another position on the stage object 1532 as in FIG. 55A. The first event may be a specific performance carried out by the avatar object 6B on the stage object 1532 for a certain period of time or more. Upon detecting the first event, the processor 210A allows the avatar object 6A to move in the second virtual space 2711A. This may prevent the avatar object 6A from moving haphazardly during the live performance.” “Movement shape” is a broad term, and a movement of an avatar can have a movement area that constitutes a “shape.”).
As to claim 5, claim 1 is incorporated and the combination of Tarama and Nakashima discloses the instructions further cause the system to receive a second setting regarding input from a user that is to be restricted in the input restriction mode (Tarama, ¶0008, “movement restriction means for restricting a movement of the movement subject in a case where a numerical value indicating a magnitude of a displacement between a position indicated by the position information and one of a reference position and a reference direction is smaller than a first reference value” ¶0115, “The movement restriction unit 56 restricts the movement of the movement subject (item images 36) in a case where a numerical value indicating a magnitude of a displacement between a position indicated by the position information and a reference position or a reference direction (hereinafter, the numerical value indicating the magnitude of the displacement is referred to simply as “numerical value”) is smaller than a first reference value. The first reference value is a predetermined value that defines a range of the numerical value for restricting the scrolling of the item images 36.” ¶0169, “Further, by causing the movement control unit 54 to perform the control in a case where the user holds out the right hand P6, it is possible to prevent such an erroneous operation that the scrolling starts in a case where the user who has no intention to cause the scrolling moves the right hand P6 in the left-right direction. Further, by restricting the control of the movement control unit 54 in a case where the user lowers the right hand P6, it is possible to easily instruct to lift the restriction of the scroll control.” ¶0170, “by setting different hands as the hand used for the scrolling and the hand for executing the processing corresponding to the menu item, it is possible to prevent the processing corresponding to the menu item from being erroneously executed during the scrolling.”).
As to claim 6, claim 5 is incorporated and the combination of Tarama and Nakashima discloses in the second setting, the input from the user that is to be restricted is set by designating a body region of a user from which input is to be restricted, and wherein, as the control of the movement, a movement of an avatar that follows detection data of the designated body region of the user, in detection data of a user corresponding to an avatar, is restricted based on the second setting (Tarama, ¶0170, “by setting different hands as the hand used for the scrolling and the hand for executing the processing corresponding to the menu item, it is possible to prevent the processing corresponding to the menu item from being erroneously executed during the scrolling.”).
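The body-region designation of claim 6 admits a short sketch, with hypothetical part names: detection data for a designated body region is filtered out before it drives the avatar, in the spirit of Tarama's assignment of different hands to scrolling and to menu execution (¶0169-0170).

    # Second setting: body regions of the user whose input is to be restricted
    # (hypothetical designation).
    RESTRICTED_PARTS = {"left_hand"}

    def filter_detection_data(detection_data: dict) -> dict:
        # Detected positions of restricted body parts are dropped, so the
        # avatar does not follow movements of those parts.
        return {part: position for part, position in detection_data.items()
                if part not in RESTRICTED_PARTS}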
As to claim 7, claim 5 is incorporated and the combination of Tarama and Nakashima discloses in the second setting, input from a user that is to be restricted is set by designating a shape of a user from which input is to be restricted, and wherein, as the control of the movement, a movement of an avatar that follows detection data of a range matching or partially matching the designated shape of the user, in detection data of a user corresponding to an avatar, is restricted based on the second setting (Tarama, ¶0050, “The position detecting device 1 generates user body part information relating to a position of the user in a real space. In this embodiment, description is given of a case where the user body part information includes information relating to positions of a plurality of body parts of the user 100. The body parts of the user 100 include, for example, a head and both arms.” ¶0051, “the position detecting device 1 includes, for example, a CCD camera 2, an infrared sensor 3, and a microphone 4 including a plurality of microphones.” ¶0060, “the infrared sensor 3 also detects an outline of a subject (user 100) by detecting depth differences acquired from the reflected infrared light.” ¶0077, “[t]he character 32 can be interpreted as an image for showing the user (presenting the user with) the positions of the respective body parts of the user which are being detected by the game device 20.” Nakashima, ¶0299, “The control module 510 detects parts (e.g., mouth, eyes, and eyebrows) forming the face of the user 5 from a face image of the user 5 generated by the first camera 150 and the second camera 160. The control module 510 detects a motion (shape) of each detected part.”).
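The shape matching of claim 7 can be sketched as a naive per-part tolerance test, with hypothetical names; neither reference prescribes a particular matching algorithm, so the criterion below is purely an assumption.

    import math

    def matches_designated_shape(pose: dict, designated: dict, tol: float = 0.1) -> bool:
        # A detected pose matches or partially matches the designated shape of
        # the user when every shared body part lies within a tolerance of the
        # designated position (hypothetical criterion).
        shared = pose.keys() & designated.keys()
        return bool(shared) and all(
            math.dist(pose[p], designated[p]) <= tol for p in shared
        )

    def follow_user(avatar_pose: dict, detected: dict, designated: dict) -> dict:
        # Movement following detection data that matches the designated shape
        # is restricted; otherwise the avatar follows the user.
        return avatar_pose if matches_designated_shape(detected, designated) else detected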
As to claim 8, claim 7 is incorporated and the combination of Tarama and Nakashima discloses in the second setting, the shape of the user from which input is to be restricted is designated using image capturing data obtained by capturing an image of the shape of the user from which input is to be restricted (Tarama, ¶0051, “the position detecting device 1 includes, for example, a CCD camera 2, an infrared sensor 3, and a microphone 4 including a plurality of microphones.” ¶0060, “the infrared sensor 3 also detects an outline of a subject (user 100) by detecting depth differences acquired from the reflected infrared light.” Nakashima, ¶0299, “The control module 510 detects parts (e.g., mouth, eyes, and eyebrows) forming the face of the user 5 from a face image of the user 5 generated by the first camera 150 and the second camera 160. The control module 510 detects a motion (shape) of each detected part.”).
As to claim 9, the combination of Tarama and Nakashima discloses a control method for a system including one or more computers, the control method comprising: executing control of a movement of an avatar in a virtual space, wherein, as the control of the movement, restriction of a movement of an avatar based on detection data of a user corresponding to the avatar is executed according to either a movement restriction mode for restricting a movement of an avatar or an input restriction mode for restricting input from a user (see claim 1 for detailed analysis).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Perez (US 10,373,342 B1) teaches that virtual characters and objects can be moved and manipulated using selection shapes.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN whose telephone number is (571) 270-7951. The examiner can normally be reached M-F, 8-5 PST (mid-day flex).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached on 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YU CHEN/Primary Examiner, Art Unit 2613