Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 7-8, and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Handa et al. (U.S. PGPUB 20230104139) in view of Dawson et al. (U.S. PGPUB 20140379752).
With respect to claim 1, Handa et al. disclose an information processing system (paragraph 68, FIG. 2 is a block diagram of the information system A) comprising: one or more processors (paragraph 68, processing unit 13) programmed to:
generate, in a virtual space, a specific object that enables an avatar to move to a specific position or a specific area in the virtual space (paragraph 79, The portal storage unit 114 stores one or more pieces of portal information. The portal information is information related to a portal. The portal is an object for the avatar to move between two VR spaces, paragraph 91, The installation instruction is an instruction to install a portal in a first VR space), and
associate the specific object with (ii) information regarding at least one of (a) an attribute of the specific object and (b) an attribute of the specific position or an attribute of the specific area (paragraph 81, The portal information has one or more portal attribute values. The one or more portal attribute values are, for example, a first identifier specifying the first VR space, a second identifier specifying the second VR space, first installation position information, and second movement position information). However, Handa et al. do not expressly disclose associating the specific object with (i) a condition for using the specific object, wherein the condition for using the specific object is a condition that must be met in order for the avatar to use the specific object, the condition for using the specific object includes a condition regarding a number of avatars that can move via the specific object at the same time, and the number of avatars that can move via the specific object at the same time is dynamically varied according to a degree of congestion of the specific position or the specific area to which the avatars move via the specific object.
Dawson et al., who also deal with virtual environments, disclose a method for associating the specific object with (i) a condition for using the specific object, wherein the condition for using the specific object is a condition that must be met in order for the avatar to use the specific object (paragraph 46, When the "suitable" output is enabled or asserted, the teleport destination is transferred to dialogue display control 320 and teleport control 340, whereupon the resident is notified that the destination is determined to be suitable, preferably by a dialogue box similar to that depicted in FIG. 4A which also provides for confirming the destination and initiating a teleport operation by issuing a signal to teleport control 340), the condition for using the specific object includes a condition regarding a number of avatars that can move via the specific object at the same time, and the number of avatars that can move via the specific object at the same time is dynamically varied according to a degree of congestion of the specific position or the specific area to which the avatars move via the specific object (paragraph 50, if a user criterion is crowdedness and the user has defined (either explicitly or implicitly through the avatar's teleporting history or a combination thereof) "crowded" as being x avatars per square foot for a park, y avatars per square foot for a shopping mall and z avatars per square foot for a rock concert, the requested destination can be scanned for the current avatar population density to determine suitability…the VU may also provide definitions of crowdedness or specify maximum avatar occupancy for a location or, since different avatars may present different processing burdens for rendering, determine an occupancy based on current system performance or projected performance if the requesting avatar were to be added to the avatar population). 
The maximum avatar occupancy directly affects the number of avatars that can move via teleporting.
Handa et al. and Dawson et al. are in the same field of endeavor, namely computer graphics.
Before the effective filing date of the claimed invention, it would have been obvious to apply the method of associating the specific object with (i) a condition for using the specific object, wherein the condition for using the specific object is a condition that must be met in order for the avatar to use the specific object, the condition for using the specific object includes a condition regarding a number of avatars that can move via the specific object at the same time, and the number of avatars that can move via the specific object at the same time is dynamically varied according to a degree of congestion of the specific position or the specific area to which the avatars move via the specific object, as taught by Dawson et al., to the Handa et al. system, because it is desirable that some mechanism, not previously available in virtual universes, be provided to regulate populations of avatars in locations both to provide a suitable experience consonant with the environment and to maintain acceptable response time of the VU system (paragraph 35 of Dawson et al.).
With respect to claim 7, Handa et al. as modified by Dawson et al. disclose the information processing system according to claim 1, wherein the attribute of the specific object includes at least two of (i) a setting state of whether consumption of the specific object accompanying use by the avatar is possible (Handa et al.: paragraph 231, Here, the “deletion condition” is a condition for deleting a portal installed in the past in a case where the user installed the portal. An attribute value “2 or less” of the “deletion condition” indicates that the user can install up to two portals, and means that a portal whose installation date and time is the oldest, the portal having been installed by the same user, is deleted in a case where the third portal has been installed. An attribute value “maximum 1” of the “deletion condition” indicates that the user can install only one portal, and means that a portal whose installation date and time is the oldest, the portal having been installed by the same user, is deleted in a case where the second portal has been installed), (ii) a setting state of whether the specific object can be carried by the avatar, (iii) a setting state of whether the specific object is fixed in the virtual space, (iv) a setting state of whether the specific object is stored in a pocket of clothing of the avatar or inside the avatar, (v) a setting state of whether duplication of the specific object is possible (Handa et al.: paragraph 84, In a case where the portal is the bidirectional portal, second installation position information and first movement position information may be included in one or more portal attribute values included in portal information. The second installation position information is information specifying an installation position of the portal in the second VR space, duplication is possible based on a second position of the portal), and (vi) a setting state of whether ownership transfer of the specific object is possible.
With respect to claim 8, Handa et al. as modified by Dawson et al. disclose the information processing system according to claim 1, wherein the one or more processors set or update the condition for using the specific object based on a state pertaining to the specific position or the specific area (Handa et al.: paragraph 231, it is assumed that the terminal information storage unit 111 of the information processing device 1 stores a terminal information management table having a structure illustrated in FIG. 9. The terminal information management table is a table that manages terminal information. The terminal information management table manages one or more records having a “terminal identifier”, a “world identifier”, an “avatar identifier”, an “installation condition”, a “deletion condition”, a “confirmation condition”, and a “movement condition”). FIG. 9 shows a table with different conditions of the portal, including conditions related to the position of the portal and/or the avatar.
With respect to claim 16, Handa et al. as modified by Dawson et al. disclose a non-transitory computer-readable medium storing thereon a program (Handa et al.: paragraph 135, The procedure of the processing performed by the processing unit 13 and the like is generally achieved by software, and the software is recorded in a recording medium such as a read-only memory (ROM)) that causes a computer to perform the operations of the system of claim 1; see rationale for rejection of claim 1.
With respect to claim 17, Handa et al. as modified by Dawson et al. disclose an information processing method as executed by the system of claim 1; see rationale for rejection of claim 1.
With respect to claim 18, Handa et al. as modified by Dawson et al. disclose an information processing device (Handa et al.: paragraph 278, An information system B in such a case includes two or more information processing devices 1. FIG. 20 is a block diagram of the information system B) comprising: one or more processors (Handa et al.: Fig. 20, processing unit 13) programmed to execute the system of claim 1; see rationale for rejection of claim 1.
Claims 2-6, 10-13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Handa et al. (U.S. PGPUB 20230104139) in view of Dawson et al. (U.S. PGPUB 20140379752) and further in view of Hamilton, II et al. (U.S. PGPUB 20100036729).
With respect to claim 2, Handa et al. as modified by Dawson et al. disclose the information processing system according to claim 1. However, Handa et al. as modified by Dawson et al. do not expressly disclose the one or more processors are further programmed to set a predetermined guidance process (i) via a first predetermined object accompanying the avatar or (ii) via a second predetermined object linked with the specific position or the specific area.
Hamilton, II et al., who also deal with virtual environments, disclose a method wherein the one or more processors are further programmed to set a predetermined guidance process (i) via a first predetermined object accompanying the avatar or (ii) via a second predetermined object linked with the specific position or the specific area (paragraph 40, the VU-ad may be displayed as a machinima of a commercial projected on a screen (e.g., a billboard, television screen, etc.) at a location in the VU, paragraph 58, The machinima may be played by a VU-ad provider, and may depict events happening in real time (or that have already happened) at another location in the VU).
Handa et al., Dawson et al., and Hamilton, II et al. are in the same field of endeavor, namely computer graphics.
Before the effective filing date of the claimed invention, it would have been obvious to apply the method wherein the one or more processors are further programmed to set a predetermined guidance process (i) via a first predetermined object accompanying the avatar or (ii) via a second predetermined object linked with the specific position or the specific area, as taught by Hamilton, II et al., to the Handa et al. as modified by Dawson et al. system, because this would allow potential customers (e.g., users controlling avatars in a VU) to engage in an immersive advertising experience in a VU (paragraph 17 of Hamilton, II et al.).
With respect to claim 3, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 2, wherein the predetermined guidance process includes a process of outputting information regarding the specific position or the specific area (Hamilton, II et al.: paragraph 58, the machinima may be a video of events happening at a VU simulation of a real world amusement park, paragraph 67, The video may be, for example, a trailer of an upcoming movie that the VU-ad provided is promoting).
With respect to claim 4, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 3, wherein the information regarding the specific position or the specific area includes a video pertaining to the specific position or the specific area (Hamilton, II et al.: paragraph 40, machinima may comprise a video rendered using real-time, interactive 3-D engines, such as those used in first-person-shooter games. As another example, machinima may comprise a video generated and/or shown in a virtual space where characters and events are controlled by at least one of: scripts, artificial intelligence, and humans, paragraph 67, At step 510, a video is played at a first location in the VU by the VU-ad provider).
With respect to claim 5, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 3, further comprising a first memory that stores a usage status or a usage history of the specific object by a plurality of the avatars (Hamilton, II et al.: paragraph 31, the VU-ad may be altered in any suitable fashion based upon any suitable combination of profile aspects, including, but not limited to: age, gender, ethnicity, occupation, income, hobbies, address, history of interacting with ads, etc.). It would have been obvious to include a first memory that stores a usage status or a usage history of the specific object by a plurality of the avatars because aspects of the VU-ad are automatically altered based upon the profile data (paragraph 31 of Hamilton, II et al.).
With respect to claim 6, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 2, wherein the second predetermined object includes at least one of (i) a first avatar associated with an area including a position of the specific object and (ii) a second avatar associated with the specific position or the specific area (Hamilton, II et al.: paragraph 30, the VU-ad comprises at least one ad avatar which interacts with at least one user avatar. The ad avatar may comprise an avatar bot (e.g., run by a script), or may be controlled by a human operator in real time, or some combination of both). It would have been obvious for the second predetermined object to include at least one of (i) a first avatar associated with an area including a position of the specific object and (ii) a second avatar associated with the specific position or the specific area because this would customize the experience for the user.
With respect to claim 10, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 1, wherein the one or more processors are further programmed to output a predetermined video while the avatar is moving to the specific position or the specific area (Hamilton, II et al.: paragraph 67, At step 510, a video is played at a first location in the VU by the VU-ad provider. The video may be, for example, a trailer of an upcoming movie that the VU-ad provided is promoting, paragraph 68, At step 520, a user who observes the video is permitted to move (e.g., walk, fly, etc.) his/her avatar into contact with the object on which the video is playing. At step 525, the user avatar is teleported to a second location of the VU. The second location may be any location, such as, for example, a pre-defined location of the VU where ad avatars and/or ad objects are acting out the scene in the video). It would have been obvious for the one or more processors to be further programmed to output a predetermined video while the avatar is moving to the specific position or the specific area because the VU-ad may automatically (or by human operator intervention) offer the user an incentive regarding the movie (e.g., a coupon for reduced admission, a promotional code to view limited-access videos at the movie's official website, etc.) (paragraph 70 of Hamilton, II et al.).
With respect to claim 11, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 10, wherein the one or more processors generate the predetermined video based on avatar information or user information associated with the avatar (Hamilton, II et al.: paragraph 31, the VU-ad may be altered in any suitable fashion based upon any suitable combination of profile aspects, including, but not limited to: age, gender, ethnicity, occupation, income, hobbies, address, history of interacting with ads, etc.). It would have been obvious for the one or more processors to generate the predetermined video based on avatar information or user information associated with the avatar because aspects of the VU-ad are automatically altered based upon the profile data (paragraph 31 of Hamilton, II et al.).
With respect to claim 12, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 10, wherein the one or more processors further associate, with the avatar, an item or an object corresponding to the specific position or the specific area (Hamilton, II et al.: paragraph 29, the VU-ad comprises at least one ad object at a location in the VU. Some features of the ad object may be unalterable, while other features may be altered by the avatar of a user interacting with the ad object. For example, the VU-ad may comprise a rendering of an automobile object in the VU that is approachable by avatars in the VU). It would have been obvious for the one or more processors to further associate, with the avatar, an item or an object corresponding to the specific position or the specific area because this would allow additional user interaction in the virtual environment.
With respect to claim 13, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 1, further comprising an action memory that, when the avatar moves to the specific position or the specific area via the specific object, stores at least one of (i) an action of the avatar during movement to the specific position or the specific area and (ii) an action of the avatar at the specific position or the specific area (Hamilton, II et al.: paragraph 28, the VU-ad is defined (e.g., via programming) in a manner that allows a user to cause their user avatar to interact with components of the VU-ad. Such interaction may include, for example, chatting with the ad avatars, opening the hood of the car, removing car parts to place in the user avatar inventory for later inspection, pushing an ad avatar out of the way to see the car better, deleting an ad avatar, changing the visual appearance of an ad avatar, and so forth, paragraph 61, The interaction may be any suitable interaction, such as those already described herein. For example, the VU-ad may include an ad avatar in the form of a mascot of the amusement park, and the user may ask the mascot if the user is tall enough to ride the newest roller coaster at the real world amusement park. In embodiments, the parameters of the interaction are defined in the programming of the VU-ad, which, as already described herein, may be stored at the host or the advertiser computing device, or some combination of both). 
It would have been obvious to include an action memory that, when the avatar moves to the specific position or the specific area via the specific object, stores at least one of (i) an action of the avatar during movement to the specific position or the specific area and (ii) an action of the avatar at the specific position or the specific area because plural user avatars can interact with the same in-progress VU-ad (and, possibly, other user avatars) at the same time, creating a fun, customized, and compelling experience (paragraph 28 of Hamilton, II et al.).
With respect to claim 15, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 1, wherein the one or more processors generate or update the condition for using the specific object based on user input from a specific user associated with the specific position or the specific area (Hamilton, II et al.: paragraph 59, a user whose avatar is in the vicinity of the machinima may be intrigued by what is displayed in the machinima. This user then controls his/her avatar to move (e.g., walk, fly, etc.) into contact with the object displaying the machinima, paragraph 60, At step 430, after moving into the machinima, the user avatar is teleported to a second location of the VU). The user input includes walking/flying into the portal, which updates the condition of the portal. It would have been obvious for the one or more processors to generate or update the condition for using the specific object based on user input from a specific user associated with the specific position or the specific area because this would allow for greater user interaction in the virtual environment.
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Handa et al. (U.S. PGPUB 20230104139) in view of Dawson et al. (U.S. PGPUB 20140379752), Hamilton, II et al. (U.S. PGPUB 20100036729), and further in view of Patel et al. (U.S. PGPUB 20230360006).
With respect to claim 14, Handa et al. as modified by Dawson et al. and Hamilton, II et al. disclose the information processing system according to claim 13. However, Handa et al. as modified by Dawson et al. and Hamilton, II et al. do not expressly disclose the one or more processors are further programmed to issue a non-fungible token (NFT) based on data stored in the action memory.
Patel et al., who also deal with virtual environments, disclose a method wherein the one or more processors are further programmed to issue a non-fungible token (NFT) based on data stored in the action memory (paragraph 23, The interaction may comprise transferring the NFT 130 from the first digital folder 132 to a second digital folder 140 associated with the second user 116. Upon completion of the interaction, the second avatar 126 may access the NFT 130 stored in the second digital folder 140 and interact with or manipulate the NFT 130 in the virtual environment 102).
Handa et al., Dawson et al., Hamilton, II et al., and Patel et al. are in the same field of endeavor, namely computer graphics.
Before the effective filing date of the claimed invention, it would have been obvious to apply the method wherein the one or more processors are further programmed to issue a non-fungible token (NFT) based on data stored in the action memory, as taught by Patel et al., to the Handa et al. as modified by Dawson et al. and Hamilton, II et al. system, because this provides a technical advantage that increases information security, as it inhibits a bad actor from attempting to transfer an inauthentic or fraudulent virtual resource as an associated avatar in a virtual environment. This process may be employed to authenticate and validate the virtual resource of a user before allowing the user to perform an interaction within a virtual environment (paragraph 3 of Patel et al.).
Response to Arguments
Applicant’s arguments with respect to claim(s) 1 and 16-18 have been considered but are moot in view of the new ground(s) of rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
WO 2005034464 to Augustin et al. discloses a method of checking whether the number of users logging in to a portal exceeds a threshold.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW GUS YANG whose telephone number is (571)272-5514. The examiner can normally be reached M-F 9 AM - 5:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW G YANG/Primary Examiner, Art Unit 2614
3/23/26