Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. This Office Action is in response to the application filed on 05/17/2024.
2. The IDS filed on 05/17/2024 has been considered and entered into the application file. However, copies of the non-patent literature documents cited in the IDS have not been received, and a machine-generated translation of the Korean patent has not been received.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
3. Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Duplessie et al (US 20250050217 A1).
Duplessie et al (“Duplessie”) is directed to a method and system that uses map data of a virtual environment and object data of an avatar object, including a moveset, to generate an action path for traversal of the virtual environment by the avatar object (Abstract).
As per claim 1, Duplessie discloses a computing device (computing device 800, Fig. 8), comprising: one or more processors (processor(s) 820, Fig. 8);
one or more hardware-based memory devices (memory 840, Fig. 8) storing computer-executable instructions which, when executed by the one or more processors, cause the computing device to:
initiate an adventure in which a persona is placed within a virtual world ([0017] FIG. 1 illustrates an example network environment 100 in which a system for action path generation for traversal of an avatar object through a virtual environment may be implemented. [0018] In some examples, the pathway generation system 160 can access, select, and/or adjust pathway generation parameter(s) 140 associated with traversal such as start and destination locations, approach speeds and angles, and move sequences);
output an initial description of the virtual world based on the persona’s location, wherein the initial description includes one or more sensory observations from the persona’s location ([0016] generating, for an iteration of a plurality of iterations, a pathway segment between a first location and a second location of an action path for traversal by the avatar object, the action path including a start location and a destination location. [0017] the user device 102 may also run a modeling application 14 for generating the map data 122, and may include a physics engine 16 that provides a framework for how the avatar object and other objects move through and interact with aspects of the virtual environment);
receive, at the computing device, input for a directional movement of the persona within the virtual world ([0029] Step 304 of method 300 includes generating, for an iteration of a plurality of iterations, a pathway for traversal of the avatar object between a start location and a destination location of the virtual environment. The systems outlined herein may generate and evaluate a plurality of pathways over the plurality of iterations to find different possible ways that an avatar object could traverse the virtual environment based on the map data and based on aspects of the avatar object such as a moveset of the avatar object); and
responsive to the received input, output a subsequent description of the virtual world based on the persona’s directional movement within the virtual world ([0020] In some examples, the pathway generation system 160 can simulate traversal of the virtual environment by the avatar object, and can generate telemetry data 168 that describes traversal of the virtual environment by the avatar object for further analysis by developers. [0030] In some examples, inputs provided to the machine learning model can also include, but are not limited to, the map data, information describing aspects of the avatar object such as the moveset of the avatar object, and information about any obstacles within the pathway).
As per claim 2, Duplessie further discloses the computing device of claim 1, wherein the subsequent description changes based on the specific directional movement associated with the received input ([0029] Step 304 of method 300 includes generating, for an iteration of a plurality of iterations, a pathway for traversal of the avatar object between a start location and a destination location of the virtual environment. The systems outlined herein may generate and evaluate a plurality of pathways over the plurality of iterations to find different possible ways that an avatar object could traverse the virtual environment based on the map data and based on aspects of the avatar object such as a moveset of the avatar object. In some examples, the step of generating the pathway for traversal of the avatar object can be performed using a machine-learning model that receives the map data and information describing aspects of the avatar object such as a moveset of the avatar object, and generates one or more possible pathways between the start location and the destination location based on the received inputs).
As per claim 3, Duplessie further discloses the computing device of claim 1, wherein descriptions for each available directional movement are pre-assigned prior to the received input ([0054] As further shown with reference to block 468, the action path generator 460 can construct the action path 166 based on the pathway segment(s). The action path generator 460 can select feasible pathway segment(s) with move sequence(s) and parameters that, when combined, result in the action path 166 that satisfies the pathway generation parameter(s) 140 (e.g., starts at the start location 442, ends at the destination location 444, and satisfies constraints 446 when possible)).
As per claim 4, Duplessie further discloses the computing device of claim 1, wherein the output initial description is generated from an LLM (large language model) ([0047] In the non-limiting example of FIG. 4, at block 462 the action path generator 460 generates a pathway for traversal of the avatar object between the start location 442 and the destination location 444 of the virtual environment. The action path generator 460 can iteratively generate new pathways between the start location 442 and the destination location 444 for evaluation and refinement. In some examples, the action path generator 460 can include a machine-learning model that is trained to generate initial pathways based on map data 122 and avatar object data 126. Also see [0029], [0030], and [0045]).
As per claim 5, Duplessie further discloses the computing device of claim 4, wherein the output subsequent description is likewise generated from the LLM ([0047] In the non-limiting example of FIG. 4, at block 462 the action path generator 460 generates a pathway for traversal of the avatar object between the start location 442 and the destination location 444 of the virtual environment. The action path generator 460 can iteratively generate new pathways between the start location 442 and the destination location 444 for evaluation and refinement. In some examples, the action path generator 460 can include a machine-learning model that is trained to generate initial pathways based on map data 122 and avatar object data 126. Also see [0029], [0030], and [0045]).
As per claim 6, Duplessie further discloses the computing device of claim 1, wherein the virtual world is a mix of user-created and LLM-created ([0017] the user device 102 may also run a modeling application 14 for generating the map data 122, and may include a physics engine 16 that provides a framework for how the avatar object and other objects move through and interact with aspects of the virtual environment. As shown, user device 102 implements a pathway generation system 160 that accesses the interactive content data 120 and generates one or more action paths 166 associated with traversal of an avatar object through the virtual environment based on the interactive content data 120. [0047] In the non-limiting example of FIG. 4, at block 462 the action path generator 460 generates a pathway for traversal of the avatar object between the start location 442 and the destination location 444 of the virtual environment. The action path generator 460 can iteratively generate new pathways between the start location 442 and the destination location 444 for evaluation and refinement. In some examples, the action path generator 460 can include a machine-learning model that is trained to generate initial pathways based on map data 122 and avatar object data 126).
As per claim 7, Duplessie further discloses the computing device of claim 1, wherein descriptions are directed to one or more of a human’s senses ([0038] The map data 122 can include collision data 422 that defines spatial information about the virtual environment, such as surfaces, walls, and other structures that can be touched by the avatar object. In some examples, collision data 422 can describe a 2-D or 3-D environment through vertices of various structures within the virtual environment. [0046] In other examples, the pathway generation parameter(s) 140 may indicate that the user is looking for any feasible pathway between the start location 442 and the destination location 444, with refinement being part of a later step in a general development process).
As per method claims 8-14, these method claims include limitations that are similar to, or correspond to, those of system claims 1-7, respectively. Thus, the method claims are also rejected based on the citations given for the corresponding system claims.
As per non-transitory computer-readable memory claims 15-20, these memory claims include limitations that are similar to, or correspond to, those of system claims 1-7, respectively. Thus, the memory claims are also rejected based on the citations given for the corresponding system claims.
Conclusion
4. Any inquiry concerning this communication or earlier communications from the examiner should be directed to TADESSE HAILU whose telephone number is (571) 272-4051; and the email address is Tadesse.hailu@USPTO.GOV. The examiner can normally be reached Monday-Friday 9:30-5:30 (Eastern time).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bashore, William L., can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TADESSE HAILU/Primary Examiner, Art Unit 2174