DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
Applicant’s submission of a response was received on 12/30/2025. Presently, claims 1-9 and 11-21 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 11-19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Jean-Laurent Mallet (US 10520644 B1; hereinafter Mallet) in view of Soichi Nakajima (US 20200151946 A1; hereinafter Nakajima), Leia Leilani (“Terraforming Guide with Full Process,” May 31, 2020, pp. 1-16, and corresponding YouTube video: https://www.youtube.com/watch?v=9vr5YfyRcSE (June 1, 2020); hereinafter Leilani), and Happy Island Designer as evidenced by Justin (“This Happy Island Designer web-app can help you plan out every detail of your upcoming deserted island,” found at: https://web.archive.org/web/20200323094200/https://animalcrossingworld.com/2020/03/this-happy-island-designer-web-app-can-help-you-plan-out-every-detail-of-your-upcoming-deserted-island/; hereinafter Happy Island Designer).
Regarding claim 1, Mallet suggests a method for terrain deformation, comprising:
obtaining a grid vertex set of a three-dimensional terrain model for virtual objects in a game scene and a data node combination corresponding to the grid vertex set, wherein a data node of the data node combination is provided with a mapping relationship with at least one vertex in the grid vertex set (“restoring horizon H.sub.τ 210 to its initial, unfaulted and unfolded state (e.g., mapping horizon H.sub.τ onto the sea floor S.sub.τ(0)) 209 and shifting all sedimentary terrains in such a way that, for each point r∈G: the particle of sediment currently located at point r moves to its former, “restored” location, where the particle was located at geological time τ, no overlaps or voids/gaps are created in the subsurface” (see at least: Mallet [column 19, lines 57-68]); the map and information on the gridded vertices are needed for the restoration within the grids (recited in at least: Mallet [column 20, lines 53-56]));
However, Mallet does not explicitly disclose obtaining, according to an interaction event between a target virtual object in a game and the three-dimensional terrain model, a deformation picture corresponding to the interaction event; obtaining the deformation data corresponding to a shape of the deformation picture according to the deformation picture, wherein the deformation data is used to control deformation of the data node combination; adjusting a three-dimensional terrain model grid by adjusting a target vertex in the grid vertex set according to the deformation data and the mapping relationship; and rendering out a corresponding three-dimensional terrain model according to the three-dimensional terrain model grid.
Nakajima teaches obtaining, according to an interaction event between a target virtual object in a game and the three-dimensional terrain model, a deformation picture corresponding to the interaction event (“as virtual objects, a house object 40, a tree object 41, and the user character 50 are disposed on the terrain objects. The user plays the game by operating the user character 50. For example, the user character 50 can move on the ground object 34, climb a cliff object, dig a ditch on the ground object 34, scrape a cliff object, and add a new cliff object on the ground object 34 or a cliff object” (recited in at least: Nakajima paragraph [0066])); obtaining the deformation data corresponding to a shape of the deformation picture according to the deformation picture, wherein the deformation data is used to control deformation of the data node combination; adjusting a three-dimensional terrain model grid by adjusting a target vertex in the grid vertex set according to the deformation data and the mapping relationship; and rendering out a corresponding three-dimensional terrain model according to the three-dimensional terrain model grid (“the CPU 21 causes the GPU 22 to perform a rendering process based on the virtual camera VC (step S109). Specifically, the GPU 22 generates an image of the terrain and the objects on the terrain that have been deformed by the first and second deformation processes, as viewed from the virtual camera VC. The generated image is output to the display 12, on which the image of the virtual space is displayed (step S110)” (see at least: Nakajima paragraph [0166]); and “when the terrain in the virtual space is viewed from above, the terrain looks like a flat plane in which a plurality of squares are disposed in a grid pattern. Each square is the top surface or bottom surface of a terrain part. For example, the cliff object 30 is formed by disposing eight terrain parts GP1. In this case, each terrain part GP1 is previously configured so as to allow the vertices V in an adjoining portion of adjoining terrain parts GP1 to share the same positions. In other words, when a first terrain part GP1 adjoins a second terrain part GP1, a vertex V of the first terrain part GP1 and a vertex V of the second terrain part GP1 in an adjoining portion are located at the same position” (recited in at least: Nakajima paragraph [0078])).
Leilani also teaches that maps are provided within the game and that the character has to “clean up the area” before it can start to deform the terrain and build up new areas (recited in at least: attached NPL). The map has three levels of flooring that “the target virtual object” (consistent with the instant application’s originally filed specification: “[0060] The deformation method of the present disclosure will be described below by taking the target virtual object as a virtual character and the three-dimensional terrain model as a virtual beach as an example.”) must interact with to start deformation processes and for the system to render those processes to the player. The virtual environment also interacts with the target virtual object and renders, for example, footprints on the beach as the target virtual object walks on the sand.
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have combined the restoration geological modeling of Mallet, which teaches rendering 3D surfaces with interactions and restorations on gridded patterns, with the “target virtual object” or character interactions or events that start the deformation or restoration process as taught by Nakajima and Leilani, for the added benefit of making a game interactive and giving users the ability to customize their own virtual environment.
Mallet does not explicitly disclose during running of a game, wherein the deformation picture is a two-dimensional picture comprising deformation data used to control deformation of the three-dimensional terrain model.
Happy Island Designer teaches during running of a game, wherein the deformation picture is a two-dimensional picture comprising deformation data used to control deformation of the three-dimensional terrain model (users are able to design rivers, beaches, and other terraforming in a two-dimensional setting to be used in a three-dimensional world (see attached NPL)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have tried the editing of a two-dimensional picture for a three-dimensional implementation as taught by Happy Island Designer in the system of Mallet for the added benefit of allowing users to see the entire plane to edit without having views blocked by mountains and cliffs in the three-dimensional view.
Regarding claim 2, and similarly claims 13 and 21, Mallet in view of Nakajima, Leilani and Happy Island Designer suggest the claimed matter as stated above, and Nakajima further suggests wherein the step of obtaining the deformation picture corresponding to the interaction event comprises: obtaining a deformation unit corresponding to the interaction event; obtaining the deformation picture and a preset deformation auxiliary data by analyzing the deformation unit (“disposing a terrain object representing a terrain in a virtual space, disposing a second object on the terrain object, setting a deformation parameter indicating a degree of deformation of the terrain, performing coordinate conversion on vertex coordinates included in the terrain object and the second object so as to deform the entire terrain to the degree corresponding to the deformation parameter, and rendering the terrain object and the second object after the coordinate conversion is performed, based on a virtual camera” (recited in at least: Nakajima paragraph [0007])); the obtaining the deformation data corresponding to the shape of the deformation picture according to the deformation picture comprises: determining a sub-deformation data corresponding to the deformation unit according to the deformation picture and the preset deformation auxiliary data; and obtaining the deformation data corresponding to the shape of the deformation picture according to the sub-deformation data (“Specifically, the GPU 22 displaces the vertices of each terrain part disposed in the virtual space based on the terrain part arrangement data, using the vertex shader function, based on expressions 1 and 2. As a result, the vertices of each terrain part are displaced according to the position in the virtual space, so that the terrain is deformed as shown in FIG. 6” (recited in at least: Nakajima paragraph [0164]; and FIG. 6)).
Regarding claim 3, Mallet in view of Nakajima, Leilani and Happy Island Designer suggest the claimed matter as stated above and Nakajima further suggests wherein the preset deformation auxiliary data comprises at least one of followings: a preset deformation region data, a preset offset data and a preset time data; the determining the sub-deformation data corresponding to the deformation unit according to the deformation picture and the preset deformation auxiliary data comprises: obtaining corresponding shape information according to the deformation picture (“the second deformation process of deforming the entire terrain into a curved shape is performed. As a result, the entire terrain can be deformed so as to be easily seen by the user, and it is not necessary for a game creator to previously create a deformed terrain (curved terrain) or an object on a deformed terrain. Therefore, the cost and time of development can be reduced, resulting in an improvement in development efficiency” (recited in at least: Nakajima paragraph [0171])); and determining the sub-deformation data according to the shape information and the preset deformation auxiliary data, wherein the sub-deformation data comprises at least one of followings: a target deformation region data, a target offset data and a target time data (“it is assumed that each terrain part has a square shape in the x-axis and z-axis directions. In other words, each terrain part has a square shape as viewed in the y-axis direction. A terrain is formed by disposing such terrain parts in a grid pattern. However, such a terrain part shape is merely for illustrative purposes. Each terrain part may have a predetermined shape (e.g., a square, rectangle, rhombus, parallelogram, triangle, or polygon) as viewed in the y-axis direction. 
Specifically, terrain parts may have the same predetermined shape in the x-axis and z-axis directions, and a terrain may be formed by disposing a plurality of terrain parts having the same predetermined shape. The first and second deformation processes may be performed on such a terrain” (recited in at least: Nakajima paragraph [0174])).
Regarding claim 4, and similarly claim 15, Mallet in view of Nakajima, Leilani and Happy Island Designer suggest the claimed matter as stated above and Nakajima further suggests wherein the data node combination comprises more than one data node combination, and the adjusting the three-dimensional terrain model grid by adjusting the target vertex in the grid vertex set of the three-dimensional terrain model according to the deformation data and the mapping relationship comprises: obtaining deformation control information by adjusting information of the data node of a target data node combination in the more than one data node combination according to the deformation data and a preset global dynamic parameter; and adjusting the three-dimensional terrain model grid by adjusting the target vertex in the grid vertex set according to the deformation control information and the mapping relationship (“The cliff object 31 is formed of seven terrain parts GP1 and one terrain part GP2. The cliff object 32 is formed of three terrain parts GP1 and one terrain part GP3. Also in this case, each terrain part is previously configured so as to allow the vertices V in an adjoining portion of adjoining terrain parts to share the same positions (see at least: Nakajima paragraph [0079]); Specifically, as shown in FIG. 10A, the cliff object 31 is formed of seven terrain parts before having been scraped. When the user character 50 is located near a position Pa, then if the user performs a predetermined operation, a terrain part GP1 at the position Pa is removed as shown in FIG. 10B” (recited in at least: Nakajima paragraph [0101]); and FIGS. 11, 12B, and 13, showing the more than one data node combination with regard to the deformation data).
Regarding claim 5, and similarly claim 16, Mallet in view of Nakajima, Leilani and Happy Island Designer suggest the claimed matter as stated above and Nakajima further suggests wherein, the obtaining the deformation control information by adjusting the information of the data node of the target data node combination in the more than one data node combination according to the deformation data and the preset global dynamic parameter comprises: obtaining the deformation control information by adjusting the information of the data node of the target data node combination according to a target offset data, a target time data included in the sub-deformation data and the preset global dynamic parameter (“each vertex is displaced by the GPU 22 using the vertex shader function. Specifically, the GPU 22 performs the coordinate conversion based on expressions 3-7 according to an instruction from the CPU 21. Thereafter, the GPU 22 performs a rendering process based on the virtual camera VC, and causes the display 12 to display an image. Specifically, each time an image is displayed on the display 12 (for each frame), the vertices of terrain parts and objects on the terrain are displaced by the GPU 22 using the vertex shader function, so that the entire terrain is deformed” (recited in at least: Nakajima paragraph [0127])).
Regarding claim 6, and similarly claim 17, Mallet in view of Nakajima, Leilani and Happy Island Designer suggest the claimed matter as stated above and Nakajima further suggests wherein, the obtaining the deformation control information by adjusting the information of the data node of the target data node combination according to the target offset data, the target time data included in the sub- deformation data and the preset global dynamic parameter comprises: determining a coordinate offset value of the data node of the target data node combination according to the target offset data; determining a time required for coordinate offset of the data node of the target data node combination according to the target time data; and obtaining the deformation control information by adjusting the information of the data node of the target data node combination according to the coordinate offset value, the time required for coordinate offset of the data node of the target data node combination and the preset global dynamic parameter (“in the virtual space, a ground object 34 and cliff objects 30 and 31 are disposed as terrain objects, and a house object 40, a tree object 41, and a user character 50 are disposed on the terrain objects. In addition, a virtual camera VC is set in the virtual space. For the virtual camera VC, a CxCyCz-coordinate system fixed to the virtual camera VC is set. The Cx-axis is the lateral direction axis of the virtual camera VC. The Cy-axis is the upward direction of the virtual camera VC. The Cz-axis is the line-of-sight direction of the virtual camera VC. In this embodiment, the Cx-axis is set to be parallel to the x-axis of the virtual space. The virtual camera VC is also movable in the height direction of the virtual space. When the virtual camera VC is moved in the height direction, the virtual camera VC is turned around the Cx-axis (pitch direction). 
Because the Cx-axis is set to be parallel to the x-axis of the virtual space, the direction in which the line-of-sight direction of the virtual camera VC extends along the ground 34 is parallel to the z-axis” (recited in at least: Nakajima paragraph [0116])).
Regarding claim 7, and similarly claim 18, Mallet in view of Nakajima, Leilani and Happy Island Designer suggest the claimed matter as stated above and Nakajima further suggests wherein, the method further comprises: determining the target data node combination from the more than one data node combination according to a target deformation region data in the sub-deformation data and a data node information of the more than one data node combination (“different displacement amounts can be added to different vertices, depending on the position of the vertex V in the virtual space. Even the same kind of terrain parts can be deformed into different shapes if the terrain parts are located at different positions. In addition, the displacement amounts dx and dz are determined by a combination of trigonometric functions having different periods, and therefore, it is difficult for the user to be aware of the periods. Therefore, a terrain that looks natural to the user can be created” (recited in at least: Nakajima paragraph [0092])).
Regarding claim 8, and similarly claim 19, Mallet in view of Nakajima, Leilani and Happy Island Designer suggest the claimed matter as stated above and Mallet further suggests wherein, the determining the target data node combination from the more than one data node combination according to the target deformation region data in the sub-deformation data and the data node information of the more than one data node combination comprises: obtaining an intersection point of the deformation picture with the grid vertex set of the three-dimensional terrain model by mapping the deformation picture into the grid vertex set of the three-dimensional terrain model using a preset mapping relationship according to spatial position information and region information of the deformation picture included in target deformation region data, and the grid vertex set of the three-dimensional terrain model; determining the intersection point as the target vertex; and determining the target data node combination from the more than one data node combination according to the target vertex and the mapping relationship between the grid vertex set of the three-dimensional terrain model and the data node of the data node combination (“each horizon H.sub.τ 210 is a level-set (constant value) surface of the geological-time t. Paleo-geographic coordinates {u(r), v(r)} and twin-points (101,102) given as input are linked… Additionally or alternatively, each pair of twin-points (r.sub.F.sup.+, r.sub.F.sup.−) (101,102) may be the intersection of a level set 210 of vertical depositional coordinate t(r) with a “fault stria” σ(r.sub.F.sup.−) 600 comprising a curved surface passing through point r.sub.F.sup.− 102 whose geometry is defined by geological rules, e.g., defining fault blocks sliding against one another according to tectonic forces and geological context. As a consequence of constraints defined by equations (6), (7), and (8) above, fault-striae (e.g., see FIG. 12) may characterize the paleo-geographic coordinates {u(r), v(r)}, and vice versa. Each point r∈G 214 may be characterized by its present day coordinates (e.g., {x(r), y(r), z(r)}) with respect to a present day coordinate system {r.sub.x, r.sub.y, r.sub.z} 220 comprising three mutually orthogonal unit vectors, e.g., where r.sub.z is assumed to be oriented upward” (recited in at least: Mallet [columns 15-16, lines 45-67 and 1-10])).
Regarding claim 11, Mallet discloses an electronic device, comprising: a processor, a storage medium and a bus; wherein the storage medium stores a program instruction executable by the processor; when the electronic device runs, the processor communicates with the storage medium through the bus, and the processor executes the program instruction to execute following steps (Memory 150 may include cache memory, long term memory such as a hard drive, and/or external memory, for example, including random access memory (RAM), read only memory (ROM), dynamic RAM (DRAM), synchronous DRAM (SD-RAM), flash memory, volatile memory, non-volatile memory, cache memory, buffer, short term memory unit, long term memory unit, or other suitable memory units or storage units. Memory 150 may store instructions (e.g., software 160) and data 155 to execute embodiments of the aforementioned methods, steps and functionality (e.g., in long term memory, such as a hard drive) (see at least: Mallet [column 28, lines 32-44])): similar in scope to claim 1.
Regarding claim 12, Mallet suggests a non-transitory computer-readable storage medium, wherein a computer program is stored on the storage medium, and the computer program is run by a processor to execute following steps (Embodiments of the invention may include an article such as a non-transitory computer or processor readable medium, or a computer or processor storage medium, such as for example a memory, a disk drive, or a USB flash memory, encoding, including or storing instructions, e.g., computer-executable instructions, which when executed by a processor or controller, carry out methods disclosed herein (see at least: Mallet [column 29 lines 28-34])): steps similar in scope to claim 1.
Claims 9 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mallet in view of Nakajima, Leilani, and Happy Island Designer in further view of The Bell Tree Forums (https://www.belltreeforums.com/threads/anybody-playing-the-game-completely-offline.579218/; hereinafter The Bell Tree Forums).
Regarding claim 9, and similarly claim 20, Mallet in view of Nakajima, Leilani, and Happy Island Designer suggest the claimed matter as stated above; and Nakajima further suggests wherein the method further comprises: producing at least one sub-grid vertex set of the three-dimensional terrain model; and obtaining the grid vertex set of the three-dimensional terrain model according to the at least one sub-grid vertex set (“when an image is rendered, the above first deformation process is performed. As a result, each vertex of the new terrain part GP1 is displaced, depending on the position in the virtual space of the vertex of the new terrain part GP1, so that the new terrain part GP1 is deformed. The displacement amounts of each vertex are calculated using expressions 1 and 2” (recited in at least: Nakajima paragraph [0105])).
However, Nakajima does not suggest that the model is produced in an offline state. Leilani suggests that a game that is similar to the one taught in Nakajima is Animal Crossing New Horizons and The Bell Tree Forums teaches that the game is playable offline (see attached NPL).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have allowed terrain modeling to be available in an offline state for the added benefit of allowing users with no or weak internet access the ability to still enjoy the game they purchased.
Response to Arguments
Contingent Limitations:
The previous contingent limitations of claims 1, 11, and 12 have been overcome due to the Applicant’s amendments.
35 U.S.C. § 103:
The Applicant’s arguments with respect to claims 1-9 and 11-21 have been considered but are moot in view of the newly formulated grounds of rejection necessitated by the Applicant’s amendments.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SELWA A ALSOMAIRY whose telephone number is (703)756-5323. The examiner can normally be reached M-F 7:30AM to 5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Vasat can be reached at (571) 270-7625. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SELWA A ALSOMAIRY/Examiner, Art Unit 3715
/Jay Trent Liddle/Primary Examiner, Art Unit 3715