DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/20/2026 has been entered.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 9-11, 14-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Liu et al. (CN 112386911 A, hereinafter “Liu”) in view of Mason (US 20200327719 A1, hereinafter “Mason”) and Cheng et al. (US 20220058871 A1, hereinafter “Cheng”).
Regarding claim 1, Liu discloses A navigation mesh update method, the method comprising: (page 1, lines 7-8, “a navigation grid generation method and device, a nonvolatile storage medium and an electronic device”)
determining a to-be-updated region that includes a scene change in a virtual scene, the to-be-updated region being one of a plurality of regions in the virtual scene; (page 2, lines 28-33, “a game player may frequently add or delete parts when building a virtual building model, even remove the original virtual building model and rebuild a new virtual building model, or some parts may be damaged in the fighting process (for example, the virtual wall model collapses after being attacked by a bomb) in order to enhance the fighting experience in a game scene, and at this time, the NavMesh mesh needs to be baked from the beginning”; page 19, lines 35-38, “when a change occurs inside the virtual building model, for example: the second floor inside the virtual building model adds virtual furnishings which have no impact on the scene terrain at all, thereby enabling to significantly reduce the amount of data processed when updating the NavMesh grid”). Note that: (1) a game scene can be a dynamic virtual scene with a plurality of regions of parts of the virtual building model; and (2) when the second floor inside the virtual building model as a part of the virtual building model adds virtual furnishings, a change is determined within the region (the second floor of the virtual building) of the game scene (the virtual building scene) to be updated.
obtaining a physical model of a virtual object in a space bounding box of the to-be-updated region; (Abstract, “acquiring attribute information sets of a plurality of virtual building components contained in a virtual building model”; page 14, lines 1-4, “a set of attribute information of all virtual building components in the virtual building model and internal virtual furnishings (for example: virtual furniture models such as tables and chairs or virtual appliance models such as refrigerators) can be obtained”; page 3, lines 23-36, “the grid corresponding to the target virtual building component is a grid corresponding to a target bounding box, and the target bounding box is a bounding box corresponding to the target virtual building component.”). Note that: (1) the virtual building components are virtual objects; (2) the attribute information of the components can include all physical and characteristic parameters needed to define the components as corresponding physical models. Therefore, acquiring the attribute information of the components is equivalent to obtaining the physical models of the components; and (3) it is obvious to one having ordinary skill in the art that a minimal box that contains all bounding boxes corresponding to the multi-level bounding box tree for the to-be-updated region can be regarded as a space bounding box of the to-be-updated region.
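As an illustrative sketch of point (3) above (not code from any cited reference; all names are hypothetical), a space bounding box for the to-be-updated region can be taken as the minimal axis-aligned box enclosing the bounding boxes of all components in that region:

```python
# Hypothetical sketch: merge per-component AABBs into one space bounding box.
# Each box is a pair of 3-tuples: (min_corner, max_corner).

def merge_bounding_boxes(boxes):
    """Return the minimal AABB (min_corner, max_corner) enclosing all boxes."""
    mins = [b[0] for b in boxes]
    maxs = [b[1] for b in boxes]
    # componentwise minimum of the min corners, maximum of the max corners
    min_corner = tuple(min(m[i] for m in mins) for i in range(3))
    max_corner = tuple(max(m[i] for m in maxs) for i in range(3))
    return min_corner, max_corner
```

Under this assumed representation, the merged box contains every component box in the region by construction.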
generating to-be-processed mesh data of the physical model by (page 2, lines 25-27, “the corresponding Navmesh Mesh grid is generated through a set of complete processes of voxelization, height field construction, walkable area screening, area and polygon generation and DetailMesh generation”; page 20, lines 19-24, “In the process of adding the virtual building component, if the added virtual building component is a walkable virtual building component, the polygon corresponding to the virtual building component can be directly added into the polygon mesh, the connection relation and the BVH tree are updated, and the updated polygon mesh is converted into the corresponding NavMesh mesh”). Note that: (1) the mesh data of the physical model can be generated through a set of processes of voxelization, height field construction, walkable area screening, area and polygon generation and DetailMesh generation; and (2) when a virtual building component (a virtual object) is added, the polygons corresponding to the component as the geometric data of the physical model can be added into the polygon mesh to formulate mesh data to be processed in the region of the scene containing the component.
generating, by processing circuitry, a target navigation mesh based on the to-be-processed mesh data of the physical model in the to-be-updated region, the target navigation mesh indicating a passable route in the to-be-updated region; and (page 6 / line 34 – page 7 / line 4, “the second processing module is used for obtaining a first polygon mesh according to the position information and the connection relation of the polygons and converting the first polygon mesh into a first navigation mesh; the third processing module is used for removing grids corresponding to the target type virtual building components in the first navigation grid from the second navigation grid to obtain a third navigation grid, wherein the second navigation grid is an initial scene terrain navigation grid; and the generating module is used for establishing a communication relation between the third navigation grid and the first navigation grid and generating a target navigation grid”). Note that: (1) a target navigation mesh (grid) is generated with a walkable (passable) virtual building element of the plurality of virtual building elements in the region where the walkable virtual building element (component) is located; (2) the polygon mesh as mesh data is obtained or converted into the first, second, and third navigation grid (mesh) for the target navigation grid (mesh).
updating a to-be-updated navigation mesh corresponding to the to-be-updated region of the plurality of regions in the virtual scene with the target navigation mesh. (page 26, lines 8-11, “When the game is operated, when the virtual building model is updated, only the virtual building components and the virtual furnishings related to the updating need to be considered, and irrelevant parts in a game scene do not need to be changed, so that the computation amount is reduced”). Note that: (1) when the virtual building model is updated because of the changes of virtual building components (e.g., the second floor by adding furnishings inside the virtual building model), the corresponding target navigation mesh is generated as set forth above; and (2) it is obvious to one having ordinary skill in the art that the target navigation mesh is used by the system to update a to-be-updated navigation mesh corresponding to the region where the change (the second floor with furnishings added) occurs while the other irrelevant regions or parts in the game scene remain unchanged.
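As a minimal sketch of the mapped update step (assumed for illustration; the cited references do not disclose this particular data layout), only the navigation mesh entry for the changed region is replaced, leaving all other regions untouched:

```python
# Hypothetical sketch: per-region navigation meshes kept in a mapping keyed
# by region id; updating swaps in the target mesh for the changed region only.

def update_navmesh(region_meshes, region_id, target_mesh):
    """Return a copy with only the to-be-updated region's mesh replaced."""
    updated = dict(region_meshes)     # irrelevant regions carry over unchanged
    updated[region_id] = target_mesh  # only the changed region is rebuilt
    return updated
```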
However, Liu fails to disclose, but in the same art of computer graphics, Mason discloses
determining a geometric shape of the physical model of the virtual object (Mason, FIG. 21: on the left the sculpture model has concave surface portions or concave mesh (polygons and / or triangles) between chin and neck while showing convex surface portions or convex mesh (polygons and / or triangles) at forehead; para. [0003], “It may be desirable to decompose input three-dimensional volumes into a small number of convex or roughly-convex volumes, which together approximate the input volumes … because the volumes are typically complex and may have concavities which may make it difficult to determine whether a point is inside or outside a volume”). Note that: (1) one can decompose the input volumes of the virtual object into convex or roughly-convex volumes (convex polygon mesh); (2) during the decomposition, the concave portions (concave polygon mesh) can be identified or determined and split to form convex volume portions or convex polygon mesh; (3) the original convex polygon mesh and original concave polygon mesh can be mapped as the geometric shapes or shape types of a physical model of the object; and (4) as an Examiner’s note, “Height map” in FIG. 11 of this application is a representation of the physical model rather than a geometric shape. after a collision detection with the space bounding box is performed; (Mason, para. [0003], “Performing collision detection directly on three-dimensional volumes as created by artists or generated from images … By decomposing volumes into a small number of simpler convex or roughly-convex volumes, the speed and accuracy of computerized collision detection can be substantially increased”). Note that: the three-dimensional volumes here can be regarded as the space bounding box, and collision detection can be performed on the space bounding box. This step can be performed before determining a geometric shape of the physical model of the virtual object.
determining a data conversion type from a plurality of candidate data conversion types for different geometric shapes based on the geometric shape of the physical model, the data conversion type being configured to convert data corresponding to the physical model to triangular mesh data; (Mason, para. [0003], “It may be desirable to decompose input three-dimensional volumes into a small number of convex or roughly-convex volumes, which together approximate the input volumes … because the volumes are typically complex and may have concavities which may make it difficult to determine whether a point is inside or outside a volume”; para. [0009], “identifying a concavity in the first three-dimensional volume, the concavity having a region of deepest concavity”; para. [0160], “A polygon mesh is a collection of vertices, edges and faces that define the shape and/or boundary of a three-dimensional volume. The faces may consist of various polygonal shapes such as triangles, quadrilaterals, convex polygons or concave polygons, and may be planar or nonplanar. If the polygon mesh comprises non-triangular faces (e.g. quadrilateral faces) or non-planar faces, these may optionally be triangulated”). Note that: (1) if the shape of the physical model or the shape of the portions of the physical model is identified or determined as convex polygon mesh, the conversion type can be mapped into the “convex polygon mesh” (see FIG. 11 of this application) shape type’s corresponding conversion type (e.g., convex polygon mesh conversion type) as a label; (2) if the shape of the physical model or the shape of the portions of the physical model is identified or determined as concave polygon mesh, the conversion type can be mapped into the “concave polygon mesh” (not shown but can be included in FIG.
11 of this application) shape type’s corresponding conversion type (e.g., concave polygon mesh conversion type) as a label; (3) “convex polygon mesh conversion type” and “concave polygon mesh conversion type” can form a plurality of candidate conversion types; and (4) for the concave polygon mesh as its shape type, the concave polygon mesh can be split into convex polygon mesh or roughly-convex polygon mesh based on the data conversion type (e.g., concave polygon mesh conversion type). Then, the convex polygon mesh as a kind of physical model can optionally be triangulated into the triangular mesh data. The splitting and triangulation are combined into the corresponding data converting based on the determined data conversion type (e.g., concave polygon mesh conversion type).
… by converting the data corresponding to the physical model to the triangular mesh data based on (i) the determined data conversion type and (ii) ; (Mason, paras. [0009]-[0011], “identifying a concavity in the first three-dimensional volume, the concavity having a region of deepest concavity; [0010] splitting the first three-dimensional volume along a split plane or intersection loop contacting or intersecting the region of deepest concavity, such as to provide plural three-dimensional volumes; and [0011] providing data representing an output set of two or more three-dimensional volumes”; para. [0160], “A polygon mesh is a collection of vertices, edges and faces that define the shape and/or boundary of a three-dimensional volume. The faces may consist of various polygonal shapes such as triangles, quadrilaterals, convex polygons or concave polygons, and may be planar or nonplanar. If the polygon mesh comprises non-triangular faces (e.g. quadrilateral faces) or non-planar faces, these may optionally be triangulated”). Note that: (1) for the convex polygon mesh as its shape type, it can optionally be triangulated into the triangular mesh data. The triangulation is the corresponding data converting based on the determined data conversion type (e.g., convex polygon mesh conversion type); and (2) for the concave polygon mesh as its shape type, the concave polygon mesh can be split into convex polygon mesh or roughly-convex polygon mesh based on the data conversion type (e.g., concave polygon mesh conversion type). Then, the convex polygon mesh can optionally be triangulated into the triangular mesh data. The splitting and triangulation are combined into the corresponding data converting based on the determined data conversion type (e.g., concave polygon mesh conversion type).
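As a minimal sketch of the shape-type dispatch discussed above (an assumption for illustration, not Mason's actual implementation), a planar polygon can be classified as convex or concave, the corresponding conversion type selected, and the convex case fan-triangulated:

```python
# Hypothetical sketch: dispatch on geometric shape type, then triangulate.

def cross_z(o, a, b):
    """Z component of the cross product (a - o) x (b - o) for 2D points."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_convex(poly):
    """True if every turn around the 2D polygon has the same sign."""
    n = len(poly)
    signs = [cross_z(poly[i], poly[(i + 1) % n], poly[(i + 2) % n])
             for i in range(n)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

def to_triangles(poly):
    """Pick a conversion type from the shape type; convex path sketched only."""
    if is_convex(poly):  # "convex polygon mesh conversion type": fan triangulation
        return [(poly[0], poly[i], poly[i + 1]) for i in range(1, len(poly) - 1)]
    # "concave polygon mesh conversion type": split into convex parts first
    # (splitting along the region of deepest concavity is omitted here)
    raise NotImplementedError("concave polygon mesh conversion type")
```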
Liu and Mason are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply Mason's teaching of splitting a physical model or its volumes into convex or roughly-convex volumes and converting the convex polygon mesh into triangular mesh data by optional triangulation to Liu. The motivation would have been that “By decomposing volumes into a small number of simpler convex or roughly-convex volumes, the speed and accuracy of computerized collision detection can be substantially increased.” (Mason, para. [0003]). Doing so would improve the speed and accuracy of collision detection and the generation of triangular mesh data of virtual objects. Therefore, it would have been obvious to combine Liu and Mason.
However, Liu in view of Mason fails to disclose, but in the same art of computer graphics, Cheng discloses and (ii) filtering out triangles of the triangular mesh data outside the space bounding box; (Cheng, para. [0105], “At operation 1006, the process 1000 includes removing points (or vertices) and triangles (of the mesh) that are outside of the bounding box and that are below the plane. For example, the mesh refinement engine 312 can cut the mesh by removing all the points and triangles that are outside of the bounding box”). Note that: (1) the triangles outside the space bounding box can be cut, removed, or filtered out to obtain a clean triangular mesh representation; and (2) this process to clean up the triangular mesh data can be used as a step to finalize the triangular mesh data.
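The filtering step mapped above can be sketched as follows (an illustrative assumption, not Cheng's implementation; the "any vertex inside keeps the triangle" rule is a simplification chosen for this sketch):

```python
# Hypothetical sketch: drop triangles lying entirely outside the space AABB.

def inside(p, box_min, box_max):
    """True if 3D point p lies within the axis-aligned bounding box."""
    return all(box_min[i] <= p[i] <= box_max[i] for i in range(3))

def filter_triangles(triangles, box_min, box_max):
    """Keep only triangles with at least one vertex inside the bounding box."""
    return [t for t in triangles
            if any(inside(v, box_min, box_max) for v in t)]
```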
Liu in view of Mason, and Cheng, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply Cheng's teaching of removing triangles of the triangular mesh data outside the space bounding box to Liu in view of Mason. The motivation would have been that “the mesh refinement engine 312 can cut the mesh by removing all the points and triangles that are outside of the bounding box” (Cheng, para. [0105]). Doing so would allow one to convert physical model data to triangular mesh data for a target navigation mesh. Therefore, it would have been obvious to combine Liu, Mason, and Cheng.
Regarding claim 2, the combination of Liu, Mason, and Cheng discloses The method according to claim 1, wherein the scene change corresponds to one of movement, construction, and removal of the virtual object. (Liu, page 26, lines 8-11, “When the game is operated, when the virtual building model is updated, only the virtual building components and the virtual furnishings related to the updating need to be considered, and irrelevant parts in a game scene do not need to be changed, so that the computation amount is reduced”; page 19, lines 35-38, “when a change occurs inside the virtual building model, for example: the second floor inside the virtual building model adds virtual furnishings which have no impact on the scene terrain at all, thereby enabling to significantly reduce the amount of data processed when updating the NavMesh grid”). Note that: adding virtual furnishings to the second floor is a scene change corresponding to the construction of the second floor and its furnishings (virtual objects).
Regarding claim 3, the combination of Liu, Mason, and Cheng discloses The method according to claim 1, wherein the determining the to-be-updated region comprises:
determining a scene change region that includes the scene change in the virtual scene; (Liu, page 19, lines 35-38, “when a change occurs inside the virtual building model, for example: the second floor inside the virtual building model adds virtual furnishings which have no impact on the scene terrain at all, thereby enabling to significantly reduce the amount of data processed when updating the NavMesh grid”). Note that: when the second floor inside the virtual building model as a part of the virtual building model adds virtual furnishings, a change is determined within the region (the second floor of the virtual building) of the game scene (the virtual building scene) to be updated.
determining at least one unit space in the virtual scene that corresponds to the scene change region; and (Liu, page 1, lines 10-12, “determining position information of vertices of a polygon corresponding to a walkable virtual building element of the plurality of virtual building elements in the game scene based on the set of attribute information”). Note that: (1) a polygon space in the game scene can be regarded as a unit space; and (2) when the second floor’s change (added furnishings) occurs, the polygon(s) (unit space(s)) correspond to the scene change region (the second floor).
determining the to-be-updated region based on a boundary region corresponding to each of the at least one unit space in the virtual scene. (Liu, page 3, lines 29-31, “Optionally, the polygonal connection relationship is a connection relationship determined according to an adjacency relationship of the walkable virtual building elements”). Note that: it is obvious to one having ordinary skill in the art that: (1) polygons are related to each other through their common edges and vertices; and (2) the polygons (unit spaces) of the added furnishings as a to-be-updated region can be adjacent to other elements of the second floor, and can be regarded as a boundary region with the corresponding polygons.
Regarding claim 4, the combination of Liu, Mason, and Cheng discloses The method according to claim 3, further comprising:
determining, from the at least one unit space, at least one to-be-deleted unit space in a to-be-updated unit space set, the to-be-updated unit space set including to-be-updated unit spaces that include scene changes;
deleting the at least one to-be-deleted unit space from the at least one unit space to obtain a unit space deletion result; and (Liu, page 2, lines 28-32, “a game player may frequently add or delete parts when building a virtual building model, even remove the original virtual building model and rebuild a new virtual building model, or some parts may be damaged in the fighting process (for example, the virtual wall model collapses after being attacked by a bomb) in order to enhance the fighting experience in a game scene”). Note that: (1) the game scene can consist of all polygons representing the components and elements; (2) players may add or delete components or parts that are modeled by polygon(s) (unit space(s)); (3) removing the damaged parts (changing the scene) occurs at the region of the damaged parts; (4) the to-be-removed (deleted) damaged parts (polygon(s), unit space(s)) need to be updated timely in the scene; and (5) it is obvious that the to-be-deleted parts formulate a to-be-updated unit space set (a polygon set), and a unit space deletion result is obtained accordingly.
updating the to-be-updated unit space set based on the deletion of the at least one to-be-deleted unit space to obtain a target unit space set, wherein the determining the to-be-updated region comprises: Note that: it is obvious to one having ordinary skill in the art that: (1) after deletion of the to-be-deleted parts, the to-be-updated unit space set can be updated accordingly and timely; and (2) the resulting updated to-be-updated unit space set can be regarded as a target unit space set.
determining a target to-be-updated unit space from the target unit space set based (on) a specified update time being reached; and (Liu, page 25, lines 13-18, “And for each polygon corresponding to the virtual ground model, removing partial area occupied by the virtual workbench model to form a new polygon mesh … the response is more timely when the game player frequently adjusts the virtual placement position”). Note that: it is obvious to one having ordinary skill in the art that: (1) updating a target to-be-updated unit space of the target unit space set to achieve an optimal user experience can be very time sensitive or needs to be timely (e.g., with minimum latency, or before a specified update time is reached); and (2) the target to-be-updated unit space can be determined based on a timeliness requirement of the system or the game player(s).
determining, based on a boundary region corresponding to the target to-be-updated unit space, the to-be-updated region that includes the scene change in the virtual scene. Note that: it is obvious to one having ordinary skill in the art that: after the target to-be-updated unit space (a polygon) corresponding to the boundary region has been determined, and based on the position information of the unit space (the index of vertices and edges), the to-be-updated region where the target to-be-updated unit space is located with the corresponding change (e.g., removing the damaged parts) can be determined.
Regarding claim 5, the combination of Liu, Mason, and Cheng discloses The method according to claim 4, wherein the updating the to-be-updated unit space set comprises:
adding, based on the unit space deletion result, the at least one of to-be-updated unit spaces to the to-be-updated unit space set, to obtain the target unit space set; and
determining the to-be-updated unit space set as the target unit space set based on the at least one to-be-deleted unit space being all deleted. (Liu, page 2, lines 28-32, “a game player may frequently add or delete parts when building a virtual building model, even remove the original virtual building model and rebuild a new virtual building model, or some parts may be damaged in the fighting process (for example, the virtual wall model collapses after being attacked by a bomb) in order to enhance the fighting experience in a game scene”). Note that: it is obvious to one having ordinary skill in the art that: (1) the game player determines the to-be-deleted unit space (polygon) for deletion as a to-be-updated unit space (polygon); (2) the to-be-deleted unit space is deleted; (3) the deleted unit space of the to-be-updated unit space set (a set of polygons) as a to-be-updated unit space needs to be updated accordingly and timely; and (4) after the to-be-deleted unit space has been deleted, the updated to-be-updated unit space set can be regarded as a target unit space set.
Regarding claim 9, the combination of Liu, Mason, and Cheng discloses The method according to claim 3, wherein the determining the to-be-updated region further comprises: determining the to-be-updated region based on a boundary region corresponding to the scene change region. (Liu, page 3, lines 29-31, “Optionally, the polygonal connection relationship is a connection relationship determined according to an adjacency relationship of the walkable virtual building elements”). Note that: it is obvious to one having ordinary skill in the art that: (1) polygons are related to each other through their common edges and vertices; and (2) the polygons (unit space(s)) of the added furnishings as a to-be-updated region can be adjacent to other elements of the second floor, and can be regarded as a boundary region with the corresponding polygons of the changed scene region.
Regarding claim 10, the combination of Liu, Mason, and Cheng discloses The method according to claim 1, wherein the generating the to-be-processed mesh data comprises:
performing, when the physical model is a curved surface model, straight surface processing on the curved surface model to obtain a to-be-converted model; and (Liu, page 4, lines 21-24, “Optionally, the attribute information set further includes: bounding box information for each virtual building component, the method further comprising: a multi-level bounding box tree is constructed based on bounding box information for each virtual building component”). Note that: it is known to one having ordinary skill in the art that: (1) the bounding box for each virtual building component is the closest enclosing box for the component; and (2) if the physical model of a virtual building component is a curved surface model, one can use its corresponding bounding box as a to-be-converted model to represent it while each surface of the bounding box is a straight surface.
performing geometrization processing on the to-be-converted model to generate the to-be-processed mesh data. (Liu, page 2, lines 25-27, “the corresponding Navmesh Mesh grid is generated through a set of complete processes of voxelization, height field construction, walkable area screening, area and polygon generation and DetailMesh generation”; page 20, lines 19-24, “In the process of adding the virtual building component, if the added virtual building component is a walkable virtual building component, the polygon corresponding to the virtual building component can be directly added into the polygon mesh, the connection relation and the BVH tree are updated, and the updated polygon mesh is converted into the corresponding NavMesh mesh”). Note that: (1) the mesh data of the physical model (the to-be-converted model) can be generated or geometrized through a set of processes of voxelization, height field construction, walkable area screening, area and polygon generation and DetailMesh generation, and this set of processes can be regarded as a geometrization processing; and (2) when a virtual building component (a virtual object) is added, the polygons corresponding to the component as the geometric data of the physical model can be added into the polygon mesh to formulate mesh data to be processed in the region of the scene containing the component.
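As an illustrative sketch of the two steps above (assumed for illustration only): once a curved surface model is replaced by its bounding box (a straight-surface to-be-converted model), geometrization can emit the box's eight vertices and twelve triangular faces as to-be-processed mesh data:

```python
# Hypothetical sketch: geometrize an axis-aligned bounding box into an
# indexed triangle mesh (8 vertices, 6 quad faces split into 12 triangles).

def box_to_mesh(mn, mx):
    """Return (vertices, triangles); triangles hold indices into vertices."""
    xs, ys, zs = (mn[0], mx[0]), (mn[1], mx[1]), (mn[2], mx[2])
    verts = [(x, y, z) for x in xs for y in ys for z in zs]  # 8 corners
    # index layout from the comprehension: index = 4*x_bit + 2*y_bit + z_bit
    quads = [(0, 1, 3, 2), (4, 6, 7, 5),   # x = min / x = max faces
             (0, 4, 5, 1), (2, 3, 7, 6),   # y = min / y = max faces
             (0, 2, 6, 4), (1, 5, 7, 3)]   # z = min / z = max faces
    tris = [t for a, b, c, d in quads for t in ((a, b, c), (a, c, d))]
    return verts, tris
```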
Regarding claim 11, the combination of Liu, Mason, and Cheng discloses The method according to claim 1, wherein the generating the to-be-processed mesh data comprises:
performing geometrization processing on the physical model to obtain a model vertex set and a geometric figure set corresponding to a specified geometric shape for navigation processing, each geometric figure in the geometric figure set including a vertex index, the vertex index indicating a model vertex in the model vertex set; and
determining the model vertex set and the geometric figure set as the to-be-processed mesh data. (Liu, page 2, lines 25-27, “the corresponding Navmesh Mesh grid is generated through a set of complete processes of voxelization, height field construction, walkable area screening, area and polygon generation and DetailMesh generation”; page 20, lines 19-24, “In the process of adding the virtual building component, if the added virtual building component is a walkable virtual building component, the polygon corresponding to the virtual building component can be directly added into the polygon mesh, the connection relation and the BVH tree are updated, and the updated polygon mesh is converted into the corresponding NavMesh mesh”; page 3, lines 29-31, “Optionally, the polygonal connection relationship is a connection relationship determined according to an adjacency relationship of the walkable virtual building elements”). Note that: (1) the mesh data of the physical model can be generated or geometrized through a set of processes of voxelization, height field construction, walkable area screening, area and polygon generation and DetailMesh generation, and this set of processes can be regarded as a geometrization processing; (2) the polygons as a result of the geometrization processing on the physical model (the virtual building component) have their vertices, edges and enclosed areas; and (3) it is obvious that: a) the vertices can formulate a model vertex set (or a virtual component vertex set); b) the shapes (geometric figures) of clusters of the adjacent polygons can be accumulated into a geometric figure set; and c) each geometric figure can include the vertex index with corresponding vertices of the cluster of polygons.
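The indexed representation discussed above can be sketched as follows (an assumed illustration; names are hypothetical): each geometric figure stores indices into a shared, deduplicated model vertex set:

```python
# Hypothetical sketch: build (model vertex set, geometric figure set), where
# each figure is a tuple of vertex indices into the shared vertex set.

def build_indexed_mesh(figures):
    """Deduplicate vertices across figures; return (vertex_set, figure_set)."""
    vertex_set, index_of = [], {}
    figure_set = []
    for fig in figures:                  # each figure: tuple of vertex coords
        indices = []
        for v in fig:
            if v not in index_of:        # first occurrence gets a new index
                index_of[v] = len(vertex_set)
                vertex_set.append(v)
            indices.append(index_of[v])
        figure_set.append(tuple(indices))
    return vertex_set, figure_set
```

Under this layout, adjacent figures sharing vertices reference the same indices, which matches the adjacency-based polygonal connection relationship described in the Liu quotation.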
Claim 14, reciting “An information processing apparatus, comprising: processing circuitry configured to:”, corresponds to the method of claim 1. Therefore, claim 14 is rejected under the same rationale as claim 1.
In addition, the combination of Liu, Mason, and Cheng discloses An information processing apparatus, comprising: processing circuitry configured to: (Liu, page 3, lines 9-11, “At least some embodiments of the present invention provide a navigation grid generation method, an apparatus, a non-volatile storage medium, and an electronic apparatus”).
Claims 15-18 correspond to the methods of claims 2-5, respectively. Therefore, claims 15-18 are rejected under the same rationale as claims 2-5, respectively.
Claim 20, reciting “A non-transitory computer-readable storage medium, storing computer executable instructions which when executed by a processor cause the processor to perform:”, corresponds to the method of claim 1. Therefore, claim 20 is rejected under the same rationale as claim 1.
In addition, the combination of Liu, Mason, and Cheng discloses A non-transitory computer-readable storage medium, storing computer executable instructions which when executed by a processor cause the processor to perform: (Liu, page 30, lines 19-22, “the nonvolatile storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, 21 a magnetic disk, or an optical disk”; page 9, lines 35-37, “a processor for executing a program, where the program is configured to execute the navigation grid generation method in any one of the above when running”).
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Mason, Cheng, and Muffler et al. (US 20030128210 A1, hereinafter “Muffler”).
Regarding claim 6, the combination of Liu, Mason, and Cheng discloses The method according to claim 4, wherein the determining the target to-be-updated unit space comprises:
… determining a to-be-updated unit space of the to-be-updated unit spaces with a highest update priority from the target unit space set as the target to-be-updated unit space. Note that: it is obvious to one having ordinary skill in the art that: (1) after the update priority has been determined, one can determine the highest update priority from the target unit space set and set the corresponding target unit space as the target to-be-updated unit space; and (2) the target to-be-updated unit space can be determined as a to-be-updated unit space of the to-be-updated unit spaces.
… the to-be-updated unit spaces in the target unit space set, a respective one of the to-be-updated unit spaces and the virtual object, and the scene data including feature data of the virtual scene;
(Liu, page 2, lines 28-32, “a game player may frequently add or delete parts when building a virtual building model, even remove the original virtual building model and rebuild a new virtual building model, or some parts may be damaged in the fighting process (for example, the virtual wall model collapses after being attacked by a bomb) in order to enhance the fighting experience in a game scene”). Note that: (1) the scene that the virtual wall model collapses after being attacked by a bomb can be the scene data with wall features and changes; and (2) the respective one of the polygons representing the wall model can be the respective one of the to-be-updated unit spaces in the target unit space set.
However, the combination of Liu, Mason, and Cheng fails to disclose, but in the same art of computer graphics, Muffler discloses determining priority information for each of … the priority information … determining an update priority … based on the priority information; and (Muffler, para. [0008], “A critical item detector is configured to identify polygons received from the geometric processor that have at least a portion of the polygon within a critical item region”; para. [0009], “The system provides a database of polygons, where each polygon is enabled to be associated with a critical item flag and a critical item priority”; claim 14, “wherein the polygons are arranged based on critical item priority and polygons with the highest critical item priority are rendered first based on the processing time available”). Note that: it is obvious to one having ordinary skill in the art that: (1) each polygon (unit space) can be associated with a priority; (2) the information identifying the polygons in the critical region can be determined based on the scene data (e.g., “the virtual wall model collapses after being attacked by a bomb can be the scene data with wall features and changes” in Liu above) and can be regarded as the priority information due to its criticality; and (3) for scene update, the priority information for each of the to-be-updated unit spaces in the target unit space set can be regarded as its update priority.
The combination of Liu, Mason, and Cheng, and Muffler, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply polygon priority in a critical region, as taught by Muffler, to the combination of Liu, Mason, and Cheng. The motivation would have been “each polygon is enabled to be associated with a critical item flag and a critical item priority” (Muffler, para. [0009]). The suggestion for doing so would allow determining the update priority of each of the to-be-updated unit spaces so that the critical unit spaces with the highest priority can be updated timely in the scene. Therefore, it would have been obvious to combine Liu, Mason, Cheng, and Muffler.
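For illustration only, the priority-based selection described above can be sketched as follows. This is a hypothetical sketch with invented names, not code from Muffler or any other cited reference: given a target unit space set with per-space update priorities, the space with the highest priority is selected as the target to-be-updated unit space.

```python
# Illustrative sketch (hypothetical names): select the to-be-updated
# unit space with the highest update priority from a target unit
# space set of (unit_space_id, update_priority) pairs.

def select_target_unit_space(target_unit_space_set):
    """Return the id of the unit space whose update priority is highest.

    Higher priority values are updated first, in the spirit of
    rendering the highest critical-item-priority polygons first.
    """
    return max(target_unit_space_set, key=lambda entry: entry[1])[0]
```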
Claim 19 corresponds to the method of claim 6. Therefore, claim 19 is rejected under the same rationale as claim 6.
Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Mason, Cheng, and Le Geyt (US 20160379406 A1, hereinafter “Le_Geyt”).
Regarding claim 7, the combination of Liu, Mason, and Cheng discloses The method according to claim 4, further comprising:
determining a space size of each of the at least one unit space in the scene change region; and Note that: it is obvious that one having ordinary skill in the art can readily obtain or determine the space size of each unit space (polygon) in the scene change region.
(Liu, page 15, lines 10-12, “For the purpose of free splicing and building, the shape and size of the virtual building components need to meet certain rules”). Note that: it is necessary to determine whether each of the unit spaces meets certain requirements for unit spaces, regions of the scene, or scene updates.
… wherein the determining, from the at least one unit space, the at least one to-be-deleted unit space comprises:
determining, from the at least one valid unit space corresponding to the at least one unit space, the at least one to-be-deleted unit space in the to-be-updated unit space set. Note that: it is obvious to one having ordinary skill in the art that: after determining whether each of the at least one unit space is a valid unit space (polygon), one can readily identify or determine the at least one to-be-deleted unit space (polygon) from the valid unit spaces or the valid unit space set.
However, the combination of Liu, Mason, and Cheng fails to disclose, but in the same art of computer graphics, Le_Geyt discloses determining whether each … valid … is greater than a first specified space size, (Le_Geyt, para. [0034], “The system may then determine the threshold size as a percentage of the total size of the 3D polygonal mesh. For example, the threshold size may be set to a value that is five percent of the surface area size of the 3D polygonal mesh (e.g., the size of the main mesh and all associated components). The system may then compare the size of the component to the threshold size. If the size of the component is less than or equal to the threshold size, then the component may be deemed a small component, and thus be eligible for simplification”). Note that: (1) a threshold or space size is set based on five percent of the surface area size of the 3D polygonal mesh, and the threshold can be regarded as a first specified space size; and (2) by comparing the size of the component to the threshold size, the system can determine whether a component is a small component eligible for simplification or large enough (a valid component) not to need simplification, so that a small-sized component can be excluded as an invalid component while a large-sized component is determined as a valid component of a valid component set.
The combination of Liu, Mason, and Cheng, and Le_Geyt, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply determining whether a component is for processing by comparing the component size to a specified size threshold, as taught by Le_Geyt, to the combination of Liu, Mason, and Cheng. The motivation would have been “The system may then compare the size of the component to the threshold size” (Le_Geyt, para. [0034]). The suggestion for doing so would allow determining whether each of the at least one unit space (polygon) is a valid unit space (polygon). Therefore, it would have been obvious to combine Liu, Mason, Cheng, and Le_Geyt.
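For illustration only, the size-threshold test described above can be sketched as follows. This is a hypothetical sketch, not code from Le_Geyt: a component is treated as valid when its size exceeds a threshold set as a fraction of the total mesh size (Le_Geyt uses five percent as an example fraction).

```python
# Illustrative sketch (hypothetical names): determine whether a
# component is a valid (large enough) component by comparing its
# size to a threshold defined as a fraction of the total mesh size.

def is_valid_component(component_area, total_mesh_area, fraction=0.05):
    """Return True when the component exceeds the threshold size.

    Components at or below the threshold would be deemed small
    components, eligible for simplification or exclusion.
    """
    threshold = fraction * total_mesh_area
    return component_area > threshold
```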
Regarding claim 8, the combination of Liu, Mason, Cheng, and Le_Geyt discloses The method according to claim 3, further comprising: dividing the virtual scene into the plurality of regions based on a second specified space size to obtain a unit space set, the unit space set including unit spaces corresponding to the virtual scene. (Le_Geyt, para. [0034], “The system may then determine the threshold size as a percentage of the total size of the 3D polygonal mesh. For example, the threshold size may be set to a value that is five percent of the surface area size of the 3D polygonal mesh (e.g., the size of the main mesh and all associated components). The system may then compare the size of the component to the threshold size. If the size of the component is less than or equal to the threshold size, then the component may be deemed a small component, and thus be eligible for simplification”). Note that: (1) the virtual scene consists of a set of different-sized polygons; (2) a different space size threshold (e.g., 20% of the total size of the 3D polygonal mesh) can be used as a second specified space size to compare each polygon’s size with the space size threshold; (3) a unit space set can be used to accumulate the unit spaces with space sizes greater than the space size threshold, and each polygon in this set can be a separate region; (4) another unit space set can be used to accumulate the unit spaces with space sizes less than or equal to the space size threshold, and the adjacent polygons in this set can form a separate region; and (5) the virtual scene is divided into the plurality of regions in this way.
The motivation to combine Liu, Mason, Cheng, and Le_Geyt given in claim 7 is incorporated here.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Liu, Mason, Cheng, Hou (CN 104548597 A, hereinafter “Hou”), and Wilheim et al. (WO 2013024348 A2, hereinafter “Wilheim”).
Regarding claim 12, the combination of Liu, Mason, and Cheng fails to disclose, but in the same art of computer graphics, Hou discloses
performing voxelization processing on the to-be-processed mesh data to obtain voxel block data and height field data corresponding to each voxel block in the voxel block data; (Hou, page 3, lines 24-26, “In step 101, the input model is converted into a voxel model composed of a plurality of set voxels, and a height field data structure corresponding to the voxel model is established”). Note that: (1) the input model for step 101 can be the to-be-processed mesh data; and (2) the voxels can be regarded as voxel block data.
selecting passable voxel block data from the voxel block data based on the height field data;
performing region generation on the passable voxel block data to obtain a passable region; and (Hou, page 2, lines 14-18, “Selecting a voxel that does not satisfy the parameter limit from the height field data structure according to a parameter limit set by the player character, and marking the non-walkable flag; Selecting a walkable area in the voxel model according to the voxel marked with the non-walkable mark in the height field data structure”). Note that: (1) walkable voxels can be selected or obtained by excluding the marked non-walkable voxels from the plurality set voxels based on the height field data; and (2) a walkable area or a passable region can be obtained by accumulating the selected walkable voxels.
The combination of Liu, Mason, and Cheng, and Hou, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply obtaining voxel block data and height field data, as taught by Hou, to the combination of Liu, Mason, and Cheng. The motivation would have been “converting an input model into a voxel model which consists of multiple set voxels, and establishing a height field data structure which corresponds to the voxel model … selecting a walkable region” (Hou, page 1, lines 13-17). The suggestion for doing so would allow performing voxelization to obtain voxel block data and height field data, and obtaining a passable region. Therefore, it would have been obvious to combine Liu, Mason, Cheng, and Hou.
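For illustration only, the selection of passable voxel block data from height field data described above can be sketched as follows. This is a hypothetical simplification, not code from Hou: voxels whose vertical clearance fails a parameter limit set for the player character (here, an agent height) are marked non-walkable and excluded, and the remaining voxels form the passable region.

```python
# Illustrative sketch (hypothetical names and clearance rule): select
# walkable voxels by excluding, based on height field data, those
# that do not satisfy an agent-height parameter limit.

def select_walkable_voxels(height_field, agent_height):
    """height_field maps a voxel coordinate to its free vertical clearance.

    Returns the set of voxel coordinates an agent of the given height
    can occupy; accumulating these yields the passable region.
    """
    return {voxel for voxel, clearance in height_field.items()
            if clearance >= agent_height}
```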
However, the combination of Liu, Mason, Cheng, and Hou fails to disclose, but in the same art of computer graphics, Wilheim discloses
performing surface cutting on the passable region to generate the target navigation mesh. (Wilheim, page 3, lines 17-19, “Figures 3A and 3B are diagrams of a navigation mesh with applied embodiments of a cutting algorithm without elimination of small triangles and with elimination of small triangles, respectively”; page 14, lines 5-13, “dynamic blocking may be performed using a cutting mesh or plane that defines a region. The planes may be bounded to prevent an infinite plane from modifying the entire mesh. In one embodiment, the plane may be bounded in the shape of a box, and may be referred to as a blocking box … a cutting mesh may define an intersecting plane to a navigation mesh to cut the existing navigation mesh into one or more half spaces, defined as within the blocking box/plane or outside the blocking box/plane. A bounding box may restrict the intersecting plane to limit cutting across the entire navigation mesh.”). Note that: (1) the cutting method is a triangle cutting, which is one kind of surface cutting per the specification of this application; and (2) the triangle cutting can be applied to the passable or walkable region recited above by using a cutting mesh or plane within a blocking box to cut the existing navigation mesh to generate the target navigation mesh.
The combination of Liu, Mason, Cheng, and Hou, and Wilheim, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply triangle cutting for mesh cutting, as taught by Wilheim, to the combination of Liu, Mason, Cheng, and Hou. The motivation would have been “a navigation mesh with applied embodiments of a cutting algorithm” (Wilheim, page 3, lines 17-18). The suggestion for doing so would allow generating the target navigation mesh by performing surface cutting on the passable region. Therefore, it would have been obvious to combine Liu, Mason, Cheng, Hou, and Wilheim.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Liu in view of Mason, Cheng, and Tang (CN 112370788 A, hereinafter “Tang”). A machine-translated English version of Tang is attached.
Regarding claim 13, the combination of Liu, Mason, and Cheng fails to disclose, but in the same art of computer graphics, Tang discloses
determining the passable route by performing route finding in the to-be-updated region based on the target navigation mesh;
and controlling, based on the passable route, another virtual object to move in the virtual scene. (Tang, page 1, lines 9-15, “The method includes: in response to a target point setting operation for a virtual object in a game scene, determining a starting point and an ending point corresponding to the virtual object; At the beginning, the pathfinding algorithm is performed on the grid nodes in the walkable area in the game scene, and the target node group from the starting point to the end point is obtained; the navigation path is determined based on the target node group; the virtual object is controlled to move to the end point according to the navigation path”). Note that: (1) the navigation path is equivalent to the passable route; (2) the target node group formulates the to-be-updated region based on the target navigation mesh; and (3) the game scene is the virtual scene.
The combination of Liu, Mason, and Cheng, and Tang, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply determining the passable or walkable route and controlling a virtual object to move, as taught by Tang, to the combination of Liu, Mason, and Cheng. The motivation would have been “the virtual object is controlled to move to the end point according to the navigation path” (Tang, page 1, lines 14-15). The suggestion for doing so would allow determining the passable route and controlling a virtual object to move based on the passable route. Therefore, it would have been obvious to combine Liu, Mason, Cheng, and Tang.
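For illustration only, the route-finding step described above can be sketched as follows. This is a hypothetical sketch, not code from Tang: the pathfinding algorithm in Tang is unspecified in the quoted passage, so a simple breadth-first search over navigation-mesh node adjacency stands in for it, returning the node group from the starting point to the end point.

```python
# Illustrative sketch (hypothetical names; breadth-first search stands
# in for the unspecified pathfinding algorithm): find a route over
# navigation-mesh nodes from a start node to a goal node.
from collections import deque

def find_route(adjacency, start, goal):
    """adjacency maps a node to its reachable neighbor nodes.

    Returns a list of nodes from start to goal (the navigation path),
    or None when no passable route exists.
    """
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in adjacency.get(node, ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None
```

A virtual object could then be moved node by node along the returned path.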
Response to Arguments
Applicant's arguments with respect to the claim rejections under 35 U.S.C. § 103 have been fully considered but they are not persuasive.
Applicant alleges, “Regarding the rejection of Claim 1 under 35 U.S.C. § 103, it is respectfully submitted that Liu and Mason fail to disclose ‘obtaining a physical model of a virtual object in a space bounding box of the to-be-updated region; determining a geometric shape of the physical model of the virtual object after a collision detection with the space bounding box is performed; determining a data conversion type from a plurality of candidate data conversion types for different geometric shapes based on the geometric shape of the physical model, the data conversion type being configured to convert data corresponding to the physical model to triangular mesh data; generating to-be-processed mesh data of the physical model by converting the data corresponding to the physical model to the triangular mesh data based on (i) the determined data conversion type and (ii) filtering out triangles of the triangular mesh data outside the space bounding box,’ as discussed during the interview” (page 10, lines 20-30). However, the arguments are respectfully mooted because the corresponding newly amended limitations, “a space bounding box”, “after a collision detection with the space bounding box is performed”, “the data conversion type being configured to convert data corresponding to the physical model to triangular mesh data”, and “and (ii) filtering out triangles of the triangular mesh data outside the space bounding box”, have been addressed in the detailed claim rejections under 35 U.S.C. § 103 above. The arguments are not persuasive.
Applicant alleges, “Independent Claims 14 and 20, although differing in scope and/or statutory class, patentably define over Liu and Mason at least for reasons analogous to the reasons stated above for the patentability of Claim 1. Accordingly, it is respectfully submitted that Claims 14 and 20 (and all associated dependent claims) patentably define over Liu and Mason.” (page 11, lines 3-6). However, Examiner respectfully disagrees with the allegations as a whole because: independent claims 14 and 20 correspond to claim 1. Therefore, claims 14 and 20 are rejected under the same rationale as claim 1, and all associated dependent claims are rejected under the respective rationales above. The arguments are not persuasive.
Applicant alleges, “Regarding the rejections of Claims 6-8, 12-13, and 19 under 35 U.S.C. § 103, it is respectfully submitted that Claims 6-8, 12-13, and 19 patentably define over Liu and Mason for the reasons stated above for the patentability of Claims 1 and 14, from which Claims 6-8, 12-13, and 19 depend.” (page 11, lines 7-10). However, Examiner respectfully disagrees with the allegations as a whole because claims 6-8, 12-13, and 19 are rejected under the respective rationales above. The arguments are not persuasive.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Wu (US 20120224755 A1, hereinafter “Wu”) teaches that a CAD model or other three-dimensional ("3D") digital model is converted to a list of triangles defining the surface of the object.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIAO CHEN whose telephone number is (703)756-1199. The examiner can normally be reached M-F 8am-5pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee M Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Biao Chen/
Patent Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611