DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this Office action.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 5 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite. Claim 5 recites the limitation "the method" in "The method of claim 1," and the limitations "the one or more first points" and "the one or more second points" in "the one or more first points and the one or more second points are coplanar." There is insufficient antecedent basis for these limitations in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-8, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Min et al. (US 20170301133 A1) in view of Reinhardt et al. (US 6281904 B1), and further in view of Brochu (US 20190147649 A1).
Regarding claim 1, Min teaches a computer-implemented method (See Min: Fig. 1, and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160). Some of these steps may be optional. As discussed above, the received meshes may be three-dimensional (3D) and may include components based on triangles (triangle mesh), quadrilaterals, or other convex polygons. Although triangle meshes are discussed herein, embodiments of the present disclosure are not limited to triangle meshes. The method 100 may be implemented by a computer system including a processor and a non-transitory computer-readable storage medium storing instructions”) for rendering two overlapping textures in a 3D scene, the method comprising:
obtaining a first 3D support comprising a first rendered texture (See Min: Fig. 1, and [0026], “The present disclosure relates generally to methods of merging three-dimensional (3D) meshes. More specifically, some embodiments of the present disclosure relate to an automatic approach for robustly merging two or more 3D textured meshes into one large 3D textured mesh. According to some embodiments, a 3D mesh may include a polygon mesh”; and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160)”. Note that among the two or more received 3D meshes, one 3D textured mesh is mapped to the first rendered texture);
obtaining a second 3D support comprising a second rendered texture (See Min: Fig. 1, and [0026], “The present disclosure relates generally to methods of merging three-dimensional (3D) meshes. More specifically, some embodiments of the present disclosure relate to an automatic approach for robustly merging two or more 3D textured meshes into one large 3D textured mesh. According to some embodiments, a 3D mesh may include a polygon mesh”; and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160)”. Note that among the two or more received 3D meshes, a second 3D textured mesh is mapped to the second rendered texture);
detecting that the second support intersects with the first support;
computing a third 3D support by merging the first 3D support and the second 3D support;
computing a third texture by mixing the first texture and the second texture (See Min: Fig. 1, and [0028], “In some embodiments, when there are a number of input meshes to be merged into a large mesh, the input meshes may be merged sequentially in the order of their sizes. For example, the size of a mesh may be measured in terms of a volume of its bounding box. A bounding box may be defined as a smallest rectangular 3D space that contains the mesh. In some embodiments, the input meshes can be sorted to obtain a sequence in descending order of mesh sizes: {M.sub.0, M.sub.1, . . . , M.sub.k, . . . , M.sub.n} where M.sub.k is the mesh with the k-th largest volume. The merging process may start with the mesh of the largest size: M=M.sub.0. The largest mesh M.sub.0 may be sequentially merged with other meshes in the sequence to obtain a current merged mesh: M=merge (M, M.sub.k) {k=0, 1, 2 . . . n}. For instance, the largest mesh M.sub.0 can be merged with the second largest mesh M.sub.1 to obtain a merged mesh M=merge (M.sub.0, M.sub.1); then M is merged with the third biggest mesh M.sub.2 to obtain a new merged mesh M=merge (M, M.sub.2); and so on and so forth. For illustration purposes only, the following will describe the process of merging two meshes as an example to explain the merging process. It should be understood that embodiments of the present disclosure are not limited to merging only two meshes, and can be applied to merging any number of meshes”. The merged textured mesh is mapped to the third texture; an illustrative sketch of this sequential merging follows the claim recitation below);
rendering the computed third texture on the computed third 3D support; and
displaying the rendered third texture on the third 3D support.
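For illustration only, and not as part of Min's disclosure, the sequential merging in descending bounding-box volume described in Min [0028] can be sketched as follows; the names Mesh, bounding_box_volume, merge, and merge_all are hypothetical stand-ins, and the merge body is a placeholder for Min's alignment, clipping, refinement, and blending steps:

```python
# Illustrative sketch only; not Min's code. Meshes are merged
# sequentially in descending order of bounding-box volume, per Min [0028].
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float, float, float]  # (x0, y0, z0, x1, y1, z1)

@dataclass
class Mesh:
    bbox: Box  # axis-aligned bounding box of the mesh

def bounding_box_volume(m: Mesh) -> float:
    x0, y0, z0, x1, y1, z1 = m.bbox
    return (x1 - x0) * (y1 - y0) * (z1 - z0)

def merge(a: Mesh, b: Mesh) -> Mesh:
    # Placeholder for Min's merge (spatial alignment, mesh clipping,
    # geometry refinement, texture blending); here only the bounding
    # boxes are combined so the sketch stays self-contained.
    ax0, ay0, az0, ax1, ay1, az1 = a.bbox
    bx0, by0, bz0, bx1, by1, bz1 = b.bbox
    return Mesh((min(ax0, bx0), min(ay0, by0), min(az0, bz0),
                 max(ax1, bx1), max(ay1, by1), max(az1, bz1)))

def merge_all(meshes: List[Mesh]) -> Mesh:
    # Sort M_0..M_n by descending volume, then fold: M = merge(M, M_k).
    ordered = sorted(meshes, key=bounding_box_volume, reverse=True)
    merged = ordered[0]
    for m_k in ordered[1:]:
        merged = merge(merged, m_k)
    return merged
```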
However, Min fails to explicitly disclose rendering two overlapping textures in a 3D scene; obtaining a first 3D support; obtaining a second 3D support; detecting that the second support intersects with the first support; computing a third 3D support by merging the first 3D support and the second 3D support; rendering the computed third texture on the computed third 3D support; and displaying the rendered third texture on the third 3D support.
However, Reinhardt teaches rendering two overlapping textures in a 3D scene (See Reinhardt: Fig. 5, and Col. 2 Lines 57-61, “In one embodiment, the present invention provides such a solution in a method which allows for fusing information extracted from two or more images of a scene into a single texture image for each surface of a computer-generated model of the scene”; and Col. 10 Lines 15-24, “Next, at step 505 of the flow diagram shown in FIG. 5, the sorted source textures and v-channels may be merged or fused to form a composite. At this point, a number of source textures are available in an undistorted form. Red, green, blue (i.e., color) and v-channel information is available for each. The v-channel provides a pixel validity level between 0 and 1. Although not necessary, all source textures for a particular surface are preferably available at the same pixel resolution. The task is now to combine all existing texture information from the multiple images into a single texture”. Note that combining the images of a scene into a single texture image that is rendered onto the 3D model is mapped to “rendering two overlapping textures in a 3D scene”);
rendering the computed third texture (See Reinhardt: Figs. 2-3, and Col. 5 Lines 47-67, “FIG. 2 now illustrates a general method of creating a digital model which makes use of the methods of the present invention. Having loaded a digital image 201 (step 250), a user may then create one or more objects known as primitives (e.g., boxes, pyramids, cylinders, or other three-dimensional objects) which approximate the objects shown in the digital images (step 251). A wireframe rendering 202 of the primitives may be displayed over top of the digital image 201 (i.e., the digital representation of the photograph). The objective then, is for the user to manipulate the wireframe primitive rendering 202 using the methods of the present invention, until the wireframe precisely (or nearly precisely) coincides with the object it represents in the digital image (steps 252, 253, 254, 255 and 256). Thus, the user creates a geometric model (from the primitive(s)) right on top of the digital image 201 (i.e., the photograph(s)), without requiring the use of separate editors, windows or views. In the example shown in FIG. 2, a wireframe rendering 202 of a rectilinear box is manipulated until it coincides with the outline of a box shown in the digital image 201”. Note that the user creating and rendering the 3D model from the images and textures is mapped to the limitation of “rendering the computed third texture on the computed third 3D support”) on the computed third 3D support; and
displaying the rendered third texture (See Reinhardt: Figs. 1-3, and Col. 5 Lines 28-35, “Disk drive unit(s) 15 (or other long term storage media) may also be coupled to CPU 11 and may be used for storing the digital images and geometric and texture data for three-dimensional models as well as computer readable instructions which comprise an embodiment of the present invention. Display output is provided by a video display unit 14 coupled to CPU 11. Video display unit 14 may be a conventional display such as a liquid crystal display (LCD) or other display device”; and Col. 6 Lines 18-42, “To accomplish these objectives, as the user manipulates the wireframe renderings 202 of the primitives to align the wireframe with the underlaid image 201, constraints are added to "fix" the wireframe 202 to the image 201. For example, as shown in FIG. 3, constraints 303, 304 which constrain or fix the location of corners or edges of the wireframe projections 302 to the locations in the image 301 to which they correspond or to constrain geometrical relationships between the primitives in their three-dimensional representations are added. As the constraints are introduced into the model, new estimates for all parameters of the primitives and virtual camera(s) are calculated. Based on these new parameters, the geometric coordinates of each primitive can be calculated and projected through each virtual camera to yield an updated projected wireframe graphical representation overlaid on the image and displayed to the user. The present invention minimizes the amount of change in parameters which, with frequent enough incremental re-evaluations and reprojections yield a smooth movement of the wireframe, thus providing the user with the illusion of manipulating real three-dimensional objects made of springs or an elastic-like material. Further details regarding the various types of constraints which may be used to fix the wireframe projection to the image may be found in co-pending application Ser. No. 09/062512”. Note that displaying the rendered 3D model of the scene to the user is mapped to the limitation of “displaying the rendered third texture on the third 3D support”) on the third 3D support.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Min to render two overlapping textures in a 3D scene, render the computed third texture, and display the rendered third texture, as taught by Reinhardt, in order to eliminate the need for any retouching of the image (See Reinhardt: Fig. 1, and Col. 3 Lines 23-28, “A further benefit provided by the present invention is that given enough images of the same surface, view-dependent obstructions which may partially obscure the surface in various views (e.g., trees, lampposts, or moving objects such as birds or automobiles), can be automatically removed, eliminating the need for any retouching of the image”). Min teaches a method and system that may merge two or more 3D textures into one 3D texture, while Reinhardt teaches a system and method that may extract textures from multiple images, composite them into a single texture image, and render and display the computer-generated model of the scene. Therefore, it would have been obvious to one of ordinary skill in the art to modify Min by Reinhardt to render and display the 3D scene using the merged texture images. The motivation to modify Min by Reinhardt is the “use of known technique to improve similar devices (methods, or products) in the same way”.
However, Min, as modified by Reinhardt, fails to explicitly disclose obtaining a first 3D support; obtaining a second 3D support; detecting that the second support intersects with the first support; computing a third 3D support by merging the first 3D support and the second 3D support; rendering on the computed third 3D support; and displaying on the third 3D support.
However, Brochu teaches obtaining a first 3D support (See Brochu: Figs. 1-4, and [0008], “One embodiment of the present application sets forth a computer-implemented method for generating a three-dimensional (3D) surface model. The method includes joining a first 3D object having a first closed surface mesh and a second 3D object having a second closed surface mesh to produce a first irregular surface mesh”; and [0043], “FIG. 2 is a conceptual illustration of an irregular surface mesh model formed from a conventional Boolean combination of a sphere and a rectangular cuboid. Irregular surface mesh model 200 includes 3D objects 201, 203 and intersection contour 205. Rendering engine 110, as shown in FIG. 1, is configured to perform Boolean operations on 3D objects 201, 203 by generating Irregular surface mesh model 200 based on joining 3D objects 201, 203. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as a non-regularized Boolean model. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as an intermediate 3D surface model and generate a new shape as a 3D surface mesh 120 or adapted manifold surface model 130 by performing Boolean operations on one or more surface regions included in irregular surface mesh model 200”. Note that the 3D objects 201 and 203 with closed surface meshes are mapped to the 3D supports; either can be mapped to the first or the second 3D support, but for clarity, 3D object 201 is mapped to the first 3D support);
obtaining a second 3D support (See Brochu: Figs. 1-4, and [0008], “One embodiment of the present application sets forth a computer-implemented method for generating a three-dimensional (3D) surface model. The method includes joining a first 3D object having a first closed surface mesh and a second 3D object having a second closed surface mesh to produce a first irregular surface mesh”; and [0043], “FIG. 2 is a conceptual illustration of an irregular surface mesh model formed from a conventional Boolean combination of a sphere and a rectangular cuboid. Irregular surface mesh model 200 includes 3D objects 201, 203 and intersection contour 205. Rendering engine 110, as shown in FIG. 1, is configured to perform Boolean operations on 3D objects 201, 203 by generating Irregular surface mesh model 200 based on joining 3D objects 201, 203. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as a non-regularized Boolean model. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as an intermediate 3D surface model and generate a new shape as a 3D surface mesh 120 or adapted manifold surface model 130 by performing Boolean operations on one or more surface regions included in irregular surface mesh model 200”. Note that the 3D objects 201 and 203 with closed surface meshes are mapped to the 3D supports; either can be mapped to the first or the second 3D support, but for clarity, 3D object 203 is mapped to the second 3D support);
detecting that the second support intersects with the first support (See Brochu: Figs. 1-4, and [0031], “Intersection engine 112 computes intersection contours between surface meshes of two or more 3D objects on which rendering engine 110 performs Boolean operations to produce adapted 3D surface model 130. In some embodiments, intersection engine 112 removes portions of surface meshes of the 3D objects that surround the intersection contours. In some embodiments, intersection engine 112 removes the portions of the surface meshes surrounding an intersection contour by removing one or more vertices and/or edges of the polygonal mesh that intersect the intersection contour. For example, intersection engine 112 can compute an intersection contour for two overlapping spheres represented with triangle closed surface meshes. Intersection engine 112 can then, for each of the surface meshes, remove vertices and/or edges of triangles that intersect the computed intersection contour. In some embodiments, intersection engine 112 may be configured to remove a specified number of vertices and/or on the surface mesh that neighbor polygons intersecting the intersection contours”; and [0047], “Intersection contour 205 is a set of points common to both 3D objects 201, 203. Intersection contour 205 is formed when rendering engine 110 joins 3D objects 201, 203”. Note that the intersection engine computes the intersections between the two 3D objects and determines the intersection contour 205; this is mapped to the cited limitation of “detecting that the second support intersects with the first support”);
computing a third 3D support by merging the first 3D support and the second 3D support (See Brochu: Figs. 1-4, and [0007], “As the foregoing illustrates, what is needed in the art are more effective techniques for generating 3D surface models made from combinations of multiple 3D objects”; and [0043], “FIG. 2 is a conceptual illustration of an irregular surface mesh model formed from a conventional Boolean combination of a sphere and a rectangular cuboid. Irregular surface mesh model 200 includes 3D objects 201, 203 and intersection contour 205. Rendering engine 110, as shown in FIG. 1, is configured to perform Boolean operations on 3D objects 201, 203 by generating Irregular surface mesh model 200 based on joining 3D objects 201, 203. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as a non-regularized Boolean model. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as an intermediate 3D surface model and generate a new shape as a 3D surface mesh 120 or adapted manifold surface model 130 by performing Boolean operations on one or more surface regions included in irregular surface mesh model 200”. Note that the joined 3D model 200 with irregular surface meshes is mapped to the third 3D support computed by merging the first 3D support and the second 3D support; an illustrative sketch of this detection and merging follows this rejection);
on the computed third 3D support (See Brochu: Figs. 1-4, and [0028], “Rendering engine 110 is a software application configured to generate and/or modify 3D surface model 120 and/or adapted 3D surface model 130”. Note that rendering on the adapted 3D surface model is mapped to rendering on the computed third 3D support); and
on the third 3D support (See Brochu: Figs. 1-4, and [0004], “First, when the polygonal surface meshes of two different 3D objects do not align properly, the Boolean operations typically implemented by conventional 3D graphics applications cannot be performed accurately on the polygonal surface meshes. When the Boolean operations are not performed accurately, the 3D surface models resulting from the Boolean operations can include errors that are perceived as visual artifacts when the 3D surface models are displayed”. Note that the 3D surface models are displayed, which is mapped to the limitation of (displaying textures) on the third 3D support).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Min to obtain a first 3D support; obtain a second 3D support; detect that the second support intersects with the first support; compute a third 3D support by merging the first 3D support and the second 3D support; render on the computed third 3D support; and display on the third 3D support, as taught by Brochu, in order to avoid misalignments between surface meshes of multiple 3D objects (See Brochu: Fig. 6, and [0090], “At least one advantage of the disclosed technique is that the rendering engine enables a 3D graphics application to perform Boolean operations on a broader range of 3D objects. The rendering engine removes and evolves portions of surface boundaries individual 3D objects, which avoid misalignments between surface meshes of multiple 3D objects”). Min teaches a method and system that may merge two or more 3D textures into one 3D texture, while Brochu teaches a system and method that may use the 3D supports (3D models) of the meshes, merge the 3D models (supports), and render and display the 3D models (supports). Therefore, it would have been obvious to one of ordinary skill in the art to modify Min by Brochu to use the 3D supports for the texture merging and project the merged textures (meshes) onto the 3D model. The motivation to modify Min by Brochu is the “use of known technique to improve similar devices (methods, or products) in the same way”.
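For illustration only, and not as part of any cited reference, detecting that one support intersects another and computing a merged third support by a Boolean union can be sketched with the open-source trimesh library; the use of trimesh (and its optional Boolean backend, such as manifold3d or Blender) is an assumption of this sketch, not something used by Min, Reinhardt, or Brochu:

```python
# Illustrative sketch only; the cited references do not use trimesh.
import trimesh

first_support = trimesh.creation.icosphere(radius=1.0)          # cf. Brochu's sphere 201
second_support = trimesh.creation.box(extents=(1.5, 1.5, 1.5))  # cf. Brochu's cuboid 203
second_support.apply_translation((0.8, 0.0, 0.0))               # make the supports overlap

# Detect that the second support intersects the first support: a non-empty
# Boolean intersection stands in for Brochu's computed intersection contour.
overlap = first_support.intersection(second_support)
if not overlap.is_empty:
    # Compute the third 3D support by merging (Boolean union of) the two.
    third_support = first_support.union(second_support)
    print(third_support.is_watertight, len(third_support.faces))
```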
Regarding claim 2, Min, Reinhardt, and Brochu teach all the features with respect to claim 1 as outlined above. Further, Brochu teaches the computer-implemented method of claim 1, wherein the first support and the second support each include a respective tessellation, the merging of the first 3D support and the second 3D support includes computing a union of the tessellation of the first 3D support and of the tessellation of the second 3D support (See Brochu: Figs. 1-2, and [0003], “A significant issue oftentimes experienced with conventional 3D graphics applications is that the polygonal surface meshes of different 3D objects do not align properly. Misalignments can occur, for example, when the polygonal surface meshes of two different 3D objects appear to a designer or to a developer to have the same shape, but, instead, have different polygonal tessellations. In such cases, the polygonal surface meshes of the 3D objects do not align mathematically within the 3D surface model. These misalignments can cause several problems”; and [0041], “For example, 3D surface model 120 may be a sphere represented by a triangle closed surface mesh. Rendering engine 110 may perform a union Boolean operation on 3D surface model 120 and a tetrahedron shape with a closed surface mesh to form a non-regularized Boolean model”. Note that the irregular merged model is mapped to the union).
Regarding claim 3, Min, Reinhardt, and Brochu teach all the features with respect to claim 2 as outlined above. Further, Min teaches the computer-implemented method of claim 2, wherein the computing of the union further comprises:
aggregating regions of the tessellation of the first 3D support and of the tessellation of the second 3D support that do not overlap each other (See Min: Figs. 7A-B, and [0055], “According to some embodiments, mesh clipping may be performed by an automatic energy minimization procedure. In this procedure, a bounding box of the mesh overlapping regions may be considered as a clipping volume. The clipping volume may then be rasterized as a grid of voxels: V. FIG. 7A is a schematic drawings illustrating a two-dimensional view of an exemplary grid of voxels in which a mesh clipping procedure may be applied to two meshes, mesh A and mesh B, according to some embodiments. Each square in the grid represents a voxel. As illustrated, the two meshes overlap in region 710”. Note that mesh A excluding the overlapping region 710 and mesh B excluding the overlapping region 710 are merged into the final merged mesh, and these merged portions of meshes A and B are mapped to the aggregated regions that do not overlap each other); and
merging the aggregated regions by creating a new tessellation that joints the regions that overlap (See Min: Figs. 7A-B, and [0055], “According to some embodiments, mesh clipping may be performed by an automatic energy minimization procedure. In this procedure, a bounding box of the mesh overlapping regions may be considered as a clipping volume. The clipping volume may then be rasterized as a grid of voxels: V. FIG. 7A is a schematic drawings illustrating a two-dimensional view of an exemplary grid of voxels in which a mesh clipping procedure may be applied to two meshes, mesh A and mesh B, according to some embodiments. Each square in the grid represents a voxel. As illustrated, the two meshes overlap in region 710”; and [0065], “FIG. 7B is a schematic drawing illustrating the two meshes, mesh A and mesh B, as illustrated in FIG. 7A after a mesh clipping procedure has been performed, according to one embodiment”. Note that the three regions, mesh A, mesh B, and the overlapping region 710, are merged into the final mesh, and this is mapped to the cited limitation).
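For illustration only, and not Min's actual clipping procedure, the union of two tessellations by aggregating their non-overlapping regions and joining them across the overlap can be sketched as follows; in_overlap is a hypothetical predicate standing in for Min's voxelized overlap test, and the energy-minimization clipping itself is omitted:

```python
# Illustrative sketch only; Min's voxel-grid energy minimization is omitted.
from typing import Callable, List, Tuple

Point = Tuple[float, float, float]
Triangle = Tuple[Point, Point, Point]

def union_of_tessellations(
    tris_a: List[Triangle],
    tris_b: List[Triangle],
    in_overlap: Callable[[Triangle], bool],  # hypothetical voxel/overlap test
) -> List[Triangle]:
    # Aggregate the regions of each tessellation that do not overlap.
    kept = [t for t in tris_a if not in_overlap(t)]
    kept += [t for t in tris_b if not in_overlap(t)]
    # Join the aggregated regions across the overlap. Min assigns the
    # overlap to one mesh by energy minimization; this sketch simply
    # keeps mesh A's triangles there.
    kept += [t for t in tris_a if in_overlap(t)]
    return kept
```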
Regarding claim 4, Min, Reinhardt, and Brochu teach all the features with respect to claim 1 as outlined above. Further, Brochu teaches the computer-implemented method of claim 1, wherein the merging of the first 3D support and the second 3D support further comprises:
computing a surface covering a union of the first 3D support and of the second 3D support (See Brochu: Figs. 3A-D, and [0053], “FIG. 3B is a conceptual illustration of a slice of a non-regularized Boolean mesh with intersection contour areas removed, according to various embodiments of the present invention. Non-regularized Boolean mesh 310 includes sphere 301 and irregular polyhedron 303. Intersection engine 112 computes intersection contour 305 and removes intersection portions 315a-b proximate to portions of intersection contour 305. Removal of intersection portions 315a-b by intersection engine 112 forms a boundary set 311a-b, 313a-b proximate to intersection contour 305 and intersection portions 315a-b. Removal of intersection portions 315a-b also forms boundary gaps at the intersection portions 315a-b”. Note that computing the intersection contour is mapped to computing the surface covering a union); and
tessellating the computed surface (See Brochu: Figs. 3A-D, and [0058], “FIG. 3C is a conceptual illustration of a slice of a non-regularized Boolean mesh with boundaries extending through mesh evolution, according to various embodiments of the present invention. Non-regularized Boolean mesh 320 includes a boundary set. The boundary set includes boundaries 311a-b, 313a-b of the surface meshes of sphere 301 and irregular polyhedron 303, as well as portions of intersection contour 305 and interior surface mesh portions 306a-b. Mesh evolution engine 114 evolves one or more of the boundaries in the boundary set towards at least one of the other boundaries included in the boundary set”; and [0065], “FIG. 3D is a conceptual illustration of a slice of an adapted non-manifold mesh, according to various embodiments of the present invention. Adapted non-manifold 3D surface mesh 330 is a 3D surface model based on a Boolean combination of sphere 301 and irregular polyhedron 303 that includes a consistent polyhedral surface mesh. Gap engine 116 connects boundaries in the boundary set to produce closed portions 322-327 of adapted non-manifold 3D surface mesh 330. Closed portions 322-327 and other portions of the surface meshes of adapted non-manifold surface mesh 330 define distinct regions within adapted non-manifold 3D surface mesh 330, denoted by ‘1,’ ‘2,’ and ‘3’”. Note that the mesh evolution for the union (the adapted 3D surface model) is mapped to tessellating the computed surface).
Regarding claim 6, Min, Reinhardt, and Brochu teach all the features with respect to claim 1 as outlined above. Further, Min teaches the computer-implemented method of claim 1, wherein the first texture and the second texture each have a respective color, the mixing of the second texture with the first texture including blending the color of the first texture with the color of the second texture in the intersection of the second texture and the first texture (See Min: Fig. 1, and [0033], “Color inconsistencies can exist in the mesh concatenation areas, which may appear as visual artifacts in the resulting mesh. In the step of texture blending (150), color textures in the mesh concatenation areas can be blended to produce visually smooth transitions. This step of texture blending (150) aims to mix the colors from different mesh textures and gradually change blending weights in transition areas. A final merged mesh after the steps of 120, 130, 140, and 150 may then be output (160)”).
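For illustration only, the color blending with gradually changing weights described in Min [0033] corresponds to a per-pixel linear mix; the NumPy arrays and the linear weight ramp below are assumptions of this sketch, not Min's actual weights:

```python
# Illustrative sketch only: linear color blending over the intersection.
import numpy as np

def blend_colors(tex_a: np.ndarray, tex_b: np.ndarray, w: np.ndarray) -> np.ndarray:
    """tex_a, tex_b: (H, W, 3) RGB textures over the intersection region;
    w: (H, W) blending weights ramping from 0 (all tex_a) to 1 (all tex_b)."""
    return (1.0 - w[..., None]) * tex_a + w[..., None] * tex_b

# Example: a smooth horizontal transition across a 64x128 overlap band.
h, w = 64, 128
red = np.broadcast_to([1.0, 0.0, 0.0], (h, w, 3))
blue = np.broadcast_to([0.0, 0.0, 1.0], (h, w, 3))
ramp = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
mixed = blend_colors(red, blue, ramp)  # gradual red-to-blue transition
```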
Regarding claim 7, Min, Reinhardt, and Brochu teach all the features with respect to claim 1 as outlined above. Further, Min teaches the computer-implemented method of claim 1, wherein the mixing of the second texture with the first texture includes overlapping the first texture by the second texture in the intersection (See Min: Fig. 1, and [0033], “Color inconsistencies can exist in the mesh concatenation areas, which may appear as visual artifacts in the resulting mesh. In the step of texture blending (150), color textures in the mesh concatenation areas can be blended to produce visually smooth transitions. This step of texture blending (150) aims to mix the colors from different mesh textures and gradually change blending weights in transition areas. A final merged mesh after the steps of 120, 130, 140, and 150 may then be output (160)”; and Figs. 10A-B, and [0073], “In some embodiments, a texel (i.e., texture element or texture pixel) correspondence between mesh A and mesh B may be established by finding the closest points. As the number of texels may be very large, a k-d tree data structure may be used to speed up the closest neighbor search. A blending weight may be similar to the weight w used in the geometry refinement process as described above. Such a texture blending procedure may result in a gradual color change from one mesh to the other while approaching the boundaries. In some embodiments, the textures in the transition areas may be rasterized as texels and blended into the texel colors. According to one embodiment, texel blending weights may be computed using barycentric interpolation of triangle weights w from the geometry refinement process. FIG. 10B illustrates an exemplary image of the merged mesh illustrated in FIG. 10A after a texture blending process according to one embodiment”. Note that the overlapping region may contain overlapping textures from different meshes, and the texture blending process operates on the overlapped textures; this is mapped to the claimed overlapping of the first texture by the second texture in the intersection).
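For illustration only, the texel correspondence by closest-point search with a k-d tree described in Min [0073] can be sketched with SciPy's cKDTree; the per-texel weights here are hypothetical, standing in for Min's barycentrically interpolated weights w:

```python
# Illustrative sketch only: nearest-neighbor texel correspondence and blending.
import numpy as np
from scipy.spatial import cKDTree

def blend_texels(texels_a: np.ndarray, colors_a: np.ndarray,
                 texels_b: np.ndarray, colors_b: np.ndarray,
                 weights_a: np.ndarray) -> np.ndarray:
    """texels_*: (N, 3) texel positions; colors_*: (N, 3) RGB values;
    weights_a: (N,) blend weights toward mesh A in the transition area."""
    tree = cKDTree(texels_b)            # index mesh B's texels
    _, nearest = tree.query(texels_a)   # closest B texel for each A texel
    w = weights_a[:, None]
    return w * colors_a + (1.0 - w) * colors_b[nearest]
```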
Regarding claim 8, Min, Reinhardt, and Brochu teach all the features with respect to claim 7 as outlined above. Further, Reinhardt teaches the computer-implemented method of claim 7, wherein the second texture has a transparency, the overlapping of the first texture by the second texture being according to the transparency of the second texture (See Reinhardt: Fig. 5, and Col. 9 Lines 24-39, “Next, at step 503 of FIG. 5, for each image, and for each surface in that image, a determination is made as to whether a particular pixel (image element) is a visible part of that surface. Such a determination is required because, for example, some pixels may be obscured by other, closer surfaces or a part or all of the surface may lie outside the image, making those pixels invalid for texturing. The validity mask produced by this decision-making process is undistorted (e.g., using the same transformation as was used above in the "undistort step") into a rectangular image with the same pixel resolution as that used for the color information in the "undistort step" 502. This validity information, optionally interpolation filtered (resulting in partially valid pixels) is then stored as a v-channel (analogous to an alpha or a-channel encoding transparency commonly used in image processing)”. Note that the visibility of a pixel and its related texture transparency affect the blending in the overlapped region; this is mapped to overlapping the first texture by the second texture according to the transparency of the second texture).
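For illustration only, overlapping a first texture by a second texture according to the second texture's transparency is the standard "over" compositing operation, analogous to Reinhardt's v-channel (which Reinhardt compares to an alpha channel); the array shapes below are assumptions of this sketch:

```python
# Illustrative sketch only: "over" compositing by the second texture's alpha.
import numpy as np

def composite_over(first_rgb: np.ndarray, second_rgb: np.ndarray,
                   second_alpha: np.ndarray) -> np.ndarray:
    """second_alpha in [0, 1]: 1 = opaque (second texture covers the first),
    0 = transparent (first texture shows through)."""
    a = second_alpha[..., None]
    return a * second_rgb + (1.0 - a) * first_rgb
```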
Regarding claim 13, Min, Reinhardt, and Brochu teach all the features with respect to claim 1 as outlined above. Further, Min, Reinhardt, and Brochu teach a non-transitory computer readable storage medium having recorded thereon a computer program having instructions which, when executed by a computer, cause the computer to perform a method (See Min: Fig. 1, and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160). Some of these steps may be optional. As discussed above, the received meshes may be three-dimensional (3D) and may include components based on triangles (triangle mesh), quadrilaterals, or other convex polygons. Although triangle meshes are discussed herein, embodiments of the present disclosure are not limited to triangle meshes. The method 100 may be implemented by a computer system including a processor and a non-transitory computer-readable storage medium storing instructions”) for rendering two overlapping textures in a 3D scene (See Reinhardt: Fig. 5, and Col. 2 Lines 57-61, “In one embodiment, the present invention provides such a solution in a method which allows for fusing information extracted from two or more images of a scene into a single texture image for each surface of a computer-generated model of the scene”; and Col. 10 Lines 15-24, “Next, at step 505 of the flow diagram shown in FIG. 5, the sorted source textures and v-channels may be merged or fused to form a composite. At this point, a number of source textures are available in an undistorted form. Red, green, blue (i.e., color) and v-channel information is available for each. The v-channel provides a pixel validity level between 0 and 1. Although not necessary, all source textures for a particular surface are preferably available at the same pixel resolution. The task is now to combine all existing texture information from the multiple images into a single texture”. Note that combining the images of a scene into a single texture image that is rendered onto the 3D model is mapped to “rendering two overlapping textures in a 3D scene”), the method comprising:
obtaining a first 3D support (See Brochu: Figs. 1-4, and [0008], “One embodiment of the present application sets forth a computer-implemented method for generating a three-dimensional (3D) surface model. The method includes joining a first 3D object having a first closed surface mesh and a second 3D object having a second closed surface mesh to produce a first irregular surface mesh”; and [0043], “FIG. 2 is a conceptual illustration of an irregular surface mesh model formed from a conventional Boolean combination of a sphere and a rectangular cuboid. Irregular surface mesh model 200 includes 3D objects 201, 203 and intersection contour 205. Rendering engine 110, as shown in FIG. 1, is configured to perform Boolean operations on 3D objects 201, 203 by generating Irregular surface mesh model 200 based on joining 3D objects 201, 203. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as a non-regularized Boolean model. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as an intermediate 3D surface model and generate a new shape as a 3D surface mesh 120 or adapted manifold surface model 130 by performing Boolean operations on one or more surface regions included in irregular surface mesh model 200”. Note that the 3D objects 201 and 203 with closed surface meshes are mapped to the 3D supports; either can be mapped to the first or the second 3D support, but for clarity, 3D object 201 is mapped to the first 3D support) comprising a first rendered texture (See Min: Fig. 1, and [0026], “The present disclosure relates generally to methods of merging three-dimensional (3D) meshes. More specifically, some embodiments of the present disclosure relate to an automatic approach for robustly merging two or more 3D textured meshes into one large 3D textured mesh. According to some embodiments, a 3D mesh may include a polygon mesh”; and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160)”. Note that among the two or more received 3D meshes, one 3D textured mesh is mapped to the first rendered texture);
obtaining a second 3D support (See Brochu: Figs. 1-4, and [0008], “One embodiment of the present application sets forth a computer-implemented method for generating a three-dimensional (3D) surface model. The method includes joining a first 3D object having a first closed surface mesh and a second 3D object having a second closed surface mesh to produce a first irregular surface mesh”; and [0043], “FIG. 2 is a conceptual illustration of an irregular surface mesh model formed from a conventional Boolean combination of a sphere and a rectangular cuboid. Irregular surface mesh model 200 includes 3D objects 201, 203 and intersection contour 205. Rendering engine 110, as shown in FIG. 1, is configured to perform Boolean operations on 3D objects 201, 203 by generating Irregular surface mesh model 200 based on joining 3D objects 201, 203. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as a non-regularized Boolean model. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as an intermediate 3D surface model and generate a new shape as a 3D surface mesh 120 or adapted manifold surface model 130 by performing Boolean operations on one or more surface regions included in irregular surface mesh model 200”. Note that the 3D objects 201 and 203 with closed surface meshes are mapped to the 3D supports; either can be mapped to the first or the second 3D support, but for clarity, 3D object 203 is mapped to the second 3D support) comprising a second rendered texture (See Min: Fig. 1, and [0026], “The present disclosure relates generally to methods of merging three-dimensional (3D) meshes. More specifically, some embodiments of the present disclosure relate to an automatic approach for robustly merging two or more 3D textured meshes into one large 3D textured mesh. According to some embodiments, a 3D mesh may include a polygon mesh”; and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160)”. Note that among the two or more received 3D meshes, a second 3D textured mesh is mapped to the second rendered texture);
detecting that the second support intersects with the first support (See Brochu: Figs. 1-4, and [0031], “Intersection engine 112 computes intersection contours between surface meshes of two or more 3D objects on which rendering engine 110 performs Boolean operations to produce adapted 3D surface model 130. In some embodiments, intersection engine 112 removes portions of surface meshes of the 3D objects that surround the intersection contours. In some embodiments, intersection engine 112 removes the portions of the surface meshes surrounding an intersection contour by removing one or more vertices and/or edges of the polygonal mesh that intersect the intersection contour. For example, intersection engine 112 can compute an intersection contour for two overlapping spheres represented with triangle closed surface meshes. Intersection engine 112 can then, for each of the surface meshes, remove vertices and/or edges of triangles that intersect the computed intersection contour. In some embodiments, intersection engine 112 may be configured to remove a specified number of vertices and/or on the surface mesh that neighbor polygons intersecting the intersection contours”; and [0047], “Intersection contour 205 is a set of points common to both 3D objects 201, 203. Intersection contour 205 is formed when rendering engine 110 joins 3D objects 201, 203”. Note that the intersection engine computes the intersections between the two 3D objects and determines the intersection contour 205; this is mapped to the cited limitation of “detecting that the second support intersects with the first support”);
computing a third 3D support by merging the first 3D support and the second 3D support (See Brochu: Figs. 1-4, and [0007], “As the foregoing illustrates, what is needed in the art are more effective techniques for generating 3D surface models made from combinations of multiple 3D objects”; and [0043], “FIG. 2 is a conceptual illustration of an irregular surface mesh model formed from a conventional Boolean combination of a sphere and a rectangular cuboid. Irregular surface mesh model 200 includes 3D objects 201, 203 and intersection contour 205. Rendering engine 110, as shown in FIG. 1, is configured to perform Boolean operations on 3D objects 201, 203 by generating Irregular surface mesh model 200 based on joining 3D objects 201, 203. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as a non-regularized Boolean model. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as an intermediate 3D surface model and generate a new shape as a 3D surface mesh 120 or adapted manifold surface model 130 by performing Boolean operations on one or more surface regions included in irregular surface mesh model 200”. Note that the joined 3D model 200 with irregular surface meshes is mapped to the third 3D support computed by merging the first 3D support and the second 3D support);
computing a third texture by mixing the first texture and the second texture (See Min: Fig. 1, and [0028], “In some embodiments, when there are a number of input meshes to be merged into a large mesh, the input meshes may be merged sequentially in the order of their sizes. For example, the size of a mesh may be measured in terms of a volume of its bounding box. A bounding box may be defined as a smallest rectangular 3D space that contains the mesh. In some embodiments, the input meshes can be sorted to obtain a sequence in descending order of mesh sizes: {M.sub.0, M.sub.1, . . . , M.sub.k, . . . , M.sub.n} where M.sub.k is the mesh with the k-th largest volume. The merging process may start with the mesh of the largest size: M=M.sub.0. The largest mesh M.sub.0 may be sequentially merged with other meshes in the sequence to obtain a current merged mesh: M=merge (M, M.sub.k) {k=0, 1, 2 . . . n}. For instance, the largest mesh M.sub.0 can be merged with the second largest mesh M.sub.1 to obtain a merged mesh M=merge (M.sub.0, M.sub.1); then M is merged with the third biggest mesh M.sub.2 to obtain a new merged mesh M=merge (M, M.sub.2); and so on and so forth. For illustration purposes only, the following will describe the process of merging two meshes as an example to explain the merging process. It should be understood that embodiments of the present disclosure are not limited to merging only two meshes, and can be applied to merging any number of meshes”. The merged textured mesh is mapped to the third texture);
rendering the computed third texture (See Reinhardt: Figs. 2-3, and Col. 5 Lines 47-67, “FIG. 2 now illustrates a general method of creating a digital model which makes use of the methods of the present invention. Having loaded a digital image 201 (step 250), a user may then create one or more objects known as primitives (e.g., boxes, pyramids, cylinders, or other three-dimensional objects) which approximate the objects shown in the digital images (step 251). A wireframe rendering 202 of the primitives may be displayed over top of the digital image 201 (i.e., the digital representation of the photograph). The objective then, is for the user to manipulate the wireframe primitive rendering 202 using the methods of the present invention, until the wireframe precisely (or nearly precisely) coincides with the object it represents in the digital image (steps 252, 253, 254, 255 and 256). Thus, the user creates a geometric model (from the primitive(s)) right on top of the digital image 201 (i.e., the photograph(s)), without requiring the use of separate editors, windows or views. In the example shown in FIG. 2, a wireframe rendering 202 of a rectilinear box is manipulated until it coincides with the outline of a box shown in the digital image 201”. Note that the user creating and rendering the 3D model from the images and textures is mapped to the limitation of “rendering the computed third texture on the computed third 3D support”) on the computed third 3D support (See Brochu: Figs. 1-4, and [0028], “Rendering engine 110 is a software application configured to generate and/or modify 3D surface model 120 and/or adapted 3D surface model 130”. Note that rendering on the adapted 3D surface model is mapped to rendering on the computed third 3D support); and
displaying the rendered third texture (See Reinhardt: Figs. 1-3, and Col. 5 Lines 28-35, “Disk drive unit(s) 15 (or other long term storage media) may also be coupled to CPU 11 and may be used for storing the digital images and geometric and texture data for three-dimensional models as well as computer readable instructions which comprise an embodiment of the present invention. Display output is provided by a video display unit 14 coupled to CPU 11. Video display unit 14 may be a conventional display such as a liquid crystal display (LCD) or other display device”; and Col. 6 Lines 18-42, “To accomplish these objectives, as the user manipulates the wireframe renderings 202 of the primitives to align the wireframe with the underlaid image 201, constraints are added to "fix" the wireframe 202 to the image 201. For example, as shown in FIG. 3, constraints 303, 304 which constrain or fix the location of corners or edges of the wireframe projections 302 to the locations in the image 301 to which they correspond or to constrain geometrical relationships between the primitives in their three-dimensional representations are added. As the constraints are introduced into the model, new estimates for all parameters of the primitives and virtual camera(s) are calculated. Based on these new parameters, the geometric coordinates of each primitive can be calculated and projected through each virtual camera to yield an updated projected wireframe graphical representation overlaid on the image and displayed to the user. The present invention minimizes the amount of change in parameters which, with frequent enough incremental re-evaluations and reprojections yield a smooth movement of the wireframe, thus providing the user with the illusion of manipulating real three-dimensional objects made of springs or an elastic-like material. Further details regarding the various types of constraints which may be used to fix the wireframe projection to the image may be found in co-pending application Ser. No. 09/062512”. Note that displaying the rendered 3D model of the scene to the user is mapped to the limitation of “displaying the rendered third texture on the third 3D support”) on the third 3D support (See Brochu: Figs. 1-4, and [0004], “First, when the polygonal surface meshes of two different 3D objects do not align properly, the Boolean operations typically implemented by conventional 3D graphics applications cannot be performed accurately on the polygonal surface meshes. When the Boolean operations are not performed accurately, the 3D surface models resulting from the Boolean operations can include errors that are perceived as visual artifacts when the 3D surface models are displayed”. Note that the 3D surface models are displayed, which is mapped to the limitation of displaying the rendered third texture on the third 3D support).
Regarding claim 14, Min, Reinhardt, and Brochu teach all the features with respect to claim 13 as outlined above. Further, Brochu teaches the non-transitory computer readable storage medium of claim 13, wherein the first support and the second support each include a respective tessellation, the merging of the first 3D support and the second 3D support includes computing a union of the tessellation of the first 3D support and of the tessellation of the second 3D support (See Brochu: Figs. 1-2, and [0003], “A significant issue oftentimes experienced with conventional 3D graphics applications is that the polygonal surface meshes of different 3D objects do not align properly. Misalignments can occur, for example, when the polygonal surface meshes of two different 3D objects appear to a designer or to a developer to have the same shape, but, instead, have different polygonal tessellations. In such cases, the polygonal surface meshes of the 3D objects do not align mathematically within the 3D surface model. These misalignments can cause several problems”; and [0041], “For example, 3D surface model 120 may be a sphere represented by a triangle closed surface mesh. Rendering engine 110 may perform a union Boolean operation on 3D surface model 120 and a tetrahedron shape with a closed surface mesh to form a non-regularized Boolean model”. Note that the irregular merged model is mapped to the union).
Regarding claim 15, Min, Reinhardt, and Brochu teach all the features with respect to claim 14 as outlined above. Further, Min teaches the non-transitory computer readable storage medium of claim 14, wherein the computing of the union further comprises:
aggregating regions of the tessellation of the first 3D support and of the tessellation of the second 3D support that do not overlap each other (See Min: Figs. 7A-B, and [0055], “According to some embodiments, mesh clipping may be performed by an automatic energy minimization procedure. In this procedure, a bounding box of the mesh overlapping regions may be considered as a clipping volume. The clipping volume may then be rasterized as a grid of voxels: V. FIG. 7A is a schematic drawings illustrating a two-dimensional view of an exemplary grid of voxels in which a mesh clipping procedure may be applied to two meshes, mesh A and mesh B, according to some embodiments. Each square in the grid represents a voxel. As illustrated, the two meshes overlap in region 710”. Note that mesh A excluding the overlapping region 710 and mesh B excluding the overlapping region 710 are merged into the final merged mesh, and these merged portions of meshes A and B are mapped to the aggregated regions that do not overlap each other); and
merging the aggregated regions by creating a new tessellation that joints the regions that overlap (See Min: Figs. 7A-B, and [0055], “According to some embodiments, mesh clipping may be performed by an automatic energy minimization procedure. In this procedure, a bounding box of the mesh overlapping regions may be considered as a clipping volume. The clipping volume may then be rasterized as a grid of voxels: V. FIG. 7A is a schematic drawings illustrating a two-dimensional view of an exemplary grid of voxels in which a mesh clipping procedure may be applied to two meshes, mesh A and mesh B, according to some embodiments. Each square in the grid represents a voxel. As illustrated, the two meshes overlap in region 710”; and [0065], “FIG. 7B is a schematic drawing illustrating the two meshes, mesh A and mesh B, as illustrated in FIG. 7A after a mesh clipping procedure has been performed, according to one embodiment”. Note that the three regions, mesh A, mesh B, and the overlapping region 710, are merged into the final mesh, and this is mapped to the cited limitation).
Regarding claim 16, Min, Reinhardt, and Brochu teach all the features with respect to claim 13 as outlined above. Further, Brochu teaches the non-transitory computer readable storage medium of claim 13, wherein the merging of the first 3D support and the second 3D support further comprises:
computing a surface covering a union of the first 3D support and of the second 3D support (See Brochu: Figs. 3A-D, and [0053], “FIG. 3B is a conceptual illustration of a slice of a non-regularized Boolean mesh with intersection contour areas removed, according to various embodiments of the present invention. Non-regularized Boolean mesh 310 includes sphere 301 and irregular polyhedron 303. Intersection engine 112 computes intersection contour 305 and removes intersection portions 315a-b proximate to portions of intersection contour 305. Removal of intersection portions 315a-b by intersection engine 112 forms a boundary set 311a-b, 313a-b proximate to intersection contour 305 and intersection portions 315a-b. Removal of intersection portions 315a-b also forms boundary gaps at the intersection portions 315a-b”. Note that computing the intersection contour is mapped to computing the surface covering a union); and
tessellating the computed surface (See Brochu: Figs. 3A-D, and [0058], “FIG. 3C is a conceptual illustration of a slice of a non-regularized Boolean mesh with boundaries extending through mesh evolution, according to various embodiments of the present invention. Non-regularized Boolean mesh 320 includes a boundary set. The boundary set includes boundaries 311a-b, 313a-b of the surface meshes of sphere 301 and irregular polyhedron 303, as well as portions of intersection contour 305 and interior surface mesh portions 306a-b. Mesh evolution engine 114 evolves one or more of the boundaries in the boundary set towards at least one of the other boundaries included in the boundary set”; and [0065], “FIG. 3D is a conceptual illustration of a slice of an adapted non-manifold mesh, according to various embodiments of the present invention. Adapted non-manifold 3D surface mesh 330 is a 3D surface model based on a Boolean combination of sphere 301 and irregular polyhedron 303 that includes a consistent polyhedral surface mesh. Gap engine 116 connects boundaries in the boundary set to produce closed portions 322-327 of adapted non-manifold 3D surface mesh 330. Closed portions 322-327 and other portions of the surface meshes of adapted non-manifold surface mesh 330 define distinct regions within adapted non-manifold 3D surface mesh 330, denoted by ‘1,’ ‘2,’ and ‘3’”. Note that the mesh evolution for the union (the adapted 3D surface model) is mapped to tessellating the computed surface).
Regarding claim 17, Min, Reinhardt, and Brochu teach all the features with respect to claim 1 as outlined above. Further, Min, Reinhardt, and Brochu teach a system (See Min: Fig. 1, and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160). Some of these steps may be optional. As discussed above, the received meshes may be three-dimensional (3D) and may include components based on triangles (triangle mesh), quadrilaterals, or other convex polygons. Although triangle meshes are discussed herein, embodiments of the present disclosure are not limited to triangle meshes. The method 100 may be implemented by a computer system including a processor and a non-transitory computer-readable storage medium storing instructions”) comprising:
a processor coupled to a memory, the memory having recorded thereon a computer program for rendering two overlapping textures in a 3D scene (See Reinhardt: Fig. 5, and Col. 2 Lines 57-61, “In one embodiment, the present invention provides such a solution in a method which allows for fusing information extracted from two or more images of a scene into a single texture image for each surface of a computer-generated model of the scene”; and Col. 10 Lines 15-24, “Next, at step 505 of the flow diagram shown in FIG. 5, the sorted source textures and v-channels may be merged or fused to form a composite. At this point, a number of source textures are available in an undistorted form. Red, green, blue (i.e., color) and v-channel information is available for each. The v-channel provides a pixel validity level between 0 and 1. Although not necessary, all source textures for a particular surface are preferably available at the same pixel resolution. The task is now to combine all existing texture information from the multiple images into a single texture”. Note that combining the images of the scene into a single texture image and rendering it onto the 3D model is mapped to “for rendering two overlapping textures in a 3D scene”) that when executed by the processor causes the processor to be configured (See Min: Fig. 1, and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160). Some of these steps may be optional. As discussed above, the received meshes may be three-dimensional (3D) and may include components based on triangles (triangle mesh), quadrilaterals, or other convex polygons. Although triangle meshes are discussed herein, embodiments of the present disclosure are not limited to triangle meshes. The method 100 may be implemented by a computer system including a processor and a non-transitory computer-readable storage medium storing instructions”) to:
obtain a first 3D support (See Brochu: Figs. 1-4, and [0008], “One embodiment of the present application sets forth a computer-implemented method for generating a three-dimensional (3D) surface model. The method includes joining a first 3D object having a first closed surface mesh and a second 3D object having a second closed surface mesh to produce a first irregular surface mesh”; and [0043], “FIG. 2 is a conceptual illustration of an irregular surface mesh model formed from a conventional Boolean combination of a sphere and a rectangular cuboid. Irregular surface mesh model 200 includes 3D objects 201, 203 and intersection contour 205. Rendering engine 110, as shown in FIG. 1, is configured to perform Boolean operations on 3D objects 201, 203 by generating Irregular surface mesh model 200 based on joining 3D objects 201, 203. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as a non-regularized Boolean model. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as an intermediate 3D surface model and generate a new shape as a 3D surface mesh 120 or adapted manifold surface model 130 by performing Boolean operations on one or more surface regions included in irregular surface mesh model 200”. Note that the 3D objects 201 and 203 with closed surface meshes are mapped to the 3D supports; each one can be mapped to the first or the second 3D support, and, for clarity, 3D object 201 is mapped to the first 3D support) comprising a first rendered texture (See Min: Fig. 1, and [0026], “The present disclosure relates generally to methods of merging three-dimensional (3D) meshes. More specifically, some embodiments of the present disclosure relate to an automatic approach for robustly merging two or more 3D textured meshes into one large 3D textured mesh. According some embodiments, a 3D mesh may include a polygon mesh”; and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160)”. Note that, of the received two or more 3D meshes, one 3D textured mesh is mapped to the first rendered texture);
obtain a second 3D support (See Brochu: Figs. 1-4, and [0008], “One embodiment of the present application sets forth a computer-implemented method for generating a three-dimensional (3D) surface model. The method includes joining a first 3D object having a first closed surface mesh and a second 3D object having a second closed surface mesh to produce a first irregular surface mesh”; and [0043], “FIG. 2 is a conceptual illustration of an irregular surface mesh model formed from a conventional Boolean combination of a sphere and a rectangular cuboid. Irregular surface mesh model 200 includes 3D objects 201, 203 and intersection contour 205. Rendering engine 110, as shown in FIG. 1, is configured to perform Boolean operations on 3D objects 201, 203 by generating Irregular surface mesh model 200 based on joining 3D objects 201, 203. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as a non-regularized Boolean model. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as an intermediate 3D surface model and generate a new shape as a 3D surface mesh 120 or adapted manifold surface model 130 by performing Boolean operations on one or more surface regions included in irregular surface mesh model 200”. Note that the 3D objects 201 and 203 with closed surface meshes are mapped to the 3D supports; each one can be mapped to the first or the second 3D support, and, for clarity, 3D object 203 is mapped to the second 3D support) comprising a second rendered texture (See Min: Fig. 1, and [0026], “The present disclosure relates generally to methods of merging three-dimensional (3D) meshes. More specifically, some embodiments of the present disclosure relate to an automatic approach for robustly merging two or more 3D textured meshes into one large 3D textured mesh. According some embodiments, a 3D mesh may include a polygon mesh”; and [0029], “FIG. 1 is a flowchart illustrating a method 100 of merging two or more 3D meshes according to one embodiment. The method 100 may include the following steps: receive two or more input meshes (110); spatial alignment (120); mesh clipping (130); geometry refinement (140); texture blending (150); and output a merged mesh (160)”. Note that, of the received two or more 3D meshes, a second 3D textured mesh is mapped to the second rendered texture);
detect that the second support intersects with the first support (See Brochu: Figs. 1-4, and [0031], “Intersection engine 112 computes intersection contours between surface meshes of two or more 3D objects on which rendering engine 110 performs Boolean operations to produce adapted 3D surface model 130. In some embodiments, intersection engine 112 removes portions of surface meshes of the 3D objects that surround the intersection contours. In some embodiments, intersection engine 112 removes the portions of the surface meshes surrounding an intersection contour by removing one or more vertices and/or edges of the polygonal mesh that intersect the intersection contour. For example, intersection engine 112 can compute an intersection contour for two overlapping spheres represented with triangle closed surface meshes. Intersection engine 112 can then, for each of the surface meshes, remove vertices and/or edges of triangles that intersect the computed intersection contour. In some embodiments, intersection engine 112 may be configured to remove a specified number of vertices and/or on the surface mesh that neighbor polygons intersecting the intersection contours”; and [0047], “Intersection contour 205 is a set of points common to both 3D objects 201, 203. Intersection contour 205 is formed when rendering engine 110 joins 3D objects 201, 203”. Note that the intersection engine computes the intersections between the two 3D objects and determines the intersection contour 205, which is mapped to the cited limitation of “detecting that the second support intersects with the first support”);
compute a third 3D support by the processor being further configured to merge the first 3D support and the second 3D support (See Brochu: Figs. 1-4, and [0007], “As the foregoing illustrates, what is needed in the art are more effective techniques for generating 3D surface models made from combinations of multiple 3D objects”; and [0043], “FIG. 2 is a conceptual illustration of an irregular surface mesh model formed from a conventional Boolean combination of a sphere and a rectangular cuboid. Irregular surface mesh model 200 includes 3D objects 201, 203 and intersection contour 205. Rendering engine 110, as shown in FIG. 1, is configured to perform Boolean operations on 3D objects 201, 203 by generating Irregular surface mesh model 200 based on joining 3D objects 201, 203. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as a non-regularized Boolean model. In some embodiments, rendering engine 110 may generate irregular surface mesh model 200 as an intermediate 3D surface model and generate a new shape as a 3D surface mesh 120 or adapted manifold surface model 130 by performing Boolean operations on one or more surface regions included in irregular surface mesh model 200”. Note that the joined 3D model 200 with irregular surface meshes is mapped to the third 3D support computed by merging the first 3D support and the second 3D support);
compute a third texture by mixing the first texture and the second texture (See Min: Fig. 1, and [0028], “In some embodiments, when there are a number of input meshes to be merged into a large mesh, the input meshes may be merged sequentially in the order of their sizes. For example, the size of a mesh may be measured in terms of a volume of its bounding box. A bounding box may be defined as a smallest rectangular 3D space that contains the mesh. In some embodiments, the input meshes can be sorted to obtain a sequence in descending order of mesh sizes: {M.sub.0, M.sub.1, . . . , M.sub.k, . . . , M.sub.n} where M.sub.k is the mesh with the k-th largest volume. The merging process may start with the mesh of the largest size: M=M.sub.0. The largest mesh M.sub.0 may be sequentially merged with other meshes in the sequence to obtain a current merged mesh: M=merge (M, M.sub.k) {k=0, 1, 2 . . . n}. For instance, the largest mesh M.sub.0 can be merged with the second largest mesh M.sub.1 to obtain a merged mesh M=merge (M.sub.0, M.sub.1); then M is merged with the third biggest mesh M.sub.2 to obtain a new merged mesh M=merge (M, M.sub.2); and so on and so forth. For illustration purposes only, the following will describe the process of merging two meshes as an example to explain the merging process. It should be understood that embodiments of the present disclosure are not limited to merging only two meshes, and can be applied to merging any number of meshes”. Note that the merged textured mesh is mapped to the third texture);
render the computed third texture (See Reinhardt: Figs. 2-3, and Col. 5 Lines 47-67, “FIG. 2 now illustrates a general method of creating a digital model which makes use of the methods of the present invention. Having loaded a digital image 201 (step 250), a user may then create one or more objects known as primitives (e.g., boxes, pyramids, cylinders, or other three-dimensional objects) which approximate the objects shown in the digital images (step 251). A wireframe rendering 202 of the primitives may be displayed over top of the digital image 201 (i.e., the digital representation of the photograph). The objective then, is for the user to manipulate the wireframe primitive rendering 202 using the methods of the present invention, until the wireframe precisely (or nearly precisely) coincides with the object it represents in the digital image (steps 252, 253, 254, 255 and 256). Thus, the user creates a geometric model (from the primitive(s)) right on top of the digital image 201 (i.e., the photograph(s)), without requiring the use of separate editors, windows or views. In the example shown in FIG. 2, a wireframe rendering 202 of a rectilinear box is manipulated until it coincides with the outline of a box shown in the digital image 201”. Note that the user employing the images and textures to create and render the 3D model is mapped to the limitation of “rendering the computed third texture on the computed third 3D support”) on the computed third 3D support (See Brochu: Figs. 1-4, and [0028], “Rendering engine 110 is a software application configured to generate and/or modify 3D surface model 120 and/or adapted 3D surface model 130”. Note that rendering on the adapted 3D surface model is mapped to rendering on the computed third 3D support); and
display the rendered third texture (See Reinhardt: Figs. 1-3, and Col. 5 Lines 28-35, “Disk drive unit(s) 15 (or other long term storage media) may also coupled to CPU 11 and may be used for storing the digital images and geometric and texture data for three-dimensional models as well as computer readable instructions which comprise an embodiment of the present invention. Display output is provided by a video display unit 14 coupled to CPU 11. Video display unit 14 may be a conventional display such as a liquid crystal display (LCD) or other display device”; and Col. 6 Lines 18-42, “To accomplish these objectives, as the user manipulates the wireframe renderings 202 of the primitives to align the wireframe with the underlaid image 201, constraints are added to "fix" the wireframe 202 to the image 201. For example, as shown in FIG. 3, constraints 303, 304 which constrain or fix the location of corners or edges of the wireframe projections 302 to the locations in the image 301 to which they correspond or to constrain geometrical relationships between the primitives in their three-dimensional representations are added. As the constraints are introduced into the model, new estimates for all parameters of the primitives and virtual camera(s) are calculated. Based on these new parameters, the geometric coordinates of each primitive can be calculated and projected through each virtual camera to yield an updated projected wireframe graphical representation overlaid on the image and displayed to the user. The present inventions minimizes the amount of change in parameters which, with frequent enough incremental re-evaluations and reprojections yield a smooth movement of the wireframe, thus providing the user with the illusion of manipulating real three-dimensional objects made of springs or an elastic-like material. Further details regarding the various types of constraints which may be used to fix the wireframe projection to the image may be found in co-pending application Ser. No. 09/062512”. Note that displaying the rendered output 3D model of the scene to the user is mapped to the limitation of “displaying the rendered third texture on the third 3D support”) on the third 3D support (See Brochu: Figs. 1-4, and [0004], “First, when the polygonal surface meshes of two different 3D objects do not align properly, the Boolean operations typically implemented by conventional 3D graphics applications cannot be performed accurately on the polygonal surface meshes. When the Boolean operations are not performed accurately, the 3D surface models resulting from the Boolean operations can include errors that are perceived as visual artifacts when the 3D surface models are displayed”. Note that the 3D surface models are displayed, which is mapped to the limitation of (displaying textures) on the third 3D support). An illustrative sketch of this claim mapping is set out immediately below.
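For illustration only, and not as part of the record, the following Python sketch walks through the claim 17 limitations end to end on deliberately simplified data. All names are hypothetical; the bounding-box test stands in for a full intersection computation, mesh concatenation stands in for the merge, and an alpha blend stands in for the texture mixing described by Min.

    import numpy as np

    def aabb(verts):
        """Axis-aligned bounding box of a vertex array."""
        return verts.min(axis=0), verts.max(axis=0)

    def intersects(verts_a, verts_b):
        """Detect that the second support intersects the first (bounding-box test)."""
        (a_lo, a_hi), (b_lo, b_hi) = aabb(verts_a), aabb(verts_b)
        return bool(np.all(a_hi >= b_lo) and np.all(b_hi >= a_lo))

    def merge_supports(va, fa, vb, fb):
        """Compute a third support: concatenate meshes, re-indexing B's faces."""
        return np.vstack([va, vb]), np.vstack([fa, fb + len(va)])

    def mix_textures(tex_a, tex_b, alpha=0.5):
        """Compute a third texture as a weighted blend of the first two."""
        return (alpha * tex_a + (1.0 - alpha) * tex_b).astype(tex_a.dtype)

    # Two overlapping unit quads stand in for the first and second 3D supports.
    va = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
    vb = va + np.array([0.5, 0.5, 0.0])
    fa = np.array([[0, 1, 2], [0, 2, 3]])
    fb = fa.copy()
    tex_a = np.full((8, 8, 3), 255, dtype=np.uint8)  # first rendered texture
    tex_b = np.zeros((8, 8, 3), dtype=np.uint8)      # second rendered texture

    if intersects(va, vb):
        verts_c, faces_c = merge_supports(va, fa, vb, fb)  # third 3D support
        tex_c = mix_textures(tex_a, tex_b)                 # third texture
        # Rendering and display of tex_c on (verts_c, faces_c) would be
        # delegated to a graphics pipeline; it is omitted from this sketch.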
Regarding claim 18, Min, Reinhardt, and Brochu teach all the features with respect to claim 17 as outlined above. Further, Brochu teaches that the system of claim 17, wherein the first support and the second support each include a respective tessellation, the merging of the first 3D support and the second 3D support including computing a union of the tessellation of the first 3D support and of the tessellation of the second 3D support (See Brochu: Figs. 1-2, and [0003], “A significant issue oftentimes experienced with conventional 3D graphics applications is that the polygonal surface meshes of different 3D objects do not align properly. Misalignments can occur, for example, when the polygonal surface meshes of two different 3D objects appear to a designer or to a developer to have the same shape, but, instead, have different polygonal tessellations. In such cases, the polygonal surface meshes of the 3D objects do not align mathematically within the 3D surface model. These misalignments can cause several problems”; and [0041], “For example, 3D surface model 120 may be a sphere represented by a triangle closed surface mesh. Rendering engine 110 may perform a union Boolean operation on 3D surface model 120 and a tetrahedron shape with a closed surface mesh to form a non-regularized Boolean model”. Note that the irregular merged model is mapped to the union).
Regarding claim 19, Min, Reinhardt, and Brochu teach all the features with respect to claim 18 as outlined above. Further, Min teaches that the system of claim 18, wherein the processor is further configured to compute the union by being configured to:
aggregate regions of the tessellation of the first 3D support and of the tessellation of the second 3D support that do not overlap each other (See Min: Figs. 7A-B, and [0055], “According to some embodiments, mesh clipping may be performed by an automatic energy minimization procedure. In this procedure, a bounding box of the mesh overlapping regions may be considered as a clipping volume. The clipping volume may then be rasterized as a grid of voxels: V. FIG. 7A is a schematic drawings illustrating a two-dimensional view of an exemplary grid of voxels in which a mesh clipping procedure may be applied to two meshes, mesh A and mesh B, according to some embodiments. Each square in the grid represents a voxel. As illustrated, the two meshes overlap in region 710”. Note that mesh A excluding the overlapping region 710 and mesh B excluding the overlapping region 710 are merged into the final merged mesh, and these merged portions of meshes A and B are mapped to aggregating regions that do not overlap each other); and
merge the aggregated regions by creating a new tessellation that joints the regions that overlap (See Min: Figs. 7A-B, and [0055], “According to some embodiments, mesh clipping may be performed by an automatic energy minimization procedure. In this procedure, a bounding box of the mesh overlapping regions may be considered as a clipping volume. The clipping volume may then be rasterized as a grid of voxels: V. FIG. 7A is a schematic drawings illustrating a two-dimensional view of an exemplary grid of voxels in which a mesh clipping procedure may be applied to two meshes, mesh A and mesh B, according to some embodiments. Each square in the grid represents a voxel. As illustrated, the two meshes overlap in region 710”; and [0065], “FIG. 7B is a schematic drawing illustrating the two meshes, mesh A and mesh B, as illustrated in FIG. 7A after a mesh clipping procedure has been performed, according to one embodiment”. Note that the three regions, mesh A, mesh B, and the overlapping region 710, are merged into the final mesh, which is mapped to the currently cited claim limitation; an illustrative sketch follows).
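For illustration only, and not as part of the record, the following Python sketch mimics the clipping of Min's Figs. 7A-7B on a 2D occupancy grid. The grid, the occupancies, and the fixed cut column are all hypothetical; the fixed cut merely stands in for the clipping boundary that Min's energy-minimization procedure would compute.

    import numpy as np

    # 2D voxel-grid stand-in for the clipping volume: True marks occupied voxels.
    mesh_a = np.zeros((6, 10), dtype=bool)
    mesh_b = np.zeros((6, 10), dtype=bool)
    mesh_a[:, :6] = True   # mesh A occupies the left portion of the grid
    mesh_b[:, 4:] = True   # mesh B occupies the right portion of the grid

    overlap = mesh_a & mesh_b    # the overlapping region (cf. region 710)
    a_only = mesh_a & ~overlap   # region of A that does not overlap B
    b_only = mesh_b & ~overlap   # region of B that does not overlap A

    # Join the aggregated regions by assigning each overlap voxel to exactly
    # one side along a hypothetical cut (stand-in for the minimized-energy seam).
    cut = np.arange(mesh_a.shape[1]) < 5
    keep_a = overlap & cut       # overlap voxels clipped to mesh A's side
    keep_b = overlap & ~cut      # overlap voxels clipped to mesh B's side
    merged = a_only | keep_a | b_only | keep_b

    # The union is fully covered and no voxel is counted twice.
    assert merged.sum() == (mesh_a | mesh_b).sum()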
Regarding claim 20, Min, Reinhardt, and Brochu teach all the features with respect to claim 17 as outlined above. Further, Brochu teaches that the system of claim 17, wherein the processor is further configured to merge the first 3D support and the second 3D support by the processor being further configured to:
compute a surface covering a union of the first 3D support and of the second 3D support (See Brochu: Figs. 3A-D, and [0053], “FIG. 3B is a conceptual illustration of a slice of a non-regularized Boolean mesh with intersection contour areas removed, according to various embodiments of the present invention. Non-regularized Boolean mesh 310 includes sphere 301 and irregular polyhedron 303. Intersection engine 112 computes intersection contour 305 and removes intersection portions 315a-b proximate to portions of intersection contour 305. Removal of intersection portions 315a-b by intersection engine 112 forms a boundary set 311a-b, 313a-b proximate to intersection contour 305 and intersection portions 315a-b. Removal of intersection portions 315a-b also forms boundary gaps at the intersection portions 315a-b”. Note that computing the intersection contour is mapped to computing the surface covering a union); and
tessellate the computed surface (See Brochu: Figs. 3A-D, and [0058], “FIG. 3C is a conceptual illustration of a slice of a non-regularized Boolean mesh with boundaries extending through mesh evolution, according to various embodiments of the present invention. Non-regularized Boolean mesh 320 includes a boundary set. The boundary set includes boundaries 311a-b, 313a-b of the surface meshes of sphere 301 and irregular polyhedron 303, as well as portions of intersection contour 305 and interior surface mesh portions 306a-b. Mesh evolution engine 114 evolves one or more of the boundaries in the boundary set towards at least one of the other boundaries included in the boundary set”; and [0065], “FIG. 3D is a conceptual illustration of a slice of an adapted non-manifold mesh, according to various embodiments of the present invention. Adapted non-manifold 3D surface mesh 330 is a 3D surface model based on a Boolean combination of sphere 301 and irregular polyhedron 303 that includes a consistent polyhedral surface mesh. Gap engine 116 connects boundaries in the boundary set to produce closed portions 322-327 of adapted non-manifold 3D surface mesh 330. Closed portions 322-327 and other portions of the surface meshes of adapted non-manifold surface mesh 330 define distinct regions within adapted non-manifold 3D surface mesh 330, denoted by “1”, “2,” and “3.””. Note that the mesh evolution for the union (adapted 3D surface model) is mapped to tessellating the computed surface).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Min et al. (US 20170301133 A1) in view of Reinhardt et al. (US 6281904 B1), further in view of Brochu (US 20190147649 A1) and van Beek et al. (US 6047088 A).
Regarding claim 5, Min, Reinhardt, and Brochu teach all the features with respect to claim 1 as outlined above. However, Min, modified by Reinhardt and Brochu, fails to explicitly disclose that the method of claim 1, wherein the one or more first points and the one or more second points are coplanar, the merging of the first 3D support and the second 3D support including determining a rectangular surface including each of the one or more first points and of the one or more second points, the determined rectangular surface consisting in two triangles.
However, van Beek teaches that the method of claim 1, wherein the one or more first points and the one or more second points are coplanar, the merging of the first 3D support and the second 3D support including determining a rectangular surface including each of the one or more first points and of the one or more second points, the determined rectangular surface consisting in two triangles (See van Beek: Figs. 7-12, and Col. 5 Lines 22-28, “Because the initial 2D triangular mesh is either a uniform mesh or a Delaunay mesh, the mesh triangular topology (links between node points) is not encoded; only the 2D node point coordinates p.sub.n =(x.sub.n, y.sub.n) are encoded. In the bit stream, a special flag may specify whether the initial mesh is uniform or Delaunay. See Table 8, below”; and Col. 12 Lines 15-23, “As previously stated, five parameters specify the geometry of a uniform mesh (Table 5). The first two decoded parameters specify the number of nodes in the horizontal and vertical, respectively, direction of the uniform mesh. The next two decoded parameters specify horizontal and vertical size of each rectangle (containing two triangles) in units accurate to half pixel units. The last parameter specifies how each rectangle is split into two triangles”. Note that in a uniform mesh, each rectangle is tessellated into two triangle primitives, and many nodes (points) are tessellated in this way (one rectangle divided into two triangles); this tessellation is mapped to the currently cited claim limitation, as illustrated in the sketch below).
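For illustration only, and not as part of the record, the following Python sketch builds a uniform mesh from van Beek-style parameters (node counts, rectangle size, and a flag selecting the split diagonal), with each rectangle divided into two triangles. The function name and its defaults are hypothetical.

    import numpy as np

    def uniform_mesh(n_cols, n_rows, dx, dy, split_tl_br=True):
        """Uniform triangular mesh: a grid of nodes whose rectangles are each
        split into two triangles along the diagonal selected by the flag."""
        verts = np.array([(i * dx, j * dy) for j in range(n_rows) for i in range(n_cols)])
        tris = []
        for j in range(n_rows - 1):
            for i in range(n_cols - 1):
                v0 = j * n_cols + i              # lower-left node of the rectangle
                v1, v2, v3 = v0 + 1, v0 + n_cols, v0 + n_cols + 1
                if split_tl_br:
                    tris += [(v0, v1, v3), (v0, v3, v2)]
                else:
                    tris += [(v0, v1, v2), (v1, v3, v2)]
        return verts, np.array(tris)

    verts, tris = uniform_mesh(4, 3, 0.5, 0.5)
    assert len(tris) == 2 * (4 - 1) * (3 - 1)    # two triangles per rectangle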
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Min to have the method of claim 1, wherein the one or more first points and the one or more second points are coplanar, the merging of the first 3D support and the second 3D support including determining a rectangular surface including each of the one or more first points and of the one or more second points, the determined rectangular surface consisting in two triangles, as taught by van Beek, in order to provide a system and method for encoding and decoding a mesh and the displacement of its node points from one frame time instant to the next. Min teaches a method and system that may merge two or more 3D textures into one 3D texture, while van Beek teaches a system and method that may tessellate a surface with uniform meshes that divide the surface into multiple points (nodes) and rectangles, each rectangle consisting of two triangle primitives. Therefore, it would have been obvious to one of ordinary skill in the art to modify Min by van Beek to tessellate the 3D supports with uniform meshes consisting of rectangles, each rectangle divided into two triangles. The motivation to modify Min by van Beek is “Use of known technique to improve similar devices (methods, or products) in the same way”.
Claims 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over Min et al. (US 20170301133 A1) in view of Reinhardt et al. (US 6281904 B1), further in view of Brochu (US 20190147649 A1) and Schmidt (US 20140253548 A1).
Regarding claim 9, Min, Reinhardt, and Brochu teach all the features with respect to claim 1 as outlined above. However, Min, modified by Reinhardt and Brochu, fails to explicitly disclose that the computer-implemented method of claim 1, wherein the obtaining of each 3D support further comprises: determining, from a user-input performed with an input device, one or more points in the 3D scene to be textured; computing the 3D support comprising the determined one or more points to be textured; computing the texture based on the determined one or more points; and rendering the computed texture on the computed 3D support.
However, Schmidt teaches that the computer-implemented method of claim 1, wherein the obtaining of each 3D support further comprises:
determining, from a user-input performed with an input device, one or more points in the 3D scene to be textured (See Schmidt: Figs. 2-4, and [0038], “Point 402 may be a point within stroke 202 or a point within geodesic trace 204. In general, in the example discussed herein, point 402 is a point for which stroke parameterization engine 114 has already generated a UV coordinate. Stroke parameterization engine 114 is configured to generate a UV coordinate estimate for point 410 based on the already-generated UV coordinate of point 402 and based on tangent-normal frame 400. In doing so, stroke parameterization engine 114 is configured to project point 410 into the plane defined by axes 406 and 408 along path 412 to a position 414. Stroke parameterization engine 114 could use any geometrically or mathematically feasible technique for projecting point 410 into the plane defined by axes 406 and 408”. Note that the points within the user input strokes are mapped to the claim cited limitation of determining the points in the 3D scene according to the user input);
computing the 3D support comprising the determined one or more points to be textured (See Schmidt: Figs. 1-5, and [0046], “Stroke parameterization engine 114 is also configured to generate a 3D model of a stroke and associated geodesic trace in order to project a texture map onto the stroke along the geodesic trace in situations where portions of that stroke overlap other portions of the stroke, as discussed in greater detail below in conjunction with FIGS. 5A-5B”. Note that the 3D model is mapped to the 3D support);
computing the texture based on the determined one or more points (See Schmidt: Figs. 2-4, and [0032], “FIG. 2B is a conceptual diagram that illustrates stroke 202 and geodesic trace 204 shown in FIG. 2A parameterized with UV coordinates, according to one embodiment of the invention. As shown, stroke 202 and geodesic trace 204 are parameterized along U axis 210 and V axis 212 associated with texture map 112 (also shown in FIG. 2A). Stroke parameterization engine 114 is configured to parameterize stroke 202 so that stroke 202 resides along U axis 210. Stroke parameterization engine 114 is configured to parameterize geodesic trace 204 in similar fashion, i.e. along U axis 210. Stroke parameterization engine 114 may use any technically feasible parameterization technique to convert the 3D surface coordinates associated with stroke 202 and geodesic trace 204 into the 2D coordinates associated with texture map 112”. Note that parameterizing the strokes in UV coordinates and converting the 3D surface coordinates of the strokes into 2D coordinates associated with the texture map is mapped to computing the textures based on the points determined by the strokes); and
rendering the computed texture on the computed 3D support (See Schmidt: Figs. 5A-B, and [0047], “FIG. 5A is a conceptual diagram that illustrates texture map 112 shown in FIG. 2A projected onto a geodesic trace 506 on the surface of 3D model 110 (also shown in FIG. 2A), according to one embodiment of the present invention. As shown, stroke 502 is disposed on the surface of 3D model 110. Similar to stroke 202 shown in FIG. 2A, stroke 502 defines a 3D polyline that follows a geodesic associated with the surface of 3D model 110, and geodesic trace 504 defines a region proximate to stroke 502. The end-user of rendering engine 108 may interact with the GUI provided by rendering engine 108 in order to draw or paint stroke 502 on the surface of 3D model 110”. Note that rendering and drawing the strokes are mapped to rendering the computed texture on the computed 3D support; an illustrative sketch follows).
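For illustration only, and not as part of the record, the following Python sketch reduces Schmidt's stroke parameterization to one dimension: hypothetical user-input points are assigned U coordinates by normalized arc length, and a texture is sampled along the resulting coordinates. The helper names and the nearest-neighbor lookup are simplifications, not Schmidt's actual implementation.

    import numpy as np

    def parameterize_stroke(points):
        """Assign a U coordinate in [0, 1] to each stroke point by arc length."""
        pts = np.asarray(points, dtype=float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
        u = np.concatenate([[0.0], np.cumsum(seg)])
        return u / u[-1]

    def sample_texture(tex, u, v):
        """Nearest-neighbor texture lookup at (u, v) in [0, 1] x [0, 1]."""
        h, w = tex.shape[:2]
        return tex[int(round(v * (h - 1))), int(round(u * (w - 1)))]

    stroke = [(0, 0, 0), (1, 0.2, 0), (2, 0.1, 0.3), (3, 0, 0.5)]  # user-input points
    u_coords = parameterize_stroke(stroke)
    tex = np.random.rand(16, 16, 3)                 # hypothetical texture map
    colors = [sample_texture(tex, u, 0.5) for u in u_coords]  # texture along the stroke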
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Min to have the computer-implemented method of claim 1, wherein the obtaining of each 3D support further comprises: determining, from a user-input performed with an input device, one or more points in the 3D scene to be textured; computing the 3D support comprising the determined one or more points to be textured; computing the texture based on the determined one or more points; and rendering the computed texture on the computed 3D support, as taught by Schmidt, in order to produce a continuous-looking texture along the stroke. Min teaches a method and system that may merge two or more 3D textures into one 3D texture, while Schmidt teaches a system and method that may specify points in the 3D scene, generate the textures associated with the specified points, and render the textures on the specified points. Therefore, it would have been obvious to one of ordinary skill in the art to modify Min by Schmidt to allow users to select points in the 3D scene with textures rendered on those points, to obtain a continuous-looking texture. The motivation to modify Min by Schmidt is “Use of known technique to improve similar devices (methods, or products) in the same way”.
Regarding claim 10, Min, Reinhardt, Brochu, and Schmidt teach all the features with respect to claim 9 as outlined above. Further, Schmidt teaches that the computer-implemented method of claim 9, wherein the obtaining of each 3D support further comprises:
parametrizing the one or more points to be textured, wherein computing the texture comprises computing the texture based on the parametrized one or more points (See Schmidt: Figs. 1-3, and [0029], “In practice, the stroke may be generated based on input received from the end-user. For example, the end-user could select a paintbrush tool from the GUI generated by rendering engine 108, and then paint the stroke on the surface of the 3D model. Stroke parameterization engine 114 is configured to parameterize the stroke using the UV coordinates associated with texture map 112, and to then estimate UV coordinates for various portions of the model proximate to the stroke. Stroke parameterization engine 114 may then project texture map 112 onto the surface of 3D model 110 using those estimated UV coordinates, as discussed in greater detail below in conjunction with FIGS. 2A-5B and 7-9. Stroke parameterization engine 114 is also configured to identify various features associated with 3D model 110 that reside proximate to the stroke disposed on the surface of 3D model 110, as discussed in greater detail below in conjunction with FIGS. 6A-6B and 9”).
Regarding claim 11, Min, Reinhardt, Brochu, and Schmidt teach all the features with respect to claim 9 as outlined above. Further, Schmidt teaches that the computer-implemented method of claim 9, wherein the 3D scene comprises a 3D modeled object, the computing of the 3D support further comprising:
placing each of the one or more points on the 3D modeled object (See Schmidt: Figs. 1-5, and [0049], “Stroke parameterization engine 114 is configured to generate the stroke model by placing geodesic circle 506 at sequential locations along stroke 502. For each such location, stroke parameterization engine 114 identifies a set of points associated with 3D model 110 that are included within geodesic circle 506. Stroke parameterization engine 114 slides geodesic circle 506 incrementally in this fashion along stroke 502, and for each identified set of points, stroke parameterization engine 114 determines a subset of points that were not included within the previous set of points (i.e. the points included within geodesic circle 506 at a previous location along stroke 502). Stroke parameterization engine 114 collects each subset of points determined in this fashion to the stroke model. As a result, stroke parameterization engine 114 generates at least one copy of each point within stroke 502 and geodesic trace 504, and may also generate multiple copies of points residing within self-intersecting portions of stroke 502. Stroke parameterization engine 114 may then perform the stroke parameterization and texture mapping approach described above in conjunction with FIGS. 2A-4 using the stroke model, as further described below in conjunction with FIG. 5B”. Note that placing the circle along the stroke and associating it with locations on the 3D model is mapped to placing points onto the 3D modeled object); and
determining a part of the 3D modeled object serving as the 3D support (See Schmidt: Figs. 1-7, and [0059], “At step 704, stroke parameterization engine 114 converts the stroke to a 3D polyline. The 3D polyline may include one or different points, such as, e.g. points 202-0, 202-1, 202-2, and 202-3 shown in FIG. 3A. At step 706, stroke parameterization engine 114 generates a geodesic trace around the polyline that defines a region proximate to the polyline. The geodesic trace could be, for example, geodesic trace 204 shown in FIGS. 2A-3B and 5A-6B. Stroke parameterization engine 114 may generate the geodesic placing a geodesic circle at each sequential point within the polyline and identifying portions of the 3D model (such as, e.g., vertices or points, etc.) that fall within the geodesic circle. The geodesic circle may be a 2D circle that resides tangent to the surface of the 3D model at different points within the polyline”. Note that the 3D model portion is identified by placing the circle on the stroke associated with the 3D model, which is mapped to determining the part of the 3D modeled object serving as the 3D support; an illustrative sketch follows).
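For illustration only, and not as part of the record, the following Python sketch approximates sliding Schmidt's geodesic circle along a stroke with a brute-force Euclidean radius query, collecting the vertices of a hypothetical 3D model that would serve as the 3D support. A Euclidean distance test is a substitute for a true geodesic query and is used here only to keep the sketch self-contained.

    import numpy as np

    def support_region(model_verts, stroke_pts, radius):
        """Indices of model vertices within `radius` of any stroke sample
        (Euclidean stand-in for a geodesic-circle query)."""
        model = np.asarray(model_verts, dtype=float)
        selected = np.zeros(len(model), dtype=bool)
        for p in np.asarray(stroke_pts, dtype=float):
            selected |= np.linalg.norm(model - p, axis=1) <= radius
        return np.flatnonzero(selected)

    model = np.random.rand(500, 3)   # hypothetical model vertex positions
    stroke = [(0.2, 0.2, 0.2), (0.5, 0.5, 0.5), (0.8, 0.8, 0.8)]
    part = support_region(model, stroke, radius=0.15)  # part serving as the support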
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Min et al. (US 20170301133 A1) in view of Reinhardt et al. (US 6281904 B1), further in view of Brochu (US 20190147649 A1), Schmidt (US 20140253548 A1), and van Beek et al. (US 6047088 A).
Regarding claim 12, Min, Reinhardt, Brochu, and Schmidt teach all the features with respect to claim 9 as outlined above. However, Min, modified by Reinhardt, Brochu, and Schmidt, fails to explicitly disclose that the computer-implemented method of claim 9, wherein the one or more points are coplanar, the computing of the 3D support including determining a rectangular surface including each of the one or more points, the determined rectangular surface consisting in two triangles.
However, van Beek teaches that the computer-implemented method of claim 9, wherein the one or more points are coplanar, the computing of the 3D support including determining a rectangular surface including each of the one or more points, the determined rectangular surface consisting in two triangles (See van Beek: Figs. 7-12, and Col. 5 Lines 22-28, “Because the initial 2D triangular mesh is either a uniform mesh or a Delaunay mesh, the mesh triangular topology (links between node points) is not encoded; only the 2D node point coordinates p.sub.n =(x.sub.n, y.sub.n) are encoded. In the bit stream, a special flag may specify whether the initial mesh is uniform or Delaunay. See Table 8, below”; and Col. 12 Lines 15-23, “As previously stated, five parameters specify the geometry of a uniform mesh (Table 5). The first two decoded parameters specify the number of nodes in the horizontal and vertical, respectively, direction of the uniform mesh. The next two decoded parameters specify horizontal and vertical size of each rectangle (containing two triangles) in units accurate to half pixel units. The last parameter specifies how each rectangle is split into two triangles”. Note that in a uniform mesh, each rectangle is tessellated into two triangle primitives, and many nodes (points) are tessellated in this way (one rectangle divided into two triangles); this tessellation is mapped to the currently cited claim limitation, as in the sketch set out above with respect to claim 5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Min to have the computer-implemented method of claim 9, wherein the one or more points are coplanar, the computing of the 3D support including determining a rectangular surface including each of the one or more points, the determined rectangular surface consisting in two triangles, as taught by van Beek, in order to provide a system and method for encoding and decoding a mesh and the displacement of its node points from one frame time instant to the next. Min teaches a method and system that may merge two or more 3D textures into one 3D texture, while van Beek teaches a system and method that may tessellate a surface with uniform meshes that divide the surface into multiple points (nodes) and rectangles, each rectangle consisting of two triangle primitives. Therefore, it would have been obvious to one of ordinary skill in the art to modify Min by van Beek to tessellate the 3D supports with uniform meshes consisting of rectangles, each rectangle divided into two triangles. The motivation to modify Min by van Beek is “Use of known technique to improve similar devices (methods, or products) in the same way”.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GORDON G LIU whose telephone number is (571)270-0382. The examiner can normally be reached Monday - Friday 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona E Faulk can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GORDON G LIU/ Primary Examiner, Art Unit 2618