DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
2. The information disclosure statement (IDS) submitted on 11/14/2024 is in compliance with the provisions of 37 CFR 1.97 and is being considered by the Examiner.
Drawings
3. The drawings are objected to because:
The label "ROM 706" as used in Figure 7 and the corresponding reference in paragraph [00141] are mismatched. It appears to the Examiner that the specification's "read only memory (ROM) 708" refers to ROM 706 of Figure 7.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 112
4. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
5. Claims 9-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for pre-AIA applications, the applicant) regards as the invention.
Claim 9, line 2 recites "a character model". Claim 1, lines 3-5 previously introduce both "a first character model" and "a second character model" in the limitation "a second character model based on a shape and a size of the three-dimensional digital objects fit for a first character model". It is not clear whether "a character model" in claim 9, line 2 refers to the first character model or the second character model of claim 1.
Dependent claims 10-14 are rejected under the same rationale.
Claim Rejections - 35 USC § 103
6. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
7. Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Supancic, III, ("Supancic") [US-2022/0207843-A1].
Regarding claim 1, Supancic discloses a method (Fig. 1 and ¶0016, at least disclose methods for modifying three-dimensional digital items to fit different character models) comprising:
storing, at a server computer, a machine learning system configured to compute a shape and a size of three-dimensional digital objects fit for a second character model based on a shape and a size of the three-dimensional digital objects fit for a first character model (Fig. 1 shows server computer 120 and machine learning systems 122 a-122n; ¶0017, at least discloses a method comprises storing, at a server computer, a machine learning system configured to compute a shape and size of three-dimensional digital objects to fit a second character model based on the shape and size that the same three-dimensional digital objects have to fit a first character model; ¶0021, at least discloses digital item data store 100 stores digital items 102 a-102 m. Each of digital items 102 a-102 m comprise data defining three-dimensional model-specific digital items. As an example, digital items 102 a-102 m may comprise three-dimensional cosmetic items designed to be fit to character models for three-dimensional rendering. The digital items 102 a-102 m may be defined based on a size and shape of the digital items, such as through vertices on a three-dimensional mesh; ¶0027, at least discloses Server computer 120 may store trained machine learning systems 122 a-122 n and graphical user interface instructions 124);
generating, using the machine learning system, a base transform matrix corresponding to a first exemplary three-dimensional digital object fit for the first character model and a second exemplary three-dimensional digital object fit for the second character model (Fig. 2 and ¶0011, at least disclose method of training and utilizing a machine learning system configured to compute a shape and size of three-dimensional digital objects to fit a second character model based on the shape and size that the same three-dimensional digital objects have to fit a first character model; ¶0040, at least discloses Machine learning systems for computing a shape and size of three-dimensional digital objects to fit a second character model based on the shape and size that the same three-dimensional digital objects have to fit a first character model; ¶0056-0059, at least disclose a function that models different types of transformations and an affinity between a particular vertex and the transformation. The transformations may include any of shear, rotation, scale, translation, or any other three-dimensional transformations […] The vertices closer to the character model's head may be more sensitive to some types of transformations, such as translation, but less sensitive to other types of transformations, such as scale. Thus, the affinity value takes into account an affinity of a vertex to a type of transformation by basing the affinity value, at least in part on the location of the vertex […] where ŷi is a particular output vertex value, Tk is one of k transformation matrices, Ak,xi is the affinity value which is dependent on the transformation type k and the coordinates of the input vertex xi. In an embodiment, the transformation matrices comprise (3×4) matrices defining one or more of translation, shear, rotation, or scaling, using known mathematical methods for defining coordinate transformations [base transform matrix] […] The system may generate an initial embedding for the vertices and a separate embedding for each transformation, thereby creating K+1 embeddings where K is the number of transformation matrices [base transform matrix]; ¶0063, at least discloses the model described above generate a one-to-one prediction of vertices for an output three-dimensional digital item fit to a second character model from vertices of an input three-dimensional digital item fit to a first character model. Thus, if multiple transformations are desired, such as in a case where a single item may need to be fit to a plurality of different character models, the system may initialize a plurality of machine learning systems and train the plurality of machine learning systems with different inputs or outputs);
training the machine learning system using the base transform matrix and machine-learning training data (¶0016, at least discloses The system then trains a machine learning system using the training dataset [machine-learning training data]. When the system receives data defining a new three-dimensional digital item fit to the first character model, the system computes output vertices using the trained machine learning system to generate a version of the new three-dimensional digital item fit to the second character model; Fig. 2 and ¶0037, at least discloses At step 206, a machine learning system is trained in containerized environment 110 using the matched vertices. For example, the containerized environment 110 may generate training datasets for one or more different machine learning systems from the matched vertex data. The training data may include, for each digital item, an input matrix and an output matrix. The input matrix may comprise coordinates for each vertex of a human-male-specific version of a digital item and the output matrix may comprise coordinates for each corresponding vertex of the female-dwarf-specific version of the same digital item. The locations of vertices in the input matrix may correspond to the locations of matched vertices in the output matrix. Thus, the first set of coordinates in the input matrix may be coordinates that were matched to the first set of coordinates in the output matrix in step 204; ¶0056-0060, at least discloses the function used for the regression model comprises a function that models different types of transformations and an affinity between a particular vertex and the transformation. The transformations may include any of shear, rotation, scale, translation, or any other three-dimensional transformations […] As a practical example, the regression model may be initialized according to:
[equation rendered as an image in the original: ŷi = Σk Ak,xi (Tk xi)]
where ŷi is a particular output vertex value, Tk is one of k transformation matrices, Ak,xi is the affinity value which is dependent on the transformation type k and the coordinates of the input vertex xi. In an embodiment, the transformation matrices comprise (3×4) matrices defining one or more of translation, shear, rotation, or scaling, using known mathematical methods for defining coordinate transformations […] the transformation matrices and affinity matrices are parameterized with weights using a machine learning system, such as a deep neural network which uses each full set of coordinates as inputs);
receiving, from a client computing device, input data defining a plurality of input vertices for an input three-dimensional digital object fit for the first character model (¶0017, at least discloses receiving, from a client computing device, particular input data defining a plurality of particular input vertices for a particular input three-dimensional digital object fit for the first character model; ¶0042, at least discloses The data may additionally include data that identifies the character model to which the new digital item is fit. For example, if the new digital item was originally designed as being fit to a female orc, the client computing device 130 may send, along with the data defining vertices of the new digital item, an indication that the item was fit to a female orc, thereby allowing the server computer 120 to select the correct machine learning systems for computing outputs; ¶0064, at least discloses the client computing device may send the new three-dimensional digital item to the server computer and/or data defining the vertices of the new three-dimensional digital item); and
generating, using the machine learning system, output data defining a plurality of output vertices for an output three-dimensional digital object for the second character model (¶0016-0017, at least disclose When the system receives data defining a new three-dimensional digital item fit to the first character model, the system computes output vertices using the trained machine learning system to generate a version of the new three-dimensional digital item fit to the second character model […] in response to receiving the particular input data, computing, using the machine learning system, particular output data defining a plurality of particular output vertices for a particular output three-dimensional digital object; wherein the particular output three-dimensional digital object is the particular input three-dimensional digital object fit for the second character model; ¶0037-0038, at least disclose The locations of vertices in the input matrix may correspond to the locations of matched vertices in the output matrix. Thus, the first set of coordinates in the input matrix may be coordinates that were matched to the first set of coordinates in the output matrix in step 204 […] the machine learning system comprises a linear regression or neural network model configured to compute an output matrix of vertices from an input matrix of vertices; ¶0043, at least discloses At step 214, the server computer 120 computes an output digital item for a second character model. For example, the server computer 120 may generate an input data set comprising coordinates of each vertex of the new digital item. The server computer 120 may then feed the input data set into the machine learning system to compute an output data set comprising coordinates of each vertex of the new digital item fit to the second character model; ¶0055, at least discloses for a particular model, each set of inputs may correspond to a same character model, such as the male human character model, while each set of outputs corresponds to a particular other character model, such as the female goblin character model. A regression model may be defined as:
[equation rendered as an image in the original: ŷ = f(x; w)]
where ŷ is the predicted output vertices and f(x; w) is a differentiable function of the input vertices, x, and a set of weights, w, which are trained using the training datasets).
Though Supancic does not directly disclose generating a base transform matrix, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply Supancic's transformation matrices, which comprise (3×4) matrices defining one or more of translation, shear, rotation, or scaling using known mathematical methods for defining coordinate transformations. The system may generate an initial embedding for the vertices and a separate embedding for each transformation, thereby creating K+1 embeddings, where K is the number of transformation matrices. After generating the cost matrix 304, the system may use a cost minimization algorithm to identify vertex matches which, in aggregate, minimize a total cost (or distance) between matched vertices, thereby generating, using the machine learning system, a base transform matrix corresponding to a first exemplary three-dimensional digital object fit for the first character model and a second exemplary three-dimensional digital object fit for the second character model.
Doing so would leverage existing data to fit new digital items to existing character models or fit existing digital items to new character models.
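Examiner's note (illustration only, not part of the rejection): the affinity-weighted transformation model quoted above from ¶0056-0059 can be sketched in a few lines. The Python/numpy rendering below, including all function and variable names, is the Examiner's assumption for purposes of illustration; Supancic discloses no source code.

import numpy as np

# Illustrative sketch only: K (3x4) transformation matrices applied to
# homogeneous input vertices and blended by per-vertex, per-transformation
# affinity values, i.e. y_hat_i = sum_k A[k, i] * (T[k] @ x_i).
def predict_vertices(X, T, A):
    """X: (N, 3) input vertices; T: (K, 3, 4) transform matrices;
    A: (K, N) affinity of each vertex to each transformation type."""
    N = X.shape[0]
    Xh = np.hstack([X, np.ones((N, 1))])      # homogeneous coordinates, (N, 4)
    per_k = np.einsum('kij,nj->kni', T, Xh)   # T[k] applied to every vertex, (K, N, 3)
    return np.einsum('kn,kni->ni', A, per_k)  # affinity-weighted sum over k, (N, 3)

# Toy usage: one identity transform and one uniform scale on four vertices.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
T = np.zeros((2, 3, 4))
T[0, :, :3] = np.eye(3)            # identity (no translation)
T[1, :, :3] = 1.5 * np.eye(3)      # uniform scaling
A = np.full((2, 4), 0.5)           # equal affinity to both transformation types
Y_hat = predict_vertices(X, T, A)  # (4, 3) predicted output vertices

The (3×4) matrices act on homogeneous coordinates, which is why a fourth component of 1 is appended to each input vertex before the transforms are applied.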
Regarding claim 2, Supancic discloses the method of claim 1, and further discloses wherein:
the base transform matrix (see Claim 1 rejection for detailed analysis) comprises a plurality of vertex distances (Fig. 3 and ¶0048, at least disclose Once the system has identified the vertices, the system computes distances between each set of vertices. Distance AA comprises the distance between vertex 1A and vertex 2A. Similarly, distance BB comprises the distance between vertex 1B and vertex 2B, distance AB comprises the distance between vertex 1A and vertex 2B, and distance BA comprises the distance between vertex 1B and vertex 2A. While FIG. 3 depicts only two vertices on each of the digital items, in an embodiment, the system computes distances between each vertex of the digital item fit to the first character model and each vertex of the digital item fit to the second character model), and
individual vertex distances of the plurality of vertex distances comprise a distance from a vertex of the first exemplary three-dimensional digital object fit for the first character model to a vertex of the second exemplary three-dimensional digital object fit for the second character model (Fig. 3 and ¶0048, at least disclose Once the system has identified the vertices, the system computes distances between each set of vertices. Distance AA comprises the distance between vertex 1A and vertex 2A. Similarly, distance BB comprises the distance between vertex 1B and vertex 2B, distance AB comprises the distance between vertex 1A and vertex 2B, and distance BA comprises the distance between vertex 1B and vertex 2A. While FIG. 3 depicts only two vertices on each of the digital items, in an embodiment, the system computes distances between each vertex of the digital item fit to the first character model and each vertex of the digital item fit to the second character model).
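Examiner's note (illustration only): the pairwise distances of Fig. 3 and ¶0048 form a cost matrix, and the "cost minimization algorithm" quoted in the claim 1 rationale can be illustrated with a standard assignment solver. The use of scipy's Hungarian solver below is the Examiner's assumption; Supancic does not name a particular algorithm.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Toy vertices of the same digital item fit to two different character models.
V1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
V2 = np.array([[0.1, 0.0, 0.0], [0.0, 1.1, 0.0], [0.9, 0.1, 0.0]])

cost = cdist(V1, V2)                      # all pairwise vertex distances (AA, AB, BA, BB, ...)
rows, cols = linear_sum_assignment(cost)  # matching that minimizes total distance
for i, j in zip(rows, cols):
    print(f"vertex {i} (first model) <-> vertex {j} (second model), d = {cost[i, j]:.3f}")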
Regarding claim 3, Supancic discloses the method of claim 1, and further discloses wherein the machine-learning training data comprises a plurality of input matrices corresponding to a plurality of three-dimensional digital objects fit for the first character model and a plurality of output matrices corresponding to the plurality of three-dimensional digital objects fit for the second character model (Fig. 2 and ¶0037, at least disclose The training data may include, for each digital item, an input matrix and an output matrix. The input matrix may comprise coordinates for each vertex of a human-male-specific version of a digital item and the output matrix may comprise coordinates for each corresponding vertex of the female-dwarf-specific version of the same digital item. The locations of vertices in the input matrix may correspond to the locations of matched vertices in the output matrix. Thus, the first set of coordinates in the input matrix may be coordinates that were matched to the first set of coordinates in the output matrix in step 204 […] the machine learning system comprises a linear regression or neural network model configured to compute an output matrix of vertices from an input matrix of vertices).
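Examiner's note (illustration only): the input-matrix/output-matrix training data of ¶0037 and the regression ŷ = f(x; w) of ¶0055 admit a minimal least-squares sketch. Taking f to be linear, and all data below being synthetic, are the Examiner's assumptions; ¶0038 equally contemplates a neural network.

import numpy as np

# Synthetic training data: each row is one digital item's flattened vertex
# coordinates (e.g., 10 vertices x 3 coordinates = 30 values per row).
rng = np.random.default_rng(1)
n_items, n_coords = 50, 30
W_true = rng.normal(size=(n_coords, n_coords))
X_train = rng.normal(size=(n_items, n_coords))   # items fit to the first character model
Y_train = X_train @ W_true                       # matched items fit to the second model

# Fit weights w so that y_hat = f(x; w) = x @ W, by least squares.
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

x_new = rng.normal(size=(1, n_coords))   # a new item fit to the first model
y_hat = x_new @ W                        # predicted vertices fit to the second model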
Regarding claim 4, Supancic discloses the method of claim 1, and further discloses wherein the second exemplary three-dimensional digital object fit for the second character model comprises a particular characteristic associated with the second character model, and the output three-dimensional digital object comprises the particular characteristic associated with the second character model (¶0017, at least discloses the particular output three-dimensional digital object is the particular input three-dimensional digital object fit for the second character model; and causing display, on the client computing device, of the particular output three-dimensional digital object combined with the second character model; ¶0021, at least discloses The digital items 102 a-102 m may be further defined with other information, such as data defining textures, colors, physical properties such as material, moveability, or environmental interactions, lighting, and/or any other characteristics).
Regarding claim 5, Supancic discloses the method of claim 1, and further discloses wherein:
the machine learning system is a first machine learning system (¶0017, at least discloses storing, at a server computer, a machine learning system [first machine learning system] configured to compute a shape and size of three-dimensional digital objects to fit a second character model based on the shape and size that the same three-dimensional digital objects have to fit a first character model), the base transform matrix is a first base transform matrix (¶0056-0059, at least disclose a function that models different types of transformations and an affinity between a particular vertex and the transformation. The transformations may include any of shear, rotation, scale, translation, or any other three-dimensional transformations […] The vertices closer to the character model's head may be more sensitive to some types of transformations, such as translation, but less sensitive to other types of transformations, such as scale. Thus, the affinity value takes into account an affinity of a vertex to a type of transformation by basing the affinity value, at least in part on the location of the vertex […] where ŷi is a particular output vertex value, Tk is one of k transformation matrices, Ak,xi is the affinity value which is dependent on the transformation type k and the coordinates of the input vertex xi. In an embodiment, the transformation matrices comprise (3×4) matrices defining one or more of translation, shear, rotation, or scaling, using known mathematical methods for defining coordinate transformations [first base transform matrix] […] The system may generate an initial embedding for the vertices and a separate embedding for each transformation, thereby creating K+1 embeddings where K is the number of transformation matrices [first base transform matrix]), the output data is first output data, the plurality of output vertices is a first plurality of output vertices (¶0017, at least discloses in response to receiving the particular input data, computing, using the machine learning system, particular output data defining a plurality of particular output vertices [first plurality of output vertices] for a particular output three-dimensional digital object), and the output three-dimensional digital object is a first output three-dimensional digital object (¶0017, at least discloses in response to receiving the particular input data, computing, using the machine learning system, particular output data defining a plurality of particular output vertices for a particular output three-dimensional digital object; wherein the particular output three-dimensional digital object is the particular input three-dimensional digital object fit for the second character model; and causing display, on the client computing device, of the particular output three-dimensional digital object combined with the second character model), the method further comprising:
storing, at the server computer, a second machine learning system configured to compute the shape and the size of the three-dimensional digital objects fit for the second character model based on the shape and the size of the three-dimensional digital objects fit for the first character model (Fig. 1 shows server computer 120 and machine learning systems 122 a-122n; ¶0017, at least discloses a method comprises storing, at a server computer, a machine learning system configured to compute a shape and size of three-dimensional digital objects to fit a second character model based on the shape and size that the same three-dimensional digital objects have to fit a first character model; ¶0021, at least discloses digital item data store 100 stores digital items 102 a-102 m. Each of digital items 102 a-102 m comprise data defining three-dimensional model-specific digital items. As an example, digital items 102 a-102 m may comprise three-dimensional cosmetic items designed to be fit to character models for three-dimensional rendering. The digital items 102 a-102 m may be defined based on a size and shape of the digital items, such as through vertices on a three-dimensional mesh; ¶0027, at least discloses Server computer 120 may store trained machine learning systems 122 a-122 n [second machine learning system] and graphical user interface instructions 124);
generating, by the second machine learning system, a second base transform matrix corresponding to the first exemplary three-dimensional digital object fit for the first character model and a third exemplary three-dimensional digital object fit for the second character model (Fig. 2 and ¶0011, at least disclose method of training and utilizing a machine learning system configured to compute a shape and size of three-dimensional digital objects to fit a second character model based on the shape and size that the same three-dimensional digital objects have to fit a first character model; ¶0040, at least discloses Machine learning systems for computing a shape and size of three-dimensional digital objects to fit a second character model based on the shape and size that the same three-dimensional digital objects have to fit a first character model; ¶0056-0059, at least disclose a function that models different types of transformations and an affinity between a particular vertex and the transformation. The transformations may include any of shear, rotation, scale, translation, or any other three-dimensional transformations […] The vertices closer to the character model's head may be more sensitive to some types of transformations, such as translation, but less sensitive to other types of transformations, such as scale. Thus, the affinity value takes into account an affinity of a vertex to a type of transformation by basing the affinity value, at least in part on the location of the vertex […] where ŷi is a particular output vertex value, Tk is one of k transformation matrices, Ak,xi is the affinity value which is dependent on the transformation type k and the coordinates of the input vertex xi. In an embodiment, the transformation matrices comprise (3×4) matrices defining one or more of translation, shear, rotation, or scaling, using known mathematical methods for defining coordinate transformations [base transform matrix] […] The system may generate an initial embedding for the vertices and a separate embedding for each transformation, thereby creating K+1 embeddings where K is the number of transformation matrices [second base transform matrix]; ¶0063, at least discloses the model described above generate a one-to-one prediction of vertices for an output three-dimensional digital item fit to a second character model from vertices of an input three-dimensional digital item fit to a first character model. Thus, if multiple transformations are desired, such as in a case where a single item may need to be fit to a plurality of different character models, the system may initialize a plurality of machine learning systems and train the plurality of machine learning systems with different inputs or outputs);
training the second machine learning system using the second base transform matrix and the machine learning training data (¶0016, at least discloses The system then trains a machine learning system using the training dataset [machine-learning training data]. When the system receives data defining a new three-dimensional digital item fit to the first character model, the system computes output vertices using the trained machine learning system to generate a version of the new three-dimensional digital item fit to the second character model; Fig. 2 and ¶0037, at least discloses At step 206, a machine learning system is trained in containerized environment 110 using the matched vertices. For example, the containerized environment 110 may generate training datasets for one or more different machine learning systems from the matched vertex data. The training data may include, for each digital item, an input matrix and an output matrix. The input matrix may comprise coordinates for each vertex of a human-male-specific version of a digital item and the output matrix may comprise coordinates for each corresponding vertex of the female-dwarf-specific version of the same digital item. The locations of vertices in the input matrix may correspond to the locations of matched vertices in the output matrix. Thus, the first set of coordinates in the input matrix may be coordinates that were matched to the first set of coordinates in the output matrix in step 204; ¶0056-0060, at least discloses the function used for the regression model comprises a function that models different types of transformations and an affinity between a particular vertex and the transformation. The transformations may include any of shear, rotation, scale, translation, or any other three-dimensional transformations […] As a practical example, the regression model may be initialized according to:
[equation rendered as an image in the original: ŷi = Σk Ak,xi (Tk xi)]
where ŷi is a particular output vertex value, Tk is one of k transformation matrices, Ak,xi is the affinity value which is dependent on the transformation type k and the coordinates of the input vertex xi. In an embodiment, the transformation matrices comprise (3×4) matrices defining one or more of translation, shear, rotation, or scaling, using known mathematical methods for defining coordinate transformations […] the transformation matrices and affinity matrices are parameterized with weights using a machine learning system, such as a deep neural network which uses each full set of coordinates as inputs); and
generating, by the second machine learning system, second output data defining a second plurality of output vertices for a second output three-dimensional digital object for the second character model (¶0016-0017, at least disclose When the system receives data defining a new three-dimensional digital item fit to the first character model, the system computes output vertices using the trained machine learning system to generate a version of the new three-dimensional digital item fit to the second character model […] in response to receiving the particular input data, computing, using the machine learning system, particular output data defining a plurality of particular output vertices for a particular output three-dimensional digital object; wherein the particular output three-dimensional digital object is the particular input three-dimensional digital object fit for the second character model; ¶0027, at least discloses Server computer 120 may store trained machine learning systems 122 a-122 n [second machine learning system] and graphical user interface instructions 124; ¶0037-0038, at least disclose The locations of vertices in the input matrix may correspond to the locations of matched vertices in the output matrix. Thus, the first set of coordinates in the input matrix may be coordinates that were matched to the first set of coordinates in the output matrix in step 204 […] the machine learning system comprises a linear regression or neural network model configured to compute an output matrix of vertices from an input matrix of vertices; ¶0043, at least discloses At step 214, the server computer 120 computes an output digital item for a second character model. For example, the server computer 120 may generate an input data set comprising coordinates of each vertex of the new digital item. The server computer 120 may then feed the input data set into the machine learning system to compute an output data set comprising coordinates of each vertex of the new digital item fit to the second character model; ¶0055, at least discloses for a particular model, each set of inputs may correspond to a same character model, such as the male human character model, while each set of outputs corresponds to a particular other character model, such as the female goblin character model. A regression model may be defined as:
[equation rendered as an image in the original: ŷ = f(x; w)]
where ŷ is the predicted output vertices and f(x; w) is a differentiable function of the input vertices, x, and a set of weights, w, which are trained using the training datasets).
Regarding claim 6, Supancic discloses the method of claim 5, and discloses the method further comprising:
causing display, on the client computing device, the first output three-dimensional digital object overlaid with the second character model and the second output three-dimensional digital object overlaid with the second character model (¶0070, at least discloses the server computer may determine whether portions of the character model overlap with portions of the digital item, thereby causing the clipping depicted in outputs 1 and 4. If the server computer identifies clipping in an output, the server computer may remove the output from those provided to the client computing device. Thus, in an embodiment, the server computer may display only outputs 2 and 3 of FIG. 4 to the client computing device. Thus, the server computer may generate a larger number of outputs, but display only the outputs that meet particular criteria, thereby improving the visual interface provided to the client computing device); and
receiving, from the client computing device, a selection of one of the first output three-dimensional digital object or the second output three-dimensional digital object (¶0070, at least discloses the server computer may determine whether portions of the character model overlap with portions of the digital item, thereby causing the clipping depicted in outputs 1 and 4. If the server computer identifies clipping in an output, the server computer may remove the output from those provided to the client computing device. Thus, in an embodiment, the server computer may display only outputs 2 and 3 of FIG. 4 to the client computing device. Thus, the server computer may generate a larger number of outputs, but display only the outputs that meet particular criteria, thereby improving the visual interface provided to the client computing device).
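Examiner's note (illustration only): the clipping check of ¶0070, removing outputs where the character model overlaps the digital item, can be sketched with any point-in-mesh test. The trimesh library, the sphere stand-ins, and the inside-vertex criterion below are the Examiner's assumptions; Supancic does not specify how overlap is detected.

import trimesh  # assumed available; any point-in-mesh test would serve

# Stand-in geometry: a character model and a candidate output digital item.
character = trimesh.creation.icosphere(radius=1.0)
item = trimesh.creation.icosphere(radius=1.05)
item.apply_translation([0.2, 0.0, 0.0])  # offset so one side pokes into the character

# Flag clipping when any vertex of the item lies inside the character model,
# then display only the outputs that pass the check.
outputs = {"output 1": item}
display = [name for name, mesh in outputs.items()
           if not character.contains(mesh.vertices).any()]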
8. Claims 15-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Supancic, III, ("Supancic") [US-2022/0207843-A1] in view of Nguyen Canh et al., ("Nguyen") [US-2024/0273771-A1], further in view of Nguyen Canh et al., ("Nguyen_7173") [US-2024/0087173-A1].
Regarding claim 15, Supancic discloses the method of claim 1, and discloses the method further comprising:
decomposing a mesh of the output three-dimensional digital object (Supancic- ¶0021, at least discloses The digital items 102 a-102 m may be defined based on a size and shape of the digital items, such as through vertices on a three-dimensional mesh; ¶0046, at least discloses The vertices may be extracted from data defining the digital item, such as from a mesh that serves as the backbone for the digital item [decomposing a mesh of the output three-dimensional digital object]; ¶0066, at least discloses the server computer additionally recreates a digital mesh using the vertices);
Supancic does not explicitly disclose, but Nguyen discloses
decomposing a mesh of the output three-dimensional digital object into a first sub-mesh and a second sub-mesh (Nguyen- Fig. 4 shows symmetry plane on a mesh object; ¶0006, at least discloses partitioning the mesh via a global symmetry plane that partitions the mesh into a first side and a second side, that global symmetry plane perpendicular to the first bounding plane and the second bounding plane; ¶0043, at least discloses the mesh is separated into multiple sub-meshes for each slice. For example, each slice si corresponds to a respective sub-mesh. As illustrated in FIG. 5(C), a mesh is divided into at least slice s1 and slice s2 with local symmetry planes p1 and p2, respectively. The slices s1 and slice s2 and may each correspond to two different sub-meshes; Figs. 6A-6C and ¶0047, at least disclose two symmetry planes (p1, p2) are intersected at line l in cutting plane c1. The left vertices in plane ci in both sub-meshes are the same. That is, the vertices in two different slices that share a boundary may be the same along the boundary; ¶0051, at least discloses an index of left vertices may be used to find a corresponding pair in the right vertices of multiple sub-meshes);
identifying one or more vertices from the first sub-mesh and one or more vertices of the second sub-mesh that are symmetrical (Nguyen- ¶0004, at least discloses Vertices are divided into a left and right part of a symmetry plane; ¶0037, at least discloses a mesh is assumed to be fully or partially symmetric in geometry. In one or more examples, symmetry coding is assumed to use half of a full mesh (e.g., left side of mesh) to predict the remainder of the mesh (e.g., right side of mesh); ¶0095, at least discloses merging the one or more vertices in the boundary of the second slice with one or more vertices in the boundary of the first slice further comprises: merging a first vertex located on the first side in the boundary of the second slice with a second vertex located on the first side in the boundary of the first slice, and merging a third vertex located on the second side in the boundary of the second slice with a fourth vertex located on the second side in the boundary of the first slice, the third vertex corresponding to a predicted vertex that is symmetric to the first vertex, the fourth vertex corresponding to a predicted vertex that is symmetric to the second vertex.);
determining a symmetry plane for the first sub-mesh and the second sub-mesh based at least in part on the one or more vertices of the first sub-mesh and the one or more vertices of the second sub-mesh (Nguyen- ¶0004, at least discloses Vertices are divided into a left and right part of a symmetry plane; ¶0006, at least discloses partitioning the mesh via a global symmetry plane that partitions the mesh into a first side and a second side; Fig, 5A and ¶0042-0043, at least disclose A symmetry plane may be a plane that divides a mesh object into a first side (e.g., left side) and a second side opposite to the left side (e.g., right side). A global symmetry plane may divide an entire mesh object as illustrated in FIG. 5(A) […] the mesh is separated into multiple sub-meshes for each slice. For example, each slice si corresponds to a respective sub-mesh. As illustrated in FIG. 5(C), a mesh is divided into at least slice s1 and slice s2 with local symmetry planes p1 and p2, respectively. The slices s1 and slice s2 and may each correspond to two different sub-meshes; ¶0051, at least discloses an index of left vertices may be used to find a corresponding pair in the right vertices of multiple sub-meshes. For example, since symmetry prediction is performed to predict right mesh from left mesh
[vertex notation rendered as an image in the original]
, the corresponding predicted right vertices are
[vertex notation rendered as an image in the original]
, respectively; Figs. 5B, 8 and ¶0065, at least disclose a mesh is partitioned via a global symmetry plane that partitions the mesh into a first side and a second side. Referring to FIG. 5(B), the global symmetry plane may be plane p, which is perpendicular to planes b1, b2);
enforcing a symmetry between the first sub-mesh and the second sub-mesh based at least in part on the symmetry plane (Nguyen- ¶0037, at least discloses a mesh is assumed to be fully or partially symmetric in geometry […] symmetry coding is assumed to use half of a full mesh (e.g., left side of mesh) to predict the remainder of the mesh (e.g., right side of mesh); Fig, 5A and ¶0042-0043, at least disclose A symmetry plane may be a plane that divides a mesh object into a first side (e.g., left side) and a second side opposite to the left side (e.g., right side). A global symmetry plane may divide an entire mesh object as illustrated in FIG. 5(A) […] the mesh is separated into multiple sub-meshes for each slice. For example, each slice si corresponds to a respective sub-mesh. As illustrated in FIG. 5(C), a mesh is divided into at least slice s1 and slice s2 with local symmetry planes p1 and p2, respectively. The slices s1 and slice s2 and may each correspond to two different sub-meshes; ¶0051, at least discloses an index of left vertices may be used to find a corresponding pair in the right vertices of multiple sub-meshes. For example, since symmetry prediction is performed to predict right mesh from left mesh
[vertex notation rendered as an image in the original]
, the corresponding predicted right vertices are
[vertex notation rendered as an image in the original]
, respectively; Figs. 5B, 8 and ¶0065, at least disclose a mesh is partitioned via a global symmetry plane that partitions the mesh into a first side and a second side. Referring to FIG. 5(B), the global symmetry plane may be plane p, which is perpendicular to planes b1, b2)); and
recombining the first sub-mesh and the second sub-mesh (Nguyen- at least discloses joint sub-meshes at the decoder side).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Supancic to incorporate the teachings of Nguyen, and apply Nguyen's symmetry plane for the first sub-mesh and the second sub-mesh into Supancic's teachings for decomposing a mesh of the output three-dimensional digital object into a first sub-mesh and a second sub-mesh; identifying one or more vertices from the first sub-mesh and one or more vertices of the second sub-mesh that are symmetrical; determining a symmetry plane for the first sub-mesh and the second sub-mesh based at least in part on the one or more vertices of the first sub-mesh and the one or more vertices of the second sub-mesh; and enforcing a symmetry between the first sub-mesh and the second sub-mesh based at least in part on the symmetry plane.
Doing so would refine the base mesh to minimize the displacement.
The prior art does not explicitly disclose, but Nguyen_7173 discloses
recombining the first sub-mesh and the second sub-mesh into a symmetrized mesh of the output three-dimensional digital object (Nguyen_7173- ¶0043, at least discloses A symmetrize process 404 may be used to symmetrize the initial mesh to utilize symmetry property […] Symmetry partitioning 406 may be performed by dividing the base mesh to left and right parts; ¶0048-0049, at least disclose a base mesh is symmetrized to become a perfect symmetry base mesh mean. For example, half of the base mesh may be predicted via a given symmetry plane with zero displacement […] reconstruction of the left original mesh is used together with the reconstructed right base mesh to predict the right vertices; Fig. 6 and ¶0057-0058, at least disclose operation S1002 where a first side of the base mesh is reconstructed. For example, referring to FIG. 6 , the left side base mesh vertices on side 600A may be reconstructed. The process proceeds to operation S1004 where a second side of the base mesh is reconstructed. For example, referring to FIG. 6 , the right side base mesh vertices may derived based on the left side base mesh vertices based on the symmetry between the left side base mesh vertices and the right side base mesh vertices. The process proceeds to operation S1006 where the original vertices of the polygon mesh are reconstructed. For example, referring to FIG. 6 , the right side original vertices may be reconstructed based on each right side base mesh vertex and a corresponding displacement included in the bitstream. Similarly, the left side original vertices may be reconstructed based on each left side base mesh vertex and a corresponding displacement included in the bitstream. The process proceeds to operation S1008 where the polygon mesh is reconstructed. For example, referring to FIG. 6 , after the left original vertices and right original vertices are reconstructed, the polygon mesh 600 is reconstructed.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Supancic/Nguyen to incorporate the teachings of Nguyen_7173, and apply Nguyen_7173's symmetrized base mesh into Supancic/Nguyen's teachings for recombining the first sub-mesh and the second sub-mesh into a symmetrized mesh of the output three-dimensional digital object.
Doing so would provide surface reflection symmetry for efficient mesh compression.
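Examiner's note (illustration only): one conventional way to carry out the mapped steps, estimating a symmetry plane from paired sub-mesh vertices (a midpoint plus a normal) and reflecting across it to enforce symmetry, is sketched below. The computations and names are the Examiner's assumptions, not code from Nguyen or Nguyen_7173.

import numpy as np

def fit_symmetry_plane(left, right):
    """left, right: (N, 3) corresponding vertices of the two sub-meshes.
    Returns (point_on_plane, unit_normal) estimated from the vertex pairs."""
    midpoints = (left + right) / 2.0   # the plane should pass through these
    n = (right - left).mean(axis=0)    # mirror pairs differ along the plane normal
    n /= np.linalg.norm(n)
    return midpoints.mean(axis=0), n

def reflect(points, point_on_plane, n):
    """Reflect points across the plane, enforcing symmetry between sub-meshes."""
    d = (points - point_on_plane) @ n  # signed distances to the plane
    return points - 2.0 * d[:, None] * n

# Toy usage: a left sub-mesh and a slightly noisy mirror image about x = 0.
left = np.array([[-1.0, 0.0, 0.0], [-2.0, 1.0, 0.5]])
right = np.array([[1.02, 0.0, 0.0], [1.97, 1.0, 0.5]])
p0, n = fit_symmetry_plane(left, right)
right_symmetrized = reflect(left, p0, n)  # exact mirror of the left sub-mesh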
Regarding claim 16, Supancic in view of Nguyen and Nguyen_7173 discloses the method of claim 15, and discloses the method further comprising:
identifying a first portion of the output three-dimensional digital object corresponding to the first sub-mesh and a second portion of the three-dimensional digital object corresponding to the second sub-mesh based at least in part on a UV map corresponding to a texture of the output three-dimensional digital object and the mesh of the output three-dimensional digital object (Nguyen- Fig. 5(A) shows a left portion [first portion] of the output three-dimensional digital object corresponding to the left sub-mesh [first sub-mesh] and a right portion [second portion] of the three-dimensional digital object corresponding to the right sub-mesh [second sub-mesh]; ¶0004, at least discloses Symmetry was utilized to compress symmetry mesh. Vertices are divided into a left and right part of a symmetry plane. The left part is encoded by mesh coding while the right part is encoded by a symmetry prediction and displacement coding. Even though the texture coordinate (or UV attribute) also has a certain level of symmetry, the texture coordinate may exhibit different symmetrical properties in transition and rotation), and
wherein the first portion of the output three-dimensional digital object and the second portion of the output three-dimensional digital object are symmetrical (Nguyen- Fig. 5(A) shows a mesh divided into symmetrical left and right portions; ¶0037, at least discloses a mesh is assumed to be fully or partially symmetric in geometry. In one or more examples, symmetry coding is assumed to use half of a full mesh (e.g., left side of mesh) to predict the remainder of the mesh (e.g., right side of mesh); Nguyen_7173- ¶0008, at least discloses performing a symmetrize process on the initial base mesh to generate a symmetrical base mesh that includes a first side having the first set of base mesh vertices and a second side having a second set of base mesh vertices, each base mesh vertex in the first set of base mesh vertices having a corresponding symmetric vertex in the second set of base mesh vertices).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Supancic to incorporate the teachings of Nguyen and Nguyen_7173, and apply the symmetrical base mesh into Supancic's teachings so that the first portion of the output three-dimensional digital object and the second portion of the output three-dimensional digital object are symmetrical.
The same motivation that was utilized in the rejection of claim 15 applies equally to this claim.
Regarding claim 17, Supancic in view of Nguyen and Nguyen_7173 discloses the method of claim 16, and further discloses wherein identifying the first portion of the output three-dimensional digital object and the second portion of the output three-dimensional digital object based at least in part on the UV map (see Claim 16 rejection for detailed analysis) further includes:
identifying a first portion of the texture of the output three-dimensional digital object (see Claim 16 rejection for detailed analysis) that comprises one or more particular characteristics (Supancic- ¶0021, at least discloses The digital items 102 a-102 m may be further defined with other information, such as data defining textures, colors, physical properties such as material, moveability, or environmental interactions, lighting, and/or any other characteristics.); and
identifying a second portion of the texture of the output three-dimensional digital object (see Claim 16 rejection for detailed analysis) that comprises the one or more particular characteristics (Supancic- ¶0021, at least discloses The digital items 102 a-102 m may be further defined with other information, such as data defining textures, colors, physical properties such as material, moveability, or environmental interactions, lighting, and/or any other characteristics.).
Regarding claim 18, Supancic in view of Nguyen and Nguyen_7173 discloses the method of claim 15, and further discloses wherein the symmetry plane comprises an XY plane, an XZ plane, or a YZ plane (Nguyen- Figs. 6A-6C show the symmetry plane comprising an XY plane).
Regarding claim 20, Supancic in view of Nguyen and Nguyen_7173 discloses the method of claim 15, and discloses the method further comprising:
identifying one or more duplicate symmetry planes based at least in part on a normal vector to the symmetry plane (Nguyen- ¶0045, at least discloses a slice merging process may be performed after a mesh is divided into multiple slices. In one or more examples, two or more slices are merged if the difference between the local symmetry planes for this slices is minimal (e.g., less than a threshold). In one or more examples, the condition for determining whether to merge two or more slices may specify that if an angle between the two symmetry planes of two respective slices is smaller than a given threshold τp, then the two slices are grouped or merged to one. As illustrated in FIG. 5(C), the last 2 slices may be merged since the symmetry plane is identical to the global symmetry plane p3=p; Nguyen_7173- ¶0051, at least discloses the input mesh is near symmetry and complete. An example of this type of input mesh includes an example of a one-to-one mapping for each vertex via the symmetry plane, where the mapping also lies in the normal direction of the symmetry plane).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Supancic to incorporate the teachings of Nguyen and Nguyen_7173, and apply Nguyen's comparison of two symmetry planes together with Nguyen_7173's mapping in the normal direction of the symmetry plane into Supancic's teachings for identifying one or more duplicate symmetry planes based at least in part on a normal vector to the symmetry plane.
The same motivation that was utilized in the rejection of claim 15 applies equally to this claim.
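Examiner's note (illustration only): the angle-threshold comparison of plane normals quoted from Nguyen ¶0045 can be sketched as below. The threshold value and helper names are the Examiner's assumptions.

import numpy as np

def near_duplicate_planes(normals, tau_deg=5.0):
    """normals: (M, 3) unit normals of candidate symmetry planes. Returns index
    pairs whose planes differ by less than tau_deg degrees (merge candidates)."""
    pairs = []
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            cos = abs(float(np.dot(normals[i], normals[j])))  # n and -n: same plane
            angle = np.degrees(np.arccos(min(cos, 1.0)))
            if angle < tau_deg:
                pairs.append((i, j))
    return pairs

normals = np.array([[1.0, 0.0, 0.0], [0.999, 0.04, 0.0], [0.0, 1.0, 0.0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(near_duplicate_planes(normals))  # [(0, 1)]: nearly identical planes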
Allowable Subject Matter
9. Claims 7-14 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
10. The following is a statement of reasons for the indication of allowable subject matter:
Regarding Claim 7, the combination of prior art teaches the method of Claim 1.
However, in the context of claims 1, 5, and 7 as a whole, the combination of prior art does not teach the first base transform matrix is excluded from generating the second output data, and the second base transform matrix is excluded from generating the first output data. Therefore, Claim 7 in the context of claims 1 and 5 as a whole comprises allowable subject matter.
Regarding Claim 8, the combination of prior art teaches the method of Claim 1.
However, in the context of claims 1 and 8 as a whole, the combination of prior art does not teach separating the machine-learning training data into a first cluster corresponding to the first base transform matrix and a second cluster corresponding to a second base transform matrix; applying a first transformation corresponding to the first base transform matrix to a plurality of vertices associated with the output three-dimensional digital object; applying a second transformation corresponding to the second base transform matrix to the plurality of vertices associated with the output three-dimensional digital object; comparing a result of the first transformation and a result of the second transformation with a ground truth; and selecting the first base transform based at least in part on a similarity of the result of the first transformation with the ground truth, wherein the machine learning system is trained using data from the first cluster corresponding to the first base transform matrix. Therefore, Claim 8 in the context of claim 1 as a whole comprises allowable subject matter.
Regarding Claim 9, the combination of prior art teaches the method of Claim 1.
However, in the context of claims 1 and 9 as a whole, the combination of prior art does not teach identifying a first clipped vertex from a plurality of vertices of the output three-dimensional digital object; identifying one or more neighboring vertices of the first clipped vertex from the plurality of vertices of the output three-dimensional digital object; repositioning the first clipped vertex to a nearest point on a surface of the convex hull; and repositioning individual neighboring vertices of the one or more neighboring vertices based at least in part on the repositioning of the first clipped vertex. Therefore, Claim 9 in the context of claim 1 as a whole comprises allowable subject matter.
The dependent claims 10-14 depend directly or indirectly from Claim 9, and therefore also contain allowable subject matter.
Regarding Claim 19, the combination of prior art teaches the method of Claim 15.
However, in the context of claims 1, 15, and 19 as a whole, the combination of prior art does not teach identifying a normal vector to the symmetry plane that aligns with the axis of symmetry; identifying a midpoint between the one or more vertices of the first sub-mesh and the one or more vertices of the second sub-mesh; and identifying the symmetry plane based at least in part on the normal vector and the midpoint. Therefore, Claim 19 in the context of claims 1 and 15 as a whole comprises allowable subject matter.
Conclusion
11. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. These references are cited on the attached PTO-892 form.
12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE whose telephone number is (571)272-5330. The examiner can normally be reached 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL LE/Primary Examiner, Art Unit 2614