Prosecution Insights
Last updated: April 19, 2026
Application No. 18/813,849

ON COMPRESSION OF A MESH WITH MULTIPLE TEXTURE MAPS

Non-Final OA §103
Filed: Aug 23, 2024
Examiner: GE, JIN
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: Tencent America LLC
OA Round: 1 (Non-Final)
Grant Probability: 80% (Favorable)
OA Rounds: 1-2
To Grant: 2y 9m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80%, above average (416 granted / 520 resolved; +18.0% vs TC avg)
Interview Lift: strong, +18.0% among resolved cases with interview
Avg Prosecution: 2y 9m typical timeline; 38 applications currently pending
Total Applications (career history): 558 across all art units

Statute-Specific Performance

§101: 9.0% (-31.0% vs TC avg)
§103: 60.2% (+20.2% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 520 resolved cases

Office Action

§103
DETAILED ACTION Claims 1-20 are pending in the present application. Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Information Disclosure Statement The information disclosure statement (IDS) submitted on 11/20/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-4, 6-11, 13-18, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PGPubs 2024/0233192 to Cao in view of U.S. PGPubs 2024/0153150 to Kim et al. Regarding claim 1, Cao teaches a method for compression of mesh sequences with multiple texture maps per frame (par 0027, par 0032-0036, “Volumetric visual data may by in the form of a volumetric frame that describes an object or scene captured at a particular time instance or in the form of a sequence of volumetric frames (referred to as a volumetric sequence or volumetric video) that describes an object or scene captured at multiple different time instances …. Encoding may be used to compress the size of a mesh frame or sequence to provide for more efficient storage and/or transmission”, par 0038-0040, “As shown in FIG. 1, a mesh sequence 108 may comprise a series of mesh frames 124. A mesh frame describes an object or scene captured at a particular time instance”), the method being executed by at least one processor (par 0039, “a mesh feed interface to receive captured natural scenes and/or synthetically generated scenes from a mesh content provider, and/or a processor to generate synthetic mesh scenes”), the method comprising: receiving an input dynamic mesh representing a volumetric data of at least one three-dimensional (3D) visual content, wherein the input dynamic mesh comprises a plurality of mesh frames (Fig. 1, par 0038-0041, “As shown in FIG. 1, a mesh sequence 108 may comprise a series of mesh frames 124. A mesh frame describes an object or scene captured at a particular time instance. Mesh sequence 108 may achieve the impression of motion when a constant or variable time is used to successively present mesh frames 124 of mesh sequence 108. A (3D) mesh frame comprises a collection of vertices 126 in 3D space and geometry information of vertices 126. A 3D mesh may comprise a collection of vertices, edges, and faces that define the shape of a polyhedral object ….. a 3D mesh (e.g., one of mesh frames 124) may be a static or a dynamic mesh. 
In some examples, the 3D mesh may be represented (e.g., defined) by connectivity information, geometry information, and texture information (e.g., texture coordinates and texture connectivity)”); determining that a mesh frame among the plurality of mesh frames comprises a plurality of texture maps in response to a mesh file associated with the mesh frame indicating that at least two different materials are applied in the mesh frame (par 0033, “One or more types of attribute information may be stored for each face (of a triangle). Attribute information may indicate a property of a face's visual appearance. For example, attribute information may indicate a texture (e.g., color) of the face, a material type of the face, transparency information of the face, reflectance information of the face, a normal vector to a surface of the face, a velocity at the face, an acceleration at the face, a time stamp indicating when the face (and/or vertex) was captured, or a modality indicating how the face (and/or vertex) was captured (e.g., running, walking, or flying). In another example, a face (or vertex) may comprise light field data in the form of multiple view-dependent texture information. Light field data may be another type of optional attribute information”, par 0040, “One or more of the triangles may further comprise one or more types of attribute information. Attribute information may indicate a property of a point's visual appearance. For example, attribute information may indicate a texture (e.g., color) of a face, a material type of a face, transparency information of a face, reflectance information of a face, a normal vector to a surface of a face, a velocity at a face, an acceleration at a face, a time stamp indicating when a face was captured, a modality indicating when a face was captured (e.g., running, walking, or flying). In another example, one or more of the faces (or triangles) may comprise light field data in the form of multiple view-dependent texture information. Light field data may be another type of optional attribute information. Color attribute information of one or more of the faces may comprise a luminance value and two chrominance values. The luminance value may represent the brightness (or luma component, Y) of the point. The chrominance values may respectively represent the blue and red components of the point (or chroma components, Cb and Cr) separate from the brightness. Other color attribute values are possible based on different color schemes (e.g., an RGB or monochrome color scheme)”, par 0047, “Decoder 120 may decode mesh sequence 108 from encoded bitstream 110. To decode attribute information (e.g., textures) of mesh sequence 108, decoder 120 may reconstruct the 2D images compressed using one or more 2D video encoders. Decoder 120 may then reconstruct the attribute information of 3D mesh frames 124 from the reconstructed 2D images”, par 0054, “Attribute information 262 (e.g., color, texture, etc.) of the mesh frame may be encoded separately from the geometry information of the mesh frame described above. In some examples, attribute information 262 of the mesh frame may be represented (e.g., stored) by an attribute map (e.g., texture map or materials information) that associates each vertex of the mesh frame with corresponding attributes information of that vertex. 
Attribute transfer 232 may re-parameterize attribute information 262 in the attribute map based on reconstructed mesh determined (e.g., generated or output) from mesh reconstruction components 225”); encoding material indices associated with the mesh frame of the input dynamic mesh (par 0043, “Redundant information is information that may be predicted at a decoder and therefore may not be needed to be transmitted to the decoder for accurate decoding of mesh sequence 108. For example, encoder 114 may convert attribute information (e.g., texture information) of one or more of mesh frames 124 from 3D to 2D and then apply one or more 2D video encoders or encoding methods to the 2D images”, par 0070, “decoder 300 includes video decoder 304 that decodes attribute bitstream 336 comprising encoded attribute information represented (e.g., stored) in 2D images (or picture frames) to determined attribute information 344 (e.g., decoded attribute information or reconstructed attribute information)”). But Cao keeps silent for teaching determining a material index associated with each triangle face in the mesh frame, wherein a respective material index indicates a texture to be applied to a respective triangle face. In related endeavor, Kim et al. teach determining a material index associated with each triangle face in the mesh frame, wherein a respective material index indicates a texture to be applied to a respective triangle face (par 0009, “texture coordinates for vertices of a reconstructed mesh may be used to identify the respective vertices. These texture coordinates may then be mapped to attributes that are to be associated with the respective vertices represented by the texture coordinates. For example, attributes, such as surface normals, may be communicated as an indexed list and respective texture coordinates may be mapped to index entries for the index of surface normals. As another example, attribute information (such as colors) may be communicated using 2D video image frames (or portions thereof) and the texture coordinates may be mapped to two-dimensional pixel coordinates (e.g. U,V) in a 2D video image frame, wherein the 2D pixel coordinates are associated with a given vertex of the reconstructed mesh. In situations, wherein the range of possible texture coordinates and the range of possible attributes are the same, a more direct mapping may be used. However, in situations wherein a resolution of texture coordinates is different than a resolution of attributes, then adjustments may need to be made to account for differences in resolution. For example, if a texture coordinate resolution allows for more texture coordinates than are allowed for in an attribute resolution, some attribute values may be mapped to more than one texture coordinate, or an attribute value may need to be generated, for example using interpolation, to provide an attribute value for a given texture coordinate for which there is not a matching attribute. As discussed herein, various methods may be used to account for a difference in resolution between texture coordinates and attributes. Also, the respective resolutions needed to compute these adjustments may be signaled in a variety of manners, such as in the atlas data sub-bitstream, the base mesh sub-bitstream, etc. The atlas data sub-bitstream includes patch data units and information indicating how attribute patches, for example in the video sub-bitstream, map to sub-meshes, for example as may be reconstructed from the base mesh and displacements. 
Also, the texture coordinate resolutions may be signaled as a texture coordinate bit-depth, or a horizonal and vertical height for the texture coordinates may be signaled as separate values”, Fig 2, par 0033, “the example texture mesh stored in the object format shown in FIG. 2 includes geometry information listed as X, Y, and Z coordinates of vertices and texture coordinates listed as two dimensional (2D) coordinates for vertices, wherein the 2D coordinates identify a pixel location of a pixel storing texture information for a given vertex. The example texture mesh stored in the object format also includes texture connectivity information that indicates mappings between the geometry coordinates and texture coordinates to form polygons, such as triangles. For example, a first triangle is formed by three vertices, where a first vertex (1/1) is defined as the first geometry coordinate (e.g., 64.062500, 1237.739990, 51.757801), which corresponds with the first texture coordinate (e.g., 0.0897381, 0.740830). The second vertex (2/2) of the triangle is defined as the second geometry coordinate (e.g., 59.570301, 1236.819946, 54.899700), which corresponds with the second texture coordinate (e.g., 0.899059, 0.741542). Finally, the third vertex of the triangle corresponds to the third listed geometry coordinate which matches with the third listed texture coordinate. However, note that in some instances a vertex of a polygon, such as a triangle may map to a set of geometry coordinates and texture coordinates that may have different index positions in the respective lists of geometry coordinates and texture coordinates. For example, the second triangle has a first vertex corresponding to the fourth listed set of geometry coordinates and the seventh listed set of texture coordinates. A second vertex corresponding to the first listed set of geometry coordinates and the first set of listed texture coordinates and a third vertex corresponding to the third listed set of geometry coordinates and the ninth listed set of texture coordinates.”, par 0139), encoding material indices associated with the mesh frame of the input dynamic mesh (par 0047, “Attribute transfer module 430 compares the geometry of the original static/dynamic mesh M(i) to the reconstructed deformed mesh DM(i) and updates the attribute map to account for any geometric deformations, this updated attribute map is output as updated attribute map A′(i). The updated attribute map A′(i) is then padded, wherein a 2D image comprising the attribute images is padded such that spaces not used to communicate the attribute images have a padding applied ….. The updated attribute map A′(i) that has been padded and optionally color space converted is then video encoded via video encoding module 436 and is provided to multiplexer 438 for inclusion in compressed bitstream b(i). “, par 0114-0119, “Encode stitching information indicating the mapping between duplicated vertices as follows: [0116] Encode per vertex tags identifying duplicated vertices (by encoding a vertex attribute with the mesh codec) [0117] Encode for each duplicated vertex the index of the vertex it should be merged with. [0118] Make sure that the decoded positions and vertex attributes associated with the duplicated vertices exactly match [0119] Per vertex tags identifying duplicated vertices and code this information as a vertex attribute by using the mesh codec “, par 0139). 
It would have been obvious to a person of ordinary skill in the art at the time before the effective filing date of the claimed invention to modify Cao to include determining a material index associated with each triangle face in the mesh frame, wherein a respective material index indicates a texture to be applied to a respective triangle face, as taught by Kim et al., to use different attributes for different sub-meshes of a mesh to directly generate the three-dimensional mesh and apply texture or attribute values to represent an object, and to additionally or alternatively encode three-degree of freedom plus (3DOF+) scenes and visual volumetric content.

Regarding claim 2, Cao as modified by Kim et al. teaches all the limitation of claim 1, and Kim et al. further teach wherein the material index is determined in accordance with an order in which each material of the at least two different materials appears in the mesh file (Fig 2, par 0033, "the example texture mesh stored in the object format shown in FIG. 2 includes geometry information listed as X, Y, and Z coordinates of vertices and texture coordinates listed as two dimensional (2D) coordinates for vertices, wherein the 2D coordinates identify a pixel location of a pixel storing texture information for a given vertex. The example texture mesh stored in the object format also includes texture connectivity information that indicates mappings between the geometry coordinates and texture coordinates to form polygons, such as triangles. For example, a first triangle is formed by three vertices, where a first vertex (1/1) is defined as the first geometry coordinate (e.g., 64.062500, 1237.739990, 51.757801), which corresponds with the first texture coordinate (e.g., 0.0897381, 0.740830). The second vertex (2/2) of the triangle is defined as the second geometry coordinate (e.g., 59.570301, 1236.819946, 54.899700), which corresponds with the second texture coordinate (e.g., 0.899059, 0.741542)").

Regarding claim 3, Cao as modified by Kim et al. teaches all the limitation of claim 1, and Kim et al. further teach wherein the material index is determined arbitrarily, and wherein each material among the at least two different materials has a different material index (Fig 2, par 0033, "the example texture mesh stored in the object format shown in FIG. 2 includes geometry information listed as X, Y, and Z coordinates of vertices and texture coordinates listed as two dimensional (2D) coordinates for vertices, wherein the 2D coordinates identify a pixel location of a pixel storing texture information for a given vertex. The example texture mesh stored in the object format also includes texture connectivity information that indicates mappings between the geometry coordinates and texture coordinates to form polygons, such as triangles. For example, a first triangle is formed by three vertices, where a first vertex (1/1) is defined as the first geometry coordinate (e.g., 64.062500, 1237.739990, 51.757801), which corresponds with the first texture coordinate (e.g., 0.0897381, 0.740830). The second vertex (2/2) of the triangle is defined as the second geometry coordinate (e.g., 59.570301, 1236.819946, 54.899700), which corresponds with the second texture coordinate (e.g., 0.899059, 0.741542)").

Regarding claim 4, Cao as modified by Kim et al.
teaches all the limitation of claim 1, and further teach wherein the encoding of the material indices comprises: encoding the material indices associated with the mesh frame in a traversal order used for compression of texture coordinates associated with the mesh frame (Cao: par 0095-0096, “Once all the displacements 700 are packed, the empty pixels in image 720 may be padded with neighboring pixel values for improved compression. In the example shown in FIG. 7A, packing order 722 for blocks may be a raster order and a packing order 732 for displacements within packing block 730 may be a Z-order. However, it should be understood that other packing schemes both for blocks and displacements within blocks may be used. In some embodiments, a packing scheme for the blocks and/or within the blocks may be predetermined. In some embodiments, the packing scheme may be signaled by the encoder in the bitstream per patch, patch group, tile, image, or sequence of images …. packing order 732 may follow a space-filling curve, which specifies a traversal in space in a continuous, non-repeating way “, par 0098, “displacements 700 packed in displacement image 720 may be ordered according to their LODs. For example, displacement coefficients (e.g., quantized wavelet coefficients) may be ordered from a lowest LOD to a highest LOD. In other words, a wavelet coefficient representing a displacement for a vertex at a first LOD may be packed (e.g., arranged and stored in displacement image 720) according to the first LOD. For example, displacements 700 may be packed from a lowest LOD to a highest LOD. Higher LODs represent a higher density of vertices and corresponds to more displacements compared to lower LODs “, Kim et al.: par 0098, “Implicitly detecting the connected components (CC) of the mesh with respect to the position's connectivity or the texture coordinate's connectivity or both and by considering each CC as a sub-mesh. The mesh vertices are traversed from neighbor to neighbor, which makes it possible to detect the CCs in a deterministic way. The indices assigned to the CCs start from 0 and are incremented by one each time a new CC is detected”). Regarding claim 6, Cao as modified by Kim et al. teaches all the limitation of claim 1, and Cao et al. further teach wherein the encoding the material indices associated with the mesh frame comprises: encoding the material indices associated with the mesh frame as an occupancy map associated with a plurality of vertices of the mesh frame (par 0174, “Patch occupancy component 1810 may indicate which samples in patch geometry component 1806 and patch attribute component 1808 are associated with data in the 3D patch. For example, patch occupancy component 1810 may be a binary image as shown in FIG. 18 that indicates whether a pixel in one or more of the other 2D patch components corresponds to a valid 3D projected point from mesh frame 1726. A binary image is an image that comprises pixels with two colors, such as black and white as shown in FIG. 18. 
Each pixel of a binary image may be stored using a single bit, with "0" representing one of the two colors and "1" representing the other of two colors", par 0176, "Patch packer 1710 may pack: 2D patch geometry components generated for 3D patches of mesh frame 1726 into 2D geometry component 1736, 2D patch attribute components (e.g., for a single attribute type) generated for the 3D patches of mesh frame 1726 into 2D attribute component 1728, and 2D patch occupancy components generated for the 3D patches of mesh frame 1726 into 2D occupancy component 1730").

Regarding claim 7, Cao as modified by Kim et al. teaches all the limitation of claim 1, and Kim et al. further teach wherein the determining that the at least two different materials are applied in the mesh frame comprises: determining that a parameter indicating a new material is applied in the mesh frame occurs at least twice (Fig 2, par 0033, "the example texture mesh stored in the object format shown in FIG. 2 includes geometry information listed as X, Y, and Z coordinates of vertices and texture coordinates listed as two dimensional (2D) coordinates for vertices, wherein the 2D coordinates identify a pixel location of a pixel storing texture information for a given vertex. The example texture mesh stored in the object format also includes texture connectivity information that indicates mappings between the geometry coordinates and texture coordinates to form polygons, such as triangles. For example, a first triangle is formed by three vertices, where a first vertex (1/1) is defined as the first geometry coordinate (e.g., 64.062500, 1237.739990, 51.757801), which corresponds with the first texture coordinate (e.g., 0.0897381, 0.740830). The second vertex (2/2) of the triangle is defined as the second geometry coordinate (e.g., 59.570301, 1236.819946, 54.899700), which corresponds with the second texture coordinate (e.g., 0.899059, 0.741542)").

Regarding claim 8, Cao teaches an apparatus for mesh compression of mesh sequences with multiple texture maps per frame (Fig 2A, par 0050-0055), the apparatus comprising: at least one memory configured to store program code; and at least one processor configured to read the program code and operate as instructed by the program code (par 0202-0203). The remaining limitations of the claim are similar in scope to claim 1 and rejected under the same rationale.

Regarding claims 9-11 and 13-14, Cao as modified by Kim et al. teaches all the limitation of claim 8; claims 9-11 and 13-14 are similar in scope to claims 2-4 and 6-7 and are rejected under the same rationale.

Regarding claim 15, Cao teaches a non-transitory computer-readable medium storing instructions, the instructions comprising: one or more instructions that, when executed by one or more processors of a device for mesh coding, cause the one or more processors to (par 0030-0031, par 0202-0204). The remaining limitations of the claim are similar in scope to claim 1 and rejected under the same rationale.

Regarding claims 16-18 and 20, Cao as modified by Kim et al. teaches all the limitation of claim 15; claims 16-18 and 20 are similar in scope to claims 2-4 and 6 and are rejected under the same rationale.

Allowable Subject Matter

Claims 5, 12, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The cited prior art fails to teach the combination of elements recited in claims 5, 12, and 19, including "wherein for a triangle face in the mesh frame that is encoded first, the encoding of the material indices comprises encoding a first material index associated with the triangle face in the mesh frame that is encoded first; and for remaining triangle faces, the encoding of the material indices comprises encoding a respective material index predictor associated with a respective remaining triangle face, wherein the respective material index predictor is a difference between a current material index of a current triangle face and a previous material index of a previous triangle face".

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jin Ge whose telephone number is (571)272-5556. The examiner can normally be reached 8:00 to 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at (571)272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JIN GE/
Primary Examiner, Art Unit 2619
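For context on the technical subject matter: the rejected independent claims cover assigning a material (texture-map) index to each triangle face of a mesh frame when the mesh file applies two or more materials, and the allowable subject matter (claims 5, 12, and 19) adds delta coding of those indices, where only the first face's index is coded directly and every later face codes the difference from the previous face. The sketch below is illustrative only; it is not code from the application or from the Cao or Kim references, and the OBJ-style usemtl convention, function names, and sample values are assumptions made for the example.

```python
# Illustrative sketch only; NOT code from application 18/813,849 or the cited
# references. It mimics, under assumed conventions, two ideas discussed above:
# (1) assigning a per-face material index from the order in which "usemtl"
#     statements appear in an OBJ-style mesh file, and
# (2) the delta coding recited in claims 5/12/19: the first face's index is
#     coded directly, each later face codes (current - previous).

from typing import List, Tuple


def assign_material_indices(obj_lines: List[str]) -> List[int]:
    """Return one material index per face ('f' line), in file order.

    Materials are numbered by first appearance of their 'usemtl' statement,
    i.e. the order in which each material appears in the mesh file.
    """
    material_ids = {}   # material name -> index (order of first appearance)
    current = None
    per_face = []
    for line in obj_lines:
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "usemtl" and len(tokens) > 1:
            current = material_ids.setdefault(tokens[1], len(material_ids))
        elif tokens[0] == "f":
            per_face.append(0 if current is None else current)
    return per_face


def delta_encode(indices: List[int]) -> Tuple[int, List[int]]:
    """First index coded as-is; each later face codes (current - previous)."""
    if not indices:
        return 0, []
    deltas = [indices[i] - indices[i - 1] for i in range(1, len(indices))]
    return indices[0], deltas


def delta_decode(first: int, deltas: List[int]) -> List[int]:
    out = [first]
    for d in deltas:
        out.append(out[-1] + d)
    return out


if __name__ == "__main__":
    mesh_file = [
        "usemtl skin",      # hypothetical material 0
        "f 1/1 2/2 3/3",
        "f 2/2 3/3 4/4",
        "usemtl cloth",     # hypothetical material 1
        "f 4/4 5/5 6/6",
        "usemtl skin",      # back to material 0
        "f 6/6 7/7 8/8",
    ]
    indices = assign_material_indices(mesh_file)   # [0, 0, 1, 0]
    first, deltas = delta_encode(indices)          # 0, [0, 1, -1]
    assert delta_decode(first, deltas) == indices
    print("per-face material indices:", indices)
    print("delta-coded:", first, deltas)
```

Under these assumptions, long runs of faces that share a material reduce to runs of zero deltas, which appears to be the intuition behind the indicated allowable subject matter.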

Prosecution Timeline

Aug 23, 2024
Application Filed
Feb 10, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592024
QUANTIFICATION OF SENSOR COVERAGE USING SYNTHETIC MODELING AND USES OF THE QUANTIFICATION
2y 5m to grant • Granted Mar 31, 2026
Patent 12586296
METHODS AND PROCESSORS FOR RENDERING A 3D OBJECT USING MULTI-CAMERA IMAGE INPUTS
2y 5m to grant • Granted Mar 24, 2026
Patent 12579704
VIDEO GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 17, 2026
Patent 12573164
DESIGN DEVICE, PRODUCTION METHOD, AND STORAGE MEDIUM STORING DESIGN PROGRAM
2y 5m to grant • Granted Mar 10, 2026
Patent 12573151
PERSONALIZED DEFORMABLE MESH BY FINETUNING ON PERSONALIZED TEXTURE
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80%
With Interview: 98% (+18.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
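The projected figures above are consistent with simply adding the examiner's interview lift, in percentage points, to the career allow rate; the dashboard does not disclose its actual model, so the snippet below is only an assumed reconstruction of that arithmetic.

```python
# Assumed arithmetic only: the dashboard's actual projection model is not
# disclosed. This reproduces the figures shown above by adding the interview
# lift (percentage points) to the career allow rate, capped at 100%.
career_allow_rate = 80.0   # % (416 granted / 520 resolved)
interview_lift = 18.0      # percentage points, among resolved cases with interview

with_interview = min(career_allow_rate + interview_lift, 100.0)
print(f"Grant probability: {career_allow_rate:.0f}%")   # 80%
print(f"With interview:    {with_interview:.0f}%")      # 98%
```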
