Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
1. This action is responsive to communications: Application filed on January 5, 2024, and Drawings filed on January 5, 2024.
2. Claims 1–20 are pending in this case. Claims 1 and 11 are independent claims.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Allowable Subject Matter
Claims 5, 6, 10, 15, and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With regard to claims 5 and 15, the prior art does not disclose the method according to claim 1, wherein decoding the encoded volumetric data comprises predicting multiple groups of vertices in addition to the group, and wherein at least one vertex per each of the multiple groups is not edge connected to others of the vertices per each of the multiple groups.
With regard to claim 10, the prior art does not disclose the method according to claim 1, wherein values of the spatial grouping syntax are based on whether a first coding cost of coding a motion vector of all of the vertices of the group is determined to be less than or equal to a second coding cost of coding estimation residues of all the vertices of the group.
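For illustration only, the rate decision recited in claim 10 can be sketched as a simple comparison of two coding costs. The function name and the cost values in bits below are hypothetical placeholders, not taken from the application or the prior art:

```python
def use_group_motion_vector(mv_cost_bits: int, residue_cost_bits: int) -> bool:
    """Return True when coding a single motion vector for all vertices of
    the group costs no more bits than coding the per-vertex estimation
    residues, i.e., the 'less than or equal to' test recited in claim 10."""
    return mv_cost_bits <= residue_cost_bits

# Hypothetical costs: the group motion vector wins when it is no more expensive.
print(use_group_motion_vector(10, 12))  # -> True
print(use_group_motion_vector(12, 10))  # -> False
```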
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 8, 9, 18, and 19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
With regard to claims 8 and 18, the claims recite the limitation wherein the group consists of an integer K of the vertices, and wherein the basemesh inter submesh data unit syntax is obtained with the encoded volumetric data and signals the integer K.
It is unclear what constitutes an “integer K”. It is unclear whether K represents the number of vertices in the group, is a label that defines the group, or is something else. It is also unclear what constitutes “signals the integer K”. For the purpose of compact prosecution, K is interpreted as a label that defines the group.
With regard to claim 9, the claim recites the apparatus according to claim 3, wherein the integer K is 16. Claim 3 does not recite an integer K; therefore, the limitation lacks antecedent basis. For the purpose of compact prosecution, claim 9 is interpreted as depending from claim 8. Claim 19 is rejected for the same reason and will be examined as depending from claim 18.
Claims 8, 9, 18, and 19 would be allowable if the applicant overcomes the § 112 rejections.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1–4, 11–14, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Joshi (Pub. No. US 20220164994 A1).
With regard to claim 1:
Joshi discloses a method for video decoding, the method performed by at least one processor and comprising: obtaining, from a coded bitstream, a mesh representing an encoded volumetric data of at least one three-dimensional (3D) visual content (a 3D mesh including 3D visual content is used for 2D conversion; paragraph 40: “Embodiments of the present disclosure provide systems and methods for converting an 3D mesh into a 2D representation that can be transmitted and then reconstructed into the 3D mesh for rendering. An encoder converts an input 3D mesh onto multiple 2D frames (such as geometry frames, attribute frames, and occupancy map frames). The 2D frames include patches that represent portions of the 3D mesh. In certain embodiments, a patches can include vertices that are also located in another patch. The 2D frames can represent video frames. The 2D video frames can be encoded (using video codecs such as HEVC, AVC, VP9, VP8, VVC, and the like) to compress the 2D frames for transmission via a bitstream. the encoder can also encode the connectivity information that specifies how the polygons of the mesh are formed. A decoder receives and decodes the bitstream and then reconstructs the 3D mesh from the 2D frames such that the 3D mesh can be rendered, displayed, and then viewed by a user.”); determining a spatial grouping syntax signaling a coding order of vertices obtained with the encoded volumetric data (vertices are grouped into patches; paragraph 80: “When encoding media content, such as a point cloud or a mesh, the electronic device 300 or the server 200 of FIG. 2 can separate the vertices from the connectivity information. When the vertices and the connectivity information are separated, the vertices are similar to points of a point cloud. The electronic device 300 or the server 200300 can also segment the vertices to certain projection planes to create multiple patches. 
For example, a cluster of points of the point cloud or vertices of a mesh can be grouped together and represented as a patch on the 2D frames. A patch can represent a single aspect of the mesh, such as geometry (a geometric position of a vertex), or an attribute such as color, reflectance, and the like) that are associated with a vertex. Patches that represent the same attribute can be packed into the same 2D frame. The 2D frames are then encoded to generate a bitstream. Similarly, the connectivity information is also encoded to generate another bitstream. The two bitstreams can be multiplexed together and transmitted to another device as a single bitstream. During the encoding process additional content such as metadata, flags, parameter sets, syntax elements, occupancy maps, geometry smoothing parameters, one or more attribute smoothing parameters, and the like can be included in any of the bitstreams.”), wherein at least two of the vertices are within a threshold distance of each other (vertices near each other are grouped as patches; paragraph 87: “During the segmentation process, each of the points/vertices of the 3D object 406 are assigned to a particular projection plane, (such as the projection planes 410, 412, 414, 416, 418, and 420). The points that are near each other and are assigned to the same projection plane are grouped together to form a cluster which is represented as a patch such as any of the patches as illustrated in FIGS. 4C and 4D. More or less projection planes can be used when assigning points to a particular projection plane. Moreover, the projection planes can be at various locations and angles. For example, certain projection planes can be at a 45 degree incline with respect to the other projection planes, Similarly, certain projection planes can be at a 90 degree angle with respect to other projection planes.”) and are not edge connected to each other (see fig. 
6B wherein patch 610 includes vertices, such as A and F or C and B, that are not edge connected; paragraphs 130 and 131: “As illustrated in FIG. 6B, the overlapped patch 610 includes the vertex A 612, the vertex B 614, the vertex C 616, the vertex D 618, the vertex E 622, and the vertex F 624, and the edges connecting these vertices. The overlapped patch 620 includes the vertex E 622, the vertex F 624, the vertex G 626, and the vertex H 628 and the edges connecting these vertices. That is, the vertex E 622, and the vertex F 624 belong to both overlapped patch 610 and overlapped patch 620 of the mesh 650. It is noted that by including the vertex E 622, and the vertex F 624 in two different patches may result in a loss of coding efficiency. However, when coding the connectivity and reordering information, the overlapped mesh patches are advantageous. For example, for coding connectivity information, the connectivity corresponding to each overlapped patch may be coded independently of other patches. This indicates that only the connectivity information corresponding to other points within the patch needs to be coded. Also for reordering information, only the points within each patch need to be considered instead of all the vertices. For example, if an overlapped patch includes 256 vertices and the total number of vertices of the mesh is 1024, in a straightforward fixed-length coding of reordering information, only 8 bits per vertex are needed instead of 10 bits.”); and decoding the encoded volumetric data by predicting the at least two of the vertices as a group based on the spatial grouping syntax (wherein grouped vertices (patches) are predicted as overlapping or not during decoding; paragraphs 144 to 147: “A decoder, such as the decoder 550 of FIG. 5 can receive a bitstream that includes overlapped patches. To reconstruct the mesh, the decoder 550 decodes geometry, occupancy map and attribute information corresponding to the overlapped mesh patches. 
This provides information about the vertices of the triangles of the overlapped mesh as well as the texture or (u, v) values associated with each vertex. Then, the decoder 550 decodes the connectivity information to form the mesh. In certain embodiments, if the encoder 510 encoded the geometry and occupancy information for all the overlapped mesh patches losslessly, the decoder 550 determines which vertices and edges are repeated and delete them during the reconstruction process. In certain embodiments, if the encoder 510 encoded the overlapped patches in a lossy manner, for each overlapped patch, the decoder 550 derives a 3D bounding box for the points belonging to the patch. The decoder 550 can use the following syntax elements. syntax elements AtlasPatch3dOffsetU, AtlasPatch3dOffsetV, AtlasPatch3dOffsetD, AtlasPatch3dRangeD, AtlasPatch2dSizeX, and AtlasPatch2dSizeY. The decoder 550 can combine the location of a point, with no quantization errors if that point is represented in two (or more) patches. For example, if overlapped patches 610 and 620 were encoded in a lossy manner, and those overlapped patches have orthogonal projection directions and 3D bounding boxes that overlap. Examples of orthogonal projection directions are for X and Y, Y and Z, and the like. This example will consider vertex F 624, which is located in both the overlapped patch 610 and the overlapped patch 620 and has an original 3D location at (X, Y, Z). If the projection direction for the overlapped patch 610 is Z, then there is no quantization error for the X and Y coordinates of the reconstructed points from the overlapped patch 610. It is noted that for a projection onto plane XY (in the Z direction) does not introduce quantization noise in the coordinates X and Y. Similarly, if the projection direction for the overlapped patch 620 is Y, then there is no quantization error for the X and Z coordinates of the reconstructed points from the overlapped patch 620. 
It is noted that for projection onto plane XZ (in the Y direction) does not introduce quantization noise in the coordinates X and Z. Accordingly, an original point (corresponding by vertex F 624) represented in the overlapped patch 610 is located at (X, Y, Z), then the reconstructed point from reconstructed point may be denoted as (X, Y, {circumflex over (Z)}), where {circumflex over (Z)} is a quantized version of Z. That is, the quantized version of vertex F 624 of overlapped patch 610 is (X, Y, {circumflex over (Z)}). Similarly an original point (corresponding vertex F 624) represented in the overlapped patch 620 is located at (X, Y, Z), then the reconstructed point from reconstructed point may be denoted as (X, Ŷ, Z), where Ŷ is a quantized version of Y. That is, the quantized version of vertex F 624 of overlapped patch 620 is (X, Ŷ, Z). In this example, the decoder 550 searches for a point that is in overlapped patch 610 and overlapped patch 620 that have the same X coordinate (such as the vertex F 624). The decoder 550 then determines that these points are duplicates of each other and then combines these two points into a single point. To combine these points the decoder 550 (i) knows the X value (as it is the same for the overlapped patches 610 and 620), (ii) derives the Y coordinate of the vertex F 624 from the reconstructed point in the overlapped patch 610 and (iii) derives the Z coordinate of the vertex F 624 from the reconstructed point in overlapped patch 620.”).
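The duplicate-vertex combination described in the quoted paragraphs 144 to 147 can be illustrated with a short sketch. This is an illustrative paraphrase, not code from either the application or Joshi: a vertex reconstructed from a patch projected along Z keeps exact X and Y coordinates, a duplicate of it reconstructed from a patch projected along Y keeps exact X and Z coordinates, and the decoder combines them to recover the unquantized point:

```python
def combine_duplicate_vertex(point_from_patch_z, point_from_patch_y):
    """Combine two lossy reconstructions of the same vertex.

    point_from_patch_z: (x, y, z_hat) -- projected along Z, so x and y are exact.
    point_from_patch_y: (x, y_hat, z) -- projected along Y, so x and z are exact.
    The shared exact X coordinate identifies the pair as duplicates.
    """
    x1, y, _z_hat = point_from_patch_z
    x2, _y_hat, z = point_from_patch_y
    assert x1 == x2, "duplicate candidates must share the exact X coordinate"
    # Take the exact Y from the Z-projected patch and the exact Z from the
    # Y-projected patch, as in the quoted example for vertex F 624.
    return (x1, y, z)

# Vertex F originally at (3, 7, 5); each patch quantizes one coordinate.
print(combine_duplicate_vertex((3, 7, 4), (3, 6, 5)))  # -> (3, 7, 5)
```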
With regard to claims 2 and 12:
Joshi discloses the method according to claim 1, wherein a third vertex of the group is within the threshold distance to any of the at least two of the vertices (see fig. 6B wherein vertices near each other, A, B, and C, are grouped as patch 610; paragraphs 130 and 131: “As illustrated in FIG. 6B, the overlapped patch 610 includes the vertex A 612, the vertex B 614, the vertex C 616, the vertex D 618, the vertex E 622, and the vertex F 624, and the edges connecting these vertices. The overlapped patch 620 includes the vertex E 622, the vertex F 624, the vertex G 626, and the vertex H 628 and the edges connecting these vertices. That is, the vertex E 622, and the vertex F 624 belong to both overlapped patch 610 and overlapped patch 620 of the mesh 650. It is noted that by including the vertex E 622, and the vertex F 624 in two different patches may result in a loss of coding efficiency. However, when coding the connectivity and reordering information, the overlapped mesh patches are advantageous. For example, for coding connectivity information, the connectivity corresponding to each overlapped patch may be coded independently of other patches. This indicates that only the connectivity information corresponding to other points within the patch needs to be coded. Also for reordering information, only the points within each patch need to be considered instead of all the vertices. For example, if an overlapped patch includes 256 vertices and the total number of vertices of the mesh is 1024, in a straightforward fixed-length coding of reordering information, only 8 bits per vertex are needed instead of 10 bits.”).
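The bit-count arithmetic in the quoted passage (8 bits per vertex for a 256-vertex patch versus 10 bits for a 1024-vertex mesh) follows from fixed-length index coding; the minimal check below is for illustration only, and the helper name is not from either reference:

```python
import math

def index_bits(num_entries: int) -> int:
    """Bits needed for a fixed-length index over num_entries items."""
    return math.ceil(math.log2(num_entries))

print(index_bits(256))   # per-patch reordering index -> 8
print(index_bits(1024))  # whole-mesh reordering index -> 10
```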
With regard to claims 3 and 13:
Joshi discloses the method according to claim 2, wherein the third vertex is edge connected to one of the at least two of the vertices (see fig. 6B wherein vertex A is edge connected to B and C; paragraphs 130 and 131: “As illustrated in FIG. 6B, the overlapped patch 610 includes the vertex A 612, the vertex B 614, the vertex C 616, the vertex D 618, the vertex E 622, and the vertex F 624, and the edges connecting these vertices. The overlapped patch 620 includes the vertex E 622, the vertex F 624, the vertex G 626, and the vertex H 628 and the edges connecting these vertices. That is, the vertex E 622, and the vertex F 624 belong to both overlapped patch 610 and overlapped patch 620 of the mesh 650. It is noted that by including the vertex E 622, and the vertex F 624 in two different patches may result in a loss of coding efficiency. However, when coding the connectivity and reordering information, the overlapped mesh patches are advantageous. For example, for coding connectivity information, the connectivity corresponding to each overlapped patch may be coded independently of other patches. This indicates that only the connectivity information corresponding to other points within the patch needs to be coded. Also for reordering information, only the points within each patch need to be considered instead of all the vertices. For example, if an overlapped patch includes 256 vertices and the total number of vertices of the mesh is 1024, in a straightforward fixed-length coding of reordering information, only 8 bits per vertex are needed instead of 10 bits.”).
With regard to claims 4 and 14:
Joshi discloses the method according to claim 2, wherein the third vertex is not edge connected to any of the at least two of the vertices (see fig. 6B wherein vertex E is not edge connected to A or B; paragraphs 130 and 131: “As illustrated in FIG. 6B, the overlapped patch 610 includes the vertex A 612, the vertex B 614, the vertex C 616, the vertex D 618, the vertex E 622, and the vertex F 624, and the edges connecting these vertices. The overlapped patch 620 includes the vertex E 622, the vertex F 624, the vertex G 626, and the vertex H 628 and the edges connecting these vertices. That is, the vertex E 622, and the vertex F 624 belong to both overlapped patch 610 and overlapped patch 620 of the mesh 650. It is noted that by including the vertex E 622, and the vertex F 624 in two different patches may result in a loss of coding efficiency. However, when coding the connectivity and reordering information, the overlapped mesh patches are advantageous. For example, for coding connectivity information, the connectivity corresponding to each overlapped patch may be coded independently of other patches. This indicates that only the connectivity information corresponding to other points within the patch needs to be coded. Also for reordering information, only the points within each patch need to be considered instead of all the vertices. For example, if an overlapped patch includes 256 vertices and the total number of vertices of the mesh is 1024, in a straightforward fixed-length coding of reordering information, only 8 bits per vertex are needed instead of 10 bits.”).
Claim 11 is rejected for the same reason as claim 1.
Claim 20 is rejected for the same reason as claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Joshi in view of Buhushan (Patent No. 12488529).
With regard to claims 7 and 17:
Joshi does not disclose the method according to claim 1, wherein the spatial grouping syntax is of a basemesh inter submesh data unit syntax.
However, Buhushan discloses the method according to claim 1, wherein the spatial grouping syntax is of a basemesh inter submesh data unit syntax (paragraph 380: “The texturing engine 1506 then divides the mesh into multiple separate submeshes 1568 by grouping faces textured by the same frame under a corresponding submesh. Each submesh includes a subset of the vertices 1522, the faces 1524, and the normals 1526 in the mesh and is associated with a separate material in material data 1530. By creating submeshes 1568, the texturing engine 1506 allows different portions of the mesh to be associated with different materials and textures 1532 in the materials.”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Buhushan to Joshi in order to optimize performance and save system resources.
Pertinent Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Comer (Pub. No. US 20190266796 A1): Example 59: The method of any of the Examples above, wherein the first type of transformation comprises a rigid transformation of at least some of the vertices of the base mesh towards corresponding vertices of the target mesh and wherein applying at least one iteration of the first type of transformation comprises: identifying a grouping of vertices of the base mesh that have distance differences above the first threshold, wherein a given vertex in the grouping of vertices has the largest distance difference amongst the grouping; determining a size of a falloff region based at least in part on the magnitude of the largest distance difference; applying the rigid transformation for the vertices of the base mesh in the falloff region; and feathering the rigid transformation for the vertices of the base mesh outside of the falloff region.
Inagaki (Pub. No. US 20200005537 A1): As the object collision progresses deeper into the surface of the mesh 205, this may result in some of the vertices of the mesh being pushed apart and separated into closely positioned groupings on the sides of the collision object, for example 212, 206 and 214, 210, accompanied by a significant increase in distance between a few of the vertices positioned near the middle of the collision event, for example 206, 208, 210, as shown in FIG. 2B.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DI XIAO whose telephone number is (571) 270-1758. The examiner can normally be reached 9:00 AM to 5:00 PM EST, Monday through Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DI XIAO/Primary Examiner, Art Unit 2178