DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/12/2025 has been entered.
Response to Arguments
Applicant’s arguments (see remarks), filed 12/12/2025, with respect to claims 1-21, have been fully considered but they are not persuasive.
On page 14, applicant argues “The Applicant respectfully submits that the combination of Thudor, Sinharoy, Kim and Park does not teach, suggest, or render obvious at least, for example, the feature of "compute a first radius based on division of the first number of 3D points of the determined surface of the reference point cloud by the determined second number of the 3D points of the reference point cloud," as recited in amended independent claim 1”.
In response, the Office respectfully does not find this argument persuasive because the references and/or combination of references have changed with regard to the 35 U.S.C. § 103 rejection of independent claim 1 due to the amendments. The references and/or combination of references currently used in the rejection of claim 1 includes THUDOR et al. (US 20200380765 A1) in view of HUR et al. (US 20210407142 A1) and in further view of BHOWMICK et al. (US 20190080503 A1).
On page 14, applicant argues “It was alleged in the Office Action that: THUDOR fails to explicitly teach compute a first radius by division of the surface of the reference point cloud by the first number of the 3D points of the reference point cloud. See Office Action at page 8.”.
In response, the Office respectfully does not find this argument persuasive because the amendments changed the scope of the claimed invention as well as the applicability of THUDOR et al. (US 20200380765 A1) to the relevant limitation.
Based on the breadth of claim language, THUDOR et al. (US 20200380765 A1) explicitly teaches compute a first radius (Fig. 9. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. The density may be determined by counting for each element (or for each element of a part of the elements), e.g. point or mesh element, of the 3D representation the number of neighbors N, for example the number of neighbors in a sphere of radius R centered on said each element or in a cube centered on said each element. In paragraph [0107]-THUDOR discloses the density may be estimated by determining the distance to the nearest neighbor, for each element or for each element of a part of the elements of the 3D representation. This distance is considered as being equivalent to the above spherical neighborhood radius R (and N=1)) based on division of the first number of 3D points of the determined surface of the reference point cloud by the determined second number of the 3D points of the reference point cloud (Fig. 9. Paragraph [0106]-THUDOR discloses the density may be expressed with the number of neighbors N, as a surface density equal to the number of neighbors divided by the neighborhood surface (i.e. N/(Pi·R²)) or as a volume density equal to the number of neighbors divided by the neighborhood volume (N/((4/3)·Pi·R³)). In paragraph [0117]-THUDOR discloses to obtain the 3D parts, the point cloud may be partitioned. The 3D space 81 (e.g. a half-sphere) occupied by the point cloud is partitioned according to spherical coordinates (r, θ, φ) (wherein ‘r’ represents a radius of the half-sphere). The size of each 3D part is determined to uniformly distribute the points of the point cloud into the 3D parts, the size of the 3D points depending from the local density of the points in the different areas of the space occupied by the point cloud. Please also read paragraph [0111, 0117, 0144]).
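For clarity of the record only, and not as a further ground of rejection, the neighbor-count density estimate that THUDOR describes at paragraphs [0106]-[0107] may be sketched as follows. This is a minimal, illustrative sketch assuming a point cloud supplied as an N×3 numpy array; the function and variable names are the examiner's own and do not appear in THUDOR.

import numpy as np
from scipy.spatial import cKDTree

def density_per_point(points: np.ndarray, radius: float):
    """THUDOR [0106]: count the neighbors N of each point inside a sphere of
    radius R, then express a surface density N/(Pi*R^2) or a volume density
    N/((4/3)*Pi*R^3)."""
    tree = cKDTree(points)
    neighbor_lists = tree.query_ball_point(points, radius)
    n = np.array([len(idx) - 1 for idx in neighbor_lists])  # exclude the point itself
    surface_density = n / (np.pi * radius ** 2)
    volume_density = n / ((4.0 / 3.0) * np.pi * radius ** 3)
    return surface_density, volume_density

def nearest_neighbor_radius(points: np.ndarray) -> np.ndarray:
    """THUDOR [0107]: the distance to the nearest neighbor is treated as the
    spherical neighborhood radius R (with N = 1)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)  # k=2 because the closest hit is the point itself
    return dists[:, 1]

Either per-point value, evaluated over the reference point cloud, is one way to populate the kind of local density map discussed above.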
On page 15, applicant argues “Park does not describe that a radius is computed based on division of a number of three- dimensional (3D) points of a surface of a reference point cloud by a number of the 3D points of the reference point cloud.”.
In response, the Office respectfully does not find this argument persuasive because the references and/or combination of references have changed with regard to the 35 U.S.C. § 103 rejection of independent claim 1 due to the amendments. The references and/or combination of references currently used in the rejection of claim 1 includes THUDOR et al. (US 20200380765 A1) in view of HUR et al. (US 20210407142 A1) and in further view of BHOWMICK et al. (US 20190080503 A1).
On page 17, applicant argues “Further, the Examiner has failed to provide "articulated reasoning with some rationale underpinning to support the legal conclusion of obviousness" in the detailed manner described in KSR...Therefore, a person of ordinary skill in the field of Applicant's invention would not look at the disclosure of Park at the time the invention was made, to combine it with the other references as suggested in the Office Action. Neither the Examiner has provided any rationale for the combination of Park with the other cited references, nor there is a suggestion in Park for such a combination. Therefore, the Applicant respectfully submits that the rationale proffered to combine the teachings of Thudor, Sinharoy, Kim and Park is based on hindsight, and is thus improper.”.
In response, the Office respectfully does not find this argument persuasive because the references and/or combination of references have changed with regard to the 35 U.S.C. § 103 rejection of independent claim 1 due to the amendments. Thus, the rationale and/or reasoning in support of the obviousness rejection has also changed. The references and/or combination of references currently used in the rejection of claim 1 includes THUDOR et al. (US 20200380765 A1) in view of HUR et al. (US 20210407142 A1) and in further view of BHOWMICK et al. (US 20190080503 A1).
On page 17, applicant argues “Therefore, amended independent claim 1 is not taught, suggested, or rendered obvious over the combination of Thudor, Sinharoy, Kim, and Park. The Applicant further submits that amended independent claims 17 and 20 are also not taught, suggested, or rendered obvious over the references cited in the Office Action at least for the reasons stated above with regard to amended independent claim 1.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
On page 17, applicant argues “The Applicant respectfully submits that dependent claims 3-8, 12, 13, 16, and 19 are also not taught, suggested, or rendered obvious over the references cited in the Office Action based at least on the dependence on amended independent claims 1 or 17. Further, each of dependent claims 3-8, 12, 13, 16, and 19 separately recites subject matter not described or suggested by any of the cited references, whether taken individually or in combination.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
In addition, based on the breadth of the claim language, THUDOR et al. (US 20200380765 A1) explicitly teaches the limitations of these dependent claims, as set forth in the corresponding rejections below.
On page 19, applicant argues “the Applicant respectfully submits that dependent claims 2 and 8 are also not taught, suggested, or rendered obvious over the references cited in the Office Action based at least on the dependence on amended independent claim 1. Further, each of dependent claims 2 and 18 separately recites subject matter not described or suggested by any of the cited references, whether taken individually or in combination.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
In addition, based on the breadth of claim language, GAO explicitly teaches wherein the reference point cloud is an uncompressed point cloud of the object (Fig. 2. Paragraph [0054]-GAO discloses the streaming system (200) may include a capture subsystem (213). The capture subsystem (213) can include a point cloud source (201), for example light detection and ranging (LIDAR) systems, 3D cameras, 3D scanners, a graphics generation component that generates the uncompressed point cloud in software, and the like that generates for example point clouds (202) that are uncompressed. Further in paragraph [0058]-GAO discloses the V-PCC encoder (300) receives point cloud frames as uncompressed inputs and generates bitstream corresponding to compressed point cloud frames).
On page 19, applicant argues “the Applicant respectfully submits that dependent claims 9-11 are also not taught, suggested, or rendered obvious over the references cited in the Office Action based at least on the dependence on amended independent claim 1. Further, each of dependent claims 9-11 separately recites subject matter not described or suggested by any of the cited references, whether taken individually or in combination.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
On page 19, applicant argues “the Applicant respectfully submits that dependent claims 14 and 15 are also not taught, suggested, or rendered obvious over the references cited in the Office Action based at least on the dependence on amended independent claim 1. Further, each of dependent claims 14 and 15 separately recites subject matter not described or suggested by any of the cited references, whether taken individually or in combination.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
On page 20, applicant argues “Based on at least the foregoing, the Applicant respectfully submits that claims 1-21 are in condition for allowance. A Notice of Allowability is courteously solicited.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant is invited to amend the claims to overcome the current grounds of rejection and/or the prior art of record.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-7, 16-17 and 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over THUDOR et al. (US 20200380765 A1), hereinafter referenced as THUDOR, in view of HUR et al. (US 20210407142 A1), hereinafter referenced as HUR, and in further view of BHOWMICK et al. (US 20190080503 A1), hereinafter referenced as BHOWMICK.
Regarding claim 1, THUDOR explicitly teaches an electronic device (Fig. 12, #12 called a device. Paragraph [0189]-THUDOR discloses FIG. 12 shows an example architecture of a device 12 which may be configured to implement a method described in relation with FIGS. 10, 11, 15 and/or 16. The device 12 may be configured to be an encoder 91, 131 or a decoder 92, 132 of FIGS. 9 and 13), comprising:
circuitry (Fig. 12. Paragraph [0190-0196]-THUDOR discloses the device 12 comprises following elements that are linked together by a data and address bus 121: a microprocessor 122 (or CPU), which is, for example, a DSP (or Digital Signal Processor); a ROM (or Read Only Memory) 123; a RAM (or Random-Access Memory) 124; a storage interface 125; an I/O interface 126 for reception of data to transmit, from an application; and a power supply, e.g. a battery) configured to:
acquire a reference point cloud of an object (Fig. 3. Paragraph [0092]-THUDOR discloses FIG. 3 shows two different representations of an object, or part of it, of the scene represented with the volumetric content 10. In paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. In paragraph [0149]-THUDOR discloses the point cloud 901 is encoded into encoded data under the form of a bitstream 902 via an encoding process 91 implemented in a module M91. The bitstream is transmitted to a module M92 that implements a decoding process 92 to decode the encoded data to obtain a decoded point cloud 903. Please also read paragraph [0155, 0168 and 0171]);
determine a first bounding box for the reference point cloud (Fig. 9. Paragraph [0117]-THUDOR discloses to obtain the 3D parts, the point cloud may be partitioned according to different methods. The 3D space 83 (e.g. a parallelepiped corresponding to a box bounding the point cloud) occupied by the point cloud is partitioned. Each 3D part may have the form of a cube or of a rectangle parallelepiped. Please also read paragraph [0100, 0109, 0111 and 0144] (wherein multiple 2D parametrizations are generated for the plurality of partitioned 3D parts));
determine a second number of the three-dimensional (3D) points (Fig. 9. Paragraph [0100]-THUDOR discloses the density corresponds to the number of elements per volume unit, e.g. a number of points per voxel. In paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined (wherein the density may be determined by counting for each element (or for each element of a part of the elements)). In paragraph [0108]-THUDOR discloses the 3D representation is partitioned in a plurality of parts, and the number of elements within each 3D part is calculated. In paragraph [0109]-THUDOR discloses information about the density may be obtained by determining boundaries within the 3D representation. Please also read paragraph [0139] (wherein a first and second partitioning of the same point cloud are generated with associated 2D parameterizations, maps and patch atlases)) of the reference point cloud (Fig. 9. Paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. The point cloud corresponds to a large collection of points representing the object, e.g. the external surface or the external shape of the object. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. The point cloud may be processed in order to compute its surface. Please also read paragraph [0111, 0117, 0144, 0149 and 0184]);
compute a first radius (Fig. 9. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. The density may be determined by counting for each element (or for each element of a part of the elements), e.g. point or mesh element, of the 3D representation the number of neighbors N, for example the number of neighbors in a sphere of radius R centered on said each element or in a cube centered on said each element. In paragraph [0107]-THUDOR discloses the density may be estimated by determining the distance to the nearest neighbor, for each element or for each element of a part of the elements of the 3D representation. This distance is considered as being equivalent to the above spherical neighborhood radius R (and N=1)) based on division of the first number of 3D points of the determined surface of the reference point cloud by the determined second number of the 3D points of the reference point cloud (Fig. 9. Paragraph [0106]-THUDOR discloses the density may be expressed with the number of neighbors N, as a surface density equal to the number of neighbors divided by the neighborhood surface (i.e. N/(Pi·R²)) or as a volume density equal to the number of neighbors divided by the neighborhood volume (N/((4/3)·Pi·R³)). In paragraph [0117]-THUDOR discloses to obtain the 3D parts, the point cloud may be partitioned. The 3D space 81 (e.g. a half-sphere) occupied by the point cloud is partitioned according to spherical coordinates (r, θ, φ) (wherein ‘r’ represents a radius of the half-sphere). The size of each 3D part is determined to uniformly distribute the points of the point cloud into the 3D parts, the size of the 3D points depending from the local density of the points in the different areas of the space occupied by the point cloud. Please also read paragraph [0111, 0117, 0144]);
generate a first local density map (Fig. 9. Paragraph [0119]-THUDOR discloses density information may be associated with each depth map and/or each texture map. The density information may for example take the form of metadata associated with each depth map. The density information may for example be representative of the average elements density of the 3D part associated with each depth map (or texture map). The density information may be representative of a range of density values that represents the range of density values in the considered 3D part. The density information may correspond to a flag associated with each depth map indicating whether the density of the elements comprised in the associated 3D parts is below a determined density level/value (e.g. the flag may be equal to 0 when the density is greater than the determined value and 1 when the density is less than the determined value, or the other way around)) based on the computed first radius (Fig. 9. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. Density may be determined by counting for each element (or for each element of a part of the elements), e.g. point or mesh element, of the 3D representation the number of neighbors N, for example the number of neighbors in a sphere of radius R centered on said each element or in a cube centered on said each element. Please also read paragraph [0107, 0117 and 0136]);
encode the reference point cloud to generate encoded point cloud data (Fig. 9. Paragraph [0148]-THUDOR discloses FIG. 9 shows schematically a diagram of an encoding/decoding scheme of a 3D scene, e.g. a 3D representation of the scene such as a point cloud. In paragraph [0149]-THUDOR discloses the point cloud 901 is encoded into encoded data under the form of a bitstream 902 via an encoding process 91 implemented in a module M91);
decode the encoded point cloud data to generate a test point cloud (Fig. 9. Paragraph [0170]-THUDOR discloses FIG. 11 shows operations for decoding the encoded version of the point cloud 901 from the bitstream 902. In paragraph [0171]-THUDOR discloses in an operation 111, encoded data of one or more pictures (e.g. pictures of one or more GOPs or of an intra period) of the point cloud is decoded by a decoder DEC2 from a received bitstream 902);
generate a second local density map for 3D points of the test point cloud (Fig. 9. Paragraph [0184]-THUDOR discloses the decoded point cloud 903 may then be further processed for reconstructing the 3D representation of the scene from the decoded pictures that comprise the attributes (depth and texture), from the decoded density information, from the decoded parameters representative of the 2D parameterizations and from the decoded mapping information for the mapping between the 2D parameterizations and the depth and texture maps comprised in the decoded pictures. In paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. In paragraph [0188]-THUDOR discloses an up-sampling process is applied to the parts of the point cloud identified, from the decoded density information, as having a points density less than the determined density level. Please also read paragraph [0106-0107, 0109, 0119]).
generate supplementary information based on the final density map (Fig. 14. Paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. In paragraph [0186]-THUDOR discloses additional points may be generated between pairs of reconstructed points obtained from the decoded bitstream. The additional points may be generated by computing their associated depth and texture from the depth and texture associated with the reconstructed points. The number of generated additional points may be determined according to a determined target density level. The target density level is set equal to the average density of the parts of the reconstructed point cloud having a density greater than said determined level. In paragraph [0188]-THUDOR discloses an up-sampling process is applied to the parts of the point cloud identified, from the decoded density information, as having a points density less than the determined density level. Please also read paragraph [0106-0107, 0119, 0160, 0171, 0184]), wherein the supplementary information includes at least one of:
missing points data corresponding to regions of the test point cloud that include geometry reconstruction artifacts (Fig. 16. Paragraph [0244]-THUDOR discloses splat rendering may then be applied to the reconstructed 3D representation (also to the parts that have been up-sampled) to generate/render the scene. Splat rendering is a technique that allows to fill hole between points, that are dimension-less, in a point cloud. It consists in estimating for each point of the point cloud based on its neighborhood an oriented ellipse, i.e. the two demi-axes and the normal of the ellipse), or one or more descriptors for the regions that include the geometry reconstruction artifacts (Fig. 16. Paragraph [0242]-THUDOR discloses the reconstructed 3D scene may be seen from the range of points of view, which may generate some rendering quality issues, especially when watching the 3D scene according to a point of view that enables to see areas of the scene identified as having a low point density (via the first information). To overcome these issues, an up-sampling process may be applied to the areas of the scene having a point density below the determined level/value to increase the number of points. Further in paragraph [0245]-THUDOR discloses the quality of the rendering of the 3D scene is increased by adding a small amount of data (i.e. the first information) to the bitstream. Please also read paragraph [0147]);
and signal the supplementary information to a Point Cloud Compression (PCC) decoder (Fig. 16. Paragraph [0243]-THUDOR discloses the decoded data and information may further be used to generate/reconstruct a 3D representation of the 3D scene for the rendering and/or displaying of the reconstructed 3D scene. The reconstructed 3D scene may be seen from the range of points of view, which may generate some rendering quality issues, especially when watching the 3D scene according to a point of view that enables to see areas of the scene identified as having a low point density (via the first information). To overcome these issues, an up-sampling process may be applied to the areas of the scene having a point density below the determined level/value to increase the number of points. Please also read paragraph [0225-0232 and 0244-0245]).
Although THUDOR explicitly teaches determine a surface of the reference point cloud (Fig. 9. Paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. The point cloud corresponds to a large collection of points representing the object, e.g. the external surface or the external shape of the object. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. The point cloud may be processed in order to compute its surface. Please also read paragraph [0144, 0155-0160, 0171, 0176 and 0184-0186]), wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud (Fig. 9. Paragraph [0100]-THUDOR discloses the density of the elements (e.g. points or mesh elements) forming the first 3D representation 30 may spatially vary. A volume unit corresponds for example to a voxel or to a cube of determined dimensions (e.g. a cube with edges having each a size equal to 1, 2 or 10 cm for example). The density corresponds to the number of elements per volume unit, e.g. a number of points per voxel. In paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined (wherein the density may be determined by counting for each element (or for each element of a part of the elements)). In paragraph [0108]-THUDOR discloses the 3D representation is partitioned in a plurality of parts (that may correspond to voxels or to elementary surface areas), and the number of elements within each 3D part is calculated (e.g. from the geometry of the scene). In paragraph [0109]-THUDOR discloses information about the density may be obtained by determining boundaries within the 3D representation);
THUDOR fails to explicitly teach determine dimensions of the first bounding box; determine a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud.
However, HUR explicitly teaches determine dimensions of the first bounding box (Fig. 10. Paragraph [0276]-HUR discloses in the point cloud encoding, the point cloud encoder performs a geometry-based point cloud compression (G-PCC) procedure, which includes a series of procedures such as prediction, transformation, quantization, and entropy coding, and the encoded data may be output in the form of a bitstream. In paragraph [0292]-HUR discloses the point cloud decoder (Point Cloud Decoding) performs geometry decompression, attribute decompression, auxiliary data decompression, and/or mesh data decompression. In paragraph [0350]-HUR discloses in the point cloud data encoding process, regions may be automatically partitioned according to the point distribution. In paragraph [0351]-HUR discloses the partitioned region unit may be set as a tile, a slice, and/or a block (a smaller region obtained by partitioning a slice). In paragraph [0354]-HUR discloses each region may include point density value and bounding-box information (location, size). Please also read paragraph [0364 and 0421]);
determine a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud (Fig. 10. Paragraph [0419]-HUR discloses the block partitioner partitions the point cloud data on a block basis. A block means a unit in which a slice is partitioned. A unit in which one slice is partitioned in order to encode/decode the slice in detail may be a block. The space of the point cloud data may be partitioned into block(s) in consideration of the degree of distribution analyzed by the distribution analyzer, and/or may be partitioned into block(s) according to the partitioning policy or the PCC system. In paragraph [0421]-HUR discloses the tile/slice/block partitioner may generate information on each tile/slice/block, and deliver the same in a parameter of a bitstream. There may be signaling information such as the position of the bounding box, the size of the bounding box, the density (number of points/area of the region), and an octree node order value));
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of HUR of having determine dimensions of the first bounding box; determine a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud; and signal the supplementary information to a Point Cloud Compression (PCC) decoder.
Wherein THUDOR’s electronic device, as modified, would determine dimensions of the first bounding box; determine a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud; and signal the supplementary information to a Point Cloud Compression (PCC) decoder.
The motivation behind the modification would have been to obtain an electronic device that improves the speed, decoding and appearance of point cloud reconstruction and transmission as well as the signal-to-noise ratio, since both THUDOR and HUR concern point cloud compression. Wherein THUDOR provides methods and systems that speed up the decoding of the information and improve the signal-to-noise ratio, while HUR provides methods and systems that improve compression efficiency and the quality of content without the need to encode/decode data. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098] and HUR et al. (US 20210407142 A1), Abstract and Paragraph [0070 and 0149].
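For illustration only, the per-region signaling that HUR describes at paragraphs [0354] and [0421] (bounding-box location, bounding-box size, and a density expressed as the number of points divided by the area of the region) might be represented as in the following sketch. The class and field names are hypothetical and are not taken from HUR; the use of the bounding-box surface area as the "area of the region" is an assumption made solely for the example.

import numpy as np
from dataclasses import dataclass

@dataclass
class RegionInfo:
    # Hypothetical per-tile/slice/block metadata in the spirit of HUR [0421]:
    # bounding-box position and size plus a density value (points / region area).
    bbox_position: tuple
    bbox_size: tuple
    num_points: int
    density: float

def describe_region(points: np.ndarray) -> RegionInfo:
    lo, hi = points.min(axis=0), points.max(axis=0)
    size = hi - lo
    # "Area of the region" approximated by the bounding-box surface area (assumption).
    area = 2.0 * (size[0] * size[1] + size[1] * size[2] + size[0] * size[2])
    return RegionInfo(tuple(lo), tuple(size), len(points), len(points) / area)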
THUDOR fails to explicitly teach generate a final density map based on a comparison between the first local density map and the second local density map; generate supplementary information based on the final density map, wherein the supplementary information includes at least one of: missing points data corresponding to regions of the test point cloud that include geometry reconstruction artifacts.
However, BHOWMICK explicitly teaches generate a final density map (Fig. 2. Paragraph [0021]-BHOWMICK discloses the embodiments provide methods and systems for change detection utilizing three dimensional (3D) point-cloud processing. The method includes acquiring and comparing surface geometry of a reference point-cloud defining a reference surface and a template point-cloud defining a template surface at local regions or local surfaces. In paragraph [0044]-BHOWMICK discloses upon successful registration, at step 306, the method 300 includes allowing the subsampling module 216 to equalize point density of the registered reference point-cloud and the registered template point-cloud. In paragraph [0045]-BHOWMICK discloses the densities of the two point-clouds need to be consistent. The reference and the template point-clouds may have varying point densities with N and M vertices respectively. The higher density point-cloud (first point-cloud) among the reference point-cloud and the template point-cloud is subsampled and the lower density point-cloud (second point-cloud) is retained with the original point density) based on a comparison between the first local density map and the second local density map (Fig. 2. Paragraph [0045]-BHOWMICK discloses the b parameter for all the vertices of the template is set to 0, indicative of non-inclusion of the parameter in the subsampled point-cloud (first point-cloud post subsampling process). Then the vertices (represented by the 3D co-ordinates) of the template are set in a kd-tree. For every vertex (point) of the reference, the closest vertex (closest point) of the template is selected from the kd-tree and a subsampling distance is determined between them. If the subsampling distance between the 3D coordinates of the two vertices is either below the predefined subsampling threshold t1 or above the predefined subsampling threshold t2, wherein t2 is greater than t1, then the selected vertex of the template (first point-cloud) is included in the sub-sampled point-cloud and b=1 is set. The vertices of the template in the kd-tree, whose b parameter is set to 1, form the subsampled point-cloud (first point-cloud post subsampling process). A kd-tree is set with the vertices of the reference and compared with vertices of the template whose b is still set to 0. If the distance between the two vertices is greater than a threshold, then the input vertex is also included in the sub-sampled template point-cloud. Then the reference is sub-sampled in the similar manner, wherein the reference point-cloud is now the first point-cloud).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of BHOWMICK of having generate a final density map based on a comparison between the first local density map and the second local density map.
Wherein THUDOR’s electronic device, as modified, would generate a final density map based on a comparison between the first local density map and the second local density map.
The motivation behind the modification would have been to obtain an electronic device that improves the speed, efficiency and accuracy of point cloud reconstruction, since both THUDOR and BHOWMICK concern point cloud reconstruction. Wherein THUDOR provides methods and systems that speed up the decoding of the information and improve the signal-to-noise ratio, while BHOWMICK provides methods and systems that improve point cloud reconstruction and change detection. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098] and BHOWMICK et al. (US 20190080503 A1), Abstract and Paragraph [0003-0004].
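For clarity of the record only, one possible literal reading of the amended claim 1 pipeline discussed above (a first radius obtained by dividing the number of surface points by the number of points of the reference cloud, per-point local density maps, and a final map obtained by comparing the reference and test maps) is sketched below. The sketch is the examiner's illustration under stated assumptions; in particular, the radius formula, the optional scale factor, and the use of a simple difference as the comparison are assumptions and are not attributed to Applicant's specification or to any cited reference.

import numpy as np
from scipy.spatial import cKDTree

def first_radius(num_surface_points: int, num_cloud_points: int, scale: float = 1.0) -> float:
    # One reading of the limitation: divide the first number (points of the determined
    # surface) by the second number (points of the reference cloud); `scale` is hypothetical.
    return scale * num_surface_points / num_cloud_points

def local_density_map(points: np.ndarray, radius: float) -> np.ndarray:
    # Neighbors inside a sphere of the given radius, divided by the spherical
    # volume (cf. THUDOR [0106]).
    tree = cKDTree(points)
    counts = np.array([len(ix) - 1 for ix in tree.query_ball_point(points, radius)])
    return counts / ((4.0 / 3.0) * np.pi * radius ** 3)

def final_density_map(reference: np.ndarray, test: np.ndarray, radius: float) -> np.ndarray:
    # Compare the reference map with the test-cloud density sampled at the same
    # locations; large positive differences flag candidate regions with missing points.
    ref_map = local_density_map(reference, radius)
    test_tree = cKDTree(test)
    test_counts = np.array([len(ix) for ix in test_tree.query_ball_point(reference, radius)])
    test_map = test_counts / ((4.0 / 3.0) * np.pi * radius ** 3)
    return ref_map - test_map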
Regarding claim 3, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1, THUDOR further teaches wherein the geometry reconstruction artifacts correspond to holes in the test point cloud (Fig. 16. Paragraph [0244]-THUDOR discloses splat rendering may then be applied to the reconstructed 3D representation (also to the parts that have been up-sampled) to generate/render the scene. Splat rendering is a technique that allows to fill hole between points, that are dimension-less, in a point cloud. It consists in estimating for each point of the point cloud based on its neighborhood an oriented ellipse, i.e. the two demi-axes and the normal of the ellipse).
Regarding claim 4, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1, THUDOR further teaches wherein the circuitry (Fig. 12. Paragraph [0190-0196]-THUDOR discloses the device 12 comprises following elements that are linked together by a data and address bus 121: a microprocessor 122 (or CPU), which is, for example, a DSP (or Digital Signal Processor); a ROM (or Read Only Memory) 123; a RAM (or Random-Access Memory) 124; a storage interface 125; an I/O interface 126 for reception of data to transmit, from an application; and a power supply, e.g. a battery) is further configured to:
compute a spherical volume based on the first radius (Fig. 3. Paragraph [0106]-THUDOR discloses the density may be expressed with the number of neighbors N, as a surface density equal to the number of neighbors divided by the neighborhood surface (i.e. N/(Pi·R²)) or as a volume density equal to the number of neighbors divided by the neighborhood volume (N/((4/3)·Pi·R³)). Please also read paragraph [0107]).
Regarding claim 5, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 4, THUDOR further teaches wherein the circuitry (Fig. 12. Paragraph [0190-0196]-THUDOR discloses the device 12 comprises following elements that are linked together by a data and address bus 121: a microprocessor 122 (or CPU), which is, for example, a DSP (or Digital Signal Processor); a ROM (or Read Only Memory) 123; a RAM (or Random-Access Memory) 124; a storage interface 125; an I/O interface 126 for reception of data to transmit, from an application; and a power supply, e.g. a battery) is further configured to:
determine, from the 3D points of the reference point cloud (Fig. 3. Paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. The point cloud corresponds to a large collection of points representing the object, e.g. the external surface or the external shape of the object. In paragraphs [0094-0096]-THUDOR discloses the point cloud may be obtained of different ways, e.g.: from a capture of a real object shot by a rig of cameras, as the camera arrays of FIG. 2, optionally complemented by depth active sensing device; from a capture of a virtual/synthetic object shot by a rig of virtual cameras in a modelling tool; from a mix of both real and virtual objects), a number of points in a neighborhood of each 3D point of the reference point cloud, wherein the number of points in the neighborhood of the each 3D point of the reference point cloud is determined based on a location of a corresponding 3D point of the reference point cloud, the reference point cloud, and the first radius (Fig. 3. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. Density may be determined by counting for each element (or for each element of a part of the elements), e.g. point or mesh element, of the 3D representation the number of neighbors N, for example the number of neighbors in a sphere of radius R centered on said each element or in a cube centered on said each element); and determine a local density at the each 3D point of the reference point cloud, based on the spherical volume and the number of points in the neighborhood of the corresponding 3D point, wherein the local density map is further generated based on the local density at the each 3D point of the reference point cloud (Fig. 3. Paragraph [0106]-THUDOR discloses the density may be expressed with the number of neighbors N, as a surface density equal to the number of neighbors divided by the neighborhood surface (i.e. N/(Pi·R²)) or as a volume density equal to the number of neighbors divided by the neighborhood volume (N/((4/3)·Pi·R³)). Further in paragraph [0107]-THUDOR discloses the density may be estimated by determining the distance to the nearest neighbor, for each element or for each element of a part of the elements of the 3D representation. This distance is considered as being equivalent to the above spherical neighborhood radius R (and N=1). Please also read paragraph [0117]).
Regarding claim 6, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1, THUDOR further teaches wherein the circuitry is further configured to:
determine a second bounding box (Fig. 9. Paragraph [0111]-THUDOR discloses each 2D parameterization is associated with a 3D part of the representation of the object, each 3D part corresponding to a volume comprising one or more points of the point cloud. Further in paragraph [0117]-THUDOR discloses the 3D space 83 (e.g. a parallelepiped corresponding to a box bounding the point cloud) occupied by the point cloud is partitioned. Each 3D part may have the form of a cube or of a rectangle parallelepiped. Please also read paragraph [0100 and 0139]) for the test point cloud (Fig. 9. Paragraph [0184]-THUDOR discloses the decoded point cloud 903 may then be further processed for reconstructing the 3D representation of the scene from the decoded pictures that comprise the attributes (depth and texture), from the decoded density information, from the decoded parameters representative of the 2D parameterizations and from the decoded mapping information for the mapping between the 2D parameterizations and the depth and texture maps comprised in the decoded pictures. Points of the point cloud are obtained by de-projecting the pixels of the depth and texture maps according to the inverse 2D parameterizations. The geometry of the point cloud (i.e. coordinates of the points or distance from a point of view associated with the 2D parameterization) is obtained by de-projecting the depth maps and the texture associated with the points of the point cloud is obtained from the texture maps. The points obtained from the de-projection of the depth and texture maps are called reconstructed points));
determine a third number of the 3D points of the test point cloud (Fig. 9. Paragraph [0184]-THUDOR discloses the decoded point cloud 903 may then be further processed for reconstructing the 3D representation of the scene from the decoded pictures that comprise the attributes (depth and texture), from the decoded density information. The points obtained from the de-projection of the depth and texture maps are called reconstructed points. In paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. In paragraph [0186]-THUDOR discloses additional points may be generated between pairs of reconstructed points obtained from the decoded bitstream. The number of generated additional points may be determined according to a determined target density level. The target density level is set equal to the average density of the parts of the reconstructed point cloud having a density greater than said determined level. Further in paragraph [0188]-THUDOR discloses an up-sampling process is applied to the parts of the point cloud identified, from the decoded density information, as having a points density less than the determined density level);
compute a second radius (Fig. 3. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. The density may be expressed with the number of neighbors N, as a surface density equal to the number of neighbors divided by the neighborhood surface (i.e. N/(Pi·R²)) or as a volume density equal to the number of neighbors divided by the neighborhood volume (N/((4/3)·Pi·R³)). In paragraph [0107]-THUDOR discloses density may also be estimated by the distance to the nearest neighbor, for each element or for each element of a part of the elements of the 3D representation (wherein distance is equivalent to the above spherical neighborhood radius R (and N=1)). Further in paragraph [0117]-THUDOR discloses to obtain the 3D parts, the point cloud may be partitioned. The 3D space 81 (e.g. a half-sphere) occupied by the point cloud is partitioned according to spherical coordinates (r, θ, φ), i.e. according to a distance ‘r’ corresponding to the radius of the half-sphere. Please also read paragraph [0093, 0104, 0149, 0169, and 0184] (wherein multiple point clouds of a scene/object are acquired and partitioned, and the point clouds are encoded, decoded and reconstructed)) to sample the 3D points (Fig. 9. Paragraph [0111]-THUDOR discloses a 2D parameterization associated with a given 3D part of the point cloud corresponds to a browsing in 2 dimensions of the given 3D part of the point cloud allowing to sample the given 3D part, i.e. a 2D representation of the content (i.e. the point(s). Further in paragraph [0117]-THUDOR discloses the 3D space 83 (e.g. a parallelepiped corresponding to a box bounding the point cloud) occupied by the point cloud is partitioned. Each 3D part may have the form of a cube or of a rectangle parallelepiped. Please also read paragraph [0100 and 0139]) of the test point cloud (Fig. 9. Paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. The point cloud corresponds to a large collection of points representing the object, e.g. the external surface or the external shape of the object. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. In paragraph [0149]-THUDOR discloses the point cloud 901 is encoded into encoded data under the form of a bitstream 902 via an encoding process 91 implemented in a module M91. The bitstream is transmitted to a module M92 that implements a decoding process 92 to decode the encoded data to obtain a decoded point cloud 903), wherein the second radius is computed based on the second bounding box (Fig. 9. Paragraph [0111]-THUDOR discloses each 2D parameterization is associated with a 3D part of the representation of the object, each 3D part corresponding to a volume comprising one or more points of the point cloud. Further in paragraph [0117]-THUDOR discloses the 3D space 83 (e.g. a parallelepiped corresponding to a box bounding the point cloud) occupied by the point cloud is partitioned. Each 3D part may have the form of a cube or of a rectangle parallelepiped. Please also read paragraph [0100 and 0139]) and the second number of the 3D points of the test point cloud (Fig. 9. 
Paragraph [0184]-THUDOR discloses the decoded point cloud 903 may then be further processed for reconstructing the 3D representation of the scene from the decoded pictures that comprise the attributes (depth and texture), from the decoded density information, from the decoded parameters representative of the 2D parameterizations and from the decoded mapping information for the mapping between the 2D parameterizations and the depth and texture maps comprised in the decoded pictures. The points obtained from the de-projection of the depth and texture maps are called reconstructed points. Therefore, it would have been obvious to a person of ordinary skill in the art to compute the radius again after decoding to further sample the points of the test cloud given THUDOR performs a quality/density evaluation on the reconstructed point cloud and computes various radii to sample points and assess the density of 3D parts. This would further improve the point cloud reconstruction in THUDOR as well as the ability to compare reference and test clouds);
compute a spherical volume based on the second radius (Fig. 8A-D. Paragraph [0117]-THUDOR discloses in FIG. 8A, the 3D space 81 (e.g. a half-sphere) occupied by the point cloud is partitioned according to spherical coordinates (r, θ, φ), i.e. according to a distance ‘r’ corresponding to the radius of the half-sphere and to the angles ‘θ’ and ‘φ’, each dimension ‘r’, ‘θ’ and ‘φ’ being partitioned evenly. The size of each 3D part is determined to uniformly distribute the points of the point cloud into the 3D parts, the size of the 3D points depending from the local density of the points in the different areas of the space occupied by the point cloud. Each 3D part may have the form of a cube or of a rectangle parallelepiped. Each 3D part may have the same size, or the 3D parts may be of different size, for example to uniformly distribute the points into all 3D parts. Please also read paragraph [0106-0107]).
Regarding claim 7, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 6, THUDOR further teaches wherein the circuitry is further configured to:
determine a local density at the each location in the test point cloud (Fig. 9. Paragraph [0184]-THUDOR discloses the decoded point cloud 903 may then be further processed for reconstructing the 3D representation of the scene from the decoded pictures that comprise the attributes (depth and texture), from the decoded density information, from the decoded parameters representative of the 2D parameterizations and from the decoded mapping information for the mapping between the 2D parameterizations and the depth and texture maps comprised in the decoded pictures. In paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. Please also read paragraph [0188]), based on the spherical volume and the number of points in the neighborhood of the corresponding location of the 3D point of the reference point cloud (Fig. 9. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. The density may be determined by counting for each element (or for each element of a part of the elements), e.g. point or mesh element, of the 3D representation the number of neighbors N, for example the number of neighbors in a sphere of radius R centered on said each element or in a cube centered on said each element. The density may be expressed with the number of neighbors N, as a surface density equal to the number of neighbors divided by the neighborhood surface (i.e. N/(Pi.Math.R.sup.2)) or as a volume density equal to the number of neighbors divided by the neighborhood volume (N/(4/3.Math.Pi.Math.R.sup.3). Please also read paragraph [0100, 0104, 0109, and 0119]), wherein the second local density map is further generated based on the local density at the each location in the test point cloud (Fig. 9. Paragraph [0186]-THUDOR discloses additional points may be generated between pairs of reconstructed points obtained from the decoded bitstream. The number of generated additional points may be determined according to a determined target density level. The target density level is set equal to the average density of the parts of the reconstructed point cloud having a density greater than said determined level. Further in paragraph [0188]-THUDOR discloses an up-sampling process is applied to the parts of the point cloud identified, from the decoded density information, as having a points density less than the determined density level).
THUDOR fails to explicitly teach determine, from the 3D points of the test point cloud, a number of points in a neighborhood of each location in the test point cloud, wherein the each location in the test point cloud corresponds to a corresponding location of a 3D point of the reference point cloud, and the number of points in the neighborhood of the each location in the test point cloud is determined based on the corresponding location of the 3D point of the reference point cloud, the test point cloud, and the second radius; and
However, BHOWMICK explicitly teaches determine, from the 3D points of the test point cloud, a number of points in a neighborhood of each location in the test point cloud, wherein the each location in the test point cloud corresponds to a corresponding location of a 3D point of the reference point cloud (Fig. 3. Paragraph [0044]-BHOWMICK discloses upon successful registration, at step 306, the method 300 includes allowing the subsampling module 216 to equalize point density of the registered reference point-cloud and the registered template point-cloud. In paragraph [0045]-BHOWMICK discloses the reference and the template point-clouds may have varying point densities. The higher density point-cloud (first point-cloud) among the reference point-cloud and the template point-cloud is subsampled and the lower density point-cloud (second point-cloud) is retained with the original point density), and number of points in the neighborhood of the each location in the test point cloud (Fig. 3. Paragraph [0046]-BHOWMICK discloses the MLS approximation comprises estimating a local reference surface for each point of the processed reference point-cloud and a local template surface for each point of the processed template point-cloud. Estimating the local reference surface and the local template surface comprises identifying local neighborhood comprising a plurality of neighbor points for each point of the processed reference point-cloud and each point of the processed template point-cloud. Further, identifying a local reference planar surface represented by the corresponding plurality of neighbor points and a local template planar surface represented by the corresponding plurality of neighbor points) is determined based on the corresponding location of the 3D point of the reference point cloud, the test point cloud (Fig. 3. Paragraph [0046]-BHOWMICK discloses the MLS approximation comprises projecting each point of the processed reference point-cloud on the corresponding local reference planar surface and each point of the processed template point-cloud on the corresponding local template planar surface. Each projected point from reference point-cloud is considered as origin of a local reference coordinate system and each projected point from template point-cloud is considered as origin of a local template coordinate system. Please also read paragraph [0045]), and the second radius (Fig. 3. Paragraph [0045]-BHOWMICK discloses the b parameter for all the vertices of the template is set to 0, indicative of non-inclusion of the parameter in the subsampled point-cloud (first point-cloud post subsampling process). For every vertex (point) of the reference, the closest vertex (closest point) of the template is selected from the kd-tree and a subsampling distance is determined between them. If the subsampling distance between the 3D coordinates of the two vertices is either below the predefined subsampling threshold t1 or above the predefined subsampling threshold t2, wherein t2 is greater than t1, then the selected vertex of the template (first point-cloud) is included in the sub-sampled point-cloud and b=1 is set. For example, t1=0.01 meters, t2=0.09 meters while the neighborhood radius is set to 0.09 meters. Please also read paragraph [0046-0047 and 0060]); and
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of BHOWMICK of having determine, from the 3D points of the test point cloud, a number of points in a neighborhood of each location in the test point cloud, wherein the each location in the test point cloud corresponds to a corresponding location of a 3D point of the reference point cloud, and the number of points in the neighborhood of the each location in the test point cloud is determined based on the corresponding location of the 3D point of the reference point cloud, the test point cloud, and the second radius.
In the resulting combination, THUDOR’s electronic device would determine, from the 3D points of the test point cloud, a number of points in a neighborhood of each location in the test point cloud, wherein the each location in the test point cloud corresponds to a corresponding location of a 3D point of the reference point cloud, and the number of points in the neighborhood of the each location in the test point cloud is determined based on the corresponding location of the 3D point of the reference point cloud, the test point cloud, and the second radius.
The motivation behind the modification would have been to obtain an electronic device that improves the speed, efficiency and accuracy of point cloud reconstruction, since both THUDOR and BHOWMICK concern point cloud reconstruction. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while BHOWMICK’s methods and systems improve point cloud reconstruction and change detection. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098] and BHOWMICK et al. (US 20190080503 A1), Abstract and Paragraph [0003-0004].
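For illustration only, the neighborhood-counting operation mapped above (counting, for each location taken from the reference point cloud, the test-cloud points that fall within the second radius) can be sketched as follows. This is a minimal sketch under assumed inputs; the function name, array layout, and NumPy-based brute-force search are assumptions of the example and are not taken from the claims or from the cited references.

```python
import numpy as np

def neighborhood_counts(test_points, reference_points, second_radius):
    """For each reference-cloud location, count the test-cloud points that lie
    within `second_radius` of that location (brute-force, for illustration).
    `test_points` and `reference_points` are (N, 3) and (M, 3) float arrays."""
    counts = np.empty(len(reference_points), dtype=int)
    for i, center in enumerate(reference_points):
        squared_dist = np.sum((test_points - center) ** 2, axis=1)
        counts[i] = int(np.count_nonzero(squared_dist <= second_radius ** 2))
    return counts
```

For large clouds, a spatial index such as a k-d tree (of the kind described in the BHOWMICK citation) would typically replace the brute-force distance search.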
Regarding claim 16, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1, THUDOR further teaches wherein the circuitry is further configured to transmit the encoded point cloud data as a coded bitstream along with the supplementary information to the PCC decoder (Fig. 14. Paragraph [0223]-THUDOR discloses the network is a broadcast network adapted to broadcast encoded 3D representation (e.g. point cloud(s) or mesh) from device 131 to decoding devices including the device 132. Further in paragraph [0225]-THUDOR discloses FIG. 14 shows an example of an embodiment of the syntax of such a signal when the data are transmitted over a packet-based transmission protocol (wherein the stream has a structure that may comprise a header part 141, which contains a payload of syntax elements and metadata about syntax elements). The fourth syntax element 145 is for example relative to the information relative to the density of at least a part of the 3D representation of the 3D scene. Further in paragraph [0232]-THUDOR discloses in a third operation 153, a first information representative of point density of the points comprised in one or more of the parts of the 3D representation is generated (wherein the first information associated with a given part of the 3D representation may comprise information indicating the point density, point density threshold, total density, range of densities, etc.). Please also read paragraph [0239-0242]).
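For illustration only, the kind of signaling mapped above for claim 16 (a coded bitstream whose header carries density-related supplementary information alongside the payload) could be laid out as in the following sketch. The field names and layout are hypothetical and are not the syntax of THUDOR or of any point cloud compression standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SupplementaryInfo:
    """Hypothetical density-related metadata carried with the coded bitstream."""
    low_density_region_ids: List[int] = field(default_factory=list)
    density_threshold: float = 0.0

@dataclass
class CodedPointCloudStream:
    """Hypothetical stream layout: a header with metadata plus the coded payload."""
    header_supplementary: SupplementaryInfo
    payload: bytes

# Example usage with placeholder values.
stream = CodedPointCloudStream(
    header_supplementary=SupplementaryInfo(low_density_region_ids=[3, 7],
                                           density_threshold=0.5),
    payload=b"encoded point cloud data",
)
```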
Regarding claim 17, THUDOR explicitly teaches a method (Fig. 9-11. Paragraph [0148]-THUDOR discloses FIG. 9 shows schematically a diagram of an encoding/decoding scheme of a 3D scene, e.g. a 3D representation of the scene such as a point cloud. In paragraph [0154]-THUDOR discloses FIG. 10 shows operations for encoding the 3D scene or its 3D representation, e.g. the point cloud 901. In paragraph [0170]-THUDOR discloses FIG. 11 shows operations for decoding the encoded version of the point cloud 901 from the bitstream 902), comprising:
in an electronic device (Fig. 12, #12 called a device. Paragraph [0189]-THUDOR discloses FIG. 12 shows an example architecture of a device 12 which may be configured to implement a method described in relation with FIGS. 10, 11, 15 and/or 16. The device 12 may be configured to be an encoder 91, 131 or a decoder 92, 132 of FIGS. 9 and 13):
acquiring a reference point cloud of an object (Fig. 3. Paragraph [0092]-THUDOR discloses FIG. 3 shows two different representations of an object, or part of it, of the scene represented with the volumetric content 10. In paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. In paragraph [0149]-THUDOR discloses the point cloud 901 is encoded into encoded data under the form of a bitstream 902 via an encoding process 91 implemented in a module M91. The bitstream is transmitted to a module M92 that implements a decoding process 92 to decode the encoded data to obtain a decoded point cloud 903. Please also read paragraph [0155, 0168 and 0171]);
determining a first bounding box for the reference point cloud (Fig. 9. Paragraph [0117]-THUDOR discloses to obtain the 3D parts, the point cloud may be partitioned according to different methods. The 3D space 83 (e.g. a parallelepiped corresponding to a box bounding the point cloud) occupied by the point cloud is partitioned. Each 3D part may have the form of a cube or of a rectangle parallelepiped. Please also read paragraph [0100, 0109, 0111 and 0144] (wherein multiple 2D parametrizations are generated for the plurality of partitioned 3D parts));
determining a second number of the three-dimensional (3D) points (Fig. 9. Paragraph [0100]-THUDOR discloses the density corresponds to the number of elements per volume unit, e.g. a number of points per voxel. In paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined (wherein the density may be determined by counting for each element (or for each element of a part of the elements)). In paragraph [0108]-THUDOR discloses the 3D representation is partitioned in a plurality of parts, and the number of elements within each 3D part is calculated. In paragraph [0109]-THUDOR discloses information about the density may be obtained by determining boundaries within the 3D representation. Please also read paragraph [0139] (wherein a first and second partitioning of the same point cloud are generated with associated 2D parameterizations, maps and patch atlases)) of the reference point cloud (Fig. 9. Paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. The point cloud corresponds to a large collection of points representing the object, e.g. the external surface or the external shape of the object. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. The point cloud may be processed in order to compute its surface. Please also read paragraph [0111, 0117, 0144, 0149 and 0184]);
computing a first radius (Fig. 9. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. The density may be determined by counting for each element (or for each element of a part of the elements), e.g. point or mesh element, of the 3D representation the number of neighbors N, for example the number of neighbors in a sphere of radius R centered on said each element or in a cube centered on said each element. In paragraph [0107]-THUDOR discloses the density may be estimated by determining the distance to the nearest neighbor, for each element or for each element of a part of the elements of the 3D representation. This distance is considered as being equivalent to the above spherical neighborhood radius R (and N=1)) based on division of the first number of 3D points of the determined surface of the reference point cloud by the determined second number of the 3D points of the reference point cloud (Fig. 9. Paragraph [0106]-THUDOR discloses the density may be expressed with the number of neighbors N, as a surface density equal to the number of neighbors divided by the neighborhood surface (i.e. N/(Pi.Math.R.sup.2)) or as a volume density equal to the number of neighbors divided by the neighborhood volume (N/(4/3.Math.Pi.Math.R.sup.3)). In paragraph [0117]-THUDOR discloses to obtain the 3D parts, the point cloud may be partitioned. The size of each 3D part is determined to uniformly distribute the points of the point cloud into the 3D parts, the size of the 3D parts depending on the local density of the points in the different areas of the space occupied by the point cloud. Please also read paragraph [0107, 0111 and 0144]);
generating a first local density map (Fig. 11. Paragraph [0119]-THUDOR discloses density information may be associated with each depth map and/or each texture map. The density information may for example take the form of metadata associated with each depth map. The density information may for example be representative of the average elements density of the 3D part associated with each depth map (or texture map). The density information may be representative of a range of density values that represents the range of density values in the considered 3D part. The density information may correspond to a flag associated with each depth map indicating whether the density of the elements comprised in the associated 3D parts is below a determined density level/value (e.g. the flag may be equal to 0 when the density is greater than the determined value and 1 when the density is less than the determined value, or the other way around)) based on the computed first radius (Fig. 3. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. Density may be determined by counting for each element (or for each element of a part of the elements), e.g. point or mesh element, of the 3D representation the number of neighbors N, for example the number of neighbors in a sphere of radius R centered on said each element or in a cube centered on said each element. Please also read paragraph [0107 and 0117]);
encoding the reference point cloud for generating encoded point cloud data (Fig. 9. Paragraph [0148]-THUDOR discloses FIG. 9 shows schematically a diagram of an encoding/decoding scheme of a 3D scene, e.g. a 3D representation of the scene such as a point cloud. In paragraph [0149]-THUDOR discloses the point cloud 901 is encoded into encoded data under the form of a bitstream 902 via an encoding process 91 implemented in a module M91);
decoding the encoded point cloud data for generating a test point cloud (Fig. 9. Paragraph [0170]-THUDOR discloses FIG. 11 shows operations for decoding the encoded version of the point cloud 901 from the bitstream 902. In paragraph [0171]-THUDOR discloses in an operation 111, encoded data of one or more pictures (e.g. pictures of one or more GOPs or of an intra period) of the point cloud is decoded by a decoder DEC2 from a received bitstream 902);
generating a second local density map for 3D points of the test point cloud (Fig. 11. Paragraph [0160]-THUDOR discloses density information associated with the picture 100 is further encoded by the encoder ENC1. In paragraph [0184]-THUDOR discloses the decoded point cloud 903 may then be further processed for reconstructing the 3D representation of the scene from the decoded pictures that comprise the attributes (depth and texture), from the decoded density information. Further in paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. Please also read paragraph [0100, 0104 and 0118-0119]);
generating supplementary information based on the final density map (Fig. 14. Paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. In paragraph [0186]-THUDOR discloses additional points may be generated between pairs of reconstructed points obtained from the decoded bitstream. The number of generated additional points may be determined according to a determined target density level. The target density level is set equal to the average density of the parts of the reconstructed point cloud having a density greater than said determined level. In paragraph [0188]-THUDOR discloses an up-sampling process is applied to the parts of the point cloud identified, from the decoded density information, as having a points density less than the determined density level. Please also read paragraph [0106-0107, 0119, 0160, 0171, 0184]), wherein the supplementary information includes at least one of:
missing points data corresponding to regions of the test point cloud that include geometry reconstruction artifacts (Fig. 16. Paragraph [0244]-THUDOR discloses splat rendering may then be applied to the reconstructed 3D representation (also to the parts that have been up-sampled) to generate/render the scene. Splat rendering is a technique that allows to fill hole between points, that are dimension-less, in a point cloud. It consists in estimating for each point of the point cloud based on its neighborhood an oriented ellipse, i.e. the two demi-axes and the normal of the ellipse), or one or more descriptors for the regions that include the geometry reconstruction artifacts (Fig. 16. Paragraph [0243]-THUDOR discloses the reconstructed 3D scene may be seen from the range of points of view, which may generate some rendering quality issues, especially when watching the 3D scene according to a point of view that enables to see areas of the scene identified as having a low point density (via the first information). To overcome these issues, an up-sampling process may be applied to the areas of the scene having a point density below the determined level/value to increase the number of points. Further in paragraph [0245]-THUDOR discloses the quality of the rendering of the 3D scene is increased by adding a small amount of data (i.e. the first information) to the bitstream. Please also read paragraph [0147]); and
signaling the supplementary information to a Point Cloud Compression (PCC) decoder (Fig. 16. Paragraph [0243]-THUDOR discloses the decoded data and information may further be used to generate/reconstruct a 3D representation of the 3D scene for the rendering and/or displaying of the reconstructed 3D scene. The reconstructed 3D scene may be seen from the range of points of view, which may generate some rendering quality issues, especially when watching the 3D scene according to a point of view that enables to see areas of the scene identified as having a low point density (via the first information). To overcome these issues, an up-sampling process may be applied to the areas of the scene having a point density below the determined level/value to increase the number of points. Please also read paragraph [0225-0232 and 0244-0245]).
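For illustration only, one way to read the radius computation and the first local density map recited in the steps above is sketched below: the radius is taken as the ratio of surface points to total points, and a per-point density is then computed in the spirit of the surface density N/(Pi·R²) quoted from THUDOR. The formula choice, function names, and NumPy usage are assumptions of this sketch, not the claimed method and not THUDOR’s implementation.

```python
import numpy as np

def first_radius(num_surface_points, num_total_points):
    """Illustrative reading of the recited step: a radius derived from the ratio
    of surface points to total points of the reference cloud (assumed formula)."""
    return num_surface_points / num_total_points

def local_density_map(points, radius):
    """Per-point surface density in the spirit of N / (pi * R^2): count the
    neighbors of each point within `radius` and divide by the disc area."""
    densities = np.empty(len(points))
    area = np.pi * radius ** 2
    for i, p in enumerate(points):
        squared_dist = np.sum((points - p) ** 2, axis=1)
        neighbors = int(np.count_nonzero(squared_dist <= radius ** 2)) - 1  # exclude the point itself
        densities[i] = neighbors / area
    return densities
```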
Although THUDOR explicitly teaches determining a surface of the reference point cloud based on the first bounding box (Fig. 3. Paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. The point cloud corresponds to a large collection of points representing the object, e.g. the external surface or the external shape of the object. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. The point cloud may be processed in order to compute its surface. Please also read paragraph [0144, 0155-0160, 0171, 0176 and 0184-0186]), wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud (Fig. 3. Paragraph [0100]-THUDOR discloses the density of the elements (e.g. points or mesh elements) forming the first 3D representation 30 may spatially vary. A volume unit corresponds for example to a voxel or to a cube of determined dimensions (e.g. a cube with edges having each a size equal to 1, 2 or 10 cm for example). The density corresponds to the number of elements per volume unit, e.g. a number of points per voxel. In paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined (wherein the density may be determined by counting for each element (or for each element of a part of the elements)). In paragraph [0108]-THUDOR discloses the 3D representation is partitioned in a plurality of parts (that may correspond to voxels or to elementary surface areas), and the number of elements within each 3D part is calculated (e.g. from the geometry of the scene). In paragraph [0109]-THUDOR discloses information about the density may be obtained by determining boundaries within the 3D representation).
THUDOR fails to explicitly teach determining dimensions of the first bounding box; determining a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud.
However, HUR explicitly teaches determining dimensions of the first bounding box (Fig. 10. Paragraph [0276]-HUR discloses in the point cloud encoding, the point cloud encoder performs a geometry-based point cloud compression (G-PCC) procedure, which includes a series of procedures such as prediction, transformation, quantization, and entropy coding, and the encoded data may be output in the form of a bitstream. In paragraph [0292]-HUR discloses the point cloud decoder (Point Cloud Decoding) performs geometry decompression, attribute decompression, auxiliary data decompression, and/or mesh data decompression. In paragraph [0350]-HUR discloses in the point cloud data encoding process, regions may be automatically partitioned according to the point distribution. In paragraph [0351]-HUR discloses the partitioned region unit may be set as a tile, a slice, and/or a block (a smaller region obtained by partitioning a slice). In paragraph [0354]-HUR discloses each region may include point density value and bounding-box information (location, size). Please also read paragraph [0364 and 0421]);
determining a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud (Fig. 10. Paragraph [0419]-HUR discloses the block partitioner partitions the point cloud data on a block basis. A block means a unit in which a slice is partitioned. A unit in which one slice is partitioned in order to encode/decode the slice in detail may be a block. The space of the point cloud data may be partitioned into block(s) in consideration of the degree of distribution analyzed by the distribution analyzer, and/or may be partitioned into block(s) according to the partitioning policy or the PCC system. In paragraph [0421]-HUR discloses the tile/slice/block partitioner may generate information on each tile/slice/block, and deliver the same in a parameter of a bitstream. There may be signaling information such as the position of the bounding box, the size of the bounding box, the density (number of points/area of the region), and an octree node order value)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR of having a method comprising: in an electronic device: acquiring a reference point cloud of an object; determining a first bounding box for the reference point cloud, with the teachings of HUR of having determining dimensions of the first bounding box; determining a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud.
In the resulting combination, THUDOR’s method would include determining dimensions of the first bounding box and determining a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud.
The motivation behind the modification would have been to obtain a method that improves the speed, decoding and appearance of point cloud reconstruction and transmission as well as the signal-to-noise ratio, since both THUDOR and HUR concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while HUR’s methods and systems improve compression efficiency and the quality of content without the need to encode/decode data. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098] and HUR et al. (US 20210407142 A1), Abstract and Paragraph [0070 and 0149].
THUDOR fails to explicitly teach generating a final density map based on a comparison between the first local density map and the second local density map.
However, BHOWMICK explicitly teaches generating a final density map (Fig. 2. Paragraph [0021]-BHOWMICK discloses the embodiments provide methods and systems for change detection utilizing three dimensional (3D) point-cloud processing. The method includes acquiring and comparing surface geometry of a reference point-cloud defining a reference surface and a template point-cloud defining a template surface at local regions or local surfaces. In paragraph [0044]-BHOWMICK discloses upon successful registration, at step 306, the method 300 includes allowing the subsampling module 216 to equalize point density of the registered reference point-cloud and the registered template point-cloud. In paragraph [0045]-BHOWMICK discloses the densities of the two point-clouds need to be consistent. The reference and the template point-clouds may have varying point densities with N and M vertices respectively. The higher density point-cloud (first point-cloud) among the reference point-cloud and the template point-cloud is subsampled and the lower density point-cloud (second point-cloud) is retained with the original point density) based on a comparison between the first local density map and the second local density map (Fig. 2. Paragraph [0045]-BHOWMICK discloses the b parameter for all the vertices of the template is set to 0, indicative of non-inclusion of the parameter in the subsampled point-cloud (first point-cloud post subsampling process). Then the vertices (represented by the 3D co-ordinates) of the template are set in a kd-tree. For every vertex (point) of the reference, the closest vertex (closest point) of the template is selected from the kd-tree and a subsampling distance is determined between them. If the subsampling distance between the 3D coordinates of the two vertices is either below the predefined subsampling threshold t1 or above the predefined subsampling threshold t2, wherein t2 is greater than t1, then the selected vertex of the template (first point-cloud) is included in the sub-sampled point-cloud and b=1 is set. The vertices of the template in the kd-tree, whose b parameter is set to 1, form the subsampled point-cloud (first point-cloud post subsampling process). A kd-tree is set with the vertices of the reference and compared with vertices of the template whose b is still set to 0. If the distance between the two vertices is greater than a threshold, then the input vertex is also included in the sub-sampled template point-cloud. Then the reference is sub-sampled in the similar manner, wherein the reference point-cloud is now the first point-cloud).
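For illustration only, the kd-tree-based comparison cited from BHOWMICK paragraph [0045] (selecting, for every reference vertex, the closest template vertex and including it in the subsampled cloud when the distance falls below t1 or above t2) can be sketched as below. The use of SciPy’s cKDTree and the function name are assumptions of this sketch; the default threshold values simply mirror the example values quoted in the citation.

```python
import numpy as np
from scipy.spatial import cKDTree

def subsample_template(reference, template, t1=0.01, t2=0.09):
    """Sketch of the cited subsampling: for every reference vertex, find the
    closest template vertex in a kd-tree; include it (b = 1) when the distance
    is below t1 or above t2. `reference` and `template` are (N, 3) arrays."""
    include = np.zeros(len(template), dtype=bool)  # the 'b' parameter, initially 0
    tree = cKDTree(template)
    distances, indices = tree.query(reference)     # closest template vertex per reference vertex
    for d, i in zip(distances, indices):
        if d < t1 or d > t2:
            include[i] = True                      # b = 1: keep in the subsampled cloud
    return template[include]
```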
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR of having a method comprising: in an electronic device: acquiring a reference point cloud of an object; determining a first bounding box for the reference point cloud, with the teachings of BHOWMICK of having generating a final density map based on a comparison between the first local density map and the second local density map.
In the resulting combination, THUDOR’s method would include generating a final density map based on a comparison between the first local density map and the second local density map.
The motivation behind the modification would have been to obtain a method that improves the speed, efficiency and accuracy of point cloud reconstruction, since both THUDOR and BHOWMICK concern point cloud reconstruction. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while BHOWMICK’s methods and systems improve point cloud reconstruction and change detection. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098] and BHOWMICK et al. (US 20190080503 A1), Abstract and Paragraph [0003-0004].
Regarding claim 19, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the method according to claim 17, THUDOR further teaches wherein the geometry reconstruction artifacts correspond to holes in the test point cloud (Fig. 16. Paragraph [0244]-THUDOR discloses splat rendering may then be applied to the reconstructed 3D representation (also to the parts that have been up-sampled) to generate/render the scene. Splat rendering is a technique that allows to fill hole between points, that are dimension-less, in a point cloud. It consists in estimating for each point of the point cloud based on its neighborhood an oriented ellipse, i.e. the two demi-axes and the normal of the ellipse).
Regarding claim 20, THUDOR explicitly teaches a non-transitory computer-readable medium having stored thereon (Fig. 12. Paragraph [0190]-THUDOR discloses the device 12 comprises following elements that are linked together by a data and address bus 121: a microprocessor 122 (or CPU), which is, for example, a DSP (or Digital Signal Processor); a ROM (or Read Only Memory) 123; a RAM (or Random-Access Memory) 124; a storage interface 125; an I/O interface 126 for reception of data to transmit, from an application; and a power supply, e.g. a battery. Please also read paragraph [0061 and 0200-0208]), computer executable instruction, which when executed by a processor, cause the processor to execute operations (Fig. 12. Paragraph [0189]-THUDOR discloses FIG. 12 shows an example architecture of a device 12 which may be configured to implement a method described in relation with FIGS. 10, 11, 15 and/or 16. The device 12 may be configured to be an encoder 91, 131 or a decoder 92, 132 of FIGS. 9 and 13. Further in paragraph [0197]-THUDOR discloses the ROM 123 comprises at least a program and parameters. The ROM 123 may store algorithms and instructions to perform techniques in accordance with present principles. When switched on, the CPU 122 uploads the program in the RAM and executes the corresponding instructions. Please also read paragraph [0211-0220]), the operations comprising:
acquiring a reference point cloud of an object (Fig. 3. Paragraph [0092]-THUDOR discloses FIG. 3 shows two different representations of an object, or part of it, of the scene represented with the volumetric content 10. In paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. In paragraph [0149]-THUDOR discloses the point cloud 901 is encoded into encoded data under the form of a bitstream 902 via an encoding process 91 implemented in a module M91. The bitstream is transmitted to a module M92 that implements a decoding process 92 to decode the encoded data to obtain a decoded point cloud 903. Please also read paragraph [0155, 0168 and 0171]);
determining a first bounding box for the reference point cloud (Fig. 7. Paragraph [0117]-THUDOR discloses to obtain the 3D parts, the point cloud may be partitioned according to different methods. The 3D space 83 (e.g. a parallelepiped corresponding to a box bounding the point cloud) occupied by the point cloud is partitioned. Each 3D part may have the form of a cube or of a rectangle parallelepiped. Please also read paragraph [0100 and 0144] (wherein multiple 2D parametrizations are generated for the plurality of partitioned 3D parts));
determining a second number of the three-dimensional (3D) points (Fig. 3. Paragraph [0100]-THUDOR discloses the density of the elements (e.g. points or mesh elements) forming the first 3D representation 30 may spatially vary. A volume unit corresponds for example to a voxel or to a cube of determined dimensions (e.g. a cube with edges having each a size equal to 1, 2 or 10 cm for example). The density corresponds to the number of elements per volume unit, e.g. a number of points per voxel. In paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined (wherein the density may be determined by counting for each element (or for each element of a part of the elements)). In paragraph [0108]-THUDOR discloses the 3D representation is partitioned in a plurality of parts (that may correspond to voxels or to elementary surface areas), and the number of elements within each 3D part is calculated (e.g. from the geometry of the scene). In paragraph [0109]-THUDOR discloses information about the density may be obtained by determining boundaries within the 3D representation) of the reference point cloud (Fig. 3. Paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. The point cloud corresponds to a large collection of points representing the object, e.g. the external surface or the external shape of the object. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. The point cloud may be processed in order to compute its surface. In paragraph [0139]-THUDOR discloses FIG. 6 shows a first partitioning 61 of the point cloud corresponding for example to the partitioning 51 of FIG. 5 and a second partitioning 62 of the same point cloud. Please also read paragraph [0144, 0155-0160, 0171, 0176 and 0184-0186]);
computing a first radius based on division of the first number of 3D points of the determined surface of the reference point cloud by the determined second number of the 3D points of the reference point cloud (Fig. 9. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. The density may be determined by counting for each element (or for each element of a part of the elements), e.g. point or mesh element, of the 3D representation the number of neighbors N, for example the number of neighbors in a sphere of radius R centered on said each element or in a cube centered on said each element. The density may be expressed with the number of neighbors N, as a surface density equal to the number of neighbors divided by the neighborhood surface (i.e. N/(Pi.Math.R.sup.2)) or as a volume density equal to the number of neighbors divided by the neighborhood volume (N/(4/3.Math.Pi.Math.R.sup.3)). In paragraph [0117]-THUDOR discloses to obtain the 3D parts, the point cloud may be partitioned. The 3D space 81 (e.g. a half-sphere) occupied by the point cloud is partitioned according to spherical coordinates (r), i.e. according to a distance ‘r’ corresponding to the radius of the half-sphere. The size of each 3D part is determined to uniformly distribute the points of the point cloud into the 3D parts, the size of the 3D points depending from the local density of the points in the different areas of the space occupied by the point cloud. Please also read paragraph [0093, 0104, 0107]);
generating a first local density map (Fig. 9. Paragraph [0119]-THUDOR discloses density information may be associated with each depth map and/or each texture map. The density information may for example take the form of metadata associated with each depth map. The density information may for example be representative of the average elements density of the 3D part associated with each depth map (or texture map). The density information may be representative of a range of density values that represents the range of density values in the considered 3D part. The density information may correspond to a flag associated with each depth map indicating whether the density of the elements comprised in the associated 3D parts is below a determined density level/value) based on the computed first radius (Fig. 9. Paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined. Density may be determined by counting for each element (or for each element of a part of the elements), e.g. point or mesh element, of the 3D representation the number of neighbors N, for example the number of neighbors in a sphere of radius R centered on said each element or in a cube centered on said each element. Please also read paragraph [0107, 0117 and 0136]);
encoding the reference point cloud for generating encoded point cloud data (Fig. 9. Paragraph [0148]-THUDOR discloses FIG. 9 shows schematically a diagram of an encoding/decoding scheme of a 3D scene, e.g. a 3D representation of the scene such as a point cloud. In paragraph [0149]-THUDOR discloses the point cloud 901 is encoded into encoded data under the form of a bitstream 902 via an encoding process 91 implemented in a module M91);
decoding the encoded point cloud data for generating a test point cloud (Fig. 9. Paragraph [0170]-THUDOR discloses FIG. 11 shows operations for decoding the encoded version of the point cloud 901 from the bitstream 902. In paragraph [0171]-THUDOR discloses in an operation 111, encoded data of one or more pictures (e.g. pictures of one or more GOPs or of an intra period) of the point cloud is decoded by a decoder DEC2 from a received bitstream 902);
generating a second local density map for 3D points of the test point cloud (Fig. 9. Paragraph [0184]-THUDOR discloses the decoded point cloud 903 may then be further processed for reconstructing the 3D representation of the scene from the decoded pictures that comprise the attributes (depth and texture), from the decoded density information, from the decoded parameters representative of the 2D parameterizations and from the decoded mapping information for the mapping between the 2D parameterizations and the depth and texture maps comprised in the decoded pictures. In paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. In paragraph [0188]-THUDOR discloses an up-sampling process is applied to the parts of the point cloud identified, from the decoded density information, as having a points density less than the determined density level. Please also read paragraph [0106-0107, 0109, 0119]);
generating supplementary information based on the final density map (Fig. 14. Paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. In paragraph [0186]-THUDOR discloses additional points may be generated between pairs of reconstructed points obtained from the decoded bitstream. The additional points may be generated by computing their associated depth and texture from the depth and texture associated with the reconstructed points. The number of generated additional points may be determined according to a determined target density level. The target density level is set equal to the average density of the parts of the reconstructed point cloud having a density greater than said determined level. In paragraph [0188]-THUDOR discloses an up-sampling process is applied to the parts of the point cloud identified, from the decoded density information, as having a points density less than the determined density level. Please also read paragraph [0106-0107, 0119, 0160, 0171, 0184]), wherein the supplementary information includes at least one of:
missing points data corresponding to regions of the test point cloud that include geometry reconstruction artifacts (Fig. 16. Paragraph [0244]-THUDOR discloses splat rendering may then be applied to the reconstructed 3D representation (also to the parts that have been up-sampled) to generate/render the scene. Splat rendering is a technique that allows to fill hole between points, that are dimension-less, in a point cloud. It consists in estimating for each point of the point cloud based on its neighborhood an oriented ellipse, i.e. the two demi-axes and the normal of the ellipse), or one or more descriptors for the regions that include the geometry reconstruction artifacts (Fig. 16. Paragraph [0243]-THUDOR discloses the reconstructed 3D scene may be seen from the range of points of view, which may generate some rendering quality issues, especially when watching the 3D scene according to a point of view that enables to see areas of the scene identified as having a low point density (via the first information). To overcome these issues, an up-sampling process may be applied to the areas of the scene having a point density below the determined level/value to increase the number of points. Further in paragraph [0245]-THUDOR discloses the quality of the rendering of the 3D scene is increased by adding a small amount of data (i.e. the first information) to the bitstream. Please also read paragraph [0147]); and
signaling the supplementary information to a Point Cloud Compression (PCC) decoder (Fig. 16. Paragraph [0243]-THUDOR discloses the decoded data and information may further be used to generate/reconstruct a 3D representation of the 3D scene for the rendering and/or displaying of the reconstructed 3D scene. The reconstructed 3D scene may be seen from the range of points of view, which may generate some rendering quality issues, especially when watching the 3D scene according to a point of view that enables to see areas of the scene identified as having a low point density (via the first information). To overcome these issues, an up-sampling process may be applied to the areas of the scene having a point density below the determined level/value to increase the number of points. Please also read paragraph [0225-0232 and 0244-0245]).
Although THUDOR explicitly teaches determining a surface of the reference point cloud based on the first bounding box (Fig. 3. Paragraph [0093]-THUDOR discloses a first 3D representation 30 of the part of the object is a point cloud. The point cloud corresponds to a large collection of points representing the object, e.g. the external surface or the external shape of the object. In paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. The point cloud may be processed in order to compute its surface. Please also read paragraph [0144, 0155-0160, 0171, 0176 and 0184-0186]), wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud (Fig. 3. Paragraph [0100]-THUDOR discloses the density of the elements (e.g. points or mesh elements) forming the first 3D representation 30 may spatially vary. A volume unit corresponds for example to a voxel or to a cube of determined dimensions (e.g. a cube with edges having each a size equal to 1, 2 or 10 cm for example). The density corresponds to the number of elements per volume unit, e.g. a number of points per voxel. In paragraph [0106]-THUDOR discloses information regarding the density of the 3D representation of the 3D scene may further be obtained or determined (wherein the density may be determined by counting for each element (or for each element of a part of the elements)). In paragraph [0108]-THUDOR discloses the 3D representation is partitioned in a plurality of parts (that may correspond to voxels or to elementary surface areas), and the number of elements within each 3D part is calculated (e.g. from the geometry of the scene). In paragraph [0109]-THUDOR discloses information about the density may be obtained by determining boundaries within the 3D representation).
THUDOR fails to explicitly teach determining dimensions of the first bounding box; determining a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud.
However, HUR explicitly teaches determining dimensions of the first bounding box (Fig. 10. Paragraph [0276]-HUR discloses in the point cloud encoding, the point cloud encoder performs a geometry-based point cloud compression (G-PCC) procedure, which includes a series of procedures such as prediction, transformation, quantization, and entropy coding, and the encoded data may be output in the form of a bitstream. In paragraph [0292]-HUR discloses the point cloud decoder (Point Cloud Decoding) performs geometry decompression, attribute decompression, auxiliary data decompression, and/or mesh data decompression. In paragraph [0350]-HUR discloses in the point cloud data encoding process, regions may be automatically partitioned according to the point distribution. In paragraph [0351]-HUR discloses the partitioned region unit may be set as a tile, a slice, and/or a block (a smaller region obtained by partitioning a slice). In paragraph [0354]-HUR discloses each region may include point density value and bounding-box information (location, size). Please also read paragraph [0364 and 0421]);
determining a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud (Fig. 10. Paragraph [0419]-HUR discloses the block partitioner partitions the point cloud data on a block basis. A block means a unit in which a slice is partitioned. A unit in which one slice is partitioned in order to encode/decode the slice in detail may be a block. The space of the point cloud data may be partitioned into block(s) in consideration of the degree of distribution analyzed by the distribution analyzer, and/or may be partitioned into block(s) according to the partitioning policy or the PCC system. In paragraph [0421]-HUR discloses the tile/slice/block partitioner may generate information on each tile/slice/block, and deliver the same in a parameter of a bitstream. There may be signaling information such as the position of the bounding box, the size of the bounding box, the density (number of points/area of the region), and an octree node order value)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR of having a non-transitory computer-readable medium having stored thereon, computer executable instruction, which when executed by a processor, cause the processor to execute operations, the operations comprising: acquiring a reference point cloud of an object, with the teachings of HUR of having determining dimensions of the first bounding box; determining a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud.
In the resulting combination, THUDOR’s non-transitory computer-readable medium would include operations for determining dimensions of the first bounding box and determining a surface of the reference point cloud based on the dimensions of the first bounding box, wherein the surface of the reference point cloud includes a first number of three-dimensional (3D) points of the reference point cloud.
The motivation behind the modification would have been to obtain a non-transitory computer-readable medium that improves the speed, decoding and appearance of point cloud reconstruction and transmission as well as the signal-to-noise ratio, since both THUDOR and HUR concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while HUR’s methods and systems improve compression efficiency and the quality of content without the need to encode/decode data. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098] and HUR et al. (US 20210407142 A1), Abstract and Paragraph [0070 and 0149].
THUDOR fails to explicitly teach generating a final density map based on a comparison between the first local density map and the second local density map.
However, BHOWMICK explicitly teaches generating a final density map (Fig. 2. Paragraph [0021]-BHOWMICK discloses the embodiments provide methods and systems for change detection utilizing three dimensional (3D) point-cloud processing. The method includes acquiring and comparing surface geometry of a reference point-cloud defining a reference surface and a template point-cloud defining a template surface at local regions or local surfaces. In paragraph [0044]-BHOWMICK discloses upon successful registration, at step 306, the method 300 includes allowing the subsampling module 216 to equalize point density of the registered reference point-cloud and the registered template point-cloud. In paragraph [0045]-BHOWMICK discloses the densities of the two point-clouds need to be consistent. The reference and the template point-clouds may have varying point densities with N and M vertices respectively. The higher density point-cloud (first point-cloud) among the reference point-cloud and the template point-cloud is subsampled and the lower density point-cloud (second point-cloud) is retained with the original point density) based on a comparison between the first local density map and the second local density map (Fig. 2. Paragraph [0045]-BHOWMICK discloses the b parameter for all the vertices of the template is set to 0, indicative of non-inclusion of the parameter in the subsampled point-cloud (first point-cloud post subsampling process). Then the vertices (represented by the 3D co-ordinates) of the template are set in a kd-tree. For every vertex (point) of the reference, the closest vertex (closest point) of the template is selected from the kd-tree and a subsampling distance is determined between them. If the subsampling distance between the 3D coordinates of the two vertices is either below the predefined subsampling threshold t1 or above the predefined subsampling threshold t2, wherein t2 is greater than t1, then the selected vertex of the template (first point-cloud) is included in the sub-sampled point-cloud and b=1 is set. The vertices of the template in the kd-tree, whose b parameter is set to 1, form the subsampled point-cloud (first point-cloud post subsampling process). A kd-tree is set with the vertices of the reference and compared with vertices of the template whose b is still set to 0. If the distance between the two vertices is greater than a threshold, then the input vertex is also included in the sub-sampled template point-cloud. Then the reference is sub-sampled in the similar manner, wherein the reference point-cloud is now the first point-cloud).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR of having a non-transitory computer-readable medium having stored thereon, computer executable instruction, which when executed by a processor, cause the processor to execute operations, the operations comprising: acquiring a reference point cloud of an object, with the teachings of BHOWMICK of having generating a final density map based on a comparison between the first local density map and the second local density map.
In the resulting combination, THUDOR’s non-transitory computer-readable medium would include operations for generating a final density map based on a comparison between the first local density map and the second local density map.
The motivation behind the modification would have been to obtain a non-transitory computer-readable medium that improves the speed, efficiency and accuracy of point cloud reconstruction, since both THUDOR and BHOWMICK concern point cloud reconstruction. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while BHOWMICK’s methods and systems improve point cloud reconstruction and change detection. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098] and BHOWMICK et al. (US 20190080503 A1), Abstract and Paragraph [0003-0004].
Regarding claim 21, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 8, THUDOR further teaches wherein the final density map (Fig. 14. Paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. In paragraph [0186]-THUDOR discloses additional points may be generated between pairs of reconstructed points obtained from the decoded bitstream. The additional points may be generated by computing their associated depth and texture from the depth and texture associated with the reconstructed points. The number of generated additional points may be determined according to a determined target density level. The target density level is set equal to the average density of the parts of the reconstructed point cloud having a density greater than said determined level. In paragraph [0188]-THUDOR discloses an up-sampling process is applied to the parts of the point cloud identified, from the decoded density information, as having a points density less than the determined density level. Please also read paragraph [0106-0107, 0119, 0160, 0171, 0184]) further comprises a third region, the first region corresponds to a first ellipsoid, the second region corresponds to a second ellipsoid, and the third region corresponds to a third ellipsoid (Fig. 16. Paragraph [0104]-THUDOR discloses a second representation 31 of the part of the object may be obtained from the point cloud (or the 3D mesh) representation 30, the second representation corresponding to a surface representation. The surface element associated with a given point of the point cloud is obtained by applying splat rendering to this given point. The surface of the object (also called implicit surface or external surface of the object) is obtained by blending all the splats (e.g., ellipsoids) associated with the points of the point cloud. In paragraph [0244]-THUDOR discloses splat rendering may then be applied to the reconstructed 3D representation (also to the parts that have been up-sampled) to generate/render the scene. It consists in estimating for each point of the point cloud based on its neighborhood an oriented ellipse, i.e. the two demi-axes and the normal of the ellipse).
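For illustration only, the splat estimation cited above from THUDOR (estimating, for each point and based on its neighborhood, an oriented ellipse, i.e. two demi-axes and a normal) can be approximated with a principal-component analysis of the neighborhood, as in the sketch below. The PCA formulation and the scaling of the demi-axes are assumptions of this sketch rather than THUDOR’s actual procedure.

```python
import numpy as np

def estimate_splat(points, index, radius):
    """Fit an oriented ellipse to the neighborhood of points[index]: the two
    largest principal directions give the demi-axes, the smallest gives the normal."""
    center = points[index]
    squared_dist = np.sum((points - center) ** 2, axis=1)
    neighbors = points[squared_dist <= radius ** 2]
    if len(neighbors) < 3:
        return None  # not enough neighbors to define a local plane
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)               # eigenvalues in ascending order
    normal = eigvecs[:, 0]                               # smallest variance -> normal direction
    demi_axes = eigvecs[:, 1:3] * np.sqrt(eigvals[1:3])  # in-plane directions scaled by spread
    return demi_axes, normal
```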
Claims 2 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over THUDOR et al. (US 20200380765 A1), hereinafter referenced as THUDOR in view of HUR et al. (US 20210407142 A1), hereinafter referenced as HUR and in further view of BHOWMICK et al. (US 20190080503 A1), hereinafter referenced as BHOWMICK and in further view of GAO et al. (US 20220180567 A1), hereinafter referenced as GAO.
Regarding claim 2, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1, THUDOR in view of HUR are silent on wherein the reference point cloud is an uncompressed point cloud of the object.
However, GAO explicitly teaches wherein the reference point cloud is an uncompressed point cloud of the object (Fig. 2. Paragraph [0054]-GAO discloses the streaming system (200) may include a capture subsystem (213). The capture subsystem (213) can include a point cloud source (201), for example light detection and ranging (LIDAR) systems, 3D cameras, 3D scanners, a graphics generation component that generates the uncompressed point cloud in software, and the like that generates for example point clouds (202) that are uncompressed. Further in paragraph [0058]-GAO discloses the V-PCC encoder (300) receives point cloud frames as uncompressed inputs and generates bitstream corresponding to compressed point cloud frames).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of GAO of having wherein the reference point cloud is an uncompressed point cloud of the object.
In the resulting combination, the reference point cloud of THUDOR’s electronic device would be an uncompressed point cloud of the object.
The motivation behind the modification would have been to obtain an electronic device that improves the decoding, signal-to-noise ratio, coding performance and the visual quality of reconstruction, since both THUDOR and GAO concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while GAO’s methods and systems improve coding gain and performance as well as the visual quality of the reconstructed point cloud. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098] and GAO et al. (US 20220180567 A1), Abstract and Paragraph [0070 and 0126].
Regarding claim 18, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the method according to claim 17, THUDOR in view of HUR are silent on wherein the reference point cloud is an uncompressed point cloud of the object.
However, GAO explicitly teaches wherein the reference point cloud is an uncompressed point cloud of the object (Fig. 2. Paragraph [0054]-GAO discloses the streaming system (200) may include a capture subsystem (213). The capture subsystem (213) can include a point cloud source (201), for example light detection and ranging (LIDAR) systems, 3D cameras, 3D scanners, a graphics generation component that generates the uncompressed point cloud in software, and the like that generates for example point clouds (202) that are uncompressed. Further in paragraph [0058]-GAO discloses the V-PCC encoder (300) receives point cloud frames as uncompressed inputs and generates bitstream corresponding to compressed point cloud frames).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK of having a method comprising: in an electronic device: acquiring a reference point cloud of an object; determining a first bounding box for the reference point cloud, with the teachings of GAO of having wherein the reference point cloud is an uncompressed point cloud of the object.
In the resulting combination, the reference point cloud of THUDOR’s method would be an uncompressed point cloud of the object.
The motivation behind the modification would have been to obtain a method that improves the decoding, signal-to-noise ratio, coding performance and the visual quality of reconstruction, since both THUDOR and GAO concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while GAO’s methods and systems improve coding gain and performance as well as the visual quality of the reconstructed point cloud. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098] and GAO et al. (US 20220180567 A1), Abstract and Paragraph [0070 and 0126].
Claims 8 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over THUDOR et al. (US 20200380765 A1), hereinafter referenced as THUDOR in view of HUR et al. (US 20210407142 A1), hereinafter referenced as HUR and in further view of BHOWMICK et al. (US 20190080503 A1), hereinafter referenced as BHOWMICK and in further view of SINHAROY et al. (US 20200020132 A1), hereinafter referenced as SINHAROY.
Regarding claim 8, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1. THUDOR fails to explicitly teach wherein the first local density map is further generated based on local density differences between the first local density map and the second local density map, the final density map comprises a first region and a second region, and the local density differences in the first region is more than the local density differences in the second region.
However, SINHAROY explicitly teaches wherein the first local density map (Fig. 5B. Paragraph [0146]-SINHAROY discloses encoding engines 528 can encode the geometry frames 522 (including the additional points patch 516 representing geometry and additional points patch 516 representing texture). In paragraph [0149]-SINHAROY discloses the decoder 550 receives a bitstream 532, such as the bitstream that was generated by the encoder 510. In paragraph [0151]-SINHAROY discloses the decoding engines 560 decode the geometry frame information 554, the texture frame information 556, and the occupancy map information 558. In paragraph [0152]-SINHAROY discloses the reconstruction engine 562 generates a reconstructed point cloud 564 (wherein reconstruction uses the decoded geometry frame information 554, the decoded texture frame information 556, and decoded occupancy map information 558). Reconstruction engine 562 uses the information from the additional points patches 516 to fill in the artifacts, such as holes and cracks (wherein data representing missed points are included in additional points patches 516)) is further generated based on local density differences between the first local density map and the second local density map, the final density map comprises a first region and a second region, and the local density differences in the first region is more than the local density differences in the second region (Fig. 5B. Paragraph [0112]-SINHAROY discloses the missed points selector 518 derives a relationship between points, when selecting a subset of missed points to include in the additional points patch 516. The relationship is defined by a neighborhood score. The neighborhood score can represent the density of points within a given area or zone. In paragraph [0108]-SINHAROY discloses the missed points selector 518 compares the distance between the selected missed point and the other two points. The missed points selector 518 determines that a stronger relationship exists (based on a higher density score) between the selected missed point and one of the two points when the distance (based on the proximity parameter) between the selected missed point is closer to one of the two points. Further in paragraph [0114]-SINHAROY discloses only those missed points whose neighborhood score is greater than a threshold score are stored in the additional points patches). Please also see claim 6 of SINHAROY.
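For illustration of the recited density-difference relationship only, a minimal sketch follows; all names, grid sizes, and values are hypothetical and are not drawn from SINHAROY, THUDOR, or the claims.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical first and second local density maps defined over a common voxel grid.
    first_local_density_map = rng.random((8, 8, 8))
    second_local_density_map = rng.random((8, 8, 8))

    # Per-voxel local density differences between the two maps.
    local_density_differences = np.abs(first_local_density_map - second_local_density_map)

    # Partition into a first region (larger differences) and a second region (smaller
    # differences); the median is used here purely as an illustrative split point.
    split_value = np.median(local_density_differences)
    first_region = local_density_differences > split_value
    second_region = ~first_region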
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of SINHAROY of having wherein the first local density map is further generated based on local density differences between the first local density map and the second local density map, the final density map comprises a first region and a second region, and the local density differences in the first region is more than the local density differences in the second region.
The combination results in THUDOR’s electronic device wherein the first local density map is further generated based on local density differences between the first local density map and the second local density map, the final density map comprises a first region and a second region, and the local density differences in the first region is more than the local density differences in the second region.
The motivation behind the modification would have been to obtain an electronic device that improves the speed, decoding, and appearance of point cloud reconstruction and transmission as well as the signal-to-noise ratio, since both THUDOR and SINHAROY concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while SINHAROY’s methods and systems improve the reconstruction of a 3D point cloud by decreasing the appearance of cracks or holes, and expedite and improve the transmission of point clouds between devices. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098], and SINHAROY et al. (US 20200020132 A1), Abstract and Paragraphs [0035 and 0050].
Regarding claim 12, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1. THUDOR fails to explicitly teach wherein the circuitry is further configured to: determine a number of holes in the test point cloud based on the final density map, wherein the holes correspond to the geometry reconstruction artifacts; and adjust a number of bits per point, wherein the number of holes is minimum based on the adjustment of the number of bits per point, and wherein the number of bits per point is to encode each block of the reference point cloud.
However, SINHAROY explicitly teaches wherein the circuitry is further configured to: determine a number of holes in the test point cloud based on the final density map (Fig. 5B. Paragraph [0156]-SINHAROY discloses the additional points patch 516a and an additional points patch 516b represent a missed points patch representing geometry and a points patch representing texture, respectively. The missed points included in the additional points patches 516a and 516b can be identified by reconstructing a point cloud (in the encoder based on the projected patches that are included in the at least two frames 522 and 524) and comparing the reconstructed point cloud against the inputted point cloud 512 to find the missed points. Further in paragraph [0158]-SINHAROY discloses the additional points patches 516a and 516b can include all of the identified missed points or a subset of the missed points. All of the identified missed points may be included in the additional points patches 516a and 516b. Alternatively, a subset or a portion of the identified missed points are included in the additional points patches 516a and 516b. Determining whether to add a missed point to an additional points patch is based on 3D neighborhood information around each identified missed point. The neighborhood information may include how many other points are in the vicinity of a given point. The neighborhood information can also indicate how dense the neighborhood is around the given point. FIGS. 7A and 7B illustrate a method for selecting certain missed points to be included in the additional points patch), wherein the holes correspond to the geometry reconstruction artifacts (Fig. 5B. Paragraph [0046]-SINHAROY discloses certain points of the 3D point cloud can be missed when the points are projected onto 2D frames by an encoder. When several points from close neighborhoods are missed during the projection, several cracks and holes are observed in the reconstructed point cloud. Artifacts, such as the cracks and holes, can be introduced in the reconstructed point cloud as certain points were not transmitted from the original 3D point cloud. Artifacts, including holes and cracks, are detrimental to the visual quality and experience of immersive media); and
adjust a number of bits per point, wherein the number of holes is minimum based on the adjustment of the number of bits per point, and wherein the number of bits per point is to encode each block of the reference point cloud (Fig. 5D. Paragraph [0158]-SINHAROY discloses storing a subset of the identified missed points in an additional points patch (such as the additional points patches 516a and 516b) reduces the number of points included in the respective additional points patches. Reducing the number of missed points in the additional points patch 516 reduces the size of the additional points patch, which can increase the Bjøntegaard Delta Bit Rate (BDBR), as compared to a scenario when all of the identified missed points are included in an additional points patch. The BDBR is a performance metric used to evaluate lossy coding performance, by averaging bit-rate savings of one codec over another for the same visual. Please also read paragraph [0034 and 0139]).
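For illustration of the recited hole counting and bits-per-point adjustment only, a minimal sketch follows; the names, the hole criterion, and the stand-in density model are hypothetical and are not drawn from SINHAROY or the claims.

    import numpy as np

    def count_holes(final_density_map, hole_threshold=0.1):
        # A hole is illustrated here as a voxel whose density falls below a threshold.
        return int(np.sum(final_density_map < hole_threshold))

    def density_map_for(bits_per_point, rng):
        # Hypothetical stand-in: more bits per point yields fewer low-density voxels.
        return np.clip(rng.random((8, 8, 8)) + 0.05 * bits_per_point, 0.0, 1.0)

    rng = np.random.default_rng(0)
    hole_counts = {b: count_holes(density_map_for(b, rng)) for b in (2, 4, 6, 8)}
    best_bits_per_point = min(hole_counts, key=hole_counts.get)  # hole count is minimized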
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of SINHAROY of having wherein the circuitry is further configured to: determine a number of holes in the test point cloud based on the final density map, wherein the holes correspond to the geometry reconstruction artifacts; and adjust a number of bits per point, wherein the number of holes is minimum based on the adjustment of the number of bits per point, and wherein the number of bits per point is to encode each block of the reference point cloud.
The combination results in THUDOR’s electronic device wherein the circuitry is further configured to: determine a number of holes in the test point cloud based on the final density map, wherein the holes correspond to the geometry reconstruction artifacts; and adjust a number of bits per point, wherein the number of holes is minimum based on the adjustment of the number of bits per point, and wherein the number of bits per point is to encode each block of the reference point cloud.
The motivation behind the modification would have been to obtain an electronic device that improves the speed, decoding, and appearance of point cloud reconstruction and transmission as well as the signal-to-noise ratio, since both THUDOR and SINHAROY concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while SINHAROY’s methods and systems improve the reconstruction of a 3D point cloud by decreasing the appearance of cracks or holes, and expedite and improve the transmission of point clouds between devices. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098], and SINHAROY et al. (US 20200020132 A1), Abstract and Paragraphs [0035 and 0050].
Regarding claim 13, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1. THUDOR fails to explicitly teach wherein the circuitry is further configured to compute, based on the final density map, a first quality metric as a ratio between a number of points in the regions that include the geometry reconstruction artifacts and a total number of the 3D points of the reference point cloud, and the supplementary information further includes the first quality metric.
However, SINHAROY explicitly teaches wherein the circuitry is further configured to compute, based on the final density map (Fig. 9. Paragraph [0190]-SINHAROY discloses FIG. 9 illustrates an example method for encoding a point cloud. In paragraph [0191]-SINHAROY discloses in step 902, the encoder 510 generates 2D frames for a 3D point cloud. In paragraph [0192]-SINHAROY discloses in step 904, the encoder 510 detects missed points of the 2D point cloud that are not included in any of the 2D frames. In paragraph [0193]-SINHAROY discloses in step 906, the encoder 510 generates additional points patches based on a subset of the missed points. In paragraph [0201]-SINHAROY discloses in step 1004, the decoder 550 decodes the bitstream to identify the additional points patches within the frames. In paragraph [0202]-SINHAROY discloses the decoder 550 can identify the additional points patch in both the 2-D frames. In paragraph [0203]-SINHAROY discloses in step 1006, the decoder 550 generates from the 2D frames the 3D point cloud using the regular patches and the additional points patches (wherein the decoder fills in holes and colors the point cloud to represent texture using the patch information)), a first quality metric as a ratio between a number of points in the regions that include the geometry reconstruction artifacts and a total number of the 3D points of the reference point cloud (Fig. 9. Paragraph [0192]-SINHAROY discloses in step 904, the encoder 510 detects missed points of the 2D point cloud that are not included in any of the 2D frames. The encoder 510 can reconstruct the geometry 3D point cloud, and compare the reconstructed geometry to the original point cloud to find each point that is missing. In paragraph [0193]-SINHAROY discloses in step 906, the encoder 510 generates additional points patches based on a subset of the missed points. The additional points patches can represent different attributes that correspond respectively to the first 2D frame and the second 2D frame. For each of the missed points, the encoder 510 identifies a quantity of points within a predefined zone surrounding a respective point of the missed points. Further in paragraph [0194]-SINHAROY discloses a preset score can indicate a percentage of points of the missed points that are included in the additional points patches), and the supplementary information further includes the first quality metric (Fig. 10. Paragraph [0200]-SINHAROY discloses the process begins with the decoder, such as decoder 550, receiving a compressed bitstream (step 1002). The received bitstream can include an encoded point cloud that was mapped onto multiple 2-D frames, compressed, and then transmitted and ultimately received by the decoder 550. Further in paragraph [0127]-SINHAROY discloses a flag can be signaled to indicate whether the data associated with the additional point (also referred to as missed points) is included in the bitstream as auxiliary information. The flag can be titled ‘additional_point_data_present_flag,’ as used in Syntax (3) below. When the flag value is one, data associated with the missed points is sent as auxiliary information, and the number of additional points is signaled. Please also read paragraph [0201-0202]).
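For illustration of the recited first quality metric only, a minimal sketch follows; the counts are hypothetical and are not drawn from SINHAROY or the claims.

    # First quality metric: points in artifact regions divided by total 3D points.
    points_in_artifact_regions = 1200        # hypothetical count from the final density map
    total_points_in_reference_cloud = 60000  # hypothetical total number of 3D points
    first_quality_metric = points_in_artifact_regions / total_points_in_reference_cloud  # 0.02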
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of SINHAROY of having wherein the circuitry is further configured to compute, based on the final density map, a first quality metric as a ratio between a number of points in the regions that include the geometry reconstruction artifacts and a total number of the 3D points of the reference point cloud, and the supplementary information further includes the first quality metric.
The combination results in THUDOR’s electronic device wherein the circuitry is further configured to compute, based on the final density map, a first quality metric as a ratio between a number of points in the regions that include the geometry reconstruction artifacts and a total number of the 3D points of the reference point cloud, and the supplementary information further includes the first quality metric.
The motivation behind the modification would have been to obtain an electronic device that improves the speed, decoding, and appearance of point cloud reconstruction and transmission as well as the signal-to-noise ratio, since both THUDOR and SINHAROY concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while SINHAROY’s methods and systems improve the reconstruction of a 3D point cloud by decreasing the appearance of cracks or holes, and expedite and improve the transmission of point clouds between devices. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098], and SINHAROY et al. (US 20200020132 A1), Abstract and Paragraphs [0035 and 0050].
Claims 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over THUDOR et al. (US 20200380765 A1), hereinafter referenced as THUDOR in view of HUR et al. (US 20210407142 A1), hereinafter referenced as HUR and in further view of BHOWMICK et al. (US 20190080503 A1), hereinafter referenced as BHOWMICK and in further view of SCHWARZ et al. (US 20200228836 A1), hereinafter referenced as SCHWARZ.
Regarding claim 9, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1, although THUDOR explicitly teaches obtain the final density map (Fig. 9. Paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. In paragraph [0186]-THUDOR discloses additional points may be generated between pairs of reconstructed points obtained from the decoded bitstream. The number of generated additional points may be determined according to a determined target density level. The target density level is set equal to the average density of the parts of the reconstructed point cloud having a density greater than said determined level. Please also read paragraph [0106, 0119, 0149, 0184 and 0188]); and
generate the missing points data based on the reference point cloud (Fig. 9. Paragraph [0244]-THUDOR discloses splat rendering may then be applied to the reconstructed 3D representation (also to the parts that have been up-sampled) to generate/render the scene. Splat rendering is a technique that allows filling holes between points, which are dimension-less, in a point cloud. Further in paragraph [0242]-THUDOR discloses the reconstructed 3D scene may be seen from the range of points of view, which may generate some rendering quality issues, especially when watching the 3D scene according to a point of view that enables to see areas of the scene identified as having a low point density (via the first information). To overcome these issues, an up-sampling process may be applied to the areas of the scene having a point density below the determined level/value to increase the number of points. Further in paragraph [0245]-THUDOR discloses the quality of the rendering of the 3D scene is increased by adding a small amount of data (i.e. the first information) to the bitstream).
THUDOR in view of HUR fail to explicitly teach wherein the circuitry is further configured to: obtain a 3D mask from the final density map, based on a threshold value; and generate the missing points data based on application of the 3D mask on the reference point cloud.
However, SCHWARZ explicitly teaches wherein the circuitry is further configured to:
obtain a 3D mask from the final density map (Fig. 8a-c. Paragraph [0140]-SCHWARZ discloses the 3D to 2D projections may cause sparse data OT1, IG1 in the projection pictures TP1, GP1. The geometry choice affects the number of missing pixels and this may be used as a criterion for choosing the geometry. The remaining sparse values may be inpainted, that is, values may be created for such pixels by using values of the surrounding pixels through interpolation and/or filtering to obtain inpainted texture picture ITP1 and geometry picture IGP1. Such inpainted values IT1, IG1 would create new 3D points in the reconstruction. A specific depth value, e.g. 0, or a specific depth value range may be reserved to indicate that a pixel is inpainted and not present in the source material. In paragraph [0196]-SCHWARZ discloses original, un-decimated, 3D data of the object may be used to generate a mask MASK for inpainting such sparsity only within the boundaries of the 3D object. Such a mask of the first and second and further projections may be encoded into the bitstream), based on a threshold value (Fig. 8a-c. Paragraph [0197-0198]-SCHWARZ discloses an inpainting mask is used to reduce the inpainting process to the projected object areas OA1, OA2, OA3, OA4. Mechanisms to represent the inpainting mask may include but are not limited to the following: inpainted areas, such as BA1, may be set to a certain code value, e.g. “0”, or any other predefined threshold value, or any value within one or more predefined or indicated value ranges, to indicate the decoder that these points should not be reconstructed. Please also read paragraph [0143 and 0213]); and
generate the missing points data based on application of the 3D mask on the reference point cloud (Fig. 6b. Paragraph [0196]-SCHWARZ discloses original, un-decimated, 3D data of the object may be used to generate a mask MASK for inpainting such sparsity only within the boundaries of the 3D object. Each mask MP1, MP2, MP3, MP4 may correspond to the projections that form the texture and geometry pictures. Such a mask of the first and second and further projections may be encoded into the bitstream. Please also read paragraph [0140, 0197-0198 and 0213]).
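For illustration of the recited masking step only, a minimal sketch follows; the names, grid size, and threshold are hypothetical and are not drawn from SCHWARZ or the claims.

    import numpy as np

    rng = np.random.default_rng(0)
    final_density_map = rng.random((8, 8, 8))      # hypothetical per-voxel density values
    reference_points = rng.random((500, 3)) * 8.0  # hypothetical 3D points inside the grid

    threshold_value = 0.2
    mask_3d = final_density_map < threshold_value  # voxels flagged as low density

    # Apply the 3D mask: keep the reference points whose enclosing voxel is flagged,
    # yielding missing points data for the low-density regions.
    voxel = np.clip(reference_points.astype(int), 0, 7)
    missing_points_data = reference_points[mask_3d[voxel[:, 0], voxel[:, 1], voxel[:, 2]]]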
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of SCHWARZ of having wherein the circuitry is further configured to: obtain a 3D mask from the final density map, based on a threshold value; and generate the missing points data based on application of the 3D mask on the reference point cloud.
The combination results in THUDOR’s electronic device wherein the circuitry is further configured to: obtain a 3D mask from the final density map, based on a threshold value; and generate the missing points data based on application of the 3D mask on the reference point cloud.
The motivation behind the modification would have been to obtain an electronic device that improves the speed and efficiency of decoding, 6DOF capabilities, and the signal-to-noise ratio, since both THUDOR and SCHWARZ concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while SCHWARZ’s methods and systems improve 6DOF capabilities and the coding and processing efficiency of volumetric video. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098], and SCHWARZ et al. (US 20200228836 A1), Abstract and Paragraphs [0004-0005, 0039, 0105, 0107 and 0116].
Regarding claim 11, THUDOR in view of HUR and in further view of BHOWMICK explicitly teach the electronic device according to claim 1, although THUDOR explicitly teaches wherein each descriptor of the one or more descriptors corresponds to a volume descriptor that is indicative of one or more parameters of a polyhedron structure (Fig. 3. Paragraph [0100]-THUDOR discloses the density of the elements (e.g. points or mesh elements) forming the first 3D representation 30 may spatially vary. A volume unit corresponds for example to a voxel or to a cube of determined dimensions (e.g. a cube with edges having each a size equal to 1, 2 or 10 cm for example). In paragraph [0117]-THUDOR discloses each 3D part may have the form of a cube or of a rectangle parallelepiped. In paragraph [0111]-THUDOR discloses each 2D parameterization is associated with a 3D part of the representation of the object, each 3D part corresponding to a volume comprising one or more points of the point cloud. A same 3D part may be represented with one or several 2D parameterizations, e.g. with 2, 3 or more 2D parameterization. Please also read paragraph [0244]).
THUDOR is silent on the limitation that the polyhedron structure corresponds to one of the regions that include the geometry reconstruction artifacts.
However, SCHWARZ explicitly teaches that the polyhedron structure corresponds to one of the regions that include the geometry reconstruction artifacts (Fig. 6b. Paragraph [0143]-SCHWARZ discloses a three-dimensional (3D) object, represented as a dynamic point cloud, may be sequentially projected onto two-dimensional (2D) planes, for example similar to sides of a polyhedron such as a cube (a six-sided polyhedron). In paragraph [0140]-SCHWARZ discloses FIG. 6b illustrates inpainting, where sparsity in the original texture and depth projections (left) are reduced by inpainting or filtering (right). The 3D to 2D projections may cause sparse data OT1, IG1 in the projection pictures TP1, GP1. Additional 3D filtering may be applied to remove unnecessary points and to close surface holes due to points missing from the projection. In paragraph [0196]-SCHWARZ discloses original, un-decimated, 3D data of the object may be used to generate a mask MASK for inpainting such sparsity only within the boundaries of the 3D object. Each mask MP1, MP2, MP3, MP4 may correspond to the projections that form the texture and geometry pictures. Such a mask of the first and second and further projections may be encoded into the bitstream, wherein the mask is indicative of pixels of the first texture picture that represent said first or second volumetric texture data).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of SCHWARZ of having wherein the polyhedron structure corresponds to one of the regions that include the geometry reconstruction artifacts.
The combination results in THUDOR’s electronic device wherein each descriptor of the one or more descriptors corresponds to a volume descriptor that is indicative of one or more parameters of a polyhedron structure, and the polyhedron structure corresponds to one of the regions that include the geometry reconstruction artifacts.
The motivation behind the modification would have been to obtain an electronic device that improves the speed and efficiency of decoding, 6DOF capabilities, and the signal-to-noise ratio, since both THUDOR and SCHWARZ concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while SCHWARZ’s methods and systems improve 6DOF capabilities and the coding and processing efficiency of volumetric video. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098], and SCHWARZ et al. (US 20200228836 A1), Abstract and Paragraphs [0004-0005, 0039, 0105, 0107 and 0116].
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over THUDOR et al. (US 20200380765 A1), hereinafter referenced as THUDOR in view of HUR et al. (US 20210407142 A1), hereinafter referenced as HUR and in further view of BHOWMICK et al. (US 20190080503 A1), hereinafter referenced as BHOWMICK and in further view of SCHWARZ et al. (US 20200228836 A1), hereinafter referenced as SCHWARZ and in further view of SINHAROY et al. (US 20200020132 A1), hereinafter referenced as SINHAROY.
Regarding claim 10, THUDOR in view of HUR and in further view of BHOWMICK and in further view of SCHWARZ explicitly teach the electronic device according to claim 9. THUDOR in view of HUR fail to explicitly teach wherein the threshold value corresponds to one of a percentage of voxels that are selectable from the reference point cloud or a percentage of local density differences between the first local density map and the second local density map that is above a specific threshold.
However, SINHAROY explicitly teaches wherein the threshold value corresponds to one of a percentage of voxels that are selectable from the reference point cloud (Fig. 5B. Paragraph [0108]- SINHAROY discloses the missed points selector 518 can derive a relationship, based on the proximity parameter between the selected missed point and each of the other missed points within the zone. The relationship indicates a density score between points. When the quantity of points surrounding the selected missed point is greater than a threshold and based on the relationship, the selected missed point is included in the additional points patch 516) or a percentage of local density differences between the first local density map and the second local density map that is above a specific threshold (Fig. 5B. Paragraph [0156]-SINHAROY discloses the additional points patch 516a and an additional points patch 516b represent a missed points patch representing geometry and a points patch representing texture, respectively. The missed points included in the additional points patches 516a and 516b can be identified by reconstructing a point cloud (in the encoder based on the projected patches that are included in the at least two frames 522 and 524) and comparing the reconstructed point cloud against the inputted point cloud 512 to find the missed points).
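For illustration of the two recited alternatives for the threshold value only, a minimal sketch follows; the percentages and data are hypothetical and are not drawn from SINHAROY or the claims.

    import numpy as np

    rng = np.random.default_rng(0)
    local_density_differences = rng.random(1000)  # hypothetical per-voxel differences

    # Alternative 1: threshold chosen so that a fixed percentage of voxels is selectable.
    percent_voxels = 5.0
    threshold_value_a = np.percentile(local_density_differences, 100.0 - percent_voxels)

    # Alternative 2: threshold expressed as the percentage of local density differences
    # that lie above a specific threshold.
    specific_threshold = 0.9
    threshold_value_b = 100.0 * np.mean(local_density_differences > specific_threshold)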
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK and in further view of SCHWARZ of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of SINHAROY of having wherein the threshold value corresponds to one of a percentage of voxels that are selectable from the reference point cloud or a percentage of local density differences between the first local density map and the second local density map that is above a specific threshold.
The combination results in THUDOR’s electronic device wherein the threshold value corresponds to one of a percentage of voxels that are selectable from the reference point cloud or a percentage of local density differences between the first local density map and the second local density map that is above a specific threshold.
The motivation behind the modification would have been to obtain an electronic device that improves the speed, decoding, and appearance of point cloud reconstruction and transmission as well as the signal-to-noise ratio, since both THUDOR and SINHAROY concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while SINHAROY’s methods and systems improve the reconstruction of a 3D point cloud by decreasing the appearance of cracks or holes, and expedite and improve the transmission of point clouds between devices. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098], and SINHAROY et al. (US 20200020132 A1), Abstract and Paragraphs [0035 and 0050].
Claims 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over THUDOR et al. (US 20200380765 A1), hereinafter referenced as THUDOR in view of HUR et al. (US 20210407142 A1), hereinafter referenced as HUR and in further view of BHOWMICK et al. (US 20190080503 A1), hereinafter referenced as BHOWMICK and in further view of SINHAROY et al. (US 20200020132 A1), hereinafter referenced as SINHAROY and in further view of ZHU et al. (US 20230082456 A1), hereinafter referenced as ZHU.
Regarding claim 14, THUDOR in view of HUR and in further view of BHOWMICK and in further view of SINHAROY explicitly teach the electronic device according to claim 13, although THUDOR explicitly teaches wherein the circuitry is further configured to compute, based on the final density map (Fig. 9. Paragraph [0184]-THUDOR discloses the decoded point cloud 903 may then be further processed for reconstructing the 3D representation of the scene from the decoded pictures that comprise the attributes (depth and texture), from the decoded density information, from the decoded parameters representative of the 2D parameterizations and from the decoded mapping information for the mapping between the 2D parameterizations and the depth and texture maps comprised in the decoded pictures. Points of the point cloud are obtained by de-projecting the pixels of the depth and texture maps according to the inverse 2D parameterizations. The points obtained from the de-projection of the depth and texture maps are called reconstructed points), a second quality metric as a number of the one or more descriptors for which a volume parameter is above a volume threshold (Fig. 9. Paragraph [0185]-THUDOR discloses parts of the reconstructed point cloud identified, from the decoded density information, as having a points density less than the determined density level may be further processed. In paragraph [0186]-THUDOR discloses additional points may be generated between pairs of reconstructed points obtained from the decoded bitstream. The additional points may be generated by computing their associated depth and texture from the depth and texture associated with the reconstructed points. The number of generated additional points may be determined according to a determined target density level. The target density level is set equal to the average density of the parts of the reconstructed point cloud having a density greater than said determined level. In paragraph [0188]-THUDOR discloses up-sampling process is applied to the parts of the point cloud identified, from the decoded density information, as having a points density less than the determined density level. Please also read paragraph [0187]).
THUDOR in view of HUR fail to explicitly teach wherein the circuitry is further configured to compute, based on the final density map, a second quality metric as a reciprocal of a number of the one or more descriptors for which a volume parameter is above a volume threshold.
However, ZHU explicitly teaches wherein the circuitry is further configured to compute, based on the final density map (Fig. 4. Paragraph [0051]-ZHU discloses the point cloud encoder 112 encodes the point cloud data from the point cloud source 111 to generate a code stream. The point cloud encoder 112 transmits the encoded point cloud data to the decoding device 120 through the output interface 113), a second quality metric as a reciprocal of a number of the one or more descriptors (Fig. 4. Paragraph [0177]-ZHU discloses after the neighbor point parameters of the current point are determined in the manner, S440 is performed. In paragraph [0193]-ZHU discloses S440-A1: Select N candidate points of the current point from the decoded points in the point cloud. Further in paragraph [0211]-ZHU discloses S450-A1: Determine an attribute weight of each of the at least one neighbor point of the current point (wherein either a weight value of each neighbor point is calculated according to the distance between the neighbor point and the current point, for example, using a reciprocal of the distance as the attribute weight of the neighbor point)) for which a parameter is above a threshold (Fig. 4. Paragraph [0225]-ZHU discloses one or more distance values are selected according to a distribution of d1, d2, . . . , dk to calculate the weight values. The first threshold is the median value of the first distances between all of the at least one neighbor point of the current point and the current point, that is, a median point of d1, d2, . . . , dk is the first threshold; and if the first distance between a neighbor point and the current point is less than or equal to the first threshold, it is determined that the attribute weight of the neighbor point is the third preset weight, for example, 1. If the first distance between a neighbor point and the current point is greater than the first threshold, it is determined that the attribute weight of the neighbor point is the fourth preset weight, for example, 2. Further in paragraph [0228]-ZHU discloses when or in response to a determination that a quantity of the plurality of preset weights is different from that of the at least one neighbor point, a preset weight in the plurality of preset weights that is closest to a reciprocal of the first distance corresponding to the neighbor point is determined as the attribute weight of the neighbor point. Please also read paragraph [0176]), and the supplementary information further includes the second quality metric (Fig. 3. Paragraph [0176]-ZHU discloses the point cloud code stream is parsed to obtain geometry information of points in the point cloud, and the space parameter of the point cloud is determined according to the geometry information of the points in the point cloud. In paragraph [0046]-ZHU discloses the storage medium can store point cloud data encoded by the encoding device 110. The decoding device 120 may read the encoded point cloud data from the storage medium. Further in paragraph [0047]-ZHU discloses the decoding device 120 may download the encoded point cloud data stored in the storage server from the storage server).
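For illustration of the recited second quality metric only, a minimal sketch follows; the descriptor values and threshold are hypothetical and are not drawn from ZHU or the claims.

    # Second quality metric: reciprocal of the number of descriptors whose
    # volume parameter exceeds the volume threshold.
    descriptor_volumes = [0.8, 2.5, 3.1, 0.4, 5.0]  # hypothetical volume parameters
    volume_threshold = 1.0
    count_above = sum(1 for v in descriptor_volumes if v > volume_threshold)  # 3
    second_quality_metric = 1.0 / count_above if count_above else 0.0         # about 0.333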
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK and in further view of SINHAROY of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of ZHU of having wherein the circuitry is further configured to compute, based on the final density map, a second quality metric as a reciprocal of a number of the one or more descriptors for which a volume parameter is above a volume threshold, and the supplementary information further includes the second quality metric.
The combination results in THUDOR’s electronic device wherein the circuitry is further configured to compute, based on the final density map, a second quality metric as a reciprocal of a number of the one or more descriptors for which a volume parameter is above a volume threshold, and the supplementary information further includes the second quality metric.
The motivation behind the modification would have been to obtain an electronic device that improves the speed and efficiency of decoding, since both THUDOR and ZHU concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while ZHU’s methods and systems improve the accuracy of the determined neighbor point parameters and the decoding efficiency by determining a predicted value of attribute information of a current point based on the neighbor point parameters. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098], and ZHU et al. (US 20230082456 A1), Abstract and Paragraph [0232].
Regarding claim 15, THUDOR in view of HUR and in further view of BHOWMICK and in further view of SINHAROY explicitly teach the electronic device according to claim 14. THUDOR in view of HUR fail to explicitly teach wherein the circuitry is further configured to compute a third quality metric as a weighted sum of the first quality metric and the second quality metric, and the supplementary information further includes the third quality metric.
However, ZHU explicitly teaches wherein the circuitry is further configured to compute a third quality metric as a weighted sum of the first quality metric and the second quality metric (Fig. 4. Paragraph [0229]-ZHU discloses the predicted value of the attribute information of the current point is determined according to the attribute weight and the attribute information of each neighbor point. For example, a weighted value of the attribute information of the attribute weights of the at least one neighbor point of the current point is calculated according to the attribute weight of each neighbor point, and the weighted value is determined as the predicted value of the attribute information of the current point. In paragraph [0231]-ZHU discloses a sum of the residual value of the attribute information of the current point and the predicted value of the attribute information of the current point obtained through the steps is used as the reconstructed value of the attribute information of the current point (wherein the residual value may be the difference value between the real value of the attribute information of the point and the predicted value of the attribute information of the point). Please also read paragraph [0215-0218] and see formula (1) and (2)), and the supplementary information further includes the third quality metric (Fig. 4. Paragraph [0042]-ZHU discloses the encoding device 110 is configured to encode point cloud data (which can be understood as compression) to generate a code stream, and transmit the code stream to the decoding device 120. The decoding device 120 is configured to decode the code stream generated through encoding by the encoding device 110 to obtain decoded point cloud data. Further in paragraph [0065]-ZHU discloses the process of attribute encoding includes: by giving real values of the reconstructed information of the position information and the attribute information of the inputted point cloud, one of the three prediction modes is selected for point cloud prediction, the predicted results are quantified, and arithmetic coding is performed to form an attribute code stream. Please also read paragraph [0046-0047 and 0096]).
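For illustration of the recited third quality metric only, a minimal sketch follows; the weights and metric values are hypothetical and are not drawn from ZHU or the claims.

    # Third quality metric: weighted sum of the first and second quality metrics.
    first_quality_metric = 0.02    # hypothetical value from the claim 13 illustration
    second_quality_metric = 0.333  # hypothetical value from the claim 14 illustration
    w1, w2 = 0.5, 0.5              # hypothetical weights
    third_quality_metric = w1 * first_quality_metric + w2 * second_quality_metric  # 0.1765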
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of THUDOR in view of HUR and in further view of BHOWMICK and in further view of SINHAROY and in further view of ZHU of having an electronic device comprising: circuitry configured to: acquire a reference point cloud of an object; determine a first bounding box for the reference point cloud, with the teachings of ZHU of having wherein the circuitry is further configured to compute a third quality metric as a weighted sum of the first quality metric and the second quality metric, and the supplementary information further includes the third quality metric.
The combination results in THUDOR’s electronic device wherein the circuitry is further configured to compute a third quality metric as a weighted sum of the first quality metric and the second quality metric, and the supplementary information further includes the third quality metric.
The motivation behind the modification would have been to obtain an electronic device that improves the speed and efficiency of decoding, since both THUDOR and ZHU concern point cloud compression. THUDOR’s methods and systems speed up the decoding of the information and improve the signal-to-noise ratio, while ZHU’s methods and systems improve the accuracy of the determined neighbor point parameters and the decoding efficiency by determining a predicted value of attribute information of a current point based on the neighbor point parameters. Please see THUDOR et al. (US 20200380765 A1), Abstract and Paragraph [0098], and ZHU et al. (US 20230082456 A1), Abstract and Paragraph [0232].
Conclusion
Listed below is prior art made of record and not relied upon that is considered pertinent to applicant's disclosure.
VAN DER AUWERA et al. (US 20230105931 A1)- Example devices and techniques for coding point cloud data are described. An example device includes memory configured to store the point cloud data and one or more processors communicatively coupled to the memory. The one or more processors are configured to determine at least two reference points in a reference point cloud frame of the point cloud data. The one or more processors are configured to apply radius interpolation to the at least two reference points to obtain at least one radius inter predictor for at least one current point in a current point cloud frame of the point cloud data. Please see Fig. 1-4. Abstract.
OH et al. (US 20210289211 A1)- Disclosed herein are a point cloud data transmission method including encoding point cloud data, and transmitting a bitstream containing the point cloud data, and a point cloud data processing method including receiving a bitstream containing point cloud data, and decoding the point cloud data. Please see Fig. 1-2. Para. [0112, 0131, 0289-0292]. Abstract. (e.g. weighted sum, density, quality metrics, reciprocals).
LI et al. (US 20230196625 A1)- The present invention provides a point cloud intra prediction method and device based on weights optimization of neighbors. The invention relates to intra prediction for point cloud attribute compression, by optimizing the weights of the neighboring points on the basis of the density of the point cloud in three directions, i.e. x, y and z directions, and specifically, calculating the optimized weight of each neighboring point by optimizing corresponding coefficients of three coordinate components, i.e. x, y and z coordinate components of distances. Please see Fig. 1-2. Para. [0030, 0043, and 0047]. Abstract.
SU et al. (US 20240171775 A1)- An input 3D point cloud including a spatial distribution of points is received. Patches including pre-reshaped patch data are generated from the input 3D point cloud. Encoder-side reshaping is performed on the pre-reshaped patch data to generate reshaped patch data for the patches. The reshaped patch data is encoded into a 3D video signal, which a recipient device of the 3D video signal can decode to generate a reconstructed 3D point cloud that approximates the input 3D point cloud. Please see Fig. 1A-D. Abstract.
ZHANG et al. (US 20210183110 A1)- A point cloud decoding method related to the field of coding technologies and includes reconstructing a point cloud comprising one or more patches, wherein the one or more patches comprise a current patch, wherein the reconstructing process includes transforming coordinates (x2, y2) of a second point of the current patch in a second coordinate system to coordinates (x1, y1) of a first point of the current patch in a first coordinate system, wherein the coordinates (x1, y1) of the first point of the current patch in the first coordinate system are obtained based on the coordinates (x2, y2) of the second point of the current patch in the second coordinate system and a transform matrix. Please see Fig. 2-4. Abstract.
Zakharchenko et al. (US 20220353532 A1)- A video coding mechanism is disclosed. The mechanism includes receiving a bitstream comprising a plurality of two-dimensional (2D) patches in an atlas frame and a three-dimensional (3D) bounding box scale. The 2D patches are decoded. A point cloud is reconstructed by converting the 2D patches to a 3D patch coordinate system defined by each projection plane of the 3D bounding box. The 3D bounding box scale is applied to a 3D bounding box. Please see Fig. 1-4. Abstract.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached on Monday-Friday, 9:00 a.m. - 6:00 p.m. ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached by phone at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON TIMOTHY BONANSINGA/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673