Prosecution Insights
Last updated: April 19, 2026
Application No. 18/399,563

METHOD, APPARATUS, AND MEDIUM FOR POINT CLOUD CODING

Status: Non-Final OA (§102)
Filed: Dec 28, 2023
Examiner: BAKER, CHARLOTTE M
Art Unit: 2664
Tech Center: 2600 (Communications)
Assignee: Bytedance Inc.
OA Round: 1 (Non-Final)
Grant Probability: 93% (Favorable)
OA Rounds: 1-2
To Grant: 2y 2m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 93%, above average (991 granted / 1067 resolved; +30.9% vs TC avg)
Interview Lift: minimal (±0.2% among resolved cases with interview)
Avg Prosecution: 2y 2m (typical timeline)
Total Applications: 1082 across all art units; 15 currently pending
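The headline allow rate can be checked directly against the raw career counts above; a quick sketch (the rounding convention is an assumption):

```python
# Verify the examiner's career allow rate from the raw counts above.
granted = 991
resolved = 1067

allow_rate = granted / resolved  # fraction of resolved cases that granted
print(f"{allow_rate:.1%}")  # prints 92.9%, which the dashboard rounds to 93%
```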

Statute-Specific Performance

§101: 21.6% (-18.4% vs TC avg)
§103: 24.7% (-15.3% vs TC avg)
§102: 27.4% (-12.6% vs TC avg)
§112: 4.3% (-35.7% vs TC avg)

TC averages are estimates. Based on career data from 1067 resolved cases.
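Each rate and its delta imply a Tech Center baseline (rate minus delta), and all four statutes above imply the same 40.0% baseline. A sketch of that back-calculation, assuming the deltas are simple percentage-point differences:

```python
# Recover the implied Tech Center average for each statute:
# examiner_rate - tc_avg = delta, so tc_avg = examiner_rate - delta.
rates = {"101": (21.6, -18.4), "103": (24.7, -15.3),
         "102": (27.4, -12.6), "112": (4.3, -35.7)}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta  # percentage points
    print(f"§{statute}: implied TC avg = {tc_avg:.1f}%")  # 40.0% for each
```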

Office Action

Ground of rejection: §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority based on an application filed in China on 04 July 2021. It is noted, however, that applicant has not filed a certified copy of the Chinese application as required by 37 CFR 1.55, and the Office was not able to electronically retrieve the foreign application PCT/CN2021/104401.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pham Van et al. (hereinafter Pham Van) (US 2022/0210466 A1).

Regarding claim 1: Pham Van discloses classifying, during a conversion between a current frame of a point cloud sequence and a bitstream of the point cloud sequence (In some examples, G-PCC encoder 200 and G-PCC decoder 300 may be configured to specify one or more regions within a point cloud. G-PCC encoder 200 and G-PCC decoder 300 may further associate motion parameters with each region.
G-PCC encoder 200 and G-PCC decoder 300 may code data in the bitstream representing the motion parameters associated with a region. G-PCC encoder 200 and G-PCC decoder 300 may use the motion parameters to compensate positions of points. G-PCC encoder 200 and G-PCC decoder 300 may use the compensated points as reference/prediction for coding the position of a point in a current frame. In some cases, the use of regions (compared with slices) for classification may achieve better compression performance, because G-PCC encoder 200 and G-PCC decoder 300 may code points belonging to different regions together. [0202] 1. G-PCC encoder 200 and G-PCC decoder 300 may code data representing one or more regions in a point cloud. [0203] a. G-PCC encoder 200 and G-PCC decoder 300 may code a value N representing the number of regions, as well as data representing parameters that specify each of the N regions. [0204] i. In some examples, N may be restricted to be within a certain value range (e.g., N may be constrained to less than a fixed value, such as 10). [0205] b. G-PCC encoder 200 and G-PCC decoder 300 may code the parameter of each region in the bitstream. In some examples, a region may be specified using one or more of the following parameters: [0206] i. An upper bound and lower bound for x, y, and z coordinates defining the region (or any other coordinate system used to code the point cloud). [0207] ii. In some examples, one or more of upper or lower bound may not be specified; in this case, G-PCC encoder 200 and G-PCC decoder 300 may use default values appropriate to the coordinate and the coordinate system as an inferred value. [0208] 1. For example, in a spherical domain (r, phi, laserId), if bounds for phi are not signalled, then the upper and lower bound may be inferred to correspond to 360 degrees and 0 degrees, respectively. [0209] 2. 
Motion parameters may be associated with each region; motion compensation may be applied to one or more points belonging to the region to obtain compensated position/points; compensated positions/points may be used as reference for prediction of points in a current points cloud frame. [0210] a. One or more methods disclosed in this disclosure of signalling motion parameters may be applied to signal the motion parameters of each region. For example, G-PCC encoder 200 and G-PCC decoder 300 may code motion parameters for each region in a parameter set (e.g., SPS, GPS), or other parts of the bitstream (e.g., slice header, or a separate syntax structure)., pars. 201-210), at least a part of points in the current frame based on a set of planar regions, each of the set of planar regions being three-dimensional and having a height equal to a height of a bounding box of the current frame (A point cloud encoder/decoder (codec) may enclose the 3D space occupied by point cloud data in a virtual bounding box. The position of the points in the bounding box may be represented by a certain precision. Therefore, the point cloud codec may quantize positions of one or more points based on the precision. At the smallest level, the point cloud codec splits the bounding box into voxels, which are the smallest unit of space represented by a unit cube. A voxel in the bounding box may be associated with zero, one, or more than one point. The point cloud codec may split the bounding box into multiple cube/cuboid regions, which may be called tiles. The point cloud codec may code the tiles into one or more slices. The partitioning of the bounding box into slices and tiles may be based on a number of points in each partition, or based on other considerations (e.g., a particular region may be coded as tiles). The slice regions may be further partitioned using splitting decisions similar to those in video codecs., par. 4); and performing the conversion based on the classification (pars. 201-210; the same passage quoted above).

Regarding claim 2: Pham Van satisfies all the elements of claim 1. Pham Van further discloses wherein each of the set of planar regions is cuboid (par. 4, quoted above), and each point in the current frame is assigned to one of the set of planar regions based on coordinates of the point, or wherein a reference frame of the current frame comprises at least one reference planar regions, and a reference point in the reference frame belongs to at least one reference planar regions, or wherein, for a planar region in the current frame, a reference frame of the current frame comprises or does not comprise a reference planar region corresponding to the planar region (pars. 201-210, quoted above).

Regarding claim 3: Pham Van satisfies all the elements of claim 1. Pham Van further discloses wherein whether a point in a planar region is to be classified is dependent on a reference planar region in a reference frame of the current frame, the reference planar region corresponding to the planar region (pars. 201-211; the passage quoted above for claim 1, which continues: G-PCC encoder 200 and G-PCC decoder 300 may be configured to perform any of the various techniques of this disclosure in various combination. For example, motion parameters for a reference frame may be specified in terms of regions, whereas one or more slice groups may be specified for the current frame; a slice group may be associated with a region (explicitly or implicitly) and reference points from region may be used to predict points of the slice group. In another example, points in a region may be coded as a slice or a slice group.).

Regarding claim 4: Pham Van satisfies all the elements of claim 3. Pham Van further discloses wherein the point is classified, if at least one reference points belong to the reference planar region (More generally, G-PCC encoder 200 and G-PCC decoder 300 may be configured to perform the techniques below, alone or in any combination with the various other techniques of this disclosure: [0165] 1. Classification (or partitioning) of points of a point cloud into M groups. G-PCC encoder 200 and G-PCC decoder 300 may be configured according to one of the techniques of this disclosure, or other means to achieve the classification of the points into the M groups. [0166] a.
Examples of groups include road, divider, nearby cars or vehicles, buildings, signs, traffic lights, pedestrians, etc. Note that each car/vehicle/building/etc. may be classified as a separate group. [0167] b. Groups may include points that represent an object, or that are spatially adjacent to each other. [0168] 2. G-PCC encoder 200 and G-PCC decoder 300 may specify N slice groups (N<=M). G-PCC encoder 200 and G-PCC decoder 300 may associate each of the M groups with one of the N slice groups. G-PCC encoder 200 and G-PCC decoder 300 may code points belonging to a slice group together. [0169] a. E.g., a “ground” slice group may include points belonging to the “road” and “divider” groups, “static” slice group may include points belonging to “buildings”, and “signs”, and “dynamic” slice group may include groups such as cars/vehicles, or “pedestrians.” [0170] b. More generally, G-PCC encoder 200 and G-PCC decoder 300 may code one or more groups that share some property into a slice group. For example, groups that may have similar relative motion with respect to the LIDAR sensor/vehicle, may be coded into one slice group. [0171] c. In another example, G-PCC encoder 200 and G-PCC decoder 300 may be configured to determine that each group of points having a certain property belongs to a separate slice group. [0172] d. Points of a group may be associated with more than one slice group (e.g., the points may be repeated). [0173] 3. G-PCC encoder 200 and G-PCC decoder 300 may code points belonging to each slice group in one or more slices. [0174] 4. G-PCC encoder 200 and G-PCC decoder 300 may identify a slice belonging to a slice group based on an index value (e.g., slice index) or a label (slice type or slice group type). [0175] a. Each slice group may be associated with a slice type/slice group type which may be signalled in each slice of the slice group. [0176] i. 
For example, an index/label of [0, N−1] may be associated with each of the slice groups and G-PCC encoder 200 and G-PCC decoder 300 may code an index/label “i” in a slice that belongs to the i-th slice group (0<=i<=N−1). [0177] ii. In another example, a point cloud may have two slice groups S1 and S2, and each slice group may be coded as 3 slices, making a total of 6 slices. Each of the slices of S1 may have slice type 0 and each of the slices of S2 may have slice type 1. [0178] b. In another example, each slice may be associated with a slice number of slice index; slice belonging to a particular slice group may be identified with the slice number/index. [0179] i. For example, a point cloud may have two slice groups S1 and S2, and each slice group may be coded as 3 slices, making a total of 6 slices. The slices of S1 may have slice numbers 0, 1 and 2, and slices of S2 may have slice numbers 3, 4 and 5. [0180] c. In some examples, the slice identifier may be a combination of the slice group identifier/type, and a slice number. [0181] i. For example, a point cloud may have two slice groups S1 and S2, and each slice group may be coded as 3 slices, making a total of 6 slices. The slices of S1 may have identifiers (0, 0), (0, 1), (0, 2) where the first number of each tuple is the slice type, and the second number is the slice number within the slice group. Similarly slices of S2 may have identifiers (1, 0), (1, 1), (1, 2). [0182] d. The slice type, slice group type, slice number, of slice identifier may be signalled in the slice. [0183] 5. G-PCC encoder 200 and G-PCC decoder 300 may code data referring to slices for prediction. A slice may refer to another slice for prediction. The reference slice may belong to the same picture (intra prediction) or another picture (inter prediction). [0184] a. G-PCC encoder 200 and G-PCC decoder 300 may identify the reference slice using one or more of the following: [0185] i. A reference frame number or frame counter [0186] ii. 
A reference slice identifier (slice type/group type, slice number, slice identifier, etc.) [0187] b. In some examples, G-PCC encoder 200 and G-PCC decoder 300 may be configured according to a restriction that a slice may only refer to other slices belonging to the same slice type/slice group type. In this case, a reference slice type/slice group type need not be signalled. [0188] c. In another example, a slice may be allowed to refer all points belonging to a frame or a slice group; in this case, a reference slice number may not be signalled as all the slices of a frame/slice group may be referred for prediction. [0189] d. In another example, two or more slice identifiers may be signalled identifying that plurality of slices that may referred for prediction. [0190] 6. G-PCC encoder 200 and G-PCC decoder 300 may associate a first set of motion parameters for each point; the motion parameters may be used to compensate the position of the point; this compensated position may be used as a reference for prediction. [0191] a. In one example, motion parameters associated with a point may be the motion parameters associated with a slice containing the point. [0192] b. In one example, motion parameters associated with a slice may be the motion parameters associated with a slice group containing the slice. [0193] c. In one example, the motion parameters associated with a slice group may be the motion parameters associated with the frame containing the slice group. [0194] d. The motion parameters may be signalled in a parameter set such as SPS, GPS, etc., slice header, or other parts of the bitstream. [0195] e. The above description refers to motion parameters, but this may apply to any set of motion parameters (e.g., rotation matrix/parameters, translation vector/parameters, etc.) [0196] f. In some examples, motion parameters used to apply motion compensation for points in a reference frame may be signalled in the current frame, or a frame that is not the reference frame. 
E.g., if frame 1 uses points from frame 0 for prediction, then the motion parameters that apply to points in frame 0 may be signalled with frame 1. [0197] g. In one example, a reference index to the slice/slice group of a reference frame may be signalled in the current frame (in a parameter set or a slice or other syntax structure). [0198] i. In one example, one or more tuples (motion parameters, a reference index) may be signalled with a current frame (or slice), where the reference index identifies the points in the reference frame (slice/slice group/region) to which the respective motion parameters apply. [0199] h. In one example, the motion parameters may be a set of global motion parameter that apply to all points in a slice, slice group, region, or frame., pars. 164-199). Regarding claim 5: Pham Van satisfies all the elements of claim 1. Pham Van further discloses wherein how to classify a point in a planar region is dependent on a classification condition, or wherein the part of points is classified into a first set of classes based on a plurality of thresholds, the first set of classes comprising a first class associated with object points and a second class associated with road points (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points. For example, G-PCC encoder 200 may determine a top threshold and a bottom threshold, generally representing a top and bottom of the ground or road. Thus, if points are between these two thresholds, the points may be classified as ground points, and other points (e.g., points above the top threshold or below the bottom threshold) may be classified as object points. G-PCC encoder 200 may encode data representing the top and bottom thresholds in a data structure, such as a sequence parameter set (SPS), geometry parameter set (GPS), or geometry data unit header (GDH). 
G-PCC encoder 200 and G-PCC decoder 300 may therefore encode or decode occupancy of nodes above the top threshold or below the bottom threshold using a global motion vector and nodes between the top and bottom threshold using a second, different global motion vector, local motion vectors, intra-prediction, or other different prediction techniques., par. 59). Regarding claim 6: Pham Van satisfies all the elements of claim 1. Pham Van further discloses wherein classifying the target point comprises: assigning the target point to one of a plurality of space units for a global motion estimation process of the current frame (This disclosure describes techniques for labeling ground and objects to improve the performance of global motion estimation. In particular, G-PCC encoder 200 and G-PCC decoder 300 may be configured to classify ground/road and object data in a point cloud, which may improve the performance of global motion estimation., par. 136); and classifying the target point based on the assignment (This disclosure describes techniques for labeling ground and objects to improve the performance of global motion estimation. In particular, G-PCC encoder 200 and G-PCC decoder 300 may be configured to classify ground/road and object data in a point cloud, which may improve the performance of global motion estimation., par. 136). Regarding claim 7: Pham Van satisfies all the elements of claim 6. Pham Van further discloses wherein a reference frame of the current frame comprises at least one reference space units, and a reference point in the reference frame belongs to at least one reference space units (In some examples, G-PCC encoder 200 and G-PCC decoder 300 may be configured to specify one or more regions within a point cloud. G-PCC encoder 200 and G-PCC decoder 300 may further associate motion parameters with each region. G-PCC encoder 200 and G-PCC decoder 300 may code data in the bitstream representing the motion parameters associated with a region. 
G-PCC encoder 200 and G-PCC decoder 300 may use the motion parameters to compensate positions of points. G-PCC encoder 200 and G-PCC decoder 300 may use the compensated points as reference/prediction for coding the position of a point in a current frame. In some cases, the use of regions (compared with slices) for classification may achieve better compression performance, because G-PCC encoder 200 and G-PCC decoder 300 may code points belonging to different regions together. [0202] 1. G-PCC encoder 200 and G-PCC decoder 300 may code data representing one or more regions in a point cloud. [0203] a. G-PCC encoder 200 and G-PCC decoder 300 may code a value N representing the number of regions, as well as data representing parameters that specify each of the N regions. [0204] i. In some examples, N may be restricted to be within a certain value range (e.g., N may be constrained to less than a fixed value, such as 10). [0205] b. G-PCC encoder 200 and G-PCC decoder 300 may code the parameter of each region in the bitstream. In some examples, a region may be specified using one or more of the following parameters: [0206] i. An upper bound and lower bound for x, y, and z coordinates defining the region (or any other coordinate system used to code the point cloud). [0207] ii. In some examples, one or more of upper or lower bound may not be specified; in this case, G-PCC encoder 200 and G-PCC decoder 300 may use default values appropriate to the coordinate and the coordinate system as an inferred value. [0208] 1. For example, in a spherical domain (r, phi, laserId), if bounds for phi are not signalled, then the upper and lower bound may be inferred to correspond to 360 degrees and 0 degrees, respectively. [0209] 2. 
Motion parameters may be associated with each region; motion compensation may be applied to one or more points belonging to the region to obtain compensated position/points; compensated positions/points may be used as reference for prediction of points in a current points cloud frame. [0210] a. One or more methods disclosed in this disclosure of signalling motion parameters may be applied to signal the motion parameters of each region. For example, G-PCC encoder 200 and G-PCC decoder 300 may code motion parameters for each region in a parameter set (e.g., SPS, GPS), or other parts of the bitstream (e.g., slice header, or a separate syntax structure). [0211] G-PCC encoder 200 and G-PCC decoder 300 may be configured to perform any of the various techniques of this disclosure in various combination. For example, motion parameters for a reference frame may be specified in terms of regions, whereas one or more slice groups may be specified for the current frame; a slice group may be associated with a region (explicitly or implicitly) and reference points from region may be used to predict points of the slice group. In another example, points in a region may be coded as a slice or a slice group., pars. 201-211).

Regarding claim 8: Pham Van satisfies all the elements of claim 7. Pham Van further discloses wherein the at least one reference space unit is at least one reference block or at least one planar region (see pars. 201-211, quoted in full in the rejection of claim 1 above), each of the at least one planar region being three-dimensional and having a height equal to a height of a bounding box of the current frame (A point cloud encoder/decoder (codec) may enclose the 3D space occupied by point cloud data in a virtual bounding box. The position of the points in the bounding box may be represented by a certain precision. Therefore, the point cloud codec may quantize positions of one or more points based on the precision. At the smallest level, the point cloud codec splits the bounding box into voxels, which are the smallest unit of space represented by a unit cube. A voxel in the bounding box may be associated with zero, one, or more than one point. The point cloud codec may split the bounding box into multiple cube/cuboid regions, which may be called tiles. The point cloud codec may code the tiles into one or more slices. The partitioning of the bounding box into slices and tiles may be based on a number of points in each partition, or based on other considerations (e.g., a particular region may be coded as tiles).
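Editor's sketch, for orientation only (not part of the Office Action record): the region-signalling scheme described in Pham Van's pars. 203-208, a signalled count N of regions, per-region coordinate bounds, and inferred defaults when a bound is not signalled, could look roughly like the following. All names, the laserId range, and the dictionary layout are invented for illustration.

```python
from dataclasses import dataclass
from typing import Tuple

# Inferred (default) bounds per coordinate when not signalled, per par. 208:
# e.g., unsignalled phi bounds are inferred as 0 to 360 degrees.
# The r and laserId defaults here are assumptions, not from the reference.
INFERRED_BOUNDS = {"r": (0.0, float("inf")), "phi": (0.0, 360.0), "laserId": (0, 63)}

@dataclass
class Region:
    """One of N signalled regions; a bound of None means 'not signalled'."""
    bounds: dict  # axis name -> (lower, upper), or None if not signalled

    def resolved_bounds(self, axis: str) -> Tuple[float, float]:
        b = self.bounds.get(axis)
        return b if b is not None else INFERRED_BOUNDS[axis]

    def contains(self, point: dict) -> bool:
        return all(
            self.resolved_bounds(a)[0] <= point[a] <= self.resolved_bounds(a)[1]
            for a in point
        )

# Per par. 204, N may be constrained below a small fixed value, such as 10.
MAX_REGIONS = 10

def classify(points, regions):
    """Map each point to the index of the first region containing it (or None)."""
    assert len(regions) <= MAX_REGIONS
    return [
        next((i for i, r in enumerate(regions) if r.contains(p)), None)
        for p in points
    ]
```

Per-region motion parameters (par. 209) would then be looked up by the returned region index before motion compensation.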
The slice regions may be further partitioned using splitting decisions similar to those in video codecs., par. 4).

Regarding claim 9: Pham Van satisfies all the elements of claim 7. Pham Van further discloses assigning a reference point of the target point to the at least one reference space unit, the reference point being in the reference frame (see pars. 201-211, quoted in full in the rejection of claim 1 above), and marking a reference space unit if a reference point is assigned to the reference space unit (G-PCC encoder 200 may derive the threshold that applies to a set (e.g., GOP, sequence, etc.) of two or more frames using various techniques: [0149] In the simplest case, G-PCC encoder 200 may select the threshold of the ordinal first frame in the set as the threshold for frames in the set. The ordinal first frame may be the ordinal first frame in the output order or the decoding order of the point cloud. [0150] In some examples, G-PCC encoder 200 may derive the threshold used according to a weighted average of thresholds derived for/applicable to two more frames in the set. For example, if there are 10 frames in the set, and t1.sub.i, t2.sub.i refers to the thresholds derived for the i-th frame, the final threshold may be derived as follows for n equal to 1 and 2:, par. 148).

Regarding claim 10: Pham Van satisfies all the elements of claim 6. Pham Van further discloses wherein, for a space unit in the current frame, a reference frame of the current frame comprises (see pars. 201-211, quoted in full in the rejection of claim 1 above) or does not comprise a reference space unit corresponding to the space unit.

Regarding claim 11: Pham Van satisfies all the elements of claim 6. Pham Van further discloses classifying at least a part of points in the current frame into the first set of classes (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points. For example, G-PCC encoder 200 may determine a top threshold and a bottom threshold, generally representing a top and bottom of the ground or road. Thus, if points are between these two thresholds, the points may be classified as ground points, and other points (e.g., points above the top threshold or below the bottom threshold) may be classified as object points. G-PCC encoder 200 may encode data representing the top and bottom thresholds in a data structure, such as a sequence parameter set (SPS), geometry parameter set (GPS), or geometry data unit header (GDH). G-PCC encoder 200 and G-PCC decoder 300 may therefore encode or decode occupancy of nodes above the top threshold or below the bottom threshold using a global motion vector and nodes between the top and bottom threshold using a second, different global motion vector, local motion vectors, intra-prediction, or other different prediction techniques., par. 59), whether a point in a space unit is to be classified being dependent on a reference space unit in a reference frame of the current frame, the reference space unit corresponding to the space unit (see pars. 201-211, quoted in full in the rejection of claim 1 above); and performing the conversion based on the classification (see pars. 201-210, quoted in full in the rejection of claim 1 above).

Regarding claim 12: Pham Van satisfies all the elements of claim 11. Pham Van further discloses wherein the point is classified, if at least one reference point belongs to the reference space unit, or wherein how to classify a point in a space unit is dependent on a classification condition, or wherein the first set of classes comprise a first class associated with object points and a second class associated with road points, and the part of points is classified into the first class or the second class based on a plurality of thresholds (see par. 59, quoted in full in the rejection of claim 11 above).

Regarding claim 13: Pham Van satisfies all the elements of claim 1. Pham Van further discloses wherein performing the conversion based on the classification comprises: determining global motion information for the current frame based on the classification (see par. 59, quoted in full in the rejection of claim 11 above); and performing the conversion based on the global motion information (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points.
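Editor's sketch, for orientation only (not part of the Office Action record): the ground/object split in Pham Van's par. 59, which the rejection relies on repeatedly, amounts to a two-threshold test on point height, with each class then eligible for a different prediction mode. Function and variable names below are assumptions for illustration.

```python
def classify_point(z: float, top: float, bottom: float) -> str:
    """Per par. 59: points between the bottom and top thresholds are 'ground'
    points; points above the top or below the bottom are 'object' points.
    The thresholds would be signalled in a data structure such as an SPS,
    GPS, or geometry data unit header."""
    return "ground" if bottom <= z <= top else "object"

def split_frame(points, top: float, bottom: float):
    """Partition a frame's (x, y, z) points so that each class can use a
    different global motion vector (or local MVs / intra prediction)."""
    ground = [p for p in points if classify_point(p[2], top, bottom) == "ground"]
    objects = [p for p in points if classify_point(p[2], top, bottom) == "object"]
    return ground, objects
```

The point of the split is that the road surface tends to follow the sensor's ego-motion, while objects may move independently, so coding each class with its own motion model can improve compression.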
[…], par. 59).

Regarding claim 14: Pham Van satisfies all the elements of claim 13. Pham Van further discloses wherein the global motion information comprises a global motion matrix determined by a least mean square (LMS) algorithm with samples and reference samples, the samples being determined based on points in the current frame, the reference samples being determined based on reference points in a reference frame of the current frame, or wherein the global motion information comprises a global motion matrix, and performing the conversion based on the global motion information (FIG. 9 is a flowchart illustrating an example process for estimating global motion. In the InterEM software, the global motion matrix is defined to match feature points between the prediction frame (reference) and the current frame. FIG. 9 illustrates the pipeline for estimating global motion. The global motion estimation algorithm may be divided into three steps: finding feature points (410), sampling feature points pairs (412), and motion estimation using a Least Mean Square (LMS) algorithm (414). [0128] The algorithm defines feature points to be those points that have large position change between the prediction frame and current frame. For each point in the current frame, G-PCC encoder 200 finds the closest point in the prediction frame and builds point pairs between the current frame and the prediction frame. If the distance between the paired points is greater than a threshold, G-PCC encoder 200 regards the paired points as feature points. [0129] After finding the feature points, G-PCC encoder 200 performs a sampling on the feature points to reduce the scale of the problem (e.g., by choosing a subset of feature points to reduce the complexity of motion estimation). Then, G-PCC encoder 200 applies the LMS algorithm to derive motion parameters by attempting to reduce the error between respective features points in the prediction frame and the current frame., pars. 127-129 and Fig. 9) comprises: obtaining a reference frame with motion compensation by applying the global motion matrix to all of points in a reference frame of the current frame (see pars. 127-129 and Fig. 9, quoted above); and performing the conversion based on the reference frame with motion compensation (see pars. 127-129 and Fig. 9, quoted above).

Regarding claim 15: Pham Van satisfies all the elements of claim 1. Pham Van further discloses wherein the conversion includes encoding the current frame into the bitstream, or wherein the conversion includes decoding the current frame from the bitstream (In some examples, G-PCC encoder 200 and G-PCC decoder 300 may be configured to specify one or more regions within a point cloud. G-PCC encoder 200 and G-PCC decoder 300 may further associate motion parameters with each region. G-PCC encoder 200 and G-PCC decoder 300 may code data in the bitstream representing the motion parameters associated with a region. G-PCC encoder 200 and G-PCC decoder 300 may use the motion parameters to compensate positions of points. G-PCC encoder 200 and G-PCC decoder 300 may use the compensated points as reference/prediction for coding the position of a point in a current frame. In some cases, the use of regions (compared with slices) for classification may achieve better compression performance, because G-PCC encoder 200 and G-PCC decoder 300 may code points belonging to different regions together. [0202] 1. G-PCC encoder 200 and G-PCC decoder 300 may code data representing one or more regions in a point cloud. [0203] a. G-PCC encoder 200 and G-PCC decoder 300 may code a value N representing the number of regions, as well as data representing parameters that specify each of the N regions. [0204] i. In some examples, N may be restricted to be within a certain value range (e.g., N may be constrained to less than a fixed value, such as 10).
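Editor's sketch, for orientation only (not part of the Office Action record): the three-step global motion pipeline cited against claim 14 (pair each current point with its closest reference point, keep large-displacement pairs as feature points, sample them, then least-squares motion estimation) can be illustrated as follows. For simplicity this solves a translation-only model in closed form, whereas the reference derives a full global motion matrix; all names and parameter values are assumptions.

```python
import math
import random

def nearest(p, frame):
    """Closest point in the prediction (reference) frame to p."""
    return min(frame, key=lambda q: math.dist(p, q))

def estimate_global_motion(current, reference, feature_thresh=1.0, sample_size=100):
    # Step 1 (par. 128): build point pairs; pairs whose distance exceeds a
    # threshold are treated as feature points (large position change).
    pairs = [(p, nearest(p, reference)) for p in current]
    features = [pr for pr in pairs if math.dist(pr[0], pr[1]) > feature_thresh]
    # Step 2 (par. 129): sample the feature pairs to reduce problem size.
    if len(features) > sample_size:
        features = random.sample(features, sample_size)
    if not features:
        return (0.0, 0.0, 0.0)  # no large motion detected
    # Step 3 (par. 129): least-squares fit. For a pure translation the
    # minimizing motion is the mean displacement over the sampled pairs.
    n = len(features)
    return tuple(sum(p[i] - q[i] for p, q in features) / n for i in range(3))
```

A real G-PCC encoder would instead solve for a rotation-plus-translation matrix at this last step, minimizing the residual between transformed reference feature points and current feature points.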
[0205] b. G-PCC encoder 200 and G-PCC decoder 300 may code the parameter of each region in the bitstream. In some examples, a region may be specified using one or more of the following parameters: [0206] i. An upper bound and lower bound for x, y, and z coordinates defining the region (or any other coordinate system used to code the point cloud). [0207] ii. In some examples, one or more of upper or lower bound may not be specified; in this case, G-PCC encoder 200 and G-PCC decoder 300 may use default values appropriate to the coordinate and the coordinate system as an inferred value. [0208] 1. For example, in a spherical domain (r, phi, laserId), if bounds for phi are not signalled, then the upper and lower bound may be inferred to correspond to 360 degrees and 0 degrees, respectively. [0209] 2. Motion parameters may be associated with each region; motion compensation may be applied to one or more points belonging to the region to obtain compensated position/points; compensated positions/points may be used as reference for prediction of points in a current points cloud frame. [0210] a. One or more methods disclosed in this disclosure of signalling motion parameters may be applied to signal the motion parameters of each region. For example, G-PCC encoder 200 and G-PCC decoder 300 may code motion parameters for each region in a parameter set (e.g., SPS, GPS), or other parts of the bitstream (e.g., slice header, or a separate syntax structure)., pars. 201-210). Regarding claim 16: Pham Van discloses determining, during a conversion between a current frame of a point cloud sequence and a bitstream of the point cloud sequence (In some examples, G-PCC encoder 200 and G-PCC decoder 300 may be configured to specify one or more regions within a point cloud. G-PCC encoder 200 and G-PCC decoder 300 may further associate motion parameters with each region. 
G-PCC encoder 200 and G-PCC decoder 300 may code data in the bitstream representing the motion parameters associated with a region. G-PCC encoder 200 and G-PCC decoder 300 may use the motion parameters to compensate positions of points. G-PCC encoder 200 and G-PCC decoder 300 may use the compensated points as reference/prediction for coding the position of a point in a current frame. In some cases, the use of regions (compared with slices) for classification may achieve better compression performance, because G-PCC encoder 200 and G-PCC decoder 300 may code points belonging to different regions together. [0202] 1. G-PCC encoder 200 and G-PCC decoder 300 may code data representing one or more regions in a point cloud. [0203] a. G-PCC encoder 200 and G-PCC decoder 300 may code a value N representing the number of regions, as well as data representing parameters that specify each of the N regions. [0204] i. In some examples, N may be restricted to be within a certain value range (e.g., N may be constrained to less than a fixed value, such as 10). [0205] b. G-PCC encoder 200 and G-PCC decoder 300 may code the parameter of each region in the bitstream. In some examples, a region may be specified using one or more of the following parameters: [0206] i. An upper bound and lower bound for x, y, and z coordinates defining the region (or any other coordinate system used to code the point cloud). [0207] ii. In some examples, one or more of upper or lower bound may not be specified; in this case, G-PCC encoder 200 and G-PCC decoder 300 may use default values appropriate to the coordinate and the coordinate system as an inferred value. [0208] 1. For example, in a spherical domain (r, phi, laserId), if bounds for phi are not signalled, then the upper and lower bound may be inferred to correspond to 360 degrees and 0 degrees, respectively. [0209] 2. 
Motion parameters may be associated with each region; motion compensation may be applied to one or more points belonging to the region to obtain compensated position/points; compensated positions/points may be used as reference for prediction of points in a current points cloud frame. [0210] a. One or more methods disclosed in this disclosure of signalling motion parameters may be applied to signal the motion parameters of each region. For example, G-PCC encoder 200 and G-PCC decoder 300 may code motion parameters for each region in a parameter set (e.g., SPS, GPS), or other parts of the bitstream (e.g., slice header, or a separate syntax structure)., pars. 201-210), global motion information for the current frame based on a plurality of reference frames of the current frame (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points. For example, G-PCC encoder 200 may determine a top threshold and a bottom threshold, generally representing a top and bottom of the ground or road. Thus, if points are between these two thresholds, the points may be classified as ground points, and other points (e.g., points above the top threshold or below the bottom threshold) may be classified as object points. G-PCC encoder 200 may encode data representing the top and bottom thresholds in a data structure, such as a sequence parameter set (SPS), geometry parameter set (GPS), or geometry data unit header (GDH). G-PCC encoder 200 and G-PCC decoder 300 may therefore encode or decode occupancy of nodes above the top threshold or below the bottom threshold using a global motion vector and nodes between the top and bottom threshold using a second, different global motion vector, local motion vectors, intra-prediction, or other different prediction techniques., par.
59); and performing the conversion based on the global motion information (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points. For example, G-PCC encoder 200 may determine a top threshold and a bottom threshold, generally representing a top and bottom of the ground or road. Thus, if points are between these two thresholds, the points may be classified as ground points, and other points (e.g., points above the top threshold or below the bottom threshold) may be classified as object points. G-PCC encoder 200 may encode data representing the top and bottom thresholds in a data structure, such as a sequence parameter set (SPS), geometry parameter set (GPS), or geometry data unit header (GDH). G-PCC encoder 200 and G-PCC decoder 300 may therefore encode or decode occupancy of nodes above the top threshold or below the bottom threshold using a global motion vector and nodes between the top and bottom threshold using a second, different global motion vector, local motion vectors, intra-prediction, or other different prediction techniques., par. 59). Regarding claim 17: Pham Van satisfies all the elements of claim 16. Pham Van further discloses wherein determining the global motion information comprises: classifying at least a part of points in the current frame based on a set of planar regions, each of the set of planar regions being three-dimensional and having a height equal to a height of a bounding box of the current frame (A point cloud encoder/decoder (codec) may enclose the 3D space occupied by point cloud data in a virtual bounding box. The position of the points in the bounding box may be represented by a certain precision. Therefore, the point cloud codec may quantize positions of one or more points based on the precision. 
At the smallest level, the point cloud codec splits the bounding box into voxels, which are the smallest unit of space represented by a unit cube. A voxel in the bounding box may be associated with zero, one, or more than one point. The point cloud codec may split the bounding box into multiple cube/cuboid regions, which may be called tiles. The point cloud codec may code the tiles into one or more slices. The partitioning of the bounding box into slices and tiles may be based on a number of points in each partition, or based on other considerations (e.g., a particular region may be coded as tiles). The slice regions may be further partitioned using splitting decisions similar to those in video codecs., par. 4); and determining the global motion information based on the classification (FIG. 9 is a flowchart illustrating an example process for estimating global motion. In the InterEM software, the global motion matrix is defined to match feature points between the prediction frame (reference) and the current frame. FIG. 9 illustrates the pipeline for estimating global motion. The global motion estimation algorithm may be divided into three steps: finding feature points (410), sampling feature points pairs (412), and motion estimation using a Least Mean Square (LMS) algorithm (414). [0128] The algorithm defines feature points to be those points that have large position change between the prediction frame and current frame. For each point in the current frame, G-PCC encoder 200 finds the closest point in the prediction frame and builds point pairs between the current frame and the prediction frame. If the distance between the paired points is greater than a threshold, G-PCC encoder 200 regards the paired points as feature points. [0129] After finding the feature points, G-PCC encoder 200 performs a sampling on the feature points to reduce the scale of the problem (e.g., by choosing a subset of feature points to reduce the complexity of motion estimation). 
Then, G-PCC encoder 200 applies the LMS algorithm to derive motion parameters by attempting to reduce the error between respective features points in the prediction frame and the current frame., pars. 127-129 and Fig. 9). Regarding claim 18: Pham Van satisfies all the elements of claim 16. Pham Van further discloses wherein determining the global motion information comprises: assigning a target point in the current frame to one of a plurality of space units for a global motion estimation process of the current frame (This disclosure describes techniques for labeling ground and objects to improve the performance of global motion estimation. In particular, G-PCC encoder 200 and G-PCC decoder 300 may be configured to classify ground/road and object data in a point cloud, which may improve the performance of global motion estimation., par. 136); and classifying the target point based on the assignment (This disclosure describes techniques for labeling ground and objects to improve the performance of global motion estimation. In particular, G-PCC encoder 200 and G-PCC decoder 300 may be configured to classify ground/road and object data in a point cloud, which may improve the performance of global motion estimation., par. 136); classifying the target point into a set of classes (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points. For example, G-PCC encoder 200 may determine a top threshold and a bottom threshold, generally representing a top and bottom of the ground or road. Thus, if points are between these two thresholds, the points may be classified as ground points, and other points (e.g., points above the top threshold or below the bottom threshold) may be classified as object points. 
G-PCC encoder 200 may encode data representing the top and bottom thresholds in a data structure, such as a sequence parameter set (SPS), geometry parameter set (GPS), or geometry data unit header (GDH). G-PCC encoder 200 and G-PCC decoder 300 may therefore encode or decode occupancy of nodes above the top threshold or below the bottom threshold using a global motion vector and nodes between the top and bottom threshold using a second, different global motion vector, local motion vectors, intra-prediction, or other different prediction techniques., par. 59); and determining the global motion information based on the classification (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points. For example, G-PCC encoder 200 may determine a top threshold and a bottom threshold, generally representing a top and bottom of the ground or road. Thus, if points are between these two thresholds, the points may be classified as ground points, and other points (e.g., points above the top threshold or below the bottom threshold) may be classified as object points. G-PCC encoder 200 may encode data representing the top and bottom thresholds in a data structure, such as a sequence parameter set (SPS), geometry parameter set (GPS), or geometry data unit header (GDH). G-PCC encoder 200 and G-PCC decoder 300 may therefore encode or decode occupancy of nodes above the top threshold or below the bottom threshold using a global motion vector and nodes between the top and bottom threshold using a second, different global motion vector, local motion vectors, intra-prediction, or other different prediction techniques., par. 59). Regarding claim 19: Pham Van satisfies all the elements of claim 16. 
Pham Van further discloses wherein the conversion includes encoding the current frame into the bitstream, or wherein the conversion includes decoding the current frame from the bitstream (In some examples, G-PCC encoder 200 and G-PCC decoder 300 may be configured to specify one or more regions within a point cloud. G-PCC encoder 200 and G-PCC decoder 300 may further associate motion parameters with each region. G-PCC encoder 200 and G-PCC decoder 300 may code data in the bitstream representing the motion parameters associated with a region. G-PCC encoder 200 and G-PCC decoder 300 may use the motion parameters to compensate positions of points. G-PCC encoder 200 and G-PCC decoder 300 may use the compensated points as reference/prediction for coding the position of a point in a current frame. In some cases, the use of regions (compared with slices) for classification may achieve better compression performance, because G-PCC encoder 200 and G-PCC decoder 300 may code points belonging to different regions together. [0202] 1. G-PCC encoder 200 and G-PCC decoder 300 may code data representing one or more regions in a point cloud. [0203] a. G-PCC encoder 200 and G-PCC decoder 300 may code a value N representing the number of regions, as well as data representing parameters that specify each of the N regions. [0204] i. In some examples, N may be restricted to be within a certain value range (e.g., N may be constrained to less than a fixed value, such as 10). [0205] b. G-PCC encoder 200 and G-PCC decoder 300 may code the parameter of each region in the bitstream. In some examples, a region may be specified using one or more of the following parameters: [0206] i. An upper bound and lower bound for x, y, and z coordinates defining the region (or any other coordinate system used to code the point cloud). [0207] ii. 
In some examples, one or more of upper or lower bound may not be specified; in this case, G-PCC encoder 200 and G-PCC decoder 300 may use default values appropriate to the coordinate and the coordinate system as an inferred value. [0208] 1. For example, in a spherical domain (r, phi, laserId), if bounds for phi are not signalled, then the upper and lower bound may be inferred to correspond to 360 degrees and 0 degrees, respectively. [0209] 2. Motion parameters may be associated with each region; motion compensation may be applied to one or more points belonging to the region to obtain compensated position/points; compensated positions/points may be used as reference for prediction of points in a current points cloud frame. [0210] a. One or more methods disclosed in this disclosure of signalling motion parameters may be applied to signal the motion parameters of each region. For example, G-PCC encoder 200 and G-PCC decoder 300 may code motion parameters for each region in a parameter set (e.g., SPS, GPS), or other parts of the bitstream (e.g., slice header, or a separate syntax structure)., pars. 201-210). Regarding claim 20: Pham Van further discloses classifying, during a conversion between a current frame of a point cloud sequence and a bitstream of the point cloud sequence (In some examples, G-PCC encoder 200 and G-PCC decoder 300 may be configured to specify one or more regions within a point cloud. G-PCC encoder 200 and G-PCC decoder 300 may further associate motion parameters with each region. G-PCC encoder 200 and G-PCC decoder 300 may code data in the bitstream representing the motion parameters associated with a region. G-PCC encoder 200 and G-PCC decoder 300 may use the motion parameters to compensate positions of points. G-PCC encoder 200 and G-PCC decoder 300 may use the compensated points as reference/prediction for coding the position of a point in a current frame. 
In some cases, the use of regions (compared with slices) for classification may achieve better compression performance, because G-PCC encoder 200 and G-PCC decoder 300 may code points belonging to different regions together. [0202] 1. G-PCC encoder 200 and G-PCC decoder 300 may code data representing one or more regions in a point cloud. [0203] a. G-PCC encoder 200 and G-PCC decoder 300 may code a value N representing the number of regions, as well as data representing parameters that specify each of the N regions. [0204] i. In some examples, N may be restricted to be within a certain value range (e.g., N may be constrained to less than a fixed value, such as 10). [0205] b. G-PCC encoder 200 and G-PCC decoder 300 may code the parameter of each region in the bitstream. In some examples, a region may be specified using one or more of the following parameters: [0206] i. An upper bound and lower bound for x, y, and z coordinates defining the region (or any other coordinate system used to code the point cloud). [0207] ii. In some examples, one or more of upper or lower bound may not be specified; in this case, G-PCC encoder 200 and G-PCC decoder 300 may use default values appropriate to the coordinate and the coordinate system as an inferred value. [0208] 1. For example, in a spherical domain (r, phi, laserId), if bounds for phi are not signalled, then the upper and lower bound may be inferred to correspond to 360 degrees and 0 degrees, respectively. [0209] 2. Motion parameters may be associated with each region; motion compensation may be applied to one or more points belonging to the region to obtain compensated position/points; compensated positions/points may be used as reference for prediction of points in a current points cloud frame. [0210] a. One or more methods disclosed in this disclosure of signalling motion parameters may be applied to signal the motion parameters of each region. 
For example, G-PCC encoder 200 and G-PCC decoder 300 may code motion parameters for each region in a parameter set (e.g., SPS, GPS), or other parts of the bitstream (e.g., slice header, or a separate syntax structure)., pars. 201-210), at least one of: a target point in the current frame into a first set of classes based on a second set of thresholds, the number of thresholds in the second set being larger than the number of classes in the first set (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points. For example, G-PCC encoder 200 may determine a top threshold and a bottom threshold, generally representing a top and bottom of the ground or road. Thus, if points are between these two thresholds, the points may be classified as ground points, and other points (e.g., points above the top threshold or below the bottom threshold) may be classified as object points. G-PCC encoder 200 may encode data representing the top and bottom thresholds in a data structure, such as a sequence parameter set (SPS), geometry parameter set (GPS), or geometry data unit header (GDH). G-PCC encoder 200 and G-PCC decoder 300 may therefore encode or decode occupancy of nodes above the top threshold or below the bottom threshold using a global motion vector and nodes between the top and bottom threshold using a second, different global motion vector, local motion vectors, intra-prediction, or other different prediction techniques., par. 
59), a target point in the current frame into a first set of classes based on a second set of thresholds, the number of thresholds in the second set being equal to the number of classes in the first set (In some examples, in addition or in the alternative to the techniques discussed above, G-PCC encoder 200 and G-PCC decoder 300 may be configured to implicitly classify points, e.g., as object or ground points, through coding points in slices corresponding to the classes. For example, if the point cloud includes object and ground (or road) point classes, G-PCC encoder 200 and G-PCC decoder 300 may code an object slice including a first subset of points that are all classified as object points, and a ground or road slice including a second subset of points that are all classified as ground or road points. More than two classes may be used in this way. In general, G-PCC encoder 200 and G-PCC decoder 300 may be configured to determine that there is one slice for each class of points, and that all points within a given slice are to be classified according to the corresponding class for the given slice. An explicit classification algorithm is not necessary in this example, which may reduce computations to be performed by G-PCC encoder 200 and G-PCC decoder 300., par. 163), at least one threshold in the second set is generated based on a further frame of the point cloud sequence (In this manner, the techniques of this disclosure may result in more efficient coding of object points. Rather than coding points in the point cloud using respective local motion vectors, all of the object points between respective clouds may be predicted using a single global motion vector. Thus, signaling overhead related to signaling motion information for the object points may be drastically reduced. Moreover, because it may be largely assumed that ground points will remain constant between frames, the coding techniques for the ground points may consume a relatively low number of bits., par. 
60), a target point in the current frame into a first set of classes based on a second set of thresholds (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points. For example, G-PCC encoder 200 may determine a top threshold and a bottom threshold, generally representing a top and bottom of the ground or road. Thus, if points are between these two thresholds, the points may be classified as ground points, and other points (e.g., points above the top threshold or below the bottom threshold) may be classified as object points. G-PCC encoder 200 may encode data representing the top and bottom thresholds in a data structure, such as a sequence parameter set (SPS), geometry parameter set (GPS), or geometry data unit header (GDH). G-PCC encoder 200 and G-PCC decoder 300 may therefore encode or decode occupancy of nodes above the top threshold or below the bottom threshold using a global motion vector and nodes between the top and bottom threshold using a second, different global motion vector, local motion vectors, intra-prediction, or other different prediction techniques., par. 59), the number of thresholds in the second set being equal to the number of classes in the first set (G-PCC encoder 200 may determine threshold values for classifying points into either ground/road points (generally referred to as “ground” points hereinafter) or object points. For example, G-PCC encoder 200 may determine a top threshold and a bottom threshold, generally representing a top and bottom of the ground or road. Thus, if points are between these two thresholds, the points may be classified as ground points, and other points (e.g., points above the top threshold or below the bottom threshold) may be classified as object points. 
G-PCC encoder 200 may encode data representing the top and bottom thresholds in a data structure, such as a sequence parameter set (SPS), geometry parameter set (GPS), or geometry data unit header (GDH). G-PCC encoder 200 and G-PCC decoder 300 may therefore encode or decode occupancy of nodes above the top threshold or below the bottom threshold using a global motion vector and nodes between the top and bottom threshold using a second, different global motion vector, local motion vectors, intra-prediction, or other different prediction techniques., par. 59), at least one threshold in the second set being generated based on a histogram of geometry features of points in the point cloud sequence, the histogram being generated based on a characteristic of the point (FIG. 12 is a graph 460 illustrating an example derivation of thresholds using a histogram according to the techniques of this disclosure. Graph 460 represents an example histogram for collected heights (z-values) of point cloud data. G-PCC encoder 200 may calculate thresholds z_bottom 462 and z_top 464 using the histogram., par. 212); and performing the conversion based on the classification (In some examples, G-PCC encoder 200 and G-PCC decoder 300 may be configured to specify one or more regions within a point cloud. G-PCC encoder 200 and G-PCC decoder 300 may further associate motion parameters with each region. G-PCC encoder 200 and G-PCC decoder 300 may code data in the bitstream representing the motion parameters associated with a region. G-PCC encoder 200 and G-PCC decoder 300 may use the motion parameters to compensate positions of points. G-PCC encoder 200 and G-PCC decoder 300 may use the compensated points as reference/prediction for coding the position of a point in a current frame. 
In some cases, the use of regions (compared with slices) for classification may achieve better compression performance, because G-PCC encoder 200 and G-PCC decoder 300 may code points belonging to different regions together. [0202] 1. G-PCC encoder 200 and G-PCC decoder 300 may code data representing one or more regions in a point cloud. [0203] a. G-PCC encoder 200 and G-PCC decoder 300 may code a value N representing the number of regions, as well as data representing parameters that specify each of the N regions. [0204] i. In some examples, N may be restricted to be within a certain value range (e.g., N may be constrained to less than a fixed value, such as 10). [0205] b. G-PCC encoder 200 and G-PCC decoder 300 may code the parameter of each region in the bitstream. In some examples, a region may be specified using one or more of the following parameters: [0206] i. An upper bound and lower bound for x, y, and z coordinates defining the region (or any other coordinate system used to code the point cloud). [0207] ii. In some examples, one or more of upper or lower bound may not be specified; in this case, G-PCC encoder 200 and G-PCC decoder 300 may use default values appropriate to the coordinate and the coordinate system as an inferred value. [0208] 1. For example, in a spherical domain (r, phi, laserId), if bounds for phi are not signalled, then the upper and lower bound may be inferred to correspond to 360 degrees and 0 degrees, respectively. [0209] 2. Motion parameters may be associated with each region; motion compensation may be applied to one or more points belonging to the region to obtain compensated position/points; compensated positions/points may be used as reference for prediction of points in a current points cloud frame. [0210] a. One or more methods disclosed in this disclosure of signalling motion parameters may be applied to signal the motion parameters of each region. 
For example, G-PCC encoder 200 and G-PCC decoder 300 may code motion parameters for each region in a parameter set (e.g., SPS, GPS), or other parts of the bitstream (e.g., slice header, or a separate syntax structure)., pars. 201-210).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHARLOTTE M BAKER whose telephone number is (571)272-7459. The examiner can normally be reached Mon - Fri 8:00-5:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JENNIFER MEHMOOD, can be reached at (571)272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHARLOTTE M BAKER/
Primary Examiner, Art Unit 2664
29 January 2026
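The two mechanisms the rejection cites most often, threshold-based ground/object classification (par. 59) and histogram-derived thresholds (par. 212, Fig. 12), can be sketched as follows. This is a minimal illustration, not code from the reference: the bin count and the one-bin-width padding rule are assumptions, since Pham Van only says z_bottom and z_top are calculated "using the histogram".

```python
def derive_thresholds(z_values, num_bins=64):
    # Height histogram (cf. par. 212, Fig. 12): treat the most populated
    # bin as the road surface and pad it by one bin width on each side.
    # The bin count and the padding rule are assumptions; the reference
    # only says the thresholds are calculated "using the histogram".
    lo, hi = min(z_values), max(z_values)
    width = (hi - lo) / num_bins or 1.0
    counts = [0] * num_bins
    for z in z_values:
        counts[min(int((z - lo) / width), num_bins - 1)] += 1
    road = counts.index(max(counts))
    return lo + (road - 1) * width, lo + (road + 2) * width  # z_bottom, z_top

def classify_point(z, z_bottom, z_top):
    # Par. 59: points between the two thresholds are ground points;
    # points above the top or below the bottom are object points.
    return "ground" if z_bottom <= z <= z_top else "object"
```

With heights clustered near z = 0 and a few outliers, the derived band straddles zero, so the clustered points classify as ground and the outliers as objects.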

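The global motion estimation pipeline quoted in the rejection (pars. 127-129, Fig. 9) has three steps: find feature points, sample feature-point pairs, and estimate motion with a least-mean-square fit. The sketch below works under two simplifying assumptions that are not how InterEM implements it: a translation-only motion model (InterEM fits a full motion matrix) and brute-force nearest-neighbour search.

```python
import math
import random

def estimate_global_motion(current, prediction, dist_thresh=1.0,
                           sample=256, seed=0):
    # Pipeline per pars. 127-129 / Fig. 9: (1) pair each current point
    # with its closest point in the prediction frame and keep pairs whose
    # distance exceeds a threshold as feature pairs, (2) subsample the
    # pairs, (3) least-squares motion fit. Simplified to translation-only.
    pairs = []
    for p in current:
        q = min(prediction, key=lambda r: math.dist(p, r))  # closest point
        if math.dist(p, q) > dist_thresh:                   # feature pair
            pairs.append((p, q))
    random.Random(seed).shuffle(pairs)
    pairs = pairs[:sample]                                  # subsample
    if not pairs:
        return (0.0, 0.0, 0.0)
    # Least-squares translation = mean displacement over sampled pairs.
    n = len(pairs)
    return tuple(sum(p[k] - q[k] for p, q in pairs) / n for k in range(3))
```

Points that barely move (distance below the threshold) never become feature pairs, so a static ground plane does not dilute the estimated motion, which is the stated purpose of the ground/object labeling in par. 136.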
Prosecution Timeline

Dec 28, 2023
Application Filed
Jan 29, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602905
A Computer Software Module Arrangement, a Circuitry Arrangement, an Arrangement and a Method for Improved Object Detection Adapting the Detection through Shifting the Image
2y 5m to grant Granted Apr 14, 2026
Patent 12585654
Dynamic Vision System for Robot Fleet Management
2y 5m to grant Granted Mar 24, 2026
Patent 12579900
UAV PERCEPTION VALIDATION BASED UPON A SEMANTIC AGL ESTIMATE
2y 5m to grant Granted Mar 17, 2026
Patent 12548331
TECHNIQUES TO PERFORM TRAJECTORY PREDICTIONS
2y 5m to grant Granted Feb 10, 2026
Patent 12543924
MEDICAL SUPPORT SYSTEM, MEDICAL SUPPORT DEVICE, AND MEDICAL SUPPORT METHOD
2y 5m to grant Granted Feb 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
93%
Grant Probability
93%
With Interview (-0.2%)
2y 2m
Median Time to Grant
Low
PTA Risk
Based on 1067 resolved cases by this examiner. Grant probability derived from career allow rate.
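As a sanity check, the 93% grant probability shown here is simply the career allow rate computed from the counts in the examiner summary (991 granted of 1067 resolved):

```python
# Figures taken from the examiner summary above.
granted, resolved = 991, 1067
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 92.9%, shown rounded as the 93% grant probability
```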
