Prosecution Insights
Last updated: April 19, 2026
Application No. 18/849,322

POINT CLOUD DATA TRANSMISSION DEVICE, POINT CLOUD DATA TRANSMISSION METHOD, POINT CLOUD DATA RECEPTION DEVICE, AND POINT CLOUD DATA RECEPTION METHOD

Status: Non-Final Office Action (Round 1)
Rejections: §102, §103, §DP (nonstatutory double patenting)
Filed: Sep 20, 2024
Examiner: WONG, ALLEN C
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: LG Electronics Inc.

Prediction: Favorable
Grant probability: 83% (95% with interview)
Projected OA rounds: 1-2
Projected time to grant: 2y 11m

Examiner Intelligence

Career allowance rate: 83% (669 granted / 805 resolved; +25.1% vs. TC avg; above average)
Interview lift: +11.8% (moderate), comparing resolved cases with vs. without an interview
Typical timeline: 2y 11m average prosecution; 27 applications currently pending
Career history: 832 total applications across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs. TC avg)
§103: 41.6% (+1.6% vs. TC avg)
§102: 16.5% (-23.5% vs. TC avg)
§112: 9.8% (-30.2% vs. TC avg)

Tech Center averages are estimates. Based on career data from 805 resolved cases.

Office Action

Grounds: §102, §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 11/7/24 and 8/6/25 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 8-9 and 15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sinharoy (US 2019/0197739).

Regarding claim 1, Sinharoy discloses a method of transmitting point cloud data (paragraph [67], fig. 5, Sinharoy discloses encoder 500 encodes point cloud data as an encoded bitstream 525, wherein the point cloud data is transmitted over a network), the method comprising: encoding point cloud data (paragraph [67], fig. 5, Sinharoy discloses encoder 500 encodes point cloud data as an encoded bitstream 525); and transmitting a bitstream containing the point cloud data (paragraph [67], fig. 5, Sinharoy discloses encoder 500 encodes point cloud data as an encoded bitstream 525, wherein the point cloud data is transmitted over a network to a decoder).

Regarding claim 8, Sinharoy discloses a device for transmitting point cloud data (paragraph [67], fig. 5, Sinharoy discloses encoder 500 encodes point cloud data as an encoded bitstream 525, wherein the point cloud data is transmitted over a network), comprising: an encoder configured to encode point cloud data (paragraph [67], fig. 5, Sinharoy discloses encoder 500 encodes point cloud data as an encoded bitstream 525); and a transmitter configured to transmit the point cloud data (paragraph [67], fig. 5, Sinharoy discloses encoder 500 encodes point cloud data as an encoded bitstream 525, wherein the point cloud data is transmitted over a network to a decoder).

Regarding claim 9, Sinharoy discloses a method of receiving point cloud data (paragraph [78], fig. 6, Sinharoy discloses decoder 600 decodes point cloud data by receiving a bitstream as encoded by encoder 500, wherein the bitstream includes geometry and attribute data), the method comprising: receiving a bitstream containing point cloud data (paragraph [78], fig. 6, Sinharoy discloses decoder 600 decodes point cloud data by receiving a bitstream as encoded by encoder 500 of fig. 5, wherein the bitstream includes geometry and attribute data); and decoding the point cloud data (paragraph [78], fig. 6, Sinharoy discloses decoder 600 decodes point cloud data by receiving a bitstream as encoded by encoder 500 of fig. 5).
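Editor's note: the §102 mapping above reads claims 1 and 8-9 (and claim 15, treated identically below) onto a bare encode-transmit-receive-decode pipeline. For orientation only, here is a minimal Python sketch of that flow, assuming a toy uncompressed geometry format; every identifier here is a hypothetical placeholder, not Sinharoy's or the application's actual design.

# Toy encode -> bitstream -> decode round trip mirroring the claim mapping.
# "encoder 500" / "bitstream 525" / "decoder 600" are Sinharoy's figure labels;
# the byte format below is invented for illustration.
import struct
from typing import List, Tuple

Point = Tuple[float, float, float]

def encode(points: List[Point]) -> bytes:
    """Encode point cloud data into a bitstream (cf. encoder 500 -> bitstream 525)."""
    out = struct.pack("<I", len(points))      # 4-byte point count header
    for x, y, z in points:
        out += struct.pack("<fff", x, y, z)   # 12 bytes of geometry per point
    return out

def decode(bitstream: bytes) -> List[Point]:
    """Decode a received bitstream back into points (cf. decoder 600)."""
    (n,) = struct.unpack_from("<I", bitstream, 0)
    return [struct.unpack_from("<fff", bitstream, 4 + 12 * i) for i in range(n)]

# "Transmitting over a network" stands in for any transport; a local variable here.
cloud = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
assert decode(encode(cloud)) == cloud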
Regarding claim 15, Sinharoy discloses a device for receiving point cloud data (paragraph [78], fig. 6, Sinharoy discloses decoder 600 decodes point cloud data by receiving a bitstream as encoded by encoder 500, wherein the bitstream includes geometry and attribute data), comprising: a receiver configured to receive point cloud data (paragraph [78], fig. 6, Sinharoy discloses decoder 600 decodes point cloud data by receiving a bitstream as encoded by encoder 500 of fig. 5, wherein the bitstream includes geometry and attribute data); and a decoder configured to decode the point cloud data (paragraph [78], fig. 6, Sinharoy discloses decoder 600 decodes point cloud data by receiving a bitstream as encoded by encoder 500 of fig. 5).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Sinharoy (US 2019/0197739) and Chen (US 2021/0263167) in view of Ermilios (US 2019/0104278).

Regarding claim 2, Sinharoy does not disclose wherein the encoding comprises: splitting road data from the point cloud data; and splitting the road data into a plurality of units; and grouping the road data belonging to a first unit of the units; and setting a motion search window for the grouped road data; and predicting a motion based on the motion search window, wherein the splitting of the road data is performed based on a threshold.
However, Chen teaches splitting road data from the point cloud data (paragraph [84], Chen discloses that laser point cloud data can be classified as road surface point cloud data and road-side point cloud data, and paragraph [85], Chen discloses that the road surface point cloud data can be separated into right side of road surface data and left side of road surface data); and splitting the road data into a plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side data into grid cells or plural units); and grouping the road data belonging to a first unit of the units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and thus, the grid cells pertaining to the road surface are grouped to belong to a first unit of the units), wherein the splitting of the road data is performed based on a threshold (paragraph [182], Chen discloses that a preset difference threshold is utilized for determining whether laser points are located on the boundary between the road and the regions on the two sides of the road, thus permitting the differentiation between road surface data and road-side data to delineate boundaries in the road data; thus, splitting road data is performed based on a threshold or boundary).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Sinharoy and Chen do not disclose setting a motion search window for the grouped road data; and predicting a motion based on the motion search window.

However, Ermilios teaches setting a motion search window for the grouped road data (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation); and predicting a motion based on the motion search window (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy, Chen and Ermilios together as a whole to reliably estimate motion data for driving applications in adverse environmental conditions (Ermilios' paragraph [3]).

Regarding claim 10, Sinharoy does not disclose wherein the decoding comprises: splitting road data from the point cloud data; and splitting the road data into a plurality of units; and grouping the road data belonging to a first unit of the units; and applying a motion to the grouped road data.
However, Chen teaches splitting road data from the point cloud data (paragraph [84], Chen discloses that laser point cloud data can be classified as road surface point cloud data and road-side point cloud data, and paragraph [85], Chen discloses that the road surface point cloud data can be separated into right side of road surface data and left side of road surface data); and splitting the road data into a plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side data into grid cells or plural units); and grouping the road data belonging to a first unit of the units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and thus, the grid cells pertaining to the road surface are grouped to belong to a first unit of the units).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Sinharoy and Chen do not disclose applying a motion to the grouped road data.

However, Ermilios teaches applying a motion to the grouped road data (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy, Chen and Ermilios together as a whole to reliably estimate motion data for driving applications in adverse environmental conditions (Ermilios' paragraph [3]).

Claims 3-4 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Sinharoy (US 2019/0197739), Chen (US 2021/0263167) and Ermilios (US 2019/0104278) in view of Zeng (US 2020/0410690).

Regarding claim 3, Sinharoy does not disclose wherein the splitting the road data into the plurality of units comprises: splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor, wherein the distance is a distance from a center of the point cloud data.

However, Chen teaches wherein the splitting the road data into the plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side data into grid cells or plural units).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).
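Editor's note: before turning to Zeng's distance-based teachings, the three Chen operations mapped above for claims 2-3 and 10 (threshold split of road from non-road points, division of the road data into grid-cell units, and grouping of the points in one unit) can be pictured with a short sketch. This is a hypothetical flat-ground heuristic with invented thresholds, not Chen's actual algorithm.

# Sketch of the claimed split / divide / group steps. Z_ROAD_MAX and CELL are
# made-up parameters; real road extraction is far more involved.
from collections import defaultdict
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]

Z_ROAD_MAX = 0.2   # hypothetical height threshold separating road from road-side
CELL = 10.0        # hypothetical grid-cell edge length, in meters

def split_road(points: List[Point]) -> List[Point]:
    """'Splitting road data from the point cloud data ... based on a threshold'."""
    return [p for p in points if p[2] <= Z_ROAD_MAX]

def split_into_units(road: List[Point]) -> Dict[Tuple[int, int], List[Point]]:
    """'Splitting the road data into a plurality of units' (grid cells, cf. Chen [266])."""
    units: Dict[Tuple[int, int], List[Point]] = defaultdict(list)
    for x, y, z in road:
        units[(int(x // CELL), int(y // CELL))].append((x, y, z))
    return units

points = [(1.0, 2.0, 0.05), (12.0, 3.0, 0.10), (5.0, 5.0, 1.80)]  # last point: road-side
units = split_into_units(split_road(points))
first_unit = units[(0, 0)]  # 'grouping the road data belonging to a first unit'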
Sinharoy, Chen and Ermilios do not disclose splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor, wherein the distance is a distance from a center of the point cloud data.

However, Zeng teaches splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained; paragraph [81], Zeng discloses that a laser sensor can be implemented for splitting road data by distinguishing objects of a scene; and paragraph [82], Zeng discloses that 3D point cloud data can be segmented), wherein the distance is a distance from a center of the point cloud data (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy, Chen, Ermilios and Zeng together as a whole for accurately determining objects in a monitored scene.

Regarding claim 4, Sinharoy does not disclose wherein the grouping of the road data belonging to the first unit comprises: grouping the road data based on at least one of the identifier, angle, or distance of the sensor, wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data.

However, Chen teaches wherein the grouping of the road data belonging to the first unit (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and thus, the grid cells pertaining to the road surface are grouped to belong to a first unit of the units).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Sinharoy, Chen and Ermilios do not disclose grouping the road data based on at least one of the identifier, angle, or distance of the sensor, wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data.
However, Zeng teaches grouping the road data based on at least one of the identifier, angle, or distance of the sensor (paragraph [141], Zeng discloses a plurality of datasets are combined according to distances between a plurality of segmented line segments for gathering appropriate datasets of point cloud data; paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained; paragraph [81], Zeng discloses that a laser sensor can be implemented for splitting road data by distinguishing objects of a scene; and paragraph [82], Zeng discloses that 3D point cloud data can be segmented), wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy, Chen, Ermilios and Zeng together as a whole for accurately determining objects in a monitored scene.

Regarding claim 11, Sinharoy does not disclose wherein the splitting the road data into the plurality of units comprises: splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor, wherein the distance is a distance from a center of the point cloud data.

However, Chen teaches wherein the splitting the road data into the plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side data into grid cells or plural units).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Sinharoy, Chen and Ermilios do not disclose splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor, wherein the distance is a distance from a center of the point cloud data.
However, Zeng teaches splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained; paragraph [81], Zeng discloses that a laser sensor can be implemented for splitting road data by distinguishing objects of a scene; and paragraph [82], Zeng discloses that 3D point cloud data can be segmented), wherein the distance is a distance from a center of the point cloud data (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy, Chen, Ermilios and Zeng together as a whole for accurately determining objects in a monitored scene.

Regarding claim 12, Sinharoy does not disclose wherein the grouping of the road data belonging to the first unit comprises: grouping the road data based on at least one of the identifier, angle, or distance of the sensor, wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data.

However, Chen teaches wherein the grouping of the road data belonging to the first unit (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and thus, the grid cells pertaining to the road surface are grouped to belong to a first unit of the units).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Sinharoy, Chen and Ermilios do not disclose grouping the road data based on at least one of the identifier, angle, or distance of the sensor, wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data.
However, Zeng teaches grouping the road data based on at least one of the identifier, angle, or distance of the sensor (paragraph [141], Zeng discloses a plurality of datasets are combined according to distances between a plurality of segmented line segments for gathering appropriate datasets of point cloud data; paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained; paragraph [81], Zeng discloses that a laser sensor can be implemented for splitting road data by distinguishing objects of a scene; and paragraph [82], Zeng discloses that 3D point cloud data can be segmented), wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy, Chen, Ermilios and Zeng together as a whole for accurately determining objects in a monitored scene.

Regarding claim 13, Sinharoy discloses decoding a bitstream (paragraph [78], fig. 6, Sinharoy discloses decoder 600 decodes point cloud data by receiving a bitstream as encoded by encoder 500 of fig. 5, wherein the bitstream includes geometry and attribute data).

Sinharoy and Chen do not disclose wherein the bitstream contains motion information about the first unit, wherein the applying the motion comprises: applying the motion information to road data corresponding to the first unit.

However, Ermilios teaches wherein the point cloud data contains motion information about the first unit (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation), wherein the applying the motion comprises: applying the motion information to road data corresponding to the first unit (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy, Chen and Ermilios together as a whole to reliably estimate motion data for driving applications in adverse environmental conditions (Ermilios' paragraph [3]).

Regarding claim 14, Sinharoy discloses decoding a bitstream (paragraph [78], fig. 6, Sinharoy discloses decoder 600 decodes point cloud data by receiving a bitstream as encoded by encoder 500 of fig. 5, wherein the bitstream includes geometry and attribute data).

Sinharoy does not disclose wherein the bitstream further contains information indicating a method of splitting the road data into the plurality of units.
However, Chen teaches wherein the point cloud data further contains information indicating a method of splitting the road data into the plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface point cloud data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side point cloud data into grid cells or plural units).

Since Sinharoy discloses decoding a bitstream, and Chen discloses "wherein the point cloud data further contains information indicating a method of splitting the road data into the plurality of units," it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy and Chen together as a whole, ascertaining the limitation of "wherein the bitstream further contains information indicating a method of splitting the road data into the plurality of units," in order to permit a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sinharoy (US 2019/0197739), Chen (US 2021/0263167), Ermilios (US 2019/0104278) and Zeng (US 2020/0410690) in view of Poon (US 2009/0067509).

Regarding claim 5, Sinharoy, Chen, Ermilios and Zeng do not disclose wherein the setting of the motion search window is performed based on at least one of the identifier, an azimuth of the sensor, or the distance.

However, Poon teaches wherein the setting of the motion search window is performed based on at least one of the identifier, an azimuth of the sensor, or the distance (paragraph [71], Poon discloses that the motion search window for finding motion vectors is based on the distance, wherein there are various distances that can be utilized for finding motion vectors).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy, Chen, Ermilios, Zeng and Poon together as a whole for thoroughly finding motion vectors in order to best locate objects as needed for any object/surface searching applications.

Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Sinharoy (US 2019/0197739), Chen (US 2021/0263167), Ermilios (US 2019/0104278), Zeng (US 2020/0410690) and Poon (US 2009/0067509) in view of Zhang (US 2019/0149838).

Regarding claim 6, Sinharoy, Chen, Ermilios, Zeng and Poon do not disclose wherein the predicting of the motion comprises: calculating a rate-distortion optimization (RDO) cost based on whether the motion is applied.

However, Zhang teaches wherein the predicting of the motion comprises: calculating a rate-distortion optimization (RDO) cost based on whether the motion is applied (paragraph [112], Zhang discloses a rate-distortion optimization cost is computed to determine whether to select an affine motion vector for predicting motion of a current block, CU (coding unit) or PU (prediction unit)).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy, Chen, Ermilios, Zeng, Poon and Zhang together as a whole for efficiently compressing video data (Zhang's paragraph [6]).
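Editor's note: claims 5-6 add two encoder-side refinements on top of the grouping: the motion search window is sized from sensor geometry (identifier, azimuth, or distance), and a rate-distortion optimization (RDO) cost decides whether the predicted motion is applied. The sketch below illustrates both ideas with invented window bounds and cost weights; it is neither Poon's nor Zhang's actual scheme.

# Hypothetical distance-dependent search window plus an RDO "apply motion vs.
# skip" decision of the classic form J = D + lambda * R. All constants invented.
from typing import List, Tuple

Point = Tuple[float, float, float]
LAMBDA = 0.1  # invented Lagrange multiplier trading rate (bits) against distortion

def search_window(distance: float) -> float:
    """Nearer road regions move more per frame, so use a wider window (meters)."""
    return max(0.5, 8.0 / max(distance, 1.0))

def distortion(a: List[Point], b: List[Point]) -> float:
    """Sum of squared differences between corresponding points."""
    return sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
               for (ax, ay, az), (bx, by, bz) in zip(a, b))

def choose_mode(ref: List[Point], cur: List[Point],
                motion: Tuple[float, float, float]) -> Tuple[str, float]:
    """Pick whichever of {apply motion, skip motion} has the lower RDO cost."""
    dx, dy, dz = motion
    predicted = [(x + dx, y + dy, z + dz) for x, y, z in ref]
    j_motion = distortion(predicted, cur) + LAMBDA * 32.0  # motion vector costs bits
    j_skip = distortion(ref, cur) + LAMBDA * 0.0           # no motion, no extra bits
    return ("motion", j_motion) if j_motion < j_skip else ("skip", j_skip)

window = search_window(4.0)  # a unit about 4 m from the cloud center -> 2.0 m window
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
cur = [(2.0, 0.0, 0.0), (3.0, 0.0, 0.0)]              # whole group shifted +2.0 in x
print(choose_mode(ref, cur, motion=(2.0, 0.0, 0.0)))  # -> ('motion', 3.2)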
Regarding claim 7, Sinharoy discloses encoding a bitstream (paragraph [67], fig. 5, Sinharoy discloses encoder 500 encodes point cloud data as an encoded bitstream 525).

Sinharoy does not disclose wherein the bitstream contains information indicating a method of splitting the road data into the plurality of units.

However, Chen teaches wherein the point cloud data contains information indicating a method of splitting the road data into the plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface point cloud data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side point cloud data into grid cells or plural units).

Since Sinharoy discloses encoding point cloud data onto a bitstream, and Chen discloses "wherein the point cloud data contains information indicating a method of splitting the road data into the plurality of units," it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Sinharoy and Chen together as a whole, ascertaining the limitation of "wherein the bitstream contains information indicating a method of splitting the road data into the plurality of units," in order to permit a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 7 and 10 of U.S. Patent No. 11,017,591. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘591. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘591. Claim 8 of present Application ‘322 is similar to but broader than claim 10 of Patent ‘591. Thus, claim 8 of present Application ‘322 is anticipated by claim 10 of Patent ‘591. Claim 9 of present Application ‘322 is similar to but broader than claim 4 of Patent ‘591. Thus, claim 9 of present Application ‘322 is anticipated by claim 4 of Patent ‘591. Claim 15 of present Application ‘322 is similar to but broader than claim 7 of Patent ‘591. Thus, claim 15 of present Application ‘322 is anticipated by claim 7 of Patent ‘591.

Claims 2 and 10 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 9 of U.S. Patent No. 11,017,591 and Chen (US 2021/0263167) in view of Ermilios (US 2019/0104278).

Regarding claim 2, claim 1 of Patent ‘591 does not disclose wherein the encoding comprises: splitting road data from the point cloud data; and splitting the road data into a plurality of units; and grouping the road data belonging to a first unit of the units; and setting a motion search window for the grouped road data; and predicting a motion based on the motion search window, wherein the splitting of the road data is performed based on a threshold.
However, Chen teaches splitting road data from the point cloud data (paragraph [84], Chen discloses that laser point cloud data can be classified as road surface point cloud data and road-side point cloud data, and paragraph [85], Chen discloses that the road surface point cloud data can be separated into right side of road surface data and left side of road surface data); and splitting the road data into a plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side data into grid cells or plural units); and grouping the road data belonging to a first unit of the units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and thus, the grid cells pertaining to the road surface are grouped to belong to a first unit of the units), wherein the splitting of the road data is performed based on a threshold (paragraph [182], Chen discloses that a preset difference threshold is utilized for determining whether laser points are located on the boundary between the road and the regions on the two sides of the road, thus permitting the differentiation between road surface data and road-side data to delineate boundaries in the road data; thus, splitting road data is performed based on a threshold or boundary).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘591 and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Claim 1 of Patent ‘591 and Chen do not disclose setting a motion search window for the grouped road data; and predicting a motion based on the motion search window.

However, Ermilios teaches setting a motion search window for the grouped road data (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation); and predicting a motion based on the motion search window (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘591, Chen and Ermilios together as a whole to reliably estimate motion data for driving applications in adverse environmental conditions (Ermilios' paragraph [3]).

Regarding claim 10, claim 9 of Patent ‘591 does not disclose wherein the decoding comprises: splitting road data from the point cloud data; and splitting the road data into a plurality of units; and grouping the road data belonging to a first unit of the units; and applying a motion to the grouped road data.
However, Chen teaches splitting road data from the point cloud data (paragraph [84], Chen discloses that laser point cloud data can be classified as road surface point cloud data and road-side point cloud data, and paragraph [85], Chen discloses that the road surface point cloud data can be separated into right side of road surface data and left side of road surface data); and splitting the road data into a plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side data into grid cells or plural units); and grouping the road data belonging to a first unit of the units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and thus, the grid cells pertaining to the road surface are grouped to belong to a first unit of the units).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 9 of Patent ‘591 and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Claim 9 of Patent ‘591 and Chen do not disclose applying a motion to the grouped road data.

However, Ermilios teaches applying a motion to the grouped road data (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 9 of Patent ‘591, Chen and Ermilios together as a whole to reliably estimate motion data for driving applications in adverse environmental conditions (Ermilios' paragraph [3]).

Claims 3-4 and 11-14 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 9 of U.S. Patent No. 11,017,591, Chen (US 2021/0263167) and Ermilios (US 2019/0104278) in view of Zeng (US 2020/0410690).

Regarding claim 3, claim 1 of Patent ‘591 does not disclose wherein the splitting the road data into the plurality of units comprises: splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor, wherein the distance is a distance from a center of the point cloud data.

However, Chen teaches wherein the splitting the road data into the plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side data into grid cells or plural units).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘591 and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Claim 1 of Patent ‘591, Chen and Ermilios do not disclose splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor, wherein the distance is a distance from a center of the point cloud data.

However, Zeng teaches splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained; paragraph [81], Zeng discloses that a laser sensor can be implemented for splitting road data by distinguishing objects of a scene; and paragraph [82], Zeng discloses that 3D point cloud data can be segmented), wherein the distance is a distance from a center of the point cloud data (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘591, Chen, Ermilios and Zeng together as a whole for accurately determining objects in a monitored scene.

Regarding claim 4, claim 1 of Patent ‘591 does not disclose wherein the grouping of the road data belonging to the first unit comprises: grouping the road data based on at least one of the identifier, angle, or distance of the sensor, wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data.

However, Chen teaches wherein the grouping of the road data belonging to the first unit (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and thus, the grid cells pertaining to the road surface are grouped to belong to a first unit of the units).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘591 and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Claim 1 of Patent ‘591, Chen and Ermilios do not disclose grouping the road data based on at least one of the identifier, angle, or distance of the sensor, wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data.
However, Zeng teaches grouping the road data based on at least one of the identifier, angle, or distance of the sensor (paragraph [141], Zeng discloses a plurality of datasets are combined according to distances between a plurality of segmented line segments for gathering appropriate datasets of point cloud data; paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained; paragraph [81], Zeng discloses that a laser sensor can be implemented for splitting road data by distinguishing objects of a scene; and paragraph [82], Zeng discloses that 3D point cloud data can be segmented), wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘591, Chen, Ermilios and Zeng together as a whole for accurately determining objects in a monitored scene.

Regarding claim 11, claim 9 of Patent ‘591 does not disclose wherein the splitting the road data into the plurality of units comprises: splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor, wherein the distance is a distance from a center of the point cloud data.

However, Chen teaches wherein the splitting the road data into the plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side data into grid cells or plural units).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 9 of Patent ‘591 and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Claim 9 of Patent ‘591, Chen and Ermilios do not disclose splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor, wherein the distance is a distance from a center of the point cloud data.
However, Zeng teaches splitting the road data based on at least one of an identifier, an angle, or a distance of a sensor (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained; paragraph [81], Zeng discloses that a laser sensor can be implemented for splitting road data by distinguishing objects of a scene; and paragraph [82], Zeng discloses that 3D point cloud data can be segmented), wherein the distance is a distance from a center of the point cloud data (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 9 of Patent ‘591, Chen, Ermilios and Zeng together as a whole for accurately determining objects in a monitored scene.

Regarding claim 12, claim 9 of Patent ‘591 does not disclose wherein the grouping of the road data belonging to the first unit comprises: grouping the road data based on at least one of the identifier, angle, or distance of the sensor, wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data.

However, Chen teaches wherein the grouping of the road data belonging to the first unit (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface data into grid cells or plural units, and thus, the grid cells pertaining to the road surface are grouped to belong to a first unit of the units).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 9 of Patent ‘591 and Chen together as a whole for permitting a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Claim 9 of Patent ‘591, Chen and Ermilios do not disclose grouping the road data based on at least one of the identifier, angle, or distance of the sensor, wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data.
However, Zeng teaches grouping the road data based on at least one of the identifier, angle, or distance of the sensor (paragraph [141], Zeng discloses a plurality of datasets are combined according to distances between a plurality of segmented line segments for gathering appropriate datasets of point cloud data; paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained; paragraph [81], Zeng discloses that a laser sensor can be implemented for splitting road data by distinguishing objects of a scene; and paragraph [82], Zeng discloses that 3D point cloud data can be segmented), wherein the first unit is a region at a distance greater than or equal to a predetermined distance from the center of the point cloud data (paragraph [76], Zeng discloses a distance between the center of the point cloud and the laser sensor is pre-determined, and the pre-determined distance is a threshold for splitting road data; paragraph [26], Zeng discloses that road network data is obtained).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 9 of Patent ‘591, Chen, Ermilios and Zeng together as a whole for accurately determining objects in a monitored scene.

Regarding claim 13, claim 9 of Patent ‘591 discloses decoding a bitstream.

Claim 9 of Patent ‘591 and Chen do not disclose wherein the bitstream contains motion information about the first unit, wherein the applying the motion comprises: applying the motion information to road data corresponding to the first unit.

However, Ermilios teaches wherein the point cloud data contains motion information about the first unit (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation), wherein the applying the motion comprises: applying the motion information to road data corresponding to the first unit (paragraph [49], Ermilios discloses implementing a search window for predicting motion between two images with a block matching technique, wherein paragraph [47], Ermilios discloses that movement of the road surface texture between two adjacent frames (i.e., images) is tracked with motion estimation).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 9 of Patent ‘591, Chen and Ermilios together as a whole to reliably estimate motion data for driving applications in adverse environmental conditions (Ermilios' paragraph [3]).

Regarding claim 14, claim 9 of Patent ‘591 discloses decoding a bitstream.

Claim 9 of Patent ‘591 does not disclose wherein the bitstream further contains information indicating a method of splitting the road data into the plurality of units.
However, Chen teaches wherein the point cloud data further contains information indicating a method of splitting the road data into the plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface point cloud data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side point cloud data into grid cells or plural units).

Since claim 9 of Patent ‘591 discloses decoding a bitstream, and Chen discloses "wherein the point cloud data further contains information indicating a method of splitting the road data into the plurality of units," it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 9 of Patent ‘591 and Chen together as a whole, ascertaining the limitation of "wherein the bitstream further contains information indicating a method of splitting the road data into the plurality of units," in order to permit a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Claim 5 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,017,591, Chen (US 2021/0263167), Ermilios (US 2019/0104278) and Zeng (US 2020/0410690) in view of Poon (US 2009/0067509).

Regarding claim 5, claim 1 of Patent ‘591, Chen, Ermilios and Zeng do not disclose wherein the setting of the motion search window is performed based on at least one of the identifier, an azimuth of the sensor, or the distance.

However, Poon teaches wherein the setting of the motion search window is performed based on at least one of the identifier, an azimuth of the sensor, or the distance (paragraph [71], Poon discloses that the motion search window for finding motion vectors is based on the distance, wherein there are various distances that can be utilized for finding motion vectors).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘591, Chen, Ermilios, Zeng and Poon together as a whole for thoroughly finding motion vectors in order to best locate objects as needed for any object/surface searching applications.

Claims 6-7 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,017,591, Chen (US 2021/0263167), Ermilios (US 2019/0104278), Zeng (US 2020/0410690) and Poon (US 2009/0067509) in view of Zhang (US 2019/0149838).

Regarding claim 6, claim 1 of Patent ‘591, Chen, Ermilios, Zeng and Poon do not disclose wherein the predicting of the motion comprises: calculating a rate-distortion optimization (RDO) cost based on whether the motion is applied.

However, Zhang teaches wherein the predicting of the motion comprises: calculating a rate-distortion optimization (RDO) cost based on whether the motion is applied (paragraph [112], Zhang discloses a rate-distortion optimization cost is computed to determine whether to select an affine motion vector for predicting motion of a current block, CU (coding unit) or PU (prediction unit)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘591, Chen, Ermilios, Zeng, Poon and Zhang together as a whole for efficiently compressing video data (Zhang's paragraph [6]).

Regarding claim 7, claim 1 of Patent ‘591 discloses encoding a bitstream.

Claim 1 of Patent ‘591 does not disclose wherein the bitstream contains information indicating a method of splitting the road data into the plurality of units.

However, Chen teaches wherein the point cloud data contains information indicating a method of splitting the road data into the plurality of units (paragraph [266], Chen discloses the road surface data division unit can divide and split road surface point cloud data into grid cells or plural units, and paragraph [270], Chen discloses the road-side data division unit can divide and split road-side point cloud data into grid cells or plural units).

Since claim 1 of Patent ‘591 discloses encoding point cloud data onto a bitstream, and Chen discloses "wherein the point cloud data contains information indicating a method of splitting the road data into the plurality of units," it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of claim 1 of Patent ‘591 and Chen together as a whole, ascertaining the limitation of "wherein the bitstream contains information indicating a method of splitting the road data into the plurality of units," in order to permit a clear delineation of road surfaces, road-side data and other objects in the monitored scene for accurately determining a high-precision position of a vehicle (Chen's paragraph [4]).

Claims 1 and 9 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 14 of U.S. Patent No. 11,170,556. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘556. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘556. Claim 9 of present Application ‘322 is similar to but broader than claim 14 of Patent ‘556. Thus, claim 9 of present Application ‘322 is anticipated by claim 14 of Patent ‘556.

Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 9 and 13 of U.S. Patent No. 11,217,037. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘037. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘037. Claim 8 of present Application ‘322 is similar to but broader than claim 5 of Patent ‘037. Thus, claim 8 of present Application ‘322 is anticipated by claim 5 of Patent ‘037. Claim 9 of present Application ‘322 is similar to but broader than claim 13 of Patent ‘037. Thus, claim 9 of present Application ‘322 is anticipated by claim 13 of Patent ‘037. Claim 15 of present Application ‘322 is similar to but broader than claim 9 of Patent ‘037. Thus, claim 15 of present Application ‘322 is anticipated by claim 9 of Patent ‘037.

Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 7 and 10 of U.S. Patent No. 11,315,270.
Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘270. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘270. Claim 8 of present Application ‘322 is similar to but broader than claim 4 of Patent ‘270. Thus, claim 1 of present Application ‘322 is anticipated by claim 4 of Patent ‘270. Claim 9 of present Application ‘322 is similar to but broader than claim 10 of Patent ‘270. Thus, claim 9 of present Application ‘322 is anticipated by claim 10 of Patent ‘270. Claim 15 of present Application ‘322 is similar to but broader than claim 7 of Patent ‘270. Thus, claim 15 of present Application ‘322 is anticipated by claim 7 of Patent ‘270. Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 7, 13 and 16 of U.S. Patent No. 11,341,687. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘687. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘687. Claim 8 of present Application ‘322 is similar to but broader than claim 16 of Patent ‘687. Thus, claim 1 of present Application ‘322 is anticipated by claim 16 of Patent ‘687. Claim 9 of present Application ‘322 is similar to but broader than claim 7 of Patent ‘687. Thus, claim 9 of present Application ‘322 is anticipated by claim 7 of Patent ‘687. Claim 15 of present Application ‘322 is similar to but broader than claim 13 of Patent ‘687. Thus, claim 15 of present Application ‘322 is anticipated by claim 13 of Patent ‘687. Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 9 and 13 of U.S. Patent No. 11,395,004. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 9 of Patent ‘004. Thus, claim 1 of present Application ‘322 is anticipated by claim 9 of Patent ‘004. Claim 8 of present Application ‘322 is similar to but broader than claim 13 of Patent ‘004. Thus, claim 1 of present Application ‘322 is anticipated by claim 13 of Patent ‘004. Claim 9 of present Application ‘322 is similar to but broader than claim 5 of Patent ‘004. Thus, claim 9 of present Application ‘322 is anticipated by claim 5 of Patent ‘004. Claim 15 of present Application ‘322 is similar to but broader than claim 1 of Patent ’004. Thus, claim 15 of present Application ‘322 is anticipated by claim 1 of Patent ‘004. Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 9 and 10 of U.S. Patent No. 11,483,363. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘363. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘363. Claim 8 of present Application ‘322 is similar to but broader than claim 5 of Patent ‘363. Thus, claim 1 of present Application ‘322 is anticipated by claim 5 of Patent ‘363. Claim 9 of present Application ‘322 is similar to but broader than claim 10 of Patent ‘363. Thus, claim 9 of present Application ‘322 is anticipated by claim 10 of Patent ‘363. 
Claim 15 of present Application ‘322 is similar to but broader than claim 9 of Patent ‘363. Thus, claim 15 of present Application ‘322 is anticipated by claim 9 of Patent ‘363. Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6, 11 and 16 of U.S. Patent No. 11,601,488. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘488. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘488. Claim 8 of present Application ‘322 is similar to but broader than claim 16 of Patent ‘488. Thus, claim 1 of present Application ‘322 is anticipated by claim 16 of Patent ‘488. Claim 9 of present Application ‘322 is similar to but broader than claim 6 of Patent ‘488. Thus, claim 9 of present Application ‘322 is anticipated by claim 6 of Patent ‘488. Claim 15 of present Application ‘322 is similar to but broader than claim 11 of Patent ‘488. Thus, claim 15 of present Application ‘322 is anticipated by claim 11 of Patent ‘488. Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6, 11 and 16 of U.S. Patent No. 11,803,986. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘986. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘986. Claim 8 of present Application ‘322 is similar to but broader than claim 6 of Patent ‘986. Thus, claim 1 of present Application ‘322 is anticipated by claim 6 of Patent ‘986. Claim 9 of present Application ‘322 is similar to but broader than claim 11 of Patent ‘986. Thus, claim 9 of present Application ‘322 is anticipated by claim 11 of Patent ‘986. Claim 15 of present Application ‘322 is similar to but broader than claim 16 of Patent ‘986. Thus, claim 15 of present Application ‘322 is anticipated by claim 16 of Patent ‘986. Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-4 of U.S. Patent No. 11,818,190. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘190. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘190. Claim 8 of present Application ‘322 is similar to but broader than claim 2 of Patent ‘190. Thus, claim 1 of present Application ‘322 is anticipated by claim 2 of Patent ‘190. Claim 9 of present Application ‘322 is similar to but broader than claim 3 of Patent ‘190. Thus, claim 9 of present Application ‘322 is anticipated by claim 3 of Patent ‘190. Claim 15 of present Application ‘322 is similar to but broader than claim 4 of Patent ‘190. Thus, claim 15 of present Application ‘322 is anticipated by claim 4 of Patent ‘190. Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 7 and 13 of U.S. Patent No. 11,882,303. Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of present Application ‘322 is similar to but broader than claim 1 of Patent ‘303. Thus, claim 1 of present Application ‘322 is anticipated by claim 1 of Patent ‘303. 
Claims 1, 8-9 and 15 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 9 and 13 of U.S. Patent No. 11,895,341. Although the claims at issue are not identical, they are not patentably distinct from each other: claims 1, 8, 9 and 15 of present Application '322 are similar to but broader than, and therefore anticipated by, claims 1, 5, 9 and 13 of Patent '341, respectively.

The claim charts below place each claim of the present application alongside the corresponding patented claims.

Claim Chart 1: Application 18/849,322 vs. U.S. Patent Nos. 11,017,591, 11,170,556 and 11,217,037

Application '322, claim 1: A method of transmitting point cloud data, the method comprising: encoding point cloud data; and transmitting a bitstream containing the point cloud data.

Patent '591, claim 1: A method for transmitting point cloud data by a device, the method comprising: encoding point cloud data, wherein a bitstream including the point cloud data includes parameter set data for the point cloud data; encapsulating the point cloud data into a file, wherein geometry data, attribute data and occupancy data in the point cloud data are encapsulated into one or more component tracks in the file, decoder configuration information including the parameter set data includes a setup unit including atlas parameter set information, the decoder configuration information is encapsulated into a volumetric visual track in the file; and transmitting the file, wherein the atlas parameter set information is constant for a bitstream referenced by a sample entry in which the decoder configuration information is present.

Patent '556, claim 1: A method for transmitting encoded point cloud data by an apparatus including a memory and a processor coupled to the memory, the method comprising: encoding point cloud data including geometry data and attribute data to output a bitstream, the bitstream including one or more units, a unit including a header, the header including type information identifying a type of the unit, in response to the type information identifying that the unit is a geometry unit or an attribute unit, the header including information for representing whether the geometry unit or the attribute unit includes only geometry data or attribute data encoded based on a specific encoding scheme; encapsulating the bitstream into a file including metadata, the file including a geometry track including samples carrying the encoded geometry data and an attribute track including samples carrying the encoded attribute data, a sample entry of the geometry track and a sample entry of the attribute track including a header of the geometry unit and a header of the attribute unit, respectively, wherein the header of the geometry unit includes the information for representing whether the geometry unit includes only the geometry data encoded based on the specific encoding scheme and the header of the attribute unit includes the information for representing whether the attribute unit includes only the attribute data encoded based on the specific encoding scheme; and transmitting the encapsulated file.

Patent '037, claim 1: A method for processing point cloud data by a transmitting system, the method comprising: performing a partition of the point cloud data into one or more slices selectively based on a comparison determination between a number of points of the point cloud data and maximum number information, wherein, when the number of points of the point cloud data is larger than the maximum number information for representing a maximum number of points in a slice, the partition of the point cloud data into one or more slices is performed based on an octree of the point cloud data and wherein, when the number of points of the point cloud data is less than or equal to the maximum number information, the partition of the point cloud data into one or more slices is not performed; encoding the point cloud data on which the partition is performed or not performed; and transmitting a bitstream including the encoded point cloud data and signaling information for the point cloud data.

Application '322, claim 8: A device for transmitting point cloud data, comprising: an encoder configured to encode point cloud data; and a transmitter configured to transmit the point cloud data.

Patent '591, claim 10: An apparatus for transmitting point cloud data, the apparatus comprising: an encoder configured to encode point cloud data, wherein a bitstream including the point cloud data includes parameter set data for the point cloud data; an encapsulator configured to encapsulate the point cloud data into a file, wherein geometry data, attribute data and occupancy data in the point cloud data are encapsulated into one or more component tracks in the file, decoder configuration information including the parameter set data includes a setup unit including atlas parameter set information, the decoder configuration information is encapsulated into a volumetric visual track in the file; and a transmitter configured to transmit the file, wherein the atlas parameter set information is constant for a bitstream referenced by a sample entry in which the decoder configuration information is present.

Patent '037, claim 5: A transmitting system for processing point cloud data, the transmitting system comprising: a partitioner configured to perform a partition of the point cloud data into one or more slices selectively based on a comparison determination between a number of points of the point cloud data and maximum number information, wherein, when the number of points of the point cloud data is larger than the maximum number information for representing a maximum number of points in a slice, the partition of the point cloud data into one or more slices is performed based on an octree of the point cloud data and wherein, when the number of points of the point cloud data is less than or equal to the maximum number information, the partition of the point cloud data into one or more slices is not performed; an encoder configured to encode the point cloud data on which the partition is performed or not performed; and a transmitter configured to transmit a bitstream including the encoded point cloud data and signaling information for the point cloud data.

Application '322, claim 9: A method of receiving point cloud data, the method comprising: receiving a bitstream containing point cloud data; and decoding the point cloud data.

Patent '591, claim 4: A method for receiving point cloud data by a device, the method comprising: receiving a file including a bitstream including point cloud data and parameter set data for the point cloud data; decapsulating the point cloud data from the file, wherein geometry data, attribute data and occupancy data in the point cloud data are decapsulated from one or more component tracks in the file, decoder configuration information including the parameter set data is decapsulated from a volumetric visual track in the file, and the decoder configuration information further includes a setup unit including atlas parameter set information; and decoding the point cloud data; wherein the atlas parameter set information is constant for a bitstream referenced by a sample entry in which the decoder configuration information is present.

Patent '556, claim 14: A method for receiving encoded point cloud data by an apparatus comprising a memory and a processor coupled with the memory, the method comprising: receiving a file carrying encoded point cloud data including encoded geometry data and encoded attribute data and metadata; decapsulating the file to output a bitstream of the encoded point cloud data based on the metadata, the bitstream including one or more units, a unit including a header, the header including type information identifying a type of the unit, in response to the type information identifying that the unit is a geometry unit or an attribute unit, the header including information for representing whether the geometry unit or the attribute unit includes only geometry data or attribute data encoded based on a specific encoding scheme, the file including a geometry track including samples carrying the encoded geometry data and an attribute track including samples carrying the encoded attribute data, a sample entry of the geometry track and a sample entry of the attribute track including a header of a geometry unit and a header of an attribute unit, respectively, wherein the header of the geometry unit includes the information for representing whether the geometry unit includes only the geometry data encoded based on the specific encoding scheme and the header of the attribute unit includes the information for representing whether the attribute unit includes only the attribute data encoded based on the specific encoding scheme; and decoding the encoded point cloud data.

Patent '037, claim 13: A method of processing encoded point cloud data by a receiving system, the method comprising: receiving a bitstream including the encoded point cloud data and signaling information for the encoded point cloud data; decoding the encoded point cloud data, wherein the point cloud data is partitioned into one or more slices selectively based on a comparison determination between a number of points of the point cloud data and maximum number information by a transmitting system, wherein, when the number of points of the point cloud data is larger than the maximum number information for representing a maximum number of points in a slice, the point cloud data is partitioned into one or more slices based on an octree of the point cloud data and wherein, when the number of points of the point cloud data is less than or equal to the maximum number information, the point cloud data is not partitioned into one or more slices; and rendering the decoded point cloud data.

Application '322, claim 15: A device for receiving point cloud data, comprising: a receiver configured to receive point cloud data; and a decoder configured to decode the point cloud data.

Patent '591, claim 7: An apparatus for receiving point cloud data, the apparatus comprising: a receiver configured to receive a file including a bitstream including point cloud data and parameter set data for the point cloud data; a decapsulator configured to decapsulate the point cloud data from the file, wherein geometry data, attribute data and occupancy data in the point cloud data are decapsulated from one or more component tracks in the file, decoder configuration information including the parameter set data is decapsulated from a volumetric visual track in the file, and the decoder configuration information further includes a setup unit including atlas parameter set information; and a decoder configured to decode the point cloud data; wherein the atlas parameter set information is constant for a bitstream referenced by a sample entry in which the decoder configuration information is present.

Patent '037, claim 9: A receiving system for processing encoded point cloud data, the receiving system comprising: a receiver configured to receive a bitstream including the encoded point cloud data and signaling information for the encoded point cloud data; a decoder configured to decode the encoded cloud data, wherein the point cloud data is partitioned into one or more slices selectively based on a comparison determination between a number of points of the point cloud data and maximum number information by a transmitting system, wherein, when the number of points of the point cloud data is larger than the maximum number information for representing a maximum number of points in a slice, the point cloud data is partitioned into one or more slices based on an octree of the point cloud data and wherein, when the number of points of the point cloud data is less than or equal to the maximum number information, the point cloud data is not partitioned into one or more slices; and a render configured to render the decoded point cloud data.
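The '037 claims in the chart above reduce to a concrete gating rule: leave the cloud whole when its point count fits within the per-slice maximum, otherwise split it along an octree until every slice fits. A minimal sketch of that rule follows; the threshold value, the centroid-based octant split and all names are hypothetical, not taken from Patent '037.

# Sketch of selective octree slice partitioning: partition only when the
# point count exceeds the per-slice maximum ("maximum number information").
# Points are flat (x, y, z) tuples; names and the threshold are hypothetical.

MAX_POINTS_PER_SLICE = 1_100_000  # hypothetical maximum number information

def octree_partition(points, max_points=MAX_POINTS_PER_SLICE):
    """Recursively split points into octants until each slice fits."""
    if len(points) <= max_points:
        return [points]  # partition is not performed
    # Split about the centroid into up to eight octants.
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    octants = {}
    for p in points:
        key = (p[0] >= cx, p[1] >= cy, p[2] >= cz)
        octants.setdefault(key, []).append(p)
    if len(octants) == 1:  # degenerate cloud; cannot split further
        return [points]
    slices = []
    for bucket in octants.values():
        slices.extend(octree_partition(bucket, max_points))
    return slices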
Claim Chart 2: Application 18/849,322 vs. U.S. Patent Nos. 11,315,270, 11,341,687 and 11,395,004

Application '322, claim 1: A method of transmitting point cloud data, the method comprising: encoding point cloud data; and transmitting a bitstream containing the point cloud data.

Patent '270, claim 1: A point cloud data transmission method comprising: encoding point cloud data; encapsulating a bitstream that includes the encoded point cloud data into a file; and transmitting the file, wherein the bitstream is stored either in a single track or in multiple tracks of the file, wherein the file further includes signaling data, wherein the signaling data include at least one parameter set and spatial region information, wherein the encoded point cloud data are divided into one or more 3D spatial regions, wherein the spatial region information includes number information for identifying a number of the one or more 3D spatial regions, region identification information for identifying each 3D spatial region, and position information related to an anchor point of each 3D spatial region, wherein the spatial region information is at least static spatial region information that does not change over time or dynamic spatial region information that dynamically changes over time, and wherein the encoded point cloud data include geometry data and attribute data.

Patent '687, claim 1: A method for transmitting point cloud data, the method comprising: encoding point cloud data including geometry data and attribute data; encapsulating a bitstream including the point cloud data into a file, wherein the file includes a first track including the geometry data and the attribute data, wherein the file includes a second track including bounding box information for the point cloud data, wherein the bounding box information includes extension information of a bounding box, wherein the second track further includes a timed metadata track including a sample entry for a dynamic type and a sample including dynamic spatial region information including dimension information for a spatial region for the bounding box, and wherein the spatial region is dynamically changed over time; and transmitting the file.

Patent '004, claim 9: A method for transmitting point cloud data, the method comprising: encoding geometry data of point cloud data, wherein the geometry data is represented based on an octree structure including level of details (LoDs), and wherein a geometry position of an octree node of a specific LOD of an index that is less than a largest index of LoDs in the octree structure is reconstructed based on a center position of the octree node for spatial scalability; encoding attribute data of the point cloud data based on the reconstructed geometry position; and transmitting a bitstream including the geometry data and the attribute data of the point cloud data.

Application '322, claim 8: A device for transmitting point cloud data, comprising: an encoder configured to encode point cloud data; and a transmitter configured to transmit the point cloud data.

Patent '270, claim 4: A point cloud data transmission apparatus comprising: an encoder to encode point cloud data; an encapsulator to encapsulate a bitstream that includes the encoded point cloud data into a file; and a transmitter to transmit the file, wherein the bitstream is stored either in a single track or in multiple tracks of the file, wherein the file further includes signaling data, wherein the signaling data include at least one parameter set and spatial region information, wherein the encoded point cloud data are divided into one or more 3D spatial regions, wherein the spatial region information includes number information for identifying a number of the one or more 3D spatial regions, region identification information for identifying each 3D spatial region, and position information related to an anchor point of each 3D spatial region, wherein the spatial region information is at least static spatial region information that does not change over time or dynamic spatial region information that dynamically changes over time, and wherein the encoded point cloud data include geometry data and attribute data.

Patent '687, claim 16: An apparatus for transmitting point cloud data, the apparatus comprising: an encoder configured to encode point cloud data including geometry data and attribute data; an encapsulator configured to encapsulate a bitstream including the point cloud data into a file, wherein the file includes a first track including the geometry data and the attribute data, wherein the file includes a second track including bounding box information for the point cloud data, wherein the bounding box information includes extension information of a bounding box, wherein the second track further includes a timed metadata track including a sample entry for a dynamic type and a sample including dynamic spatial region information including dimension information for a spatial region for the bounding box, and wherein the spatial region is dynamically changed over time; and a transmitter configured to transmit the file.

Patent '004, claim 13: An apparatus for transmitting point cloud data, the apparatus comprising: an encoder configured to: encode geometry data of point cloud data, wherein the geometry data is represented based on an octree structure including level of details (LODs), and wherein a geometry position of an octree node of a specific LOD of an index that is less than a largest index of LoDs in the octree structure is reconstructed based on a center position of the octree node for spatial scalability, and encode attribute data of the point cloud data based on the reconstructed geometry position; and a transmitter configured to transmit a bitstream including the geometry data and the attribute data of the point cloud data.

Application '322, claim 9: A method of receiving point cloud data, the method comprising: receiving a bitstream containing point cloud data; and decoding the point cloud data.

Patent '270, claim 10: A point cloud data reception method comprising: receiving a file; decapsulating the file into a bitstream that includes point cloud data, wherein the bitstream is stored either in a single track or in multiple tracks of the file, and wherein the file further includes signaling data; decoding a part or all of the point cloud data based on the signaling data; and rendering a part or all of the decoded point cloud data, wherein the signaling data include at least one parameter set and spatial region information, wherein the point cloud data are divided into one or more 3D spatial regions, wherein the spatial region information includes number information for identifying a number of the one or more 3D spatial regions, region identification information for identifying each 3D spatial region, and position information related to an anchor point of each 3D spatial region, wherein the spatial region information is at least static spatial region information that does not change over time or dynamic spatial region information that dynamically changes over time, and wherein the point cloud data include geometry data and attribute data.

Patent '687, claim 7: A method for receiving point cloud data, the method comprising: receiving a file including a bitstream including point cloud data including geometry data and attribute data; decapsulating the file; and decoding the point cloud data, wherein the file includes a first track including the geometry data and the attribute data, wherein the file further includes a second track including bounding box information for the point cloud data, wherein the bounding box information includes extension information of a bounding box, wherein the second track further includes a timed metadata track including a sample entry for a dynamic type and a sample including dynamic spatial region information including dimension information for a spatial region for the bounding box, and wherein the spatial region is dynamically changed over time.

Patent '004, claim 5: A method for receiving point cloud data, the method comprising: receiving a bitstream including point cloud data; decoding geometry data of the point cloud data partially, wherein the geometry data is represented based on an octree structure including level of details (LoDs), and wherein a geometry position of an octree node of a specific LOD of an index that is less than a largest index of LoDs in the octree structure is reconstructed based on a center position of the octree node for spatial scalability; and decoding attribute data of the point cloud data based on the reconstructed geometry position.

Application '322, claim 15: A device for receiving point cloud data, comprising: a receiver configured to receive point cloud data; and a decoder configured to decode the point cloud data.

Patent '270, claim 7: A point cloud data reception apparatus comprising: a receiver to receive a file; a decapsulator to decapsulate the file into a bitstream that includes point cloud data, wherein the bitstream is stored either in a single track or in multiple tracks of the file, and wherein the file further includes signaling data; a decoder to decode a part or all of the point cloud data based on the signaling data; and a renderer to render a part or all of the decoded point cloud data, wherein the signaling data include at least one parameter set and spatial region information, wherein the point cloud data are divided into one or more 3D spatial regions, wherein the spatial region information includes number information for identifying a number of the one or more 3D spatial regions, region identification information for identifying each 3D spatial region, and position information related to an anchor point of each 3D spatial region, wherein the spatial region information is at least static spatial region information that does not change over time or dynamic spatial region information that dynamically changes over time, and wherein the point cloud data include geometry data and attribute data.

Patent '687, claim 13: An apparatus for receiving point cloud data, the apparatus comprising: a receiver configured to receive a file including a bitstream including point cloud data including geometry data and attribute data; a decapsulator configured to decapsulate the file; and a decoder configured to decode the point cloud data, wherein the file includes a first track including the geometry data and the attribute data, wherein the file includes a second track including bounding box information for the point cloud data, wherein the bounding box information includes extension information of a bounding box, wherein the second track further includes a timed metadata track including a sample entry for a dynamic type and a sample including dynamic spatial region information including dimension information for a spatial region for the bounding box, and wherein the spatial region is dynamically changed over time.

Patent '004, claim 1: An apparatus for receiving point cloud data, the apparatus comprising: a receiver configured to receive a bitstream including point cloud data; and a decoder configured to: decode geometry data of the point cloud data partially, wherein the geometry data is represented based on an octree structure including level of details (LoDs), and wherein a geometry position of an octree node of a specific LOD of an index that is less than a largest index of LoDs in the octree structure is reconstructed based on a center position of the octree node for spatial scalability, and decode attribute data of the point cloud data based on the reconstructed geometry position.
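The '004 claims in the chart above describe partial (spatially scalable) geometry decoding: when the octree is decoded only down to a coarse LoD rather than its full depth, each partially decoded node's position is reconstructed at the node center, and attributes are then decoded against that reconstructed geometry. Here is a small sketch under assumed conventions (cubic nodes whose edge length halves per level); all names are hypothetical, not from Patent '004.

# Sketch of center-position reconstruction for spatial scalability: a node
# decoded at a coarse LoD is placed at its center instead of an exact leaf
# position. The fixed-depth cube arithmetic is a hypothetical illustration.

def node_center(node_origin, node_size):
    """Center of a cubic octree node given its origin corner and edge length."""
    return tuple(o + node_size / 2 for o in node_origin)

def reconstruct_at_lod(node_origin, root_size, lod, max_lod):
    """Reconstruct a node position when decoding stops at lod < max_lod."""
    node_size = root_size / (2 ** lod)   # edge length at this LoD
    if lod < max_lod:
        return node_center(node_origin, node_size)  # center substitution
    return node_origin                   # full depth: exact position

# Decoding stopped two levels early: the point is placed mid-node.
print(reconstruct_at_lod((0.0, 0.0, 0.0), root_size=8.0, lod=1, max_lod=3))
# -> (2.0, 2.0, 2.0)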
Claim Chart 3: Application 18/849,322 vs. U.S. Patent Nos. 11,483,363, 11,601,488 and 11,803,986

Application '322, claim 1: A method of transmitting point cloud data, the method comprising: encoding point cloud data; and transmitting a bitstream containing the point cloud data.

Patent '363, claim 1: A method for transmitting point cloud data, the method comprising: encoding point cloud data, the encoding the point cloud data including: encoding geometry data for the point cloud data, encoding attribute data for the point cloud data, wherein the encoding the attribute data is performed based on a prediction value for the attribute data, wherein the prediction value is generated based on a prediction mode, wherein a first value of the prediction mode represents an average of near points of the point cloud data, a second value of the prediction mode represents a first near point of the near points, a third value of the prediction mode represents a second near point of the near points, and a fourth value of the prediction mode represents a third near point of the near points, and wherein the prediction mode is selected based on a value that is generated according to a difference of a residual and a reconstructed residual for the attribute data, an estimated bit size for the residual, and a square root of a quantization value for the attribute data; and transmitting a bitstream including the point cloud data.

Patent '488, claim 1: A method for transmitting point cloud data, the method comprising: encoding point cloud data including geometry and attribute, the geometry representing positions of points of the point cloud data and the attribute including at least one of color and reflectance of the points, wherein: the attribute is encoded based on one or more LODs (Level Of Details) that are generated by reorganizing the points, one or more neighbor points of a point of a LOD of the one or more LODs are selected based on a maximum neighbor distance; and transmitting a bitstream including the encoded point cloud data, wherein the maximum neighbor distance is generated based on a LOD and a neighbor search range for the point, wherein the maximum neighbor distance is represented as (2^LoD)^2×3×NN_range, and wherein the LoD represents a level of LOD of the point, and the NN_range is a neighbor point search range that represents a number of one or more octree nodes around the point.

Patent '986, claim 1: A method of transmitting point cloud data, the method comprising: encoding the point cloud data; and transmitting a bitstream containing the point cloud data, wherein the encoding the point cloud data includes: geometry encoding geometry information of the point cloud data based on a tree including the point cloud data, and wherein the geometry information is quantized based on a quantization parameter, and an offset is further applied to the point cloud data to scale a position of the geometry information.

Application '322, claim 8: A device for transmitting point cloud data, comprising: an encoder configured to encode point cloud data; and a transmitter configured to transmit the point cloud data.

Patent '363, claim 5: An apparatus for transmitting point cloud data, the apparatus comprising: an encoder configured to encode point cloud data, the encoder including: an encoder configured to encode geometry data for the point cloud data, an encoder configured to encode attribute data for the point cloud data, wherein the attribute data is encoded based on a prediction value for the attribute data, wherein the prediction value is generated based on a prediction mode, wherein a first value of the prediction mode represents an average of near points of the point cloud data, a second value of the prediction mode represents a first near point of the near points, a third value of the prediction mode represents a second near point of the near points, and a fourth value of the prediction mode represents a third near point of the near points, and wherein the prediction mode is selected based on a value that is generated according to a difference of a residual and a reconstructed residual for the attribute data, an estimated bit size for the residual, and a square root of a quantization value for the attribute data; and a transmitter configured to transmit a bitstream including the point cloud data.

Patent '488, claim 16: A device of transmitting point cloud data, the device comprising: an encoder configured to encode point cloud data including geometry and attribute, the geometry representing positions of points of the point cloud data and the attribute including at least one of color and reflectance of the points, wherein: the attribute is encoded based on one or more LODs (Level Of Details) that are generated by reorganizing the points, one or more neighbor points of a point of a LOD of the one or more LODs are selected based on a maximum neighbor distance; and a transmitter configured to transmit a bitstream including the encoded point cloud data, wherein the maximum neighbor distance is generated based on a LOD and a neighbor search range for the point, wherein the maximum neighbor distance is represented as (2^LoD)^2×3×NN_range, and wherein the LoD represents a level of LOD of the point, and the NN_range is a neighbor point search range that represents a number of one or more octree nodes around the point.

Patent '986, claim 6: A device for transmitting point cloud data, the device comprising: an encoder configured to encode the point cloud data; and a transmitter configured to transmit a bitstream containing the point cloud data, wherein the encoder further includes: a geometry encoder configured to geometry encode geometry information of the point cloud data based on a tree including the point cloud data, and wherein the geometry information is quantized based on a quantization parameter, and an offset is further applied to the point cloud data to scale a position of the geometry information.

Application '322, claim 9: A method of receiving point cloud data, the method comprising: receiving a bitstream containing point cloud data; and decoding the point cloud data.

Patent '363, claim 10: A method for receiving point cloud data, the method comprising: receiving a bitstream including point cloud data; decoding the bitstream, the decoding including: decoding geometry data for the point cloud data, decoding attribute data for the point cloud data, wherein the attribute data is decoded based on a prediction value for the attribute data, wherein the prediction value is generated based on a prediction mode, wherein a first value of the prediction mode represents an average of near points of the point cloud data, a second value of the prediction mode represents a first near point of the near points, a third value of the prediction mode represents a second near point of the near points, and a fourth value of the prediction mode represents a third near point of the near points, and wherein the prediction mode is selected based on a value that is generated according to a difference of a residual and a reconstructed residual for the attribute data, an estimated bit size for the residual, and a square root of a quantization value for the attribute data.

Patent '488, claim 6: A method for processing point cloud data, the method comprising: receiving a bitstream including the point cloud data, the point cloud data including geometry and attribute, the geometry representing positions of points of the point cloud data, and the attribute including at least one of color and reflectance of the points; and decoding the point cloud data, wherein the attribute is decoded based on one or more LODs (Level Of Details) that are generated by reorganizing the points, one or more neighbor points of a point of a LOD of the one or more LODs are selected based on a maximum neighbor distance, wherein the maximum neighbor distance is generated based on a LOD and a neighbor search range for the point, wherein the maximum neighbor distance is represented as (2^LoD)^2×3×NN_range, and wherein the LoD represents a level of LOD of the point, and the NN_range is a neighbor point search range that represents a number of one or more octree nodes around the point.

Patent '986, claim 11: A method of receiving point cloud data, the method comprising: receiving a bitstream containing the point cloud data; and decoding the point cloud data, wherein the decoding the point cloud includes: geometry decoding geometry information of the point cloud data based on a tree including the point cloud data, and wherein an offset is applied to the point cloud data to scale a position of the geometry information, and the geometry information is de-quantized based on a quantization parameter.

Application '322, claim 15: A device for receiving point cloud data, comprising: a receiver configured to receive point cloud data; and a decoder configured to decode the point cloud data.

Patent '363, claim 9: An apparatus for receiving point cloud data, the apparatus comprising: a receiver configured to receive a bitstream including point cloud data; a decoder configured to decode the bitstream, the decoder including: a decoder configured to decode geometry data for the point cloud data, a decoder configured to decode attribute data for the point cloud data, wherein the attribute data is decoded based on a prediction value for the attribute data, wherein the prediction value is generated based on a prediction mode, wherein a first value of the prediction mode represents an average of near points of the point cloud data, a second value of the prediction mode represents a first near point of the near points, a third value of the prediction mode represents a second near point of the near points, and a fourth value of the prediction mode represents a third near point of the near points, and wherein the prediction mode is selected based on a value that is generated according to a difference of a residual and a reconstructed residual for the attribute data, an estimated bit size for the residual, and a square root of a quantization value for the attribute data.

Patent '488, claim 11: A device for processing point cloud data, the device comprising: a receiver to receive a bitstream including the point cloud data, the point cloud data including geometry and attribute, the geometry representing positions of points of the point cloud data, the attribute including at least one of color and reflectance of the points; and a decoder to decode the point cloud data, wherein the attribute is decoded based on one or more LODs (Level Of Details) that are generated by reorganizing the points, one or more neighbor points of a point of a LOD of the one or more LODs are selected based on a maximum neighbor distance, wherein the maximum neighbor distance is generated based on a LOD and a neighbor search range for the point, wherein the maximum neighbor distance is represented as (2^LoD)^2×3×NN_range, and wherein the LoD represents a level of LOD of the point, and the NN_range is a neighbor point search range that represents a number of one or more octree nodes around the point.

Patent '986, claim 16: A device for receiving point cloud data, the device comprising: a receiver configured to receive a bitstream containing the point cloud data; and a decoder configured to decode the point cloud data, wherein the decoder includes: a geometry decoder configured to geometry decode geometry information of the point cloud data based on a tree including the point cloud information, and wherein an offset is applied to the point cloud data to scale a position of the geometry information, and the geometry data is de-quantized based on a quantization parameter.
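The '488 claims bound neighbor selection by a maximum neighbor distance. In the USPTO full text the formula's superscripts are garbled ("2.sup.LoD.sup.2×3×NN_range"); the chart above renders it as (2^LoD)^2×3×NN_range, that is, the squared diagonal of an octree node with edge 2^LoD (3·(2^LoD)^2) scaled by the NN_range node count. That reading is an editorial assumption, and the sketch below, with hypothetical names, is only an illustration under it.

# Sketch of a maximum-neighbor-distance bound, assuming the reading
# (2**lod)**2 * 3 * nn_range (squared node diagonal times search range).
# This reconstruction of the garbled full-text formula is an assumption.

def max_neighbor_dist_sq(lod, nn_range):
    """Upper bound on squared distance for neighbor candidates at a LoD."""
    edge = 2 ** lod                      # octree node edge length at this LoD
    return (edge ** 2) * 3 * nn_range    # squared node diagonal x search range

def select_neighbors(point, candidates, lod, nn_range, k=3):
    """Keep the k nearest candidates within the maximum neighbor distance."""
    bound = max_neighbor_dist_sq(lod, nn_range)

    def d2(q):
        return sum((a - b) ** 2 for a, b in zip(point, q))

    in_range = [q for q in candidates if d2(q) <= bound]
    return sorted(in_range, key=d2)[:k]

print(select_neighbors((0, 0, 0), [(1, 1, 1), (9, 9, 9)], lod=1, nn_range=1))
# bound = 4 * 3 * 1 = 12, so only (1, 1, 1) (squared distance 3) qualifies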
Claim Chart 4: Application 18/849,322 vs. U.S. Patent Nos. 11,818,190, 11,882,303 and 11,895,341

Application '322, claim 1: A method of transmitting point cloud data, the method comprising: encoding point cloud data; and transmitting a bitstream containing the point cloud data.

Patent '190, claim 1: A method for transmitting point cloud data, the method comprising: encoding point cloud data; encapsulating the point cloud data based on a file; and transmitting the point cloud data, wherein the file includes a track for atlas data for the point cloud data and a component track including the point cloud data, wherein the track includes atlas parameter sample group information including one or more units including the atlas data and Supplemental Enhancement Information (SEI) messages for a sample group, wherein the atlas parameter sample group information has a grouping type, wherein the SEI messages include an essential SEI message and a non-essential SEI message, wherein the file further includes information for a type of at least one of the essential SEI message or the non-essential SEI message, wherein the atlas parameter sample group information further includes information related to a number of the essential SEI message or a number of the non-essential SEI message, and wherein the atlas parameter sample group information is identified based on the grouping type.

Patent '303, claim 1: A method of processing point cloud data in a transmission device, the method comprising: encoding the point cloud data including geometry information and attribute information, wherein the geometry information represents positions of points of the point cloud data and the attribute information represents attributes of the points of the point cloud data, wherein the encoding the point cloud data includes encoding the geometry information and encoding the attribute information, wherein the encoding the attribute information includes generating at least one LOD (Level of Detail), wherein the generating at least one LOD includes sampling at least one point in at least one node of an octree, and wherein the at least one point is a closest point from a center of the at least one node; and transmitting a bitstream including the encoded point cloud data, wherein the bitstream further includes LOD generation-related information, the LOD generation-related information including information that represents a method for the sampling.

Patent '341, claim 1: A method of transmitting point cloud data, the method comprising: encoding point cloud data including geometry data and attribute data; and transmitting a bitstream including the point cloud data, wherein a bitstream for the geometry data includes Network Abstract Layer (NAL) units, wherein each NAL unit includes geometry data in each depth of a tree related to the geometry data, wherein a bitstream for the attribute data includes Network Abstract Layer (NAL) units, and wherein each NAL unit includes attribute data in each Level of Detail (LOD) related to the attribute data.

Application '322, claim 8: A device for transmitting point cloud data, comprising: an encoder configured to encode point cloud data; and a transmitter configured to transmit the point cloud data.

Patent '190, claim 2: An apparatus for transmitting point cloud data, the apparatus comprising: an encoder configured to encode point cloud data; an encapsulator configured to encapsulate the point cloud data based on a file; and a transmitter configured to transmit the point cloud data, wherein the file includes a track for atlas data for the point cloud data and a component track including the point cloud data, wherein the track includes atlas parameter sample group information including one or more units including the atlas data and Supplemental Enhancement Information (SEI) messages for a sample group, wherein the atlas parameter sample group information has a grouping type, wherein the SEI messages include an essential SEI message and a non-essential SEI message, wherein the file further includes information for a type of at least one of the essential SEI message or the non-essential SEI message, wherein the atlas parameter sample group information further includes information related to a number of the essential SEI message or a number of the non-essential SEI message, and wherein the atlas parameter sample group information is identified based on the grouping type.

Patent '303, claim 13: A transmission device for processing point cloud data, the transmission device comprising: an encoder configured to encode the point cloud data including geometry information and attribute information, wherein the geometry information represents positions of points of the point cloud data and the attribute information represents attributes of the points of the point cloud data, wherein the encoder includes a first encoder configured to encode the geometry information and a second encoder configured to encode the attribute information, wherein the second encoder includes an LOD (Level of Detail) generator configured to generate at least one LOD, wherein the LOD generator performs sampling at least one point in at least one node of an octree, and wherein the at least one point is a closest point from a center of the at least one node; and a transmitter configured to transmit a bitstream including the encoded point cloud data, wherein the bitstream further includes LOD generation-related information, the LOD generation-related information including information that represents a method for the sampling.

Patent '341, claim 5: An apparatus for transmitting point cloud data, the apparatus comprising: an encoder configured to encode point cloud data including geometry data and attribute data; and a transmitter configured to transmit a bitstream including the point cloud data, wherein a bitstream for the geometry data includes Network Abstract Layer (NAL) units, wherein each NAL unit includes geometry data in each depth of a tree related to the geometry data, wherein a bitstream for the attribute data includes Network Abstract Layer (NAL) units, and wherein each NAL unit includes attribute data in each Level of Detail (LOD) related to the attribute data.

Application '322, claim 9: A method of receiving point cloud data, the method comprising: receiving a bitstream containing point cloud data; and decoding the point cloud data.

Patent '190, claim 3: A method for receiving point cloud data, the method comprising: receiving a file including point cloud data; decapsulating the file; decoding the point cloud data; and wherein the file includes a track for atlas data for the point cloud data and a component track including the point cloud data, wherein the track includes atlas parameter sample group information including one or more units including the atlas data and Supplemental Enhancement Information (SEI) messages for a sample group, wherein the atlas parameter sample group information has a grouping type, wherein the SEI messages include an essential SEI message and a non-essential SEI message, wherein the file further includes information for a type of at least one of the essential SEI message or the non-essential SEI message, wherein the atlas parameter sample group information further includes information related to a number of the essential SEI message or a number of the non-essential SEI message, and wherein the atlas parameter sample group information is identified based on the grouping type.

Patent '303, claim 4: A method of processing point cloud data in a reception device, the method comprising: receiving a bitstream including the point cloud data, the point cloud data including geometry information and attribute information, wherein the geometry information represents positions of points of the point cloud data and the attribute information represents attributes of the points of the point cloud data; and decoding the point cloud data, wherein the decoding the point cloud data includes decoding the geometry information and decoding the attribute information, wherein the decoding the attribute information includes generating at least one LOD (Level of Detail), wherein the generating at least one LOD includes sampling at least one point in at least one node of an octree, wherein the at least one point is a closest point from a center of the at least one node, wherein the bitstream further includes LOD generation-related information, the LOD generation-related information including information that represents a method for the sampling.

Patent '341, claim 9: A method of receiving point cloud data, the method comprising: receiving a bitstream including point cloud data including geometry data and attribute data; decoding the point cloud data, wherein a bitstream for the geometry data includes Network Abstract Layer (NAL) units, wherein each NAL unit includes geometry data in each depth of a tree related to the geometry data, wherein a bitstream for the attribute data includes Network Abstract Layer (NAL) units, and wherein each NAL unit includes attribute data in each Level of Detail (LOD) related to the attribute data.

Application '322, claim 15: A device for receiving point cloud data, comprising: a receiver configured to receive point cloud data; and a decoder configured to decode the point cloud data.

Patent '190, claim 4: An apparatus for receiving point cloud data, the apparatus comprising: a receiver configured to receive a file including point cloud data; a decapsulator configured to decapsulate the file; a decoder configured to decode the point cloud data; and wherein the file includes a track for atlas data for the point cloud data and a component track including the point cloud data, wherein the track includes atlas parameter sample group information including one or more units including the atlas data and Supplemental Enhancement Information (SEI) messages for a sample group, wherein the atlas parameter sample group information has a grouping type, wherein the SEI messages include an essential SEI message and a non-essential SEI message, wherein the file further includes information for a type of at least one of the essential SEI message or the non-essential SEI message, wherein the atlas parameter sample group information further includes information related to a number of the essential SEI message or a number of the non-essential SEI message, and wherein the atlas parameter sample group information is identified based on the grouping type.

Patent '303, claim 7: A reception device for processing point cloud data, the reception device comprising: a receiver configured to receive a bitstream including the point cloud data, the point cloud data including geometry information and attribute information, wherein the geometry information represents positions of points of the point cloud data and the attribute information represents attributes of the points of the point cloud data; and a decoder configured to decode the point cloud data, wherein the decoder includes a first decoder for decoding the geometry information and a second decoder for decoding the attribute information, wherein the second decoder includes an LOD (Level of Detail) generator configured to generate at least one LOD, wherein the LOD generator performs sampling at least one point in at least one node of an octree, wherein the at least one point is a closest point from a center of the at least one node, and wherein the bitstream further includes LOD generation-related information, the LOD generation-related information including information that represents a method for the sampling.

Patent '341, claim 13: An apparatus for receiving point cloud data, the apparatus comprising: a receiver configured to receive a bitstream including point cloud data including geometry data and attribute data; a decoder configured to decode the point cloud data, wherein a bitstream for the geometry data includes Network Abstract Layer (NAL) units, wherein each NAL unit includes geometry data in each depth of a tree related to the geometry data, wherein a bitstream for the attribute data includes Network Abstract Layer (NAL) units, and wherein each NAL unit includes attribute data in each Level of Detail (LOD) related to the attribute data.
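The '303 claims in the chart above recite a specific LOD sampling rule: within each octree node, the retained sample is the point closest to the node center. A minimal sketch follows; the (center, points) node representation and all names are hypothetical, not taken from Patent '303.

# Sketch of closest-to-center LOD sampling: keep, per octree node, the point
# nearest to the node's center. The node bookkeeping is a hypothetical
# stand-in for a real octree traversal.

def closest_to_center(node_points, center):
    """Return the point in a node nearest to the node's center."""
    def d2(p):
        return sum((a - c) ** 2 for a, c in zip(p, center))
    return min(node_points, key=d2)

def sample_lod(nodes):
    """nodes: list of (center, points) pairs; one sample point per node."""
    return [closest_to_center(pts, center) for center, pts in nodes if pts]

nodes = [((1.0, 1.0, 1.0), [(0.2, 0.1, 0.3), (0.9, 1.1, 1.0)]),
         ((3.0, 1.0, 1.0), [(2.5, 0.5, 0.8)])]
print(sample_lod(nodes))  # -> [(0.9, 1.1, 1.0), (2.5, 0.5, 0.8)]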
Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLEN C WONG, whose telephone number is (571) 272-7341. The examiner can normally be reached Flex Monday-Thursday, 9:30 am-7:30 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sath V Perungavoor, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALLEN C WONG/
Primary Examiner, Art Unit 2488

Prosecution Timeline

Sep 20, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12604009
IMAGE ENCODING/DECODING METHOD AND APPARATUS
2y 5m to grant · Granted Apr 14, 2026
Patent 12598321
ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
2y 5m to grant · Granted Apr 07, 2026
Patent 12587671
VIDEO ENCODING APPARATUS AND A VIDEO DECODING APPARATUS
2y 5m to grant · Granted Mar 24, 2026
Patent 12581134
FEATURE ENCODING/DECODING METHOD AND DEVICE, AND RECORDING MEDIUM STORING BITSTREAM
2y 5m to grant · Granted Mar 17, 2026
Patent 12581091
METHODS AND APPARATUS OF ENCODING/DECODING VIDEO PICTURE DATA
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner, based on the five most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
83%
Grant Probability
95%
With Interview (+11.8%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 805 resolved cases by this examiner. Grant probability derived from career allow rate.
