DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/09/2024 is being considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Gao (US 2021/0248784).
As per claims 1, 8, and 15, Gao teaches a method and device for transmitting point cloud data (Gao, ¶[0054]: “According to embodiments, a point cloud may be represented by a general tree structure with an octree partition,” disclosing a point cloud data transmission method), the method comprising: encoding geometry data in the point cloud data; encoding attribute data in the point cloud data based on the geometry data; decoding the geometry data based on the signaling information (Gao, ¶[0135]: “encoding geometry information of the nodes (block 1130), and encoding attribute information of the nodes before the entire point cloud is partitioned (block 1140),” disclosing encoding geometry data in the point cloud data and encoding attribute data, and Fig. 5 showing the attribute encoding is based on the geometry); decoding the attribute data based on the signaling information and the decoded geometry data (Gao, ¶[0062]: “According to embodiments, an encoder/decoder may compress/decompress, respectively, the points according to the order defined by I. At each iteration i, a point P.sub.i may be selected,” disclosing decoding the attribute data; and ¶[0069]: “The proposed methods and apparatuses may be used separately or combined in any order. Further, each of the methods (or embodiments), encoder, and decoder may be implemented by processing circuitry (e.g., one or more processors or one or more integrated circuits),” decoding being the inverse of the disclosed encoding operations); and transmitting the encoded geometry data, the encoded attribute data, and signaling information (Gao, ¶[0075]: “According to the disclosure, instead of coding attributes after geometry coding is completed, in certain embodiments, the geometry of a point cloud is first encoded until a depth of k is reached, where k is specified by an encoder and transmitted in the bitstream,” disclosing transmitting the encoded geometry data, since the bitstream is transmitted), wherein the encoding of the geometry data comprises: generating a predictive tree of a current frame based on the geometry data (Gao, ¶[0057]: “FIG. 5 depicts a predictive tree for a rabbit, where a magnified block shows a part of the tree,” disclosing generating a predictive tree); generating a predictive tree of a reference frame based on geometry data of the reference frame (Gao, ¶[0057]: “For example, the position of a point can be predicted from the position of its parent point, or from the positions of its parent and its grandparent point. For example, FIG. 5 depicts a predictive tree that spans a point cloud of a rabbit,” and Fig. 5, disclosing generating a predictive tree of a reference frame based on geometry data of the reference frame, shown by the rabbit in Fig. 5); generating a predicted value from the predictive tree of the reference frame based on the signaling information and a parent node of a node to be encoded in a structure of the predictive tree of the current frame (Gao, ¶[0054]: “This is illustrated in FIG. 4, wherein a shaded circle denotes an occupied node in the tree while a blank circle denotes an unoccupied node,” and ¶[0057]: “For example, the position of a point can be predicted from the position of its parent point,” disclosing prediction from the parent node); and acquiring residual information by predicting the node to be encoded based on the generated predicted value (Gao, ¶[0062]: “More precisely, the attribute values (a.sub.i).sub.i∈0 . . . k-1 may be predicted by using a linear interpolation process-based on the distances of the nearest neighbours of point i. Let N.sub.i be the set of the k-nearest neighbours of the current point i, and let be their decoded/reconstructed attribute values, with being their distances to the current point. Here, the predicted attribute value â.sub.1”; ¶[0033]: “FIG. 5 is an illustration of a predictive tree, according to embodiments”; ¶[0057]: “FIG. 5 depicts a predictive tree for a rabbit, where a magnified block shows a part of the tree”; and ¶[0055]: “FIG. 4B shows a depth-first traversal order where nodes are visited/processed starting from a root node followed by its first occupied child and its own first occupied child until reaching the leaf nodes,” disclosing predicting the node to be encoded according to the nodes shown in Fig. 5).
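By way of illustration only, and not as part of the record, the predictive-tree geometry prediction and residual computation described in the passages of Gao cited above can be sketched as follows; all function and variable names are illustrative, not drawn from Gao:

```python
# Illustrative sketch: in a predictive tree, a node's position is predicted
# from its parent (or its parent and grandparent), and the residual
# information is the difference between the actual and predicted positions.

def predict_position(parent, grandparent=None):
    """Predict a point's position from its ancestors in the predictive tree."""
    if grandparent is None:
        return parent                              # simple parent prediction
    # linear prediction: parent + (parent - grandparent)
    return tuple(2 * p - g for p, g in zip(parent, grandparent))

def residual(point, predicted):
    """Residual information: actual position minus predicted value."""
    return tuple(a - b for a, b in zip(point, predicted))

parent = (10, 20, 30)
grandparent = (8, 18, 29)
point = (13, 23, 32)

pred = predict_position(parent, grandparent)   # (12, 22, 31)
res = residual(point, pred)                    # (1, 1, 1)
```

Only the residual need be encoded, since the decoder can regenerate the same prediction from the already-decoded ancestor positions.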
As per claims 2 and 9, Gao teaches the method of claim 1, wherein the predicted value is determined based on one or more reference nodes of the reference frame (Gao, ¶[0057]: “For prediction of a point, all ancestors can be used. For example, the position of a point can be predicted from the position of its parent point, or from the positions of its parent and its grandparent point.” The parent and grandparent points of the frame shown in Fig. 5 represent reference nodes).
As per claims 3 and 10, Gao teaches the method of claim 2, wherein the predicted value is determined by applying a variation between the one or more reference nodes of the reference frame (Gao, ¶[0057]: “For prediction of a point, all ancestors can be used. For example, the position of a point can be predicted from the position of its parent point, or from the positions of its parent and its grandparent point.” The difference between these positions represents a variation between the reference nodes of the reference frame of the rabbit).
As per claims 4 and 11, Gao teaches the method of claim 1, wherein the point cloud data is acquired using one or more lasers, wherein an elevation angle of the current frame and an elevation angle of the reference frame are the same (Gao, ¶[0094]: “while a predictive tree-based approach works well for relatively less dense point clouds, such as those generated by via Light Detection and Ranging (LIDAR) (e.g. as used in autonomous driving vehicles). Further, tri-soup coding may be more applicable for a dense surface point cloud.” LIDAR is a laser, and the elevation angle can be seen in Fig. 5; see also ¶[0117]: “According to an embodiment, at least one of the geometry coding mode and the attribute coding mode can be signaled at the sequence level, frame level or slice level.” Signaling at the frame level encompasses frames having the same elevation angle).
As per claims 5 and 12, Gao teaches the method of claim 1, wherein the point cloud data is acquired using one or more lasers, wherein an elevation angle of the current frame is different from an elevation angle of the reference frame (Gao, ¶[0094]: “while a predictive tree-based approach works well for relatively less dense point clouds, such as those generated by via Light Detection and Ranging (LIDAR) (e.g. as used in autonomous driving vehicles). Further, tri-soup coding may be more applicable for a dense surface point cloud.” LIDAR is a laser, and the elevation angle can be seen in Fig. 5; see also ¶[0117]: “According to an embodiment, at least one of the geometry coding mode and the attribute coding mode can be signaled at the sequence level, frame level or slice level.” Signaling at the frame level encompasses frames having different elevation angles).
As per claims 6 and 13, Gao teaches the method of claim 5, wherein the residual information comprises a difference between a laser ID of the current frame and a laser ID of the reference frame (Gao, ¶[0094]: “while a predictive tree-based approach works well for relatively less dense point clouds, such as those generated by via Light Detection and Ranging (LIDAR) (e.g. as used in autonomous driving vehicles.” Both laser IDs are required for 3D perception in autonomous vehicles as a basis of LIDAR technology).
As per claims 7 and 14, Gao teaches the method of claim 1, wherein the signaling information comprises prediction mode information for indicating a method to determine the predicted value (Gao, ¶[0098]: “For predictive coding, according to an embodiment, the difference between a position of a point and its prediction may be found without quantization. In this case no geometry distortion will be introduced. In the same or another embodiment, the difference between the position of a point and its prediction may be quantized, and the difference may be quantized and encoded. In this case, geometry distortion may be introduced.” Whether the prediction residual is quantized or coded without quantization represents prediction mode information indicating a method to determine the predicted value).
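By way of illustration only, and not as part of the record, the two prediction-residual modes described in Gao ¶[0098] (coded without quantization, or quantized with possible geometry distortion) can be sketched as follows; all names are illustrative, not drawn from Gao:

```python
# Illustrative sketch of the two residual-coding modes in Gao ¶[0098]:
# the difference between a point's position and its prediction is either
# coded as-is (no geometry distortion) or quantized (distortion possible).

def code_residual(position, prediction, qstep=None):
    """Code the prediction residual, optionally quantizing it by qstep."""
    diff = position - prediction
    if qstep is None:
        return diff                    # no quantization: lossless
    return round(diff / qstep)         # quantized: distortion may be introduced

def reconstruct(prediction, coded, qstep=None):
    """Invert code_residual at the decoder side."""
    if qstep is None:
        return prediction + coded
    return prediction + coded * qstep

# lossless mode: reconstruction is exact (15 in, 15 out)
exact = reconstruct(12, code_residual(15, 12))
# quantized mode with step 2: residual 3 maps to 2 levels,
# so the reconstructed position becomes 16 (geometry distortion)
lossy = reconstruct(12, code_residual(15, 12, qstep=2), qstep=2)
```

The choice between the two modes is what the signaling information would have to convey to the decoder, which is why it constitutes prediction mode information.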
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANTIAGO GARCIA whose telephone number is (571)270-5182. The examiner can normally be reached Monday-Friday 9:30am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns, can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SANTIAGO GARCIA/Primary Examiner, Art Unit 2673
/SG/