DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Prior art cited in this Office action:
Sugio et al. (WO 2020196677 A1, hereinafter “Sugio”)
Xiong et al. (CN 112601082 A, hereinafter “Xiong”)
Ikonin et al. (KR 20230072487 A, hereinafter “Ikonin”)
Claim Objections
Claims 4-7 and 14-17 are objected to because of the following informalities: the variables m and M recited in claims 4 and 14 need to be properly defined (as positive integers) in the claims. The acronyms DC, recited in claims 6 and 16, and AC, recited in claims 7 and 17, also need to be defined in the claims themselves. Appropriate correction and/or explanation is respectfully requested.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3 and 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over Sugio et al. (WO 2020196677 A1, hereinafter “Sugio”) in view of Xiong et al. (CN 112601082 A, hereinafter “Xiong”).
Regarding claims 1, 9 and 13:
Sugio teaches a point cloud attribute encoding method (Sugio abstract, [0002]-[0003], where Sugio discloses an encoding method, a decoding method, and a decoding apparatus), comprising:
sorting point cloud data to be encoded to obtain sorted point cloud data, wherein
the point cloud data to be encoded are point cloud data with attributes to be encoded (Sugio [0009], where Sugio teaches a three-dimensional data encoding method for encoding a plurality of three-dimensional points, each of which has attribute information, wherein, when the plurality of three-dimensional points are classified into a plurality of layers, each of the plurality of three-dimensional points is classified such that the distance between the three-dimensional points belonging to each layer is longer in an upper layer than in a lower layer; a hierarchical structure is thereby generated, the attribute information of each of the plurality of three-dimensional points is encoded using the hierarchical structure, and the encoded attribute information and the hierarchical information used for generating the hierarchical structure are generated);
constructing a multilayer structure based on the sorted point cloud data and distances between the sorted point cloud data (Sugio [0009]-[0012], [0911], where Sugio teaches that the hierarchical information may include a distance threshold for classifying the plurality of three-dimensional points into the plurality of layers so that the distance between the three-dimensional points falls within a different distance range in each layer); and
encoding point cloud attributes for each of the nodes based on the multilayer
structure and the respective encoding mode (Sugio [0009]-[0016], [0911], where Sugio teaches that a hierarchical structure is thereby generated, the attribute information of each of the plurality of three-dimensional points is encoded using the hierarchical structure, and the encoded attribute information and the hierarchical information used for generating the hierarchical structure are generated).
Sugio fails to explicitly teach obtaining an encoding mode corresponding to each of the nodes in the multilayer structure, wherein the encoding mode corresponding to each of the nodes is a direct encoding mode, a predictive encoding mode, or a transform encoding mode, wherein the predictive encoding mode is to encode a node based on information of a neighboring node corresponding to the node, and wherein the transform encoding mode is to encode the node based on a transform matrix.
Xiong teaches that, according to an occupancy-map index, blocks of the geometry and attribute content are divided into unoccupied, occupied, and boundary blocks. Because the different types of blocks are generated by different strategies, they exhibit different characteristics under rate-distortion optimization (RDO), and it is well known that RDO consumes most of the HEVC encoding time. Encoding each type of block with an appropriate scheme therefore improves computational efficiency. Accordingly, Xiong studies the rate-distortion characteristics of the different block types and discloses a fast V-PCC encoding method guided by the occupancy map (Xiong [0034]-[0039], claim 1).
Therefore, taking the teachings of Sugio and Xiong as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to use a coding mode that is better suited for each layer, each block, or each group of points of the point cloud, in order to render the coding process more efficient and faster by improving the computational efficiency of the coding (Xiong [0037]).
Regarding claims 2 and 10:
Sugio in view of Xiong teaches wherein the sorting point cloud data to be encoded to obtain sorted point cloud data comprises:
based on three-dimensional coordinates of each of the point cloud data to be encoded, arranging the point cloud data to be encoded into a one-dimensional order from a three-dimensional distribution according to a preset rule to obtain the sorted point cloud data (Sugio [0184]-[0186], [0383]-[0387]; [0697], [0849], [0891]-[0894]).
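For clarity of claim interpretation only, the claimed arranging of three-dimensionally distributed points into a one-dimensional order may be sketched as follows. The choice of a Morton (Z-order) key as the “preset rule,” and all function names, are the examiner’s illustrative assumptions and are not drawn from the cited references:

```python
# Illustrative only: one possible "preset rule" (a Morton/Z-order sort) for
# arranging 3-D points into a one-dimensional order. The claims do not
# limit the rule to this choice.
def morton_key(x: int, y: int, z: int, bits: int = 10) -> int:
    """Interleave the bits of the three coordinates into a single sort key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def sort_points(points):
    """points: list of (x, y, z) integer coordinates; returns the sorted list."""
    return sorted(points, key=lambda p: morton_key(*p))
```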
Regarding claims 3 and 11:
Sugio in view of Xiong teaches wherein the constructing a multilayer structure based on the sorted point cloud data and distances between the sorted point cloud data comprises:
using the sorted point cloud data as nodes in a bottom layer; and
constructing the multilayer structure from bottom up based on the nodes in the
bottom layer and distances between the nodes in the bottom layer, wherein a distance between a
plurality of child nodes corresponding to a parent node of the multilayer structure is less than a
preset distance threshold (Sugio [0184]-[0186], [0697], [0849], [0891]-[0894]; Xiong [0007]-[0008], [0034]).
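For illustration of the claimed bottom-up construction only, the following sketch merges runs of consecutive sorted nodes whose spacing falls below a per-layer distance threshold, consistent with Sugio’s teaching that the distance range differs per layer ([0009]-[0012]). The centroid parent, the doubling of the threshold, and all names are the examiner’s illustrative assumptions, not limitations of the claims or disclosures of the references:

```python
import math

def build_layers(points, base_threshold=1.0, num_layers=3):
    """Bottom-up construction: consecutive sorted nodes closer than the
    layer's distance threshold share one parent (here, their centroid)
    in the layer above; the threshold grows for coarser upper layers."""
    layers = [points]  # layers[0] is the bottom layer of sorted points
    threshold = base_threshold
    for _ in range(num_layers - 1):
        children = layers[-1]
        parents, group = [], [children[0]]
        for prev, cur in zip(children, children[1:]):
            if math.dist(prev, cur) < threshold:
                group.append(cur)          # child joins the current parent
            else:
                parents.append(tuple(sum(c) / len(group) for c in zip(*group)))
                group = [cur]              # start a new parent group
        parents.append(tuple(sum(c) / len(group) for c in zip(*group)))
        layers.append(parents)
        threshold *= 2  # upper layers span larger distance ranges
    return layers  # layers[-1] is the top layer
```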
Regarding claim 12:
Sugio in view of Xiong teaches wherein the decoding point cloud attributes for each of the nodes based on the multilayer structure and the respective decoding mode comprises:
calculating a reconstructed first attribute coefficient for each of the nodes from top
to bottom based on the multilayer structure; and decoding each of the nodes from top to bottom based on the multilayer structure, the reconstructed first attribute coefficient of each of the nodes, and the decoding mode corresponding to each of the nodes (Sugio [0184]-[0186], [0697], [0849], [0891]-[0894]; Xiong [0007]-[0008], [0034]).
Claims 4-7 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Sugio et al. (WO 2020196677 A1, hereinafter “Sugio”) in view of Xiong et al. (CN 112601082 A, hereinafter “Xiong”) and further in view of Ikonin et al. (KR 20230072487 A, hereinafter “Ikonin”).
Regarding claims 4 and 14:
Sugio in view of Xiong fails to explicitly teach wherein the obtaining an encoding mode corresponding to each of nodes in the multilayer structure, wherein the encoding mode corresponding to each of the nodes is a direct encoding mode, a predictive encoding mode, or a transform encoding mode, comprises: setting the encoding mode corresponding to direct encoding nodes in the multilayer structure to be the direct encoding mode, the direct encoding nodes being nodes in the first layer of the multilayer structure; setting the encoding mode corresponding to predictive encoding nodes in the multilayer structure to be the predictive encoding mode, the predictive encoding nodes being nodes from a second layer to a layer m of the multilayer structure that do not have a parent node; and setting the encoding mode corresponding to transform encoding nodes in the multilayer structure to be the transform encoding mode, the transform encoding nodes being nodes from the second layer to the layer m of the multilayer structure that have a parent node; wherein the multilayer structure comprises M layers, and the layer m is a bottom layer.
However, Ikonin teaches that, compared with Fig. 16, the block diagram of Fig. 17 introduces multiple coding options within the same resolution layer, illustrated by options 1 through N in the layer-1 cost calculation unit 710. More than one layer, or all layers, may contain multiple options; that is, any of cost calculation units 613, 623, 633 may provide further options. These options include different reference pictures used for motion estimation/compensation; uni-, bi-, or multi-hypothesis prediction; inter- or intra-frame prediction; direct coding without prediction; the presence or absence of residual information; the quantization level of the residual; and the like. A cost is calculated for each coding option in the cost calculation unit 710, and the best option is then selected by minimum-cost pooling (720). The indicator (e.g., an index) 705 of the best selected option is sent to the layer information selection module 730, and if the corresponding point of the current layer is selected to transmit information, the indicator BestOpt is signaled in the bitstream. In the given example, options are described only for the first layer, but similar option-selection logic can be applied to all layers, or to other layers with different resolutions (Ikonin [0034]-[0037], figs. 16 and 17).
Therefore, taking the teachings of Sugio, Xiong, and Ikonin as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the application to select the coding mode corresponding to each section as claimed by the applicant, by analyzing each coding mode for each section and selecting the best coding mode, among the plurality of coding modes, for each desired section.
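For illustration of the claim interpretation applied above, the mode-assignment rule recited in claims 4 and 14 reduces to a simple per-node decision; the function name and signature below are the examiner’s illustrative assumptions:

```python
def assign_mode(layer: int, has_parent: bool) -> str:
    """Per the claims: nodes in the first (top) layer are direct-encoded;
    nodes from the second layer to layer m without a parent node are
    predictive-encoded; such nodes with a parent node are transform-encoded."""
    if layer == 1:
        return "direct"
    return "transform" if has_parent else "predictive"
```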
Regarding claims 5 and 15:
Sugio in view of Xiong and in view of Ikonin teaches wherein the direct encoding mode is to encode the direct encoding nodes directly based on information of the direct encoding nodes; the predictive encoding mode is to encode the predictive encoding nodes based on information of neighboring nodes within a proximity range of the respective predictive encoding nodes; and the transform encoding mode is to encode the transform encoding nodes using a transform matrix (Sugio [0006]-[0009], [0013]-[0016], [0037]; Xiong [0034]-[0039], claim 1; Ikonin [0034]-[0037]).
Regarding claims 6 and 16:
Sugio in view of Xiong and in view of Ikonin teaches wherein the encoding point cloud attributes for each of the nodes based on the multilayer structure and the respective encoding mode comprises:
calculating a first attribute coefficient of each of the nodes based on the multilayer
structure from bottom up, wherein the first attribute coefficient of a node in the bottom layer of
the multilayer structure is a raw point cloud attribute value corresponding to the node, and the first attribute coefficients of nodes in other layers are DC coefficients corresponding to the respective nodes in the other layers; and encoding each of the nodes from top to bottom based on the multilayer structure, the first attribute coefficient of each of the nodes, and the respective encoding mode of each of the nodes (Sugio [0006]-[0009], [0013]-[0016], [0037]; Xiong [0034]-[0039], claim 1; Ikonin [0034]-[0037]).
Regarding claims 7 and 17:
Sugio in view of Xiong and in view of Ikonin teaches wherein the encoding each of the nodes from top to bottom based on the multilayer structure, the first attribute coefficient of each of the nodes, and the respective encoding mode of each of the nodes comprises:
traversing the multilayer structure from top to bottom from m=1 to m=M-1, to
obtain second attribute coefficient and/or first attribute residual coefficient corresponding to each
of the nodes by:
taking nodes in a layer m as first target nodes, calculating the second attribute coefficients for each of the first target nodes and reconstructed first attribute coefficients of transform encoding mode child nodes of each of the first target nodes based on each of the first target nodes and the respective transform encoding mode child nodes; and
for each of predictive encoding nodes in a layer m+1, obtaining a second target node corresponding to each of the predictive encoding nodes in the layer m+1 respectively, and obtaining by estimation the first attribute residual coefficients of the corresponding predictive encoding nodes;
wherein the second attribute coefficient is an AC coefficient corresponding to each of the nodes, the second target node comprises K nodes in the layer m+1 that are closest to the respective predictive encoding node and have calculated the reconstructed first attribute coefficients, and K is a preset number of searches; and performing quantization and entropy encoding for the first attribute coefficients of the nodes in the first layer of the multilayer structure and the second attribute coefficients and/or the first attribute residual coefficients of the nodes in the other layers (Sugio [0006]-[0009], [0013]-[0016], [0037]; Xiong [0034]-[0039], claim 1; Ikonin [0034]-[0037]).
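For illustration of the DC/AC relationship recited in claims 6-7 and 16-17 only: the claims do not specify the transform matrix, but a two-point orthonormal (Haar-type) transform, assumed here purely as an example and not drawn from the cited references, shows how a DC coefficient propagates to the parent while the AC coefficient remains for quantization and entropy encoding:

```python
import math

def haar_pair(a: float, b: float):
    """Two-point orthonormal transform of a sibling pair of attribute
    values: DC carries the scaled average passed up to the parent node;
    AC carries the detail coefficient kept at the current layer."""
    dc = (a + b) / math.sqrt(2)
    ac = (a - b) / math.sqrt(2)
    return dc, ac

def inverse_haar_pair(dc: float, ac: float):
    """Exact inverse, as used during top-to-bottom reconstruction."""
    a = (dc + ac) / math.sqrt(2)
    b = (dc - ac) / math.sqrt(2)
    return a, b
```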
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEDNEL CADEAU whose telephone number is (571)270-7843. The examiner can normally be reached Mon-Fri 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chieh Fan can be reached at 571-272-3042. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WEDNEL CADEAU/Primary Examiner, Art Unit 2632 March 16, 2026