Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
2. The information disclosure statements (IDS) submitted on 03/11/2025 and 05/30/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Objections
3. Claims 3 and 24 are objected to because of the following informalities: each claim recites “said previously encoded voxels” and “said augmented feature”. There is insufficient antecedent basis for these limitations in the claims. Appropriate correction is required.
Claims 19 and 29 are objected to because of the following informalities: each claim recites “said previously decoded voxels” and “said augmented feature”. There is insufficient antecedent basis for these limitations in the claims. Appropriate correction is required.
Claim Rejections - 35 USC § 103
4. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
6. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
7. Claims 1-5 and 17-31 are rejected under 35 U.S.C. 103 as being unpatentable over Akhtar et al. (US 2023/0377208), hereinafter “Akhtar”, in view of Ma et al. (US 2023/0075442), hereinafter “MA”.
As per claim 1, Akhtar discloses a method of encoding point cloud data, comprising:
obtaining features associated with point cloud data for a point cloud (FIG. 2; paragraph 0056, point cloud encoder 200 may obtain a set of positions of points in the point cloud and a set of attributes. Point cloud encoder 200 may obtain the set of positions of the points in the point cloud and the set of attributes from data source 104 (FIG. 1). The positions may include coordinates of points in a point cloud. The attributes may include information about the points in the point cloud, such as colors associated with points in the point cloud), said point cloud data is represented as a sparse tensor at a level of detail (LoD) (paragraph 0078-0079, Encoder network 502 may correspond to all or a portion of point cloud encoder 200. Encoder network 502 may obtain PC tensors at four different scales capturing multiscale features at different level of details…That is, encoder network 502 may create sparse features from an original point cloud sparse tensor P at four different scales);
processing said features associated with said LoD to match a resolution of another LoD, wherein said another LoD is finer and subsequent to said LoD (FIG. 2; paragraph 0062, LOD generation unit 220 and lifting unit 222 may apply LOD processing and lifting, respectively, to the attributes of the reconstructed points. LOD generation is used to split the attributes into different refinement levels. Each refinement level provides a refinement to the attributes of the point cloud. The first refinement level provides a coarse approximation and contains few points; the subsequent refinement level typically contains more points, and so on. The refinement levels may be constructed using a distance-based metric or may also use one or more other classification criteria (e.g., subsampling from a particular order). Thus, all the reconstructed points may be included in a refinement level. Each level of detail is produced by taking a union of all points up to particular refinement level: e.g., LOD1 is obtained based on refinement level RL1, LOD2 is obtained based on RL1 and RL2, LODN is obtained by union of RL1, RL2, RLN.);
for each occupied voxel in said LoD, encoding a plurality of voxels at said another LoD based on said processed features (FIG. 2; Arithmetic Encoding Unit 226; paragraph 0062, LOD generation may be followed by a prediction scheme (e.g., predicting transform) where attributes associated with each point in the LOD are predicted from a weighted average of preceding points, and the residual is quantized and entropy coded) to obtain occupancy information at said another LoD (paragraph 0067, the occupancy of each of the eight children node at each octree level is signaled in the bitstream. When the signaling indicates that a child node at a particular octree level is occupied, the occupancy of children of this child node is signaled. The signaling of nodes at each octree level is signaled before proceeding to the subsequent octree level. At the final level of the octree, each node corresponds to a voxel position; when the leaf node is occupied, one or more points may be specified to be occupied at the voxel position. In some instances, some branches of the octree may terminate earlier than the final level due to quantization. In such cases, a leaf node is considered an occupied node that has no child nodes; see also paragraph 0050, At each node of an octree, an occupancy is signaled (when not inferred) for one or more of its child nodes (up to eight nodes). Multiple neighborhoods are specified including (a) nodes that share a face with a current octree node, (b) nodes that share a face, edge or a vertex with the current octree node, etc. Within each neighborhood, the occupancy of a node and/or its children may be used to predict the occupancy of the current node or its children); and
However, Akhtar does not explicitly disclose updating said processed features, based on occupancy information at said another LoD, to generate updated features associated with said another LoD.
In the same field of endeavor, MA discloses updating said processed features, based on occupancy information at said another LoD, to generate updated features associated with said another LoD (paragraph 0047-0048, the finally obtained geometric information and attribute information of the hidden layer feature are encoded respectively into binary bitstream to obtain a compressed bitstream…In some possible implementations, firstly, the frequency of occurrence of the geometric information in the hidden layer feature is determined. For example, the frequency of occurrence of geometric coordinates of the point cloud is determined by using an entropy model. Herein, the entropy model is based on a trainable probability density distribution represented by factorization, or a conditional entropy model based on context information. Then an adjusted hidden layer feature is obtained by performing adjustment through weighting the hidden layer feature according to the frequency. For example, the greater the probability of occurrence, the greater the weight).
One of ordinary skill in the art, before the effective filing date of the claimed invention, would have been motivated to combine the elements taught by Akhtar with those of MA because both references are drawn to the same field of endeavor, namely point cloud compression using geometric and attribute information, and because such a combination represents a mere combination of prior art elements, according to known methods, to yield a predictable result, such as improving the accuracy of the determined geometric information and attribute information and thus the encoding efficiency. This rationale applies to all combinations of Akhtar and MA used in this Office Action unless otherwise noted.
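For illustration of the per-level octree occupancy signaling cited above (Akhtar, paragraphs 0050 and 0067), the following sketch emits one 8-bit child-occupancy mask per occupied node, level by level, before proceeding to the next level. This is an illustrative sketch only; the function names, coordinates, and tree depth are hypothetical and are not drawn from either reference.

```python
# Illustrative sketch: for each occupied octree node, an 8-bit mask signals
# which of its eight children are occupied; all masks at one level are
# emitted before descending to the next level (cf. Akhtar, pars. 0050, 0067).

def occupancy_bitstream(points, depth):
    """Return the 8-bit child-occupancy masks, level by level."""
    masks = []
    nodes = {(0, 0, 0): list(points)}        # level-0 root holds all points
    for level in range(1, depth + 1):
        shift = depth - level
        next_nodes = {}
        for node in sorted(nodes):           # fixed traversal order per level
            mask = 0
            for (x, y, z) in nodes[node]:
                key = (x >> shift, y >> shift, z >> shift)   # child-node id
                octant = ((key[0] & 1) << 2) | ((key[1] & 1) << 1) | (key[2] & 1)
                mask |= 1 << octant
                next_nodes.setdefault(key, []).append((x, y, z))
            masks.append(mask)               # signal this node's children
        nodes = next_nodes                   # descend only into occupied nodes
    return masks

# Two diagonally opposite voxels in a 4x4x4 cube (depth-2 octree):
masks = occupancy_bitstream([(0, 0, 0), (3, 3, 3)], depth=2)
```

Only occupied children are recursed into, which is why, at the final level, each remaining node corresponds to an occupied voxel position, as the cited passage describes.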
As per claim 2, Akhtar and MA disclose the method of claim 1, wherein said encoding a plurality of voxels at said another LoD (Akhtar; FIG. 2, Arithmetic Encoding Unit 226) comprises, for a current voxel belonging to said plurality of voxels at said another LoD: obtaining occupancy information of previously encoded voxels of said plurality of voxels at said another LoD (Akhtar; paragraph 0050, At each node of an octree, an occupancy is signaled (when not inferred) for one or more of its child nodes (up to eight nodes). Multiple neighborhoods are specified including (a) nodes that share a face with a current octree node, (b) nodes that share a face, edge or a vertex with the current octree node, etc. Within each neighborhood, the occupancy of a node and/or its children may be used to predict the occupancy of the current node or its children; see also paragraph 0067);
obtaining context information for encoding said current voxel (see paragraphs 0048 and 0109 of MA);
generating an augmented feature by associating said context information with feature of said current voxel (paragraphs 0048 and 0109 of MA);
aggregating another feature for said current voxel based on said augmented feature (paragraphs 0064-0065 of MA);
generating an occupancy probability for said current voxel based on said another feature (paragraph 0089 of MA); and
encoding occupancy information for said current voxel, based on said occupancy probability for said current voxel (paragraphs 0092-0093 and 0106 of MA).
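The chain of limitations above ends with entropy coding an occupancy bit based on a predicted occupancy probability. As a sketch of why such probability modeling improves coding efficiency (the stated motivation to combine), the ideal cost of coding one occupancy bit approaches -log2 of the probability the model assigned to the actual outcome. The function below is illustrative only and is not drawn from either reference; a real codec would drive an arithmetic coder with the probability rather than compute a cost.

```python
import math

def code_length_bits(occupied, p_occupied):
    """Ideal entropy-coding cost of one occupancy bit, in bits."""
    p = p_occupied if occupied else 1.0 - p_occupied
    return -math.log2(p)

# A confident, correct occupancy prediction is cheap to code...
cheap = code_length_bits(True, 0.99)
# ...while a confident but wrong prediction is expensive.
costly = code_length_bits(False, 0.99)
```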
As per claim 3, Akhtar and MA disclose the method of claim 1, further comprising: pruning said processed features based on said occupancy information of said previously encoded voxels (paragraphs 0081-0082 of Akhtar), wherein said augmented feature is based on said pruned features (paragraph 0109 of MA).
As per claim 4, arguments analogous to those applied for the second limitation of claim 1 are applicable for claim 4.
As per claim 5, MA discloses that feature aggregation is performed on said upsampled features (paragraph 0105 of MA).
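The feature adjustment cited from MA (paragraphs 0047-0048) weights each hidden-layer feature by the probability of occurrence of its geometric coordinates: the greater the probability, the greater the weight. The sketch below illustrates that weighting only; the probability table is a hypothetical stand-in for MA's trainable entropy model, and the data structures are not drawn from the reference.

```python
# Illustrative sketch of probability-weighted feature adjustment
# (cf. MA, pars. 0047-0048): each voxel's feature vector is scaled by the
# occurrence probability of its geometric coordinates.

def adjust_features(features, probability):
    """Weight each voxel's feature vector by its occurrence probability."""
    return {
        coord: [probability[coord] * v for v in vec]
        for coord, vec in features.items()
    }

# Hypothetical features and occurrence probabilities for two voxels.
features = {(0, 0, 0): [1.0, 2.0], (1, 0, 0): [3.0, 1.0]}
probability = {(0, 0, 0): 0.9, (1, 0, 0): 0.1}

adjusted = adjust_features(features, probability)
# High-probability voxels keep most of their feature magnitude;
# low-probability voxels are down-weighted.
```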
As per claims 17-21, the claims are directed to a decoding method reciting limitations corresponding to the limitations of the encoding method of claims 1-5; therefore, arguments analogous to those applied for claims 1-5 are applicable to claims 17-21. In addition, Akhtar and MA teach a decoding method and apparatus corresponding to the cited encoding method (see for instance FIGs. 3-4 and paragraphs 0064-0071 of Akhtar and FIG. 2 of MA).
As per claims 22-26, arguments analogous to those applied for claims 1-5 are applicable for claims 22-26; in addition, Akhtar teaches an apparatus for encoding point cloud data, comprising one or more processors and at least one memory coupled to said one or more processors, wherein said one or more processors are configured to perform the claimed method of encoding (see paragraphs 0041 and 0072 of Akhtar).
As per claims 27-31, arguments analogous to those applied for claims 17-21 are applicable for claims 27-31; in addition, Akhtar teaches an apparatus for decoding point cloud data, comprising one or more processors and at least one memory coupled to said one or more processors, wherein said one or more processors are configured to perform the claimed method of decoding (see paragraphs 0041 and 0072 of Akhtar).
8. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. (US-20220327101-A1, US-20210329270-A1, US-20240267527-A1, US-20250039446-A1, US-20250211787-A1, US-20250232483-A1, US-20070262988-A1, US-20080238919-A1, US-20150113379-A1, WO-2024011426-A1)
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED JEBARI whose telephone number is (571)270-7945. The examiner can normally be reached Mon-Fri: 09:00am-06:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley can be reached at 571-272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED JEBARI/Primary Examiner, Art Unit 2482