DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
2. The information disclosure statement (IDS) was submitted on 05/05/2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
3. Claims 1, 5-7, 9-10, 13-14 and 16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the printed publication of Danillo Graziosi et al. (hereinafter Graziosi), "An overview of ongoing point cloud compression standardization activities: video-based (V-PCC) and geometry-based (G-PCC)," SIP (2020), Cambridge University Press, 04 Apr. 2020.
Re Claim 1. (Currently Amended) Graziosi discloses a method (a method of point cloud compression, i.e., V-PCC/G-PCC encoding, Title or Sec.I, Pg.1) comprising:
encoding geometry data of point cloud data (encoding geometry data of the point cloud, Sec.III(A) and Fig.4, or Fig.8, or Sec.IV (A) for G-PCC); and
encoding attribute data of the point cloud data (encoding attribute data Sec.III(A) and Fig.4, or Fig.8, Sec.IV(B)(3)).
Re Claim 5. (Original) Graziosi discloses the method of claim 1, wherein the encoding of the point cloud data comprises:
encoding geometry of the point cloud data (per mapping at claim 1); and
encoding an attribute of the point cloud data (per mapping at claim 1),
wherein the encoding of the geometry comprises:
generating a predictive tree including a node containing the geometry and a child node for the node (generating the octree, Sec.V(B), per Figs.8 and 9, Pg.9-10, including a first node at level "0" and child nodes at levels 1-3 in Fig.11, Sec.IV(B)(1)-(2), etc.);
generating a predicted value for a current node based on the predictive tree (predicting the geometry G-PCC point cloud, by using the octree Sec.IV.(B)(1) – (2)); and
generating a residual between the geometry of the current node and the predicted value (generating a transform residual Fig.12, or the transform difference residual D(N), during the prediction of H(N), point cloud data, per Fig.15 or as updated during the lifting transform at Fig.16, at Sec.IV Pg.13).
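For illustration only, the predictive-tree geometry steps mapped above (tree generation, prediction from a parent node, and residual formation) may be sketched as follows. All names are hypothetical; this is not code from Graziosi, the G-PCC reference software, or the application under examination:

```python
# Hypothetical sketch: each node's position is predicted from its parent
# in the predictive tree, and only the residual is retained for coding.

def encode_predictive_tree(nodes, parent_of):
    """nodes: {id: (x, y, z)}; parent_of: {id: parent_id or None}."""
    residuals = {}
    for nid, pos in nodes.items():
        parent = parent_of[nid]
        # Root has no parent: predict (0, 0, 0); otherwise predict the
        # parent's position (a simple "delta" prediction mode).
        pred = nodes[parent] if parent is not None else (0, 0, 0)
        residuals[nid] = tuple(p - q for p, q in zip(pos, pred))
    return residuals

nodes = {0: (10, 10, 10), 1: (12, 11, 10), 2: (13, 13, 9)}
parent_of = {0: None, 1: 0, 2: 1}
print(encode_predictive_tree(nodes, parent_of))
# Node 1's residual is (2, 1, 0): its position minus its parent's position.
```

The decoder would invert the step by adding each residual back to the already-reconstructed parent position.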
Re Claim 6. (Original) Graziosi discloses the method of claim 5, wherein the encoding of the geometry further comprises:
generating and transforming a difference between the residual and a reference residual for the residual, wherein the reference residual is calculated from the predictive tree of a reference frame for a current frame for the current node (generating a transform residual for the current node, Fig.12, from the transform difference residual D(N), during the prediction of H(N), point cloud data, per Fig.15 or as updated during the lifting transform at Fig.16, at Sec.IV Pg.13).
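For illustration only, the second-order differencing recited in claim 6 (the current residual minus a reference residual taken from a reference frame) may be sketched as follows. The function name is hypothetical and the sketch is not taken from Graziosi or the application:

```python
# Hypothetical sketch: the current frame's geometry residual is further
# differenced against a reference residual from the predictive tree of a
# reference frame; this second-order difference is what gets transformed.

def second_order_residual(residual, reference_residual):
    return tuple(r - ref for r, ref in zip(residual, reference_residual))

print(second_order_residual((2, 1, 0), (1, 1, -1)))  # (1, 0, 1)
```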
Re Claim 7. (Original) Graziosi discloses the method of claim 1, wherein the encoding of the point cloud data comprises:
encoding geometry of the point cloud data (per mapping at claim 1); and
encoding an attribute of the point cloud data (per mapping at claim 1), wherein the encoding of the attribute comprises:
splitting, predicting, and updating the attribute (splitting, predicting and updating, at Fig.16, Sec.V(B), Pg.13-14); and
generating and transforming a difference between a residual related to the attribute according to a prediction mode and a residual generated by another mode (generating a transform residual for the current node, Fig.12, and transforming the difference residual D(N), during the prediction of H(N), point cloud data, per Fig.15 or as updated during the lifting transform at Fig.16, at Sec.IV Pg.13).
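For illustration only, the split/predict/update structure of a lifting step recited in claim 7 may be sketched on a one-dimensional attribute signal. This is a generic one-level lifting sketch under hypothetical names, not the G-PCC lifting transform of Fig.16 itself:

```python
# Hypothetical one-level lifting step: split samples into even/odd sets,
# predict each odd sample from its even neighbours, and update the even
# (low-pass) samples with part of the prediction residual.

def lifting_forward(attrs):
    even, odd = attrs[0::2], attrs[1::2]
    detail = []
    for i, o in enumerate(odd):
        left = even[i]
        # At the right boundary, reuse the left neighbour as the predictor.
        right = even[i + 1] if i + 1 < len(even) else even[i]
        detail.append(o - (left + right) / 2)  # prediction residual
    # Update step: fold half of each residual back into the even samples.
    approx = [e + d / 2 for e, d in zip(even, detail)]
    approx.extend(even[len(detail):])
    return approx, detail

print(lifting_forward([2, 3, 4, 5]))
```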
Re Claim 9. (Currently Amended) Graziosi discloses the method of claim 1,
wherein the geometry data and the attribute data are included in a bitstream, wherein the bitstream includes:
a geometry data unit comprising geometry tree type information (a geometry image generation unit Fig.4) comprising:
a first value indicating that geometry is encoded based on an occupancy tree (an octree based on occupancy data patches indicating the point location in 3D space, per Sec.IV (A) based on octree node occupancy, Sec.IV.(B) (1) Figs.8 to 9); and
a second value indicating that an attribute is encoded based on a predictive tree (another signaled value indicating predictive attribute coding, e.g., by the predicting transform, Sec.IV.(B)(3)(b), Figs.8 to 9);
a geometry data unit header comprising information indicating whether a difference between residuals of geometry is generated and transformed (transforming the residuals per Fig.12 and 15 Sec.IV (B) (3));
a data unit comprising an occupancy tree or a predictive tree, the occupancy tree or predictive tree having a value obtained by transforming the difference between the residuals of the geometry (occupancy tree based residual transform per Fig.12).
Re Claim 10. (Currently Amended) Graziosi discloses the method of claim 1,
wherein the geometry data and the attribute data are included in a bitstream, wherein the bitstream includes (per Fig.8 etc.):
an attribute data unit comprising attribute coding type information, the attribute coding type information comprising:
a first value indicating region adaptive hierarchical transform (RAHT) (indicating adaptive hierarchical transform RAHT, Sec.IV (B) (3) Fig.8);
a second value indicating a Level of Detail (LoD) with prediction transform (attribute coding in transform prediction relying on level of detail (LoD), Fig.13 and 14, Pg.12, Sec.IV (B) (3));
a third value indicating an LoD with lifting transform (the LoD with lifting transform Sec.IV (B) (3) and Fig.16 Sec.IV (B) (3) Pg.13), and
a fourth value indicating raw attribute data;
an attribute data unit header comprising information indicating whether a difference between residuals related to an attribute is generated and transformed (based on bitstream-signaled information directing to transform the residual, i.e., the difference between attribute data of the reference (the decoded sum of attributes) and the original attributes at distance level d, to generate the transformed residual attribute at Fig.12, Pg.11, Sec.IV(B)(3)); or
a data unit comprising attribute coefficient information, the attribute coefficient information comprising a value obtained by transforming the difference between the residuals related to the attribute (residual transform coefficient block in Fig.12 and secondary difference residual D(N) and D(N-1), in Fig.15 and 16 at Pgs.13-14).
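For illustration only, the four-valued attribute coding type signal recited in claim 10 may be sketched as an enumeration. The specific integer assignments and names below are hypothetical, not taken from the claims or from any G-PCC syntax table:

```python
# Hypothetical sketch of a four-valued attribute coding type signal.
from enum import IntEnum

class AttrCodingType(IntEnum):
    RAHT = 0            # region adaptive hierarchical transform
    LOD_PREDICTION = 1  # level of detail (LoD) with prediction transform
    LOD_LIFTING = 2     # LoD with lifting transform
    RAW = 3             # raw attribute data, coded without transform

def parse_attr_coding_type(value):
    """Map a signaled integer to its coding type; raises on unknown values."""
    return AttrCodingType(value)

print(parse_attr_coding_type(2).name)  # LOD_LIFTING
```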
11-12. (Canceled)
Re Claim 14. (Currently Amended) This claim recites the decoding method comprising the PCC coding limitations of the process performed in the prediction loop of the encoding method of claim 1; it is therefore rejected on the same evidence, applied mutatis mutandis.
15. (Canceled)
Re Claim 16. (New) This claim recites a method of acquiring a bitstream for point cloud data, wherein the bitstream is generated by encoding the geometry data and the attribute data of the point cloud data for transmission in a bitstream. The method performs each and every limitation of the encoding method of claim 1 by receiving point cloud frames, coding both the geometric and attribute image information (i.e., atlas info), and transmitting the coded video point cloud data (V-PCC) in a bitstream format (per Graziosi: Fig.4 and Sec.II to Sec.V), as further mapped in detail at the claims above; it is therefore rejected on the same evidentiary premises, mutatis mutandis.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application does not currently name joint inventors.
4. Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Graziosi in view of Sebastian Schwarz et al. (hereinafter Schwarz), "Emerging MPEG Standards for Point Cloud Compression," DOI: 10.1109/JETCAS.2018.2885981, © 2018 IEEE, and further in view of Khaled Mammou et al. (hereinafter Mammou) (US 2022/0292722).
Re Claim 2. (Original) Graziosi discloses the method of claim 1, further comprising:
Graziosi teaches inter estimating, from a reference frame for a current frame containing the point cloud data, global motion information (motion information is applied to V-PCC or G-PCC at Sec.II(B)(4) by the structure-from-motion (SfM) technique, ref.[22], or for encoding the GOP structure, Sec.III(B)(4), ref.[46] and Sec.IV(A)) or local motion information for the current frame (the V-PCC coding technique indicating motion estimation by inter-prediction coding, Sec.III(B)(6), in the local point cloud frame compression of the decoded geometry per Sec.II, as depicted in Fig.4, Sec.IV(A), e.g., a local motion estimation process for the video coder at patch level from the frame allocation generator [46]).
Schwarz teaches inter estimating, from a reference frame for a current frame containing the point cloud data, global motion information or local motion information for the current frame (selecting the number of nearest neighbors k at different values for lowest cost, for determining the optimal set of distances, dl, to reduce the search space, per Eq.(8), within a search strategy, e.g., a local binary search determining the distance d*L-1 that minimizes the Lagrangian cost function, Sec.IV(D), Sec.V(A), ref.[8]).
In the analogous art, Mammou teaches the details of V-PCC coding involving a motion information process, as in:
inter estimating, from a reference frame for a current frame containing the point cloud data, global motion information or local motion information for the current frame (at the inter compression encoder unit 250, receiving the point cloud frame at 252 and applying 3D motion estimation/compensation at unit 254, per Fig.2C, Par.[0014, 0076-0077, 0086], or at a decoder in Fig.2D, decompressing the point cloud information at 270 using motion images, differential at 272, Par.[0089-0090], according to coding the block-to-patch information using local indexes, Par.[0218-0219]).
One of ordinary skill in the art would have been motivated to develop motion-information-based point cloud coding in an inter-prediction process, thereby improving the dynamic response of the G-PCC coding suggested in Graziosi with the cost-effective search strategy (e.g., a local binary search determining the distance; emphasis added) found in Schwarz, and would have found it obvious to seek other art detailing the motion information applied to GOP frames, as identified in Mammou at Par.[0090, 0142] for 3D inter-frame prediction, Par.[0089-0090], in order to improve the motion estimation in determining the 2D center of motion of the pixel block search, Par.[0128-0129], thereby finding the combination predictable.
Re Claim 3. (Original) Graziosi discloses the method of claim 1, wherein the encoding of the point cloud data comprises:
encoding geometry of the point cloud data (encoding geometry data of the point cloud, Sec.III(A) and Fig.4, or Fig.8, or Sec.IV (A) for G-PCC); and
encoding an attribute of the point cloud data, wherein the encoding of the geometry (encoding attribute data Sec.III(A) and Fig.4, or Fig.8, Sec.IV(B)(3)) comprises:
representing the geometry as a tree including at least one node and at least one child node of the at least one node (representing the geometry in a tree format, e.g., as an octree describing the point cloud locations in a 3D space, Sec.IV(A), Fig.9 at Sec.IV(B)(1)-(3) and including at least one node with at least one child Fig.11, Sec.IV(B)(3));
searching for neighbor nodes for a current node based on the tree (searching the classification of the neighboring points' projection direction for motion refinement, Sec.III(B)(1), (4), (5), by considering the correlation with neighboring octets representing an octree node, Sec.IV(B)(1), (3), and by the "Predicting/Lifting Transform," where the attributes of each point are encoded by using the level of detail (LoD) by predicting the attributes of the P2 point from the reconstructed versions of the nearest neighbors P4, P5 or P0, at Sec.IV(B)(3)); and
calculating an occupancy index of the current node from an occupancy bit related to the tree (calculating an occupancy map (which the ordinarily skilled artisan would find obvious to consider indexed) as a binary image signaling whether a pixel corresponds to a valid 3D projected point, Sec.III(A)(1), Fig.3 or Fig.4, Sec.III(B)(2)-(3), where an index value of 1 indicates at least one valid pixel in the BxB block, and 0 indicates an empty area filled by a padding procedure, Sec.III(B)(3)).
However, under an alternative interpretation of the claimed "occupancy index" as referring to the node-leaf level, i.e., a node of the specific geometric pixel point value derived from a bit related to a tree node, Schwarz teaches:
calculating an occupancy index of the current node from an occupancy bit related to the tree (entropy encoding the set of occupied blocks with an octree whose leaves represent the occupied blocks, represented by one byte for each internal non-leaf node, where the bits indicate the occupied children of the node, i.e., the occupancy index, at "Entropy encoding of blocks," Sec.VI, point B).
Also, Mammou specifically teaches indexing patches of the occupancy map by:
calculating an occupancy index of the current node from an occupancy bit related to the tree (occupancy index or the value of each pixel is calculated during the patch packing process by determining if a 3D voxel projects onto that particular 2D location, thus for each block of occupancy map generating a list of candidate patches, i.e., of an indexed list for that block, at step 1280, and coding the list of candidate patches according to their full or empty blocks, steps 1281-1285 in Fig.12A, 12C, where for each block, the index of the patches in the list is coded and signaled in the bitstream, Par.[0179-0188, 0202, 0215] or coding by using the occupancy index Par.[0218-0221]).
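For illustration only, the occupancy-byte interpretation discussed above (one byte per internal octree node whose bits flag the occupied children, per the Schwarz mapping) may be sketched as follows. Names are hypothetical; this is not code from any cited reference:

```python
# Hypothetical sketch: pack the occupancy of an octree node's eight
# children into a single byte, one bit per octant (cf. the mapping to
# Schwarz, "Entropy encoding of blocks").

def occupancy_index(child_occupied):
    """child_occupied: sequence of 8 booleans, one per octant."""
    assert len(child_occupied) == 8
    index = 0
    for bit, occupied in enumerate(child_occupied):
        if occupied:
            index |= 1 << bit
    return index

# Children 0, 3 and 7 occupied -> bits 0, 3, 7 set -> 0b10001001 = 137.
print(occupancy_index([True, False, False, True,
                       False, False, False, True]))
```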
One of ordinary skill in the art would have been motivated to develop motion-information-based point cloud coding in an inter-prediction process, thereby improving the dynamic response of the G-PCC coding suggested in Graziosi and Schwarz, and would have found it obvious to seek other art detailing the application of motion information to GOP frames, as identified in Mammou at Par.[0090, 0142] for 3D inter-frame prediction, Par.[0089-0090], in order to improve the motion estimation in determining the 2D center of motion of the pixel block search, Par.[0128-0129], thereby finding the combination predictable.
Re Claim 4. (Original) Graziosi, Schwarz and Mammou disclose the method of claim 3,
Graziosi teaches wherein the encoding of the geometry (the geometry encoding at Sec.V(B) and ref.[54]) comprises:
generating and transforming a difference between the occupancy index and a reference occupancy index for the occupancy index wherein the reference occupancy index is calculated from an occupancy bit of the tree of a reference frame for a current frame for the current node (generating a transform residual Fig.12, or the transform difference residual D(N), during the prediction of H(N), point cloud data, per Fig.15 or as updated during the lifting transform at Fig.16, at Sec.IV Pg.13).
Schwarz also teaches this limitation (for a residual at Fig.5, Sec.IV(B), Pg.136-137, or Fig.10, Sec.VI, Pg.141, "Spatial Transform").
5. Claims 8 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Graziosi and Mammou in view of Wen Gao et al. (hereinafter Gao) (US 11,317,117).
Re Claim 8. (Currently Amended) Graziosi and Mammou disclose the method of claim 1,
wherein the geometry data and the attribute data are included in a bitstream, wherein the bitstream (per Fig.4, the geometry and attribute image generation is compressed into a bitstream V-PCC) includes:
Mammou teaches high-level parameter signaling, a sequence parameter set (PCCNAL-SPS signaling, Par.[0426]) comprising:
information indicating whether a difference between residuals of geometry is generated and transformed; and
information indicating whether a difference between residuals of an attribute is generated and transformed;
a tile parameter set comprising:
information indicating whether the difference between the residuals of the geometry is generated and transformed; and
information indicating whether the difference between the residuals of the attribute is generated and transformed;
a geometry parameter set comprising information indicating whether the difference between the residuals of the geometry is generated and transformed (one of ordinary skill would have found it obvious to associate the information signaled in the bitstream with the V-PCC, G-PCC, A-PCC and the Tile and Slice split video data, to implicitly support the addressed PCC coding in order to be processed as suggested in Graziosi and previously mapped at the claims, per Sec.III(A) at Pgs.3-4, or the syntax signaled for V-PCC at Sec.III(B)(6), or, in Mammou, the PCC bitstream information for point cloud coding at the PCCNAL header, Par.[0367-0402] at least); or
Gao teaches an attribute parameter set comprising information indicating whether the difference between the residuals of the attribute is generated and transformed (in point cloud coding, using high-level signaling, e.g., SPS, APS, i.e., signaling information representing an attribute parameter set, at Col.16, Lines 40-47).
In view of the known and applied point cloud coding (PCC) technology, including the geometry (G-PCC) and attribute (A-PCC) types of spatial and attribute data representation respectively, one skilled in the art would, before the effective filing date of the invention, have found it obvious to accept the high-level signaling (e.g., SPS, etc.) suggested in Graziosi and Mammou and to further combine their art with Gao's express teaching of the Attribute Parameter Set (APS), in order to perform the point cloud coding including the video attributes of the images (i.e., color as texture), thus considering the combination predictable as claimed.
Re Claim 13. (Currently Amended) Graziosi discloses this claim, which recites the decoding method comprising the PCC coding limitations performed in the prediction loop of the encoding method of claim 1, further comprising each and every limitation of the decoding method by inversely predicting the PCC data in said prediction loop, as established for the inter-prediction frame, by:
Graziosi teaches estimating, from a reference frame for a current frame containing the point cloud data, global motion information (motion information is applied to V-PCC or G-PCC at Sec.II(B)(4) by the structure-from-motion (SfM) technique, ref.[22], or for encoding the GOP structure, Sec.III(B)(4), ref.[46] and Sec.IV(A)) or local motion information for the current frame (the V-PCC coding technique indicating motion estimation by inter-prediction coding, Sec.III(B)(6), in the local point cloud frame compression of the decoded geometry per Sec.II, as depicted in Fig.4, Sec.IV(A), e.g., a local motion estimation process for the video coder at patch level from the frame allocation generator [46]);
performing arithmetic decoding on the point cloud data (arithmetic coding Sec.IV(B) Pg.9);
inversely transforming a difference between residuals of geometry of the point cloud data; and inversely transforming a difference between residuals of an attribute of the point cloud data, wherein the bitstream contains (inversely processing the encoded data at decoder site of Fig.15 e.g., as represented by L’(N+1)):
information indicating whether the difference between the residuals of the geometry is generated and transformed (per Graziosi: at claim 6); and
information indicating whether the difference between the residuals of the attribute is generated and transformed (per Graziosi, Schwarz, Mammou and Gao: at claim 8); hence it is rejected on the same evidence, applied mutatis mutandis.
In view of the known and applied point cloud coding (PCC) technology, including the geometry (G-PCC) and attribute (A-PCC) types of spatial and attribute data representation respectively, one skilled in the art would, before the effective filing date of the invention, have found it obvious to accept the high-level signaling (e.g., SPS, etc.) suggested in Graziosi and Mammou and to further combine their art with Gao's express teaching of the Attribute Parameter Set (APS), in order to perform the point cloud coding including the video attributes of the images (i.e., color as texture), thus considering the combination predictable as claimed.
Conclusion
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the attached PTO-892 form. Applicant is required under 37 CFR 1.111(c) to consider these references when responding to this action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DRAMOS KALAPODAS whose telephone number is (571)272-4622. The examiner can normally be reached on Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DRAMOS KALAPODAS/Primary Examiner, Art Unit 2487