Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 11/11/24 and 2/7/25 are being considered by the examiner.
Claim Objections
Claims 7, 16, and 19 recite "Adaptive-DCT"; however, the specification does not define what the acronym "DCT" stands for.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending U.S. Patent Application No. 18/771,658. Although the claims at issue are not identical, they are not patentably distinct from each other because the present claims have substantially the same scope as the claims of the copending application.
This is a provisional obviousness-type double patenting rejection because the conflicting claims have not in fact been patented.
Table 1 illustrates the conflicting claims.
Present Application 18/771,342    U.S. Patent Application No. 18/771,658
1                                 1, 2
2                                 5
Table 2 provides a comparative mapping of the limitations of claims 1 and 2 of the present application against the limitations of claims 1, 2, and 5 of U.S. Patent Application No. 18/771,658.
Present Application 18/771,342:

1. A method comprising:
determining, by a decoder and based on decoding geometry information of a point cloud frame associated with content, a reconstructed geometry of the point cloud frame;
determining attribute predictors, associated with the reconstructed geometry, based on projecting attributes of a reference point cloud frame onto the reconstructed geometry; and
decoding, based on the attribute predictors, attribute information of the reconstructed geometry.

2. The method of claim 1, wherein the decoding the attributes of the reconstructed geometry comprises:
decoding, from a bitstream, residual attributes indicating differences between the attributes of the reconstructed geometry and the attribute predictors; and
determining, based on the attribute predictors and the residual attributes, the attributes of the reconstructed geometry.

U.S. Patent Application No. 18/771,658:

1. A method comprising: determining, by a decoder and from a bitstream, geometry motion vectors; determining, from the bitstream, attribute motion vectors; determining a reconstructed geometry of a point cloud frame associated with content by decoding, based on the geometry motion vectors, a geometry associated with the point cloud frame; and decoding, based on the attribute motion vectors, attributes associated with the reconstructed geometry.

2. The method of claim 1, wherein the decoding the attributes associated with the reconstructed geometry comprises: determining, based on the attribute motion vectors, projected attributes; determining, based on the projected attributes, attribute predictors of the attributes associated with the reconstructed geometry; and decoding, based on the attribute predictors, the attributes associated with the reconstructed geometry.

5. The method of claim 1, wherein the decoding the attributes associated with the reconstructed geometry comprises: decoding residual attributes indicating differences between the attributes of the reconstructed geometry and attribute predictors associated with the reconstructed geometry; and determining, based on adding the attribute predictors and the residual attributes, the attributes of the reconstructed geometry.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 8 and 10-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Chou et al. (US 2017/0347122 A1).
As to Claim 1, Chou teaches A method comprising:
determining, by a decoder and based on decoding geometry information of a point cloud frame associated with content, a reconstructed geometry of the point cloud frame (Chou discloses “The encoder compresses (610) geometry for the point cloud data. The geometry includes indicators of which points of the point cloud data are occupied points” in [0111]; “Alternatively, the octtree decoder (420) decompresses the geometry data ( 412) in some other way (e.g., lossy decompression, in which case a lossy-reconstructed version of the geometry data (412) is passed to the inverse region-adaptive hierarchical transformer (445)” in [0089]);
determining attribute predictors, associated with the reconstructed geometry, based on projecting attributes of a reference point cloud frame onto the reconstructed geometry (Chou discloses “The motion compensator (470) applies MV(s) for a block to the reconstructed reference frame(s) from the reference frame buffer (474). For the block, the motion compensator (470) produces a motion-compensated prediction, which is a region of attributes in the reference frame(s) that are used to generate motion-compensated prediction values (476) for the block” in [0095], see also [0094]); and
decoding, based on the attribute predictors, attribute information of the reconstructed geometry (Chou discloses “For the reconstructed point cloud data (405), the decoder (401, 402) outputs geometry data (412) and reconstructed attributes (414) for occupied points to the output buffer (410)” in [0096], see also [0114].)
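Purely for illustration, the projection step discussed above — determining attribute predictors by projecting attributes of a reference point cloud frame onto the reconstructed geometry — can be approximated as a nearest-neighbor lookup. This is a hedged sketch under stated assumptions (nearest-neighbor projection, hypothetical point and color values); it is neither the applicant's claimed method nor Chou's actual implementation.

```python
def nearest_neighbor_predict(recon_geometry, ref_points, ref_attrs):
    """For each point of the reconstructed geometry, predict its attribute
    by looking up the nearest point of a reference point cloud frame
    (a simple stand-in for attribute projection)."""
    predictors = []
    for p in recon_geometry:
        # index of the reference point closest to p (squared Euclidean distance)
        best = min(range(len(ref_points)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(p, ref_points[i])))
        predictors.append(ref_attrs[best])
    return predictors

# Hypothetical reconstructed geometry and reference frame (positions + RGB attributes).
recon = [(0, 0, 0), (2, 2, 2)]
ref_pts = [(0, 0, 1), (2, 2, 3)]
ref_colors = [(255, 0, 0), (0, 255, 0)]

preds = nearest_neighbor_predict(recon, ref_pts, ref_colors)
# Each reconstructed point picks the attribute of its nearest reference point.
```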
As to Claim 2, Chou teaches The method of claim 1, wherein the decoding the attributes of the reconstructed geometry comprises:
decoding, from a bitstream, residual attributes indicating differences between the attributes of the reconstructed geometry and the attribute predictors; and determining, based on the attribute predictors and the residual attributes, the attributes of the reconstructed geometry (Chou discloses “When inter-frame compression is used for a block, the encoder (302) can determine whether or not to encode and transmit the differences (if any) between prediction values (376) and corresponding original attributes (314). The differences (if any) between the prediction values (376) and corresponding original attributes (314) provide values of the prediction residual” in [0076]; “The inverse region-adaptive hierarchical transformer (445) can produce blocks of reconstructed residual values (if interframe decompression is used) or reconstructed attributes (if intra-frame decompression is used)… When intra-frame compression is used (intra path at switch (439)), the decoder (402) uses the reconstructed attributes produced by the inverse region-adaptive hierarchical transformer (445)” in [0094], see also [0036, 0090-0091].)
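For illustration only, the reconstruction step mapped above — determining the attributes from the attribute predictors and the decoded residual attributes — amounts to component-wise addition. The values below are hypothetical, and this sketch is an assumption rather than the actual codec behavior.

```python
def reconstruct_attributes(predictors, residuals):
    """Add decoded residual attributes (differences) to the attribute
    predictors, component-wise, to recover the attributes."""
    return [tuple(p + r for p, r in zip(pred, res))
            for pred, res in zip(predictors, residuals)]

# Hypothetical RGB predictors and decoded residuals.
preds = [(100, 120, 130), (50, 60, 70)]
residuals = [(3, -2, 0), (-1, 4, 2)]
attrs = reconstruct_attributes(preds, residuals)
```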
As to Claim 3, Chou teaches The method of claim 2, wherein the decoding of the residual attributes comprises:
decoding, from the bitstream, transformed coefficients corresponding to the residual attributes; and applying an inverse intra transform to the decoded transformed coefficients (Chou discloses “As part of the encoding (520), the encoder applies a RAHT to attributes of occupied points among the multiple points, which produces transform coefficients. For example, for intra-frame compression, the encoder applies a RAHT to attributes of occupied points among the multiple points, which produces the transform coefficients. Alternatively, for inter-frame compression, the encoder can apply a RAHT to prediction residuals for attributes of occupied points among the multiple points, which produces the transform coefficients” in [0109]; “In particular, the decoder is configured to perform various operations, including applying an inverse transform such as an inverse RAHT to transform coefficients for attributes of occupied points among the multiple points.” in [0009].)
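As a sketch of what "applying an inverse intra transform to the decoded transformed coefficients" can look like, the single-level orthonormal Haar pair below (the Haar transform is one of the intra transforms recited in claims 16 and 19) inverts a forward transform exactly. This is an illustrative assumption, not Chou's RAHT.

```python
import math

def haar_pair(a, b):
    # single-level orthonormal Haar: low-pass (sum) and high-pass (difference)
    s = math.sqrt(2.0)
    return (a + b) / s, (a - b) / s

def inverse_haar_pair(low, high):
    # exact inverse of haar_pair: recovers the original sample pair
    s = math.sqrt(2.0)
    return (low + high) / s, (low - high) / s

lo, hi = haar_pair(10.0, 6.0)          # forward transform of two samples
a, b = inverse_haar_pair(lo, hi)       # inverse transform recovers them
```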
As to Claim 4, Chou teaches The method of claim 2, further comprising:
decoding, from the bitstream, transformed coefficients corresponding to the residual attributes; and dequantizing the decoded transformed coefficients (Chou discloses “As part of the encoding (520), the encoder applies a RAHT to attributes of occupied points among the multiple points, which produces transform coefficients. For example, for intra-frame compression, the encoder applies a RAHT to attributes of occupied points among the multiple points, which produces the transform coefficients. Alternatively, for inter-frame compression, the encoder can apply a RAHT to prediction residuals for attributes of occupied points among the multiple points, which produces the transform coefficients” in [0109]; inverse quantizer 455 in [0090-0091].)
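The dequantization step mapped above can be sketched as uniform inverse scaling of the decoded coefficients by the quantization step size. The step size and coefficient values below are hypothetical; this is a sketch, not the quantizer Chou describes.

```python
def dequantize(quantized_coeffs, step):
    """Uniform inverse quantization: scale quantized transform
    coefficients back up by the quantization step size."""
    return [q * step for q in quantized_coeffs]

coeffs = dequantize([4, -2, 0, 1], step=0.5)
```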
As to Claim 8, Chou teaches The method of claim 1, wherein an already-coded reference point cloud frame, associated with the reference point cloud frame, is used to decode the geometry information of the point cloud frame (Chou discloses “The motion compensator (370) applies MV(s) for a block to the reconstructed reference frame(s) from the reference frame buffer (374). For the block, the motion compensator (370) produces a motion-compensated prediction, which is a region of attributes in the reference frame(s) that are used to generate motion-compensated prediction values (376) for the block” in [0074]; “The decoder (402) of FIG. 4b determines whether a given point cloud frame needs to be stored for use as a reference frame for inter-frame decompression of subsequent frames. The reference frame buffer (474) buffers one or more reconstructed previously decoded point cloud frames for use as reference frames” in [0093].)
Claim 10 recites limitations similar to those of claim 1, but in encoding form (the inverse of decoding). Therefore, the same rationale used for claim 1 applies.
As to Claim 11, Chou teaches The method of claim 10, wherein the encoding the attribute information comprises:
determining, based on differences between the attributes of the reconstructed geometry and the attribute predictors, residual attributes; and encoding, into a bitstream associated with the point cloud frame, the residual attributes (Chou discloses “When inter-frame compression is used for a block, the encoder (302) can determine whether or not to encode and transmit the differences (if any) between prediction values (376) and corresponding original attributes (314). The differences (if any) between the prediction values (376) and corresponding original attributes (314) provide values of the prediction residual” in [0076]; “In the input buffer (492), the encoded data (495) includes encoded data for geometry data (412) as well as encoded data for attributes (414) of occupied points… For example, the attribute(s) for an occupied point can include…(7) one or more sample values each defining, at least in part, a residual associated with the occupied point” in [0083].)
As to Claim 12, Chou teaches The method of claim 11, wherein the encoding the residual attributes comprises:
determining, based on applying an intra transform to the residual attributes, transformed coefficients; and entropy encoding, in the bitstream, the transformed coefficients corresponding to the residual attributes (Chou discloses “The RAHT is coupled with a feedforward approach to entropy coding the quantized transform coefficients” in [0105]; “As part of the encoding (520), the encoder applies a RAHT to attributes of occupied points among the multiple points, which produces transform coefficients. For example, for intra-frame compression, the encoder applies a RAHT to attributes of occupied points among the multiple points, which produces the transform coefficients. Alternatively, for inter-frame compression, the encoder can apply a RAHT to prediction residuals for attributes of occupied points among the multiple points, which produces the transform coefficients” in [0109].)
As to Claim 13, Chou teaches The method of claim 12, further comprising quantizing, before the entropy encoding, the transformed coefficients (Chou, Fig 3A.)
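The encoder-side ordering addressed in claims 12 and 13 — transform coefficients are quantized before entropy encoding — can be sketched as follows. The order-0 Exp-Golomb code here is only a stand-in entropy stage under stated assumptions; it is not the entropy coder Chou discloses, and the step size and values are hypothetical.

```python
def quantize(coeffs, step):
    # uniform scalar quantization applied to transform coefficients
    # (performed before the entropy coding stage)
    return [round(c / step) for c in coeffs]

def exp_golomb(n):
    # order-0 Exp-Golomb codeword for a non-negative integer,
    # used here only as a simple stand-in entropy code
    b = bin(n + 1)[2:]
    return "0" * (len(b) - 1) + b

q = quantize([2.6, 0.2, 1.1], step=1.0)   # quantize first...
bits = "".join(exp_golomb(v) for v in q)  # ...then entropy-encode
```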
As to Claim 14, Chou teaches The method of claim 10, wherein the determining the attribute predictors is further based on mapping attributes of the geometry to the reconstructed geometry (Chou discloses “The motion compensator (370) applies MV(s) for a block to the reconstructed reference frame(s) from the reference frame buffer (374). For the block, the motion compensator (370) produces a motion-compensated prediction, which is a region of attributes in the reference frame(s) that are used to generate motion-compensated prediction values (376) for the block” in [0074], see also [0095].)
As to Claim 15, Chou teaches The method of claim 10, wherein the attributes of the reconstructed geometry comprise colors (Chou discloses “For example, the attribute(s) for an occupied point can include: (1) one or more sample values each defining, at least in part, a color associated with the occupied point (e.g., YUV sample values, RGB sample values, or sample values in some other color space)” in [0062].)
As to Claim 16, Chou teaches The method of claim 12, wherein the intra transform comprises at least one of: an Adaptive-DCT; a RAHT transform; or a Haar transform (Chou, [0105].)
Claim 17 is rejected based upon similar rationale as Claims 1 & 2.
As to Claim 18, Chou teaches The method of claim 17, wherein the receiving the residual attributes comprises:
receiving transformed coefficients corresponding to the residual attributes; and applying an inverse intra transform to the transformed coefficients (Chou discloses “The inverse region-adaptive hierarchical transformer (445) can produce blocks of reconstructed residual values (if interframe decompression is used) or reconstructed attributes (if intra-frame decompression is used). When inter-frame decompression is used (inter path at switch (439)), reconstructed residual values, if any, are combined with the prediction values (476) to produce a reconstruction of the attributes of occupied points for the current point cloud frame… When intra-frame compression is used (intra path at switch (439)), the decoder (402) uses the reconstructed attributes produced by the inverse region-adaptive hierarchical transformer (445)” in [0094]; see also [0091].)
Claim 19 is rejected based upon similar rationale as Claim 16.
Claim 20 is rejected based upon similar rationale as Claim 8.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. (US 2017/0347122 A1) in view of Hur et al. (US 2026/0057558 A1).
As to Claim 5, Chou teaches The method of claim 1, wherein the decoding the attributes of the reconstructed geometry comprises:
determining, by applying an intra transform to the attribute predictors, transformed attribute predictors; determining transformed residual attributes indicating differences between transformed attributes of the reconstructed geometry and the transformed attribute predictors; determining, based on the transformed attribute predictors and the transformed residual attributes, the transformed attributes of the reconstructed geometry; and applying an inverse intra transform to the transformed attributes of the reconstructed geometry (Chou discloses “for a prediction value that does not have a corresponding original attribute, the encoder can estimate (e.g., by interpolation or extrapolation using original attributes) the missing attribute, and calculate the prediction residual as the difference between the prediction value and estimated attribute” in [0076]; “The inverse region-adaptive hierarchical transformer (345) performs an inverse RAHT, inverting whatever RAHT was applied by the region-adaptive hierarchical transformer (340), and thereby producing blocks of reconstructed residual values (if inter-frame compression was used) or reconstructed attributes (if intra-frame compression was used). When inter-frame compression has been used (inter path at switch (339)), reconstructed residual values, if any, are combined with the prediction values (376) to produce a reconstruction (348) of the attributes of occupied points for the current point cloud frame.” in [0077]. Here, Chou doesn’t specifically disclose interpolation-based prediction. Hur further discloses “The attribute decoding according to the embodiments includes region adaptive hierarchical transform (RAHT) decoding, interpolation-based hierarchical nearest-neighbor prediction (prediction transform) decoding, and interpolation-based hierarchical nearest-neighbor prediction with an update/lifting step (lifting transform) decoding” in [0129].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Chou with the teaching of Hur so as to use an intra-frame encoder to encode the geometry and attributes of the intra-frame point cloud data to generate a bitstream.
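The transform-domain prediction recited in claim 5 — combining transformed attribute predictors with transformed residuals and then applying the inverse intra transform — can be sketched with a single-level orthonormal Haar pair. The values are hypothetical and the Haar choice is an assumption (not Chou's RAHT); because the transform is linear, combining predictor and residual in the transform domain is equivalent to combining their inverse transforms in the sample domain.

```python
import math

def haar(a, b):
    # single-level orthonormal Haar forward transform of a sample pair
    s = math.sqrt(2.0)
    return (a + b) / s, (a - b) / s

def inverse_haar(low, high):
    # exact inverse of haar
    s = math.sqrt(2.0)
    return (low + high) / s, (low - high) / s

tp = haar(8.0, 4.0)                    # transformed attribute predictors
tr = (1.0, -0.5)                       # transformed residuals (hypothetical)
ta = tuple(p + r for p, r in zip(tp, tr))  # combine in the transform domain
attrs = inverse_haar(*ta)              # inverse intra transform yields attributes
```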
As to Claim 6, Chou in view of Hur teaches The method of claim 5, wherein the determining the transformed residual attributes comprises:
decoding transformed coefficients corresponding to the transformed residual attributes; and dequantizing the transformed coefficients (Chou discloses “As part of the encoding (520), the encoder applies a RAHT to attributes of occupied points among the multiple points, which produces transform coefficients. For example, for intra-frame compression, the encoder applies a RAHT to attributes of occupied points among the multiple points, which produces the transform coefficients. Alternatively, for inter-frame compression, the encoder can apply a RAHT to prediction residuals for attributes of occupied points among the multiple points, which produces the transform coefficients” in [0109]; “In particular, the decoder is configured to perform various operations, including applying an inverse transform such as an inverse RAHT to transform coefficients for attributes of occupied points among the multiple points.” in [0009]; “For reconstruction, the inverse quantizer (355) performs inverse quantization on the quantized transform coefficients, inverting whatever quantization was applied by the quantizer (350). The inverse region-adaptive hierarchical transformer (345) performs an inverse RAHT, inverting whatever RAHT was applied by the region-adaptive hierarchical transformer (340), and thereby producing blocks of reconstructed residual values (if inter-frame compression was used) or reconstructed attributes (if intra-frame compression was used)” in [0077]. Hur, Fig 3 & 9.)
As to Claim 7, Chou in view of Hur teaches The method of claim 5, wherein the intra transform comprises at least one of: an Adaptive-DCT; a RAHT transform; or a Haar transform (Chou, [0105]. Hur, [0064, 0212].)
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. (US 2017/0347122 A1) in view of Hwang et al. (US 2023/0206391 A1).
As to Claim 9, Chou teaches The method of claim 1, wherein the determining the attribute predictors comprises smoothing the projected attributes of the reference point cloud frame (Chou discloses “The reconstructed attributes can be further filtered” in [0094]. Here, smoothing process is one of filtering process. For example, Hwang discloses “In a smoothing process, a geometry smoothing process, which is a stage of correcting a discontinuous part occurring at a boundary of each patch, uses a 3D filter to change the position of a boundary point to be similar to those of neighboring points, and an attribute smoothing stage may change an attribute value such as color information of a boundary point with reference to values of neighboring points” in [0043].)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the invention of Chou with the teaching of Hwang so as to change an attribute value of a boundary point with reference to values of neighboring points (Hwang, [0043]).
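The attribute smoothing discussed for claim 9 — changing an attribute value of a point with reference to the values of neighboring points — can be sketched as a neighborhood average. The radius and values below are hypothetical assumptions; this is an illustrative sketch, not Hwang's actual 3D filter.

```python
def smooth_attributes(points, attrs, radius=1.5):
    """Smooth each point's attribute by averaging the attributes of all
    points within `radius` of it (including the point itself)."""
    out = []
    for p in points:
        neighbors = [attrs[j] for j, q in enumerate(points)
                     if sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2]
        out.append(sum(neighbors) / len(neighbors))
    return out

# Two nearby points get averaged; the isolated point is unchanged.
pts = [(0, 0, 0), (1, 0, 0), (5, 5, 5)]
vals = [10.0, 20.0, 30.0]
smoothed = smooth_attributes(pts, vals)
```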
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WEIMING HE whose telephone number is (571)270-1221. The examiner can normally be reached Monday-Friday, 8:30 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached on 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WEIMING HE/
Primary Examiner, Art Unit 2611