DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to the remarks entered on September 9, 2025.
Claims 1-3, 5-8, 10-13, 15-18 & 20-21 are pending in the current application.
Claims 1-2, 6-8, 11-12 & 16-18 are amended.
Claims 4, 9, 14 & 19 are cancelled.
Claim 21 is newly added.
Response to Arguments
Applicant's remarks filed 03/03/2025, pg. 7, regarding the interpretation of claims 6-7 & 17 under 35 USC 112(f), or sixth paragraph, have been fully considered and are persuasive. The claims are no longer interpreted under 35 USC 112(f), or sixth paragraph.
Applicant's remarks filed 09/09/2025, pgs. 7-8, regarding the obviousness-type double patenting rejection of claims 1-3, 5-8, 10-13, 15-18, and 20 have been fully considered, but they are not persuasive. In response to the Applicant's argument that Chou in view of Iguchi does not disclose the noted features of claims 1, 6, 11, and 16, the Examiner directs attention to the response to the rejection of claims 1, 6, 11, and 16 under 35 USC 103 below.
Therefore, the obviousness-type double patenting rejection of claims 1-3, 5-8, 10-13, 15-18, and 20 is maintained.
Applicant's remarks filed 09/09/2025, pages 9-15, and 10/02/2025, regarding the rejection of claims 1, 6, 11 & 16 under 35 USC 103 have been fully considered, but they are not persuasive.
The Applicant first asserts that Iguchi does not teach or suggest the limitations of, “wherein the signaling information includes a geometry parameter set and an attribute parameter set, wherein the geometry parameter set includes information for inter-prediction of the geometry information, and wherein the attribute parameter set includes information for inter-prediction of the attribute information,” and secondly asserts that Chou does not disclose the limitations of, “encoding geometry information including positions of points of the point cloud data by applying inter-prediction or intra-prediction.”
The Examiner respectfully disagrees because the combination of Chou in view of Iguchi teaches the above limitations, as explained below. First, Chou discloses, “encoding geometry information including positions of points of the point cloud data by applying inter-prediction or intra-prediction.” In Paragraph [0066], Chou initially sets the stage by disclosing that, “the general encoding control decides whether to use intra-frame compression or inter-frame compression for attributes of occupied points in blocks of the current point cloud frame.” This indicates to the Examiner that inter-frame compression, read as inter-prediction, is used to compress point cloud data and the attributes of occupied points. Finally, Paragraphs [0065], [0102], [0107], and [0133] each contribute to the reading of encoding geometry information including positions of points of the point cloud. Paragraphs [0107] & [0133] disclose, “encoding of point cloud data,” as geometry information, and “including the position of the point in the encoded data indicates the point is occupied,” wherein these passages indicate that point cloud data is being encoded, and Paragraphs [0065] & [0102] disclose that a “point cloud represents one or more objects in 3D space as a set of points. A point in the point cloud is associated with a position in 3D space (typically, a position having x, y, and z coordinates).” Furthermore, Chou discloses applying inter-prediction to the geometry information, wherein Paragraphs [0061]-[0069] & Fig. 3B show inter-prediction 338 being applied to the geometry data at region-adaptive hierarchical transformer 340 and at MUX 390, and thus inter-prediction is applied (touching) to the geometry data 312, and separately to the attribute information 314. Therefore, Chou discloses, “encoding geometry information including positions of points of the point cloud data by applying inter-prediction to the geometry information.”
Lastly, Iguchi, in the same field of point cloud data compression as Chou, teaches, “wherein the signaling information includes a geometry parameter set and an attribute parameter set, wherein the geometry parameter set includes information for inter-prediction of the geometry information, and wherein the attribute parameter set includes information for inter-prediction of the attribute information.” Chou already discloses inter-prediction in encoding, but does not disclose wherein the signaling information includes a geometry parameter set and an attribute parameter set, and that the attribute parameter set includes information for inter-prediction of the attribute information. Therefore, the Examiner relies upon Iguchi to teach these noted claim limitations for incorporation into Chou. In Paragraph [0463], the Examiner reads the tile additional information regarding tile division as the broadly claimed “information for inter-prediction,” which is stored in both a parameter set for geometry information (GPS) and a parameter set for attribute information (APS); when the tile division method differs between geometry information and attribute information, different tile additional information is stored in each of a GPS and an APS.
Because inter-frame prediction, or inter-prediction, involves dividing frames into blocks or tiles and then searching a previously encoded frame for a block or tile similar to the one being encoded, the Examiner reads the tile additional information regarding tile division as the claimed, “information for inter-prediction.” Thus, Iguchi teaches, “wherein the signaling information includes a geometry parameter set and an attribute parameter set, wherein the geometry parameter set includes information for inter-prediction of the geometry information, and wherein the attribute parameter set includes information for inter-prediction of the attribute information,” and therefore the combination of Chou in view of Iguchi teaches or suggests claim 1. Furthermore, it is noted that one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
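For illustration only, and not drawn from Chou, Iguchi, or the claims at issue, the block-matching search that underlies the inter-prediction described above can be sketched as follows. All names, the search function, and the toy 4x4 frames are hypothetical; this is a minimal sketch of the general technique, not either reference's implementation:

```python
import numpy as np

def best_match(ref, block, top, left, search=1):
    """Exhaustive block matching: find the offset (dy, dx) in the
    reference frame whose co-located block best predicts `block`,
    using the sum of absolute differences (SAD) as the cost."""
    h, w = block.shape
    best = (0, 0, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + h, x:x + w]
            sad = np.abs(cand.astype(int) - block.astype(int)).sum()
            if sad < best[2]:
                best = (dy, dx, sad)
    return best

# Inter-predict one 2x2 block of a "current" frame from a reference
# frame; only the motion vector and the residual would be encoded.
ref = np.arange(16).reshape(4, 4)   # hypothetical reference frame
cur = np.roll(ref, 1, axis=1)       # current frame = reference shifted right
block = cur[0:2, 2:4]
dy, dx, sad = best_match(ref, block, 0, 2, search=1)
pred = ref[0 + dy:2 + dy, 2 + dx:4 + dx]
residual = block - pred             # all zeros here: a perfect match
```

Because the current frame in this toy setup is an exact shift of the reference, the search finds a zero-cost match and the residual vanishes, which is the limiting case of why inter-prediction compresses well.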
Therefore, the rejection of claim 1, and similarly claims 6, 11 & 16, under 35 USC 103 is maintained.
Applicant’s remarks filed 09/09/2025, page 15, with respect to the rejection of claims 2-3, 5, 7-8, 10, 12-13, 15, 17-18 & 20 under 35 USC 103 have been fully considered, but they are not persuasive.
Applicant relies on the patentability of the claims from which these claims depend to traverse the rejection without prejudice to any further basis for patentability of these claims based on the additional elements recited.
The Examiner cannot concur with the Applicant because the combination of Chou and Iguchi teaches independent claims 1, 6, 11 & 16 as outlined below. Thus, claims 2-3, 5, 7-8, 10, 12-13, 15, 17-18 & 20 are also rejected for similar reasons, as outlined below.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Patent US 12,149,579 B2
Claims 1-3, 5-8, 10-13, 15-18 & 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of US 12,149,579 B2 in view of Chou et al. (US 2017/0347122 A1) (hereinafter Chou), and further in view of Iguchi et al. (US 2023/0017612 A1, with priority benefit 63/001,770) (hereinafter Iguchi).
Instant – 18/016,771 vs. US 12,149,579 B2 (claim 1 limitations compared side by side):

Instant: 1. A method of encoding point cloud data, the method comprising:
US 12,149,579 B2: 1. (Currently Amended) A method of transmitting point cloud data, the method comprising:

Instant: encoding geometry information including positions of points of the point cloud data by applying inter-prediction to the geometry information; and
US 12,149,579 B2: encoding, by an encoder, the point cloud data including geometry information

Instant: encoding attribute information including attribute values of the points of the point cloud data by applying the inter-prediction to the attribute information;
US 12,149,579 B2: encoding, by an encoder, the point cloud data including […] attribute information

Instant: wherein a bitstream including the encoded geometry information and the encoded attribute information further includes signaling information,
US 12,149,579 B2: transmitting, by a transmitter, a bitstream including the encoded point cloud data and signaling information,

Instant: wherein the signaling information includes a geometry parameter set and an attribute parameter set,
US 12,149,579 B2: wherein the signaling information includes […], a geometry parameter set including geometry-related information, and an attribute parameter set including attribute-related information,

Instant: wherein the geometry parameter set includes information for inter-prediction of the geometry information, and
US 12,149,579 B2: wherein the geometry data unit header includes at least one of identification information for specifying the geometry parameter set related to the geometry data unit or slice information related to the geometry data unit,

Instant: wherein the attribute parameter set includes information for inter-prediction of the attribute information.
US 12,149,579 B2: wherein the attribute data unit includes an attribute data unit header and a portion of the attribute information […].
Although the claims are not identical, they are not patentably distinct from each other because claim 1 of the instant application differs from claim 1 of US 12,149,579 B2 in that the instant application recites encoding geometry information including positions of points of the point cloud data by applying inter-prediction, and encoding attribute values of the points of the point cloud data by applying the inter-prediction.
However, these limitations are known in the art as evidenced by Chou, wherein
Paragraphs [0061]-[0069], [0102], [0109]-[0110], [0119]-[0123] & [0133] and Fig. 9 show that the input buffer (310) receives point cloud data (305) from a source, that the encoder receives geometry data, which includes indicators of occupied points of the point cloud data, each having a position in 3D space with x, y, z coordinates, through intra/inter-frame compression for attributes of occupied points in blocks, and that the encoder compresses the geometry data through intra/inter-frame compression for attributes of occupied points in blocks using region-adaptive hierarchical transformation. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the instant invention to add the teachings of Chou as above, to provide a bitstream of encoded point cloud data that can readily be decoded with devices having diverse computational capabilities or quality requirements, as stated in Paragraph [0006] of Chou.
Furthermore, claim 1 of the instant application differs from claim 1 of US 12,149,579 B2 in that the instant application recites, “wherein the geometry parameter set includes information for inter-prediction of the geometry information, and wherein the attribute parameter set includes information for inter-prediction of the attribute information.”
However, these limitations are known in the art as evidenced by Iguchi, wherein Paragraphs [0351], [0412] & [0460]-[0464] and Figs. 45-46, supported on Pgs. 60-61 of 63/001,770, teach that metadata tile division information, read as information for inter-prediction of geometry/attribute information, is stored in a geometry parameter set (GPS) and an attribute parameter set (APS). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the instant invention to add the teachings of Iguchi as above, to address the massive amount of data of a point cloud, which necessitates compressing the amount of three-dimensional data by encoding for accumulation and transmission, by reducing the amount of data to be processed, as Iguchi discusses in Paragraphs [0004]-[0011].
Regarding claims 2-3 & 5, although the claims are not identical, the further limitations would have been obvious for the same reasons of obviousness as set forth in the rejections outlined below with respect to Chou & Iguchi.
Regarding claims 6-8 & 10, claims (6-8 & 10) are drawn to the device for transmitting point cloud data having limitations similar to the method of using the same as claimed in claims (1-3 & 5) treated in the above rejections. Therefore, device claims (6-8 & 10) correspond to method claims (1-3 & 5) and are rejected for the same reasons of obviousness as used above.
Regarding claims 11-13 & 15, claims (11-13 & 15) are drawn to a method of receiving point cloud data having limitations similar and reciprocal to the method of transmitting point cloud data as claimed in claims (1-3 & 5) treated in the above rejection. Therefore, method claims (11-13 & 15) correspond to method claims (1-3 & 5) and are rejected for the same reasons of obviousness as used above.
Regarding claims 16-18 & 20, claims (16-18 & 20) are drawn to the device for receiving point cloud data having limitations similar to the method of using the same as claimed in claims (11-13 & 15) treated in the above rejections. Therefore, device claims (16-18 & 20) correspond to method claims (11-13 & 15) and are rejected for the same reasons of obviousness as used above.
This is a nonstatutory double patenting rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-8, 10-13, 15-18 & 20 are rejected under 35 U.S.C. 103 as being unpatentable over Chou et al. (US 2017/0347122 A1) (hereinafter Chou) in view of Iguchi et al. (US 2023/0017612 A1, with priority benefit 63/001,770) (hereinafter Iguchi).
Regarding claim 1, Chou discloses a method of encoding point cloud data [Paragraph [0011] & [0062], method to encode and transmit points of the point cloud], the method comprising:
encoding geometry information including positions of points of the point cloud data by applying inter-prediction to the geometry information [Paragraphs [0061]-[0069], [0102], [0109]-[0110], [0119]-[0123] & [0133], the encoder receives geometry data, which includes indicators of occupied points of the point cloud data, each having a position in 3D space with x, y, z coordinates, through intra/inter-frame compression for attributes of occupied points in blocks, and applies inter-prediction 338 to the geometry data at region-adaptive hierarchical transformer 340 and at MUX 390]; and
encoding attribute information including attribute values of the points of the point cloud data by applying the inter-prediction to the attribute information [Paragraphs [0061]-[0069], [0102], [0109]-[0110], [0120] & [0133], the encoder receives and compresses geometry data through intra/inter-frame compression for attributes of occupied points in blocks using RAHT, and applies inter-prediction 338 to the attribute information at region-adaptive hierarchical transformer 340 and at MUX 390],
wherein a bitstream including the encoded geometry information and the encoded attribute information further includes signaling information [Paragraphs [0062]-[0071], [0078] & [0111]-[0113], Fig. 7, the decoder receives encoded data comprising geometry information, attribute information, and general control data/metadata/parameters as signaling information].
However, Chou does not explicitly disclose wherein the signaling information includes a geometry parameter set and an attribute parameter set, wherein the geometry parameter set includes information for inter-prediction of the geometry information, and wherein the attribute parameter set includes information for inter-prediction of the attribute information.
Iguchi teaches wherein the signaling information includes a geometry parameter set and an attribute parameter set, wherein the geometry parameter set includes information for inter-prediction of the geometry information, and wherein the attribute parameter set includes information for inter-prediction of the attribute information [Paragraphs [0351], [0412] & [0460]-[0464], Figs. 45-46, supported on Pgs. 60-61 of 63/001,770, metadata tile division information, read as information for inter-prediction of geometry/attribute information, is stored in a geometry parameter set (GPS) and an attribute parameter set (APS)].
It would have been obvious to the person of ordinary skill in the art before the effective filing date of the claimed invention to modify the method disclosed by Chou to add the teachings of Iguchi as above, to address the massive amount of data of a point cloud that necessitates compression of the amount of three-dimensional data by encoding for accumulation and transmission by reducing the amount of data to be processed (Iguchi, Paragraph [0004]-[0011]).
Regarding claim 2, Chou and Iguchi disclose the method of claim 1, and are analyzed as previously discussed with respect to the claim.
Furthermore, Chou discloses wherein the encoding of the geometry information [Paragraph [0061]-[0069], [0102], [0109]-[0110], [0120] & [0133], Encoder receives and compresses geometry data through intra/inter-frame compression] comprises:
deriving one or more reference regions of a current node of the geometry information from reconstructed geometry information stored in a buffer, and generating predicted geometry information based on the one or more derived reference regions [Paragraph [0072]-[0076], In inter-frame compression, reference frame buffer buffers reconstructed previously coded/decoded point cloud frames for use as reference frames using motion estimator to produce motion-compensated prediction as predicted geometry information, being from a region of attributes in reference frame(s) as one or more reference regions of a current node];
outputting a difference between the geometry information and the predicted geometry information as residual geometry information; and entropy-encoding the residual geometry information and outputting a geometry bitstream in the bitstream [Paragraph [0072]-[0078], In inter-frame compression determines whether to encode and transmit differences, as residual geometry information, between prediction values (predicted geometry information) and corresponding original attributes, using region-adaptive hierarchical transformer].
Regarding claim 3, Chou and Iguchi disclose the method of claim 2, and are analyzed as previously discussed with respect to the claim.
Furthermore, Chou discloses wherein the encoding of the geometry information [Paragraph [0061]-[0069], [0102], [0109]-[0110], [0120] & [0133], Encoder receives and compresses geometry data through intra/inter-frame compression] comprises:
reconstructing the geometry information by adding the predicted geometry information and the residual geometry information [Paragraph [0072]-[0078], Combining of reconstructed residual values as residual geometry information and prediction values to produce a reconstruction of the attributes of occupied points]; and
storing the reconstructed geometry information in the buffer for the derivation of the one or more reference regions from the reconstructed geometry information [Paragraph [0072]-[0078], reference frame buffer stores the reconstructed attributes for use in motion-compensated prediction of attributes of subsequent frames].
Regarding claim 5, Chou and Iguchi disclose the method of claim 3, and are analyzed as previously discussed with respect to the claim.
Furthermore, Chou discloses wherein the signaling information further includes buffer management information related to management of the buffer [Paragraph [0066] & [0071]-[0078], As part of the general control data, the encoder (302) can include information that indicates how to update the reference frame buffer (374), e.g., removing a reconstructed point cloud frame, adding a newly reconstructed point cloud frame].
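Purely as an illustrative aside, and not taken from Chou, the buffer-update behavior cited above (adding a newly reconstructed point cloud frame, removing an old one) can be sketched as follows. The class and all names are hypothetical; this shows only the general reference-frame buffer technique, with the oldest frame evicted when capacity is reached:

```python
from collections import deque

class ReferenceFrameBuffer:
    """Illustrative reference-frame buffer: holds reconstructed frames
    for motion-compensated (inter) prediction, evicting the oldest
    frame automatically once capacity is reached."""
    def __init__(self, capacity=2):
        # deque with maxlen drops the leftmost (oldest) entry on overflow
        self.frames = deque(maxlen=capacity)

    def add(self, frame_id, reconstruction):
        """Add a newly reconstructed frame for use as a future reference."""
        self.frames.append((frame_id, reconstruction))

    def get(self, frame_id):
        """Fetch a buffered reconstruction; raises KeyError if evicted."""
        for fid, rec in self.frames:
            if fid == frame_id:
                return rec
        raise KeyError(frame_id)

buf = ReferenceFrameBuffer(capacity=2)
buf.add(0, "recon-0")
buf.add(1, "recon-1")
buf.add(2, "recon-2")   # capacity 2: frame 0 is evicted here
```

Signaled buffer-management information, in this sketch, would simply drive which `add` and eviction operations the decoder mirrors so that encoder and decoder keep identical reference sets.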
Regarding claims (6-8 & 10), claims (6-8 & 10) are drawn to the device for encoding point cloud data having limitations similar to the method of using the same as claimed in claims (1-3 & 5) treated in the above rejections. Therefore, device claims (6-8 & 10) correspond to method claims (1-3 & 5) and are rejected for the same reasons of obviousness as used above.
Furthermore, Chou discloses a device, the device comprising a memory; and at least one processor connected to the memory [Paragraph [0040], computer system 100 includes CPU processor, connected to memory (120,125)].
Regarding claims (11-13 & 15), claims (11-13 & 15) are drawn to a method of decoding point cloud data having limitations similar and reciprocal to the method of encoding point cloud data as claimed in claims (1-3 & 5) treated in the above rejection. Therefore, method claims (11-13 & 15) correspond to method claims (1-3 & 5) and are rejected for the same reasons of obviousness as used above.
Furthermore, Chou discloses a method of decoding point cloud data [Paragraph [0011] & [0092], method to receive and decode points of the point cloud].
Regarding claims (16-18 & 20), claims (16-18 & 20) are drawn to the device for decoding point cloud data having limitations similar to the method of using the same as claimed in claims (11-13 & 15) treated in the above rejections. Therefore, device claims (16-18 & 20) correspond to method claims (11-13 & 15) and are rejected for the same reasons of obviousness as used above.
Furthermore, Chou discloses a device, the device comprising: a memory; and at least one processor connected to the memory [Paragraph [0040], computer system 100 includes CPU processor, connected to memory (120,125)].
Regarding claim 21, claim 21 is drawn to a method of transmitting data for point cloud data having limitations similar to the method of encoding point cloud data as claimed in claim 1 treated in the above rejection. Therefore, method claim 21 corresponds to method claim 1 and is rejected for the same reasons of obviousness as used above.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL CHANG whose telephone number is (571) 272-5707. The examiner can normally be reached M-Sa, 12 PM - 10 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL CHANG/Primary Examiner, Art Unit 2487