DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 (Korean Application KR 10-2020-0096099 filed July 31st, 2020).
Response to Arguments
Applicant amended claims 1, 9, 13, 15, 22, and 25 beyond formalities and 112 Rejections.
Applicant canceled claim 2.
The pending claims are 1 and 3 – 32 [Page 10 lines 1 – 11].
Applicant amended the claims to address Examiner’s 112(b) Rejections [Page 10 lines 12 – 21]. The Examiner reconsiders the 112(b) Rejections in view of the amended claims.
Applicant amended the claims to address Examiner’s 112(d) Rejections [Page 10 line 22 – Page 11 line 2]. The Examiner reconsiders the 112(d) Rejections in view of the amended claims.
Applicant's arguments filed January 14th, 2026 have been fully considered but they are not persuasive.
First, the Applicant recites the references against the claims [Page 11 lines 3 – 6].
Second, the Applicant recites features of amended independent claim 1, gives Specification support for amended independent claim 1, and alleges the references do not render obvious the amended features [Page 11 lines 7 – 25].
Third, the Applicant recites selected portions of Iguchi (Paragraphs 131 and 308) and broadly alleges Iguchi does not render obvious features of the amended claims [Page 11 line 26 – Page 12 line 19]. However, Iguchi in at least Paragraphs 516 – 523 (also cited against claim 4, which describes a possible “coding order” claimed) renders obvious features of the amended claims, as do Paragraphs 308 – 309 (coding order) and 448 (coding order also used to process the attribute information based on the predictive tree). The Examiner notes Iguchi’s use of “coding order” in Paragraph 309 (only Paragraph 308 was argued) is consistent with the Applicant’s use in at least claim 4 and Specification Paragraph 248.
Fourth, the Applicant contends Yea Paragraph 112 does not render obvious features of the amended independent claim 1 [Page 12 line 20 – Page 13 line 6]. The Examiner notes Yea in Paragraph 76 renders obvious metadata in a coding / decoding order in which it is combined with attribute data. Additionally, Auwera Paragraphs 48 and 60 render obvious attribute data processing in a coding order.
Fifth, the Applicant contends Sugio broadly does not render obvious features of the amended independent claim 1 [Page 13 lines 7 – 15].
Sixth, the Applicant contends the combination of references does not render obvious amended independent claim 1, and similarly argues for amended independent claims 9, 15, and 22 as well as the dependent claims [Page 13 lines 16 – 21].
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references (Sugio broadly argued and Auwera is not argued).
While the Applicant’s points may be understood, the Examiner respectfully disagrees; thus the Rejection is maintained.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on August 9th, 2023 was filed before the mailing date of the first action on the merits (mailed December 19th, 2024). The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the Examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1 and 3 – 32 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claim 1, the claim requires “reconstructing attribute values in the coding order of a block for the tree structure” but later recites “information for representing whether or not an order in which the attribute data is coded is the coding order” (last limitation). The information thus contradicts the required use of the coding order for the attribute values, because the reconstruction is claimed as required rather than as a result of the information indicating “whether or not” the coding order is used; thus the claim has indefinite metes and bounds.
Regarding claims 9, 15, and 22, see independent claim 1 for similar reasoning; these claims are thus similarly Rejected.
Regarding claims 3 – 8, 10 – 14, 16 – 21, and 23 – 32, the dependent claims do not cure the deficiencies of their respective independent claims and thus are similarly Rejected.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3 – 6, 9 – 12, 15 – 19, 22 – 26, and 29 – 32 are rejected under 35 U.S.C. 103 as being unpatentable over Iguchi, et al. (WO2021/095879 A1 referred to as “Iguchi” throughout in which citations will come from the WO document in lieu of enabling US Provisional Application 62/934,822) [First cited in the Office Action mailed December 19th, 2024], and further in view of Sugio, et al. (CA 3-103-196 A1 referred to as “Sugio” throughout) [First cited in the Office Action mailed December 19th, 2024], Van der Auwera, et al. (WO2021/207502 A1 referred to as “Auwera” throughout in which citations will come from the WO / WIPO documents instead of all enabling US Provisional documents) [First cited in the Office Action mailed June 30th, 2025], and Yea, et al. (US PG PUB 2021/0329270 A1 referred to as “Yea” throughout in which citations will come from the US PG PUB in lieu of enabling US Provisional Application 63/011,913).
Regarding claim 1, see claim 9 which is the apparatus performing the steps of the claimed method.
Regarding claim 4, see claim 10 which is the apparatus performing the steps of the claimed method.
Regarding claim 5, see claim 11 which is the apparatus performing the steps of the claimed method.
Regarding claim 6, see claim 12 which is the apparatus performing the steps of the claimed method.
Regarding claim 15, see claim 22 which is the apparatus performing the steps of the claimed method.
Regarding claim 16, see claim 23 which is the apparatus performing the steps of the claimed method.
Regarding claim 17, see claim 24 which is the apparatus performing the steps of the claimed method.
Regarding claim 18, see claim 25 which is the apparatus performing the steps of the claimed method.
Regarding claim 19, see claim 26 which is the apparatus performing the steps of the claimed method.
Regarding claim 29, see claim 6 which recites the same / similar limitation and thus is similarly Rejected.
Regarding claim 30, see claim 12 which recites the same / similar limitation and thus is similarly Rejected.
Regarding claim 31, see claim 19 which recites the same / similar limitation and thus is similarly Rejected.
Regarding claim 32, see claim 26 which recites the same / similar limitation and thus is similarly Rejected.
Regarding claim 3, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio combining features in differential / residual based encoding / decoding and further with Auwera’s syntax teachings and information signaled including azimuth / angle information and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
configuring a position predictive tree based on the geometry data [Iguchi Figures 46 – 48, 85, 92 – 100, and 109 – 110 (tree generation processes) as well as Paragraphs 88 (tree generation based on geometry data), 151 – 153 and 178 (rendering obvious position / geometry data as obvious variants to one of ordinary skill in the art – see also Paragraph 65), 485, 525 – 528 (prediction tree based on scanning data / geometry of the point cloud), 573 – 578 (prediction tree based on attributes / direction and position), and 598 – 603 (tree generation / configuration with position / geometry)]; and
encoding a position predictive mode and a position residual based on the position predictive tree [Iguchi Figures 46 – 48, 85, 92 – 100, and 109 – 110 (tree generation processes) as well as Paragraphs 88 (tree generation based on geometry data), 151 – 153 and 178 (rendering obvious position / geometry data as obvious variants to one of ordinary skill in the art – see also Paragraph 65), 485, 525 – 528 (prediction tree based on scanning data / geometry of the point cloud), 573 – 578 (prediction tree based on attributes / direction and position of data), and 598 – 603 (tree generation with position / geometry)],
wherein the encoding of the attribute data comprises:
sorting the point cloud data [Iguchi Figure 16, 54 (see at least reference character 6601), 88 – 92 (see at least reference character 10001), and 100 – 103 as well as Paragraphs 128 – 132 (sorting 3D points by codes / position / geometry of the points including using a Morton code order at least rendering the selection of methods obvious to one of ordinary skill in the art), 308 – 312 (sorts on attribute and depth / geometry data – see at least Paragraph 312 for Morton order of the position of the points rendering obvious the geometry data claimed), 509 – 520 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data – see in view of Paragraphs 522 – 530 (reordering is an obvious variant of the claimed sorting to one of ordinary skill in the art))];
configuring an attribute predictive tree based on the sorted point cloud data [Iguchi Figures 54, 78 – 81, 88 – 92 and 100 – 103 as well as Paragraphs 301 – 306 (adding attribute information to the predictive tree), 438 – 443 (adding information to the tree for coding / decoded from the tree rendering obvious information was encoded with the attribute prediction mode and attribute residual such as in Paragraphs 448 – 455 (see at least “a_pred_mode” and “a_residual_value”)), 525 – 528 (coding tree information with position / attribute information), 573 – 581 (attributes and geometry captured in the predictive tree information encoded see also Paragraphs 588 and 600 – 602)]; and
encoding an attribute predictive mode and an attribute residual based on the attribute predictive tree [Iguchi Figures 54, 78 – 81, 88 – 92 and 100 – 103 as well as Paragraphs 301 – 306 (adding attribute information to the predictive tree), 438 – 443 (adding information to the tree for coding / decoded from the tree rendering obvious information was encoded with the attribute prediction mode and attribute residual such as in Paragraphs 448 – 455 (see at least “a_pred_mode” and “a_residual_value”)), 525 – 528 (coding tree information with position / attribute information), 573 – 581 (attributes and geometry captured in the predictive tree information encoded see also Paragraphs 588 and 600 – 602)].
See claim 1 for the motivation to combine Iguchi, Sugio, Auwera, and Yea.
Regarding claim 9, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio combining features in differential / residual based encoding / decoding and further with Auwera’s syntax teachings and information signaled including azimuth / angle information and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
a memory [Iguchi Figures 1, 5, 46 and 125 (see at least reference characters 3149 and 4615 – 4616) as well as Paragraphs 31 – 33 and 291 – 295 (memory and processor implementations)]; and
at least one processor connected to the memory [Iguchi Figures 1, 5, 46 and 125 (see at least reference characters 3149 and 4615 – 4616) as well as Paragraphs 31 – 33 and 291 – 295 (memory and processor implementations)], the at least one processor is configured to:
encode geometry data of point cloud data based on a predictive tree [Iguchi Figures 46 – 48, 85, 92 – 100, and 109 – 110 (tree generation processes) as well as Paragraphs 88 (tree generation based on geometry data), 151 – 153 and 178 (rendering obvious position / geometry data as obvious variants to one of ordinary skill in the art – see also Paragraph 65), 485, 525 – 528 (prediction tree based on scanning data / geometry of the point cloud), 573 – 578 (prediction tree based on attributes / direction and position of data), and 598 – 603 (tree generation with position / geometry)];
encode attribute data of the point cloud data based on a tree structure [Iguchi Figures 54, 78 – 79, 88 – 92 and 100 – 103 as well as Paragraphs 301 – 312 (adding attribute information to the predictive tree based on tree / coding order and structure of the tree), 438 – 443 (adding information to the tree for coding / decoded from the tree rendering obvious information was encoded with the predicted tree such as in Paragraphs 448 and 453 (trees for geometry and attribute data)), 525 – 528 (coding tree information with position / attribute information), 573 – 581 (attributes and geometry captured in the predictive tree information encoded see also Paragraphs 588 and 600 – 602)];
wherein, to encode the geometry data, the at least one processor is further configured to reconstruct the geometry data of the point cloud data based on a coding order [Iguchi Figure 16, 54, 88 – 92, and 100 – 103 as well as Paragraphs 308 – 312 (sorts on attribute and depth / geometry data), 453 – 462 (reconstructing the tree / geometry data in combination with Paragraphs 485 – 492 and Yea Paragraphs 103 – 109) and 509 – 528 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data and the transform performed in at least Paragraph 516 which uses transforms to combine with the transform teachings based on coding order taught by Yea); Yea Figures 9 – 10 (see at least the predictive / lifting transforms used based on the order given of the points) as well as Paragraphs 103 – 109 (octree / tree prediction with order based on LOD information for the traversal order with reconstruction of the geometry / tree information in Paragraphs 103 – 107) and 112 – 123 (predicting transform based on coding order / LOD order render obvious variants of the claimed “coding order” where the order / distances form the coding order for the predicting / lifting transform)],
wherein, to encode the attribute data, the at least one processor is further configured to reconstruct attribute values in the coding order of a block for the tree structure [Iguchi Figure 16, 54, 88 – 92, and 100 – 103 as well as Paragraphs 308 – 312 (sorts on attribute and depth / geometry data based on the tree structure), 448 – 452 (attribute data part of the tree and thus in coding order in combination with Paragraphs 308 – 312), and 509 – 520 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data and the transform performed in at least Paragraph 516 which uses transforms to combine with the transform teachings based on coding order taught by Yea); Yea Figures 9 – 10 (see at least the predictive / lifting transforms used based on the order given of the points) as well as Paragraphs 76 (metadata included in coding order), 107 – 109 (octree / tree prediction with order based on LOD information for the traversal order) and 112 – 123 (predicting transform based on coding order / LOD order render obvious variants of the claimed “coding order” where the order / distances form the coding order for the predicting / lifting transform)],
wherein the encoded geometry data and the encoded attribute data are included in a bitstream [Iguchi Figures 40 – 43, 53 – 55, and 84 – 86 as well as Paragraphs 258 – 263 and 267 – 270 (type of predictive tree information signaled), 303 – 308 (e.g. “pred_mode”) and 485 – 492 (encoding geometry / attribute data to a bitstream); Yea Figures 4 – 5 as well as Paragraphs 51 and 58 – 61 (bitstream generated of compressed data) and 103 – 109 (compressing / encoding bitstream with geometry / attribute data)], and
wherein the bitstream includes information representing the geometry data is coded using the predictive tree [Iguchi Figures 40 – 43, 53 – 55, and 84 – 86 as well as Paragraphs 258 – 263 and 267 – 270 (type of predictive tree information signaled), 303 – 308 (e.g. “pred_mode”) and 485 – 492 (encoding geometry / attribute data to a bitstream)], information for representing a prediction method related to the attribute data [Iguchi Figures 84 and 103 – 108 as well as Paragraphs 308 – 312, 448, 461 – 464 (coding attribute information with prediction methods / techniques) and 471 – 480 and 585 – 596 (coding mode an obvious variant of the claimed “prediction method” for the attribute information (see at least Paragraph 590) which is further combinable with at least Auwera); Auwera Figures 7 – 8 (subfigures included) as well as Paragraphs 73 – 83 (encoding attribute information of objects / lasers / scans) and 125 – 128 (prediction information and associated syntax to code attribute / angle information)], and information for representing whether or not an order in which the attribute data is coded is the coding order [Iguchi Figure 16, 54 (see at least reference character 6601), 80 – 82 (syntax related to attribute and geometry data using prediction / coding information), 88 – 92 (see at least reference character 10001), and 100 – 103 as well as Paragraphs 128 – 132 (sorting 3D points by codes / position / geometry of the points including using a Morton code order at least rendering the selection of methods obvious to one of ordinary skill in the art), 308 – 312 (sorts on attribute and depth / geometry data – see at least Paragraph 312 for Morton order of the position of the points rendering obvious the geometry data claimed which is signaled with the tree data), 448 – 452 (see at least “a_pred_mode” rendering obvious indications if the attribute information follows the predicted tree order / prediction order or not), 509 – 520 (sorting geometry / position data in the
encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data – see in view of Paragraphs 522 – 530 (reordering is an obvious variant of the claimed sorting to one of ordinary skill in the art)) or alternatively Sugio Figures 60 – 66 as well as Paragraphs 565 – 578 (order to geometry information coded / decoded where at least Paragraph 568 renders obvious same order considerations / signaling for attribute and geometry information)].
The motivation to combine Sugio with Iguchi is to combine features in the same / related field of invention of encoding / decoding three dimensional data such as point clouds [Sugio Paragraphs 1 – 4] in order to improve efficiencies of coding / decoding by combining given information with differences [Sugio Paragraphs 9 – 11 where the Examiner observes at least KSR Rationales (D) or (F) are also applicable].
The motivation to combine Auwera with Sugio and Iguchi is to combine features in the same / related field of invention of point cloud encoding / decoding [Auwera Paragraphs 2 – 3] in order to improve storage requirements and position location information of point cloud data [Auwera Paragraphs 3 – 4 and 35 – 37 where the Examiner observes at least KSR Rationales (D) or (F) are also applicable].
The motivation to combine Yea with Auwera, Sugio, and Iguchi is to combine features in the same / related field of invention of point cloud coding [Yea Paragraph 2] in order to improve performance in compressing point cloud data [Yea Paragraphs 4, 36, 56, and 122 – 124 where the Examiner observes at least KSR Rationales (D) or (F) are also applicable].
This is the motivation to combine Iguchi, Sugio, Auwera, and Yea which will be used throughout the Rejection.
Regarding claim 10, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio combining features in differential / residual based encoding / decoding and further with Auwera’s syntax teachings and information signaled including azimuth / angle information and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
sort the point cloud data based on the coding order, and wherein the coding order is an order of searching the position predictive tree [Iguchi Figure 16, 54 (see at least reference character 6601), 88 – 92 (see at least reference character 10001), and 100 – 103 as well as Paragraphs 128 – 132 (sorting 3D points by codes / position / geometry of the points), 308 – 312 (sorts on attribute and depth / geometry data), 509 – 520 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data – see in view of Paragraphs 522 – 530 (reordering is an obvious variant of the claimed sorting to one of ordinary skill in the art)); Yea Figures 8 – 9 as well as Paragraphs 107 – 109 (tree traversal order based on coding order / LOD order) and 112 – 123 (transform based on ordered points / ordering points in coding / prediction order)].
See claim 9 for the motivation to combine Iguchi, Sugio, Auwera, and Yea.
Regarding claim 11, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio combining features in differential / residual based encoding / decoding and further with Auwera’s syntax teachings and information signaled including azimuth / angle information and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
sort the point cloud data in an azimuth order, a radius order, or a Morton order based on the geometry data [Iguchi Figure 16, 54 (see at least reference character 6601), 88 – 92 (see at least reference character 10001), and 100 – 103 as well as Paragraphs 128 – 132 (sorting 3D points by codes / position / geometry of the points including using a Morton code order at least rendering the selection of methods obvious to one of ordinary skill in the art), 308 – 312 (sorts on attribute and depth / geometry data – see at least Paragraph 312 for Morton order of the position of the points rendering obvious the geometry data claimed), 509 – 520 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data – see in view of Paragraphs 522 – 530 (reordering is an obvious variant of the claimed sorting to one of ordinary skill in the art))].
See claim 9 for the motivation to combine Iguchi, Sugio, Auwera, and Yea.
Regarding claim 12, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio combining features in differential / residual based encoding / decoding and further with Auwera’s syntax teachings and information signaled including azimuth / angle information and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
encode a mode difference between the position predictive mode and the attribute predictive mode [Sugio Figures 48 – 50 and 60 (see at least reference character S3016) as well as Paragraphs 436 – 446 (attribute difference information encoded with other differences between attributes and the predictive mode in Paragraph 419), 489, 546 (differences from predicted value / mode (obvious variant to one of ordinary skill in the art) further rendered obvious in Paragraphs 576, 636, and 669); Iguchi Figures 51 – 53, 81 – 83 (see at least reference character S962), and 108 – 110 as well as Paragraph 298 (rendering obvious the prediction mode to encode position / geometry information), 454 – 460 and 599 – 602 (difference from attribute and predicted modes / values)]; and
encode a residual difference between the position residual and the attribute residual [Sugio Figures 125 – 127 as well as Paragraphs 964 – 966 (encoding the residual of the prediction / position residual and attribute information (including differences – see previous limitation)); Iguchi Figures 51 – 53, 80 – 83, and 108 – 112 as well as Paragraphs 445 – 460 (geometry and attribute residual value), and 599 – 617 (encoding residuals of prediction modes / values and other information / attributes (e.g. Paragraphs 609 and 612 the angles))].
See claim 9 for the motivation to combine Iguchi, Sugio, Auwera, and Yea.
Regarding claim 22, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio combining features in differential / residual based encoding / decoding and further with Auwera’s syntax teachings and information signaled including azimuth / angle information and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
a memory [Iguchi Figures 1, 5, 46 and 125 (see at least reference characters 3149 and 4615 – 4616) as well as Paragraphs 31 – 33 and 291 – 295 (memory and processor implementations)]; and
at least one processor connected to the memory [Iguchi Figures 1, 5, 46 and 125 (see at least reference characters 3149 and 4615 – 4616) as well as Paragraphs 31 – 33 and 291 – 295 (memory and processor implementations)], the at least one processor is configured to:
decode geometry data of the point cloud data in a bitstream based on a predictive tree [Iguchi Figures 21 – 22 (see at least the decoding unit receives encoded point cloud data), 36 – 38 (see at least client in reference character 1502), 80 – 82, 103 – 104 (see at least reference characters S10022 and S10023) and 133 – 134 as well as Paragraphs 154 – 163 (decoding position / geometry data as well as attribute data / information where Paragraph 167 renders obvious encoding geometry and attribute data for point cloud data), 247 – 254 (receiving coded point cloud data), 450 – 453 (prediction mode encoded with the tree data), 465, 475, and 482 – 492 (prediction modes and residuals to decode geometry data in a bitstream), and 560 and 576 – 586 (tree and mode information where the mode information is associated with the prediction / position residual included as well and further used in decoding in Paragraphs 600 – 603), and 841 – 848 (decoder includes receiving encoded attribute and position information (geometry))]; and
decode attribute data of the point cloud data based on a tree structure [Iguchi Figures 21 – 22 (see at least the decoding unit receives encoded point cloud data), 36 – 38 (see at least client in reference character 1502), 80 – 82, and 133 – 134 as well as Paragraphs 57 (decoder reconstructs point cloud data), 154 – 163 (decoding position / geometry data as well as attribute data / information where Paragraph 167 renders obvious encoding geometry and attribute data for point cloud data), 247 – 254 (receiving coded point cloud data), 308 – 312 (coding order based on the tree), 448 – 455 (prediction tree contains attribute information, prediction mode information, and residual values of attribute data (see at least Paragraph 450)), 525 – 528 (coding tree information with position / attribute information), 573 – 581 (attributes and geometry captured in the predictive tree information encoded see also Paragraphs 588 and 600 – 602), and 841 – 848 (decoder includes receiving encoded attribute and position information (geometry))],
wherein, to decode the geometry data, the at least one processor is further configured to reconstruct the geometry data of the point cloud data based on a coding order [Iguchi Figures 16, 21 – 22 (see at least the decoding unit receives encoded point cloud data), 54, 88 – 92, and 100 – 103 as well as Paragraphs 308 – 312 (sorts on attribute and depth / geometry data), 453 – 462 (reconstructing the tree / geometry data in combination with Paragraphs 485 – 492 and Yea Paragraphs 103 – 109), 509 – 528 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data and the transform performed in at least Paragraph 516 which uses transforms to combine with the transform teachings based on coding order taught by Yea), 573 – 581 (attributes and geometry captured in the predictive tree information encoded see also Paragraphs 588 and 600 – 602), and 841 – 848 (decoder includes receiving encoded attribute and position information (geometry)); Yea Figures 9 – 10 (see at least the predictive / lifting transforms used based on the order given of the points) as well as Paragraphs 103 – 109 (octree / tree prediction with order based on LOD information for the traversal order with reconstruction of the geometry / tree information in Paragraphs 103 – 107) and 112 – 123 (predicting transform based on coding order / LOD order render obvious variants of the claimed “coding order” where the order / distances form the coding order for the predicting / lifting transform)],
wherein, to decode the attribute data, the at least one processor is further configured to reconstruct attribute values in the coding order of a block for the tree structure [Iguchi Figures 16, 21 – 22 (see at least the decoding unit receives encoded point cloud data), 54, 80 – 82, 88 – 92, and 100 – 103 as well as Paragraphs 308 – 312 (sorts on attribute and depth / geometry data based on the tree structure), 448 – 452 (attribute data part of the tree and thus in coding order in combination with Paragraphs 308 – 312), and 509 – 520 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data and the transform performed in at least Paragraph 516 which uses transforms to combine with the transform teachings based on coding order taught by Yea), 573 – 581 (attributes and geometry captured in the predictive tree information encoded see also Paragraphs 588 and 600 – 602), and 841 – 848 (decoder includes receiving encoded attribute and position information (geometry)); Yea Figures 7 (decoder) and 9 – 10 (see at least the predictive / lifting transforms used based on the order given of the points) as well as Paragraphs 76 (metadata included in coding order), 107 – 109 (octree / tree prediction with order based on LOD information for the traversal order) and 112 – 123 (predicting transform based on coding order / LOD order render obvious variants of the claimed “coding order” where the order / distances form the coding order for the predicting / lifting transform)], and
wherein the bitstream includes information representing the geometry data is coded using the predictive tree [Iguchi Figures 40 – 43, 53 – 5, and 84 – 86 as well as Paragraphs 258 – 263 and 267 – 270 (type of predictive tree information signaled), 303 – 308 (e.g. “pred_mode”) and 485 – 492 (encoding geometry / attribute data to a bitstream)], information for representing a prediction method related to the attribute data [Iguchi Figures 84 and 103 – 108 as well as Paragraphs 461 – 464 (coding attribute information with prediction methods / techniques) and 471 – 480 and 585 – 596 (coding mode an obvious variant of the claimed “prediction method” for the attribute information (see at least Paragraph 590) which is further combinable with at least Auwera); Auwera Figures 7 – 8 (subfigures included) as well as Paragraphs 73 – 83 (encoding attribute information of objects / lasers / scans) and 125 – 128 (prediction information and associated syntax to code attribute / angle information)] and information for representing whether or not an order in which the attribute data is coded is the coding order [Iguchi Figures 16, 54 (see at least reference character 6601), 80 – 82 (syntax related to attribute and geometry data using prediction / coding information), 88 – 92 (see at least reference character 10001), and 100 – 103 as well as Paragraphs 128 – 132 (sorting 3D points by codes / position / geometry of the points including using a Morton code order at least rendering the selection of methods obvious to one of ordinary skill in the art), 308 – 312 (sorts on attribute and depth / geometry data – see at least Paragraph 312 for Morton order of the position of the points rendering obvious the geometry data claimed which is signaled with the tree data), 448 – 452 (see at least “a_pred_mode” rendering obvious indications if the attribute information follows the predicted tree order / prediction order or not), 509 – 520 (sorting geometry / position data in the encoding process which affects
the arrangement of the data for the predicted tree generated from the sorted data – see in view of Paragraphs 522 – 530 (reordering is an obvious variant of the claimed sorting to one of ordinary skill in the art)) or alternatively Sugio Figures 60 – 66 as well as Paragraphs 565 – 578 (order of geometry information coded / decoded where at least Paragraph 568 renders obvious the same order considerations / signaling for attribute and geometry information)].
See claim 9 for the motivation to combine Iguchi, Sugio, Auwera, and Yea as the decoder is the obvious inverse of the claimed encoder as recognized by one of ordinary skill in the art.
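For illustration only (this sketch is Examiner shorthand, not text or code from Iguchi, Sugio, Auwera, or Yea; all identifiers and the mode numbering are hypothetical), the predictive-tree geometry decoding mapped above — each node carrying a prediction mode and a residual, reconstructed in coding order — may be sketched as:

```python
# Illustrative sketch only; hypothetical names and mode set.

def predict(parent, grandparent, mode):
    """Predicted position from ancestor positions for a simple mode set."""
    if mode == 0 or parent is None:          # no prediction (e.g., root node)
        return (0, 0, 0)
    if mode == 1 or grandparent is None:     # delta coding from the parent
        return parent
    # mode 2: linear prediction, 2 * parent - grandparent
    return tuple(2 * p - g for p, g in zip(parent, grandparent))

def decode_geometry(nodes):
    """nodes: (parent_index, mode, residual) tuples visited in coding order."""
    positions, parent_of = [], []
    for parent_idx, mode, residual in nodes:
        parent = positions[parent_idx] if parent_idx is not None else None
        gp_idx = parent_of[parent_idx] if parent_idx is not None else None
        grandparent = positions[gp_idx] if gp_idx is not None else None
        pred = predict(parent, grandparent, mode)
        # reconstructed position = prediction + signaled residual
        positions.append(tuple(p + r for p, r in zip(pred, residual)))
        parent_of.append(parent_idx)
    return positions
```

The sketch shows only the general principle that the signaled mode selects an ancestor-based predictor and the residual corrects it; the cited references describe their own particular modes and syntax.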
Regarding claim 23, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered, including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain the teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio, combining features in differential / residual based encoding / decoding, and further with Auwera’s syntax teachings and the information signaled including azimuth / angle information, and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
reconstruct a position predictive tree based on the geometry data [See “decoder” in claim 22 (e.g. Paragraph 52 for structures) and additionally Iguchi Figures 80 – 82 and 103 – 104 (see at least reference characters S10022 and S10023) as well as Paragraphs 453 and 576 – 586 (decoding position information predictive tree)]; and
decode the geometry data based on a structure of the position predictive tree, a position predictive mode, and a position residual [Iguchi Figures 103 – 106 as well as Paragraphs 450 – 453 (prediction mode encoded with the tree data), 465, 475, and 483 – 486 (prediction modes and residuals to decode geometry data), and 560 and 580 – 586 (tree and mode information where the mode information is associated with the prediction / position residual included as well and further used in decoding in Paragraphs 600 – 603)];
sort the point cloud data [Iguchi Paragraph 57 (decoder reconstructs point cloud data) to be viewed in combination with Iguchi Figures 16, 54 (see at least reference character 6601), 88 – 92 (see at least reference character 10001), and 100 – 103 as well as Paragraphs 128 – 132 (sorting 3D points by codes / position / geometry of the points), 308 – 312 (sorts on attribute and depth / geometry data), 509 – 520 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data – see in view of Paragraphs 522 – 530 (reordering is an obvious variant of the claimed sorting to one of ordinary skill in the art))];
reconstruct an attribute predictive tree based on the sorted point cloud data [Iguchi Figures 54, 78 – 79, 88 – 92 and 100 – 103 as well as Paragraphs 57 (decoder reconstructs point cloud data), 128 – 132 (sorting 3D points by codes / position / geometry of the points), 301 – 306 (adding attribute information to the predictive tree), 308 – 312 (sorts on attribute and depth / geometry data), 438 – 443 (adding information to the tree for coding / decoded from the tree rendering obvious information was encoded with the predicted tree such as in Paragraphs 448 or 453 (trees for geometry and attribute data)), 525 – 528 (coding tree information with position / attribute information), 573 – 581 (attributes and geometry captured in the predictive tree information encoded see also Paragraphs 588 and 600 – 602)]; and
decode the attribute data based on a structure of the attribute predictive tree, an attribute predictive mode, and an attribute residual [Iguchi Figures 80 – 82 as well as Paragraphs 57 (decoder reconstructs point cloud data), 448 – 455 (prediction tree contains attribute information, prediction mode information, and residual values of attribute data (see at least Paragraph 450)), 525 – 528 (coding tree information with position / attribute information), 573 – 581 (attributes and geometry captured in the predictive tree information encoded see also Paragraphs 588 and 600 – 602)].
See claim 22 for the motivation to combine Iguchi, Sugio, Auwera, and Yea.
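For illustration only (Examiner shorthand, not drawn from the cited references; the names and the two-mode scheme are hypothetical), the attribute decoding of claim 23 — reconstructing each attribute value from the attribute predictive tree, an attribute predictive mode, and an attribute residual — may be sketched as:

```python
# Illustrative sketch only; hypothetical names and mode numbering.

def decode_attributes(attr_nodes):
    """attr_nodes: (parent_index, mode, residual) tuples in attribute-tree
    order. mode 0 = no prediction; mode 1 = predict from the parent's
    reconstructed attribute value."""
    values = []
    for parent_idx, mode, residual in attr_nodes:
        pred = 0 if (mode == 0 or parent_idx is None) else values[parent_idx]
        values.append(pred + residual)  # reconstructed attribute value
    return values
```

As with the geometry sketch, the point is only that the tree structure supplies the predictor, the mode selects it, and the residual corrects it.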
Regarding claim 24, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered, including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain the teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio, combining features in differential / residual based encoding / decoding, and further with Auwera’s syntax teachings and the information signaled including azimuth / angle information, and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
sort the point cloud data based on the coding order, and wherein the coding order is an order of searching the position predictive tree [Iguchi Figures 16, 54 (see at least reference character 6601), 80 – 82, 88 – 92 (see at least reference character 10001), and 100 – 103 as well as Paragraphs 57 (functions of reconstructor performed by a decoder), 128 – 132 (sorting 3D points by codes / position / geometry of the points), 308 – 312 (sorts on attribute and depth / geometry data), 509 – 520 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data – see in view of Paragraphs 522 – 530 (reordering is an obvious variant of the claimed sorting to one of ordinary skill in the art)); Yea Figures 8 – 9 as well as Paragraphs 107 – 109 (tree traversal order based on coding order / LOD order) and 112 – 123 (transform based on ordered points / ordering points in coding / prediction order)].
See claim 22 for the motivation to combine Iguchi, Sugio, Auwera, and Yea.
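For illustration only (Examiner shorthand; the structure is hypothetical and not taken from any cited reference), an "order of searching the position predictive tree" as recited in claim 24 can be realized as a simple depth-first traversal:

```python
# Illustrative sketch only: a coding order defined as the depth-first
# search order of a predictive tree (hypothetical representation).

def traversal_order(children, root=0):
    """Return node indices in depth-first order; children: node -> list."""
    order, stack = [], [root]
    while stack:
        node = stack.pop()
        order.append(node)
        # reversed() so the first-listed child is visited first
        stack.extend(reversed(children.get(node, [])))
    return order
```

Any deterministic traversal (depth-first, breadth-first, etc.) would serve the same signaling purpose; depth-first is shown only as one concrete possibility.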
Regarding claim 25, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered, including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain the teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio, combining features in differential / residual based encoding / decoding, and further with Auwera’s syntax teachings and the information signaled including azimuth / angle information, and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
sort the point cloud data in an azimuth order, a radius order, or a Morton order based on the geometry data [Iguchi Figures 16, 54 (see at least reference character 6601), 80 – 83, 88 – 92 (see at least reference character 10001), and 100 – 103 as well as Paragraphs 57 (functions of reconstructor performed by a decoder), 128 – 132 (sorting 3D points by codes / position / geometry of the points including using a Morton code order at least rendering the selection of methods obvious to one of ordinary skill in the art), 308 – 312 (sorts on attribute and depth / geometry data – see at least Paragraph 312 for Morton order of the position of the points rendering obvious the geometry data claimed), 509 – 520 (sorting geometry / position data in the encoding process which affects the arrangement of the data for the predicted tree generated from the sorted data – see in view of Paragraphs 522 – 530 (reordering is an obvious variant of the claimed sorting to one of ordinary skill in the art))].
See claim 22 for the motivation to combine Iguchi, Sugio, Auwera, and Yea.
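For illustration only (Examiner shorthand; names are hypothetical), the Morton-order option of claim 25 amounts to sorting points by a key formed by interleaving the bits of the x, y, z coordinates:

```python
# Illustrative sketch only: Morton (Z-order) sorting of integer 3D points.

def morton_key(x, y, z, bits=10):
    """Interleave the low `bits` bits of x, y, z into one Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def sort_morton(points):
    """points: iterable of (x, y, z) integer tuples."""
    return sorted(points, key=lambda p: morton_key(*p))
```

Azimuth or radius sorting would use the same pattern with a spherical-coordinate key in place of the Morton code, which is why selection among the sort keys is treated above as a design choice available to one of ordinary skill.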
Regarding claim 26, Iguchi teaches rendering and tree features to classify and sort the point cloud data for compression / decompression. Sugio teaches coding / decoding point cloud data with prediction and attribute information considered, including the differences of such data. Auwera teaches syntax elements and signaling angle / azimuth information for point cloud compression with additional laser identification information. Yea teaches using a coding order related to prediction trees for transform application to the point cloud data to further explain the teachings of Iguchi.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Iguchi with those of Sugio, combining features in differential / residual based encoding / decoding, and further with Auwera’s syntax teachings and the information signaled including azimuth / angle information, and additionally with the coding order considerations for transform application as taught by Yea. The combination teaches
decode the attribute data based on a mode difference between the position predictive mode and the attribute predictive mode [Sugio Figures 48 – 50 and 60 (see at least reference character S3016) as well as Paragraphs 436 – 446 (attribute difference information encoded with other differences between attributes and the predictive mode in Paragraph 419), 489, 546 (differences from predicted value / mode (obvious variant to one of ordinary skill in the art) further rendered obvious in Paragraphs 576, 636, and 669 (encoded difference data decoded)); Iguchi Figures 51 – 53, 81 – 83 (see at least reference character S962), and 108 – 110 as well as Paragraphs 57 (decoding encoded data), 298 (rendering obvious the prediction mode to encode position / geometry information), 454 – 460 and 599 – 602 (difference from attribute and predicted modes / values)]; and
decode the attribute data based on a residual difference between the position residual and the attribute residual [Sugio Figures 125 – 127 as well as Paragraphs 964 – 966 (encoding the residual of the prediction / position residual and attribute information (including differences – see previous limitation including decoder citations)); Iguchi Figures 51 – 53, 80 – 83, and 108 – 112 as well as Paragraphs 57 (decoding encoded data), 445 – 460 (geometry and attribute residual value), and 599 – 617 (encoding residuals of prediction modes / values and other information / attributes (e.g. Paragraphs 609 and 612, the angles))].
See claim 22 for the motivation to combine Iguchi, Sugio, Auwera, and Yea.
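For illustration only (Examiner shorthand; the names are hypothetical and not from the cited references), the difference-based decoding of claim 26 recovers the attribute-side values by adding the signaled deltas back onto the position-side values:

```python
# Illustrative sketch only: the attribute predictive mode and attribute
# residual recovered from signaled differences relative to the position
# predictive mode and position residual (hypothetical names).

def decode_attribute_side_info(pos_mode, pos_residual, mode_diff, res_diff):
    """Attribute mode / residual = position mode / residual + signaled delta."""
    attr_mode = pos_mode + mode_diff
    attr_residual = pos_residual + res_diff
    return attr_mode, attr_residual
```

Coding only the deltas exploits the correlation between the geometry and attribute prediction decisions, which is the bit-saving rationale underlying the Sugio difference teachings cited above.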
Allowable Subject Matter
Claims 7 – 8, 13 – 14, 20 – 21, and 27 – 28 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) [or 35 U.S.C. 112(d)] or 35 U.S.C. 112 (pre-AIA), 2nd paragraph [or 35 U.S.C. 112, 4th paragraph], set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Claims 7 and 8 are taken as the representative claims. Regarding claim 7, the claim recites a grouping requirement of the mode differences and residual differences for encoding (and similarly decoding) that is not fairly taught in the prior art; the prior art teaches encoding / decoding generally but does not teach or render obvious the particulars of the entropy / CABAC / binarization processing contemplated in the claim limitation. Regarding claim 8, the claim recites grouping points with spherical coordinate considerations based on the same laser / identification of the laser (assuming a plurality of lasers or the presence of a LaserID syntax element / attribute), which is not disclosed in the cited prior art nor fairly found in search and consideration; while levels of detail and layers of points are taught, grouping by coordinate components (e.g., azimuth or radius in spherical coordinates) was not found in the prior art. Claims 13, 20, and 27 are similarly allowable as the corresponding encoder / decoder apparatus and method claims to claim 7, and claims 14, 21, and 28 are similarly allowable as corresponding to claim 8.
The Examiner, in the Conclusion section, cites additional pertinent art, including the interference search results, based on the indicated allowable subject matter.
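For illustration only (Examiner shorthand; the field names and ordering choice are hypothetical and not from the claims or the cited art), the laser-based grouping discussed for claim 8 can be pictured as collecting points per laser identifier and ordering each group by a spherical-coordinate component:

```python
# Illustrative sketch only: grouping points by laser identifier, then
# ordering each group by azimuth (hypothetical field names).
from collections import defaultdict

def group_by_laser(points):
    """points: iterable of (laser_id, azimuth, radius) tuples."""
    groups = defaultdict(list)
    for laser_id, azimuth, radius in points:
        groups[laser_id].append((azimuth, radius))
    for members in groups.values():
        members.sort()  # order within a laser group by azimuth
    return dict(groups)
```

The sketch only fixes the idea of "same laser" grouping with spherical-coordinate ordering; the allowable claims recite their own particular combination of these considerations.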
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Mammou, et al. (US PG PUB 2021/0104075 A1 referred to as “Mammou” throughout) teaches in Paragraph 47 difference considerations similar to claim 6. Ray, et al. (US PG PUB 2022/0114763 A1 referred to as “Ray” throughout) in Figures 2 – 3 and Paragraphs 62 – 69 teaches LOD considerations with collection of points for coding / decoding.
References found during the Interference Search: Hur, et al. (US PG PUB 2022/0383552 A1, referred to as “Hur” throughout) was closest to claim 7, but is commonly owned and is excepted as prior art.
References which could raise ODP issues based on amendments to the claims: Lee, et al. (US PG PUB 2020/0153885 A1 referred to as “Lee” throughout).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler W Sullivan whose telephone number is (571)270-5684. The examiner can normally be reached on IFP (a flexible schedule).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TYLER W. SULLIVAN/ Primary Examiner, Art Unit 2487