DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant(s) Response to Official Action
The response filed on 12/04/2025 has been entered and made of record.
Response to Arguments/Amendments
Presented arguments have been fully considered but are rendered moot in view of the new ground(s) of rejection necessitated by applicant's amendments.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 11-17 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.
Re. Claims 11 and 17, the claims do not appear to have adequately disclosed or described the following limitation: “… wherein the geometry data is coded based on a set of beams …”.
For the purposes of examination, the claims are interpreted as the following:
“… decod[ing] geometry data in a coded point cloud frame of the point cloud data based on first coordinates …” (emphasis added).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 11-17 are rejected under 35 U.S.C. 103 as being unpatentable over Lasserre et al., hereinafter referred to as Lasserre (US 2021/0192798 A1) in view of Mammou et al., hereinafter referred to as Mammou (US 2019/0311502 A1).
As per claim 11, Lasserre discloses a method of decoding point cloud data (Lasserre: Abstract), the method comprising:
decoding geometry data in a coded point cloud frame of the point cloud data based on first coordinates (Lasserre: Para. [0023] discloses "decoding encoded data to reconstruct a point cloud, the point cloud being located within a volumetric space containing the points of the point cloud, each of the points having a geometric location"; Para. [0150]: "entropy decode 712 the bitstream of encoded occupancy data to reconstruct and output the decoded point cloud."); and
converting the first coordinates to second coordinates (Lasserre: Para. [0071] discloses "change of coordinates X of a referential system relative to the coordinates Y of the master referential system master may be defined using a linear transform: Y=MX+V"; Lasserre: Para. [0146] discloses "transform may be applied to the reference point cloud to place the reference point cloud in the applicable frame of reference". Here, the change of coordinates X to Y corresponds to converting first coordinates to second coordinates.);
performing global motion compensation to a point of a reference frame for the coded point cloud frame based on a global motion matrix and the second coordinates and converting the second coordinates to the first coordinates (Lasserre: Para. [0073] discloses "matrix M and the vector V are quantized and coded into the bitstream to define the global motion of a frame of reference"; Lasserre: Para. [0146] discloses "motion compensation 612 process also takes into account the referential motion, i.e. transform. In other words, it applies both the applicable transform and the motion vector to the reference point cloud to generate a prediction."; Lasserre: Para. [0149] discloses "Referential motion, i.e. transform(s), are decoded 704 to find the relative motion … used in a motion compensation 710 process … to produce the prediction.").
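For illustration only, the linear coordinate change (Y = MX + V) and global motion compensation described in the cited passages of Lasserre may be sketched as follows; the matrices, vectors, and point values are hypothetical examples, not drawn from the reference:

```python
import numpy as np

# Hypothetical sketch: convert first coordinates to second coordinates via a
# linear transform Y = MX + V, apply global motion compensation in the second
# coordinates, then convert back to the first coordinates.

# First coordinates of a reference-frame point (example values)
x = np.array([1.0, 2.0, 3.0])

# Change-of-coordinates transform into the master referential system
M = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])   # example rotation about the z axis
V = np.array([0.5, 0.0, 0.0])      # example translation

y = M @ x + V                      # first coordinates -> second coordinates

# Global motion of the frame of reference, applied in the second coordinates
G = np.eye(3)                      # global motion matrix (identity for simplicity)
t = np.array([0.0, 0.0, 0.1])      # example global translation
y_comp = G @ y + t                 # motion-compensated point

# Convert the second coordinates back to the first coordinates
x_comp = np.linalg.solve(M, y_comp - V)
print(x_comp)
```

The inverse conversion uses `np.linalg.solve` rather than an explicit matrix inverse, which is the numerically preferred way to undo the transform.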
However, Lasserre does not explicitly disclose “… decoding attribute data in the coded point cloud frame using Levels of Detail (LoDs) …”.
Further, Mammou is in the same field of endeavor and teaches decoding attribute data in the coded point cloud frame using Levels of Detail (LoDs) (Mammou: Para. [0236] discloses "spatial information may be used to build a hierarchical Level of detail (LOD) structure. The LOD structure may be used to compress attributes"; Mammou: Paras. [0285]-[0290] disclose the method of decoding attribute information using levels of detail.).
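For illustration only, one common way a hierarchical Level of Detail (LoD) structure partitions points (each level retains points separated by at least a per-level distance threshold) may be sketched as follows; the thresholds, points, and greedy construction below are hypothetical and are not asserted to be Mammou's specific method:

```python
import numpy as np

# Hypothetical sketch: greedy LoD construction. A point joins the current
# level only if it is at least `d` away from every point already retained
# in coarser levels or in the current level.
def build_lods(points, thresholds):
    remaining = list(range(len(points)))
    retained = []   # indices already placed in coarser levels
    lods = []
    for d in thresholds:
        level = []
        for i in list(remaining):
            p = points[i]
            if all(np.linalg.norm(p - points[j]) >= d for j in retained + level):
                level.append(i)
                remaining.remove(i)
        retained += level
        lods.append(level)
    lods.append(list(remaining))  # finest level: everything left over
    return lods

pts = np.array([[0.0, 0.0, 0.0],
                [5.0, 0.0, 0.0],
                [0.1, 0.0, 0.0],
                [2.5, 0.0, 0.0]])
print(build_lods(pts, thresholds=[4.0, 2.0]))
```

Attributes can then be coded level by level, with coarser levels serving as predictors for finer ones, which is what makes the structure useful for compression and scalability.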
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Lasserre and Mammou before him or her, to modify the decoding method of Lasserre to include the attribute decoding method described in Mammou. The motivation for doing so would have been to improve the compression efficiency and scalability of the point cloud data by utilizing hierarchical structure techniques that effectively manage the large amounts of data associated with the reconstructed geometric points.
As per claim 17, Lasserre discloses a device for decoding point cloud data (Lasserre: Abstract), the device comprising:
a memory (Lasserre: Para. [0169] discloses the decoder 1200 includes a memory 1204.); and
at least one processor connected to the memory (Lasserre: Para. [0169] discloses the decoder 1200 includes a decoding application 1206 that includes a computer program or application stored in memory 1204 and containing instructions that, when executed, cause the processor 1202 to perform operations such as those described herein.), wherein the at least one processor is configured to:
decode geometry data in a coded point cloud frame of the point cloud data based on first coordinates (Lasserre: Para. [0023] discloses "decoding encoded data to reconstruct a point cloud, the point cloud being located within a volumetric space containing the points of the point cloud, each of the points having a geometric location"; Para. [0150]: "entropy decode 712 the bitstream of encoded occupancy data to reconstruct and output the decoded point cloud.");
convert the first coordinates to second coordinates (Lasserre: Para. [0071] discloses "change of coordinates X of a referential system relative to the coordinates Y of the master referential system master may be defined using a linear transform: Y=MX+V"; Lasserre: Para. [0146] discloses "transform may be applied to the reference point cloud to place the reference point cloud in the applicable frame of reference". Here, the change of coordinates X to Y corresponds to converting first coordinates to second coordinates.);
perform global motion compensation to a point of a reference frame for the coded point cloud frame based on a global motion matrix and the second coordinates and convert the second coordinates to the first coordinates (Lasserre: Para. [0073] discloses "matrix M and the vector V are quantized and coded into the bitstream to define the global motion of a frame of reference"; Lasserre: Para. [0146] discloses "motion compensation 612 process also takes into account the referential motion, i.e. transform. In other words, it applies both the applicable transform and the motion vector to the reference point cloud to generate a prediction."; Lasserre: Para. [0149] discloses "Referential motion, i.e. transform(s), are decoded 704 to find the relative motion … used in a motion compensation 710 process … to produce the prediction.").
However, Lasserre does not explicitly disclose “… decode attribute data in the coded point cloud frame using Levels of Detail (LoDs) …”.
Further, Mammou is in the same field of endeavor and teaches decoding attribute data in the coded point cloud frame using Levels of Detail (LoDs) (Mammou: Para. [0236] discloses "spatial information may be used to build a hierarchical Level of detail (LOD) structure. The LOD structure may be used to compress attributes"; Mammou: Paras. [0285]-[0290] disclose the method of decoding attribute information using levels of detail.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, and having the teachings of Lasserre and Mammou before him or her, to modify the decoding device of Lasserre to include the attribute decoding method described in Mammou. The motivation for doing so would have been to improve the compression efficiency and scalability of the point cloud data by utilizing hierarchical structure techniques that effectively manage the large amounts of data associated with the reconstructed geometric points.
As per claim 12, Lasserre-Mammou disclose the method of claim 11, wherein the first coordinates are related to Cartesian coordinates, and wherein the second coordinates are related to angular coordinates (Mammou: Paras. [0046], [0135]-[0136] disclose that the spatial information may be relative to a local coordinate system or to a global coordinate system (for example, a Cartesian coordinate system) [first coordinates are related to Cartesian coordinates], and that cylindrical or spherical projection methods may be used, for example, equirectangular or equiangular projection in the spherical case [second coordinates are related to angular coordinates]. Spherical or cylindrical projection inherently relies on angular coordinates to map 3D points onto a 2D surface.).
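For illustration only, the relationship between Cartesian (first) coordinates and angular (second) coordinates referenced above may be sketched with a standard Cartesian-to-spherical conversion; the input values below are hypothetical:

```python
import math

# Sketch: convert Cartesian coordinates (first coordinates) to
# angular/spherical coordinates (second coordinates).
def cartesian_to_spherical(x, y, z):
    r = math.sqrt(x * x + y * y + z * z)            # radial distance
    azimuth = math.atan2(y, x)                      # angle in the x-y plane
    elevation = math.atan2(z, math.hypot(x, y))     # angle above the x-y plane
    return r, azimuth, elevation

r, az, el = cartesian_to_spherical(1.0, 1.0, math.sqrt(2.0))
print(r, az, el)
```

The azimuth and elevation angles are exactly the kind of angular coordinates on which spherical and cylindrical projections rely.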
As per claim 13, Lasserre-Mammou disclose the method of claim 11, wherein the decoding the geometry data in the coded point cloud frame further includes generating a cylinder box for the reference frame (Lasserre: Para. [0092] discloses "The regions 302, 304, 306 may be cylindrical … The inner region 302 is associated with or attached to a first frame of reference … The outer region 306... is associated with … a second frame of reference".);
splitting the cylinder box into prediction units (PUs), and wherein the global motion compensation is performed based on the PUs (Lasserre: Para. [0094] discloses "In one embodiment, the finest granularity to which a meaningful segmentation may apply is the Prediction Unit (PU). A PU is a 3D block or cuboid to which a motion vector is attached and applied" and Lasserre: Para. [0107] discloses "The coded MV is the residual motion vector that is added to the global motion (transform) associated with the identified frame of reference to obtain the motion vector to be applied for compensation".).
As per claim 14, Lasserre-Mammou disclose the method of claim 13, further comprising:
obtaining flag information indicating whether motion estimation is performed based on the cylinder box (Lasserre: Para. [0106] discloses the concept of obtaining a description of a function [flag information] to determine how segmentation for motion estimation is performed.), wherein the generating the cylinder box comprises:
generating the cylinder box when the flag information is a specific value (Lasserre: Paras. [0092], [0106] disclose the regions may be cylindrical [cylinder box] and the description of the function may include the shape of the zone. If the function defines a circular (or elliptical) zone, it is necessary to signal the radii and foci. Therefore, generating the cylinder box when the information indicates such a shape is used by signaling specific values.).
As per claim 15, Lasserre-Mammou disclose the method of claim 14, further comprising:
obtaining information indicating a method of splitting the cylinder box, wherein the splitting comprises splitting the cylinder box into the PUs based on the information indicating the method of splitting the cylinder box (Lasserre: Paras. [0098]-[0099] disclose determining whether to split a volume (Logically Processing Unit or LPU) into prediction units (PUs) using flags or split combinations (methods) and Lasserre: Paras. [0092], [0149]-[0150] disclose that the decoder decodes segmentation information to reproduce the splitting and that these volumetric regions may correspond to cylindrical regions/boxes.).
As per claim 16, Lasserre-Mammou disclose the method of claim 15, wherein the method of splitting the cylinder box comprises at least one of azimuth, radius, and elevation as a basis for the splitting (Lasserre: Paras. [0023], [0092], [0106] disclose segmenting the volumetric space into concentric cylindrical regions wherein the segmentation is based on circular zones defined by radii.).
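For illustration only, splitting a cylinder box into prediction units (PUs) along azimuth, radius, and elevation may be sketched as follows; the bin counts and bounds are hypothetical parameters chosen for the example, not values from the references:

```python
import math

# Hypothetical sketch: assign a point to a PU by uniformly binning a cylinder
# box along azimuth, radius, and elevation (z). Bin counts/bounds are examples.
def pu_index(x, y, z, n_azimuth=8, n_radius=4, n_elev=2,
             r_max=10.0, z_min=-2.0, z_max=2.0):
    azimuth = math.atan2(y, x) % (2 * math.pi)      # [0, 2*pi)
    radius = math.hypot(x, y)
    a = min(int(azimuth / (2 * math.pi) * n_azimuth), n_azimuth - 1)
    r = min(int(radius / r_max * n_radius), n_radius - 1)
    e = min(int((z - z_min) / (z_max - z_min) * n_elev), n_elev - 1)
    return (a, r, e)

print(pu_index(1.0, 0.5, 0.5))
```

Each resulting (azimuth, radius, elevation) triple identifies one PU, to which a motion vector could then be attached for compensation.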
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and can be viewed in the list of references.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PEET DHILLON whose telephone number is (571)270-5647. The examiner can normally be reached M-F: 5am-1:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sath V. Perungavoor can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PEET DHILLON/Primary Examiner
Art Unit: 2488
Date: 02-23-2026