DETAILED ACTION
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The specification does not explain what act or process is performed by the recited “initializing,” so the metes and bounds of the claim cannot be determined.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 3-4, 8, 10-11, 13-15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Pham Van (US Patent 11,949,909).
Regarding Claim 1, Pham Van (US Patent 11,949,909) discloses a method (system 100 for encoding and decoding point cloud data, Column 4 lines 1-10, Fig. 1) comprising:
generating information for representing that a reference frame (reference frame may be signalled in the current frame, Column 25 lines 55-58) is generated based on a road (classify points in the point cloud as road points or object points, Column 28 lines 1-7) for the point cloud data (points, Column 25 lines 55-65) in a bitstream (signalled, Column 25 lines 55-59; i.e., carried in the bitstream);
encoding geometry data (point cloud positions, Column 8 lines 4-35) of the point cloud data (point cloud, Column 8 lines 4-35) based on an occupancy tree (compressed geometry is represented as an octree and occupancy is signaled or inferred for each child node, Column 8 lines 4-35) for the geometry data (geometry-based point cloud compression, point cloud positions, Column 7 line 50 – Column 8 line 35),
wherein the geometry data (current cloud 140, Column 11 lines 20-35) is inter-predicted (encoded using inter-prediction, Column 11 lines 20-35) based on the information for representing that the reference frame (reference cloud 130, Column 11 lines 20-35) is generated (reference frame may be signalled in the current frame, Column 25 lines 55-58) based on a road for the point cloud data (classify points in the point cloud as road points or object points, Column 28 lines 1-7); and encoding attribute data of the point cloud data (Once the geometry is coded, the attributes corresponding to the geometry points are coded, Column 8 lines 35-36).
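For illustration of the occupancy-tree coding cited above, the following is a minimal sketch of per-node occupancy signalling, in which each octree node carries an 8-bit mask marking which child octants contain points. This is a hypothetical simplification for the reader's reference, not code from Pham Van.

```python
# Minimal sketch of octree occupancy signalling as summarized above.
# Hypothetical simplification; not code from Pham Van (US 11,949,909).
from typing import List, Tuple

def occupancy_byte(points: List[Tuple[int, int, int]],
                   origin: Tuple[int, int, int], size: int) -> int:
    """Return the 8-bit occupancy mask of one octree node.

    Bit k is set when any point falls in child octant k of the cube
    anchored at `origin` with edge length `size`.
    """
    half = size // 2
    mask = 0
    for x, y, z in points:
        # Child index from the three half-space tests (x, y, z order).
        k = (((x - origin[0]) >= half) << 2) \
            | (((y - origin[1]) >= half) << 1) \
            | ((z - origin[2]) >= half)
        mask |= 1 << k
    return mask
```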
Regarding Claim 3, Pham Van (US Patent 11,949,909) discloses the method of claim 1, wherein the point cloud data is acquired by LiDAR, wherein the point cloud data comprises points based on a laser ID of the LiDAR (In angular mode, characteristics of LIDAR sensors may be used to code the prediction tree more efficiently. In angular mode, coordinates of positions are converted to radius (r) 384, azimuth (φ) 386, and laser index (i) 388 values, Column 15 lines 15-25).
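For the reader's reference, the cited angular-mode conversion to the (r, φ, i) domain can be sketched as below. The nearest-elevation lookup for the laser index is an assumed simplification, not the reference's method.

```python
# Illustrative sketch of the cited conversion to the (r, phi, i) domain.
# The laser-index lookup is an assumed simplification.
import math

def to_angular(x, y, z, laser_elevations):
    """Convert Cartesian coordinates to (radius r, azimuth phi, laser index i).

    `laser_elevations` lists each laser's elevation angle in radians;
    the laser whose elevation is closest to this return's elevation
    supplies the index i.
    """
    r = math.hypot(x, y)       # radius in the sensor's spin plane
    phi = math.atan2(y, x)     # azimuth around the Z axis
    elev = math.atan2(z, r)    # elevation angle of this return
    i = min(range(len(laser_elevations)),
            key=lambda k: abs(laser_elevations[k] - elev))
    return r, phi, i
```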
Regarding Claim 4, Pham Van (US Patent 11,949,909) discloses the method of claim 1, wherein a point for inter prediction is updated based on a threshold (If the distance between the paired points is greater than a threshold, G-PCC encoder 200 regards the paired points as feature points, Column 18 lines 1-18).
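The thresholding step cited above can be illustrated as follows; the pairing of reference and current points is assumed to be given, and the data layout is hypothetical.

```python
# Hypothetical sketch of the cited step: paired points farther apart
# than a threshold are regarded as feature points.
import math

def split_feature_points(pairs, threshold):
    """Partition (reference_point, current_point) pairs by distance.

    Returns (feature_pairs, regular_pairs); a pair whose distance
    exceeds `threshold` is treated as a feature point pair.
    """
    feature, regular = [], []
    for p_ref, p_cur in pairs:
        d = math.dist(p_ref, p_cur)  # Euclidean distance (Python 3.8+)
        (feature if d > threshold else regular).append((p_ref, p_cur))
    return feature, regular
```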
Regarding Claim 8, Pham Van (US Patent 11,949,909) discloses the method of claim 4, wherein the encoding of the point cloud data further comprises:
initializing the reference road frame (Reference cloud 130 may be stored in a decoded frame buffer or history buffer, Column 11 lines 25-30).
Regarding Claim 10, Pham Van (US Patent 11,949,909) discloses a device (encoder Fig. 2, Column 9 lines 50-55) comprising: a memory (memory, Fig. 2, Column 9 lines 53-57), and at least one processor connected to the memory, the at least one processor configured (implemented in hardware, software, firmware, Column 47 lines 10-15) …. The remainder of the claim is rejected on the grounds provided in Claim 1.
Regarding Claim 11, the claim is rejected on the grounds provided in Claim 1.
Regarding Claim 13, the claim is rejected on the grounds provided in Claim 3.
Regarding Claim 14, the claim is rejected on the grounds provided in Claim 4.
Regarding Claim 15, the claim is rejected on the grounds provided in Claim 10.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 2, 12 are rejected under 35 U.S.C. 103 as being unpatentable over Pham Van (US Patent 11,949,909), as evidenced by Liu (NPL “Extending the Detection Range for Low-Channel Roadside LiDAR by Static Background Construction,” IEEE 2022).
Regarding Claim 2, Pham Van (US Patent 11,949,909) discloses the method of claim 1, wherein the point cloud data comprises points related to a road and points related to an object (classify points in the point cloud as road points or object points, Column 28 lines 1-7), wherein, in a frame containing the points related to the road (classify points in the point cloud as road points or object points, Column 28 lines 1-7), the points related to the road are changed by the object related to the road, or are missing (inherent in a LiDAR scanning system).
Liu (NPL “Extending the Detection Range for Low-Channel Roadside LiDAR by Static Background Construction,” IEEE 2022) provides evidence that, in a LiDAR scanning system, the points related to the road are inherently changed by the object related to the road, or are missing (The laser beam will return different laser points for in-range objects when a vehicle or road user passes. For out-of-range objects, the laser beam will return a laser point when a vehicle or road user is passing, else it will return nothing, p. 4 left column, Fig. 2 and Fig. 2 caption).
Regarding Claim 12, the claim is rejected on the grounds provided in Claim 2.
Claim(s) 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Pham Van (US Patent 11,949,909) in view of Liu (NPL “Extending the Detection Range for Low-Channel Roadside LiDAR by Static Background Construction,” IEEE 2022).
Regarding Claim 5, Pham Van (US Patent 11,949,909) discloses the method of claim 1, wherein the encoding of the point cloud data (predictive geometry coding, Column 15 lines 15-25) comprises:
encoding (encode residual coordinate values, Column 15 lines 8-13; Column 16 line 44 – Column 18 line 40) a road frame (To derive motion set for ground/road, only the points with the label of "ground/road" may be used, Column 20 lines 22-30) of a current frame (current frame, Column 17 line 45 – end) containing the point cloud data (point cloud data, Column 15 line 12), wherein the encoding of the road frame comprises:
predicting (global/local motion between, Column 17 line 46 – Column 18 line 25) the road frame (To derive motion set for ground/road, only the points with the label of "ground/road" may be used, Column 20 lines 22-30) of the current frame (current frame, Column 17 line 45 – end) based on a reference road frame (prediction frame, reference, Column 18 lines 1-17) for the road frame (To derive motion set for ground/road, only the points with the label of "ground/road" may be used, Column 20 lines 22-30) of the current frame (current frame, Column 17 line 45 – end);
calculating points (residual values, Column 15 lines 14-25) for each laser ID (laser index, Column 15 lines 14-25) based on a spherical coordinate system (code residual values in the r, φ, i domain, Column 15 lines 15-30; where i is the laser ID, Column 26 lines 40-45) with origin coordinate information (the origin of the frame may be the center of the LIDAR system, Column 22 lines 43-50; lasers spinning around the Z axis according to an azimuth angle, Column 15 lines 35-40; LIDAR system positioned at point 476, Column 27 line 62 – end) about the reference road frame being changed (apply estimated global motion to the prediction reference frame, Column 17 line 58 – end);
searching the reference road frame for a [] point (match feature points between the prediction frame (reference) and the current frame, Column 18 lines 1-10) in the road frame (To derive motion set for ground/road, only the points with the label of "ground/road" may be used, Column 20 lines 22-30) based on at least one of the laser ID or an angle (certain lasers identify ground points, Column 30 lines 32-36).
Pham Van does not disclose, but Liu (NPL “Extending the Detection Range for Low-Channel Roadside LiDAR by Static Background Construction,” IEEE 2022) teaches searching the reference road frame for a missing point (in the set of distances at horizontal angle j and vertical angle i, p. 4 right column, select the maximum distance Dij, p. 5 left column, equation 5) … and updating the missing point in the road frame (set the point cloud measurement dij to the maximum distance Dij, p. 5 left column, equation 5; p. 4 left column).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to correct the point cloud of Pham Van using the background construction algorithm of Liu because Liu teaches that doing so enables the use of low-channel LiDAR to achieve high-accuracy point maps useful for autonomous vehicle applications (p. 3 left column).
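For the reader's reference, the background-construction rule cited from Liu (equation 5) can be sketched as below: for each angular cell (vertical angle i, horizontal angle j), the maximum distance observed across frames is kept as the static background and substituted for missing returns. The array shapes and the zero-encoding of missing returns are assumptions for illustration.

```python
# Minimal sketch of Liu's background construction (equation 5):
# keep the per-cell maximum distance Dij across frames, then substitute
# it for missing returns. Shapes and zero-encoding are assumptions.
import numpy as np

def build_background(frames):
    """frames: (num_frames, num_vertical_i, num_horizontal_j) distances d_ij.

    Returns D_ij, the maximum distance observed in each angular cell."""
    return frames.max(axis=0)

def update_missing(d_ij, background):
    """Replace dropped returns (encoded here as 0) with the background
    distance D_ij, updating the missing points in the road frame."""
    return np.where(d_ij == 0, background, d_ij)
```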
Regarding Claim 6, Pham Van (US Patent 11,949,909) discloses rotating the reference road frame (motion parameters are defined as a rotation matrix and translation vector, which will be applied on all the points in a prediction reference frame, Column 17 lines 45-57) with origin coordinate information (Compute the original coordinates (x, y, z), Column 17 lines 1-30) about the reference road frame being changed (applied on all points in the prediction reference frame, Column 18 lines 45-56). The remainder of Claim 6 is rejected on the grounds provided in Claim 5.
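The cited rotation-and-translation of the prediction reference frame amounts to applying a rigid transform to every reference point, as in the sketch below (array shapes are assumptions; this is not the patent's code).

```python
# Illustrative global-motion compensation: apply rotation matrix R and
# translation vector t to all points in the prediction reference frame.
# Shapes are assumptions; not code from the reference.
import numpy as np

def apply_global_motion(points, R, t):
    """points: (N, 3) reference-frame positions; R: (3, 3); t: (3,).

    Returns the compensated positions R @ p + t for each point p."""
    return points @ R.T + t
```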
Regarding Claim 7, the claim is rejected on the grounds provided in Claim 6.
Claim(s) 9 is rejected under 35 U.S.C. 103 as being unpatentable over Pham Van (US Patent 11,949,909) in view of Kobayashi (US 20070171973 A1).
Regarding Claim 9, Pham Van (US Patent 11,949,909) discloses the method of claim 1.
Pham Van does not disclose, but Kobayashi (US 20070171973 A1) teaches, wherein the bitstream (coded data, Fig. 1) contains at least one of: information indicating whether to generate a reference road frame (setting a conditional P picture in conjunction with an IDR picture, [0100]); or information indicating whether a current frame is a new scene (set in response to a scene change, [0100]).
One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to modify Pham Van to indicate a scene change, as taught by Kobayashi, because Kobayashi teaches that the difference in data values between a current frame and the previous frame during a scene change is high [0101], suggesting that relying on a frame from before the scene change as a reference for a new scene would result in inefficient predictive coding and should be avoided.
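Kobayashi's rationale can be illustrated with a simple frame-difference test as below; the difference metric and threshold are assumptions for illustration only, not Kobayashi's disclosed method.

```python
# Hypothetical illustration of the cited scene-change rationale: a high
# difference in data values between consecutive frames flags a new
# scene. Metric and threshold are assumptions.
import numpy as np

def is_new_scene(prev_frame, cur_frame, threshold=30.0):
    """Flag a scene change when the mean absolute difference between
    the current and previous frame is high."""
    diff = np.abs(cur_frame.astype(float) - prev_frame.astype(float))
    return float(diff.mean()) > threshold
```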
Response to Arguments
Applicant’s remarks filed 12/31/2025 are not persuasive because they mischaracterize the amended claims. Applicant states that the claims have been amended to incorporate the allowable subject matter of Claim 4 into the independent claims. Remarks at 7. This is incorrect: the limitations added to the independent claims are new and have not previously been examined.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20190281274 A1 – two model distribution modes based on amount of change in model
Jin, “An Improved Coarse-to-fine Motion Estimation Scheme for LiDAR Point Cloud Geometry Compression,” IEEE 2021 – segmenting the road point cloud from the object point cloud and inter-predicting the current frame
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHADAN E HAGHANI whose telephone number is (571)270-5631. The examiner can normally be reached M-F 9AM - 5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHADAN E HAGHANI/ Examiner, Art Unit 2485