DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Priority
Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/25 has been entered.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7 and 10-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US 2024/0129530 A1 (“Nishi”).
Regarding claim 1, Nishi discloses a method comprising:
generating information for representing that an inter prediction is used based on a plurality of reference frames (e.g., see at least information relating to the prediction (prediction information), paragraphs [0082]-[0083]; inter_frame_flag indicating whether inter prediction is enabled, in conjunction with inter_ref_frame_idx specifying the processing unit (e.g., a frame) referred to in the inter prediction of the current point if the number of reference frames NumRefFrames is greater than 1, as shown in Fig. 13, paragraphs [0187]-[0191]; also see the use of multiple reference frames in Fig. 10);
generating information related to which reference frame among the reference frames is used (e.g., see at least the reference frame number, paragraphs [0082]-[0083]; also see inter_ref_frame_idx specifying the point cloud in the processing unit (e.g., a frame) referred to in the inter prediction of the current point if the number of reference frames NumRefFrames is greater than 1, as shown in Fig. 13, paragraphs [0187]-[0191]);
generating information for representing whether the reference frame is referenced bi-directionally (e.g., see selecting inter prediction candidate points, e.g., point 312 immediately after, and point 313 immediately before, point 311 in processing order, where point 311 has an angle component in reference frame 310 that is the same as or similar to the horizontal angle of current point 301, paragraph [0164]; also see the selection of candidate points in Fig. 10; inter_ref_point_idx is information that specifies the prediction point (predictor) referred to in the inter prediction of the current point, paragraphs [0192]-[0193]);
encoding geometry data of point cloud data (e.g., see at least encoding device 100 encoding geometry information of a point cloud to be encoded, as shown in Fig. 1, paragraph [0064]), wherein the encoding of the geometry data includes:
generating reference frames for the inter prediction (e.g., see at least the reference frame from buffer 107 for inter prediction, paragraphs [0065], [0077]-[0079]; Figs. 8-9 further show a reference frame for inter prediction, and Fig. 10 shows the use of multiple reference frames); and
obtaining a predictive tree including points of the geometry data (e.g., see at least the prediction tree in Fig. 2, paragraphs [0067]-[0069]; also see predtree in Fig. 1, paragraphs [0074]-[0075]),
wherein the geometry data is predicted based on the reference frames and the predictive tree (e.g., see at least inter prediction, e.g., inter predictor 109 in Fig. 1, predicting geometry information based on the reference frame from buffer 107 and the predtree; Figs. 8-9 further detail inter prediction based on a reference frame, and Fig. 10 shows the use of multiple reference frames); and
encoding attribute data of the point cloud data (e.g., see at least encoding device 100 encoding attribute information of a point cloud to be encoded, as shown in Fig. 1, paragraph [0064]).
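For purposes of illustration only, the following Python sketch shows one way the conditional signaling mapped above could be expressed. The BitWriter helper, the fixed two-bit field widths, and the overall control flow are the examiner's assumptions for illustration; only the syntax element names (inter_frame_flag, inter_ref_frame_idx, inter_ref_point_idx, NumRefFrames) are taken from Nishi's Fig. 13, and this sketch is not asserted to be Nishi's actual implementation.

class BitWriter:
    """Minimal bit writer used only for this illustration."""
    def __init__(self):
        self.bits = []

    def write_flag(self, value):
        self.bits.append(1 if value else 0)

    def write_uint(self, value, num_bits):
        # Write an unsigned value, most significant bit first.
        for shift in range(num_bits - 1, -1, -1):
            self.bits.append((value >> shift) & 1)

def signal_inter_prediction(writer, use_inter, num_ref_frames,
                            ref_frame_idx, ref_point_idx):
    # inter_frame_flag: whether inter prediction is used for this point.
    writer.write_flag(use_inter)
    if use_inter:
        # inter_ref_frame_idx is only signaled when more than one
        # reference frame is available (NumRefFrames > 1); field width
        # is an assumption for illustration.
        if num_ref_frames > 1:
            writer.write_uint(ref_frame_idx, 2)
        # inter_ref_point_idx: which candidate predictor point is used.
        writer.write_uint(ref_point_idx, 2)

For example, with two reference frames available, the call signal_inter_prediction(BitWriter(), True, 2, 1, 0) would emit the flag, the frame index, and the point index in sequence, whereas with a single reference frame the frame index would be omitted.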
Regarding claim 2, Nishi further discloses wherein the encoding of the point cloud data comprises: predicting a first point belonging to a first frame of the point cloud data (e.g., see at least current point 301 in current frame 300 in Fig. 8; also see point 301 in Figs. 9-10), wherein the predicting comprises: predicting the first point based on points belonging to one or more of the plurality of reference frames (e.g., see at least candidate points 311-313 in reference frame 310 in Fig. 8; also see candidate points 311-313 in reference frame 310 in Fig. 9, and candidate points 311-313 and 321-323 in reference frames 310 and 320 in Fig. 10).
Regarding claim 3, Nishi further discloses wherein: the one or more reference frames are before the first frame in order; or the one or more reference frames are after the first frame in order; or one of the one or more reference frames is before the first frame, and the other one of the one or more reference frames is after the first frame (e.g., see at least the encoding order, paragraph [0161]; thus, the reference frames are already encoded or processed before the current frame being encoded or processed).
Regarding claim 4, Nishi further discloses wherein the predicting comprises: searching for a third point belonging to one of the one or more reference frames based on a previous point of the first point in the first frame (e.g., see at least determining candidate point 313 in reference frame 310 based on processed point 302, which is processed prior to current point 301, in Fig. 9, paragraphs [0166]-[0167]).
Regarding claim 5, Nishi further discloses wherein the predicting comprises: selecting a plurality of candidate points in the one or more reference frames based on a laser identifier and a radius of the previous point (e.g., see the laser ID, paragraphs [0080], [0163], [0185]-[0186], and the distance component, which is the distance between each of the three-dimensional points and the origin, as shown in at least Figs. 8-10, paragraphs [0158], [0173], [0198]).
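For purposes of illustration only, the following Python sketch shows one plausible candidate selection of the kind mapped above. The (laser_id, azimuth, radius) tuple layout, the ranking rule, and the candidate count are assumptions for illustration and are not asserted to be Nishi's implementation.

def select_candidates(ref_points, prev_point, max_candidates=3):
    # Each point is modeled as a (laser_id, azimuth, radius) tuple.
    prev_laser, prev_azimuth, prev_radius = prev_point
    # Keep only reference-frame points captured by the same laser.
    same_laser = [p for p in ref_points if p[0] == prev_laser]
    # Rank by closeness of the horizontal angle, then of the radius,
    # to the previously processed point.
    same_laser.sort(key=lambda p: (abs(p[1] - prev_azimuth),
                                   abs(p[2] - prev_radius)))
    return same_laser[:max_candidates]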
Regarding claim 6, Nishi further discloses wherein the predicting comprises: predicting the first point based on one of the candidate points (e.g., see at least selecting candidate points 311 and 312 in steps S303-S304 in Fig. 9 based on candidate point 313 to inter predict current point 301, paragraphs [0168]-[0169]).
Regarding claim 7, Nishi further discloses wherein the predicting comprises: predicting the first point based on at least two of the candidate points (e.g., see at least inter predicting current point 301 based on candidate points 311, 312, and 313 in Figs. 8-9, and further based on candidate points 321, 322, and 323 in Fig. 10).
Regarding claim 10, Nishi further discloses wherein the predicting comprises: determining a first candidate point from a first reference frame (e.g., see at least determining inter prediction candidate point 311, 312 and/or 313 in reference frame 310 in Fig. 10); determining a second candidate point from a second reference frame (e.g., see at least determining inter prediction candidate point 321, 322 and/or 323 in reference frame 320 in Fig. 10); and wherein the first point is predicted based on the first candidate point and the second candidate point (e.g., see inter predicting current point 301 in Fig. 10 based on the inter prediction candidate points from multiple reference frames 310 and 320).
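For purposes of illustration only, the following Python sketch shows one plausible way a prediction could combine one candidate point from each of two reference frames (cf. frames 310 and 320 of Nishi's Fig. 10). The averaging rule is an assumption for illustration; Nishi's actual predictor derivation may differ.

def predict_from_two_frames(candidate_a, candidate_b):
    # Each candidate is a (laser_id, azimuth, radius) tuple taken from
    # a different reference frame; the predicted geometry here simply
    # averages the azimuth and radius components, which is one
    # plausible combination rule when two frames are referenced.
    _, az_a, r_a = candidate_a
    _, az_b, r_b = candidate_b
    return ((az_a + az_b) / 2.0, (r_a + r_b) / 2.0)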
Regarding claims 11-20, these claims recite limitations analogous to those of the claims addressed above and are therefore rejected for the same reasons.
Response to Arguments
Applicant's arguments filed 12/29/25 have been fully considered but they are not persuasive.
Applicant asserts on pages 8-10 of the Remarks that Nishi fails to disclose newly added limitations.
However, the examiner respectfully disagrees. It is noted that the rejection has been updated to illustrate that the claims remain anticipated by Nishi. Nishi, in at least Fig. 13, discloses generated information or syntax, such as inter_frame_flag, inter_ref_frame_idx, and inter_ref_point_idx, included in the bitstream (e.g., entropy encoder 111, as shown in Fig. 1, outputs the bitstream that is entropy decoded as shown in Fig. 3). Taken together, these syntax elements of Fig. 13, in view of the corresponding text in the specification and drawings, meet the newly added limitations under their broadest reasonable interpretation. Please see the detailed mapping above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 2024/0249441 A1, Xu et al., Method, apparatus and medium for point cloud coding
US 2024/0135598 A1, Nishi et al., Three-dimensional data decoding method, three-dimensional data decoding device, and three-dimensional data encoding device
US 2023/0105931 A1, Van der Auwera et al., Inter prediction coding with radius interpolation for predictive geometry-based point cloud compression
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANCIS G GEROLEO whose telephone number is (571)270-7206. The examiner can normally be reached M-F 7:00 am - 3:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anna M Momper can be reached on (571) 270-5788. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Francis Geroleo/Primary Examiner, Art Unit 3619