DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/03/2026 has been entered.
Response to Arguments
Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
Claims 16 and 17 are objected to because of the following informalities: “a number of points” is an indefinite phrase meaning “some” or “several” points, often acting as a plural subject, whereas “the number of points” denotes a specific, singular count of items, often acting as a singular subject. According to the context of the claims, the phrase should be “the number of points”.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 3-9, and 11-17 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims of U.S. Patent No. 12273557 B2. Although some of the claims, including all the independent claims at issue, are not identical, they are not patentably distinct from each other. Claim 1 of US 12273557 B2 discloses “the signaling data further includes information that is repeated as many as a number of blocks and the information indicates whether the motion compensation is applied to a corresponding block”.
Claims 1, 3-9, and 11-17 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims of copending Application Nos. 19/082973 (claim 1) and 18/549107 (claims 1, 5, and 13) (reference applications). Although some of the claims, including all the independent claims at issue, are not identical, they are not patentably distinct from each other.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 3-9, and 11-17 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, because the best mode contemplated by the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), has not been disclosed. Evidence of concealment of the best mode is based upon the disclosure of the specification; see paragraphs [0228]-[0235], [0262], [0269]-[0270], [0377], [0436], and [0438] of the instant application publication. A motion vector direct coding mode (MVDCM) is proposed for cases where the number of points within a search window area, which are targets for finding motion vectors, and the number of points within the current node in the current frame are excessively low. The motion vectors calculated by MVDCM are used for motion compensation/inter prediction. Furthermore, paragraphs [0260], [0368], [0387], and [0409] disclose a flag MVDCM_flag; however, the independent claims do not clearly recite the flag (if that is intended) through the limitations “generating first information for representing whether motion compensation is applied for a block for a reference frame of the point cloud frame, and generating a compensated reference frame for the point cloud frame based on the first information”, because motion compensation is applied either way, whether MVDCM is used or not.
Claims 16-17 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. No support can be found in the specification for “the bitstream further includes information for a number of points for the reference frame”. In the instant application publication, paragraph [0260] discloses “signaling for representing reference information on points subject to be classified as the MVDCM may be added to a bitstream in FIG. 22”, wherein geom_num_points in figure 22 represents the number of points for a slice in a frame.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3-9, 11-17 are rejected under 35 U.S.C. 103 as being unpatentable over LASSERRE et al. (US 20200258262 A1) in view of IGUCHI et al. (US 20210142521 A1) and NISHI et al. (US 20230239491 A1).
Regarding claim 1. LASSERRE discloses A method (abstract, This method for inter-predictive encoding of a time-varying 3D point cloud including a series of successive frames divided in 3D blocks into at least one bitstream; [0101] … are transmitted to the decoder side in a bit stream) comprising:
Encoding geometry data in a point cloud frame of point cloud data (figure 3, [0003] methods for encoding and decoding a point cloud and corresponding encoder and decoder; [0017] a method for predictive encoding of a time-varying 3D point cloud including a series of successive frames divided in 3D blocks into at least one bitstream; [0012] a method for inter-predictive encoding of frames based on previous inputs; figure 3, [0150] Then, a prediction residual, which is to be encoded and output to the compressed bitstream, is determined, at step 30; [0005] a point cloud consists in a set of points usually intended to represent the external surface of a 3D object but also more complex geometries. Each point is defined by its 3D spatial location (x, y and z coordinates in the 3D space), i.e, geometry information, and possibly by other associated attributes).
However, LASSERRE does not explicitly disclose
encoding attribute data in the point cloud frame of the point cloud data,
wherein the encoding the geometry data in the point cloud frame includes:
generating first information for representing whether motion compensation is applied for a block for a reference frame of the point cloud frame, and
generating a compensated reference frame for the point cloud frame based on the first information.
IGUCHI discloses
encoding attribute data in the point cloud frame of the point cloud data (figure 6, [0190] First encoder 4630 includes geometry information encoder 4631, attribute information encoder 4632; [0195] The process of encoding attribute information may include a prediction process, using a reference node for calculating a predicted value of attribute information; [0402]),
wherein the encoding the geometry data in the point cloud frame includes:
generating first information for representing whether motion compensation is applied for a block for a reference frame of the point cloud frame ([0232] The additional information (metadata) may be referred to as a parameter set or control information (signaling information); [0247] a process of generating encoded data of additional information (metadata); [0657] in the bitstream, a signal for each tile that indicates whether a motion vector is applied or not, the method of deriving a motion vector, or the method of compensation), and
generating a compensated reference frame for the point cloud frame based on the first information ([0656] The three-dimensional data encoding device may use a different method of deriving a motion vector or a different method of compensation for each tile).
NISHI also discloses
encoding attribute data in the point cloud frame of the point cloud data (figure 5, [0179] attribute information encoder 4632; [0133] encoding methods (encoding schemes) for point cloud data),
wherein the encoding the geometry data in the point cloud frame (geometry information encoder 4631; [0133] encoding methods (encoding schemes) for point cloud data) includes:
generating first information for representing whether motion compensation is applied for a block for a reference frame of the point cloud frame (abstract, parsing a first syntax to determine, for each of at least one prediction tree in a slice, whether inter prediction or intra prediction is to be performed on the prediction tree, parsing a second syntax to determine a motion compensation mode to be performed on the prediction tree when the inter prediction is to be performed on the prediction tree; [0468]; [0479]; [0522]), and
generating a compensated reference frame for the point cloud frame based on the first information ([0522] In a case where the prediction method for the current three-dimensional point to be encoded or to be decoded is the inter prediction (e.g., intra_pred_flag = 0), In a case where a plurality of candidate points is specified in the inter prediction point cloud that is referred to when the inter prediction point is determined, average values of sets of coordinates of the specified candidate points may be used as coordinates of the inter prediction point).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of LASSERRE, IGUCHI, and NISHI, to encode attribute data of the point cloud data and to signal the encoding mode, in order to improve encoding of the point cloud data.
Regarding claim 3. LASSERRE discloses The method of claim 1, wherein a prediction unit is generated for the point cloud frame ([0017] a method for predictive encoding of a time-varying 3D point cloud including a series of successive frames divided in 3D blocks into at least one bitstream), and
wherein encoding the point cloud data comprises encoding the prediction unit based on a number of one or more points included in at least one of the prediction unit or the block (figure 2, [0061] the block 4 could be encoded using 3D motion information representing a transformation between the reference block 2 and the current block 4 and a prediction error or residual; figure 3, [0067] For each pair of candidate reference block PC_Rec selected, at step 18, from the searching region and current block PC_Cur 4, the alignment of the two subsets of point cloud frames 6, 8 is achieved at step 20. By subsets, it is meant here the points of the reference frame 8 falling into the candidate reference block PC_Ref and the points of the current frame 6 falling into the current block PC_Cur; figure 2, [0061] the encoder will try to find a 3D block 2, named reference block, similar to the current block 4 in a current frame 6, that is encoded on a previously encoded frame 8, referred to as a reference frame. This process is based on a block matching algorithm. If the encoder succeeds on its search, the block 4 could be encoded using 3D motion information representing a transformation between the reference block 2 and the current block 4 and a prediction error or residual; [0029] each block to be encoded in the current frame is compared with several blocks within a searching region in the reference frame; [0066] Let's suppose that the left bottom corner of the current 3D block 4 is (x_min, y_min, z_min), then the 3D-blocks of the reference frame 8 with their left bottom corners in the region ([x_min−S, x_min+S], [y_min−S, y_min+S], [z_min−S, z_min+S]) constitute the searching region of the matched reference 3D-block).
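For illustration only (this sketch is not part of the record, and the function name and tuple representation are assumptions of the undersigned), the searching-region condition cited from LASSERRE [0066] can be expressed as follows:

```python
def in_search_region(cur_min, ref_min, S):
    """Illustrative check: a candidate reference 3D-block qualifies if each
    coordinate of its left-bottom corner lies within +/- S of the current
    block's left-bottom corner (cf. LASSERRE [0066])."""
    return all(abs(r - c) <= S for r, c in zip(ref_min, cur_min))
```

For example, with a current-block corner (0, 0, 0) and S = 4, a reference corner (3, -2, 1) falls inside the searching region, while (5, 0, 0) does not.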
Regarding claim 4. LASSERRE discloses The method of claim 3, wherein points included in the block are encoded based on a number of one or more points included in the prediction unit (figure 3, [0067] For each pair of candidate reference block PC_Rec selected, at step 18, from the searching region and current block PC_Cur 4, the alignment of the two subsets of point cloud frames 6, 8 is achieved at step 20. By subsets, it is meant here the points of the reference frame 8 falling into the candidate reference block PC_Ref and the points of the current frame 6 falling into the current block PC_Cur), and
wherein based on that a point included in the block is outside a range of the prediction unit, the point is excluded (figure 3, [0067] For each pair of candidate reference block PC_Rec selected, at step 18, from the searching region and current block PC_Cur 4, the alignment of the two subsets of point cloud frames 6, 8 is achieved at step 20. By subsets, it is meant here the points of the reference frame 8 falling into the candidate reference block PC_Ref and the points of the current frame 6 falling into the current block PC_Cur).
Regarding claim 5. LASSERRE discloses The method of claim 3, wherein a motion vector for the motion compensation is generated based on a point located at a bottom, left, and front of the block and a point located at a bottom, left, and front of the prediction unit ([0147]-[0148] The vector pointing to the matched reference 3D-block is signaled as (x_min^MatchedRef−x_min^Cur, y_min^MatchedRef−y_min^Cur, z_min^MatchedRef−z_min^Cur)).
Regarding claim 6. LASSERRE discloses The method of claim 3, wherein a motion vector for the motion compensation is generated based on a point on a vertex of the block and a point on a vertex of the prediction unit ([0147]-[0148] The vector pointing to the matched reference 3D-block is signaled as (x_min^MatchedRef−x_min^Cur, y_min^MatchedRef−y_min^Cur, z_min^MatchedRef−z_min^Cur)).
Regarding claim 7. LASSERRE discloses The method of claim 3, wherein a motion vector for the motion compensation is generated based on at least one of an average, a maximum, a minimum, or a median of points included in the block and at least one of an average, a maximum, a minimum, or a median of points included in the prediction unit ([0129]-[0132] the mean of the points falling into the candidate reference 3D-Block is denoted as M^Ref, the mean of the points falling into the current 3D-block is estimated as C^Cur, and the translation vector can be determined based on M^Ref and C^Cur).
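For illustration only (this sketch is not part of the record; the function names are hypothetical), the two cited motion-vector derivations, the minimum-corner difference of LASSERRE [0147]-[0148] and the difference of means of LASSERRE [0129]-[0132], can be expressed as follows:

```python
def mv_from_min_corners(cur_pts, ref_pts):
    """Vector pointing from the current block's minimum ("left bottom")
    corner to the matched reference block's minimum corner
    (cf. LASSERRE [0147]-[0148])."""
    cur_min = tuple(min(p[i] for p in cur_pts) for i in range(3))
    ref_min = tuple(min(p[i] for p in ref_pts) for i in range(3))
    return tuple(r - c for r, c in zip(ref_min, cur_min))

def mv_from_means(cur_pts, ref_pts):
    """Translation estimated as the difference of the means of the two
    point subsets (cf. LASSERRE [0129]-[0132])."""
    def mean(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(3))
    c_cur, m_ref = mean(cur_pts), mean(ref_pts)
    return tuple(m - c for m, c in zip(m_ref, c_cur))
```

For two blocks related by a pure translation, both derivations yield the same vector; they differ when the point distributions within the blocks differ in shape.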
Regarding claim 8. The same analysis has been stated in claim 1.
Regarding claim 9. The same analysis has been stated in claim 1 (receiving/decoding side).
Regarding claim 11. The same analysis has been stated in claim 3.
Regarding claim 12. The same analysis has been stated in claim 4.
Regarding claim 13. The same analysis has been stated in claim 5.
Regarding claim 14. The same analysis has been stated in claim 6.
Regarding claim 15. The same analysis has been stated in claim 9.
Regarding claim 16. (New) LASSERRE in view of IGUCHI and NISHI discloses The method of claim 1, wherein the encoded geometry data, the encoded attribute data, and the first information are included in a bitstream (see claim 1), and wherein the bitstream further includes information for a number of points for the reference frame (LASSERRE [0144] PtNum^PC_Ref is the number of points in the candidate reference block PC_Ref; IGUCHI [0374] information indicating the number of points is generated; IGUCHI figure 39, [0388] the individual information includes the number of points included in a leaf node, [0389]).
The same motivation has been stated in claim 1.
Regarding claim 17. (New) The same analysis has been stated in claim 16.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU whose telephone number is (571)270-7580. The examiner can normally be reached Mon. to Fri. 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH V. PERUNGAVOOR can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIAOLAN XU/ Primary Examiner, Art Unit 2488