DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
2. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim chart (application claims from U.S. Application 18/601,904; patent claims from U.S. Patent No. 11,961,264 B2):
Application claim 1: A method comprising: receiving a bitstream that encodes at least (i) geometry information for a point cloud, (ii) color hint data, and (iii) a residual color signal; producing color prediction data for the point cloud by supplying the geometry information and the color hint data as inputs to a neural network characterized by neural network parameter data; and adding the residual color signal to the color prediction data to generate a reconstructed color signal for the point cloud.
Application claim 5: The method of claim 1, wherein the bitstream further encodes the neural network parameter data.
Patent claim 1: A method comprising: receiving a bitstream that encodes at least (i) geometry information for a point cloud, (ii) neural network parameter data, and (iii) a residual color signal; producing color prediction data for the point cloud by supplying the geometry information as input to a neural network characterized by the received neural network parameter data; and adding the residual color signal to the color prediction data to generate a reconstructed color signal for the point cloud.
Patent claim 5: The method of claim 1, wherein the bitstream further encodes color hint data, and wherein producing color prediction data further comprising supplying the color hint data as input to the neural network.
Application claim 7: The method of claim 1, wherein the color hint data comprises local color hint data comprising at least one color sample of at least one respective position in the point cloud.
Patent claim 6: The method of claim 1, wherein the bitstream further encodes local color hint data comprising at least one color sample of at least one respective position in the point cloud, and wherein producing color prediction data further comprises supplying the local color hint data as input to the neural network.
Application claim 8: The method of claim 1, wherein the color hint data comprises global color hint data comprising color histogram data.
Patent claim 7: The method of claim 1, wherein the bitstream further encodes global color hint data comprising color histogram data, and wherein producing color prediction data further comprises supplying the global color hint data as input to the neural network.
Application claim 9: The method of claim 1, wherein the color hint data comprises global color hint data comprising color saturation data.
Patent claim 8: The method of claim 1, wherein the bitstream further encodes global color hint data comprising color saturation data, and wherein producing color prediction data further comprises supplying the global color hint data as input to the neural network.
Application claim 14: An apparatus comprising: a processor; and a memory storing instructions operative, when executed by the processor, to cause the apparatus to: receive a bitstream that encodes at least (i) geometry information for a point cloud, (ii) color hint data, and (iii) a residual color signal; produce color prediction data for the point cloud by supplying the geometry information and the color hint data as inputs to a neural network characterized by neural network parameter data; and add the residual color signal to the color prediction data to generate a reconstructed color signal for the point cloud.
Patent claim 13: An apparatus comprising: a processor configured to perform at least: receiving a bitstream that encodes at least (i) geometry information for a point cloud, (ii) neural network parameter data, and (iii) a residual color signal; producing color prediction data for the point cloud by supplying the geometry information as input to a neural network characterized by the received neural network parameter data; and adding the residual color signal to the color prediction data to generate a reconstructed color signal for the point cloud.
Patent claim 15: The apparatus of claim 13, wherein the bitstream further encodes color hint data, and wherein producing color prediction data further comprising supplying the color hint data as input to the neural network.
Application claim 17: A method comprising: receiving a bitstream that encodes at least (i) geometry information for a point cloud and (ii) color hint data; producing synthesized color data (i.e., read as color prediction data) for the point cloud by supplying the geometry information and color hint data as inputs to a neural network characterized by neural network parameter data; and rendering a colored point cloud using the geometry information and the synthesized color data.
Patent claim 13: An apparatus comprising: a processor configured to perform at least: receiving a bitstream that encodes at least (i) geometry information for a point cloud, (ii) neural network parameter data, and (iii) a residual color signal; producing color prediction data for the point cloud by supplying the geometry information as input to a neural network characterized by the received neural network parameter data; and adding the residual color signal to the color prediction data to generate a reconstructed color signal for the point cloud.
Patent claim 15: The apparatus of claim 13, wherein the bitstream further encodes color hint data, and wherein producing color prediction data further comprising supplying the color hint data as input to the neural network.
Application claim 19: A method comprising: receiving a bitstream that encodes at least (i) geometry information for a point cloud and (ii) neural network parameter data; producing synthesized color data (i.e., read as color prediction data) for the point cloud by supplying the geometry information as input to a neural network characterized by the received neural network parameter data; and rendering a colored point cloud using the geometry information and the synthesized color data.
Patent claim 13: An apparatus comprising: a processor configured to perform at least: receiving a bitstream that encodes at least (i) geometry information for a point cloud, (ii) neural network parameter data, and (iii) a residual color signal; producing color prediction data for the point cloud by supplying the geometry information as input to a neural network characterized by the received neural network parameter data; and adding the residual color signal to the color prediction data to generate a reconstructed color signal for the point cloud.
Claims 1 and 5 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 5 of U.S. Patent No. 11,961,264 B2.
Claim 7 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 6 of U.S. Patent No. 11,961,264 B2.
Claim 8 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 7 of U.S. Patent No. 11,961,264 B2.
Claim 9 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 8 of U.S. Patent No. 11,961,264 B2.
Claim 14 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 15 of U.S. Patent No. 11,961,264 B2.
Claim 17 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 15 of U.S. Patent No. 11,961,264 B2.
Claim 19 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 13 of U.S. Patent No. 11,961,264 B2.
Although the claims at issue are not identical, they are not patentably distinct from each other because the scope of the claims of the instant application is encompassed by the patented claims.
Claim Rejections - 35 USC § 103
3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cohen et al. (U.S. Patent Pub. No. 2017/0214943) in view of Xiao et al. ("Interactive Deep Colorization with Simultaneous Global and Local Inputs," arXiv.org, January 27, 2018, 13 pages).
Regarding claim 1: Cohen discloses a method comprising:
receiving a bitstream that encodes at least (i) geometry information for a point cloud, (ii) color hint data, and (iii) a residual color signal (Cohen et al.; figs. 1 and 6 and paragraphs 0014-0017, 0027, 0067, wherein the geometry and residual values are obtained);
adding the residual color signal to the color prediction data to generate a reconstructed color signal for the point cloud (Cohen et al.; figs. 1 and 6 and paragraphs 0014-0017, 0027, and 0067, the reconstructed blocks are determined by determining the locations, i.e. geometry, of the blocks and adding the residual blocks to reconstruct the 3D blocks.).
Cohen et al. does not teach that the bitstream encodes "color hint data," nor "producing color prediction data for the point cloud by supplying the geometry information and the color hint data as inputs to a neural network characterized by neural network parameter data." Xiao et al. teaches applying global and local color inputs, i.e., read as color hint data, to a neural network (Xiao et al.; abstract, sections 1, 3, 3.1, 3.2, 3.3, 3.3.1, and 3.3.2). It would have been obvious to one of ordinary skill in the art to combine the teaching of Xiao et al. with the disclosure of Cohen et al., modified to use color hint data and a neural network to estimate color prediction data from color and geometry inputs, in order to reconstruct an object/image. One of ordinary skill in the art would have been motivated to combine the teaching of Xiao et al. with the disclosure of Cohen et al. in order to reconstruct an object/image more accurately and realistically using color and geometry attributes.
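For orientation only (not part of the claim mapping), the decode path recited in claim 1 can be sketched as: a network produces a color prediction from geometry and color hints, and the decoded residual is added back to yield the reconstructed colors. All names below and the toy one-layer linear "network" are illustrative assumptions, not taken from Cohen et al. or Xiao et al.

```python
import numpy as np

def predict_colors(geometry, color_hints, weights):
    """Stand-in 'neural network': one linear layer over concatenated
    per-point [x, y, z | hint] features. Real systems would use a trained
    deep network; this only illustrates the data flow."""
    features = np.concatenate([geometry, color_hints], axis=1)  # (N, 6)
    return features @ weights                                   # (N, 3) predicted colors

def decode_point_cloud(geometry, color_hints, residual, weights):
    """Claimed decode path: NN color prediction plus decoded residual."""
    prediction = predict_colors(geometry, color_hints, weights)
    return prediction + residual  # reconstructed color signal

# Example: 4 points; a zero residual makes reconstruction equal the prediction.
rng = np.random.default_rng(0)
geometry = rng.random((4, 3))   # per-point positions
hints = rng.random((4, 3))      # per-point color hint samples
weights = rng.random((6, 3))    # the "neural network parameter data"
recon = decode_point_cloud(geometry, hints, np.zeros((4, 3)), weights)
```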
Regarding claim 2: The method of claim 1, further comprising rendering a representation of the point cloud using the reconstructed color signal (Xiao et al.; abstract, sections 1, 3, 3.1, 3.2, 3.3, 3.3.1, and 3.3.2).
Regarding claim 3: The method of claim 1, further comprising rendering a colored point cloud using the geometry information and the reconstructed color signal (see claim 1; Cohen et al. for geometry, paragraph 0067; Xiao et al. for color, abstract, sections 1, 3, 3.1, 3.2, 3.3, 3.3.1, and 3.3.2).
Regarding claim 4: The method of claim 1, wherein the neural network parameter data comprises a set of neural network weights (Xiao et al.; sections 3.3.1, first paragraph, and 3.3.3).
Regarding claim 5: The method of claim 1, wherein the bitstream further encodes the neural network parameter data (Xiao et al.; sections 3.3.1, first paragraph, and 3.3.3).
Regarding claim 6: The method of claim 1, wherein the neural network parameter data comprises information identifying a stored set of neural network weights (Xiao et al.; sections 3.3.1, first paragraph, and 3.3.3).
Regarding claim 7: The method of claim 1, wherein the color hint data comprises local color hint data comprising at least one color sample of at least one respective position in the point cloud (Xiao et al.; abstract, sections 1, 3, 3.1, 3.2, 3.3, 3.3.1, and 3.3.2).
Regarding claim 8: The method of claim 1, wherein the color hint data comprises global color hint data comprising color histogram data (Xiao et al.; abstract, sections 1, 3, 3.1, 3.2, 3.3, 3.3.1, and 3.3.2).
Regarding claim 9: The method of claim 1, wherein the color hint data comprises global color hint data comprising color saturation data (Xiao et al.; abstract, sections 1, 3, 3.1, 3.2, 3.3, 3.3.1, and 3.3.2).
Regarding claim 10: The method of claim 1, wherein producing color prediction data further comprises supplying a previously-reconstructed color signal of a previously-reconstructed point cloud as an input to the neural network (Xiao et al.; sections 3.5, 4.2, and 4.4, texture library used for colorization, i.e., read as a previously-reconstructed color signal).
Regarding claim 11: The method of claim 1, wherein the produced color prediction data comprises luma and chroma information for each of a plurality of points in the point cloud (Xiao et al.; section 2, brightness = luma; sections 1, 3, 3.1, 3.2, 3.3, 3.3.1, and 3.3.2 for color, i.e., chroma).
Regarding claim 12: The method of claim 1, wherein the geometry information is encoded in the bitstream in a compressed form, and wherein the method further comprises decompressing the geometry information (Cohen et al.; figs. 1 and 6 and paragraphs 0014-0017, 0027, and 0067).
Regarding claim 13: The method of claim 1, wherein the geometry information for the point cloud comprises position information for each of a plurality of points in the point cloud (Cohen et al.; figs. 1 and 6 and paragraphs 0014-0017, 0027, and 0067).
Regarding claim 14: Cohen et al. (U.S. Patent Pub. No. 2017/0214943) in view of Xiao et al. ("Interactive Deep Colorization with Simultaneous Global and Local Inputs," arXiv.org, January 27, 2018, 13 pages) teach an apparatus comprising:
a processor (Cohen et al.; paragraph 0065); and
a memory storing instructions operative, when executed by the processor (Cohen et al.; paragraph 0065), to cause the apparatus to:
receive a bitstream that encodes at least (i) geometry information for a point cloud, (ii) color hint data, and (iii) a residual color signal (see claim 1);
produce color prediction data for the point cloud by supplying the geometry information and the color hint data as inputs to a neural network characterized by neural network parameter data (see claim 1); and
add the residual color signal to the color prediction data to generate a reconstructed color signal for the point cloud (see claim 1). See claim 1 for the obviousness rationale and motivation statement.
Regarding claim 15: The apparatus of claim 14, wherein the neural network parameter data comprises a set of neural network weights (see claim 4).
Regarding claim 16: The apparatus of claim 14, wherein the neural network parameter data comprises information identifying a stored set of neural network weights (see claim 6).
Regarding claim 17: Cohen et al. (U.S. Patent Pub. No. 2017/0214943) in view of Xiao et al. ("Interactive Deep Colorization with Simultaneous Global and Local Inputs," arXiv.org, January 27, 2018, 13 pages) teach a method comprising: receiving a bitstream that encodes at least (i) geometry information for a point cloud and (ii) color hint data; producing synthesized color data for the point cloud by supplying the geometry information and color hint data as inputs to a neural network characterized by neural network parameter data; and rendering a colored point cloud using the geometry information and the synthesized color data (see claim 1 for the rejection; synthesized color data = color prediction data used to form a reconstructed colored object/image). See claim 1 for the obviousness rationale and motivation statement.
Regarding claim 18: The method of claim 17, wherein the synthesized color data produced for the point cloud comprises luma and chroma information for each of a plurality of points in the point cloud (see claim 11).
Regarding claim 19: Cohen et al. (U.S. Patent Pub. No. 2017/0214943) in view of Xiao et al. ("Interactive Deep Colorization with Simultaneous Global and Local Inputs," arXiv.org, January 27, 2018, 13 pages) teach a method comprising:
receiving a bitstream that encodes at least (i) geometry information for a point cloud and (ii) neural network parameter data; producing synthesized color data for the point cloud by supplying the geometry information as input to a neural network characterized by the received neural network parameter data; and rendering a colored point cloud using the geometry information and the synthesized color data (see claim 5 for the rejection; synthesized color data = color prediction data used to form a reconstructed colored object/image). See claim 1 for the obviousness rationale and motivation statement.
Regarding claim 20: The method of claim 19, wherein the synthesized color data produced for the point cloud comprises luma and chroma information for each of a plurality of points in the point cloud (see claim 11).
Contact Information
4. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANAND BHATNAGAR whose telephone number is (571)272-7416. The examiner can normally be reached on M-F 7:30am-4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached on 571-272-4650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANAND P BHATNAGAR/Primary Examiner, Art Unit 2668
March 5, 2026