DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office Action is in response to Applicant’s amendment/response filed on 12/15/2025, which has been entered and made of record. No claims have been cancelled. No claims have been added. Claims 1-10 are pending in the application.
Response to Arguments
Applicant’s arguments with respect to claims 1 and 10, regarding the newly added occupancy decoder/encoder and the refiner configured to process at least two different types of images selected from the occupancy frame, the geometry frame, or the attribute frame, have been fully considered but are moot in view of the new grounds of rejection presented in this Office Action. Applicant’s arguments are directed to the amended limitations. The detailed 103 rejection below addresses all of the newly added limitations/arguments.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Iguchi et al. (US 20220337859 A1) (Hereinafter referred to as Iguchi) in view of Graziosi et al. (“An overview of ongoing point cloud compression standardization activities: video-based (V-PCC) and geometry-based (G-PCC)”) (Hereinafter referred to as Graziosi) and in further view of “Information technology — Coded representation of immersive media — Part 29: Video-based dynamic mesh coding (V-DMC)” (Hereinafter referred to as VDMC).
Regarding Claim 1, Iguchi discloses A 3D data decoding apparatus for decoding coded data including a geometry and an attribute, converted from 3D data, the 3D data decoding apparatus comprising: (See [0002], “a three-dimensional data decoding device.” Also see Fig. 12 showing Second Decoder 4660 which includes geometry and attribute related processing.)
a refinement information decoder configured to decode characteristics information of refinement and activation information of refinement from the coded data; (See [0206], “Point cloud data is PCC point cloud data like a PLY file or PCC point cloud data generated from sensor information, and includes geometry information (position), attribute information (attribute), and other additional information (metadata).” Also see Fig. 12 showing Additional Information Decoder 4663. In this case, Iguchi teaches an additional information decoder which decodes additional information or metadata. Thus, the Additional Information Decoder corresponds to the “refinement information decoder”, and metadata would encompass “characteristics information of refinement and activation information of refinement”.
Also see [0299], “Here, the control information is metadata or the like, such as a parameter set or supplemental enhancement information (SEI).” Lastly, see [0011], “A three-dimensional data decoding method . . . obtaining a bitstream in which an item of control information . . .” Here, Iguchi also teaches the idea of control information which is metadata/SEI. Note that it is well known in the art that SEI can be used to indicate how to enhance/post-process video data, thus also corresponding to “characteristics information of refinement and activation information of refinement”.)
a geometry decoder configured to decode a geometry frame from the coded data; (See Fig. 12 Video Decoder 4662 which can decode a Geometry Image (geometry frame) from Encoded Geometry Image (coded data).)
an attribute decoder configured to decode an attribute frame from the coded data; and (See Fig. 12 showing Video Decoder 4662 which can decode an Attribute Image (attribute frame) from Encoded Attribute Image (coded data).)
wherein: the refinement information decoder is configured to decode information indicating a classification of whether the geometry or the attribute is to be used from coded data of the characteristics information. (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] and Fig. 12 describing decoding additional information using an additional information decoder (refinement information decoder). Also see [0011], “wherein the item of control information includes (i) an item of classification information indicating whether information stored in the data unit is an item of geometry information or an item of attribute information of the encoded three-dimensional point”.)
However, Iguchi fails to explicitly disclose A 3D data decoding apparatus for decoding coded data and decoding 3D data including an occupancy, a geometry and an attribute, converted from 3D data, the 3D data decoding apparatus comprising:
an occupancy decoder configured to decode an occupancy frame from the coded data;
wherein: the refinement information decoder is configured to decode refinement information indicating which of the occupancy, the geometry, and the attribute are to be used from coded data of the characteristics information and
the refiner is configured to perform the refinement processing using at least two different types of images selected from the occupancy frame, the geometry frame, or the attribute frame, wherein the at least two different types of images are specified according to the refinement information.
Graziosi is directed to point cloud (3D data) compression, specifically MPEG PCC standardization activity and codec architecture. (See Abstract.)
Graziosi teaches A 3D data decoding apparatus for decoding coded data and decoding 3D data including an occupancy, a geometry and an attribute, converted from 3D data, the 3D data decoding apparatus comprising: (See Page 8 Right Column “5) Duplicates Pruning, Geometry Smoothing, and Attribute Smoothing” Paragraph 1, “The reconstruction process uses the decoded bitstreams for occupancy map, geometry, and attribute images to reconstruct the 3D point cloud.”)
an occupancy decoder configured to decode an occupancy frame from the coded data; (See Page 8 Right Column “5) Duplicates Pruning, Geometry Smoothing, and Attribute Smoothing” Paragraph 1, “The reconstruction process uses the decoded bitstreams for occupancy map, geometry, and attribute images to reconstruct the 3D point cloud.” Here, since Graziosi teaches decoding bitstreams to obtain an occupancy map (occupancy frame from the coded data), that implies the existence of an “occupancy decoder”.)
the refiner is configured to perform the refinement processing using at least two different types of images selected from the occupancy frame, the geometry frame, or the attribute frame. (See Page 5 Fig. 4 showing a video based point cloud compression (V-PCC) model. See the “Post-Processing” section of the Fig. 4 that indicates Attribute Smoothing and Geometry Smoothing.
Further see Page 9 Left Column Paragraph 2, “The compression of geometry and attribute images and the additional points introduced due to occupancy map subsampling may introduce artifacts, which could affect the reconstructed point cloud. TMC2 can use techniques to improve the local reconstruction quality. Notice that similar post-processing methods can be signaled and done at the decoder side. For instance, to reduce possible geometry artifacts caused by segmentation, TMC2 may smooth the points at the boundary of patches using a process known as 3D geometry smoothing [50].” Here, Graziosi teaches the idea of processing the geometry and attribute images with geometry/attribute smoothing (refinement). Graziosi specifically notes that this can be done on the decoder side. Although not explicitly stated or shown, it would be obvious to have a “refiner” that runs these described processes.
Finally, regarding the limitation “to perform the refinement processing using at least two different types of images selected from the occupancy frame, the geometry frame, or the attribute frame”, since the limitation recites “or” when listing the different frame types, simply having geometry frame and attribute frame refinement is sufficient to satisfy the broadest reasonable interpretation of this limitation. Thus, in the case where there is both attribute and geometry smoothing, that can be considered as “using at least two different types of images”. Note that the claim limitation contains no stipulation that these refinements be performed simultaneously.)
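For purposes of illustration only, the following minimal sketch (hypothetical Python; not taken from Graziosi or any cited reference) shows one way a decoder-side refiner could carry out the boundary geometry smoothing Graziosi describes, here as a simple neighborhood-centroid filter:

    import numpy as np
    from scipy.spatial import cKDTree

    def smooth_patch_boundaries(points, boundary_mask, radius=2.0):
        # Move each patch-boundary point toward the centroid of its local
        # neighborhood; interior points are left untouched. A simplified
        # stand-in for the 3D geometry smoothing Graziosi cites [50].
        tree = cKDTree(points)
        refined = points.copy()
        for i in np.flatnonzero(boundary_mask):
            neighbors = tree.query_ball_point(points[i], r=radius)
            refined[i] = points[neighbors].mean(axis=0)
        return refined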
However, Iguchi in view of Graziosi still fail to explicitly disclose wherein: the refinement information decoder is configured to decode refinement information indicating which of the occupancy, the geometry, and the attribute are to be used from coded data of the characteristics information and
the refiner is configured to perform the refinement processing using at least two different types of images selected from the occupancy frame, the geometry frame, or the attribute frame, wherein the at least two different types of images are specified according to the refinement information.
VDMC teaches the refinement information decoder is configured to decode refinement information indicating which of the occupancy, the geometry, and the attribute are to be used from coded data of the characteristics information and (See Pages 9-10 describing V3C unit syntax. Specifically, in section 8.3.2.2, the V3C unit header syntax contains: V3C_GVD (Geometry Video Data), V3C_AVD (Attribute Video Data), and V3C_OVD (Occupancy Video Data). Within these pages are if and else if statements which check which unit type the data belongs to (geometry, attribute, or occupancy). Note that for each type, VDMC shows that there can be different flags which can be used to indicate/activate different features. See for example the vuh_auxiliary_video_flag.
Lastly, see Page 23 showing afps_vmc_ext_subdivision_enable_flag. This is an example of a flag that can be used to identify if subdivision (refinement) is to be performed. Thus, VDMC teaches the idea of having flags that specifies the enablement of a refinement process.
In combination with Iguchi in view of Graziosi, the additional information/metadata taught by Iguchi (See Iguchi [0299], [0011], [0206], and Fig. 12) would contain information that indicates which of an occupancy, a geometry, or an attribute is to be used for refinement purposes, as VDMC already teaches distinguishing between occupancy, geometry, and attribute data. VDMC also teaches the idea of flags that indicate whether post-processing (refinement) should be performed with that data or not.)
wherein the at least two different types of images are specified according to the refinement information. (See Pages 9-10 describing V3C unit syntax and Page 23 showing afps_vmc_ext_subdivision_enable_flag. In combination with Iguchi in view of Graziosi, the post-processing (refinement) taught by Graziosi would thus be performed using the image specified by the metadata (flag) indicating which data is to be used.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iguchi in view of Graziosi with VDMC to include metadata and flag information that indicates which data (Geometry Video Data, Attribute Video, or Occupancy Video Data) should be used for refinement purposes.
The motivation to combine Iguchi in view of Graziosi with VDMC would have been obvious as VDMC is simply the video coding standard to which both Iguchi and Graziosi refer and relate. The benefit of having metadata/flags would be that they allow the system to know when and how to process the data.
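For purposes of illustration only, the following hypothetical Python sketch shows how a refinement information decoder could branch on V3C unit types and gate the refiner with an enable flag of the kind VDMC discloses. The syntax-element names follow VDMC section 8.3.2.2, but the numeric type values, field widths, and ordering are simplified assumptions, and read_bits is a hypothetical helper:

    V3C_OVD, V3C_GVD, V3C_AVD = 2, 3, 4   # unit-type values assumed

    def parse_v3c_unit_header(read_bits):
        # read_bits(n) is assumed to return the next n bits as an unsigned int.
        hdr = {"vuh_unit_type": read_bits(5)}
        if hdr["vuh_unit_type"] == V3C_AVD:
            hdr["vuh_attribute_index"] = read_bits(7)
            hdr["vuh_partition_index"] = read_bits(5)
        if hdr["vuh_unit_type"] in (V3C_GVD, V3C_AVD):
            hdr["vuh_map_index"] = read_bits(4)
            hdr["vuh_auxiliary_video_flag"] = read_bits(1)
        return hdr

    def refinement_inputs(unit_headers, subdivision_enable_flag):
        # Select which decoded frame types feed the refiner, gated by an
        # enable flag like afps_vmc_ext_subdivision_enable_flag.
        if not subdivision_enable_flag:
            return []
        names = {V3C_OVD: "occupancy", V3C_GVD: "geometry", V3C_AVD: "attribute"}
        return [names[h["vuh_unit_type"]] for h in unit_headers
                if h["vuh_unit_type"] in names]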
Regarding Claim 2, Iguchi in view of Graziosi and VDMC disclose The 3D data decoding apparatus according to claim 1, wherein the refinement information decoder is further configured to decode (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] and Fig. 12 describing decoding additional information using an additional information decoder (refinement information decoder).)
a number of attributes to be applied as the refinement information in a case that the refinement information indicates that the attribute is to be used and the refiner is configured to perform the refinement processing using a specified number of attributes. (See VDMC Pages 9-10 showing the if and else if statements that can be used to check if the unit type is Attribute Video Data (refinement information indicates that an attribute is to be used).
See VDMC Page 19, “vps_ext_mesh_data_attribute_count[j] indicates the number of total attributes in the basemesh including the attributes signalled through the basemesh data sub-bitstream and the attributes signalled in the video sub-bitstreams for the atlas with atlas ID j. vps_ext_mesh_data_attribute_count[j] shall be in the range of 0 to N, inclusive.” Here, VDMC teaches the idea of a syntax element that can be decoded from a bitstream and that indicates a number of attributes.
In combination with Graziosi, which teaches geometry/attribute smoothing (refinement), it would be obvious to have a number-of-attributes syntax element that specifies a number of attributes to be used during post-processing (refinement). The motivation to combine would have been similar to that of the Claim 1 rejection motivation.)
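For purposes of illustration only, a hypothetical sketch (names assumed; not quoted syntax) of refining exactly the number of attributes signalled by a count such as vps_ext_mesh_data_attribute_count:

    def refine_signalled_attributes(attribute_frames, attribute_count, refine_fn):
        # Apply refinement to the signalled number of attribute frames;
        # any remaining frames pass through unmodified.
        out = list(attribute_frames)
        for i in range(attribute_count):
            out[i] = refine_fn(out[i])
        return out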
Regarding Claim 3, Iguchi in view of Graziosi and VDMC disclose The 3D data decoding apparatus according to claim 1, wherein the refinement information decoder is further configured to (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] and Fig. 12 describing decoding additional information using an additional information decoder (refinement information decoder).)
individually decode a syntax element indicating whether to use the occupancy, the geometry, or the attribute as the refinement information, and (See VDMC Pages 9-10 describing V3C unit syntax with Section 8.3.2.2 containing: V3C_GVD (Geometry Video Data), V3C_AVD (Attribute Video Data), and V3C_OVD (Occupancy Video Data). In combination with Iguchi in view of Graziosi, this information would indicate which of an occupancy, a geometry, or an attribute is to be used for refinement purposes.)
in a case that the occupancy, the geometry, or the attribute has been specified, the refiner is configured to perform the refinement using the specified occupancy, geometry, or attribute. (See Graziosi Page 5 Fig. 4 “Post-Processing” and Page 9 Left Column Paragraph 2, teaching geometry/attribute smoothing (refinement). See VDMC Pages 9-10 describing V3C unit syntax and Page 23 showing afps_vmc_ext_subdivision_enable_flag. In combination with Iguchi in view of Graziosi, the post-processing (refinement) taught by Graziosi would thus be performed using the image specified by the metadata (flag) indicating which data is to be used. The motivation to combine would have been similar to that of the Claim 1 rejection motivation.)
Regarding Claim 4, Iguchi in view of Graziosi and VDMC disclose The 3D data decoding apparatus according to claim 1, wherein the refinement information decoder is further configured to further decode (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] and Fig. 12 describing decoding additional information using an additional information decoder (refinement information decoder).)
information indicating a map identifier (ID) to be applied from the characteristics information, and (See VDMC Pages 9-10, Section 8.3.2.2, “vuh_map_index”. In this case, vuh_map_index corresponds to map ID. Note, in many common scenarios, this information would be found in the metadata (characteristics information).)
the refiner is configured to perform the refinement processing on a specified map ID. (See Graziosi Page 5 Fig. 4 “Post-Processing” and Page 9 Left Column Paragraph 2, teaching geometry/attribute smoothing (refinement). Note that VDMC teaches a map index for Attribute Video Data (V3C_AVD) and Geometry Video Data (V3C_GVD). Thus, in combination with Graziosi, which teaches performing post-processing (refinement), it would be obvious for the refiner to take the map index into account and perform refinement on a specified map index. The motivation to combine would have been similar to that of the Claim 1 rejection motivation.)
Regarding Claim 5, Iguchi in view of Graziosi and VDMC disclose The 3D data decoding apparatus according to claim 4, wherein the refinement information decoder is further configured to decode (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] and Fig. 12 describing decoding additional information using an additional information decoder (refinement information decoder).)
the information indicating the map ID to be applied (See VDMC Pages 9-10, Section 8.3.2.2, “vuh_map_index”. In this case, vuh_map_index corresponds to map ID.)
in a case that the refinement information indicates that the geometry or the attribute is to be used. (See Graziosi Page 5 Fig. 4 “Post-Processing” and Page 9 Left Column Paragraph 2, teaching geometry/attribute smoothing (refinement). Since VDMC teaches a map index for Attribute Video Data (V3C_AVD) and Geometry Video Data (V3C_GVD), and Graziosi teaches performing post-processing (refinement), it would be obvious for the refiner to take the map index into account and perform refinement on a specified map index. The motivation to combine would have been similar to that of the Claim 1 rejection motivation.)
Regarding Claim 6, Iguchi in view of Graziosi and VDMC disclose The 3D data decoding apparatus according to claim 4, wherein the refinement information decoder is further configured to decode (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] and Fig. 12 describing decoding additional information using an additional information decoder (refinement information decoder).)
information indicating an attribute index, a partition index, and an auxiliary flag to be applied in a case that the refinement information indicates that the attribute is to be used. (See VDMC Page 9 showing that when the unit type is Attribute Video Data (vuh_unit_type == V3C_AVD), the unit header contains vuh_attribute_index, vuh_partition_index, and vuh_auxiliary_video_flag. The motivation to combine would have been similar to that of the Claim 1 rejection motivation.)
Regarding Claim 7, Iguchi in view of Graziosi and VDMC disclose The 3D data decoding apparatus according to claim 1, wherein the refinement information decoder is further configured to decode (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] and Fig. 12 describing decoding additional information using an additional information decoder (refinement information decoder).)
an index indicating the characteristics information, a cancel flag, and a persistence flag as the activation information. (See VDMC Pages 35-36 describing the Decoding Process and the flags that indicate different decoding processes. Also see VDMC Pages 9-10 with V3C syntax showing different flags. Note that although an index indicating “characteristics information, a cancel flag, and a persistence flag” is not explicitly described, the idea of having a variable (index) used to indicate different flags/activations is a well-known and common concept. The motivation to combine would have been similar to that of the Claim 1 rejection motivation.)
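For purposes of illustration only, the following hypothetical sketch shows the well-known cancel/persistence activation pattern referenced above; the names are modeled on common SEI semantics and are not quoted from VDMC:

    class RefinementActivation:
        # Tracks which characteristics-information entry is active for the
        # current frame and, optionally, subsequent frames.
        def __init__(self):
            self.active_index = None
            self.persists = False

        def apply(self, characteristics_index, cancel_flag, persistence_flag):
            if cancel_flag:
                self.active_index = None          # cancel prior activation
            else:
                self.active_index = characteristics_index
                self.persists = bool(persistence_flag)

        def next_frame(self):
            if not self.persists:
                self.active_index = None          # one-shot activation expires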
Regarding Claim 8, Iguchi in view of Graziosi and VDMC disclose The 3D data decoding apparatus according to claim 1, wherein the refinement information decoder is further configured to decode (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] and Fig. 12 describing decoding additional information using an additional information decoder (refinement information decoder).)
information indicating a map identifier (ID) to be applied as the activation information, and (See VDMC Pages 9-10 Section 8.3.2.2, “vuh_map_index” under the if and else if headers of when the unit type is Attribute video data and Geometric video data. In this case vuh_map_index corresponds to “map ID” and indicates that map index of the current geometry or attribute stream.)
the refiner is configured to perform the refinement processing using a geometry or an attribute specified by the map ID. (See Graziosi Page 5 Fig. 4 “Post-Processing” and Page 9 Left Column Paragraph 2, teaching geometry/attribute smoothing (refinement). Since VDMC teaches a map index for Attribute Video Data (V3C_AVD) and Geometry Video Data (V3C_GVD), and Graziosi teaches performing post-processing (refinement), it would be obvious for the refiner to take the map index into account and perform refinement on a specified map index. The motivation to combine would have been similar to that of the Claim 1 rejection motivation.)
Regarding Claim 9, Iguchi in view of Graziosi and VDMC disclose The 3D data decoding apparatus according to claim 1, wherein the refinement information decoder is further configured to decode (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] and Fig. 12 describing decoding additional information using an additional information decoder (refinement information decoder).)
information indicating an attribute to be applied as the activation information, and the refiner is configured to perform the refinement processing using the attribute indicated by the information. (See Graziosi Page 2 Section “A) Definition, acquisition, and rendering” Paragraph 1 reciting, “Each point in the 3D space is associated with a geometry position together with the associated attribute information (e.g. color, reflectance, etc.).” Here, Graziosi teaches the well-known concept that attribute information includes things such as color.
Also see Graziosi Page 8 Right Column Paragraph 1, “Since the reconstructed geometry can be different from the original one, TMC2 transfers the color from the original point cloud to the decoded point cloud and uses these new color values for transmission. The recoloring procedure [47] considers the color value of the nearest point from the original point cloud as well as a neighborhood of points closer to the reconstructed point to determine a possible better color value.” Here, Graziosi teaches recoloring for better color values, which corresponds to performing “refinement” on a specific attribute. Thus, in combination with Iguchi, which teaches having metadata/SEI, and VDMC, which teaches creating flags, it would be obvious to have a refiner refine certain attributes such as color, reflectance, etc., as well as flags (information) that indicate which attribute is to be refined. The motivation to combine would have been similar to that of the Claim 1 rejection motivation.)
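For purposes of illustration only, a minimal hypothetical sketch of the nearest-point recoloring idea Graziosi describes (a simplified stand-in for the cited recoloring procedure [47]; function and parameter names are assumptions):

    import numpy as np
    from scipy.spatial import cKDTree

    def recolor(orig_xyz, orig_rgb, recon_xyz, k=4):
        # Each reconstructed point takes the average color of its k nearest
        # original points; the cited procedure is considerably more elaborate.
        tree = cKDTree(orig_xyz)
        _, idx = tree.query(recon_xyz, k=k)
        return orig_rgb[idx].mean(axis=1)          # (N, k, 3) -> (N, 3)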
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Iguchi in view of Graziosi and VDMC and in further view of Akhtar et al. (“Video-Based Point Cloud Compression Artifact Removal”) (Hereinafter referred to as Akhtar).
Regarding Claim 10, Iguchi in view of Graziosi and VDMC disclose A three-dimensional (3D) data coding apparatus for encoding 3D data including an occupancy, a geometry, and an attribute into coded data, the 3D data coding apparatus comprising: (See Iguchi [0002], “three-dimensional data encoding device”. Also see Iguchi Fig. 10 showing Second Encoder 4650 which processes both Geometry and Attribute Information. Also see Graziosi Page 5 Fig. 4 showing a “Geometry sub-bitstream”, “Attribute sub-bitstream”, and “Occupancy map sub-bitstream” in an encoder diagram.)
a refinement information coder configured to encode characteristics information of refinement and activation information (See Iguchi [0299] and [0011] describing control information which is metadata/SEI. Also see Iguchi [0206] describing Additional Information as metadata. Lastly, see Iguchi Fig. 10 showing Additional Information Encoder 4651 encoding Additional Information (refinement information coder).)
an occupancy coder configured to encode an occupancy frame; (See Graziosi Page 7 Section “3) Geometry and Occupancy Maps” Paragraph 2, “The occupancy map is a binary image coded using a lossless video encoder [41].”)
a geometry coder configured to encode a geometry frame; (See Iguchi Fig. 10 Video Encoder 4654 which encodes a Geometry Image (geometry frame).)
an attribute coder configured to encode an attribute frame; and (See Iguchi Fig. 10 Video Encoder 4654 which encodes an Attribute Image (attribute frame).)
a refiner configured to perform refinement processing of the attribute frame or the geometry frame, wherein: (See Graziosi Page 5 Fig. 4 showing a video based point cloud compression (V-PCC) model. See the “Post-Processing” section of the Fig. 4 that indicates Attribute Smoothing and Geometry Smoothing. Note although not explicitly stated or shown, it would be obvious to have a “refiner” that runs the Attribute/Geometry Smoothing processes.)
the refinement information coder is further configured to encode refinement information indicating which of the occupancy, the geometry, or the attribute are to be used into coded data of the characteristics information, and (See VDMC Pages 9-10 describing V3C unit syntax. In section 8.3.2.2, the V3C unit header syntax contains: V3C_GVD (Geometry Video Data), V3C_AVD (Attribute Video Data), and V3C_OVD (Occupancy Video Data). Within these pages are if and else if statements which check which unit type the data belongs to (geometry, attribute, or occupancy). Note that for each type, VDMC shows that there can be different flags which can be used to indicate/activate different features. See for example the vuh_auxiliary_video_flag.
Lastly, see Page 23 showing afps_vmc_ext_subdivision_enable_flag. This is an example of a flag that can be used to identify if subdivision (refinement) is to be performed. Thus, VDMC teaches the idea of having flags that specifies the enablement of a refinement process.
In combination with Iguchi in view of Graziosi, the additional information/metadata taught by Iguchi (See Iguchi [0299], [0011], [0206], and Fig. 10) would contain information that indicates which of an occupancy, a geometry, or an attribute is to be used for refinement purposes. VDMC already teaches distinguishing between occupancy, geometry, and attribute data. VDMC also teaches the idea of flags that indicate whether post-processing (refinement) should be performed with that data or not.)
the refiner is configured to perform the refinement processing using, as input, at least two different types of images selected from the occupancy frame, the geometry frame, or the attribute frame, wherein the at least two different types of images are specified according to the refinement information. (See Graziosi Page 5 Fig. 4 showing a video based point cloud compression (V-PCC) model encoder diagram. See the “Post-Processing” section of the Fig. 4 that indicates Attribute Smoothing and Geometry Smoothing.
Further see Page 9 Left Column Paragraph 2, “The compression of geometry and attribute images and the additional points introduced due to occupancy map subsampling may introduce artifacts, which could affect the reconstructed point cloud. TMC2 can use techniques to improve the local reconstruction quality.” Here, Graziosi teaches the idea of processing the geometry and attribute images with geometry/attribute smoothing (refinement). Although not explicitly stated or shown, it would be obvious to have a “refiner” that runs these described processes.
Regarding the limitation “to perform the refinement processing using, as input, at least two different types of images selected from the occupancy frame, the geometry frame, or the attribute frame”, since the limitation recites “or” when listing the different frame types, simply having geometry frame and attribute frame refinement is sufficient to satisfy the broadest reasonable interpretation of this limitation. Thus, in the case where there is both attribute and geometry smoothing, that can be considered as “using at least two different types of images”.
Lastly, see VDMC Pages 9-10 describing V3C unit syntax and Page 23 showing afps_vmc_ext_subdivision_enable_flag. In combination with Iguchi in view of Graziosi, the post-processing (refinement) taught by Graziosi would thus be performed using the image specified by the metadata (flag) indicating which data is to be used.)
However, Iguchi in view of Graziosi and VDMC fail to explicitly disclose a refinement information coder configured to encode characteristics information of refinement and activation information of neural network refinement;
Akhtar is an art that teaches video based point cloud compression (V-PCC) (see Abstract.)
Akhtar also teaches activation information of neural network refinement; (See Abstract, “In this work, we developed a novel out-of-the-loop point cloud geometry artifact removal solution that can significantly improve reconstruction quality without additional bandwidth cost. Our novel framework consists of a point cloud sampling scheme, an artifact removal network, and an aggregation scheme. . . The geometry artifact removal network then processes these patches to obtain artifact-removed patches.”
Also See Page 2 Right Column, Paragraph 2, First Bullet Point, “We present a projection-aware 3D sparse convolutional neural network-based framework for point cloud artifact removal.”
Here, Akhtar teaches improving reconstruction quality (refinement) using a neural network. In combination with Iguchi and VDMC, it would be obvious to have metadata/flags that activate this neural network refinement process.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Iguchi in view of Graziosi and VDMC with Akhtar to include having neural network based refinement and activation information for this refinement.
The motivation to combine Iguchi in view of Graziosi and VDMC with Akhtar would have been obvious as Akhtar is within the same field of processing 3D data. The benefit of using a neural network for refinement would be to improve reconstruction quality as noted by Akhtar (See Abstract).
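For purposes of illustration only, the following hypothetical sketch shows how a decoded activation flag could gate an out-of-the-loop, learning-based artifact-removal stage of the kind Akhtar describes; the patch sampling is trivialized and the network argument is a placeholder for Akhtar's projection-aware 3D sparse convolutional network:

    import numpy as np

    def sample_patches(points, patch_size=1024):
        # Split the point set into fixed-size patches (crude spatial ordering
        # stands in for Akhtar's sampling scheme).
        order = np.argsort(points[:, 0])
        return [points[order[i:i + patch_size]]
                for i in range(0, len(points), patch_size)]

    def refine_reconstruction(recon_points, nn_refinement_enabled, net):
        if not nn_refinement_enabled:              # activation info says no
            return recon_points
        patches = sample_patches(recon_points)     # sample
        cleaned = [net(p) for p in patches]        # per-patch artifact removal
        return np.concatenate(cleaned, axis=0)     # aggregate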
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Reference Budagavi et al. (US 20200221125 A1) (Hereinafter referred to as Budagavi) is made of record as an art that teaches an occupancy decoder configured to decode an occupancy frame from the coded data (See [0110]-[0112]) and performing refinement on the occupancy frame (See [0131]).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THANG G HUYNH whose telephone number is (571)272-5432. The examiner can normally be reached Mon-Thu 7:30am-4:30pm EST | Fri 7:30am-11:30am EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611
/T.G.H./Examiner, Art Unit 2611