DETAILED ACTION
1. This communication is in response to the application received on 01/06/2025. Claims 1-18 are pending and are examined as follows.
Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
3. The information disclosure statements (IDS) were submitted on 01/06/2025 and 11/25/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Specification
4. The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed and that incorporates features relevant to the claims. A title that addresses predicting attribute information based on other attribute information is recommended.
Claim Rejections - 35 USC § 103
5. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5, 8, 10, 14-15, and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Oh et al. US 2024/0029312 A1 (with reference to WO 2022/119254 A1 – see attached with English translation), in view of Aflaki et al. EP 3751857 A1, hereinafter referred to as Oh and Aflaki, respectively.
Regarding Claim 1, given the broadest reasonable interpretation (BRI) of the following limitations, Oh teaches and/or suggests “A decoding method comprising [See fig. 1 regarding a point cloud video decoder]: receiving a bitstream generated by encoding three-dimensional points [Fig. 1 shows said decoder receiving a compressed bitstream of encoded 3D point cloud attribute data generated by the encoder in fig. 35. The attribute bitstream is received by the decoder in fig. 36] each including first attribute information and second attribute information [Given the BRI of first and second “attribute information”, and in the absence of further defining limitations, Oh teaches channels of attribute information (e.g. ¶0304-¶0305), where each channel (color information) can be construed as first attribute information, second attribute information, etc. This aligns with the 1st and 2nd color information of fig. 1 in the filed specification]; and predicting the first attribute information by referring to the second attribute information.” [See fig. 36, where Oh’s cross-channel based prediction is employed for predicting one channel of attribute data (e.g. a 1st attribute information) based on a reference channel of attribute data (e.g. a 2nd attribute information)]. Although Oh’s teachings are deemed relevant given the BRI of “first attribute information and second attribute information”, the work of Aflaki from the same or similar field of endeavor is brought in to show that, in addition to performing cross-component prediction as shown in Oh above (e.g. 
¶0310-¶0313), cross-attribute prediction may also be utilized [See ¶0072 of Aflaki, where the dependency may consider cross-component and ‘cross-attribute’ prediction]. In light of Aflaki’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the techniques of Oh for coding point cloud data to include the teachings of Aflaki as above, to show that, in addition to Oh’s cross-component prediction for decoding attribute information of a point cloud, cross-attribute prediction may also be utilized. Based on Aflaki’s teachings, an improvement to volumetric video coding can therefore be realized (¶0005).
Regarding claim 2, claim 2 is rejected under the same art and evidentiary limitations as determined for the method of Claim 1. Regarding “meta information indicating that the first attribute information is to be predicted by referring to the second attribute information”, see ¶0348 of Oh, where, if the coding mode for the channel of the attribute information is the cross-channel reference prediction mode, attribute_data_unit_data_type_ccp( ) may be transmitted, where ccp may mean cross-component prediction.
Regarding Claim 3, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Oh further teaches and/or suggests “wherein the first attribute information to be predicted and the second attribute information to be referred to are included in a same one of three-dimensional points.” [Given the BRI of first and second attribute information, please see fig. 24, where the 3D point(s) in a current node contain restored Y-channel attribute information (e.g. 2nd attribute information) and generated predicted attribute information (e.g. 1st attribute information). Also refer to fig. 30.]
Regarding Claim 5, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Oh further teaches and/or suggests “wherein the first attribute information and the second attribute information are stored in a first component and a second component, respectively.” [Given the BRI of the term “component”, Oh’s reference information (e.g. restored attribute information for Y-channel) can be considered one component, while the predicted attribute information for the Cb/Cr channels can be considered the other component (i.e. a dependent component). Please refer to fig. 30 and supporting text. This aligns with the understood meaning of components presented in figs. 8-9 of the filed specification]
Regarding Claim 8, Oh and Aflaki teach all the limitations of claim 5, and are analyzed as previously discussed with respect to that claim. Oh further teaches and/or suggests “wherein the first component and the second component [Regarding components, see citation in claim 5] each include a first dimension element and a second dimension element [Each component, such as the reference component of restored attribute information (fig. 30), contains 1st dimension elements Y (Y0, Y1, Y2). The same applies to the predicted attribute information component, which contains 2nd dimension elements Cb as shown], the first dimension element of the first component is predicted by referring to the first dimension element of the second component, and the second dimension element of the first component is predicted by referring to the second dimension element of the second component.” [Fig. 30 further illustrates the prediction process for predicting the Cb elements and the Cr elements based on the Y elements as shown]
Regarding Claim 10, Oh and Aflaki teach all the limitations of claim 5, and are analyzed as previously discussed with respect to that claim. Oh further teaches and/or suggests “wherein the bitstream: does not include information on a first prediction mode applied to the first component; and includes information on a second prediction mode applied to the second component.” [See, e.g., ¶0310-¶0311 regarding the cross-channel reference prediction mode]
Regarding Claim 14, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Oh further teaches and/or suggests “wherein the bitstream includes flag information indicating whether reference to the second attribute information in order to predict the first attribute information is enabled.” [See, e.g., ¶0326 and ¶0357 with reference to the parse_parameter_flag, which, when enabled, delivers the weights and offsets when the prediction method is in the cross-channel reference prediction mode]
Regarding Claim 15, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Oh further teaches and/or suggests “wherein the bitstream includes coefficient information for calculating a first predicted value of the first attribute information.” [Fig. 36 references decoded transformed coefficients that undergo inverse quantization and transformation for subsequent use in prediction]
Regarding Claim 17, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. However, Oh does not appear to address the features of claim 17. Aflaki, on the other hand, from the same or similar field of endeavor, is relied on to teach and/or suggest “wherein the first attribute information includes RGB values, the second attribute information includes a reflectance value [See ¶0024 with respect to describing each 3D point with color and other attribute information, including reflectance], and at least one value among the RGB values is predicted by referring to the reflectance value.” [¶0072 describes cross-attribute prediction. Although not explicitly shown, it would be within the level of skill in the art, given Aflaki’s teachings of cross-attribute prediction and the recognition that a 3D point can be described by both color and reflectance, to predict one attribute by referring to the other] The motivation for combining Oh and Aflaki has been discussed in connection with claim 1, above.
Regarding claim 18, claim 18 is rejected under the same art and evidentiary limitations as determined for the method of Claim 1. As to the claimed hardware, see, e.g., ¶0103, ¶0175, and ¶0395-¶0397 of Oh for support.
Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Oh, in view of Aflaki, and in further view of Sugio et al. EP 3809373 A1, hereinafter referred to as Sugio.
Regarding Claim 4, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Oh further teaches and/or suggests “wherein the first attribute information to be predicted and the second attribute information to be referred to are included in different ones of the three-dimensional points.” [See, e.g., fig. 27 of Oh, where 3D points in the reference region (27004) differ from those in the region (current node 27002) to be predicted] Although the teachings of Oh and Aflaki are deemed relevant in light of the aforementioned features given their BRI, the work of Sugio from the same or similar field of endeavor is further relied on to teach and/or suggest the same limitation. [See, e.g., ¶0013, where, based on attribute information items of one or more second three-dimensional points in the vicinity of a first three-dimensional point, a predicted value of an attribute information item of the first three-dimensional point is calculated via two or more prediction modes; i.e., the predicted and referred-to attribute information items belong to different three-dimensional points] In light of Sugio’s teachings, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the coding techniques of Oh and Aflaki for managing volumetric data to include the teachings of Sugio as above, to provide a means for improving coding efficiency when encoding and decoding a plurality of three-dimensional points (¶0000-¶0010).
Regarding Claim 6, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Oh and Aflaki, however, do not appear to reasonably address the features of claim 6. Sugio, on the other hand, from the same or similar field of endeavor, is relied on to teach and/or suggest “wherein a first quantization step for the first attribute information is greater than a second quantization step for the second attribute information.” [See, e.g., ¶0555 with respect to changing the quantization scale for each LoD.] The motivation for combining Oh, Aflaki, and Sugio has been discussed in connection with claim 4, above.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Oh, in view of Aflaki, and in further view of Hur et al. US 11,158,107 B2, hereinafter referred to as Hur.
Regarding Claim 7, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. Oh and Aflaki, however, do not appear to address the features of claim 7. Hur, on the other hand, from the same or similar field of endeavor, is relied on to teach and/or suggest “wherein the first attribute information and the second attribute information are generated by decoding first encoded attribute information and second encoded attribute information, respectively, [See col. 2, lines 5-20. First attribute information (e.g. color) and second attribute information (e.g. reflectance) of each point, based on searched neighbor points, can be encoded and transmitted to a decoder. As such, each attribute information can be decoded. Also refer to the decoder in fig. 21, where first and second attributes can be restored] and the second attribute information has been losslessly compressed.” [See, e.g., col. 11, lines 49-56 with respect to lossless encoding of point cloud data, including attributes of the points.] In light of Hur’s teachings regarding point cloud data, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the coding techniques of Oh and Aflaki for managing volumetric data to include the teachings of Hur as above, to provide methods for efficiently processing large amounts of point cloud content and for addressing latency and encoding/decoding complexity (col. 1, lines 20-46).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Oh, in view of Aflaki, and in further view of Chen et al. US 2021/0235085 A1, hereinafter referred to as Chen.
Regarding Claim 9, Oh and Aflaki teach all the limitations of claim 5, and are analyzed as previously discussed with respect to that claim. Oh further teaches and/or suggests “wherein the first component includes a first dimension element [The predicted/dependent component shown in, e.g., fig. 30 (e.g. predicted attribute information for Cb/Cr channels) has a 1st dimension element (e.g. Cb0)] and a second dimension element [e.g. Cb1 or Cb2], the first dimension element is predicted by referring to the second component [Cb0, e.g., is predicted by referring to the corresponding reference component of restored attribute information (e.g. Y0)].” However, Oh and Aflaki do not appear to address the remaining limitation, “and the second dimension element is predicted by referring to the first dimension element.” Although this is understood as performing cross-component prediction within the first component (i.e. the prediction component), which is believed to be within the level of skill in the art, Chen, from the same or similar field of endeavor, is relied on to teach and/or suggest this feature. [See, e.g., ¶0117 with respect to cross-component predicting the prediction values] In light of Chen’s teachings regarding point cloud data, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the coding techniques of Oh and Aflaki for managing volumetric data to include the teachings of Chen as above, to provide methods that significantly improve the perceptual quality of the viewed image by adopting more sophisticated image processing algorithms, while increasing the image compression ratio (¶0014).
Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Oh, in view of Aflaki, and in further view of Zhang et al. US 2022/0051447 A1, hereinafter referred to as Zhang.
Regarding Claim 12, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. However, Oh and Aflaki do not appear to address the features of claim 12. Zhang, on the other hand, from the same or similar field of endeavor, is relied on to teach and/or suggest “wherein the bitstream includes a first residual value of the first attribute information [See, e.g., ¶0033-¶0035 regarding prediction residuals for the R, G, and B components. Also note, e.g., table 200D in fig. 2D for prediction residuals], and the first residual value is a difference between a first value of the first attribute information and a second value of the second attribute information.” [See citations above] In light of Zhang’s teachings regarding point cloud data, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the coding techniques of Oh and Aflaki for coding multi-component attributes for point cloud coding to include the teachings of Zhang as above, to provide methods for reducing the amount of data required to represent a point cloud for faster transmission or reduced storage (¶0003).
Regarding Claim 13, Oh and Aflaki teach all the limitations of claim 1, and are analyzed as previously discussed with respect to that claim. However, Oh and Aflaki do not appear to address the features of claim 13. Zhang, on the other hand, from the same or similar field of endeavor, is relied on to teach and/or suggest “wherein the bitstream includes a first residual value of the first attribute information [See, e.g., table 200D in fig. 2D for prediction residuals], and the first residual value is a difference between a first value of the first attribute information and a second predicted value of the second attribute information.” [See, e.g., ¶0033-¶0035 regarding prediction residuals for the R, G, and B components.] The motivation for combining Oh, Aflaki, and Zhang has been discussed in connection with claim 12, above.
Allowable Subject Matter
6. Claims 11 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. In light of the specification, the Examiner finds the claimed invention to be patentably distinct from the prior art of record. The prior art of record, taken individually or in combination, fails to explicitly teach or render obvious, within the context of the respective independent claims, the following limitations:
[Claim 11] The decoding method according to claim 1, wherein the bitstream includes a first residual value of the first attribute information, the first residual value is a difference between a third residual value of the first attribute information and a second residual value of the second attribute information, the third residual value is a difference between a first value of the first attribute information and a first predicted value of the first attribute information, and the second residual value is a difference between a second value of the second attribute information and a second predicted value of the second attribute information.
[Claim 16] The decoding method according to claim 1, wherein the bitstream includes a first data unit and a second data unit, the first data unit storing attribute information to be predicted by referring to other attribute information, the second data unit storing attribute information not to be predicted by referring to other attribute information.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See the attached PTO-892 for additional references.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD A HANSELL JR., whose telephone number is (571) 270-0615. The examiner can normally be reached Mon - Fri, 10 am - 7 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jamie Atala can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RICHARD A HANSELL JR./Primary Examiner, Art Unit 2486