DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/24/25 has been entered.
Response to Amendment
Applicant’s amendments to claims 12 and 14 are acknowledged.
Response to Arguments
Applicant's arguments filed 12/24/25 have been fully considered but they are not persuasive.
Regarding claim 12, Applicant argues on pages 5-6 of the Response (similar to Applicant’s arguments from the Response filed 7/16/25) that Gamei and Hur do not teach “generating a quantization parameter from the scale value based on a table indicating a correspondence between value of the quantization parameter and value of the scale value, the table being shared between the first encoding scheme and the second encoding scheme” nor do they teach “wherein the first encoding scheme uses a Level of Detail (LoD), and the second encoding scheme uses a Region Adaptive Hierarchical Transform (RAHT)”, as amended.
However, Gamei teaches quantization including a quantization scaling factor and the application of complicated lookup tables under the control of a quantisation parameter (e.g., “from a scale value based on a table”) (page 7, lines 14-23; page 41, lines 14-36). In addition, as previously stated in the Final Rejection of 10/28/25, Hur teaches the use of a single quantization table used by the quantizer 40001 (Tables 2, 3; para[0140]-[0142]). Hur also teaches that the encoder/decoder inputs the transformed attributes to the RAHT transformer 40008 and/or the LOD generator 40009, as shown in Fig. 4 (para[0103]-[0104]).
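For illustration only (this sketch is not part of the record, and the table contents and identifiers are hypothetical placeholders), a table-indexed derivation of the kind cited from page 41 of Gamei, i.e., QpCb = scalingTable[Qpluminance + chroma_qp_index_offset], can be sketched as a simple lookup:

```python
# Illustrative sketch only: a QP-indexed lookup of the kind Gamei describes.
# The table values below are hypothetical placeholders, not values from Gamei.
scaling_table = {qp: max(0, qp - 6) for qp in range(0, 52)}  # placeholder mapping

def chroma_qp(qp_luminance: int, chroma_qp_index_offset: int) -> int:
    """Derive a chroma quantization parameter via a shared lookup table."""
    index = qp_luminance + chroma_qp_index_offset
    return scaling_table[index]

qp_cb = chroma_qp(qp_luminance=30, chroma_qp_index_offset=2)   # cf. QpCb
qp_cr = chroma_qp(qp_luminance=30, chroma_qp_index_offset=-1)  # cf. QpCr
```

The point of the sketch is only that the parameter is obtained by indexing a table with an offset-adjusted value, which is the correspondence relied upon above.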
Applicant also argues on page 5 of the Response that “…Tables 2 and 3 of Hur each indicate a relationship among a value before quantization, a value after quantization, and quantStep (corresponding to the scale value), but do not indicate a correspondence between the scale value and a quantization parameter” (emphasis added by Applicant).
However, Hur teaches in Tables 2 and 3 that a PCCQuantization value is calculated, which is interpreted to be the “quantization parameter” of claim 12. Thus, Hur teaches in Tables 2 and 3 the use of tables that convey a relationship between a “quantization parameter” (i.e., PCCQuantization) and a “scale value” (i.e., quantStep) (Tables 2, 3; para[0140]-[0142]).
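For illustration only (not part of the record; all names and values below are hypothetical placeholders), the interpreted relationship, a single table relating a quantization parameter (cf. Hur’s PCCQuantization) to a quantization step size (cf. quantStep), consulted by both attribute-coding paths, can be sketched as:

```python
# Illustrative sketch only: one shared table relating a quantization
# parameter to a quantization step size; all values are hypothetical.
QP_TO_STEP = {0: 1, 1: 2, 2: 4, 3: 8, 4: 16}  # placeholder correspondence

def quantize(value: float, qp: int) -> int:
    """Quantize using the step size looked up from the shared table."""
    step = QP_TO_STEP[qp]
    return round(value / step)

def dequantize(level: int, qp: int) -> float:
    """Inverse quantization using the same shared table."""
    return level * QP_TO_STEP[qp]

# Both a LoD-based path and a RAHT-based path could consult the same table:
lod_level = quantize(100.0, qp=3)
raht_level = quantize(100.0, qp=3)  # same table, same result
```

The sketch shows only the claimed mechanism at issue: one table conveying the correspondence between a quantization parameter and a scale (step) value, shared across two coding schemes.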
Therefore, Gamei and Hur teach all of the limitations of claim 12. In addition, please see the below-stated rejection of claim 12.
Regarding claims 13-15, please see the above-stated discussion for claim 12 and the below-stated rejection of the claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Gamei et al. (WO 2013/160694; cited in the IDS filed 4/2/25) in view of Hur et al. (U.S. Pub. No. 2022/0159284).
In regard to claim 12, Gamei teaches an encoding method (i.e., encoding processes) (Figs. 5, 6; page 6, line 21) using a first encoding scheme (i.e., intra-image prediction) (page 9, line 24) and a second encoding scheme different from the first encoding scheme (i.e., inter-image, or motion-compensated (MC), prediction) (page 9, line 25), the method comprising:
generating a coefficient value based on the first encoding scheme or the second encoding scheme (i.e., the result is to generate a so-called residual image signal 330 representing a difference between the actual and predicted images; the output of the transform unit 340, which is to say, a set of DCT coefficients for each transformed block of image data, is supplied to a quantiser 350) (page 6, line 28-page 7, line 4; page 7, lines 14-22);
generating a quantized coefficient from the coefficient value using a scale value (i.e., quantization techniques…ranging from a simple multiplication by a quantization scaling factor through to the application of complicated lookup tables under the control of a quantisation parameter; QpCb, Qpluminance, Qpluminance + chroma_qp_index_offset, Qpluminance + second_chroma_qp_index_offset) (page 7, lines 14-23; page 41, lines 14-36);
generating a quantization parameter (i.e., QpCb, QpCr) (page 41, lines 14-36) from the scale value based on a table indicating a correspondence between value of the quantization parameter and value of the scale value (i.e., QpCb = scalingTable[Qpluminance + chroma_qp_index_offset], QpCr = scalingTable[Qpluminance + second_chroma_qp_index_offset]) (page 41, lines 14-36)…;
generating a bitstream including the quantization parameter (i.e., entropy encoding) (page 8, lines 1-16).
However, Gamei does not explicitly teach the table being shared between the first encoding scheme and the second encoding scheme nor does it teach wherein the first encoding scheme uses a Level of Detail (LoD), and the second encoding scheme uses a Region Adaptive Hierarchical Transform (RAHT).
In the same field of endeavor, Hur teaches the table being shared between the first encoding scheme and the second encoding scheme (i.e., the point cloud encoder according to the embodiments (for example, the coefficient quantizer 40011) may quantize and inversely quantize the residuals; the quantization process is configured as shown in the following table; Tables 2, 3) (Tables 2, 3; para[0140]-[0142]) and teaches wherein the first encoding scheme uses a Level of Detail (LoD) (i.e., the transformed attributes are input to the RAHT transformer 40008 and/or the LOD generator 40009; the LOD generator 40009 according to the embodiments generates a level of detail (LOD) to perform prediction transform coding) (Fig. 4; para[0103], [0105]), and the second encoding scheme uses a Region Adaptive Hierarchical Transform (RAHT) (i.e., the transformed attributes are input to the RAHT transformer 40008 and/or the LOD generator 40009; the RAHT transformer 40008 according to the embodiments performs RAHT coding for predicting attribute information based on the reconstructed geometry information) (Fig. 4; para[0103], [0104]).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Gamei and Hur because Hur teaches devices and methods to process point cloud data with high efficiency by addressing latency and encoding/decoding complexity (See, for example, para[0003], [0009]-[0011] of Hur). Therefore, it would have been obvious to combine the teachings of Gamei and Hur.
In regard to claim 13, Gamei teaches a decoding method (i.e., entropy decoder 410; the apparatus of Figs. 5 and 6 can act as a compression apparatus or a decompression apparatus) (Figs. 5-6; page 8, line 35-page 9, line 21) using a first decoding scheme (i.e., intra-image prediction) (page 9, line 24) and a second decoding scheme different from the first decoding scheme (i.e., inter-image, or motion-compensated (MC), prediction) (page 9, line 25), the method comprising:
obtaining a quantization parameter from a bitstream (i.e., entropy decoder 410; the apparatus of Figs. 5 and 6 can act as a compression apparatus or a decompression apparatus) (Figs. 5-6; page 8, line 35-page 9, line 21);
generating a scale value from a quantization parameter (i.e., QpCb, QpCr) (page 41, lines 14-36) based on a table indicating a correspondence between value of the quantization parameter and value of the scale value (i.e., QpCb = scalingTable[Qpluminance + chroma_qp_index_offset], QpCr = scalingTable[Qpluminance + second_chroma_qp_index_offset]) (page 41, lines 14-36),…;
generating the coefficient value from a quantized coefficient using the scale value (i.e., quantization techniques…ranging from a simple multiplication by a quantization scaling factor through to the application of complicated lookup tables under the control of a quantisation parameter; QpCb, Qpluminance, Qpluminance + chroma_qp_index_offset, Qpluminance + second_chroma_qp_index_offset) (page 7, lines 14-23; page 41, lines 14-36); and
generating a decoded value from the coefficient value based on the first encoding scheme or the second encoding scheme (i.e., the result is to generate a so-called residual image signal 330 representing a difference between the actual and predicted images; the output of the transform unit 340, which is to say, a set of DCT coefficients for each transformed block of image data, is supplied to a quantiser 350) (page 6, line 28-page 7, line 4; page 7, lines 14-22).
However, Gamei does not explicitly teach the table being shared between the first encoding scheme and the second encoding scheme nor does it teach wherein the first encoding scheme uses a Level of Detail (LoD), and the second encoding scheme uses a Region Adaptive Hierarchical Transform (RAHT).
In the same field of endeavor, Hur teaches the table being shared between the first encoding scheme and the second encoding scheme (i.e., the point cloud encoder according to the embodiments (for example, the coefficient quantizer 40011) may quantize and inversely quantize the residuals; the quantization process is configured as shown in the following table; Tables 2, 3) (Tables 2, 3; para[0140]-[0142]) and teaches wherein the first encoding scheme uses a Level of Detail (LoD) (i.e., the transformed attributes are input to the RAHT transformer 40008 and/or the LOD generator 40009; the LOD generator 40009 according to the embodiments generates a level of detail (LOD) to perform prediction transform coding) (Fig. 4; para[0103], [0105]), and the second encoding scheme uses a Region Adaptive Hierarchical Transform (RAHT) (i.e., the transformed attributes are input to the RAHT transformer 40008 and/or the LOD generator 40009; the RAHT transformer 40008 according to the embodiments performs RAHT coding for predicting attribute information based on the reconstructed geometry information) (Fig. 4; para[0103], [0104]).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Gamei and Hur because Hur teaches devices and methods to process point cloud data with high efficiency by addressing latency and encoding/decoding complexity (See, for example, para[0003], [0009]-[0011] of Hur). Therefore, it would have been obvious to combine the teachings of Gamei and Hur.
In regard to claim 14, Gamei teaches an encoding device (i.e., encoding processes) (Figs. 5, 6; page 6, line 21) using a first encoding scheme (i.e., intra-image prediction) (page 9, line 24) and a second encoding scheme different from the first encoding scheme (i.e., inter-image, or motion-compensated (MC), prediction) (page 9, line 25), the device comprising:
a processor (i.e., all of the data compression and/or decompression apparatus to be described below may be implemented in hardware, in software running on a general-purpose data processing apparatus such as a general-purpose computer) (page 4, lines 29-36); and
memory (i.e., an input audio/video signal 130 is supplied to a compression apparatus 140 which generates a compressed signal for storing by a store device 150 such as a magnetic disk device, an optical disk device, a magnetic tape device, a solid state storage device such as a semiconductor memory or other storage device) (page 5, lines 25-31),
wherein using the memory, the processor performs:
generating a coefficient value based on the first encoding scheme or the second encoding scheme (i.e., the result is to generate a so-called residual image signal 330 representing a difference between the actual and predicted images; the output of the transform unit 340, which is to say, a set of DCT coefficients for each transformed block of image data, is supplied to a quantiser 350) (page 6, line 28-page 7, line 4; page 7, lines 14-22);
generating a quantized coefficient from the coefficient value using a scale value (i.e., quantization techniques…ranging from a simple multiplication by a quantization scaling factor through to the application of complicated lookup tables under the control of a quantisation parameter; QpCb, Qpluminance, Qpluminance + chroma_qp_index_offset, Qpluminance + second_chroma_qp_index_offset) (page 7, lines 14-23; page 41, lines 14-36),…;
generating a quantization parameter (i.e., QpCb, QpCr) (page 41, lines 14-36) from the scale value based on a table indicating a correspondence between value of the quantization parameter and value of the scale value (i.e., QpCb = scalingTable[Qpluminance + chroma_qp_index_offset], QpCr = scalingTable[Qpluminance + second_chroma_qp_index_offset]) (page 41, lines 14-36); and
generating a bitstream including the quantization parameter (i.e., entropy encoding) (page 8, lines 1-16).
However, Gamei does not explicitly teach the table being shared between the first encoding scheme and the second encoding scheme nor does it teach wherein the first encoding scheme uses a Level of Detail (LoD), and the second encoding scheme uses a Region Adaptive Hierarchical Transform (RAHT).
In the same field of endeavor, Hur teaches the table being shared between the first encoding scheme and the second encoding scheme (i.e., the point cloud encoder according to the embodiments (for example, the coefficient quantizer 40011) may quantize and inversely quantize the residuals; the quantization process is configured as shown in the following table; Tables 2, 3) (Tables 2, 3; para[0140]-[0142]) and teaches wherein the first encoding scheme uses a Level of Detail (LoD) (i.e., the transformed attributes are input to the RAHT transformer 40008 and/or the LOD generator 40009; the LOD generator 40009 according to the embodiments generates a level of detail (LOD) to perform prediction transform coding) (Fig. 4; para[0103], [0105]), and the second encoding scheme uses a Region Adaptive Hierarchical Transform (RAHT) (i.e., the transformed attributes are input to the RAHT transformer 40008 and/or the LOD generator 40009; the RAHT transformer 40008 according to the embodiments performs RAHT coding for predicting attribute information based on the reconstructed geometry information) (Fig. 4; para[0103], [0104]).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Gamei and Hur because Hur teaches devices and methods to process point cloud data with high efficiency by addressing latency and encoding/decoding complexity (See, for example, para[0003], [0009]-[0011] of Hur). Therefore, it would have been obvious to combine the teachings of Gamei and Hur.
In regard to claim 15, Gamei teaches a three-dimensional data decoding device (i.e., entropy decoder 410; the apparatus of Figs. 5 and 6 can act as a compression apparatus or a decompression apparatus) (Figs. 5-6; page 8, line 35-page 9, line 21) using a first decoding scheme (i.e., intra-image prediction) (page 9, line 24) and a second decoding scheme different from the first decoding scheme (i.e., inter-image, or motion-compensated (MC), prediction) (page 9, line 25), the device comprising:
a processor (i.e., all of the data compression and/or decompression apparatus to be described below may be implemented in hardware, in software running on a general-purpose data processing apparatus such as a general-purpose computer) (page 4, lines 29-36); and
memory (i.e., an input audio/video signal 130 is supplied to a compression apparatus 140 which generates a compressed signal for storing by a store device 150 such as a magnetic disk device, an optical disk device, a magnetic tape device, a solid state storage device such as a semiconductor memory or other storage device) (page 5, lines 25-31),
wherein using the memory, the processor performs:
obtaining a quantization parameter from a bitstream (i.e., entropy decoder 410; the apparatus of Figs. 5 and 6 can act as a compression apparatus or a decompression apparatus) (Figs. 5-6; page 8, line 35-page 9, line 21);
generating a scale value from a quantization parameter (i.e., QpCb, QpCr) (page 41, lines 14-36) based on a table indicating a correspondence between value of the quantization parameter and value of the scale value (i.e., QpCb = scalingTable[Qpluminance + chroma_qp_index_offset], QpCr = scalingTable[Qpluminance + second_chroma_qp_index_offset]) (page 41, lines 14-36),…;
generating the coefficient value from a quantized coefficient using the scale value (i.e., quantization techniques…ranging from a simple multiplication by a quantization scaling factor through to the application of complicated lookup tables under the control of a quantisation parameter; QpCb, Qpluminance, Qpluminance + chroma_qp_index_offset, Qpluminance + second_chroma_qp_index_offset) (page 7, lines 14-23; page 41, lines 14-36); and
generating a decoded value from the coefficient value based on the first encoding scheme or the second encoding scheme (i.e., the result is to generate a so-called residual image signal 330 representing a difference between the actual and predicted images; the output of the transform unit 340, which is to say, a set of DCT coefficients for each transformed block of image data, is supplied to a quantiser 350) (page 6, line 28-page 7, line 4; page 7, lines 14-22).
However, Gamei does not explicitly teach the table being shared between the first encoding scheme and the second encoding scheme nor does it teach wherein the first encoding scheme uses a Level of Detail (LoD), and the second encoding scheme uses a Region Adaptive Hierarchical Transform (RAHT).
In the same field of endeavor, Hur teaches the table being shared between the first encoding scheme and the second encoding scheme (i.e., the point cloud encoder according to the embodiments (for example, the coefficient quantizer 40011) may quantize and inversely quantize the residuals; the quantization process is configured as shown in the following table; Tables 2, 3) (Tables 2, 3; para[0140]-[0142]) and teaches wherein the first encoding scheme uses a Level of Detail (LoD) (i.e., the transformed attributes are input to the RAHT transformer 40008 and/or the LOD generator 40009; the LOD generator 40009 according to the embodiments generates a level of detail (LOD) to perform prediction transform coding) (Fig. 4; para[0103], [0105]), and the second encoding scheme uses a Region Adaptive Hierarchical Transform (RAHT) (i.e., the transformed attributes are input to the RAHT transformer 40008 and/or the LOD generator 40009; the RAHT transformer 40008 according to the embodiments performs RAHT coding for predicting attribute information based on the reconstructed geometry information) (Fig. 4; para[0103], [0104]).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Gamei and Hur because Hur teaches devices and methods to process point cloud data with high efficiency by addressing latency and encoding/decoding complexity (See, for example, para[0003], [0009]-[0011] of Hur). Therefore, it would have been obvious to combine the teachings of Gamei and Hur.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kristin Dobbs whose telephone number is (571)270-7936. The examiner can normally be reached Monday and Thursday 9:30am-5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sathyanarayanan Perungavoor can be reached at (571)272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
KRISTIN DOBBS
Examiner
Art Unit 2488
/KRISTIN DOBBS/Examiner, Art Unit 2488