DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/09/2026 has been entered.
Status of the Application
Claims 1-20 are currently pending in this application.
Response to Arguments
Applicant's arguments have been fully considered, but they are rendered moot in view of the new ground(s) of rejection necessitated by applicant's amendment(s).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 7-10, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over CHENG et al. (Hereafter, “Cheng”) [US 2016/0227216 A1] in view of Chuang et al. (Hereafter, “Chuang”) [US 10,979,726 B2].
In regards to claim 1, Cheng discloses an apparatus ([Abstract] apparatus) comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor [Fig. 1 and 2], cause the apparatus at least to: store syntax element values based upon positions of syntax elements in a second coding tree unit located in a picture, a slice, or a tile ([0004] a largest CU (LCU), which is also referred as coded tree unit (CTU) in HEVC [0032 and Fig. 6] The neighbor information associated with neighboring CUs (i.e., the left CU and the above CU) is temporarily stored in a buffer. Accordingly, the buffer for storing decoded information from neighboring CUs in the same LCU or other CUs in a neighboring LCU in the same LCU row is referred as “short term neighbor buffer”. The decoded neighbor information used for context formation may comprise coding parameters such as pred mode, pcm_flag and intra_flag.); determine to start to encode or decode a first coding tree unit located in the picture, the slice, or the tile ([0032] The bin decoding process for the current coding unit (640). The processing order within each LCU is indicated by the arrows.), wherein the second coding tree unit is at least partially different from the first coding tree unit ([Fig. 6] left CU (642) is in LCU (610) and the current coding unit (640) is in LCU (620)), and wherein the second coding tree unit comprises a previously encoded or decoded coding tree unit ([Fig. 6 and 0032] decoded information from left CU (642)); in response to determining to start to encode or decode the first coding tree unit ([Fig. 6] LCU (620)), determine at least one of the stored syntax element values based, at least partially, on a location of the first coding tree unit in the picture, the slice, or the tile ([0032] The bin decoding process for the current coding unit (640) requires decoded information from left CU (642) and above CU (644). 
In order to improve processing efficiency, the neighbor information associated with neighboring CUs (i.e., the left CU and the above CU) is temporarily stored in a buffer.); and update at least one state variable of the apparatus based, at least partially, on the at least one stored syntax element value, wherein the at least one stored syntax element value is fed to an arithmetic coding engine update process ([0028] In step 440, a context model is determined based on neighboring data and syntax information is decoded. In step 450, the syntax bin is decoded. In step 460, the context model is updated. [0034] The context model update unit (733, 743) is used to generate new context model and update context model stored in context local buffer (731, 741) during bin decoding. The bin decode unit (735, 745) performs the task of binary arithmetic decoding or bypass decoding using the updated context model from the context model update unit (733, 743).).
Chuang discloses an apparatus ([Abstract] A method and apparatus perform palette coding of a block of video data by initializing the palette or triplet palette or using a selected palette or triplet palette from a preceding image area for the beginning block of the current image area.) comprising: at least one processor ([Col. 16] The input data may be retrieved from memory (e.g., computer memory, buffer (RAM or DRAM) or other media) or from a processor.); and at least one memory storing instructions that, when executed by the at least one processor ([Col. 16] The input data may be retrieved from memory (e.g., computer memory, buffer (RAM or DRAM) or other media) or from a processor. [Col. 17] Embodiment of the present invention as described above may be implemented in various hardware, software codes, or a combination of both. For example, an embodiment of the present invention can be a circuit integrated into a video compression chip or program code integrated into video compression software to perform the processing described herein. An embodiment of the present invention may also be program code to be executed on a Digital Signal Processor (DSP) to perform the processing described herein. The invention may also involve a number of functions to be performed by a computer processor, a digital signal processor, a microprocessor, or field programmable gate array (FPGA). These processors can be configured to perform particular tasks according to the invention, by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention. The software code or firmware code may be developed in different programming languages and different formats or styles. The software code may also be compiled for different target platforms. 
However, different code formats, styles and languages of software codes and other means of configuring code to perform the tasks in accordance with the invention will not depart from the spirit and scope of the invention.), cause the apparatus at least to: ([Col. 13 and Fig. 1] In FIG. 1, each block stands for one CTU and there are four CTU rows in a picture. Each CTU row forms a wavefront substream that can be processed independently by an encoding or a decoding thread. The “X” symbols represent the current CTU under processing for the multiple threads. [Fig. 1] top right X in Wavefront 1), wherein the second coding tree unit is at least partially different from the first coding tree unit ([Col. 13 and Fig. 1] the last CU (indicated by “p4”) of the left CTU), and wherein the second coding tree unit comprises a previously encoded or decoded coding tree unit ([Col. 13 and Fig. 1] As shown in FIG. 1, a first CU (indicated by “p3”) in a current CTU has to wait for the last CU (indicated by “p4”) of the left CTU to finish. Again, the dependency is indicated by a curved arrow line pointing from “p3” to “p4”. Similar dependency on the left CTU is indicated by curved arrows for the CTU being process (indicated by “X”).); in response to determining to start to encode or decode the first coding tree unit, determine at least one of the stored syntax element values based, at least partially, on a location of the first coding tree unit in the picture, the slice, or the tile ([Col. 15] Inherit the last coded palette from the CU according to the CABAC synchronization point in WPP.); and update at least one state variable of the apparatus based, at least partially, on the at least one stored syntax element value ([Col. 15] In HEVC, at the start of each CTU row, the CABAC states are initialized based on the CABAC states of the synchronization point in the upper CTU row. The position of the synchronization point in upper CTU row can be defined in PPS. 
According to this embodiment, the synchronization position of CABAC initialization and the inheritance position of the last coded palette initialization are unified. At the beginning of each CTU row, the initial palette colors for the last coded palette of the beginning CU in the current CTU row are copied from the updated last coded palette of the CU at the CABAC synchronization point in upper CTU row. For example, in FIG. 1, the last CU of the second CTU from the upper CTU row is the CABAC synchronization point (labelled as “p2”). In FIG. 2, the updated last coded palette of the last CU (labelled as “B”) of the second CTU (i.e., the above-right CTU of the current CTU) from the upper CTU row is used as the initial last coded palette. The CU for states of CABAC synchronization point and the CU for the beginning CU to inherit palette/triplet palette predictors are unified.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Cheng's storing of neighboring information for the left CU and above CU of a neighboring CTU (LCU) relative to the current CTU with the explicit inheritance of information from CUs throughout different CTUs in the picture, as taught by Chuang, in order to improve the performance of the coding system [See Chuang].
In regards to claim 2, the limitations of claim 1 have been addressed. Cheng discloses wherein the at least one stored syntax element value comprises, at least, a value of a syntax element of the syntax elements in the second coding tree unit, the syntax element belonging to a coding unit within the second coding tree unit [0032 and Fig. 6]; and the instructions that, when executed by the at least one processor, cause the apparatus at least to: determine the value of the syntax element ([0010] the syntax elements are binarized into bins wherein the value of the bin is known); determine a location of the coding unit within the second coding tree unit ([0032 and Fig. 6] locations of the CUs are known/determined); and store the value of the syntax element in response to the location of the coding unit meeting at least one predetermined criteria ([0032] The bin decoding process for the current coding unit (640) requires decoded information from left CU (642) and above CU (644). In order to improve processing efficiency, the neighbor information associated with neighboring CUs (i.e., the left CU and the above CU) is temporarily stored in a buffer.).
Chuang discloses determine the value of the syntax element ([Col. 3] One or more syntax elements of each block of the current image can be coded using arithmetic coding, and states of arithmetic coding, such as CABAC (context adaptive binary arithmetic coding), for the beginning block of the current image area can inherit the states of the selected block in the preceding image area. [Col. 13-16] palette value of the coding unit); determine a location of the coding unit within the second coding tree unit ([Col. 3-4] When the selected palette or triplet palette is used as the palette predictor, the selected block may also correspond to a selected CU in a selected CTU in the preceding image area located above a beginning CTU in the current image area, wherein the beginning CTU containing the beginning block. The selected CU in the selected CTU can be predefined, such as to a nearest CU with respect to the beginning block. [Col. 13-16] position of the CU for a previously coded CTU); and store the value of the syntax element in response to the location of the coding unit meeting at least one predetermined criteria ([Col. 15-16] Consequently, the palette of the first palette coded CU in the bottom CU row of the above CTU (i.e., CU-a if available, otherwise CU-b if available, etc.) is stored until it is accessed and used when coding the first palette coded CU in the current or following CTU row.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Cheng with the teachings of Chuang in order to improve the performance of the coding system [See Chuang].
In regards to claim 3, the limitations of claim 2 have been addressed. Cheng discloses wherein the location of the coding unit within the second coding tree unit is proximate the location of the first coding tree unit ([Fig. 6] left CU 642 in LCU 610 is located next to LCU 620).
In regards to claim 4, the limitations of claim 2 have been addressed. Cheng discloses wherein the at least one predetermined criteria comprises at least one of: the location of the coding unit being at a bottom of the second coding tree unit, the location of the coding unit being at a left-most bottom coding unit of the second coding tree unit, the location of the coding unit being a location in a bottom left half of the second coding tree unit, or the location of the coding unit being a location in a bottom right half of the second coding tree unit ([0032] the neighbor information associated with the bottom right CU 642 in LCU 610 is stored in the buffer).
In regards to claim 7, the limitations of claim 8 have been addressed below. Cheng discloses wherein the at least one stored syntax element value further comprises at least one of: one or more stored syntax element values of bottom coding units in a coding tree unit above the first coding tree unit ([0032] The neighbor information may also be used by CUs in another LCU row. Since the picture may be processed from one LCU row and another LCU row. The neighbor information may need to be stored for a whole LCU row (e.g. above LCU row 630). Therefore, the neighbor information storage as required for other LCU row or other macroblock row is referred as “neighbor data storage”, which is much long term than that stored in the short term neighbor buffer.), or one or more stored syntax element values of bottom coding units in a coding tree unit above-left the first coding tree unit.
In regards to claim 8, the limitations of claim 1 have been addressed. Cheng discloses wherein the at least one stored syntax element value comprises at least one of: one or more stored syntax element values of bottom coding units in the second coding tree unit ([0032] The bin decoding process for the current coding unit (640) requires decoded information from left CU (642) and above CU (644). In order to improve processing efficiency, the neighbor information associated with neighboring CUs (i.e., the left CU and the above CU) is temporarily stored in a buffer.), or one or more stored syntax element values of bottom-right coding units in the second coding tree unit; and the second coding tree unit is spatially located above the first coding tree unit in a picture ([0032] The neighbor information may also be used by CUs in another LCU row. Since the picture may be processed from one LCU row and another LCU row. The neighbor information may need to be stored for a whole LCU row (e.g. above LCU row 630). Therefore, the neighbor information storage as required for other LCU row or other macroblock row is referred as “neighbor data storage”, which is much long term than that stored in the short term neighbor buffer.).
Cheng discloses the storing of neighboring information in the same LCU row or the above LCU row. The processing order can be seen in Fig. 6 of Cheng, wherein the top-left CU of the LCU is processed first. One of ordinary skill in the art would understand that the top-left CU would require neighboring information from the above CU and the left CU, just as the bottom-left CU does. Therefore, it would have been obvious to one of ordinary skill in the art that the neighboring information from the above CU would be in the above LCU row, which is above LCU 620 [See Cheng, 0032 and Fig. 6].
In regards to claim 9, the limitations of claim 1 have been addressed. Cheng fails to explicitly disclose wherein no coding tree units in the picture, the slice, or the tile are located above the first coding tree unit, wherein the second coding tree unit is to a right of the first coding tree unit, and wherein the at least one stored syntax element value comprises one or more stored syntax element values of a right side of the second coding tree unit.
Chuang discloses wherein no coding tree units in the picture, the slice, or the tile are located above the first coding tree unit, wherein the second coding tree unit is to a right of the first coding tree unit, and wherein the at least one stored syntax element value comprises one or more stored syntax element values of a right side of the second coding tree unit ([Col. 16 and Fig. 3] In another example, if none of the CUs in the bottom row of above CTU is palette coded, the CUs in the bottom CU row of above-right CTU will be checked, from left to right, the palette of the first palette coded CU is used for current CTU row initialization. If none of the CUs in the bottom CU row of above-right CTU is palette coded, the CUs in the bottom CU row of the CTU on the right side of the above-right CTU will be checked, from left to right. If a palette coded CU is found, the palette of the first palette coded CU is used for the current CTU row initialization.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Cheng with the teachings of Chuang in order to improve the performance of the coding system [See Chuang].
In regards to claim 10, the limitations of claim 1 have been addressed. Cheng discloses wherein the at least one state variable is updated based, at least partially, on at least one characteristic of the first coding tree unit ([0028] In step 440, a context model is determined based on neighboring data and syntax information is decoded. In step 450, the syntax bin is decoded. In step 460, the context model is updated. [0034] The context model update unit (733, 743) is used to generate new context model and update context model stored in context local buffer (731, 741) during bin decoding. The bin decode unit (735, 745) performs the task of binary arithmetic decoding or bypass decoding using the updated context model from the context model update unit (733, 743).), and wherein the at least one characteristic of the first coding tree unit comprises at least one of: a location of the first coding tree unit in the picture, the slice, or the tile, or a location of a current coding unit in the first coding tree unit ([0032] The bin decoding process for the current coding unit (640) requires decoded information from left CU (642) and above CU (644). In order to improve processing efficiency, the neighbor information associated with neighboring CUs (i.e., the left CU and the above CU) is temporarily stored in a buffer.).
Claim 15 lists all the same elements of claim 1, but in method form rather than apparatus form. Therefore, the supporting rationale of the rejection to claim 1 applies equally as well to claim 15.
Claim 16 lists all the same elements of claim 2, but in method form rather than apparatus form. Therefore, the supporting rationale of the rejection to claim 2 applies equally as well to claim 16.
Claim 17 lists all the same elements of claim 3, but in method form rather than apparatus form. Therefore, the supporting rationale of the rejection to claim 3 applies equally as well to claim 17.
Claim 18 lists all the same elements of claim 4, but in method form rather than apparatus form. Therefore, the supporting rationale of the rejection to claim 4 applies equally as well to claim 18.
Claim 19 lists all the same elements of claim 8, but in method form rather than apparatus form. Therefore, the supporting rationale of the rejection to claim 8 applies equally as well to claim 19.
Claim 20 lists all the same elements of claim 1, but in non-transitory computer-readable medium form rather than apparatus form. Therefore, the supporting rationale of the rejection to claim 1 applies equally as well to claim 20.
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng in view of Chuang, and further in view of Zhu et al. (Hereafter, “Zhu”) [US 2023/0016377 A1].
In regards to claim 5, the limitations of claim 2 have been addressed. Cheng fails to explicitly disclose wherein the value of the syntax element is stored in a syntax element value storage, wherein the syntax element value storage comprises at least one of: an array, a matrix, a vector, or a list.
Zhu discloses wherein the value of the syntax element is stored in a syntax element value storage, wherein the syntax element value storage comprises at least one of: an array, a matrix, a vector, or a list ([0686] The array PaletteIndexIdc[i] stores the i-th palette_index_idc explicitly signalled or inferred.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng with the palette index for the coding unit being stored in an array, as taught by Zhu, in order to improve the quality of decompressed or decoded digital video or images [See Zhu].
In regards to claim 6, the limitations of claim 2 have been addressed. Cheng fails to explicitly disclose wherein the value of the syntax element is stored based, at least partially, on at least one of: a number of stored syntax element values associated with the first coding tree, or a number of stored syntax element values associated with the syntax element.
Zhu discloses wherein the value of the syntax element is stored based, at least partially, on at least one of: a number of stored syntax element values associated with the first coding tree, or a number of stored syntax element values associated with the syntax element ([0686] palette_index_idc is an indication of an index to the array represented by CurrentPaletteEntries. The value of palette_index_idc shall be in the range of 0 to MaxPaletteIndex, inclusive, for the first index in the block and in the range of 0 to (MaxPaletteIndex−1), inclusive, for the remaining indices in the block. The variable PaletteIndexIdc[i] stores the i-th palette_index_idc explicitly signalled or inferred. The variable MaxPaletteIndex specifies the maximum possible value for a palette index for the current coding unit. The value of MaxPaletteIndex is set equal to CurrentPaletteSize+palette_escape_val_present_flag if the cu_palette_ibc_mode is 0. Otherwise, if the cu_palette_ibc_mode is 1, the MaxPaletteIndex is set equal to CurrentPaletteSize+palette_escape_val_present_flag+1.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng such that the palette indices have a maximum number of values stored in the array, as taught by Zhu, in order to improve the quality of decompressed or decoded digital video or images [See Zhu].
Claims 11, 13, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng in view of Chuang, and further in view of LI et al. (Hereafter, “Li”) [US 2023/0345004 A1].
In regards to claim 11, the limitations of claim 1 have been addressed. Cheng fails to explicitly disclose wherein updating the at least one state variable comprises averaging a current value of the at least one state variable with the at least one stored syntax element value.
Li discloses wherein updating the at least one state variable comprises averaging a current value of the at least one state variable with the at least one stored syntax element value ([0072] A probability estimate of pStateIdx can be an average of estimates from the two hypotheses (e.g., pStateIdx0 and pStateIdx1).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng with the use of the average of the estimates for the probability state estimation, as taught by Li, in order to improve the accuracy of the probability estimation [See Li].
In regards to claim 13, the limitations of claim 1 have been addressed. Cheng fails to explicitly disclose wherein the at least one state variable comprises a short-term estimator.
Li discloses wherein the at least one state variable comprises a short-term estimator ([0072-0073] pStateIdx0).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng in view of Chuang with the use of pStateIdx0 as a probability estimate, as taught by Li, in order to improve the accuracy of the probability estimation [See Li].
In regards to claim 14, the limitations of claim 1 have been addressed. The combination of Cheng and Chuang fails to explicitly disclose wherein a maximum number of the at least one stored syntax element value is based, at least partially, on at least one of: a frame quantization parameter, a slice quantization parameter, or a level of the apparatus.
Li discloses wherein a maximum number of the at least one stored syntax element value is based, at least partially, on at least one of: a frame quantization parameter, a slice quantization parameter, or a level of the apparatus ([0042] Also necessary for compliance can be that the complexity of the coded video sequence is within bounds as defined by the level of the video compression technology or standard. In some cases, levels restrict the maximum picture size, maximum frame rate, maximum reconstruction sample rate (measured in, for example megasamples per second), maximum reference picture size, and so on. Limits set by levels can, in some cases, be further restricted through Hypothetical Reference Decoder (HRD) specifications and metadata for HRD buffer management signaled in the coded video sequence.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng in view of Chuang with the use of level limits for buffer management, as taught by Li, in order to improve the CABAC efficiency [See Li].
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Cheng in view of Chuang, and further in view of Mukherjee [US 9,774,856 B1].
In regards to claim 12, the limitations of claim 1 have been addressed. Cheng fails to explicitly disclose wherein updating the at least one state variable comprises determining a weighted sum of a current value of the at least one state variable and the at least one stored syntax element value.
Mukherjee discloses wherein updating the at least one state variable comprises determining a weighted sum of a current value of the at least one state variable and the at least one stored syntax element value ([Col. 10] the adapted probabilities are determined through the sum of the weighted current probabilities and the forward updated probabilities).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Cheng with the use of the weighted sum of the current probability and the forward update probability to determine the adaptive probability, as taught by Mukherjee, in order to improve the encoding and/or decoding of different portions of the data stream [See Mukherjee].
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kaitlin A Retallick whose telephone number is (571)270-3841. The examiner can normally be reached Monday-Friday 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Kelley can be reached at (571) 272-7331. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KAITLIN A RETALLICK/Primary Examiner, Art Unit 2482