Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Claims 1-20 are pending, for a total of 20 claims.
Response to Amendment
Applicant's arguments, filed on December 13, 2025, have been entered and carefully considered. Claims 1, 6, 19, and 20 are amended, and claims 1-20 are pending.
Response to Arguments
2. On pages 7-10, applicant argues that "the combination of David, Said, Cheong, and Kim does not divide syntax elements for a video block into different groups based on where the syntax elements are coded," as recited in amended claim 1. While applicant's points are understood, the examiner respectfully disagrees, because David discloses in [Table 6-7] that the syntax element sao_band[cIdx][rx][ry] equal to 1 specifies that the band offset (BO) sample adaptive offset process is applied to the current coding tree block at position (rx, ry) for the color component cIdx, whereas sao_band[cIdx][rx][ry] equal to 0 specifies that the edge offset (EO) sample adaptive offset process is applied to the current coding tree block at position (rx, ry) for the color component cIdx; these syntax elements are thus divided into groups. Moreover, Said discloses that if video encoder 20 uses inter prediction to generate a predictive block of a PU of a current picture, video encoder 20 may generate the predictive block of the PU based on decoded samples of a reference picture (i.e., a picture other than the current picture). In HEVC, video encoder 20 generates a "prediction unit" syntax structure within a "coding_unit" syntax structure for inter predicted PUs, but does not generate a "prediction unit" syntax structure within a "coding_unit" syntax structure for intra predicted PUs; rather, in HEVC, syntax elements related to intra predicted PUs are included directly in the "coding_unit" syntax structure (see also fig. 18 and paras [0078], [0217]).
Therefore, the rejection has been maintained.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 4, and 6-22 of U.S. Patent No. 12,120,300 B2. Although the claims are not identical, they are not patentably distinct from each other, as detailed in the Double Patenting section below.
Therefore, the double patenting rejection has been maintained.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 4, and 6-22 of U.S. Patent No. 12,120,300 B2. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the claim chart below.
Claims 1, 2, 4, and 6-22 of U.S. Patent No. 12,120,300 B2
Claims 1-20 of current application 18/896,281 (Note: bold indicates a difference in the instant application)
A method of processing video data, comprising: dividing, for a conversion between a block of a video and a bitstream of the block, contexts used for residual coding associated with the block into different groups of contexts; applying controls on the different groups of contexts separately; and performing the conversion based on the controls, wherein the contexts used for residual coding are divided into the different groups of contexts based on an initial state of the contexts and/or a probability of the contexts, and wherein each of the different groups of contexts has a threshold to control whether a context coded method can be applied.
A method of processing video data, comprising: dividing, for a conversion between a block of a video and a bitstream of the block, syntax elements associated with the block into different groups of syntax elements; applying controls on the different groups of syntax elements separately; and performing the conversion based on the controls.
9. wherein different control strategies are applied to the different groups of syntax elements.
2. wherein different control strategies are applied to the different groups of syntax elements.
11. wherein the syntax elements include context coded syntax elements for residual coding, wherein the context coded syntax elements are classified into N groups, wherein each group has a threshold to control whether context coded method can be applied, and wherein N is an integer.
3. wherein the syntax elements include context coded syntax elements for residual coding, and the context coded syntax elements are classified into N groups, wherein each group has a threshold to control whether a context coded method can be applied, N being an integer.
12. wherein, for one group of the N groups, a corresponding counter is maintained to control a number of context coded bins that can be coded with Context-based Adaptive Binary Arithmetic Coding (CABAC) methods, wherein, when the number of context coded bins is larger than a threshold, context coded methods are disallowed, and wherein the threshold depends on an initial context and/or a probability of syntax elements in a group and is signaled at at least one of a sequence parameter set (SPS) level, a picture parameter set (PPS) level, a slice level, a picture level, or a tile group level.
4. wherein for one group of the N groups, a corresponding counter is maintained to control a number of context coded bins that can be coded with Context-based Adaptive Binary Arithmetic Coding (CABAC) methods, wherein when the number of context coded bins is larger than a threshold, context coded methods are disallowed, and wherein the threshold depends on an initial context and/or probability of syntax elements in a group and is signaled at at least one of sequence parameter set (SPS), picture parameter set (PPS), Slice, Picture and Tile group level.
7. The method of claim 6, wherein regrouping is allowed based on the state and/or the probability of each context.
5. wherein the syntax elements are divided into different groups of syntax elements based on an initial context and/or probability of a syntax element.
14. wherein the syntax elements are divided into different groups based on where [[the]] syntax element is coded, and wherein the syntax element is coded in a partitioning level, a coding unit (CU) level, a prediction unit (PU) level, a transform unit (TU) level, or a residual coding level.
6. wherein the syntax elements are divided into different groups of syntax elements based on where the syntax element is coded, wherein the syntax element is coded in partitioning level, in coding unit (CU) or prediction unit (PU) or transform unit (TU) level or residual coding level.
6. wherein grouping the contexts is changed dynamically, and wherein, when a state for a context is updated to be within a predefined state set, the context is assigned to a certain group.
7. wherein the grouping is changed dynamically, when a context for a syntax element is updated to be within a predefined context set, the syntax element is assigned to a certain group.
16. wherein regrouping is allowed based on the context and/or the probability of each syntax element, and wherein the regrouping is allowed in one of the following cases: a number of samples have been coded since a last regrouping; a given number of bits have been generated to the bitstream since the last regrouping; a given number of coding groups have been processed since the last regrouping; a probability difference in a group exceeds a first threshold; a number of context coded bin exceeds a second threshold; or the context and/or the probability is re-initialized.
8. wherein regrouping is allowed based on the context and/or a probability of each syntax element, wherein the regrouping is allowed in one of the following cases: a number of samples have been coded since the last regrouping; a given number of bits have been generated to the bitstream since the last regrouping; a given number of coefficient groups (CG) have been processed since the last regrouping; a probability difference in a group exceeds a certain threshold; a number of context coded bin exceeds a certain threshold; or the context and/or probability is re-initialized.
20. dividing contexts used for residual coding associated with a block of the video into different groups of contexts.
9. dividing contexts used for residual coding associated with the block into different groups of contexts; and applying controls on the different groups of contexts separately.
2. wherein different control strategies are applied to the different groups of contexts.
10. wherein different control strategies are applied to the different groups of contexts.
11. wherein the context coded syntax elements are classified into N groups, wherein each group has a threshold to control whether context coded method can be applied, and wherein N is an integer.
11. wherein the contexts are classified into N groups, wherein each group has a threshold to control whether context coded method can be applied, N being an integer.
4. wherein, for one group of the [[N]] different groups of contexts, a corresponding counter is maintained to control a number of context coded bins that can be coded with the contexts in the one group, wherein, when the number of context coded bins is larger than [[the]] a corresponding threshold, context coded methods with contexts in the one group are disallowed, and wherein the threshold depends on the initial state and/or the probability of the contexts in the one group and is signaled at at least one of a sequence parameter set (SPS) level, a picture parameter set (PPS) level, a slice level, a picture level, or a tile group level.
12. wherein for one group of the N groups, a corresponding counter is maintained to control a number of context coded bins that can be coded with the contexts in the group, wherein when the number of context coded bins is larger than a threshold, context coded methods with contexts in the group are disallowed, and wherein the threshold depends on an initial state and/or probability of contexts in a group and is signaled at at least one of sequence parameter set (SPS), picture parameter set (PPS), Slice, Picture and Tile group level.
1. wherein the contexts used for residual coding are divided into the different groups of contexts based on an initial state of the contexts and/or a probability of the contexts.
13. wherein the contexts are divided into different groups of contexts based on an initial state and/or probability of a context.
6. wherein grouping the contexts is changed dynamically, and wherein, when a state for a context is updated to be within a predefined state set, the context is assigned to a certain group.
14. wherein the grouping is changed dynamically, wherein when a state for a context is updated to be within a predefined state set, the context is assigned to a certain group.
7. wherein regrouping is allowed based on the state and/or the probability of each context.
15. wherein regrouping is allowed based on the state and/or probability of each context.
8. wherein the regrouping is allowed in one of the following cases: a number of samples have been coded since a last regrouping; a given number of bits have been generated to the bitstream since the last regrouping; a given number of coding groups have been processed since the last regrouping; a probability difference in a group exceeds a first threshold; a number of context coded bin exceeds a second threshold; or the context and/or the probability is re-initialized.
16. wherein the regrouping is allowed in one of the following cases: a number of samples have been coded since the last regrouping; a given number of bits have been generated to the bitstream since the last regrouping; a given number of coefficient groups (CG) have been processed since the last regrouping; a probability difference in a group exceeds a certain threshold; a number of context coded bin exceeds a certain threshold; or the context and/or probability is re-initialized.
17. wherein the conversion includes encoding the block into the bitstream.
17. wherein the conversion includes encoding the block into the bitstream.
18. wherein the conversion includes decoding the block from the bitstream.
18. wherein the conversion includes decoding the block from the bitstream.
19. An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to: divide, for a conversion between a block of a video and a bitstream of the block, contexts used for residual coding associated with the block into different groups of contexts, wherein the contexts used for residual coding are divided into the different groups of contexts based on an initial state of the contexts and/or a probability of the contexts; apply controls on the different groups of contexts separately; and perform the conversion based on the controls, wherein each of the different groups of contexts has a threshold to control whether a context coded method can be applied.
19. An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to: divide, for a conversion between a block of a video and a bitstream of the block, syntax elements associated with the block into different groups of syntax elements; apply controls on the different groups of syntax elements separately; and perform the conversion based on the controls.
20. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises: dividing contexts used for residual coding associated with a block of the video into different groups of contexts; applying controls on the different groups of contexts separately; and generating the bitstream based on the controls, wherein the contexts used for residual coding are divided into the different groups of contexts based on an initial state of the contexts and/or a probability of the contexts, and wherein each of the different groups of contexts has a threshold to control whether a context coded method can be applied.
20. A non-transitory computer-readable recording medium storing a bitstream of a video which is generated by a method performed by a video processing apparatus, wherein the method comprises: dividing syntax elements associated with a block of the video into different groups of syntax elements; applying controls on the different groups of [[contexts]] syntax elements separately; and generating the bitstream based on the controls.
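For orientation only, the per-group control recited in the claims above (a counter maintained per group of context coded syntax elements, with a threshold beyond which context coded methods are disallowed) can be sketched as follows. All names, group labels, and threshold values here are hypothetical illustrations, not taken from either the patent or the application:

```python
# Illustrative sketch (hypothetical names): each group of context coded
# syntax elements keeps its own counter; once the counter for a group
# reaches that group's threshold, further bins from that group must be
# bypass coded instead of context coded.

class GroupBudget:
    def __init__(self, threshold):
        self.threshold = threshold  # per-group limit on context coded bins
        self.count = 0              # context coded bins used so far

    def use_context_coding(self):
        """Return True if the next bin may still be context coded."""
        if self.count >= self.threshold:
            return False            # budget exhausted: bypass coding only
        self.count += 1
        return True

# N groups, each with its own threshold ("each group has a threshold to
# control whether a context coded method can be applied").
budgets = {"sig_flag": GroupBudget(threshold=4),
           "gt1_flag": GroupBudget(threshold=2)}

coded_modes = []
for group in ["sig_flag", "sig_flag", "gt1_flag", "gt1_flag", "gt1_flag"]:
    mode = "context" if budgets[group].use_context_coding() else "bypass"
    coded_modes.append(mode)

print(coded_modes)  # the third gt1_flag bin falls back to bypass coding
```

A real codec would maintain such counters per coding pass and signal the thresholds at the SPS, PPS, slice, picture, or tile group level, as the claims recite.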
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 5-10, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over David et al. (KR 20140090646 A) in view of Said et al. (US Pub. No. 2019/01100080 A1) and further in view of Cheong et al. (US 2015/0201220 A1).
Regarding claim 1, David teaches a method of processing video data([abstract]- a method for decoding a video bitstream), comprising:
dividing, for a conversion between a block of a video and a bitstream of the block, syntax elements associated with the block into different groups of syntax elements ([abstract; pg. 3, second para]- a method of decoding a video bitstream comprises the steps of: (a) receiving a video bitstream; (b) deriving the processed video data from the bitstream; (c) dividing the processed video data into blocks, each of the blocks being less than or equal to a picture; (d) deriving a SAO type from the video bitstream for each of the blocks, wherein the SAO type is selected from the group comprising one or more edge offset (EO) types and a single merged band offset (BO) type; (e) for each of the pixels in each of the blocks, determining a SAO subclass associated with the SAO type; (f) deriving an intensity offset from the video bitstream for a subclass associated with the SAO type; and (g) applying SAO compensation to each of the pixels in the processed video block, wherein the SAO compensation is based on the intensity offset in step (f)).
However, David does not explicitly disclose applying controls on the different groups of syntax elements separately; and performing the conversion based on the controls.
In an analogous art, Said teaches applying controls on the different groups of syntax elements separately([para 0069; 0159-162]- a pre-defined group of contexts; para [0158-0159]- to obtain better compression performance, the probability updates can be done on a per-context basis. For this purpose, a set of weight pairs (w.sub.1.sup.(c),w.sub.2.sup.(c)) and the set of thresholds T.sub.1.sup.(c) and T.sub.2.sup.(c) can be optimized for each context); and performing the conversion based on the controls([see in fig. 5 and para 0095 and 0158-0159]- to obtain better compression performance, the probability updates can be done on a per-context basis. For this purpose, a set of weight pairs (w.sub.1.sup.(c),w.sub.2.sup.(c)) and the set of thresholds T.sub.1.sup.(c) and T.sub.2.sup.(c) can be optimized for each context). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Said to the modified system of David to improve compression efficiency by exploiting the fact that, even when the data is finely divided in many classes (coding contexts), there is still much variability in the statistics of the data assigned for each class. So, instead of using a single “universal” adaptation technique for all classes, this disclosure proposes changing adaptation parameters according to each class, and within each class, further changing the adaptation parameters according to expected or observed probability values or measured variations in the estimates [Said; para 0076 ].
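The per-context adaptation Said describes (a weight pair and thresholds optimized for each context) can be loosely sketched as follows. The two-rate update rule, the specific weights, and the adaptation constants below are assumptions chosen for illustration only; they are not taken from the Said reference:

```python
# Loose illustrative sketch of per-context weighted probability
# estimation: two estimators adapt at different rates, and a per-context
# weight pair (w1, w2) blends their outputs. All constants are assumed.

def update(p_fast, p_slow, bin_val, a_fast=0.5, a_slow=0.0625):
    # each estimator moves toward the observed bin at its own rate
    p_fast += a_fast * (bin_val - p_fast)
    p_slow += a_slow * (bin_val - p_slow)
    return p_fast, p_slow

def combined(p_fast, p_slow, w1=0.5, w2=0.5):
    # per-context weight pair (w1, w2) blends the two estimates
    return w1 * p_fast + w2 * p_slow

p_fast, p_slow = 0.5, 0.5          # initial probability estimates
for b in [1, 1, 1, 0, 1]:          # observed bins for one context
    p_fast, p_slow = update(p_fast, p_slow, b)
p = combined(p_fast, p_slow)
print(round(p, 3))
```

The point of the per-context optimization, as Said notes, is that different classes of data adapt best with different parameters, so the weight pair and thresholds are chosen per context rather than universally.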
Regarding claim 2, Said teaches wherein different control strategies are applied to the different groups of syntax elements([para 0159-162]- a pre-defined group of contexts; para [0158-0159]- to obtain better compression performance, the probability updates can be done on a per-context basis. For this purpose, a set of weight pairs (w.sub.1.sup.(c),w.sub.2.sup.(c)) and the set of thresholds T.sub.1.sup.(c) and T.sub.2.sup.(c) can be optimized for each context).
Regarding claim 5, Said teaches wherein the syntax elements are divided into different groups of syntax elements based on an initial context and/or probability of a syntax element [para 0159-162]- a pre-defined group of contexts; para [0158-0159]- to obtain better compression performance, the probability updates can be done on a per-context basis. For this purpose, a set of weight pairs (w.sub.1.sup.(c),w.sub.2.sup.(c)) and the set of thresholds T.sub.1.sup.(c) and T.sub.2.sup.(c) can be optimized for each context).
Regarding claim 6, Said teaches that the syntax elements are divided into the different groups of syntax elements based on where the syntax element is coded, i.e., in partitioning level, in coding unit (CU) or prediction unit (PU) or transform unit (TU) level or residual coding level ([para 0053-0055 and 0057]- a CTU may comprise a single CTB and syntax structures used to encode the samples of the CTB. A CTU may also be referred to as a “tree block” or a “largest coding unit” (LCU). In this disclosure, a “syntax structure” may be defined as zero or more syntax elements present together in a bitstream in a specified order. In some codecs, an encoded picture is an encoded representation containing all CTUs of the picture. It should be noted that the CTBs and CTUs described above represent merely one manner of partitioning a picture into blocks, and the techniques of this disclosure are not limited to any particular type of block structure. Successor standards to HEVC are proposing alternatives to the CTU/CTB structure introduced above, and it is contemplated that the techniques of this disclosure may be used with such new block structures).
Regarding claim 7, Said teaches wherein the grouping is changed dynamically, when a context for a syntax element is updated to be within a predefined context set, the syntax element is assigned to a certain group ([para 00159-0163]- Simultaneously for a pre-defined group of contexts, and fixed for the rest. The condition for changing the context can also be based on any information shared by the encoder and decoder. For example, FIG. 12 shows a video slice (which can be a video frame), which is organized into CTU's which are independently coded. One example of implementation is to have context estimation parameters updated only at the beginning of those blocks, and not inside. The change can be based on counters kept by each context, or information that aggregates coding data, like the current number of compressed data bytes in the slice).
Regarding claim 8, Said teaches wherein regrouping is allowed based on the context and/or the probability of each syntax element, and wherein the regrouping is allowed in one of the following cases: a number of samples have been coded since the last regrouping; a given number of bits have been generated to the bitstream since the last regrouping; a given number of coefficient groups (CG) have been processed since the last regrouping; a probability difference in a group exceeds a certain threshold; a number of context coded bin exceeds a certain threshold; or the context and/or probability is re-initialized ([para 0218]- the number of probability states is 128, although other numbers of probability states could be defined, consistent with the techniques of this disclosure. TransIdxLPS table 1858 is used to determine which probability state is used for a next bin (bin n+1) when the previous bin (bin n) is an LPS. Regular decoding engine 1854 may also use a RangeLPS table 1856 to determine the range value for an LPS given a particular probability state σ. However, according to the techniques of this disclosure, rather than using all possible probability states σ of the TransIdxLPS table 1858, the probability state indexes σ are mapped to grouped indexes for use in RangeLPS table 1856. That is, each index into RangeLPS table 1856 may represent two or more of the total number of probability states).
Regarding claim 9, Said teaches dividing contexts used for residual coding associated with the block into different groups of contexts; and applying controls on the different groups of contexts separately [para 0159-162]- a pre-defined group of contexts; para [0158-0159]- to obtain better compression performance, the probability updates can be done on a per-context basis. For this purpose, a set of weight pairs (w.sub.1.sup.(c),w.sub.2.sup.(c)) and the set of thresholds T.sub.1.sup.(c) and T.sub.2.sup.(c) can be optimized for each context).
Regarding claim 10, Said teaches wherein different control strategies are applied to the different groups of contexts. ([para 0159-162]- a pre-defined group of contexts; para [0158-0159]- to obtain better compression performance, the probability updates can be done on a per-context basis. For this purpose, a set of weight pairs (w.sub.1.sup.(c),w.sub.2.sup.(c)) and the set of thresholds T.sub.1.sup.(c) and T.sub.2.sup.(c) can be optimized for each context).
Regarding claim 13, Said teaches wherein the syntax elements are divided into different groups of syntax elements based on an initial context and/or probability of a syntax element [para 0159-162]- a pre-defined group of contexts; para [0158-0159]- to obtain better compression performance, the probability updates can be done on a per-context basis. For this purpose, a set of weight pairs (w.sub.1.sup.(c),w.sub.2.sup.(c)) and the set of thresholds T.sub.1.sup.(c) and T.sub.2.sup.(c) can be optimized for each context).
Regarding claim 14, Said teaches wherein the grouping is changed dynamically, wherein when a state for a context is updated to be within a predefined state set, the context is assigned to a certain group. ([para 00159-0163]- Simultaneously for a pre-defined group of contexts, and fixed for the rest. The condition for changing the context can also be based on any information shared by the encoder and decoder. For example, FIG. 12 shows a video slice (which can be a video frame), which is organized into CTU's which are independently coded. One example of implementation is to have context estimation parameters updated only at the beginning of those blocks, and not inside. The change can be based on counters kept by each context, or information that aggregates coding data, like the current number of compressed data bytes in the slice).
Regarding claim 15, Said teaches wherein regrouping is allowed based on the state and/or probability of each context([para 00159-0163]- Simultaneously for a pre-defined group of contexts, and fixed for the rest. The condition for changing the context can also be based on any information shared by the encoder and decoder. For example, FIG. 12 shows a video slice (which can be a video frame), which is organized into CTU's which are independently coded. One example of implementation is to have context estimation parameters updated only at the beginning of those blocks, and not inside. The change can be based on counters kept by each context, or information that aggregates coding data, like the current number of compressed data bytes in the slice).
Regarding claim 16, Said teaches wherein the regrouping is allowed in one of the following cases: a number of samples have been coded since the last regrouping; a given number of bits have been generated to the bitstream since the last regrouping; a given number of coefficient groups (CG) have been processed since the last regrouping; a probability difference in a group exceeds a certain threshold; a number of context coded bin exceeds a certain threshold; or the context and/or probability is re-initialized([para 0218]-the number of probability states is 128, although other numbers of probability states could be defined, consistent with the techniques of this disclosure. TransIdxLPS table 1858 is used to determine which probability state is used for a next bin (bin n+1) when the previous bin (bin n) is an LPS. Regular decoding engine 1854 may also use a RangeLPS table 1856 to determine the range value for an LPS given a particular probability state σ. However, according to the techniques of this disclosure, rather than using all possible probability states σ of the TransIdxLPS table 1858, the probability state indexes σ are mapped to grouped indexes for use in RangeLPS table 1856. That is, each index into RangeLPS table 1856 may represent two or more of the total number of probability states); or the context and/or the probability is re-initialized([para 0098]- in the HEVC standard, the contexts are periodically re-initialized with a table defining, for each context, how to convert from a compression-quality parameter (known as quantization step, or quantization parameter (QP) value) (see References 7 and 8) to FSM states.).
Regarding claim 17, Said teaches wherein the conversion includes encoding the block into the bitstream([para 0060;0220]- After the bins are decoded by regular decoding engine 1854, a reverse binarizer 1860 may perform a reverse mapping to convert the bins back into the values of the non-binary valued syntax elements).
Regarding claim 18, Said teaches wherein the conversion includes decoding the block from the bitstream([para 0060;;0098; 0220]- After the bins are decoded by regular decoding engine 1854, a reverse binarizer 1860 may perform a reverse mapping to convert the bins back into the values of the non-binary valued syntax elements).
Regarding claim 19, the claim is interpreted and rejected for the same reason as set forth in claim 1. Hence, all limitations for claim 19 have been met in claim 1.
Regarding claim 20, the claim is interpreted and rejected for the same reason as set forth in claim 1. Hence, all limitations for claim 20 have been met in claim 1.
Claims 3, 4, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over David in view of Said and Cheong as applied to claim 1 above, and further in view of Kim et al. (WO 2015/194185 A1; cited by the applicant in the IDS).
Regarding claim 3, the combination of David, Said, and Cheong does not explicitly disclose wherein the syntax elements include context coded syntax elements for residual coding, and the context coded syntax elements are classified into N groups, wherein each group has a threshold to control whether a context coded method can be applied, N being an integer.
In an analogous art, Kim teaches wherein the syntax elements include context coded syntax elements for residual coding, and the context coded syntax elements are classified into N groups, wherein each group has a threshold to control whether a context coded method can be applied, N being an integer ([para 0128]- the electronic device 422 is configured to determine whether a count of context coded bins (of significance flag, greater_than_1 flag, and greater_than_2 flag) is greater than a threshold value. The electronic device 422 is configured to bypass code responsive to the count exceeding the threshold value. Therefore, if the count exceeds the threshold value, then all of the significance flag, greater_than_1 flag, and greater_than_2 flag that would be context coded may be bypass coded), and the threshold depends on the initial state and/or probability of contexts in a group and is signaled at at least one of SPS, PPS, Slice, Picture and Tile group level ([para 0016]- SPS or a picture parameter set (PPS)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the technique of Kim to the modified system of David, Said and Cheong to determine whether the binary symbol is to be decoded using a high throughput palette coding mode and utilizing palette coding in the extension of HEVC for encoding and/or decoding [Kim; abstract].
Regarding claim 4, Kim teaches wherein for one group of the N groups, a corresponding counter is maintained to control a number of context coded bins that can be coded with Context-based Adaptive Binary Arithmetic Coding (CABAC) methods, wherein when the number of context coded bins is larger than a threshold, context coded methods are disallowed, and wherein the threshold depends on an initial context and/or probability of syntax elements in a group and is signaled at at least one of sequence parameter set (SPS), picture parameter set (PPS), Slice, Picture and Tile group level ([para 0128]- the electronic device 422 is configured to determine whether a count of context coded bins (of significance flag, greater_than_1 flag, and greater_than_2 flag) is greater than a threshold value. The electronic device 422 is configured to bypass code responsive to the count exceeding the threshold value. Therefore, if the count exceeds the threshold value, then all of the significance flag, greater_than_1 flag, and greater_than_2 flag that would be context coded may be bypass coded; and [para 0016]- SPS or a picture parameter set (PPS)).
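The context-coded-bin budget described in the cited passages can be sketched as follows. The class and method names, and the example threshold, are illustrative assumptions for exposition; they are not taken from Kim or from the claims, and in practice the threshold would be signaled (e.g., in the SPS or PPS) rather than hard-coded.

```python
# Illustrative sketch of a per-group context-coded-bin budget: a counter
# tracks how many bins in the group have been context coded, and once the
# counter reaches the threshold the coder falls back to bypass coding.
# All names and the threshold value are assumptions for illustration.

class BinBudget:
    def __init__(self, threshold: int):
        self.threshold = threshold  # e.g., a value signaled in the SPS/PPS
        self.count = 0              # context coded bins consumed so far

    def code_bin(self) -> str:
        """Return which coding mode the next bin of this group would use."""
        if self.count >= self.threshold:
            return "bypass"         # budget exhausted: bypass code the bin
        self.count += 1
        return "context"            # budget remains: context code the bin

budget = BinBudget(threshold=3)
modes = [budget.code_bin() for _ in range(5)]
# the first three bins are context coded, the remaining bins bypass coded
```

One such counter would be maintained per group of syntax elements, so that exhausting the budget in one group does not disallow context coding in another.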
Regarding claim 11, Kim teaches wherein the contexts are classified into N groups, wherein each group has a threshold to control whether a context coded method can be applied, N being an integer ([para 0128]- the electronic device 422 is configured to determine whether a count of context coded bins (of significance flag, greater_than_1 flag, and greater_than_2 flag) is greater than a threshold value. The electronic device 422 is configured to bypass code responsive to the count exceeding the threshold value. Therefore, if the count exceeds the threshold value, then all of the significance flag, greater_than_1 flag, and greater_than_2 flag that would be context coded may be bypass coded), and the threshold depends on the initial state and/or probability of contexts in a group and is signaled at at least one of SPS, PPS, Slice, Picture and Tile group level ([para 0016]- SPS or a picture parameter set (PPS)).
Regarding claim 12, Kim teaches wherein for one group of the N groups, a corresponding counter is maintained to control a number of context coded bins that can be coded with the contexts in the group, wherein when the number of context coded bins is larger than a threshold, context coded methods with contexts in the group are disallowed, and wherein the threshold depends on an initial state and/or probability of contexts in a group and is signaled at at least one of sequence parameter set (SPS), picture parameter set (PPS), Slice, Picture and Tile group level ([para 0128]- the electronic device 422 is configured to determine whether a count of context coded bins (of significance flag, greater_than_1 flag, and greater_than_2 flag) is greater than a threshold value. The electronic device 422 is configured to bypass code responsive to the count exceeding the threshold value. Therefore, if the count exceeds the threshold value, then all of the significance flag, greater_than_1 flag, and greater_than_2 flag that would be context coded may be bypass coded; and [para 0016]- SPS or a picture parameter set (PPS)).
Citation of Pertinent Prior Art
The following prior art is made of record and not relied upon but is considered pertinent to applicant's disclosure:
1. Egilmez et al., US 2019/0200043 A1, discloses video coding and, more particularly, techniques for binary arithmetic coding of video data.
2. Zhang et al., US 2016/0353113 A1, discloses techniques related to an entropy coding module in block-based hybrid video coding.
3. Karczewicz et al., US 2020/0077117 A1, discloses determining a threshold number of regular coded bins for a first decoding pass.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MD NAZMUL HAQUE whose telephone number is (571) 272-5328. The examiner can normally be reached via IFW.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MD N HAQUE/Primary Examiner, Art Unit 2487