Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the amendment filed 12/30/2025, in which claims 1-18 are pending.
Response to Arguments
Applicant’s arguments, see pages 8-11, filed 12/30/2025, with respect to the rejections of the claims have been fully considered. Amended claim 13 overcomes the rejection under 35 U.S.C. 112(b), and there now exists a functional relationship between the claimed bitstream and the non-transitory computer-readable medium in amended claim 13; hence the rejection under 35 U.S.C. 102(a)(1) as being anticipated by Zhang et al. (US 2017/0064336 A1) is withdrawn. Further, the arguments with respect to the nonstatutory double patenting rejections are moot in view of the new grounds of rejection as being unpatentable over claims 1-6 of U.S. Patent No. 11,985,356 B2 in view of Zhang et al. (US 2016/0065964 A1), as described below.
Nonstatutory double patenting (NSDP) rejection
Applicant argues that Applicant has amended independent claim 1 to recite, in part: “in accordance with a determination that the above coding unit is within the coding tree unit, determining a context index of a context model of the current coding unit based, at least in part, on a syntax element associated with the above coding unit retrieved from a line buffer associated with the coding tree unit, and decoding, from the video bitstream, a corresponding syntax element for the current coding unit in accordance with the context index of the context model of the current coding unit”; and
“in accordance with a determination that the above coding unit is not within the coding tree unit, determining a context index of a context model of the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit, and decoding, from the video bitstream, a corresponding syntax element for the current coding unit in accordance with the context index of the context model of the current coding unit” (emphasis added).
Applicant respectfully asserts that these limitations are not recited in the claims of the Parent Patent. Specifically, the Parent Patent claims do not recite assigning a default value of 0 or 1 to the syntax element when the above coding unit is not within the coding tree unit, or determining a context index for a context model based on that default assignment. Because the present claims recite these additional limitations, the claimed subject matter is narrower in at least these respects and is not coextensive with the Parent Patent claims. Accordingly, the present claims are patentably distinct from the Parent Patent claims. Withdrawal of the nonstatutory double patenting rejections is respectfully requested.
Examiner respectfully clarifies that nonstatutory double patenting exists: claims 1-6 remain unpatentable over claims 1-6 of U.S. Patent No. 11,985,356 B2 in view of Zhang et al. (US 2016/0065964 A1), as described in detail below. U.S. Patent No. 11,985,356 B2 discloses the limitations as shown in the table below, and Zhang teaches assigning a default value of 0 or 1 to the syntax element when the above coding unit is not within the coding tree unit, and determining a context index for a context model based on that default assignment. Para. [0007] teaches that in 3D-HEVC, CABAC is used to code the control flags, i.e., ic_flag and arp_flag, based on a context model; there are 3 context models for the control flags, denoted as X_model[0], X_model[1] and X_model[2], where X corresponds to “ic” or “arp”, and for the current block, X_model[idx] is chosen to code X_flag. Para. [0039] teaches that the above neighboring block is considered unavailable if it is in a CTU different from the CTU of the current block. Para. [0044] teaches that only two context models are required to code X_flag; for the current block, X_model[idx] is chosen to code X_flag, where idx is calculated as idx=X_flag(B), and X_flag(B) represents X_flag in the above neighboring block if the above neighboring block is located in the current CTU row; otherwise, idx=0, which implies that the above neighboring block is not available. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize this limitation in the method of the conflicting patent claims, since context selection is determined based on selected information associated with one or more neighboring blocks of the current block, conditionally depending on whether the one or more neighboring blocks are available, and the syntax element is encoded or decoded using context-based coding based on the context selection.
Hence the present claims are not patentably distinct from the Parent Patent claims in view of Zhang, and the nonstatutory double patenting rejections are maintained.
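For completeness of the record, the context-index selection for which Zhang (US 2016/0065964 A1), paras. [0039] and [0044], is cited can be sketched as follows. This is an illustrative sketch only; the function and parameter names are not drawn from Zhang or from the claims.

```python
def select_context_index(above_flag, above_in_current_ctu, default_value=0):
    """Illustrative sketch of the context-index selection described in
    Zhang, paras. [0039] and [0044]: idx = X_flag(B) when the above
    neighboring block lies in the current CTU row; otherwise idx falls
    back to a default value (0 in Zhang, i.e., the above neighbor is
    treated as unavailable). Names here are hypothetical."""
    if above_in_current_ctu:
        return above_flag   # idx = X_flag(B): use the above neighbor's flag
    return default_value    # above neighbor unavailable: use the default
```

On this reading, the context model X_model[idx] is then chosen using the returned index, so a default of 0 or 1 directly determines which context model codes the syntax element when the above coding unit is outside the coding tree unit.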
Double Patenting
4. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
5. Claims 1-6 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6 of U.S. Patent No. 11,985,356 B2 in view of Zhang et al. (US 2016/0065964 A1).
Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claims are obvious over the conflicting patent claims.
The difference between the instant claims and the conflicting patent claims is the addition of the limitation “determining a context index of a context model of the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit” in the instant claims. However, Zhang discloses determining a context index of a context model of the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit (Para. [0007] teaches that in 3D-HEVC, CABAC is used to code the control flags, i.e., ic_flag and arp_flag, based on a context model; there are 3 context models for the control flags, denoted as X_model[0], X_model[1] and X_model[2], where X corresponds to “ic” or “arp”, and for the current block, X_model[idx] is chosen to code X_flag; Para. [0039] teaches that the above neighboring block is considered unavailable if it is in a CTU different from the CTU of the current block; Para. [0044] teaches that only two context models are required to code X_flag; for the current block, X_model[idx] is chosen to code X_flag, where idx is calculated as idx=X_flag(B), and X_flag(B) represents X_flag in the above neighboring block if the above neighboring block is located in the current CTU row; otherwise, idx=0, which implies that the above neighboring block is not available). See the table below.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize this limitation in the method of the conflicting patent claims, since context selection is determined based on selected information associated with one or more neighboring blocks of the current block, conditionally depending on whether the one or more neighboring blocks are available, and the syntax element is encoded or decoded using context-based coding based on the context selection.
Instant application: 18/633,634
Patent No.: US 11,985,356 B2
1. A method of decoding a syntax element for a current coding unit of video data, the method comprising: receiving a video bitstream;
identifying, for the current coding unit, an above coding unit and a coding tree unit including the current coding unit;
in accordance with a determination that the above coding unit is within the coding tree unit,
determining a context index of a context model of the current coding unit based,
at least in part, on a syntax element associated with the above coding unit retrieved from a line buffer associated with the coding tree unit;
and decoding, from the video bitstream, a corresponding syntax element for the current coding unit in accordance with the context index of the context model of the current coding unit; and in accordance with a determination that the above coding unit is not within the coding tree unit, determining a context index of a context model of the current coding unit
based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit, and decoding, from the video bitstream, a corresponding syntax element for the current coding unit in accordance with the context index of the context model of the current coding unit.
1. A method of decoding a syntax element for a current coding unit of video data, the method comprising:
identifying, for the current coding unit, an above coding unit and a coding tree unit including the current coding unit;
in accordance with a determination that the above coding unit is within the coding tree unit,
decoding, from a video bitstream, a corresponding syntax element for the current coding unit based,
at least in part, on a syntax element associated with the above coding unit retrieved from a line buffer associated with the coding tree unit;
and in accordance with a determination that the above coding unit is not within the coding tree unit, decoding, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on a default value assigned to the syntax element associated with the above coding unit.
2. The method of claim 1, wherein the determining a context index of a context model of the current coding unit
based, at least in part, on the syntax element associated with the above coding unit retrieved from the line buffer further comprises: updating the corresponding syntax element for the current coding unit in accordance with a width comparison of the above coding unit and the current coding unit and the syntax element associated with the above coding unit retrieved from the line buffer; determining a context index of the current coding unit based on the updated corresponding syntax element for the current coding unit; and decoding, from the video bitstream, the corresponding syntax element for the current coding unit in accordance with the context index of the current coding unit.
2. The method of claim 1, wherein the decoding, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on the syntax element associated with the above coding unit retrieved from the line buffer further comprises: updating the corresponding syntax element for the current coding unit in accordance with a width comparison of the above coding unit and the current coding unit and the syntax element associated with the above coding unit retrieved from the line buffer;
determining a context index of the current coding unit based on the updated corresponding syntax element for the current coding unit; and decoding, from the video bitstream, the corresponding syntax element for the current coding unit in accordance with the context index of the current coding unit.
3. The method of claim 1, wherein the decoding, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit further comprises: updating the corresponding syntax element for the current coding unit in accordance with a height comparison of a left coding unit of the current coding unit and the current coding unit and the default value 0 or 1 assigned to the syntax element associated with the above coding unit; determining a context index of the current coding unit based on the updated corresponding syntax element for the current coding unit; and decoding, from the video bitstream, the corresponding syntax element for the current coding unit in accordance with the context index of the current coding unit.
3. The method of claim 1, wherein the decoding, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on a default value assigned to the syntax element associated with the above coding unit further comprises: updating the corresponding syntax element for the current coding unit in accordance with a height comparison of a left coding unit of the current coding unit and the current coding unit and the default value assigned to the syntax element associated with the above coding unit; determining a context index of the current coding unit based on the updated corresponding syntax element for the current coding unit; and decoding, from the video bitstream, the corresponding syntax element for the current coding unit in accordance with the context index of the current coding unit.
4. The method of claim 1, wherein the syntax element is a binary flag.
4. The method of claim 1, wherein the syntax element is a binary flag.
5. The method of claim 4, wherein the syntax element indicates that the current coding unit is encoded in an intra prediction mode, an intra block copy mode, a matrix-based intra prediction mode, or an affine mode.
5. The method of claim 4, wherein the syntax element indicates that the current coding unit is encoded in an intra prediction mode, an intra block copy mode, a matrix-based intra prediction mode, or an affine mode.
6. The method of claim 1, wherein the line buffer is associated with the coding tree unit.
6. The method of claim 1, wherein the line buffer is associated with the coding tree unit.
6. Claims 7-12 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 13-18 of U.S. Patent No. 11,985,356 B2 in view of Zhang et al. (US 2016/0065964 A1).
Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claims are obvious over the conflicting patent claims.
The difference between the instant claims and the conflicting patent claims is the addition of the limitation “determining a context index of a context model of the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit” in the instant claims. However, Zhang discloses determining a context index of a context model of the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit (Para. [0007] teaches that in 3D-HEVC, CABAC is used to code the control flags, i.e., ic_flag and arp_flag, based on a context model; there are 3 context models for the control flags, denoted as X_model[0], X_model[1] and X_model[2], where X corresponds to “ic” or “arp”, and for the current block, X_model[idx] is chosen to code X_flag; Para. [0039] teaches that the above neighboring block is considered unavailable if it is in a CTU different from the CTU of the current block; Para. [0044] teaches that only two context models are required to code X_flag; for the current block, X_model[idx] is chosen to code X_flag, where idx is calculated as idx=X_flag(B), and X_flag(B) represents X_flag in the above neighboring block if the above neighboring block is located in the current CTU row; otherwise, idx=0, which implies that the above neighboring block is not available). See the table below.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize this limitation in the method of the conflicting patent claims, since context selection is determined based on selected information associated with one or more neighboring blocks of the current block, conditionally depending on whether the one or more neighboring blocks are available, and the syntax element is encoded or decoded using context-based coding based on the context selection.
Instant application: 18/633,634
Patent No.: US 11,985,356 B2
7. An electronic apparatus comprising: one or more processing units; memory coupled to the one or more processing units; and a plurality of programs stored in the memory that, when executed by the one or more processing units, cause the electronic apparatus to: receive a video bitstream; identify, for a current coding unit of video data, an above coding unit and a coding tree unit including the current coding unit; in accordance with a determination that the above coding unit is within the coding tree unit, decode, from the video bitstream, a corresponding syntax element for the current coding unit based, at least in part, on a syntax element associated with the above coding unit retrieved from a line buffer associated with the coding tree unit; and in accordance with a determination that the above coding unit is not within the coding tree unit, decode, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit, and decode, from the video bitstream, a corresponding syntax element for the current coding unit in accordance with the context index of the context model of the current coding unit.
13. An electronic apparatus comprising: one or more processing units; memory coupled to the one or more processing units; and a plurality of programs stored in the memory that, when executed by the one or more processing units, cause the
electronic apparatus to:
identify, for a current coding unit of video data, an above coding unit and a coding tree unit including the current coding unit; in accordance with a determination that the above coding unit is within the coding tree unit, decode, from a video bitstream, a corresponding syntax element for the current coding unit based, at least in part, on a syntax element associated with the above coding unit retrieved from a line buffer associated with the coding tree unit; and in accordance with a determination that the above coding unit is not within the coding tree unit, decode, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on a default value assigned to the syntax element associated with the above coding unit.
8. The electronic apparatus of claim 7, wherein the decode, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on the syntax element associated with the above coding unit retrieved from the line buffer further comprises: updating the corresponding syntax element for the current coding unit in accordance with a width comparison of the above coding unit and the current coding unit and the syntax element associated with the above coding unit retrieved from the line buffer; determining the context index of the context model of the current coding unit based on the updated corresponding syntax element for the current coding unit.
14. The electronic apparatus of claim 13, wherein the decode, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on the syntax element associated with the above coding unit retrieved from the line buffer further comprises: updating the corresponding syntax element for the current coding unit in accordance with a width comparison of the above coding unit and the current coding unit and the syntax element associated with the above coding unit retrieved from the line buffer; determining a context index of the current coding unit based on the updated corresponding syntax element for the current coding unit; and decoding, from the video bitstream, the corresponding syntax element for the current coding unit in accordance with the context index of the current coding unit.
9. The electronic apparatus of claim 7, wherein the decode, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit further comprises: updating the corresponding syntax element for the current coding unit in accordance with a height comparison of a left coding unit of the current coding unit and the current coding unit and the default value 0 or 1 assigned to the syntax element associated with the above coding unit; determining a context index of the current coding unit based on the updated corresponding syntax element for the current coding unit; and decoding, from the video bitstream, the corresponding syntax element for the current coding unit in accordance with the context index of the current coding unit.
15. The electronic apparatus of claim 13, wherein the decode, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on a default value assigned to the syntax element associated with the above coding unit further comprises: updating the corresponding syntax element for the current coding unit in accordance with a height comparison of a left coding unit of the current coding unit and the current coding unit and the default value assigned to the syntax element associated with the above coding unit; determining a context index of the current coding unit based on the updated corresponding syntax element for the current coding unit; and decoding, from the video bitstream, the corresponding syntax element for the current coding unit in accordance with the context index of the current coding unit.
10. The electronic apparatus of claim 7, wherein the syntax element is a binary flag.
16. The electronic apparatus of claim 13, wherein the syntax element is a binary flag.
11. The electronic apparatus of claim 10, wherein the syntax element indicates that the current coding unit is encoded in an intra prediction mode, an intra block copy mode, a matrix-based intra prediction mode, or an affine mode.
17. The electronic apparatus of claim 16, wherein the syntax element indicates that the current coding unit is encoded in an intra prediction mode, an intra block copy mode, a matrix-based intra prediction mode, or an affine mode.
12. The electronic apparatus of claim 7, wherein the line buffer is associated with the coding tree unit.
18. The electronic apparatus of claim 13, wherein the line buffer is associated with the coding tree unit.
7. Claims 13-18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6 of U.S. Patent No. 11,985,356 B2 in view of Zhang et al. (US 2016/0065964 A1). Although the claims at issue are not identical, they are not patentably distinct from each other because the examined application claims are obvious over the conflicting patent claims.
The difference between the instant claims and the conflicting patent claims is the addition of the limitations “a non-transitory computer-readable storage medium storing a video bitstream formed by instructions which when executed by a computing device having one or more processors, cause the one or more processors to perform to be decoded by a method for video decoding” and “determining a context index of a context model of the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit” in the instant claims. However, Zhang discloses a non-transitory computer-readable storage medium storing instructions which, when executed by a computing device having one or more processors, cause the one or more processors to perform a method for video decoding (Para. [0053] teaches that processors can be configured to perform particular tasks according to the invention by executing machine-readable software code or firmware code that defines the particular methods embodied by the invention), and determining a context index of a context model of the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit (Para. [0007] teaches that in 3D-HEVC, CABAC is used to code the control flags, i.e., ic_flag and arp_flag, based on a context model; there are 3 context models for the control flags, denoted as X_model[0], X_model[1] and X_model[2], where X corresponds to “ic” or “arp”, and for the current block, X_model[idx] is chosen to code X_flag; Para. [0039] teaches that the above neighboring block is considered unavailable if it is in a CTU different from the CTU of the current block; Para. [0044] teaches that only two context models are required to code X_flag; for the current block, X_model[idx] is chosen to code X_flag, where idx is calculated as idx=X_flag(B), and X_flag(B) represents X_flag in the above neighboring block if the above neighboring block is located in the current CTU row; otherwise, idx=0, which implies that the above neighboring block is not available). See the table below.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize this limitation in the method of the conflicting patent claims, since context selection is determined based on selected information associated with one or more neighboring blocks of the current block, conditionally depending on whether the one or more neighboring blocks are available, and the syntax element is encoded or decoded using context-based coding based on the context selection.
Instant application: 18/633,634
Patent No.: US 11,985,356 B2
13. A non-transitory computer-readable storage medium storing a video bitstream formed by instructions which when executed by a computing device having one or more processors, cause the one or more processors to perform to be decoded by a method for video decoding comprising: identifying, for the current coding unit, an above coding unit and a coding tree unit including the current coding unit;
in accordance with a determination that the above coding unit is within the coding tree unit,
determining a context index of a context model of the current coding unit based, at least in part, on a syntax element associated with the above coding unit retrieved from a line buffer associated with the coding tree unit;
and decoding, from the video bitstream, a corresponding syntax element for the current coding unit in accordance with the context index of the context model of the current coding unit; and in accordance with a determination that the above coding unit is not within the coding tree unit, determining a context index of a context model of the current coding unit based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit, and decoding, from the video bitstream, a corresponding syntax element for the current coding unit in accordance with the context index of the context model of the current coding unit.
1. A method of decoding a syntax element for a current coding unit of video data, the method comprising:
identifying, for the current coding unit, an above coding unit and a coding tree unit including the current coding unit;
in accordance with a determination that the above coding unit is within the coding tree unit, decoding, from a video bitstream, a corresponding syntax element for the current coding unit based, at least in part, on a syntax element associated with the above coding unit retrieved from a line buffer associated with the coding tree unit;
and in accordance with a determination that the above coding unit is not within the coding tree unit, decoding, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on a default value assigned to the syntax element associated with the above coding unit.
14. The non-transitory computer-readable storage medium of claim 13, wherein the determining a context index of a context model of the current coding unit
based, at least in part, on the syntax element associated with the above coding unit retrieved from the line buffer further comprises: updating the corresponding syntax element for the current coding unit in accordance with a width comparison of the above coding unit and the current coding unit and the syntax element associated with the above coding unit retrieved from the line buffer; and determining the context index of the context model of the current coding unit based on the updated corresponding syntax element for the current coding unit.
2. The method of claim 1,
wherein the decoding, from the video bitstream, the corresponding syntax element for the current coding unit
based, at least in part, on the syntax element associated with the above coding unit retrieved from the line buffer further comprises: updating the corresponding syntax element for the current coding unit in accordance with a width comparison of the above coding unit and the current coding unit and the syntax element associated with the above coding unit retrieved from the line buffer; determining a context index of the current coding unit based on the updated corresponding syntax element for the current coding unit; and decoding, from the video bitstream, the corresponding syntax element for the current coding unit in accordance with the context index of the current coding unit.
15. The non-transitory computer-readable storage medium of claim 13, wherein the determining a context index of a context model of the current coding unit
based, at least in part, on a default value 0 or 1 assigned to the syntax element associated with the above coding unit further comprises: updating the corresponding syntax element for the current coding unit in accordance with a height comparison of a left coding unit of the current coding unit and the current coding unit and the default value 0 or 1 assigned to the syntax element associated with the above coding unit; and determining the context index of the context model of the current coding unit based on the updated corresponding syntax element for the current coding unit.
3. The method of claim 1,
wherein the decoding, from the video bitstream, the corresponding syntax element for the current coding unit based, at least in part, on a default value assigned to the syntax element associated with the above coding unit further comprises: updating the corresponding syntax element for the current coding unit in accordance with a height comparison of a left coding unit of the current coding unit and the current coding unit and the default value assigned to the syntax element associated with the above coding unit; determining a context index of the current coding unit based on the updated corresponding syntax element for the current coding unit; and decoding, from the video bitstream, the corresponding syntax element for the current coding unit in accordance with the context index of the current coding unit.
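Claims 2 and 3 gate each neighbor's contribution on a size comparison (width for the above neighbor, height for the left neighbor) before forming the context index. A hedged sketch, loosely modeled on the common condL + condA context-increment pattern from HEVC/VVC CABAC; the function name, the direction of each comparison, and the summation are illustrative assumptions, not the claimed method:

```python
from dataclasses import dataclass

@dataclass
class CU:
    width: int
    height: int

def context_index(above_flag, left_flag, above, left, cur):
    # Above neighbor (width comparison, per claim 2): its flag is used
    # only when the above CU is at least as wide as the current CU.
    # The comparison direction here is assumed for illustration.
    cond_above = above_flag if above.width >= cur.width else 0
    # Left neighbor (height comparison, per claim 3): analogous gating
    # on the left CU's height, with above_flag possibly a default 0 or 1.
    cond_left = left_flag if left.height >= cur.height else 0
    # Sum the two gated conditions to obtain the context index, as in
    # the condL + condA scheme used by HEVC/VVC context modeling.
    return cond_above + cond_left
```

Under this sketch a neighbor smaller than the current coding unit contributes nothing, so the context index stays in {0, 1, 2}.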
16. The non-transitory computer-readable storage medium of claim 13, wherein the syntax element is a binary flag.
4. The method of claim 1, wherein the syntax element is a binary flag.
17. The non-transitory computer-readable storage medium of claim 16, wherein the syntax element indicates that the current coding unit is encoded in an intra prediction mode, an intra block copy mode, a matrix-based intra prediction mode, or an affine mode.
5. The method of claim 4, wherein the syntax element indicates that the current coding unit is encoded in an intra prediction mode, an intra block copy mode, a matrix-based intra prediction mode, or an affine mode.
18. The non-transitory computer-readable storage medium of claim 13, wherein the line buffer is associated with the coding tree unit.
6. The method of claim 1, wherein the line buffer is associated with the coding tree unit.
Conclusion
8. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROWINA J CATTUNGAL whose telephone number is (571)270-5922. The examiner can normally be reached Monday-Thursday 7:30am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached on (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROWINA J CATTUNGAL/Primary Examiner, Art Unit 2425