Prosecution Insights
Last updated: April 19, 2026
Application No. 18/843,764

Coding method and apparatus, decoding method and apparatus, storage medium, and electronic apparatus

Non-Final OA §102
Filed: Sep 04, 2024
Examiner: NASRI, MARYAM A
Art Unit: 2483
Tech Center: 2400 — Computer Networks
Assignee: ZTE CORPORATION
OA Round: 1 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 73% (339 granted / 462 resolved; +15.4% vs TC avg, above average)
Interview Lift: +2.6% among resolved cases with interview (minimal)
Avg Prosecution: 2y 2m (typical timeline)
Career History: 484 total applications across all art units; 22 currently pending

Statute-Specific Performance

§101: 3.2% (-36.8% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 29.5% (-10.5% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 462 resolved cases.

Office Action

§102
DETAILED ACTION

This Office Action is a response to an application filed on 09/04/2024, in which claims 1-9, 11-19, and 22 are pending and ready for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 09/04/2024 and 08/05/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in this application and a copy has been placed of record in the file.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-9, 11-19, and 22-23 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Wang (US 2006/0153294 A1).
Regarding claim 1, Wang discloses: A coding method, comprising: determining a target type of a currently coded target sub-bitstream (see Wang, paragraph 48, the coefficients (reads on sub-bitstream) are classified based on the value of the coefficients), and a target sign processing method (see Wang, paragraph 69, different methods used to code the coefficients depending on the value of the coefficients being non-zero) and target sign processing parameters corresponding to the target type (see Wang, paragraph 69, the sign bits of those refinement coefficients at locations which have non-zero coefficients only at the immediate base layer are coded in a context different from the sign bit of other refinement coefficients); performing target sign processing on a coefficient of a current image block based on the target sign processing parameters and the target sign processing method (see Wang, paragraph 71); and encoding the coefficient of the current image block based on a processing result of the target sign processing (see paragraph 70-71 and Fig. 3, video encoder performs entropy coding according to the process disclosed in paragraph 71).
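For readers less familiar with this kind of claim language, the three steps of claim 1 (determine the sub-bitstream's target type, select a sign processing method and parameters for that type, process the coefficient signs, then encode) can be sketched as below. Everything in the sketch, including the method table, the function names, and the trivial all-positive sign predictor, is a hypothetical illustration of ours, not the applicant's or Wang's actual scheme.

```python
# Hypothetical sketch of the three-step encoder flow recited in claim 1.
# The method table and the all-positive "predictor" are illustrative only.

SIGN_METHODS = {
    "reference_layer": ("sign_prediction", {"num_predicted": 4}),
    "enhancement_layer": ("explicit_signs", {}),
}

def determine_sign_processing(target_type):
    """Step 1: map the target sub-bitstream type to a method + parameters."""
    return SIGN_METHODS[target_type]

def apply_sign_processing(coeffs, method, params):
    """Step 2: produce the sign-related symbols to be entropy-coded."""
    signs = [int(c < 0) for c in coeffs]           # 0 = '+', 1 = '-'
    if method == "sign_prediction":
        n = min(params["num_predicted"], len(signs))
        predicted = [0] * n                        # toy predictor: all '+'
        # Only the prediction residual (XOR) is coded for the first n signs.
        return [s ^ p for s, p in zip(signs[:n], predicted)] + signs[n:]
    return signs                                   # explicit, uncompressed signs

def encode_block(target_type, coeffs):
    """Steps 1-3 of claim 1, end to end (entropy coding elided)."""
    method, params = determine_sign_processing(target_type)
    sign_symbols = apply_sign_processing(coeffs, method, params)
    return {"method": method,
            "levels": [abs(c) for c in coeffs],
            "sign_symbols": sign_symbols}

result = encode_block("reference_layer", [3, -1, 0, 2, -5])
```

With the toy all-positive predictor the residual bits simply equal the original sign bits; a real predictor would make most residual bits zero, which is what makes them cheap to entropy-code.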
Regarding claim 2, Wang discloses: The method according to claim 1, wherein before the performing target sign processing on the coefficient of a current image block based on the target sign processing parameters and the target sign processing method (see Wang, paragraph 69), the method further comprises: determining the need for the target sign processing on the coefficient of the current image block based on a first condition (see Wang, paragraph 50-51, the sign needs to be sent additionally), wherein the first condition comprises at least one of the following: a feature size of the current image block, a distribution state of the coefficient of the current image block, number of non-zero coefficients in the current image block, a processing mode of the target sign processing, a transform method, feature information of other image blocks adjacent to the current image block, local configuration information, the target type of the target sub-bitstream, priorities of a plurality of sign processing methods, features of an image where the current image block is located, features of a sequence where the image with the current image block is located, and a calculation result of a cost function or a rate-distortion function (see Wang, paragraph 48-51). 
Regarding claim 3, Wang discloses: The method according to claim 1, wherein the coding the coefficient of the current image block based on a processing result of the target sign processing (see Wang, paragraph 70-71) comprises: respectively encoding a coefficient that is subjected to the target sign processing and a coefficient that is not subjected to the target sign processing in the current image block (see Wang, paragraph 48-51), wherein a coding method for coding the coefficient that is subjected to the target sign processing is the same as or different from a coding method for coding the coefficient that is not subjected to the target sign processing (see Wang, paragraph 69); the target sign processing method comprises one or more processing methods (see Wang, paragraph 69); when the target sign processing method comprises one processing method, one coding method is used to encode the coefficient in the current image block that is subjected to the target sign processing (see Wang, paragraph 69); and when the target sign processing method comprises a plurality of processing methods, the coefficients in the current image block that are subjected to the target sign processing based on different target sign processing methods are coded respectively, and coding methods for the coefficients that are subjected to the target sign processing based on the different target sign processing methods are the same or different from each other (see Wang, paragraph 69).
Regarding claim 4, Wang discloses: The method according to claim 3, wherein the coding the coefficients of the current image block based on a processing result of the target sign processing (see Wang, paragraph 70-71) comprises: determining a difference between a predicted sign and an original sign of a coefficient based on a processing result of the sign prediction processing when the target sign processing method comprises a sign prediction processing method (see Wang, paragraph 76); and encoding the difference based on a coding method corresponding to the sign prediction processing method (see Wang, paragraph 70-71).

Regarding claim 5, Wang discloses: The method according to claim 1, wherein after the coding the coefficient of the current image block based on a processing result of the target sign processing, the method further comprises: adding the coding result to the target sub-bitstream (see Wang, paragraph 71).

Regarding claim 6, Wang discloses: The method according to claim 1, wherein after the coding the coefficient of the current image block based on a processing result of the target sign processing, the method further comprises: determining the need to encode the coefficients of the current image block according to a layered coding method based on coefficient importance (see Wang, paragraph 48-51).
Regarding claim 7, Wang discloses: The method according to claim 6, wherein the determining the need to code the coefficient of the current image block according to a layered coding method based on coefficient importance comprises: determining the need to encode the coefficients of the current image block according to a layered coding method based on coefficient importance using at least one of the following (see Wang, paragraph 48-51) methods: determining based on a local configuration of an encoder, determining based on features of the current image block, determining based on a video sequence where the current image block is located, determining based on a known decoder capability or configuration for a coded bitstream to be received, determining based on an adopted dependent enhancement layer coding method, and determining based on received indicative information, wherein the indicative information comes from a transmitting terminal of an uncoded image or a receiving terminal of a coded image (see Wang, paragraph 54-58).

Regarding claim 8, Wang discloses: The method according to claim 6, further comprising: reordering the coefficients of the current image block based on coefficient importance (see Wang, paragraph 19 and 48-51); and determining a coding layer to which each coefficient belongs based on a position of each reordered coefficient of the current image block (see Wang, paragraph 19 and 48-51).
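Claims 6-8 describe reordering coefficients by importance and then assigning each coefficient a coding layer from its position in the reordered list. A minimal sketch, assuming absolute magnitude as the importance measure and fixed-size layers (both are our assumptions, not something the claims specify):

```python
# Toy sketch of claims 6/8: reorder by "importance" (here |coeff|, an
# assumption) and bucket reordered positions into coding layers.

def assign_layers(coeffs, layer_size=4):
    """Sort indices by descending |coeff|; layer = reordered position // size."""
    order = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))
    layers = [0] * len(coeffs)   # layers[i] = coding layer of coefficient i
    for pos, idx in enumerate(order):
        layers[idx] = pos // layer_size   # layer 0 = most important
    return order, layers

order, layers = assign_layers([1, -9, 0, 5, 2, -3, 7, 0])
```

Because `sorted` is stable, equally important coefficients keep their original relative order, which keeps the mapping deterministic between encoder and decoder.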
Regarding claim 9, Wang discloses: The method according to claim 6, wherein after the coding the coefficients of the current image block based on a processing result of the target sign processing, the method further comprises at least one of the following: adding indicative information used to indicate the type of the target sub-bitstream to the target sub-bitstream (see Wang, paragraph 71); and adding identification information to the target sub-bitstream, wherein the identification information is used to indicate at least one of the following: when decoding a reference layer sub-bitstream, a decoding terminal needs to store a decoding result or decoding metadata for decoding a subsequent enhancement layer sub-bitstream (see Wang, paragraph 71); and when decoding the enhancement layer sub-bitstream, the decoding terminal obtains a decoding result or decoding metadata of the reference layer sub-bitstream that the enhancement layer sub-bitstream depends on (see Wang, paragraph 69-71 and claim 13).

Regarding claim 11, Wang discloses: A decoding method (see claim 13 and paragraph 8), comprising: obtaining target sign processing methods and target sign processing parameters corresponding to various sub-bitstreams in a case of receiving a layered coding video bitstream (see Wang, claim 13 and paragraph 69); determining signs of coefficients corresponding to the various sub-bitstreams based on the target sign processing methods and the target sign processing parameters corresponding to the various sub-bitstreams (see claim 13 and paragraph 69-71); and decoding the various sub-bitstreams in the layered coding video bitstream based on the signs of the coefficients corresponding to the various sub-bitstreams (see claim 13 and paragraph 69-71).
Regarding claim 12, Wang discloses: The method according to claim 11, wherein the obtaining target sign processing parameters corresponding to various sub-bitstreams comprises at least one of the following: obtaining the target sign processing parameters from local configuration information (see Wang, paragraph 48-51); obtaining the target sign processing parameters from the layered coding video bitstream or a target media file (see Wang, paragraph 71); and determining the target sign processing parameters based on parameter information included in the layered coding video bitstream (see Wang, paragraph 71).

Regarding claim 13, Wang discloses: The method according to claim 11, wherein after the decoding the various sub-bitstreams in the layered coding video bitstream based on the coefficients corresponding to the various sub-bitstreams and the signs of the coefficients, the method further comprises: in a case that a currently decoded sub-bitstream is determined as a reference layer sub-bitstream, storing a decoding result of the currently decoded sub-bitstream (see Wang, Fig. 3); or, in a case that a currently decoded sub-bitstream is determined as a reference layer sub-bitstream, storing decoding metadata of the currently decoded sub-bitstream (see Wang, Fig. 3).

Regarding claim 14, Wang discloses: The method according to claim 11, wherein the obtaining target sign processing methods and target sign processing parameters corresponding to various sub-bitstreams comprises: determining, based on a type of the currently decoded sub-bitstream, a target sign processing method and target sign processing parameters corresponding to the currently decoded sub-bitstream (see Wang, paragraph 48-51 and 69).
Regarding claim 15, Wang discloses: The method according to claim 11, further comprising at least one of the following: obtaining indicative information carried in the layered coding video bitstream, and determining a type of a currently decoded sub-bitstream based on the indicative information (see Wang, paragraph 48); and obtaining identification information carried in the layered coding video bitstream, and performing, based on the identification information, at least one of the following operations: in a case that the currently decoded sub-bitstream is a reference layer sub-bitstream, storing a decoding result or decoding metadata of the currently decoded sub-bitstream for decoding a subsequent enhancement layer sub-bitstream; and in a case that the currently decoded sub-bitstream is an enhancement layer sub-bitstream, obtaining a decoding result or decoding metadata of a reference layer sub-bitstream that the enhancement layer sub-bitstream depends on (see Wang, paragraph 71).

Regarding claim 16, Wang discloses: The method according to claim 14, wherein the obtaining target sign processing methods and target sign processing parameters corresponding to various sub-bitstreams comprises: obtaining a target quantity of sign processing methods corresponding to a currently decoded sub-bitstream (see Wang, paragraph 54-58); and determining, based on the target quantity, a target sign processing method and target sign processing parameters corresponding to the currently decoded sub-bitstream (see Wang, paragraph 54-58).
Regarding claim 17, Wang discloses: The method according to claim 16, wherein the determining, based on the target quantity, a target sign processing method and target sign processing parameters corresponding to the currently decoded sub-bitstream comprises: in a case that the target quantity is 1, determining a target sign processing method and target sign processing parameters corresponding to the currently decoded sub-bitstream based on the type of the currently decoded sub-bitstream (see Wang, paragraph 59-63); and in a case that the target quantity is greater than 1, respectively determining each target sign processing method and corresponding target sign processing parameters, and a processing order of the target sign processing methods (see Wang, paragraph 59-63).

Regarding claim 18, Wang discloses: The method according to claim 14, wherein the determining, based on a type of the currently decoded sub-bitstream, a target sign processing method and target sign processing parameters corresponding to the currently decoded sub-bitstream comprises: in a case that a currently decoded sub-bitstream is determined as a dependent enhancement layer sub-bitstream, obtaining decoding metadata of a reference layer sub-bitstream of the currently decoded sub-bitstream included in the layered coding video bitstream, and determining coefficients of the currently decoded sub-bitstream and signs of the coefficients based on the decoding metadata of the reference layer sub-bitstream (see Wang, paragraph 54); and/or, in a case that a currently decoded sub-bitstream is determined as a dependent enhancement layer sub-bitstream, obtaining a decoded image from stored reference images corresponding to the currently decoded sub-bitstream, and determining coefficients of the currently decoded sub-bitstream and signs of the coefficients based on the decoded image (see Wang, paragraph 54).
Regarding claim 19, Wang discloses: The method according to claim 17, wherein the determining signs of coefficients corresponding to the various sub-bitstreams based on the target sign processing methods and the target sign processing parameters corresponding to the various sub-bitstreams comprises: determining signs of first coefficients included in the currently decoded sub-bitstream based on sign prediction processing and sign prediction processing parameters, wherein the first coefficients are coefficients that are subjected to the sign prediction processing (see Wang, paragraph 48-51); determining signs of second coefficients included in the currently decoded sub-bitstream based on sign bit hiding processing and sign bit hiding processing parameters, wherein the second coefficients are coefficients that are subjected to the sign bit hiding processing (see Wang, paragraph 65); and determining signs of third coefficients included in the currently decoded sub-bitstream, wherein the third coefficients are coefficients in the currently decoded sub-bitstream, excluding the first coefficients and the second coefficients (see Wang, paragraph 69).

Regarding claim 22, Wang discloses: A non-transitory computer-readable storage medium, having a computer program stored therein, wherein the computer program is configured to, when executed by a processor, implement the steps of the method as claimed in claim 1 (see Wang, paragraph 80).
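Claim 19 splits the decoder's sign recovery into three coefficient groups: sign-predicted, sign-bit-hidden, and explicitly coded. The sketch below uses the conventional parity rule for sign bit hiding (the hidden sign is inferred from the parity of the sum of absolute levels, as in HEVC-style designs); that rule and all names here are our illustrative assumptions, not the claimed method:

```python
# Sketch of the three sign-recovery groups in claim 19 (decoder side).
# The parity-based hidden sign is an assumption borrowed from conventional
# sign bit hiding, not a statement of the claimed method.

def decode_signs(levels, num_predicted, predicted_signs, coded_residual,
                 hide_index, coded_signs):
    """Recover one sign per coefficient level (0 = '+', 1 = '-')."""
    signs = []
    explicit = iter(coded_signs)
    for i, level in enumerate(levels):
        if i < num_predicted:
            # Group 1: predicted sign corrected by the coded residual bit.
            signs.append(predicted_signs[i] ^ coded_residual[i])
        elif i == hide_index:
            # Group 2: hidden sign inferred from the parity of the level sum.
            signs.append(sum(levels) % 2)
        else:
            # Group 3: sign read explicitly from the bitstream.
            signs.append(next(explicit))
    return signs
```

For example, `decode_signs([3, 1, 2, 5], 2, [0, 0], [0, 1], 3, [1])` predicts the first two signs, reads one sign explicitly, and infers the last from the level-sum parity (3+1+2+5 = 11, odd, so negative).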
Regarding claim 23, Wang discloses: An electronic apparatus, comprising a memory, a processor, and a computer program stored on the memory and running on the processor (see Wang, paragraph 80), wherein the processor is configured to execute the computer program to: determine a target type of a currently coded target sub-bitstream (see Wang, paragraph 48, the coefficients (reads on sub-bitstream) are classified based on the value of the coefficients), and a target sign processing method (see Wang, paragraph 69, different methods used to code the coefficients depending on the value of the coefficients being non-zero) and target sign processing parameters corresponding to the target type (see Wang, paragraph 69, the sign bits of those refinement coefficients at locations which have non-zero coefficients only at the immediate base layer are coded in a context different from the sign bit of other refinement coefficients); perform target sign processing on a coefficient of a current image block based on the target sign processing parameters and the target sign processing method (see Wang, paragraph 71); and encode the coefficient of the current image block based on a processing result of the target sign processing (see paragraph 70-71 and Fig. 3, video encoder performs entropy coding according to the process disclosed in paragraph 71).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARYAM A NASRI, whose telephone number is (571) 270-7158. The examiner can normally be reached 10:00-8:00 M-T. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at (571) 272-7383.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARYAM A NASRI/
Primary Examiner, Art Unit 2483

Prosecution Timeline

Sep 04, 2024
Application Filed
Mar 07, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604010
METHOD, DEVICE, AND MEDIUM FOR VIDEO PROCESSING
2y 5m to grant Granted Apr 14, 2026
Patent 12604013
THRESHOLD OF SIMILARITY FOR CANDIDATE LIST
2y 5m to grant Granted Apr 14, 2026
Patent 12598305
METHOD, APPARATUS, AND MEDIUM FOR VIDEO PROCESSING
2y 5m to grant Granted Apr 07, 2026
Patent 12598296
VIDEO DECODING METHOD USING BI-PREDICTION AND DEVICE THEREFOR
2y 5m to grant Granted Apr 07, 2026
Patent 12598304
IMAGE PROCESSING METHOD, AND DEVICE FOR SAME
2y 5m to grant Granted Apr 07, 2026
Study what got these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 76% (+2.6%)
Median Time to Grant: 2y 2m
PTA Risk: Low

Based on 462 resolved cases by this examiner. Grant probability is derived from the career allow rate.
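The headline figures are consistent with the quoted career counts; a quick arithmetic check (the rounding is ours):

```python
# 339 grants out of 462 resolved cases gives the ~73% career allow rate;
# adding the quoted +2.6% interview lift gives the ~76% "with interview"
# figure once rounded to whole percentage points.
granted, resolved = 339, 462
allow_rate = granted / resolved          # ~0.734
with_interview = allow_rate + 0.026      # ~0.760
headline = (round(allow_rate * 100), round(with_interview * 100))
```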
