Prosecution Insights
Last updated: April 19, 2026
Application No. 18/617,016

VIDEO ENCODING AND DECODING METHOD, AND DEVICE

Status: Non-Final OA (§102)
Filed: Mar 26, 2024
Examiner: MANGIALASCHI, TRACY
Art Unit: 2668
Tech Center: 2600 — Communications
Assignee: Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (435 granted / 582 resolved; +12.7% vs TC avg; above average)
Interview Lift: +28.4% for resolved cases with an interview (strong)
Typical Timeline: 3y 2m average prosecution; 15 applications currently pending
Career History: 597 total applications across all art units
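The headline figures above can be sanity-checked with simple arithmetic. The sketch below is only a back-of-the-envelope reading of the numbers shown on this page: it assumes the allow rate is simply granted divided by resolved, and that the "vs TC avg" and interview-lift figures are percentage-point differences; neither assumption is confirmed by the source data.

```python
# Back-of-the-envelope check of the examiner metrics shown above.
# Assumptions: allow rate = granted / resolved; "+12.7% vs TC avg" and the
# "+28.4% interview lift" are percentage-point deltas (not confirmed).

granted, resolved = 435, 582
allow_rate = granted / resolved                                # ~0.747, shown as 75%

tc_delta = 0.127                                               # "+12.7% vs TC avg"
implied_tc_avg = allow_rate - tc_delta                         # ~0.620

with_interview = 0.99                                          # "99% with interview"
interview_lift = 0.284                                         # "+28.4% interview lift"
implied_without_interview = with_interview - interview_lift    # ~0.706

print(f"career allow rate:          {allow_rate:.1%}")
print(f"implied TC average:         {implied_tc_avg:.1%}")
print(f"implied rate w/o interview: {implied_without_interview:.1%}")
```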

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 53.9% (+13.9% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Based on career data from 582 resolved cases; "vs TC avg" figures compare against a Tech Center average estimate.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-20, as originally filed, are currently pending and have been considered below.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 14-17, 19 and 20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Seregin et al., U.S. Publication No. 2020/0099924, hereinafter, “Seregin”.

As per claim 1, Seregin discloses a method for video decoding, comprising: decoding a bitstream to obtain a target transform coefficient of a current block (Seregin, Figure 12, encoded video bitstream; Seregin, Figure 12, Quantized coefficients; Seregin, ¶0236, FIG. 12 is a block diagram illustrating an example decoding device 112. The decoding device 112 includes an entropy decoding unit 80, prediction processing unit 81, inverse quantization unit 86, inverse transform processing unit 88, summer 90, filter unit 91, and picture memory 92. Prediction processing unit 81 includes motion compensation unit 82 and intra-prediction processing unit 84; Seregin, ¶0237, During the decoding process, the decoding device 112 receives an encoded video bitstream that represents video blocks of an encoded video slice and associated syntax elements sent by the encoding device 104; Seregin, ¶0238, The entropy decoding unit 80 of the decoding device 112 entropy decodes the bitstream to generate quantized coefficients, motion vectors, and other syntax elements); predicting the current block to obtain a prediction block of the current block (Seregin, Figure 12, Prediction Processing; Seregin, ¶0239, When the video slice is coded as an intra-coded (I) slice, intra-prediction processing unit 84 of prediction processing unit 81 may generate prediction data for a video block of the current video slice based on a signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. When the video frame is coded as an inter-coded (i.e., B, P or GPB) slice, motion compensation unit 82 of prediction processing unit 81 produces predictive blocks for a video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 80. The predictive blocks may be produced from one of the reference pictures within a reference picture list); determining a transform core corresponding to the current block according to the prediction block (Seregin, ¶0138, adaptive multiple transform (AMT) … Some AMT designs offer five transform options for an encoder to select on a per-block basis (e.g., the selection can be performed based on a rate-distortion metric for a coding block, prediction block, or transform block). Then, the selected transform index is signaled by the video encoder with the video bitstream, which can be decoded and analyzed by the video decoder); and performing inverse transform on the target transform coefficient according to the transform core, and obtaining a residual block of the current block according to a transform result of the inverse transform (Seregin, Figure 12, Inverse Quantization, 86; Seregin, Figure 12, Inverse Transform Processing, 88; Seregin, Figure 12, Residual Blocks; Seregin, ¶0233, Inverse quantization unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain for later use as a reference block of a reference picture; Seregin, ¶0242, Inverse quantization unit 86 inverse quantizes, or de-quantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 80 ... Inverse transform processing unit 88 applies an inverse transform (e.g., an inverse DCT or other suitable inverse transform), an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to produce residual blocks in the pixel domain).

As per claim 14, Seregin discloses the method of claim 1, wherein determining the transform core corresponding to the current block according to the prediction block comprises: inputting the prediction block into a pre-trained model, to obtain transform core indication information output by the model and corresponding to the current block, wherein the transform core indication information is configured to indicate a transform core of secondary transform corresponding to the current block (Seregin, ¶0138, adaptive multiple transform (AMT) or enhanced multiple transform (EMT). Some AMT designs offer five transform options for an encoder to select on a per-block basis (e.g., the selection can be performed based on a rate-distortion metric for a coding block, prediction block, or transform block). Then, the selected transform index is signaled by the video encoder with the video bitstream, which can be decoded and analyzed by the video decoder); and determining the transform core corresponding to the current block according to the transform core indication information (Seregin, ¶0138, adaptive multiple transform (AMT) or enhanced multiple transform (EMT). Some AMT designs offer five transform options for an encoder to select on a per-block basis (e.g., the selection can be performed based on a rate-distortion metric for a coding block, prediction block, or transform block). Then, the selected transform index is signaled by the video encoder with the video bitstream, which can be decoded and analyzed by the video decoder).
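To make the decode-side flow recited in claim 1 concrete, the following is a minimal, hypothetical Python sketch (editorial, not part of the Office Action). The block size, the two candidate bases, and the variance rule for choosing a core from the prediction block are invented purely for illustration and are not taken from the application or from Seregin; the only point carried over from the claim language is that the transform core is determined from the prediction block, whereas the cited AMT passage describes a transform index signaled in the bitstream.

```python
import numpy as np

# Toy sketch of the claim-1 decoding steps: coefficients and a prediction block
# come in, a transform core is chosen from the prediction block (not parsed
# from the bitstream), the coefficients are inverse-transformed into a
# residual, and prediction + residual gives the reconstruction. All concrete
# rules below are invented for illustration.

N = 4  # illustrative 4x4 block size

def dct_core(n: int) -> np.ndarray:
    # Orthonormal DCT-II basis, used as one candidate transform core.
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] /= np.sqrt(2.0)
    return m

def dst_core(n: int) -> np.ndarray:
    # A second sinusoidal (DST-IV) basis as the other candidate core.
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.sqrt(2.0 / n) * np.sin(np.pi * (2 * i + 1) * (2 * k + 1) / (4 * n))

def select_transform_core(pred_block: np.ndarray) -> np.ndarray:
    # Claim step: the core is determined *according to the prediction block*.
    # Toy rule: smooth prediction -> DCT core, otherwise DST core.
    return dct_core(N) if np.var(pred_block) < 0.05 else dst_core(N)

def decode_block(coeffs: np.ndarray, pred_block: np.ndarray) -> np.ndarray:
    core = select_transform_core(pred_block)   # derived, not signaled
    residual = core.T @ coeffs @ core          # 2-D inverse transform
    return pred_block + residual               # reconstructed block

pred = np.random.rand(N, N)
coeffs = np.random.randn(N, N)
print(decode_block(coeffs, pred).shape)        # (4, 4)
```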
As per claim 15, Seregin discloses the method of claim 14, wherein inputting the prediction block into the pre-trained model, to obtain the transform core indication information output by the model and corresponding to the current block comprises: down-sampling the prediction block (Seregin, ¶0035, a prediction block can be formed. In the case of intra-prediction, a prediction block may be formed from samples in the current frame that have been previously encoded and reconstructed); and inputting the down-sampled prediction block into the pre-trained model, to obtain the transform core indication information output by the model and corresponding to the current block (Seregin, ¶0036, the prediction block can be subtracted from the current block at the intra/inter prediction stage 402 to produce a residual block (also called a residual). The transform stage 404 transforms the residual into transform coefficients).

As per claim 16, Seregin discloses the method of claim 1, wherein decoding the bitstream to obtain the target transform coefficient of the current block comprises: decoding the bitstream to obtain a quantization coefficient of the current block (Seregin, Figure 12, encoded video bitstream; Seregin, Figure 12, Quantized coefficients; Seregin, ¶0236, FIG. 12 is a block diagram illustrating an example decoding device 112; Seregin, ¶0237, During the decoding process, the decoding device 112 receives an encoded video bitstream that represents video blocks of an encoded video slice and associated syntax elements sent by the encoding device 104; Seregin, ¶0238, The entropy decoding unit 80 of the decoding device 112 entropy decodes the bitstream to generate quantized coefficients, motion vectors, and other syntax elements); and performing inverse quantization on the quantization coefficient, to obtain the target transform coefficient of the current block (Seregin, Figure 12, Inverse Quantization; Seregin, ¶0242, Inverse quantization unit 86 inverse quantizes, or de-quantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 80 ... Inverse transform processing unit 88 applies an inverse transform (e.g., an inverse DCT or other suitable inverse transform), an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to produce residual blocks in the pixel domain).

As per claim 17, Seregin discloses a method for video encoding, comprising: predicting a current block to obtain a prediction block of the current block (Seregin, Figure 12, Prediction Processing; Seregin, ¶0239, When the video slice is coded as an intra-coded (I) slice, intra-prediction processing unit 84 of prediction processing unit 81 may generate prediction data for a video block of the current video slice based on a signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. When the video frame is coded as an inter-coded (i.e., B, P or GPB) slice, motion compensation unit 82 of prediction processing unit 81 produces predictive blocks for a video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 80. The predictive blocks may be produced from one of the reference pictures within a reference picture list); determining a transform core corresponding to the current block according to the prediction block (Seregin, ¶0138, adaptive multiple transform (AMT) … Some AMT designs offer five transform options for an encoder to select on a per-block basis (e.g., the selection can be performed based on a rate-distortion metric for a coding block, prediction block, or transform block). Then, the selected transform index is signaled by the video encoder with the video bitstream, which can be decoded and analyzed by the video decoder); obtaining a residual block of the current block according to the prediction block and the current block (Seregin, Figure 12, Inverse Quantization, 86; Seregin, Figure 12, Inverse Transform Processing, 88; Seregin, Figure 12, Residual Blocks; Seregin, ¶0233, Inverse quantization unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain for later use as a reference block of a reference picture; Seregin, ¶0242, Inverse quantization unit 86 inverse quantizes, or de-quantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 80 ... Inverse transform processing unit 88 applies an inverse transform (e.g., an inverse DCT or other suitable inverse transform), an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to produce residual blocks in the pixel domain); and transforming the residual block according to the transform core, and encoding a transformed coefficient to obtain a bitstream (Seregin, Figure 12, encoded video bitstream; Seregin, Figure 12, Quantized coefficients; Seregin, ¶0236, FIG. 12 is a block diagram illustrating an example decoding device 112. The decoding device 112 includes an entropy decoding unit 80, prediction processing unit 81, inverse quantization unit 86, inverse transform processing unit 88, summer 90, filter unit 91, and picture memory 92. Prediction processing unit 81 includes motion compensation unit 82 and intra-prediction processing unit 84; Seregin, ¶0237, During the decoding process, the decoding device 112 receives an encoded video bitstream that represents video blocks of an encoded video slice and associated syntax elements sent by the encoding device 104; Seregin, ¶0238, The entropy decoding unit 80 of the decoding device 112 entropy decodes the bitstream to generate quantized coefficients, motion vectors, and other syntax elements).

As per claim 19, Seregin discloses a video encoder, comprising: a processor, and a memory storing a computer program, which, when executed by the processor, cause the processor to perform the method of claim 17 (Seregin, Figure 12; Seregin, ¶0013, a computer-readable storage medium storing instructions that when executed cause one or more processors of a device for decoding video data to: obtain an encoded block of the video data; Seregin, ¶0233; Seregin, ¶0236-0239; Seregin, ¶0242).
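Claims 14 and 15, addressed above, recite feeding the (optionally down-sampled) prediction block into a pre-trained model that outputs transform core indication information. The sketch below is a purely hypothetical stand-in for that flow (editorial, not part of the Office Action): average-pool down-sampling plus a fixed linear scorer in place of a real trained network, and an invented three-entry core table; none of these specifics come from the application or from Seregin.

```python
import numpy as np

# Hypothetical stand-in for the claim-14/15 flow: down-sample the prediction
# block, run it through a "pre-trained model" (here just a fixed random linear
# scorer), and map the model's output index to a transform core. The pooling
# size, scorer, and core table are all invented for illustration.

rng = np.random.default_rng(0)
MODEL_WEIGHTS = rng.standard_normal((3, 16))    # stand-in for trained weights
CORE_TABLE = ["secondary_core_0", "secondary_core_1", "secondary_core_2"]

def downsample(pred_block: np.ndarray, out: int = 4) -> np.ndarray:
    # Claim-15 step: down-sample the prediction block (average pooling to a
    # fixed out x out grid, assuming the block size is a multiple of `out`).
    h, w = pred_block.shape
    return pred_block.reshape(out, h // out, out, w // out).mean(axis=(1, 3))

def core_indication(pred_block: np.ndarray) -> str:
    # Claim-14 step: the model outputs transform core indication information,
    # here just an index into an invented table of secondary-transform cores.
    features = downsample(pred_block).reshape(-1)   # 16-dim feature vector
    scores = MODEL_WEIGHTS @ features               # "model" forward pass
    return CORE_TABLE[int(np.argmax(scores))]

print(core_indication(np.random.rand(8, 8)))        # e.g. "secondary_core_1"
```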
As per claim 20, Seregin discloses a video decoder, comprising: a processor; and a memory, configured to store a computer program executable by the processor (Seregin, ¶0013, a computer-readable storage medium storing instructions that when executed cause one or more processors of a device for decoding video data to: obtain an encoded block of the video data), wherein the processor is configured to: decode a bitstream to obtain a target transform coefficient of a current block (Seregin, Figure 12, encoded video bitstream; Seregin, Figure 12, Quantized coefficients; Seregin, ¶0236, FIG. 12 is a block diagram illustrating an example decoding device 112. The decoding device 112 includes an entropy decoding unit 80, prediction processing unit 81, inverse quantization unit 86, inverse transform processing unit 88, summer 90, filter unit 91, and picture memory 92. Prediction processing unit 81 includes motion compensation unit 82 and intra-prediction processing unit 84; Seregin, ¶0237, During the decoding process, the decoding device 112 receives an encoded video bitstream that represents video blocks of an encoded video slice and associated syntax elements sent by the encoding device 104; Seregin, ¶0238, The entropy decoding unit 80 of the decoding device 112 entropy decodes the bitstream to generate quantized coefficients, motion vectors, and other syntax elements); predict the current block to obtain a prediction block of the current block (Seregin, Figure 12, Prediction Processing; Seregin, ¶0239, When the video slice is coded as an intra-coded (I) slice, intra-prediction processing unit 84 of prediction processing unit 81 may generate prediction data for a video block of the current video slice based on a signaled intra-prediction mode and data from previously decoded blocks of the current frame or picture. When the video frame is coded as an inter-coded (i.e., B, P or GPB) slice, motion compensation unit 82 of prediction processing unit 81 produces predictive blocks for a video block of the current video slice based on the motion vectors and other syntax elements received from entropy decoding unit 80. The predictive blocks may be produced from one of the reference pictures within a reference picture list); determine a transform core corresponding to the current block according to the prediction block (Seregin, ¶0138, adaptive multiple transform (AMT) … Some AMT designs offer five transform options for an encoder to select on a per-block basis (e.g., the selection can be performed based on a rate-distortion metric for a coding block, prediction block, or transform block). Then, the selected transform index is signaled by the video encoder with the video bitstream, which can be decoded and analyzed by the video decoder); and perform inverse transform on the target transform coefficient according to the transform core, and obtain a residual block of the current block according to a transform result of the inverse transform (Seregin, Figure 12, Inverse Quantization, 86; Seregin, Figure 12, Inverse Transform Processing, 88; Seregin, Figure 12, Residual Blocks; Seregin, ¶0233, Inverse quantization unit 58 and inverse transform processing unit 60 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain for later use as a reference block of a reference picture; Seregin, ¶0242, Inverse quantization unit 86 inverse quantizes, or de-quantizes, the quantized transform coefficients provided in the bitstream and decoded by entropy decoding unit 80 ... Inverse transform processing unit 88 applies an inverse transform (e.g., an inverse DCT or other suitable inverse transform), an inverse integer transform, or a conceptually similar inverse transform process, to the transform coefficients in order to produce residual blocks in the pixel domain).

Allowable Subject Matter

Claims 2-13 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: Claims 2-13 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims because while the prior art discloses various video encoding and decoding methods, the prior art does not disclose the limitation, “wherein determining the transform core corresponding to the current block according to the prediction block comprises: determining texture information of the prediction block; and determining the transform core corresponding to the current block according to the texture information of the prediction block, wherein the texture information of the prediction block comprises gradient information of the prediction block, wherein determining the texture information of the prediction block comprises: determining the gradient information of the prediction block, wherein determining the transform core corresponding to the current block according to the texture information of the prediction block comprises: determining the transform core corresponding to the current block according to the gradient information of the prediction block” as recited in dependent claims 2 and 18.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TRACY MANGIALASCHI whose telephone number is (571)270-5189. The examiner can normally be reached M-F, 9:30AM TO 6:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571) 272-7332.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TRACY MANGIALASCHI/
Primary Examiner, Art Unit 2668
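The allowable subject matter identified above (claims 2-13 and 18) turns on deriving texture/gradient information from the prediction block and selecting the transform core from it. The sketch below is only a schematic, editorial reading of that limitation: the gradient measure, the 2x threshold, and the core labels are invented for illustration and do not reproduce the application's actual rule.

```python
import numpy as np

# Schematic reading of the claims-2/18 limitation quoted above: compute
# gradient information of the prediction block, then pick a transform core
# from it. The gradient measure, the 2x threshold, and the core names are
# invented stand-ins, not the application's disclosed selection rule.

def gradient_info(pred_block: np.ndarray) -> tuple[float, float]:
    gx = float(np.abs(np.diff(pred_block, axis=1)).mean())  # horizontal gradient energy
    gy = float(np.abs(np.diff(pred_block, axis=0)).mean())  # vertical gradient energy
    return gx, gy

def select_core_from_gradients(pred_block: np.ndarray) -> str:
    gx, gy = gradient_info(pred_block)
    if gx > 2 * gy:
        return "core_for_horizontal_texture"   # invented label
    if gy > 2 * gx:
        return "core_for_vertical_texture"     # invented label
    return "core_default"

# A prediction block that ramps left-to-right has purely horizontal gradients.
pred = np.tile(np.arange(8, dtype=float), (8, 1))
print(gradient_info(pred), select_core_from_gradients(pred))
```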

Prosecution Timeline

Mar 26, 2024
Application Filed
Feb 21, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12602936: LONG-RANGE 3D OBJECT DETECTION USING 2D BOUNDING BOXES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592055: MACHINE-LEARNING MODEL ANNOTATION AND TRAINING TECHNIQUES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586194: Arrangement and Method for the Optical Assessment of Crop in a Harvesting Machine (granted Mar 24, 2026; 2y 5m to grant)
Patent 12568876: METHOD FOR CLASSIFYING PLANTS FOR AGRICULTURAL PURPOSES (granted Mar 10, 2026; 2y 5m to grant)
Patent 12567246: FAIR NEURAL NETWORKS (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75% (99% with interview, +28.4% lift)
Median Time to Grant: 3y 2m
PTA Risk: Low

Based on 582 resolved cases by this examiner; grant probability derived from career allow rate.
