Prosecution Insights
Last updated: April 19, 2026
Application No. 18/851,477

Image Coding Method and Apparatus, Image Decoding Method and Apparatus, and Electronic Device and Storage Medium

Non-Final OA §103
Filed: Sep 26, 2024
Examiner: KALAPODAS, DRAMOS
Art Unit: 2487
Tech Center: 2400 — Computer Networks
Assignee: Hangzhou Hikvision Digital Technology Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% — above average (562 granted / 713 resolved; +20.8% vs TC avg)
Interview Lift: strong, +28.2% among resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 34 applications currently pending
Career History: 747 total applications across all art units
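As a sanity check, the headline allow rate follows directly from the granted/resolved counts shown above (an illustrative Python sketch, not part of the tool):

```python
# Reproduce the examiner's career allow rate from the counts shown above.
granted, resolved = 562, 713

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # 78.8%, displayed rounded to 79%
```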

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§103: 54.4% (+14.4% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 713 resolved cases
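The per-statute deltas all imply the same Tech Center baseline, which is a useful consistency check on the figures above (illustrative sketch; the variable names are mine):

```python
# Examiner's statute-specific rates and their deltas vs the TC average,
# as shown above. Subtracting the delta recovers the implied TC baseline.
rates = {"101": (5.0, -35.0), "103": (54.4, 14.4),
         "102": (12.0, -28.0), "112": (16.5, -23.5)}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC avg = {tc_avg:.1f}%")  # 40.0% in every case
```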

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

2. The information disclosure statements (IDS) were submitted on 11/07/2025, 07/30/2025 and 09/26/2024. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

3. Claims 1, 7-9, 24 and 31 are objected to because of the following informalities: the claims recite, inter alia, "….a predicted value of the each pixel…". For correct grammatical syntax it is recommended to delete the article "the" in the specified claim paragraphs, which would not change the claimed scope. Correction is recommended.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application does not currently name joint inventors.

4. Claims 1-9 and 24-34 are rejected under 35 U.S.C. 103 as being obvious over Bae Keun Lee et al. (hereinafter Lee) (US 10,419,759) and Hu Ruimin et al. (hereinafter Hu) (CN 1021860086) in view of Je Chang Jeong et al. (hereinafter Jeong) (US 2025/0193414, priority to US 10,824,408, Appl. 16/098,325 of Nov. 1, 2018).

Re Claim 1. (Original) Lee discloses an image decoding method (decoder in Fig.2, Col.2 Lin.10-14), comprising: parsing a code stream of a to-be-decoded block, to determine a target prediction mode for predicting pixels in the to-be-decoded block (receiving a coded bitstream and prediction unit information module 230, Fig.2, Col.11 Lin.4-14); determining a target prediction order corresponding to the target prediction mode based on the target prediction mode (determining the target prediction mode and prediction order according to the residual information scanning order based on the intra prediction modes of blocks, the transform methods to arrange coefficients, of the target intra prediction mode, Col.17 Lin.17-47, per Fig.5 or Fig.8, or prediction order per scanning order, at Fig.9 at S940-vertical, S950-horizontal and S960-zig-zag order, Col.18 Lin.19-48); predicting each pixel in the to-be-decoded block in the target prediction order according to the target prediction mode (each pixel is predicted according to the target prediction mode depicted in Fig.5, Col.13 Lin.51-67 to Col.18 Lin.1-30); rebuilding each pixel based on a predicted value of the each pixel to obtain a rebuilt block of the to-be-decoded block (rebuilding the pixel block, i.e., reconstructing the block, as determined above, Col.18 Lin.26-30).

In an analogous art, Hu expressly teaches the application of an intra-frame prediction based on the current block, i.e., target block, as part of the AVC standard, using pixel intra prediction from adjacent sub-blocks of pixel references, e.g., from upper-left UL, upper-right UR, below-left BL, and below-right BR sub-blocks of pixels, as claimed at: predicting each pixel in the to-be-decoded block in the target prediction order according to the target prediction mode (the current pixel within a target block is predicted as cited from Pg.5/17 [figure omitted] according to Step 1 to Step 5 at Pg.5/17 and at Pg.6/17 Lin.1-13, where the pixels are further scanned in the prediction direction, according to the prediction mode, Pg.6/17 Lin.15, Mode 1 to Mode 4, and Pg.7/17 Lin.1-15); rebuilding each pixel based on a predicted value of the each pixel to obtain a rebuilt block of the to-be-decoded block (predicting each current block pixel for the 8x8 block, based on the reference pixel and the current coding pixel, Pg.12/17 [figure omitted], or at Pg.3/17 [figures omitted] and Pg.4 Lin.1-4).

The person skilled in the art would have considered the common pixel-based prediction methods, where Lee teaches the intra prediction mode determining the pixel scanning order (at Col.2 Lin.33-41), and would have found it obvious before the effective filing date of the invention to combine them with the other intra-frame prediction methods corresponding to the scanning order taught in detail by Hu, in order to improve the encoding performance (Abstract), thereby finding the combination predictable.
Furthermore, the pixel-based intra prediction mode is expressly disclosed by the art to Jeong, teaching: rebuilding each pixel based on a predicted value of the each pixel to obtain a rebuilt block of the to-be-decoded block (the pixel-based prediction, where a predicted value of the each pixel is used sequentially to reconstruct the target block, i.e., the current to-be-decoded block, as depicted at Fig.13b, Par.[0049, 0211-0222], and the reconstructed block in Fig.14, Par.[0050, 0223-0224]). Within the same video prediction art, the combined references of Lee and Hu teach a common application of the intra prediction mode to frame coding, the frame being comprised of predicted and to-be-predicted blocks of pixels, where the intra prediction is applied within the intra frame to the respective intra-predicted blocks, and where the ordinarily skilled artisan would have found it obvious to apply the respective intra mode down to the intra-block pixel reconstruction level identified in Jeong, thereby improving the coding efficiency (Jeong: Par.[0035-0036]), thus finding such combination predictable in terms of the matter claimed.

Re Claim 2. (Original) Lee, Hu and Jeong disclose the method according to claim 1. Lee teaches, wherein when predicting any pixel in the to-be-decoded block in the target prediction order, a pixel used to predict this pixel has been rebuilt (the target block prediction is based on a residual and a previously rebuilt pixel, at least at Col.6 Lin.39-41 and 53-56).

Re Claim 3. (Original) Lee, Hu and Jeong disclose the method according to claim 2. Lee teaches, wherein if the target prediction mode indicates that each pixel in the to-be-decoded block is predicted point by point in the target prediction order, then predicting each pixel in the to-be-decoded block in the target prediction order according to the target prediction mode comprises: predicting each pixel in the to-be-decoded block point by point in a direction indicated by the target prediction order according to the target prediction mode (each pixel is predicted according to the scanning order and the intra prediction mode, Col.2 Lin.33-41 and Fig.9, Col.17 Lin.51-67 to Col.18 Lin.1-18); wherein, when the target prediction mode is a first target prediction mode, the target prediction order is a first prediction order (a first target pixel prediction mode is in the vertical prediction mode direction, Col.2 Lin.14-18, Col.17 Lin.64-67), and when the target prediction mode is a second target prediction mode, the target prediction order is a second prediction order (a second target pixel prediction mode is in the horizontal prediction mode direction, Col.2 Lin.18-19, Col.18 Lin.11-14), and the first prediction order and the second prediction order are different (where the vertical and horizontal modes are different in the respective prediction mode as set by the scan order direction, which depends on the prediction mode, Col.2 Lin.33-41).

Re Claim 4. (Original) Lee, Hu and Jeong disclose the method according to claim 3. Lee teaches, wherein, for the to-be-decoded block with a size of a first size, predicting the to-be-decoded block by adopting a third prediction order in the target prediction mode (a third prediction mode and prediction order per Fig.9 Lin.15-18); for the to-be-decoded block with the size of a second size, predicting the to-be-decoded block by adopting a fourth prediction order in the target prediction mode (the fourth, diagonal prediction order is selected according to the block size, per Fig.8, Col.17 Lin.16-30, where the size of the block determines the scanning order in addition to the information on the intra prediction mode, Col.18 Lin.61-67); wherein the third prediction order and the fourth prediction order are different (the third prediction mode and zig-zag scan order, Col.18 Lin.15-18, is different from the fourth, diagonal order depending on the intra prediction mode and the size of the block, per Fig.8, Col.17 Lin.21-34).

Re Claim 5. (Original) Lee, Hu and Jeong disclose the method according to claim 2. Lee teaches, wherein if the target prediction mode indicates that pixels of each sub-block in the to-be-decoded block are predicted sequentially with a sub-block of a preset size in the to-be-decoded block as a unit, then predicting each pixel in the to-be-decoded block in the target prediction order according to the target prediction mode comprises: predicting the pixels in each sub-block in the to-be-decoded block sequentially in a direction indicated by the target prediction order according to the target prediction mode (per Fig.4, the pixels (420) and (440) of the blocks are decoded in the indicated direction and in sequential order, Col.12 Lin.58-67 and Col.13 Lin.1-22).

Re Claim 6.
(Original) Lee, Hu and Jeong disclose the method according to claim 5. Jeong teaches, wherein the target prediction mode comprises a prediction mode for each sub-block in the to-be-decoded block, and for a first sub-block in the to-be-decoded block, the first sub-block comprises a first pixel and a second pixel (the sub-block comprises two pixels to be part of the prediction, per Fig.13b, Par.[0049, 0211-0222], and in the reconstructed block of Fig.14, Par.[0050, 0223-0224]), and a prediction mode for the first sub-block is used to predict the first pixel and the second pixel in parallel based on rebuilt pixels around the first sub-block (the parallel pixel prediction is possible, Par.[0226, 0230] and Fig.15).

Re Claim 7. (Currently Amended) Lee, Hu and Jeong disclose the method according to claim 1. Jeong teaches: performing inverse quantization on a first residual block of the to-be-decoded block obtained by parsing the code stream of the to-be-decoded block, based on an inverse quantization parameter of each pixel in the to-be-decoded block obtained by parsing the code stream of the to-be-decoded block and an inverse quantization preset array, to obtain a second residual block (the inverse quantization and transform is depicted at Fig.3); rebuilding each pixel based on the predicted value of the each pixel and the second residual block to obtain the rebuilt block (according to Fig.15 and Par.[0225-0227], and similarly identified at Fig.16, Par.[0228-0229], for a first and a second residual used to reconstruct the current block).

Re Claim 8. (Original) Lee, Hu and Jeong disclose the method according to claim 7, wherein parsing the code stream of the to-be-decoded block comprises: Jeong teaches, parsing the code stream of the to-be-decoded block by using a variable code length decoding method to obtain the first residual block and a code length CL for coding each value in a residual block corresponding to the to-be-decoded block (the truncated unary (TU) binarization is considered a Variable Length Code (VLC), as taught at Table 2, Par.[0354-0355]).

Re Claim 9. (Original) This claim represents the image coding method, where "coding" is commonly interpreted as encoding/decoding, encoding the residual block (Lee: encoding the video block, module 130, in Fig.1, Col.2 Lin.42-46, along the residual value, Col.6 Lin.1-4) in the target prediction order to obtain a code stream of the to-be-coded block, similarly to method claim 1; hence it is rejected on the same evidentiary probe mutatis mutandis.

10-23. (Cancelled)

Re Claim 24. (Currently Amended) This claim represents the decoding electronic device comprising a processor and a memory (Lee: per Fig.2, a memory 240, Col.9 Lin.39, and inverse process, Col.9 Lin.43), wherein the memory is configured to store computer instructions, and the processor is configured to call and execute the computer instructions from the memory to implement each and every limitation of method claim 1; hence it is rejected on the same evidentiary probe mutatis mutandis.

Re Claim 25. (Currently Amended) This claim represents the non-transitory computer-readable storage medium storing a computer program or instructions thereon which, when executed by an electronic device, causes the electronic device to implement each and every limitation of method claim 1; hence it is rejected on the same evidentiary probe mutatis mutandis.

Re Claim 26.
(New) This claim represents the decoding electronic device comprising a processor and a memory (Lee: per Fig.2, a memory 240, Col.9 Lin.39, and inverse process, Col.9 Lin.43), wherein the memory is configured to store computer instructions, and the processor is configured to call and execute the computer instructions from the memory to implement each and every limitation of method claim 2; hence it is rejected on the same evidentiary probe mutatis mutandis.

Re Claim 27. (New) This claim represents the decoding electronic device comprising a processor and a memory (Lee: per Fig.2, a memory 240, Col.9 Lin.39, and inverse process, Col.9 Lin.43), wherein the memory is configured to store computer instructions, and the processor is configured to call and execute the computer instructions from the memory to implement each and every limitation of method claim 3; hence it is rejected on the same evidentiary probe mutatis mutandis.

Re Claim 28. (New) This claim represents the decoding electronic device comprising a processor and a memory (Lee: per Fig.2, a memory 240, Col.9 Lin.39, and inverse process, Col.9 Lin.43), wherein the memory is configured to store computer instructions, and the processor is configured to call and execute the computer instructions from the memory to implement each and every limitation of method claim 4; hence it is rejected on the same evidentiary probe mutatis mutandis.

Re Claim 29. (New) This claim represents the decoding electronic device comprising a processor and a memory (Lee: per Fig.2, a memory 240, Col.9 Lin.39, and inverse process, Col.9 Lin.43), wherein the memory is configured to store computer instructions, and the processor is configured to call and execute the computer instructions from the memory to implement each and every limitation of method claim 5; hence it is rejected on the same evidentiary probe mutatis mutandis.

Re Claim 30. (New) This claim represents the decoding electronic device comprising a processor and a memory (Lee: per Fig.2, a memory 240, Col.9 Lin.39, and inverse process, Col.9 Lin.43), wherein the memory is configured to store computer instructions, and the processor is configured to call and execute the computer instructions from the memory to implement each and every limitation of method claim 6; hence it is rejected on the same evidentiary probe mutatis mutandis.

Re Claim 31. (New) This claim represents the decoding electronic device comprising a processor and a memory (Lee: per Fig.2, a memory 240, Col.9 Lin.39, and inverse process, Col.9 Lin.43), wherein the memory is configured to store computer instructions, and the processor is configured to call and execute the computer instructions from the memory to implement each and every limitation of method claim 7; hence it is rejected on the same evidentiary probe mutatis mutandis.

Re Claim 32. (New) This claim represents the decoding electronic device comprising a processor and a memory (Lee: per Fig.2, a memory 240, Col.9 Lin.39, and inverse process, Col.9 Lin.43), wherein the memory is configured to store computer instructions, and the processor is configured to call and execute the computer instructions from the memory to implement each and every limitation of method claim 8; hence it is rejected on the same evidentiary probe mutatis mutandis.

Re Claim 33.
(New) Lee, Hu and Jeong disclose an electronic device comprising a processor and a memory, wherein the memory is configured to store computer instructions, and the processor is configured to call and execute the computer instructions from the memory to implement an image decoding method (Lee: per Fig.2, a memory 240, Col.9 Lin.39, and inverse process, Col.9 Lin.43) comprising: Jeong teaches, parsing a code stream of a to-be-decoded block to obtain an inverse quantization parameter of each pixel in the to-be-decoded block and a first residual block of the to-be-decoded block (Fig.3, Par.[0109-0110]); performing inverse quantization on the first residual block based on an inverse quantization preset array and a quantization parameter QP indicated by the inverse quantization parameter of each pixel, to obtain a second residual block (inverse quantization at element 315, Fig.3); rebuilding the to-be-decoded block based on the second residual block to obtain a rebuilt block (rebuilding, at element 335, the current "to-be-decoded" block by using residual 325 in Fig.3, with a second residual used to reconstruct the current block according to Fig.15 and Par.[0225-0227], similarly identified at Fig.16, Par.[0228-0229]).

Re Claim 34. (New) Lee, Hu and Jeong disclose the electronic device according to claim 33. Lee teaches, wherein parsing the code stream of the to-be-decoded block to obtain the inverse quantization parameter of each pixel in the to-be-decoded block and the first residual block of the to-be-decoded block (parsing the bitstream at element 210 in Fig.2, and NAL syntax including the coding prediction mode, to obtain the inverse quantization parameter of each pixel and the residual, Col.9 Lin.32-67) comprises: determining a target prediction mode for predicting pixels in the to-be-decoded block and an inverse quantization parameter of each pixel in the to-be-decoded block based on the code stream of the to-be-decoded block (receiving a coded bitstream and prediction mode per Fig.3 or 5, and unit information, module 235, Fig.2, Col.11 Lin.4-14); determining a residual scanning order corresponding to the target prediction mode based on the target prediction mode (determining the residual and performing prediction per scanning order, at Fig.9 at S940-vertical, S950-horizontal and S960-zig-zag order, Col.18 Lin.19-48); wherein when the target prediction mode is a first target prediction mode, the residual scanning order is a first scanning order (e.g., the first target prediction mode being at S940-vertical in Fig.9, Col.18 Lin.19-48), and when the target prediction mode is a second target prediction mode, the residual scanning order is a second scanning order, and the first scanning order and the second scanning order are different (when the prediction mode is S950-horizontal, Col.18 Lin.19-48); parsing the code stream of the to-be-decoded block based on the residual scanning order to obtain the first residual block (parsing the residual block based on the residual scanned in the order per Fig.8, Col.4 Lin.1-3, indicated by the prediction mode group, per Fig.9 and Fig.10, Col.4 Lin.4-9).

Conclusion

5.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Other representative prior art: US 10,834,408; US 2021/0227212; US 11,825,099; CN 110383837 A; CN 112070851 B; US 11,930,210; and Johannes Olsson Sandgren, "Pixel-based video coding," Uppsala Universitet, Mar. 2014. See PTO-892 form. Applicant is required under 37 C.F.R. 1.111(c) to consider these references when responding to this action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DRAMOS KALAPODAS whose telephone number is (571) 272-4622. The examiner can normally be reached Monday-Friday, 8am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Czekaj, can be reached at 571-272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DRAMOS KALAPODAS/
Primary Examiner, Art Unit 2487
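The §103 rejection leans heavily on Lee's mode-dependent scan orders (Fig.9: S940 vertical, S950 horizontal, S960 zig-zag). For readers less familiar with the concept, here is a minimal sketch of what such block traversal orders look like; the function and its API are hypothetical illustrations, not drawn from any cited reference:

```python
def scan_order(n, mode):
    """Return the (row, col) visit order for an n x n block.

    Illustrative only: mirrors the three orders the Office Action cites
    from Lee (Fig.9) — vertical (S940), horizontal (S950), zig-zag (S960).
    """
    if mode == "vertical":    # column by column, top to bottom
        return [(r, c) for c in range(n) for r in range(n)]
    if mode == "horizontal":  # row by row, left to right
        return [(r, c) for r in range(n) for c in range(n)]
    if mode == "zigzag":      # anti-diagonals, alternating direction
        order = []
        for d in range(2 * n - 1):
            diag = [(r, d - r) for r in range(n) if 0 <= d - r < n]
            order.extend(diag if d % 2 == 0 else reversed(diag))
        return order
    raise ValueError(f"unknown mode: {mode}")
```

The point the rejection makes is that the traversal (and hence which neighbors are already reconstructed when a pixel is predicted) is a function of the intra prediction mode.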
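The claim 8 rejection also treats Jeong's truncated unary (TU) binarization as a variable-length code. A generic sketch of TU binarization as commonly defined in video coding standards (not taken from Jeong's Table 2; the function name is mine):

```python
def truncated_unary(value, cmax):
    """Truncated-unary binarization: `value` ones followed by a
    terminating zero, with the zero omitted when value == cmax,
    so codeword length varies with the value (a variable-length code)."""
    if value == cmax:
        return "1" * cmax
    return "1" * value + "0"
```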

Prosecution Timeline

Sep 26, 2024
Application Filed
Feb 14, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604039
SIGN PREDICTION FOR BLOCK-BASED VIDEO CODING
2y 5m to grant • Granted Apr 14, 2026
Patent 12598327
RESIDUAL CODING CONSTRAINT FLAG SIGNALING
2y 5m to grant • Granted Apr 07, 2026
Patent 12598301
BDPCM-BASED IMAGE CODING METHOD AND DEVICE THEREFOR
2y 5m to grant • Granted Apr 07, 2026
Patent 12593044
DEEP CONTEXTUAL VIDEO IMAGE COMPRESSION
2y 5m to grant • Granted Mar 31, 2026
Patent 12593022
STEREOSCOPIC DISPLAY SYSTEM AND LIQUID CRYSTAL SHUTTER DEVICE
2y 5m to grant • Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 99% (+28.2%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 713 resolved cases by this examiner. Grant probability derived from career allow rate.
