Prosecution Insights
Last updated: April 19, 2026
Application No. 18/881,255

METHOD, DEVICE, AND RECORDING MEDIUM FOR IMAGE ENCODING/DECODING

Non-Final OA — §102
Filed: Jan 03, 2025
Examiner: MATT, MARNIE A
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 1m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 88% (402 granted / 456 resolved) — above average, +30.2% vs TC avg
Interview Lift: +7.6% (moderate), based on resolved cases with interview
Avg Prosecution: 2y 1m — fast prosecutor; 16 applications currently pending
Total Applications: 472 across all art units (career history)

Statute-Specific Performance

§101: 5.8% (-34.2% vs TC avg)
§102: 21.9% (-18.1% vs TC avg)
§103: 47.2% (+7.2% vs TC avg)
§112: 14.9% (-25.1% vs TC avg)

Tech Center averages are estimates; based on career data from 456 resolved cases.
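The headline numbers above are simple ratios of the career data shown. A quick illustrative check of how they fit together (this is only arithmetic on the displayed figures, not the tool's actual model, and the rounding choices here are assumptions):

```python
# Illustrative recomputation of the dashboard's headline numbers from the
# career data shown above. Simple ratios only; the tool's real model
# (and its rounding) may differ.

granted, resolved = 402, 456
interview_lift_pts = 7.6  # percentage points, from the "Interview Lift" card

allow_rate_pct = round(granted / resolved * 100, 1)              # 88.2, shown as 88%
with_interview_pct = round(allow_rate_pct + interview_lift_pts)  # 96

print(f"Career allow rate: {allow_rate_pct}%")
print(f"With interview: ~{with_interview_pct}%")
```

This reproduces the "88% Grant Probability" and "96% With Interview" cards to within rounding.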

Office Action

Rejection basis: 35 U.S.C. § 102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by ZHANG et al. (From IDS: US 2019/0246143).

Regarding claim 1: ZHANG teaches an image decoding method [¶0006 teaches: a block is coded (e.g., encoded or decoded) with a block vector], comprising: deriving a block vector [¶0007 teaches: techniques that a video coder (e.g., video encoder or video decoder) may utilize to determine a block vector]; and deriving a prediction block for a current block by performing prediction based on the block vector [¶0009 teaches: determining one or more block vectors for one or more of the sub-blocks of the second color component that are predicted in intra-block copy (IBC) prediction mode based on one or more block vectors of one or more corresponding blocks of the plurality of blocks of the first color component, and coding the block of the second color component based on the one or more determined block vectors.].

Regarding claim 2: the essence of the claim is taught above in the rejection of claim 1. In addition, ZHANG teaches wherein the prediction is template matching prediction or intra template matching prediction [¶0149 teaches: The proposed decoder-side motion vector derivation (DMVD) is applied for merge mode of bi-prediction with one from the reference picture in the past and the other from the reference picture in the future; and The following describes the bilateral template matching. A bilateral template is generated as the weighted combination of the two prediction blocks, from the initial MV0 of list0 and MV1 of list1, respectively, as shown in FIG. 12. The template matching operation includes calculating cost measures between the generated template and the sample region (around the initial prediction block) in the reference picture.].

Regarding claim 3: the essence of the claim is taught above in the rejection of claim 2. In addition, ZHANG teaches wherein, when a luma component and a chroma component of the current block has independent block partitioning structures [See Figure 17A, elements 704A and 704B compared to Figure 17B, element 706], template matching prediction mode information for the luma component and template matching prediction mode information for the chroma component are independently decoded for the luma component and the chroma component, respectively [¶0169 teaches: When the partition tree is decoupled for different color components, the signaling of IBC mode and/or motion vectors of IBC for a block may only applied to one color component (e.g., luma component). Alternatively, furthermore, the block of a component (e.g., Cb or Cr) which is coded after a pre-coded component (e.g., Luma) always inherits the usage of IBC mode of a corresponding block of that pre-coded component.].

Regarding claim 4: the essence of the claim is taught above in the rejection of claim 2. In addition, ZHANG teaches wherein: a block vector of the current block is stored in a motion information buffer [¶0241 teaches: In this case, the prediction information syntax elements may indicate a reference picture in DPB 314 from which to retrieve a reference block, as well as a motion vector identifying a location of the reference block in the reference picture relative to the location of the current block in the current picture.], the block vector is added to a block vector candidate list for a next block of the current block [¶0070 teaches: a motion vector itself may be referred in a way that it is assumed that it has an associated reference index. A reference index is used to identify a reference picture in the current reference picture list (RefPicList0 or RefPicList1)], and the prediction is template matching prediction or intra template matching prediction [¶0142 teaches: As shown in FIG. 10, template matching is used to derive motion information of the current block by finding the best match between a template].

Regarding claim 5: the essence of the claim is taught above in the rejection of claim 1. In addition, ZHANG teaches wherein the block vector is a template matching block vector [¶0202 teaches: template matching is conducted with the reference picture identical to the current picture if the seed motion vector refers to the current picture. In one example, bi-literal matching cannot be conducted if at least one of the two motion vectors refers to the current picture, i.e., at least one of the two motion vectors is a block vector in IBC].

Regarding claim 6: the essence of the claim is taught above in the rejection of claim 1. In addition, ZHANG teaches wherein scaling is performed on the template matching block vector [¶0254 teaches: For example, the video coder may determine that chroma sub-blocks 708A and 708B are to inherit the block vectors of luma blocks 704A and 704B, and in response, the video coder may determine the block vectors for chroma sub-blocks 708A and 708B based on the block vectors of luma blocks 704A and 704B; and ¶0255 teaches: In some examples, to determine the one or more block vectors for one or more sub-blocks, the video coder may be configured to scale the one or more block vectors of the one or more corresponding blocks of the plurality of blocks of the first color component based on a sub-sampling format of the first color component and the second color component.].

Regarding claim 7: the claim is merely an image encoding method, which complements the image decoding method of claim 1. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 1 applies equally to this claim.

Regarding claim 8: the claim is merely an image encoding method, which complements the image decoding method of claim 2. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 2 applies equally to this claim.

Regarding claim 9: the claim is merely an image encoding method, which complements the image decoding method of claim 3. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 3 applies equally to this claim.

Regarding claim 10: the claim is merely an image encoding method, which complements the image decoding method of claim 4. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 4 applies equally to this claim.

Regarding claim 11: the claim is merely an image encoding method, which complements the image decoding method of claim 5. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 5 applies equally to this claim.

Regarding claim 12: the claim is merely an image encoding method, which complements the image decoding method of claim 6. ZHANG teaches encoding [techniques that a video coder (e.g., video encoder or video decoder) may utilize, Abstract]. Therefore, the rejection of claim 6 applies equally to this claim.

Regarding claim 13: ZHANG teaches a non-transitory computer-readable storage medium [¶0207 teaches: The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof.] for storing a bitstream generated by the image encoding method of claim 7 [See rejection of claim 7 above.].

Regarding claim 14: ZHANG teaches a non-transitory computer-readable storage medium for storing a bitstream [¶0207 teaches: The units may be implemented as fixed-function circuits, programmable circuits, or a combination thereof.] for image decoding, wherein: the bitstream comprises prediction mode information, a block vector is derived using the prediction mode information [¶0007 teaches: techniques that a video coder (e.g., video encoder or video decoder) may utilize to determine a block vector], and a prediction block for a current block is derived by performing prediction based on the block vector [¶0009 teaches: determining one or more block vectors for one or more of the sub-blocks of the second color component that are predicted in intra-block copy (IBC) prediction mode based on one or more block vectors of one or more corresponding blocks of the plurality of blocks of the first color component, and coding the block of the second color component based on the one or more determined block vectors.].

Regarding claim 15: the claim is merely a non-transitory computer-readable storage medium for image decoding wherein the prediction is template matching prediction or intra template matching prediction, as in claim 2. Therefore, the rejection of claim 2 applies equally to this claim.

Regarding claim 16: the claim is merely a non-transitory computer-readable storage medium for storing a bitstream with the image decoding method of claim 3. Therefore, the rejection of claim 3 applies equally to this claim.

Regarding claim 17: the claim is merely a non-transitory computer-readable storage medium for storing a bitstream with the image decoding method of claim 4. Therefore, the rejection of claim 4 applies equally to this claim.

Regarding claim 18: the claim is merely a non-transitory computer-readable storage medium for storing a bitstream with the image decoding method of claim 5. Therefore, the rejection of claim 5 applies equally to this claim.

Regarding claim 19: the claim is merely a non-transitory computer-readable storage medium for storing a bitstream with the image decoding method of claim 6. Therefore, the rejection of claim 6 applies equally to this claim.
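For readers outside video coding: the template-matching rejections (claims 2, 5, and their encoder/medium counterparts) turn on the idea of comparing a template of reconstructed samples against candidate regions and keeping the lowest-cost match. A minimal sketch of that cost search, using SAD as the cost measure (the function names, 1-D sample data, and choice of SAD are illustrative assumptions, not taken from ZHANG or the claims):

```python
# Illustrative template-matching cost search. Real codecs operate on 2-D
# sample blocks and may use other cost measures; this 1-D SAD version
# only demonstrates the "pick the lowest-cost candidate" mechanic.

def sad(a, b):
    """Sum of absolute differences between two equal-length sample rows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_candidate(template, candidates):
    """Return (index, cost) of the candidate region best matching the template."""
    costs = [sad(template, c) for c in candidates]
    best = min(range(len(costs)), key=costs.__getitem__)
    return best, costs[best]

template = [100, 102, 98, 101]        # reconstructed neighboring samples
candidates = [
    [90, 95, 110, 120],               # poor match
    [101, 101, 99, 100],              # close match
    [100, 102, 98, 101],              # exact match
]
print(best_candidate(template, candidates))  # -> (2, 0)
```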
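Similarly, the scaling rejection (claims 6 and 12) relies on ZHANG's ¶0255, where inherited luma block vectors are scaled by the chroma sub-sampling format. A minimal sketch of that idea, assuming common sub-sampling divisors and floor division (the function name, format table, and rounding behavior are illustrative assumptions; actual codecs may round differently):

```python
# Illustrative block-vector scaling for decoupled luma/chroma partitioning:
# a chroma sub-block inherits a luma block vector, scaled by the chroma
# format's sub-sampling factors. Table and rounding are assumptions.

# (horizontal divisor, vertical divisor) per chroma format
SUBSAMPLING = {
    "4:4:4": (1, 1),
    "4:2:2": (2, 1),
    "4:2:0": (2, 2),
}

def scale_luma_bv_for_chroma(bv_x: int, bv_y: int, fmt: str = "4:2:0"):
    """Scale a luma-domain block vector into the chroma sample grid."""
    sx, sy = SUBSAMPLING[fmt]
    return bv_x // sx, bv_y // sy

# A luma block vector of (-16, -8) maps to (-8, -4) on a 4:2:0 chroma grid.
print(scale_luma_bv_for_chroma(-16, -8))  # -> (-8, -4)
```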
Conclusion

Prior art not relied upon: Please refer to the references listed in the attached PTO-892 that are not relied upon for the claim rejections detailed above. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. In particular:

JANG (US 2022/0132137) teaches an image decoding method of deriving a first motion vector predictor for a first prediction direction of a current block and a second motion vector predictor for a second prediction direction, and deriving a first motion vector difference for the first prediction direction of the current block and a second motion vector difference for the second prediction direction using information on a motion vector difference.

ABE et al. (US 2024/0048691) teaches a decoder that derives first motion vectors for a first block obtained by splitting a picture, using a first inter prediction scheme, and generates a prediction image corresponding to the first block by referring to spatial gradients of luminance generated based on the first motion vectors.

LEE et al. (US 2022/0256187) teaches deriving an initial motion vector from a merge candidate list of a current block.

LEE et al. (US 2020/0314444) teaches a step of deriving two motion vectors (MV) for a current block, the two MVs including MVL0 for the L0 and MVL1 for the L1.

In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Marnie Matt, whose telephone number is (303) 297-4255. The examiner can normally be reached Monday - Friday, 8:30-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARNIE A MATT/
Primary Examiner, Art Unit 2485

Prosecution Timeline

Jan 03, 2025 — Application Filed
Feb 10, 2026 — Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604246 — CHANNEL SWITCHING FOR MULTI-LINK DEVICES — granted Apr 14, 2026 (2y 5m to grant)
Patent 12604029 — Motion Candidates Derivation — granted Apr 14, 2026 (2y 5m to grant)
Patent 12593017 — METHOD FOR ENCODING AND METHOD FOR DECODING A LUT AND CORRESPONDING DEVICES — granted Mar 31, 2026 (2y 5m to grant)
Patent 12593033 — COLOUR COMPONENT PREDICTION METHOD, ENCODER, DECODER AND STORAGE MEDIUM — granted Mar 31, 2026 (2y 5m to grant)
Patent 12581121 — DEVICE AND METHOD FOR ENCODING VIDEO DATA USING MAXIMUM BIT-DEPTH CONSTRAINT INFORMATION — granted Mar 17, 2026 (2y 5m to grant)

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 96% (+7.6%)
Median Time to Grant: 2y 1m
PTA Risk: Low

Based on 456 resolved cases by this examiner; grant probability is derived from the career allow rate.
