Prosecution Insights
Last updated: April 19, 2026
Application No. 17/994,400

Search Memory Management For Video Coding

Final Rejection §103
Filed: Nov 28, 2022
Examiner: DOBBS, KRISTIN SENSMEIER
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: MediaTek Inc.
OA Round: 2 (Final)

Grant Probability: 61% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
With Interview: 75%

Examiner Intelligence

Career Allow Rate: 61% (179 granted / 295 resolved; +2.7% vs TC avg)
Interview Lift: +14.7% (moderate lift; resolved cases with vs. without interview)
Typical Timeline: 4y 0m average prosecution; 11 applications currently pending
Career History: 306 total applications across all art units

Statute-Specific Performance

§101: 5.3% (-34.7% vs TC avg)
§103: 67.4% (+27.4% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 3.9% (-36.1% vs TC avg)

Deltas are relative to the Tech Center average estimate; based on career data from 295 resolved cases.
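The statute-level deltas above are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center baseline for every statute. A minimal sketch, assuming the deltas are simple differences against a single TC-wide average (an inference from the figures, not a stated formula):

```python
# Assumed relationship (inferred, not documented): each "vs TC avg"
# delta is the examiner's statute-specific rate minus a Tech Center
# average, so the implied baseline is rate - delta.
rates = {"§101": 5.3, "§103": 67.4, "§102": 11.4, "§112": 3.9}      # examiner, %
deltas = {"§101": -34.7, "§103": 27.4, "§102": -28.6, "§112": -36.1}  # vs TC avg

implied_tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(implied_tc_avg)  # every statute implies the same baseline: 40.0
```

That every statute implies the same 40.0% baseline suggests the comparison is against one overall Tech Center figure rather than statute-specific averages.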

Office Action (§103)

DETAILED ACTION

This Office Action for U.S. Patent Application 17/994,400 is responsive to communications filed on 8/18/25, in reply to the Non-Final Rejection of 5/21/25. Currently, claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 7/28/25 is in accordance with provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

Applicant’s amendments to claims 1, 4, and 11 are acknowledged. The addition of new claims 21 and 22 is also acknowledged.

Response to Arguments

Applicant's arguments filed 8/18/25 have been fully considered but they are not persuasive.

Regarding claim 1, Applicant argues on page 12 of the Response that Pang and Suzuki do not teach “determining, for the at least one of the plurality of reference pictures, a SR within the at least one of the plurality of reference pictures”, as amended (emphasized portion of the claim as in the Response page 12). However, Pang teaches providing a coded block to summer 150 to generate residual block data and to summer 162 to reconstruct the encoded block for use within a search region and/or as a reference picture (para [0095]-[0096]). The encoded block would be “within” the search region and also be a reference picture; therefore, the search region would be “within” a reference picture in Pang.

Applicant also argues on page 12 of the Response that Pang and Suzuki do not teach “based on pixel data from the SR within the at least one of the plurality of reference pictures”, of claim 1 (as emphasized by applicant). However, Suzuki teaches inter-picture predictive encoding where the predicted signal for a block serving as an encoding target can be generated by searching previously-reconstructed pictures for a signal similar to a pixel signal of the target block 702 in encoding target picture 701. In addition, the picture 703 will be referred to as a reference picture.
The search range 705 around the region 704 is set and a region 706 to minimize the sum of absolute errors from the pixel signal of the target block 702 is detected from a pixel signal of this search range (“based on pixel data from the SR”) (Fig. 15; para [0039]-[0040]). Therefore, Pang and Suzuki teach all of the limitations of claim 1. In addition, please see the below-stated rejection of the claims.

Regarding claims 2, 11, and 12, please see the above-stated discussion for claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 11-12, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Pang et al. (U.S. Pub. No. 2015/0271517) in view of Suzuki et al. (U.S. Pub. No. 2013/0301708).

In regard to claim 1, Pang teaches a method of processing a current block of a current picture (i.e., coding mode in which a current block of video data in a current picture is predicted) (para [0006]), comprising: determining, for at least one of the plurality of reference pictures, a search range (SR) size based on the quantity (i.e., the size and/or shape of the search region may…be varied and signaled at the block, slice, picture, sequence, or other level) (para [0026]); determining, for the at least one of the plurality of reference pictures, a SR within at least one of the plurality of reference pictures based on the SR size and a location of the current block (i.e., prediction processing unit 142 may select one of the coding modes, e.g., intra, inter, or Intra BC, based on error results, and provides the resulting coded block to summer 150 to generate residual block data and to summer 162 to reconstruct the encoded block for use within a search region and/or as a reference picture; video encoder 20 may calculate values for sub-integer pixel positions of reference pictures stored in reference picture memory 168; the size and/or shape of search region 48 varies based on a location of current CTU 44 within picture 42 in a manner known to both video encoder 20 and video decoder 30, and the video coders determine the size and/or shape of search region 48 based on the CTU location) (para [0095]-[0096]; Fig. 2; para [0067]).

However, Pang does not explicitly teach determining a quantity of a plurality of reference pictures of the current picture, nor does it teach coding the current block based on pixel data from the SR within the at least one of the plurality of reference pictures.
In the same field of endeavor, Suzuki teaches determining a quantity of a plurality of reference pictures of the current picture (i.e., for example, according to (A-2) of FIG. 3, which is an example of reference picture lists corresponding to (A) of FIG. 2, even if the two of the uni-predictions using L0 and the uni-prediction using L1 are added to candidates for prediction modes, the four reference pictures 401, 402, 404, and 405 become candidates for reference pictures to be used in the uni-prediction) (Figs. 2-3; para [0047]; see also generally para [0043]-[0048]), and teaches coding the current block based on pixel data from the SR within the at least one of the plurality of reference pictures (i.e., in inter-picture predictive encoding, the predicted signal for a block serving as an encoding target can be generated by searching previously-reconstructed pictures for a signal similar to a pixel signal of the target block; target block 702 in encoding target picture 701; the picture 703 will be referred to as a reference picture; the search range 705 around the region 704 is set and a region 706 to minimize the sum of absolute errors from the pixel signal of the target block 702 is detected from a pixel signal of this search range) (Fig. 15; para [0039]-[0040]).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Pang and Suzuki because Suzuki teaches bi-prediction and uni-prediction, and the related efficiencies based on the number of reference pictures used (see, for example, para [0048] of Suzuki). Therefore, it would have been obvious to combine the teachings of Pang and Suzuki.

In regard to claim 2, Pang and Suzuki teach all of the limitations of claim 1 as discussed above.
However, Pang does not explicitly teach wherein the determining of the quantity comprises examining one or more lists each comprising one or more indices, each of the one or more indices corresponding to one of the plurality of reference pictures.

In the same field of endeavor, Suzuki teaches wherein the determining of the quantity comprises examining one or more lists each comprising one or more indices, each of the one or more indices corresponding to one of the plurality of reference pictures (i.e., for example, according to (A-2) of FIG. 3, which is an example of reference picture lists corresponding to (A) of FIG. 2, even if the two of the uni-predictions using L0 and the uni-prediction using L1 are added to candidates for prediction modes, the four reference pictures 401, 402, 404, and 405 become candidates for reference pictures to be used in the uni-prediction) (Figs. 2-3; para [0047]; see also generally para [0043]-[0048]).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Pang and Suzuki for the same reasons as those discussed above for claim 1.
In regard to claim 11, Pang teaches an apparatus, comprising: a reference picture buffer (RPB) (i.e., reference picture memory 168) (para [0092]) configured to store a plurality of reference pictures of a current picture, and one or more reference picture lists (RPLs) each configured to store one or more indices (i.e., buffers that store reference video data for use in encoding video data by video encoder 20 (e.g., intra- or inter-coding modes, also referred to as intra- or inter-prediction coding modes, as well as intra-BC mode)) (para [0092]), each of the one or more indices corresponding to one of the plurality of reference pictures (i.e., the reference picture may be selected from a first reference picture list (List 0) or second reference picture list (List 1), each of which identify one or more reference pictures stored in reference picture memory 168) (para [0097]); a search memory (i.e., search region memory 164) (para [0092]); a processor (i.e., video encoder 20 and video decoder 30 each may be implemented as any of a variety of suitable encoder or decoder circuitry, as applicable, such as one or more microprocessors, digital signal processors (DSPs), etc.) (para [0039]) configured to perform operations comprising: determining, for at least one of the plurality of reference pictures, a search range (SR) size based on the quantity (i.e., the size and/or shape of the search region may…be varied and signaled at the block, slice, picture, sequence, or other level) (para [0026]); determining a SR within the at least one of the plurality of reference pictures based on the SR size and a location of the current block (i.e., prediction processing unit 142 may select one of the coding modes, e.g., intra, inter, or Intra BC, based on error results, and provides the resulting coded block to summer 150 to generate residual block data and to summer 162 to reconstruct the encoded block for use within a search region and/or as a reference picture; video encoder 20 may calculate values for sub-integer pixel positions of reference pictures stored in reference picture memory 168; the size and/or shape of search region 48 varies based on a location of current CTU 44 within picture 42 in a manner known to both video encoder 20 and video decoder 30, and the video coders determine the size and/or shape of search region 48 based on the CTU location) (para [0095]-[0096]; Fig. 2; para [0067]); and storing pixel data within the SR to the search memory (i.e., inverse quantization processing unit 158 and inverse transform processing unit 160 apply inverse quantization and inverse transformation, respectively, to reconstruct the residual block in the pixel domain; summer 162 adds the reconstructed residual block to the predictive block to produce a reconstructed video block for storage in one or both of search region memory 164 and reference picture memory 168) (para [0105]); and a coding module configured to code the current block using the pixel data stored in the search memory (i.e., search region memory 164 stores reconstructed video blocks according to the definition or determination of the search region for Intra BC of a current video block by video encoder 20, e.g., Intra BC processing unit 149, using any of the techniques described herein) (para [0105]-[0106]).

However, Pang does not explicitly teach determining a quantity of the plurality of reference pictures based on the one or more RPLs.

In the same field of endeavor, Suzuki teaches determining a quantity of the plurality of reference pictures based on the one or more RPLs (i.e., for example, according to (A-2) of FIG. 3, which is an example of reference picture lists corresponding to (A) of FIG. 2, even if the two of the uni-predictions using L0 and the uni-prediction using L1 are added to candidates for prediction modes, the four reference pictures 401, 402, 404, and 405 become candidates for reference pictures to be used in the uni-prediction) (Figs. 2-3; para [0047]; see also generally para [0043]-[0048]).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Pang and Suzuki because Suzuki teaches bi-prediction and uni-prediction, and the related efficiencies based on the number of reference pictures used (see, for example, para [0048] of Suzuki). Therefore, it would have been obvious to combine the teachings of Pang and Suzuki.

In regard to claim 12, Pang and Suzuki teach all of the limitations of claim 11 as discussed above.
In addition, Pang teaches further comprising: a motion estimation module (i.e., motion estimation unit 144) (para [0090]) configured to determine, for the at least one of the plurality of reference pictures, a macro motion vector (MMV) representing a spatial displacement from the current picture to the at least one of the plurality of reference pictures (i.e., motion estimation unit 144 and motion compensation unit 146 perform inter-predictive coding of the received video block relative to one or more blocks in one or more reference pictures to provide temporal compression or provide inter-view compression; intra-prediction processing unit 148 may alternatively perform intra-predictive coding of the received video block relative to one or more neighboring blocks in the same picture or slice as the block to be coded to provide spatial compression) (para [0094]), wherein the determining of the SR is further based on the MMV (i.e., intra BC processing unit 149 may generate block vectors and fetch predictive blocks in a manner similar to that described above with respect to motion vectors, motion estimation unit 144, and motion compensation unit 146, but with the predictive blocks being in the same picture or frame as the current block and, more particularly, within a search region within the current picture) (para [0101]).

In regard to claim 22, Pang and Suzuki teach all of the limitations of claim 1 as discussed above. In addition, Pang teaches wherein the current block and the SR are located on different pictures (i.e., a portion of the search region may be unavailable because it is on a different side of a boundary, such as a…picture boundary, than the current block) (Figs. 3A-3B; para [0076]).

Allowable Subject Matter

Claims 3-10 and 13-21 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
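Suzuki's cited search procedure (target block 702, search range 705, best-match region 706) is classic block-matching motion estimation: scan candidate positions within the SR of a reference picture and keep the displacement that minimizes the sum of absolute differences. A minimal sketch, with hypothetical names and synthetic pixel data, illustrating the general technique rather than code from either reference:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(p - q) for ra, rb in zip(a, b) for p, q in zip(ra, rb))

def search_motion_vector(cur, ref, bx, by, bsize, sr):
    """Full search over a (2*sr+1)^2 window of reference picture `ref`,
    centered at the current block's location (bx, by); returns the
    displacement (dx, dy) and cost of the best SAD match in the SR."""
    h, w = len(ref), len(ref[0])
    target = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best = (0, 0, float("inf"))
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + bsize > w or y + bsize > h:
                continue  # candidate block would fall outside the reference picture
            cand = [row[x:x + bsize] for row in ref[y:y + bsize]]
            cost = sad(target, cand)
            if cost < best[2]:
                best = (dx, dy, cost)
    return best

# Synthetic 8x8 pictures: `cur` is `ref` shifted left by one pixel, so the
# best match for the block at (2, 2) lies one pixel to the right in `ref`.
ref = [[x * 10 + y for x in range(8)] for y in range(8)]
cur = [[(x + 1) * 10 + y if x < 7 else 0 for x in range(8)] for y in range(8)]
print(search_motion_vector(cur, ref, 2, 2, 2, 2))  # → (1, 0, 0)
```

Production encoders replace this exhaustive scan with fast search patterns, but the full search makes the claimed relationship concrete: the SR bounds which reference-picture pixel data the coder must keep in search memory.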
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kristin Dobbs, whose telephone number is (571)270-7936. The examiner can normally be reached Monday and Thursday 9:30am-5:30pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sathyanarayanan Perungavoor, can be reached at (571)272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KRISTIN DOBBS/
Examiner, Art Unit 2488

/WILLIAM C VAUGHN JR/
Supervisory Patent Examiner, Art Unit 2481

Prosecution Timeline

Nov 28, 2022
Application Filed
May 17, 2025
Non-Final Rejection — §103
Aug 18, 2025
Response Filed
Dec 05, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593063: IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, IMAGE DECODING APPARATUS, AND IMAGE CODING AND DECODING APPARATUS (2y 5m to grant; granted Mar 31, 2026)
Patent 12542965: OBJECT TRACKING METHOD AND ELECTRONIC DEVICE (2y 5m to grant; granted Feb 03, 2026)
Patent 12526417: METHODS AND APPARATUS FOR DETERMINING QUANTIZATION PARAMETER PREDICTORS FROM A PLURALITY OF NEIGHBORING QUANTIZATION PARAMETERS (2y 5m to grant; granted Jan 13, 2026)
Patent 12511915: IMAGE PROCESSING APPARATUS (2y 5m to grant; granted Dec 30, 2025)
Patent 12505505: SUPER-RESOLUTION VIDEO PROCESSING METHOD AND SYSTEM FOR EFFECTIVE VIDEO COMPRESSION (2y 5m to grant; granted Dec 23, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 61%
With Interview: 75% (+14.7%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate

Based on 295 resolved cases by this examiner. Grant probability derived from career allow rate.
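As a sanity check, the headline figures can be reproduced from the career data shown above. Treating the interview lift as an additive percentage-point adjustment is an assumption about how the dashboard combines the numbers, not a documented formula:

```python
# Figures taken from the "Examiner Intelligence" section above;
# the additive interview lift is an assumption, not the vendor's formula.
granted, resolved = 179, 295   # examiner's career grant/resolution counts
interview_lift_pp = 14.7       # percentage points, from "Interview Lift"

allow_rate = granted / resolved * 100
print(round(allow_rate))                      # → 61 (Grant Probability)
print(round(allow_rate + interview_lift_pp))  # → 75 (With Interview)
```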
