Prosecution Insights
Last updated: April 19, 2026
Application No. 18/896,773

METHOD FOR PICTURE PROCESSING AND STORAGE MEDIUM

Non-Final OA: §102, §112
Filed
Sep 25, 2024
Examiner
MAHMUD, FARHAN
Art Unit
2483
Tech Center
2400 — Computer Networks
Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round
1 (Non-Final)
Grant Probability: 55% (Moderate)
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 65%

Examiner Intelligence

Grants 55% of resolved cases.
Career Allow Rate: 55% (212 granted / 386 resolved; -3.1% vs TC avg)
Interview Lift: +10.1% (moderate lift; resolved cases with interview vs. without)
Avg Prosecution: 3y 4m typical timeline; 40 currently pending
Total Applications: 426 career history, across all art units
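The headline figures in this panel follow from simple arithmetic on the career counts shown above. A minimal sketch (assuming, as the page's numbers suggest, that the with-interview probability is simply the base allow rate plus the reported +10.1% lift):

```python
granted, resolved = 212, 386

# Career allow rate: granted / resolved cases
allow_rate = granted / resolved
print(f"allow rate: {allow_rate:.1%}")          # ~54.9%, shown as 55% above

# With-interview probability, assuming base rate + reported +10.1% lift
with_interview = allow_rate + 0.101
print(f"with interview: {with_interview:.1%}")  # ~65.0%
```

212 / 386 rounds to the 55% career rate displayed above, and adding the 10.1-point lift reproduces the 65% with-interview figure.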

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 43.7% (+3.7% vs TC avg)
§102: 38.0% (-2.0% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 386 resolved cases
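Each statute's delta is stated relative to the Tech Center average, so the implied baseline can be back-solved as a consistency check. A quick sketch (rates and deltas read off the chart above; exactly what each percentage measures is not stated on this page):

```python
# (examiner rate %, delta vs Tech Center average %) per statute, from the chart
stats = {
    "101": (4.6, -35.4),
    "103": (43.7, +3.7),
    "102": (38.0, -2.0),
    "112": (9.8, -30.2),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average for this statute
    print(f"section {statute}: TC avg ~ {tc_avg:.1f}%")
```

All four statutes back-solve to the same ~40% Tech Center baseline, which is consistent with the single black average line described in the chart caption.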

Office Action

§102, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted 09/25/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-10 and 16-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 3 and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential steps, such omission amounting to a gap between the steps. See MPEP § 2172.01. The omitted steps are: the variable N is undefined. The variable N appears to be used both as an integer bound for the index i and as a variable for particular feature information selected by index i. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lim et al. (US 20210136379 A1).

Regarding Claim 1, Lim et al. teaches a method for picture processing (Abstract), comprising: decoding a bitstream to obtain a quantization coefficient of a current picture block (Paragraph 119); determining a quantization parameter corresponding to the current picture block, and obtaining a transform coefficient of the current picture block by performing inverse quantization on the quantization coefficient based on the quantization parameter (Paragraph 7); determining a reconstructed picture block of the current picture block according to the transform coefficient (Paragraphs 128-132); and obtaining an enhanced picture block by performing quality enhancement on the reconstructed picture block based on the quantization parameter (Paragraph 5; Paragraph 9; Paragraphs 174-175; Paragraph 201; Paragraph 227; Paragraphs 253-254).

Regarding Claim 2, Lim et al. teaches the method of claim 1, wherein obtaining the enhanced picture block by performing quality enhancement on the reconstructed picture block based on the quantization parameter comprises: obtaining first feature information of the reconstructed picture block by performing feature weighting on the reconstructed picture block based on the quantization parameter; and determining the enhanced picture block according to the first feature information (Paragraph 146; Paragraph 166; Paragraph 233; Paragraphs 241-247; Paragraphs 253-255).

Regarding Claim 3, Lim et al. teaches the method of claim 2, wherein obtaining the first feature information of the reconstructed picture block by performing feature weighting on the reconstructed picture block based on the quantization parameter comprises: obtaining i-th feature information of the reconstructed picture block by performing feature weighting on (i-1)-th feature information of the reconstructed picture block based on the quantization parameter, where i is a positive integer from 1 to N, and obtaining N-th feature information of the reconstructed picture block by repeating feature weighting, wherein when i = 1, the (i-1)-th feature information is the reconstructed picture block; and determining the first feature information of the reconstructed picture block according to the N-th feature information (Paragraphs 112-117; Paragraph 146; Paragraph 166; Paragraph 233; Paragraphs 241-247; Paragraphs 253-255; Paragraph 270).

Regarding Claim 4, Lim et al. teaches the method of claim 3, wherein obtaining the i-th feature information of the reconstructed picture block by performing feature weighting on the (i-1)-th feature information of the reconstructed picture block based on the quantization parameter comprises: extracting M feature information of different scales from the (i-1)-th feature information, where M is a positive integer greater than 1; obtaining i-th weighted feature information by weighting the M feature information of different scales; and determining the i-th feature information according to the i-th weighted feature information (Paragraphs 112-117; Paragraphs 144-149; Paragraphs 164-167; Paragraphs 190-195; Paragraph 233; Paragraphs 241-248; Paragraphs 253-255; Paragraph 270).

Regarding Claim 5, Lim et al. teaches the method of claim 4, wherein obtaining the i-th weighted feature information by weighting the M feature information of different scales comprises: obtaining first concatenated feature information by concatenating the M feature information of different scales; and obtaining the i-th weighted feature information by weighting the first concatenated feature information (Paragraph 78; Paragraphs 112-117; Paragraphs 120-121; Paragraphs 144-149; Paragraphs 164-167; Paragraphs 190-195; Paragraph 233; Paragraphs 241-248; Paragraphs 253-255; Paragraph 270).

Regarding Claim 6, Lim et al. teaches the method of claim 4, wherein determining the i-th feature information according to the i-th weighted feature information comprises: determining a sum of the i-th weighted feature information and the (i-1)-th feature information as the i-th feature information; or determining the i-th weighted feature information as the i-th feature information (Paragraph 78; Paragraphs 112-117; Paragraphs 120-121; Paragraphs 144-149; Paragraphs 164-167; Paragraphs 190-195; Paragraph 233; Paragraphs 241-248; Paragraphs 253-255; Paragraph 270).

Regarding Claim 7, Lim et al. teaches the method of claim 3, wherein obtaining the first feature information of the reconstructed picture block by performing feature weighting on the reconstructed picture block based on the quantization parameter comprises: extracting second feature information of the reconstructed picture block based on the quantization parameter; and obtaining the first feature information of the reconstructed picture block by performing feature weighting on the second feature information (Paragraph 146; Paragraph 166; Paragraph 233; Paragraphs 241-247; Paragraphs 253-255).

Regarding Claim 8, Lim et al. teaches the method of claim 7, wherein extracting the second feature information of the reconstructed picture block based on the quantization parameter comprises: obtaining concatenated information by concatenating the reconstructed picture block and the quantization parameter; and obtaining the second feature information by performing feature extraction on the concatenated information (Paragraphs 144-149; Paragraphs 164-167; Paragraphs 190-195; Paragraph 233; Paragraphs 241-247; Paragraphs 253-255).

Regarding Claim 9, Lim et al. teaches the method of claim 7, wherein determining the first feature information of the reconstructed picture block according to the N-th feature information comprises: obtaining the first feature information of the reconstructed picture block according to the N-th feature information and at least one of first N-1 feature information prior to the N-th feature information (Paragraph 78; Paragraphs 112-117; Paragraphs 120-121; Paragraphs 144-149; Paragraphs 164-167; Paragraphs 190-195; Paragraph 233; Paragraphs 241-248; Paragraphs 253-255; Paragraph 270).

Regarding Claim 10, Lim et al. teaches the method of claim 9, wherein obtaining the first feature information of the reconstructed picture block according to the N-th feature information and the at least one of the first N-1 feature information prior to the N-th feature information comprises: obtaining second concatenated feature information by concatenating the at least one of the first N-1 feature information, the N-th feature information, and the second feature information; and obtaining the first feature information of the reconstructed picture block by performing feature extraction on the second concatenated feature information (Paragraph 78; Paragraphs 112-117; Paragraphs 120-121; Paragraphs 144-149; Paragraphs 164-167; Paragraphs 190-195; Paragraph 233; Paragraphs 241-248; Paragraphs 253-255; Paragraph 270).

Regarding Claim 11, Lim et al. teaches the method of claim 1, further comprising: decoding the bitstream to obtain a first flag, wherein the first flag indicates whether quality enhancement is allowed for the reconstructed picture block of the current picture block; wherein obtaining the enhanced picture block by performing quality enhancement on the reconstructed picture block based on the quantization parameter comprises: obtaining the enhanced picture block by performing quality enhancement on the reconstructed picture block based on the quantization parameter, based on a determination according to the first flag that quality enhancement is allowed for the reconstructed picture block (Paragraph 146; Paragraph 166; Paragraph 233; Paragraphs 241-247; Paragraphs 253-255).

Regarding Claim 12, Lim et al. teaches the method of claim 1, further comprising: obtaining a test enhanced picture block by performing quality enhancement on the reconstructed picture block based on the quantization parameter; and determining a first picture quality of the test enhanced picture block and a second picture quality of the reconstructed picture block; wherein obtaining the enhanced picture block by performing quality enhancement on the reconstructed picture block based on the quantization parameter comprises: determining the test enhanced picture block as the enhanced picture block of the reconstructed picture block if the first picture quality is greater than the second picture quality (Paragraphs 100-107; Paragraph 146; Paragraph 166; Paragraph 233; Paragraphs 241-247; Paragraphs 253-255).

Regarding Claim 13, Lim et al. teaches the method of claim 1, wherein obtaining the enhanced picture block by performing quality enhancement on the reconstructed picture block based on the quantization parameter comprises: normalizing the reconstructed picture block and the quantization parameter; and obtaining the enhanced picture block based on the normalized reconstructed picture block and the normalized quantization parameter (Paragraph 146; Paragraph 166; Paragraph 233; Paragraphs 241-247; Paragraphs 253-255).

Method claims 14-19 are drawn to the encoding method associated with the decoding method of claims 1-4, 7 and 12 above, are drawn to similar limitations performed in the inverse, and are rejected for the same reasons as used above. Lim et al. further teaches encoding the current picture block based on the quantization parameter to obtain a quantization coefficient of the current picture block (Paragraph 6; Paragraph 9; Paragraphs 51-54).

Regarding claim 20, claim 20 recites a product-by-process claim limitation where the product is the bitstream and the process is the method steps to generate the bitstream. MPEP §2113 recites "Product-by-Process claims are not limited to the manipulations of the recited steps, only the structure implied by the steps". Thus, the scope of the claim is the storage medium storing the bitstream (with the structure implied by the method steps). The structure includes the information and samples manipulated by the steps. "To be given patentable weight, the printed matter and associated product must be in a functional relationship. A functional relationship can be found where the printed matter performs some function with respect to the product to which it is associated". MPEP §2111.05(I)(A). When a claimed computer-readable medium merely serves as a support for information or data, no functional relationship exists. MPEP §2111.05(III).

The non-transitory computer-readable storage medium storing the claimed bitstream in claim 20 merely serves as a support for the storage of the bitstream and provides no functional relationship between the stored bitstream and the storage medium. Therefore the bitstream, whose scope is implied by the method steps, is non-functional descriptive material and is given no patentable weight. MPEP §2111.05(III). Thus, the claim scope is just a storage medium storing data and is anticipated by Lim et al. as follows.

Regarding Claim 20, Lim et al. teaches the limitations of claim 14 above. Lim et al. further teaches a non-transitory computer-readable storage medium storing a bitstream, the bitstream being generated according to the method of claim 14 (Paragraph 74; Paragraph 8).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARHAN MAHMUD whose telephone number is (571) 272-7712. The examiner can normally be reached 10-7. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /FARHAN MAHMUD/Primary Examiner, Art Unit 2483
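The iterative feature weighting recited in claim 3, which the §112(b) rejection flags as indefinite, can be sketched under one of its two possible readings: N taken as the total number of weighting steps rather than as a piece of feature information. Here `feature_weight` is a hypothetical stand-in for the claimed per-step operation, not anything from the application or the Lim reference:

```python
def enhance(reconstructed_block, qp, N, feature_weight):
    """Iterative feature weighting per one reading of claim 3."""
    feature = reconstructed_block              # when i = 1, the (i-1)-th feature
    for i in range(1, N + 1):                  # i is a positive integer from 1 to N
        feature = feature_weight(feature, qp)  # i-th info from (i-1)-th info
    return feature                             # the N-th feature information

# Toy usage: three weighting steps that each scale by the quantization parameter
print(enhance(1.0, 0.5, 3, lambda f, qp: f * qp))  # 0.125
```

Under the other reading, where N also names a particular selected feature information, the loop bound becomes circular; that dual use is the gap the examiner identifies.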

Prosecution Timeline

Sep 25, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604019
SYSTEM AND APPARATUS FOR VIDEO DISPLAY ON A PORTABLE DISPLAY DEVICE
Granted Apr 14, 2026 • 2y 5m to grant
Patent 12581077
ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12563229
3D PREDICTION METHOD FOR VIDEO CODING
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12542908
SYSTEM AND METHOD FOR FACILITATING MACHINE-LEARNING BASED MEDIA COMPRESSION
Granted Feb 03, 2026 • 2y 5m to grant
Patent 12537951
METHOD FOR IMAGE PROCESSING AND APPARATUS FOR IMPLEMENTING THE SAME
Granted Jan 27, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 55%
With Interview: 65% (+10.1%)
Median Time to Grant: 3y 4m
PTA Risk: Low
Based on 386 resolved cases by this examiner. Grant probability derived from career allow rate.
