Prosecution Insights
Last updated: April 18, 2026
Application No. 18/879,886

ENCODING MODE PREDICTION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

Non-Final OA: §102, §103, §112

Filed: Dec 30, 2024
Examiner: GLOVER, CHRISTOPHER KINGSBURY
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: Sanechips Technology Co. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 2m
With Interview: 85%

Examiner Intelligence

Career Allow Rate: 56% (100 granted / 177 resolved; -1.5% vs TC avg)
Interview Lift: +28.3% among resolved cases with interview (strong)
Typical Timeline: 2y 2m average prosecution
Currently Pending: 15 applications
Career History: 192 total applications across all art units
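The examiner's career figures above are internally consistent; a quick arithmetic check (a sketch only — the dashboard's own rounding rules are an assumption):

```python
# Figures from the examiner's career history above
granted, resolved, pending = 100, 177, 15

total_applications = resolved + pending   # 192, matching "Total Applications"
allow_rate = granted / resolved           # 0.5649..., displayed as 56%

print(total_applications, round(allow_rate * 100))  # 192 56
```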

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 177 resolved cases
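The "vs TC avg" deltas above all point to a single implied Tech Center baseline (examiner rate minus delta); a quick sanity check using the table's figures:

```python
# Allow rate by statute (%) and delta vs the Tech Center average (%),
# taken from the Statute-Specific Performance figures above
stats = {
    "101": (2.2, -37.8),
    "103": (55.3, +15.3),
    "102": (17.7, -22.3),
    "112": (22.1, -17.9),
}

# Implied TC average = examiner rate minus delta; ~40% for every statute
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```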

Office Action

§102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to because the font in many of the drawings is too small to comport with the font size requirements of the MPEP for readability. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

New corrected drawings in compliance with 37 CFR 1.121(d) are required in this application because, as per above, the font in many of the drawings is too small to comport with the readability requirements of the MPEP. Applicant is advised to employ the services of a competent patent draftsperson outside the Office, as the U.S. Patent and Trademark Office no longer prepares new drawings.
The corrected drawings are required in reply to the Office action to avoid abandonment of the application. The requirement for corrected drawings will not be held in abeyance.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.

The abstract of the disclosure is objected to because said abstract is in claim, not narrative form as required. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claims 1-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential elements, such omission amounting to a gap between the elements. See MPEP § 2172.01. The omitted elements are: a corresponding neural network for processing each block size of the multitudinous block sizes in the independent claims. Namely, the claims are indefinite because the independent claims recite wherein the encoding mode prediction network is a network obtained by training a convolutional neural network based on multi-size pixel blocks, which is indefinite because such could be most readily interpreted as a single neural network being fed the multi-size pixel blocks, whereas per claim 2 and paragraph 0067, a corresponding neural network is required to process each block size such that the number of neural networks is directly correlated to the number of block sizes for frame processing. Such is missing from the claims. For purposes of examination, the claims will be so interpreted. Clarification is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 7-9 and 12-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Dumas (US 2023/0224454).

Regarding claim 1, Dumas discloses an encoding mode prediction method, (Abstract, encoding image block prediction method) comprising: acquiring information of at least two frames of images to be processed, (paragraph 0137, group of t frames processed) the at least two frames of images to be processed being at least two continuous frames of images; (Abstract, encoding video sequence of frames) and inputting the information of the at least two frames of images to be processed to an encoding mode prediction network for prediction, (paragraph 0007, video frames processed by neural network to obtain prediction mode for blocks in frames) and determining a target encoding mode, (paragraph 0004, prediction mode determined for frame block predictions) wherein the encoding mode prediction network is a network obtained by training a convolutional neural network based on multi-size pixel blocks, (as per above, feature is indefinite as recited, interpreted as network per block size; paragraphs 0085/0087, multi-size blocks fed to set of neural networks to train same) and the target encoding mode is used for coding and/or decoding of the images to be processed. (paragraph 0094, determined prediction mode used for coding image blocks)

Regarding claims 12-14, claims 12-14 are apparatus, device and computer program product claims, respectively, reciting features similar to claim 1 and are therefore also anticipated by Dumas for reasons similar to claim 1 above. Paragraphs 0190/0191 identically disclose any additional recited processor or memory features.

Regarding claim 7, Dumas discloses training the convolutional neural network according to a plurality of sample images and a plurality of preset pixel block sizes to obtain a plurality of encoding mode prediction networks corresponding to the plurality of preset pixel block sizes. (paragraphs 0085/0087, multi-size blocks fed to corresponding set of neural networks to train same)

Regarding claim 8, Dumas discloses screening the plurality of sample images according to the plurality of preset pixel block sizes to obtain a plurality of to-be-tested sample image sets, (paragraph 0136, training image data compiled) wherein a plurality of to-be-tested sample images in a same to-be-tested sample image set correspond to a same pixel block size, and the to-be-tested sample images in different to-be-tested sample image sets correspond to different pixel block sizes; (paragraph 0085, neural networks trained based on data of corresponding block sizes) and inputting to-be-tested sample images in the plurality of to-be-tested sample image sets to the convolutional neural network for training to obtain the plurality of encoding mode prediction networks corresponding to the plurality of preset pixel block sizes. (paragraph 0087, training image data blocks used to train neural networks)

Regarding claim 9, Dumas discloses respectively processing each of the plurality of to-be-tested sample image sets as follows: inputting the to-be-tested sample images in the to-be-tested sample image set to the convolutional neural network for training to obtain a to-be-verified encoding mode prediction network; (this merely means training the neural network; Figure 29, paragraph 0136, training data used to train neural network to a functionality) and in a case where an output result of the to-be-verified encoding mode prediction network meets a preset condition, obtaining the encoding mode prediction network corresponding to the preset pixel block size. (this merely means that the network is trained to a desired level of functionality; set of networks trained to a desired level of functionality on a per block size basis)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 2, 4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Dumas in view of Dumas II (US 2023/0254507).

Regarding claim 2, Dumas discloses a first frame of image to be processed and a second frame of image to be processed, (paragraph 0137, group of t frames processed, with t at least 2) and ... screening a plurality of encoding mode prediction networks according to the pixel block size corresponding to the first frame of image to be processed to obtain a target encoding mode prediction network, wherein the target encoding mode prediction network is matched with the pixel block size corresponding to the first frame of image to be processed; (paragraph 0085, set of neural networks to process blocks on a per network/per size basis) and inputting the information of the first frame of image to be processed and the information of the second frame of image to be processed to the target encoding mode prediction network for prediction, and determining the target encoding mode. (paragraph 0093, block of frames processed by respective neural network to determine prediction mode for coding)

While Dumas discloses the VVC standard, (paragraph 0003) and VVC implicates CTUs in trees, Dumas fails to identically disclose determining a pixel block size corresponding to the first frame of image to be processed according to acquired Coding Tree Unit (CTU) information of the first frame of image to be processed, wherein the CTU information is configured to represent coding complexity corresponding to the first frame of image to be processed. However, Dumas II clearly teaches determining a pixel block size corresponding to the first frame of image to be processed according to acquired Coding Tree Unit (CTU) information of the first frame of image to be processed, wherein the CTU information is configured to represent coding complexity corresponding to the first frame of image to be processed. (paragraph 0084, frames divided by trees into CTUs with CUs, and the tree division thereof is a coding complexity of the frame processed)

It would have been obvious to one of skill in the art before the effective filing date of the instant application that the trees and CUs of Dumas further include CTUs because Dumas further indicated that it incorporated VVC, which includes CTUs, such that the CTUs of Dumas II would be well known in the art as part of the VVC specification, and one of skill in the art would understand that CTUs may be part of the trees of Dumas before the effective filing date of the instant application as evinced by Dumas II. (paragraph 0084)

Regarding claim 4, Dumas discloses determining the pixel block size corresponding to the first frame of image to be processed according to at least one of a number of Coding Units (CUs), a number of Prediction Units (PUs), and a number of Transform Units (TUs) which correspond to the first frame of image to be processed. (paragraph 0181, frame split into CUs of a block size, as part of processing frame)

Regarding claim 11, Dumas discloses wherein the information of the images to be processed comprises at least one of pixel block information of the images to be processed, a prediction mode corresponding to the pixel block information, a number of prediction modes, and CU division information. (paragraphs 0143/0144, frame divided into luminance/chroma tree of CUs)

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Dumas III (US 2023/0095387) provides for training a single neural network to process multi-size blocks. Chadha (US 2022/0321879) implicates neural network prediction mode selection.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER KINGSBURY GLOVER whose telephone number is (303)297-4401. The examiner can normally be reached Monday-Friday 8-6 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHRISTOPHER KINGSBURY GLOVER/
Examiner, Art Unit 2485

/JAYANTI K PATEL/
Supervisory Patent Examiner, Art Unit 2485

April 5, 2026
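The §112 rejection turns on a single point of claim interpretation: one prediction network per preset block size (the examiner's reading, from claim 2 and paragraph 0067) versus a single network fed multi-size blocks. A minimal sketch of the per-size reading — all names and the dict-based "networks" are hypothetical stand-ins, not the applicant's or Dumas's implementation:

```python
# Hypothetical sketch: one prediction network per preset pixel block
# size, selected ("screened") by the frame's block size. Plain dicts
# stand in for trained CNNs; block sizes are illustrative.
BLOCK_SIZES = (8, 16, 32, 64)

def train_network(block_size, samples):
    # placeholder for training one CNN on same-size pixel blocks
    return {"block_size": block_size, "trained_on": len(samples)}

def build_prediction_networks(samples_by_size):
    # one network per block size, so the number of networks tracks the
    # number of block sizes -- the interpretation adopted for examination
    return {bs: train_network(bs, samples_by_size.get(bs, []))
            for bs in BLOCK_SIZES}

def select_network(networks, block_size):
    # screen the trained networks by the frame's pixel block size
    return networks[block_size]

nets = build_prediction_networks({16: ["block_a", "block_b"]})
target = select_network(nets, 16)
```

Under this reading, amending the independent claims to recite the per-size selection step explicitly (as claim 2 already does) would resolve the asserted indefiniteness.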

Prosecution Timeline

Dec 30, 2024
Application Filed
Apr 01, 2026
Examiner Interview (Telephonic)
Apr 04, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598316
REUSE OF BLOCK TREE PATTERN IN VIDEO COMPRESSION
2y 5m to grant Granted Apr 07, 2026
Patent 12598336
A/V TRANSMISSION DEVICE AND A/V RECEPTION DEVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12586453
System and Method for Monitoring Life Signs of a Person
2y 5m to grant Granted Mar 24, 2026
Patent 12556672
VIDEO PROCESSING APPARATUS FOR DESIGNATING AN OBJECT ON A PREDETERMINED VIDEO AND CONTROL METHOD OF THE SAME, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 17, 2026
Patent 12556725
ADAPTIVE RESOLUTION CODING FOR VIDEO CODING
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 56%
With Interview (+28.3%): 85%
Median Time to Grant: 2y 2m
PTA Risk: Low
Based on 177 resolved cases by this examiner. Grant probability derived from career allow rate.
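The "with interview" figure is consistent with simply adding the interview lift to the base grant probability — an assumed additive model; the tool's exact formula is not stated:

```python
# Assumed additive model for the interview-adjusted grant probability
base_probability = 0.56   # career allow rate (100 granted / 177 resolved)
interview_lift = 0.283    # +28.3 percentage points with an interview

with_interview = min(base_probability + interview_lift, 1.0)
print(f"{with_interview:.1%}")  # 84.3%, close to the displayed 85%
```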
