Prosecution Insights
Last updated: April 19, 2026
Application No. 18/856,236

FILTERING FOR VIDEO ENCODING AND DECODING

Current Status: Non-Final OA (§103)
Filed: Oct 11, 2024
Examiner: SHAHNAMI, AMIR
Art Unit: 2483
Tech Center: 2400 (Computer Networks)
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 3m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 81%, above average (345 granted / 427 resolved; +22.8% vs Tech Center average)
Interview Lift: +10.4%, a moderate lift, for resolved cases with interview
Typical Timeline: 2y 3m average prosecution; 15 applications currently pending
Career History: 442 total applications across all art units

Statute-Specific Performance

§101: 4.1% (-35.9% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 21.0% (-19.0% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 427 resolved cases

Office Action

§103
DETAILED ACTION

Claims 39-57 are pending for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim under US PRO 63/330035 filed on 4/12/2022.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 39, 50, 54 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al (AHG11: Neural Network based in-loop filter with constrained storage and low complexity), in view of Zhang et al, US 2022/0335269 A1.
Regarding Claim 39, Wang discloses a method for generating an encoded video or a decoded video, the method comprising: obtaining values of reconstructed samples (Wang Fig.2, page 2 – reconstructed image [rec_yuv]); obtaining input information comprising any one or a combination of: i) information about filtered samples, the information comprising a prediction mode indicating that a filtered sample block is an intra-predicted block, an inter-predicted block that is uni-predicted, or an inter-predicted block that is bi-predicted, ii) information about predicted samples, the information indicating a number of motion vectors used for prediction, or iii) information about skipped samples (Wang Fig.2, page 2 – prediction image [pred_yuv]); providing the values of reconstructed samples and the input information to a machine learning, ML, model, thereby generating at least one ML output data (Wang Fig.2, Sec 2.1 page 2 – Proposed NN filter shown in Fig.2 and – see inputs and output_yuv). Even though Wang teaches a NN based in-loop filter, Wang does not explicitly disclose based at least on said at least one ML output data, generating the encoded video or the decoded video. Zhang teaches based at least on said at least one ML output data, generating the encoded video or the decoded video (Zhang [0042] – The difference between the weights of the finetuned neural network and the weights of the neural network before finetuning is referred to as the weight-update. This weight-update needs to be encoded, provided to the decoder side together with the encoded video data, and used at the decoder side for updating the neural network filter. The updated neural network filter is then used as part of the video decoding process). 
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify Wang to generate, based at least on said at least one ML output data, the encoded video or the decoded video, as taught by Zhang. One would be motivated because the encoded/decoded video data is what is generated from the input information and the ML model.

With regard to claim 50, the claim limitations are essentially the same as claim 39 but in a different embodiment. Therefore, the rationale used to reject claim 39 is applied to claim 50. With regard to claim 54, the claim limitations are essentially the same as claim 39 but in a different embodiment. Therefore, the rationale used to reject claim 39 is applied to claim 54.

Claim(s) 45, 46, 56 are rejected under 35 U.S.C. 103 as being unpatentable over Wang and Zhang, in view of Li et al, US 2022/0101095 A1.

Regarding Claim 45, Wang and Zhang teach the method of claim 39, as outlined above. However, Wang does not explicitly disclose that the information about filtered samples comprises values of deblocked samples. Li teaches the information about filtered samples comprises values of deblocked samples (Li [0146] – One or more convolutional neural network (CNN) filter models are trained as an in-loop filter or post-processing method for reducing the distortion incurred during compression. The interaction between the CNN filtering and the non-deep learning-based filtering method denoted by NDLF, controlling of our CNN filtering method, and CNN filter models will be discussed in this invention. In one example, the NDLF may include one or more of deblocking filter, SAO, ALF, CC-ALF, LMCS, bilateral filter, transform-domain filtering method, etc.; [0060] – FIG. 5 shows an example of encoder block diagram of VVC, which contains three in-loop filtering blocks: deblocking filter (DF), sample adaptive offset (SAO) and ALF).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify Wang so that the information about filtered samples comprises values of deblocked samples, as taught by Li. One would be motivated because deblocked samples improve quality by smoothing artifacts at block edges.

Regarding Claim 46, Wang and Zhang teach the method of claim 39, as outlined above. However, Wang does not explicitly disclose that the information about skipped samples indicates whether samples belong to a block that did not go through a process processing residual samples, and that the process comprises inverse quantization and inverse transformation. Li teaches the information about skipped samples indicates whether samples belong to a block that did not go through a process processing residual samples, and the process comprises inverse quantization and inverse transformation (Li [0138]-[0143] – The current CNN-based loop filtering has the following problems: 4. CNN-based loop filters in prior arts are utilized on all of the reconstructed frames, causing the frames coded later to be overly filtered. a. For example, in the Random Access (RA) configuration, blocks in frames within high temporal layers may choose skip mode with a large probability, which means that the reconstruction of the current frame is copied from the previous reconstruction frames. Since the previous frames are filtered using the CNN-based loop filter, applying the CNN-based loop filter on the current frame is equivalent to applying the CNN filter twice on the same content).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify Wang so that the information about skipped samples indicates whether samples belong to a block that did not go through a process processing residual samples, and the process comprises inverse quantization and inverse transformation, as taught by Li. One would be motivated because skipped samples reduce processing power.

Regarding Claim 56, Wang discloses an apparatus comprising: memory; and processing circuitry, wherein the apparatus is configured to: obtain machine learning, ML, input data, wherein the ML input data comprises: i) values of components of reconstructed samples; ii) values of components of reconstructed samples; iii) values of components of predicted samples; iv) values of components of predicted samples; and v) quantization parameters, QP; provide the ML input data to a ML model, thereby generating ML output data; and generate, based at least on the ML output data, the encoded video or the decoded video (see citations from claim 39 of the Wang reference). However, Wang does not explicitly disclose v) first block boundary strength, BBS, information indicating strength of a filtering applied to a boundary of luma components of samples; vi) second BBS information indicating strength of a filtering applied to a boundary of chroma components of samples; and that the values of the reconstructed and predicted samples contain luma and chroma components. Li teaches first block boundary strength, BBS, information indicating strength of a filtering applied to a boundary of luma components of samples, and second BBS information indicating strength of a filtering applied to a boundary of chroma components of samples (Lee pages 1-2, sec 2.1 – proposed CNN uses the boundary strength (BS) as one of the inputs for the output of the module).
Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify Wang to have the use of block boundary strength, BBS, information indicating strength of a filtering applied to a boundary of samples, as taught by Li. One would be motivated to include the BS as a factor into the machine learning so as to provide a more refined output.

Li also teaches the values of the reconstructed and predicted samples contain luma and chroma components (Li p.12, 2nd column, [0215]-[0230] – The input of CNN filters include reconstructed samples and prediction information… containing the samples in luma and chroma). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify Wang so that the values of the reconstructed and predicted samples contain luma and chroma components, as taught by Li. One would be motivated to include data of a color space to assist the ML model.

Claim(s) 51 and 55 are rejected under 35 U.S.C. 103 as being unpatentable over Wang and Zhang, in view of Li et al (EE1-1.6: Combined Test of EE1-1.2 and EE1-1.4).
Regarding Claim 51, Wang discloses a method for generating an encoded video or a decoded video, the method comprising: obtaining machine learning, ML, input data, wherein the ML input data comprises: i) values of reconstructed samples; ii) values of predicted samples; and iv) quantization parameters, QP; providing the ML input data to a ML model, thereby generating ML output data; and generating, based at least on the ML output data, the encoded video or the decoded video, wherein the ML input data does not include partition information indicating how a luma picture is partitioned into coding tree units, CTUs, and how luma CTUs are partitioned into coding units, CUs (see citations from claim 39 of the Wang reference). Wang does not explicitly disclose the use of block boundary strength, BBS, information indicating strength of a filtering applied to a boundary of samples. Li teaches the use of block boundary strength, BBS, information indicating strength of a filtering applied to a boundary of samples (Lee pages 1-2, sec 2.1 – proposed CNN uses the boundary strength (BS) as one of the inputs for the output of the module). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to modify Wang to have the use of block boundary strength, BBS, information indicating strength of a filtering applied to a boundary of samples, as taught by Li. One would be motivated to include the BS as a factor into the machine learning so as to provide a more refined output.

With regard to claim 55, the claim limitations are essentially the same as claim 51 but in a different embodiment. Therefore, the rationale used to reject claim 51 is applied to claim 55.

Allowable Subject Matter

Claim 57 is allowed. The closest prior arts are the Wang, Zhang, and Li references cited above.
Neither Wang, Zhang, nor Li, nor other relevant art or combination of relevant art, teaches an apparatus comprising: memory; and processing circuitry, wherein the apparatus is configured to: obtain values of reconstructed samples; obtain quantization parameters, QPs; provide the reconstructed sample values and the quantization parameters to a machine learning, ML, model, thereby generating ML output data; generate, based at least on the ML output data, first output sample values; provide the first output sample values to a group of two or more attention residual blocks connected in series, thereby generating second output sample values; and generate the encoded video or the decoded video based on the second output sample values, wherein the group of attention residual blocks comprises a first attention residual block disposed at one end of the series of attention residual blocks, and the first attention residual block is configured to receive input data consisting of the first output sample values and the QPs.

Claims 40-44, 47-49, 52, 53 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: The various claimed limitations mentioned in the claims are not taught or suggested by the prior art taken either singly or in combination, with emphasis that it is each claim, taken as a whole, including the interrelationships and interconnections between the various claimed elements, that makes them allowable over the prior art of record.
The various claimed limitations mentioned, including the interrelationships, all of the limitations of the base claim, and the elements with respect to:

- the ML model comprises a first computational module, CM, and a second CM; the first CM comprises a first convolution layer, CL, and a first parametric rectified linear unit, PReLU, coupled to the first CL; the second CM comprises a second CL and a second PReLU coupled to the second CL; the values of the reconstructed samples are provided to the first CM, and the input information is provided to the second CM;

- obtaining values of predicted samples; obtaining block boundary strength information, BBS, indicating strength of filtering applied to a boundary of samples; obtaining quantization parameters, QPs; providing the values of the predicted samples to a CM, thereby generating first CM output data; providing the BBS information to a CM, thereby generating second CM output data; providing the QPs to a CM, thereby generating third CM output data; and combining at least the first CM output data, the second CM output data, and the third CM output data, thereby generating combined CM output data, wherein the encoded video or the decoded video is generated based at least on the combined CM output data;

- concatenating the values of reconstructed samples and the input information, thereby generating concatenated CM input data, wherein the concatenated CM input data are provided to a CM; the ML model comprises a first convolution layer, CL; the first CL is configured to convert the concatenated CM input data into N CM output data; and N is the number of kernel filters included in the first CL;
- the ML model comprises a first CM and a second CM; the first CM comprises a first convolution layer, CL, and a first parametric rectified linear unit, PReLU, coupled to the first CL; the second CM comprises a second CL and a second PReLU coupled to the second CL; the first CM is configured to perform downsampling, and the second CM is configured to perform upsampling.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR SHAHNAMI whose telephone number is (571)270-0707. The examiner can normally be reached Monday - Friday 8:00 am to 4:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Joseph Ustaris, can be reached at 571-272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMIR SHAHNAMI/
Primary Examiner, Art Unit 2483
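The "computational module" structure that recurs in the claim language above (a convolution layer, CL, with N kernel filters feeding a parametric rectified linear unit, PReLU) can be sketched in a few lines. This is a toy illustration under assumed kernels, N, and PReLU slope; none of these values come from the application or the cited art, and a real in-loop filter would learn them end to end.

```python
import numpy as np

def conv2d(x, kernel):
    """Single-channel 2-D cross-correlation with edge padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    h, w = x.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def prelu(x, alpha=0.25):
    """PReLU: identity for positive values, slope alpha for negative ones."""
    return np.where(x > 0, x, alpha * x)

def computational_module(x, kernels, alpha=0.25):
    """CL with N kernel filters -> N output planes, each through a PReLU."""
    return [prelu(conv2d(x, k), alpha) for k in kernels]

# Toy input: a 4x4 plane of reconstructed luma samples.
rec = np.arange(16.0).reshape(4, 4)
kernels = [
    np.full((3, 3), 1 / 9),                                   # mean filter
    np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float),   # Laplacian
]
planes = computational_module(rec, kernels)   # N = 2 output planes
print(len(planes), planes[0].shape)  # 2 (4, 4)
```

The mean and Laplacian kernels stand in for N = 2 learned filters; the claimed concatenation of reconstructed samples with side information (prediction mode, BBS, QPs) would simply widen the input to N channels before the first CL.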

Prosecution Timeline

Oct 11, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604016
CONDITIONAL APPLICATION OF REFINEMENT TECHNIQUE
2y 5m to grant Granted Apr 14, 2026
Patent 12598325
Signaling of Preselection Information in Media Files Based on a Movie-level Track Group Information Box
2y 5m to grant Granted Apr 07, 2026
Patent 12593130
TRACKING CAMERA, TRACKING CAMERA SYSTEMS, AND OPERATION THEREOF
2y 5m to grant Granted Mar 31, 2026
Patent 12592081
ASSISTANCE CONTROLLING APPARATUS, ASSISTANCE CONTROLLING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12593051
COMPUTATIONAL COMPLEXITY INDICATOR
2y 5m to grant Granted Mar 31, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 91% (+10.4%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 427 resolved cases by this examiner. Grant probability derived from career allow rate.
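As a sanity check, the headline figures follow from the career stats quoted above. The whole-percent rounding and the additive, percentage-point treatment of the interview lift are assumptions about how the dashboard computes them:

```python
# Reproducing the projection figures from the examiner's career stats.
granted, resolved = 345, 427
allow_rate_pct = 100 * granted / resolved        # ~80.8% career allow rate
interview_lift_pts = 10.4                        # percentage points

grant_probability = round(allow_rate_pct)                     # 81
with_interview = round(allow_rate_pct + interview_lift_pts)   # 91

print(grant_probability, with_interview)  # 81 91
```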
