Prosecution Insights
Last updated: April 19, 2026
Application No. 18/866,158

A METHOD OR AN APPARATUS IMPLEMENTING A NEURAL NETWORK-BASED PROCESSING AT LOW COMPLEXITY

Status: Non-Final OA, §103
Filed: Nov 15, 2024
Examiner: SUH, JOSEPH JINWOO
Art Unit: 2485
Tech Center: 2400 — Computer Networks
Assignee: InterDigital CE Patent Holdings SAS
OA Round: 1 (Non-Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 86%

Examiner Intelligence

Career Allow Rate: 78% (above average), 399 granted / 514 resolved (+19.6% vs TC avg)
Interview Lift: +8.1% (moderate), measured over resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 17 applications currently pending
Career History: 531 total applications across all art units

Statute-Specific Performance

§101: 5.2% (-34.8% vs TC avg)
§102: 13.0% (-27.0% vs TC avg)
§103: 59.6% (+19.6% vs TC avg)
§112: 14.7% (-25.3% vs TC avg)

Comparisons are against a Tech Center average estimate; based on career data from 514 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

This Office Action responds to application 18/866,158 filed on 11/15/24. Claims 1, 4-9, 12, and 20-33 are pending.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

1. Claims 1, 12, 26, and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Brand et al., WO 2023/091040 A1 (hereinafter Brand) in view of Pandey et al., US 2022/0292334 A1 (hereinafter Pandey).

As for claim 1, Brand discloses a computer-implemented method, comprising: obtaining a tensor (p. 14, l. 14-p. 15, l. 6, e.g., input vector) of input data representative of coded data samples (p. 13, ll. 16-19, e.g., picture and/or video data) of an image block (p. 13, ll. 16-19, e.g., block); applying a neural network-based processing (p. 13, ll. 27-34, e.g., neural network) to the tensor of input data to generate a tensor of output data (p. 14, l. 14-p. 15, l. 6, e.g., output vector); and decoding (p. 13, ll. 16-19, e.g., decoding) the image block based on the tensor of output data; wherein the neural network-based processing comprises a plurality of processing layers (p. 14, l. 14-p. 15, l. 6, e.g., layers), wherein each of the plurality of processing layers generates an intermediate tensor (p. 14, l. 14-p. 15, l. 6, e.g., output vector), wherein at least one of the plurality of processing layers is represented as a tensor product (p. 14, l. 14-p. 15, l. 6, e.g., product of W and input vector in line 23) between the tensor of input data (p. 14, l. 14-p. 15, l. 6, e.g., input vector) and a weight tensor (p. 14, l. 14-p. 15, l. 6, e.g., weight), and wherein at least one of the plurality of processing layers is represented as an addition of a bias tensor (p. 14, l. 14-p. 15, l. 6, e.g., bias vector).

Brand does not explicitly disclose, but Pandey teaches, wherein a scaling factor of a quantized representation of the tensor of input data, a scaling factor of a quantized representation of the weight tensor, a scaling factor of a quantized representation of the bias tensor, a scaling factor of a quantized representation of an intermediate tensor, and a scaling factor of a quantized representation of the tensor of output data are powers of two ([0062], e.g., scaling factor and power-of-two), a quantized representation of a tensor being obtained by a shift according to a power of two of a respective scaling factor ([0062], e.g., bit shift).
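For context only: the power-of-two scaling described above lets quantization be implemented as a bit shift instead of a multiply or divide. The following minimal Python sketch illustrates that general idea under stated assumptions; it is not taken from Pandey or any other cited reference, and all names are hypothetical.

```python
def quantize_pow2(x: float, shift: int) -> int:
    """Quantize x with scaling factor 2**shift.

    With a power-of-two scaling factor, scaling reduces to a left
    bit shift on integer hardware; modeled here in plain Python.
    """
    return round(x * (1 << shift))  # x * 2**shift


def dequantize_pow2(q: int, shift: int) -> float:
    """Recover an approximation of x: q / 2**shift (a right shift)."""
    return q / (1 << shift)


# Example: scaling factor 2**8 = 256
q = quantize_pow2(0.15625, 8)   # 0.15625 * 256 = 40 exactly
assert dequantize_pow2(q, 8) == 0.15625
```

The point of the scheme is that no general multiplier or divider is needed on the datapath, only shifters, which is the complexity reduction the claim language is directed to.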
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Brand and Pandey before him/her, to modify the generalized difference coder for residual coding in video compression of Brand with the teaching of efficient memory use optimization for neural network deployment and execution of Pandey, with a motivation to provide efficient and quick processing by using the approximation and bit shifting.

As for claim 12, the claim recites a computer-implemented method, comprising encoding an image block, corresponding to the method of claim 1, and is similarly analyzed. As for claim 26, the claim recites an apparatus comprising a memory and one or more processors corresponding to the method of claim 1, and is similarly analyzed. As for claim 30, the claim recites an apparatus comprising a memory and one or more processors corresponding to the method of claim 1, and is similarly analyzed.

2. Claims 4 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Brand in view of Pandey, and further in view of Pfaff et al., US 12413725 B2 (hereinafter Pfaff).

As for claim 4, most of the limitations of this claim have been noted in the rejection of claim 1. Brand as modified by Pandey does not explicitly teach, but Pfaff teaches, wherein an offset parameter of a quantized representation of the tensor of input data, an offset parameter of a quantized representation of the weight tensor, an offset parameter of a quantized representation of the bias tensor, an offset parameter of a quantized representation of an intermediate tensor, and an offset parameter of a quantized representation of the tensor of output data are equal to zero (col. 36, l. 64-col. 37, l. 21, e.g., offset vector that might be zero).
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Brand, Pandey, and Pfaff before him/her, to modify the generalized difference coder for residual coding in video compression of Brand with the teaching of coding using matrix based intra-prediction and secondary transforms of Pfaff, with a motivation to maintain the range of the data, when an offset of the data is not needed, by using the zero-valued offset data.

As for claim 20, the claim recites a computer-implemented method, comprising encoding an image block, corresponding to the method of claim 4, and is similarly analyzed.

3. Claims 5, 21, 27, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Brand in view of Pandey, and further in view of Kim et al., US 2023/0004349 A1 (hereinafter Kim).

As for claim 5, most of the limitations of this claim have been noted in the rejection of claim 1. Brand as modified by Pandey does not explicitly teach, but Kim teaches, the at least one of the plurality of processing layers representing the addition of the bias tensor is fused with the at least one of the plurality of processing layers representing the tensor product ([0058], e.g., fused multiplication and addition).

Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Brand, Pandey, and Kim before him/her, to modify the generalized difference coder for residual coding in video compression of Brand with the teaching of an operating method of a floating point operation circuit, and an integrated circuit including the floating point operation circuit, of Kim, with a motivation to provide quick and accurate processing performance by using the fused calculations.
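For illustration only, the fusion of the tensor product and the bias addition discussed above can be sketched as a single integer accumulation pass, with the power-of-two rescale applied as a right shift. This is a hypothetical sketch under the claim's assumptions (all scaling factors are powers of two), not code from Brand, Pandey, or Kim; every name here is invented for the example.

```python
def fused_linear_int(x_q, w_q, b_q, shift_out):
    """One integer layer: tensor product and bias addition fused
    into a single accumulation, then rescaled by a right shift.

    x_q: quantized input vector; w_q: quantized weight matrix
    (one row per output unit); b_q: quantized bias vector,
    assumed already aligned to the accumulator's scale;
    shift_out: power-of-two rescale of the accumulator.
    """
    out = []
    for row, b in zip(w_q, b_q):
        acc = b  # bias folded into the accumulator (the fused add)
        for xi, wi in zip(x_q, row):
            acc += xi * wi  # multiply-accumulate
        out.append(acc >> shift_out)  # power-of-two rescale by shift
    return out


# 2-input, 2-output layer with output scale 2**4
y = fused_linear_int([3, 5], [[2, 4], [1, 1]], [6, 0], 4)  # → [2, 0]
```

Fusing the bias into the accumulator avoids a separate pass over the intermediate tensor, which is the memory-access saving the fused-calculation motivation points at.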
As for claim 21, the claim recites a computer-implemented method, comprising encoding an image block, corresponding to the method of claim 5, and is similarly analyzed. As for claim 27, the claim recites an apparatus comprising a memory and one or more processors corresponding to the method of claim 5, and is similarly analyzed. As for claim 31, the claim recites an apparatus comprising a memory and one or more processors corresponding to the method of claim 5, and is similarly analyzed.

4. Claims 8, 24, 29, and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Brand in view of Pandey, Kim, and further in view of Ren et al., US 2022/0215832 A1 (hereinafter Ren).

As for claim 8, most of the limitations of this claim have been noted in the rejection of claim 5. Brand as modified by Pandey and Kim does not explicitly teach, but Ren teaches, at least one of the plurality of processing layers comprises an activation layer that is fused with the at least one of the plurality of processing layers representing the fusion of the tensor product and the bias tensor addition ([0114], e.g., sublayer and activation layer are fused).

Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Brand, Pandey, Kim, and Ren before him/her, to modify the generalized difference coder for residual coding in video compression of Brand with the teaching of systems and methods for automatic speech recognition based on graphics processing units of Ren, with a motivation to implement the processing efficiently in various aspects, such as reduced memory access and processing instructions, by using the fusion.

As for claim 24, the claim recites a computer-implemented method, comprising encoding an image block, corresponding to the method of claim 8, and is similarly analyzed.
As for claim 29, the claim recites an apparatus comprising a memory and one or more processors corresponding to the method of claim 8, and is similarly analyzed. As for claim 33, the claim recites an apparatus comprising a memory and one or more processors corresponding to the method of claim 8, and is similarly analyzed.

Allowable Subject Matter

Claims 6-7, 9, 22-23, 25, 28, and 32 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Citation of Pertinent Prior Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

1. US 2003/0108099 discloses a picture encoding method and apparatus, a picture decoding method and apparatus, and a furnishing medium.
2. US 2005/0053294 discloses techniques and tools for progressive and interlaced video coding and decoding.
3. US 2006/0126962 discloses methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH SUH, whose telephone number is 571-270-7484. The examiner can normally be reached Monday - Thursday, 7:30 AM - 6:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jay Patel, can be reached at 571-272-2988. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSEPH SUH/
Primary Examiner, Art Unit 2485

Prosecution Timeline

Nov 15, 2024: Application Filed
Jan 27, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604035: METHOD AND APPARATUS FOR MULTI VIEW VIDEO ENCODING AND DECODING, AND METHOD FOR TRANSMITTING BITSTREAM GENERATED BY THE MULTI VIEW VIDEO ENCODING METHOD (2y 5m to grant; granted Apr 14, 2026)
Patent 12603991: VIDEO ENCODING AND DECODING USING INTRA BLOCK COPY (2y 5m to grant; granted Apr 14, 2026)
Patent 12603992: VIDEO ENCODING AND DECODING USING INTRA BLOCK COPY (2y 5m to grant; granted Apr 14, 2026)
Patent 12603993: VIDEO ENCODING AND DECODING USING INTRA BLOCK COPY (2y 5m to grant; granted Apr 14, 2026)
Patent 12603994: VIDEO ENCODING AND DECODING USING INTRA BLOCK COPY (2y 5m to grant; granted Apr 14, 2026)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 86% (+8.1%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 514 resolved cases by this examiner. Grant probability derived from career allow rate.
