Prosecution Insights
Last updated: April 19, 2026
Application No. 18/875,595

NEURAL NETWORK CODEC WITH HYBRID ENTROPY MODEL AND FLEXIBLE QUANTIZATION

Status: Non-Final OA (§102)
Filed: Dec 16, 2024
Examiner: PONTIUS, JAMES M
Art Unit: 2488
Tech Center: 2400 (Computer Networks)
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 79% (above average; 404 granted / 514 resolved; +20.6% vs Tech Center average)
Interview Lift: +9.8% (moderate lift among resolved cases with interview)
Avg Prosecution: 2y 11m (typical timeline; 17 applications currently pending)
Total Applications: 531 (career history, across all art units)

Statute-Specific Performance

§101: 9.1% (-30.9% vs TC avg)
§103: 32.7% (-7.3% vs TC avg)
§102: 24.6% (-15.4% vs TC avg)
§112: 25.9% (-14.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 514 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-17, 19-20 and 64 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Li et al., "Deep Contextual Video Compression," arXiv:2109.15047v1, 19 pp. (September 2021). It is noted that a copy of this reference was provided by Applicant on 12/16/2024.

Regarding claim 1, Li discloses: In a computer system that implements a neural video encoder (abs.; sections 1, 3; figs. 1, 2), a method comprising: receiving a current video frame ("xt is the current frame": section 3, figs. 1, 2); encoding the current video frame to produce encoded data, wherein the encoding the current video frame comprises: determining a current latent representation for the current video frame (section 3.1: "Through contextual encoder, xt is encoded into latent codes yt"); and encoding the current latent representation using an entropy model network (sections 1, 3.2; figs. 1, 2, 4) that includes one or more convolutional layers ("Subsequently, many works boost the performance by more advanced entropy models and network structures ... For the network structure, some RNN (recurrent neural network)-based methods [21-23] were proposed in the early development stage, but most of recent methods are based on CNN (convolutional neural network)": section 2), wherein the encoding the current latent representation using the entropy model network comprises: estimating statistical characteristics of a quantized version of the current latent representation ("pyt(yt) and qyt(yt) are estimated and true probability mass functions of quantized latent codes yt, respectively ... So our target is designing an entropy model which can accurately estimate the probability distribution of latent codes pyt(yt)": section 3.2) based at least in part on a previous latent representation for a previous video frame ("we propose using the context x̄t to generate the temporal prior": section 3.2, fig. 4); and entropy coding the quantized version of the current latent representation based at least in part on the estimated statistical characteristics (section 3.2); and outputting the encoded data as part of a bitstream (section 3).

Regarding claim 2, Li discloses: The method of claim 1, further comprising: quantizing the current latent representation, thereby producing the quantized version of the current latent representation (section 3, fig. 4).

Regarding claim 3, Li discloses: The method of claim 2, wherein the encoding the current latent representation using the entropy model network further comprises: determining at least some quantization step ("QS") values for the current latent representation based at least in part on the previous latent representation, wherein the quantizing uses the at least some QS values (sections 3.1, 3.2).

Regarding claim 4, Li discloses: The method of claim 1, wherein the current latent representation is a current latent sample value ("SV") representation for the current video frame, wherein the previous latent representation is a previous latent SV representation for the previous video frame, and wherein the determining the current latent representation comprises determining the current latent SV representation using a contextual encoder that includes one or more convolutional layers (section 3, fig. 1).

Regarding claim 5, Li discloses: The method of claim 1, wherein the current latent representation is a current latent motion vector ("MV") representation for the current video frame, wherein the previous latent representation is a previous latent MV representation for the previous video frame, and wherein the determining the current latent representation comprises: using motion estimation to determine MV values for the current video frame relative to the previous video frame; and determining the current latent MV representation from the MV values using a MV contextual encoder that includes one or more convolutional layers (section 3, fig. 1).

Regarding claims 6-17, 19-20 and 64, Li discloses the method and system limitations of these claims as shown above with respect to the mapped portions of Li, including the codec of Li in fig. 2 and section 3.1.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Mohan et al. (US 2025/0056036) teaches temporal attention-based neural networks; Liu et al. (US 2021/0152831) teaches conditional entropy encoding; Mandt (US 2020/0090069) teaches machine learning based video compression.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M PONTIUS whose telephone number is (571) 270-7687. The examiner can normally be reached M-Th 8-4.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sath V Perungavoor, can be reached at (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAMES M PONTIUS/
Primary Examiner, Art Unit 2488
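The pipeline the rejection maps onto claims 1-3 (quantize the latent with per-element quantization-step values, estimate each quantized symbol's probability from a temporal prior, entropy-code at roughly -log2(p) bits per symbol) can be sketched in toy form. Everything below is illustrative only: the Gaussian prior, the use of the previous latent directly as the predicted mean (a stand-in for Li's temporal-prior network), and all function names are assumptions, not the implementation of Li or of the application's claims.

```python
# Toy sketch of the claimed pipeline; NOT the Li et al. network.
import math

def gaussian_cdf(x, mu, sigma):
    # Cumulative distribution of a Gaussian N(mu, sigma^2).
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def quantize(latent, qs):
    # Round each latent value to the nearest multiple of its per-element
    # quantization step ("QS") value (cf. claim 3).
    return [round(y / s) * s for y, s in zip(latent, qs)]

def estimated_bits(quantized, qs, prior_mu, sigma=1.0):
    # Probability mass of each quantized symbol = Gaussian mass of its
    # quantization bin; ideal entropy-coded length is -log2(p) bits.
    bits = 0.0
    for q, s, mu in zip(quantized, qs, prior_mu):
        p = gaussian_cdf(q + s / 2, mu, sigma) - gaussian_cdf(q - s / 2, mu, sigma)
        bits += -math.log2(max(p, 1e-12))
    return bits

current_latent = [0.9, -2.3, 0.1, 1.7]   # hypothetical latent codes yt
previous_latent = [1.0, -2.0, 0.0, 2.0]  # stand-in temporal prior mean
qs_values = [0.5, 0.5, 1.0, 1.0]         # per-element QS values

q = quantize(current_latent, qs_values)
print(q)  # [1.0, -2.5, 0.0, 2.0]
print(estimated_bits(q, qs_values, previous_latent))
```

The design point the claim language turns on: because the prior mean comes from the previous frame's latent, symbols that the temporal prior predicts well fall in high-probability bins and cost few bits, which is the rate saving a conditional entropy model buys over a frame-independent one.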

Prosecution Timeline

Dec 16, 2024
Application Filed
Jan 10, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602934
VEHICULAR DRIVING ASSIST SYSTEM WITH TRAFFIC LIGHT RECOGNITION
2y 5m to grant Granted Apr 14, 2026
Patent 12587726
ELECTRIC SHAVER WITH IMAGING CAPABILITY
2y 5m to grant Granted Mar 24, 2026
Patent 12583389
SYSTEM FOR PROVIDING THREE-DIMENSIONAL IMAGE OF VEHICLE AND VEHICLE INCLUDING THE SAME
2y 5m to grant Granted Mar 24, 2026
Patent 12583400
SYSTEM AND METHOD FOR OPERATING A VEHICLE ACCESS POINT
2y 5m to grant Granted Mar 24, 2026
Patent 12587616
IMAGE CAPTURING SYSTEM AND VEHICLE
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 88% (+9.8%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 514 resolved cases by this examiner. Grant probability derived from career allow rate.
