Prosecution Insights
Last updated: April 19, 2026
Application No. 18/485,483

BANDWIDTH PRESERVATION THROUGH SELECTIVE APPLICATION OF ERROR MITIGATION TECHNIQUES FOR VIDEO FRAME REGIONS

Final Rejection — §103
Filed: Oct 12, 2023
Examiner: DOSHI, AKSHAY
Art Unit: 2422
Tech Center: 2400 — Computer Networks
Assignee: Nvidia Corporation
OA Round: 2 (Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (171 granted / 268 resolved; +5.8% vs TC avg)
Interview Lift: +39.2% in resolved cases with interview (strong)
Typical Timeline: 3y 3m average prosecution; 30 applications currently pending
Career History: 298 total applications across all art units

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 53.8% (+13.8% vs TC avg)
§102: 16.9% (-23.1% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
TC averages are estimates • Based on career data from 268 resolved cases
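The figures above can be checked against the raw counts on this page. A minimal Python sketch (the counts and deltas are those shown above; the function and variable names are our own):

```python
# Recompute the headline stats from the raw counts shown on this page.
# The counts (171 granted / 268 resolved) and the per-statute deltas
# are taken from the page; everything else is illustrative.

def allow_rate(granted: int, resolved: int) -> float:
    """Allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(171, 268)
print(f"Career allow rate: {career:.1f}%")  # 63.8%, shown rounded to 64% above

# Back out the Tech Center average implied by each statute-specific delta
# (examiner rate minus delta). Every statute points at the same ~40% average.
stats = {"101": (4.3, -35.7), "103": (53.8, 13.8),
         "102": (16.9, -23.1), "112": (13.8, -26.2)}
for statute, (rate, delta) in stats.items():
    print(f"§{statute}: implied TC average = {rate - delta:.1f}%")
```

Note that all four statute-specific deltas back out to the same ~40% Tech Center average, consistent with a single baseline being used for the comparison chart.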

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Status

Claims 1 and 17 are amended. No claims are canceled and no claims are newly added. Claims 12-16 are withdrawn. Claims 1-11 and 17-20 are presented for examination.

Response to Arguments

Applicant's arguments filed on 2/17/2026 have been fully considered but are not persuasive, for the reasons set forth below.

Applicant states (Remarks, pages 7-8) that "During the interview, the Examiner agreed that the proposed amendments appeared to overcome the current rejections under 35 U.S.C. § 103. In particular, the cited references fail to teach or suggest at least 'identify[ing]...a subset of network packets corresponding to a region of a video frame ... [that is] selected based at least on state data of an application that generated the video frame,' as recited in amended independent claim 1 and similarly in amended independent claim 17."

The examiner respectfully points out that, per the summary of the interview conducted on 11 February 2026, there was a general discussion of the proposed amendment in relation to the previously cited art; however, no agreement was reached. As set forth in the rejection below, Gu does teach the amended feature of "the region of the video frame selected based at least on state data of an application that generated the video frame": Gu, col. 1, line 24, discloses an application such as video conferencing; col. 2, lines 60-62, business meetings via video conferencing (i.e., an application running video conferencing on different user terminals during a video conference); and col. 10, line 65 - col. 11, line 1, the central section is fixed and does not change from frame to frame, but in a different example the central section may adjust over time according to user input, i.e., user input during the video conference changes the central section. Therefore, the region of the video frame is selected based on state data (i.e., user input) of the video-conferencing application that generated the video frame.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1, 3-7, 9, 10, 17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Paniconi (US 8856624) in view of Gu et al. (US 9185429).

Regarding claim 1, Paniconi discloses a processor comprising one or more circuits to: identify, from a plurality of network packets corresponding to an encoded video stream, a subset of network packets corresponding to a video frame of the encoded video stream (col. 3, lines 7-10: an encoder 12 encodes video source 10 to create an encoded video stream 14, which is packetized by a packetization process 16 to form encoded packets 18; col. 4, lines 27-29: source packets 92 are divided into two groups, important packets 1, 2, and 3 (collectively, packets 93) and less important packets 4, 5, 6, 7, and 8 (collectively, packets 95) — the grouping of important packets of encoded video stream 14 identifies a subset of network packets corresponding to the encoded video stream); generate at least one error correction packet for the subset of network packets that encode the video frame (col. 4, lines 27-29, fig. 4: the FEC-UEP scheme is implemented by selecting pre-calculated FEC-EP packets 96); and transmit, to a receiver client device, the plurality of network packets and the at least one error correction packet (col. 5, lines 4-10: the scheme uses pre-calculated FEC-EP packets to accomplish FEC-UEP by considering a subset of the source packets to be transmitted, in this case packets 1-3, and selecting the appropriate FEC-EP packets 9-11 to correct these packets; the scheme then considers the entire set and packets 12-13 to correct all eight packets 92, i.e., transmitting the subset of source packets 1-3 and the appropriate FEC packets to correct them).

Paniconi discloses identifying important packets associated with a frame to generate the error correction (FEC) packets; however, Paniconi does not disclose a subset of network packets corresponding to a region of a video frame, the region of the video frame selected based at least on state data of an application that generated the video frame.

Gu discloses a subset of network packets corresponding to a region of a video frame (col. 7, line 10 - col. 15, lines 9-12: an encoder, such as encoder 70, applies a first level of error protection, in the form of forward error correction, to data associated with a central section of a frame; col. 15, lines 33-37: the encoder has a total of four packets, namely packet-1, packet-2, the FEC packet, and packet-3; when the receiving station receives any two of packet-1, packet-2, and the FEC packet, the center section of the image can be reconstructed with error protection, i.e., FEC is applied to the network packets from the central region of the video frame), the region of the video frame selected based at least on state data of an application that generated the video frame (col. 1, line 24: an application such as video conferencing; col. 2, lines 60-62: business meetings via video conferencing, i.e., an application running video conferencing on different user terminals during a video conference; col. 10, line 65 - col. 11, line 1: the central section is fixed and does not change from frame to frame, but in a different example the central section may adjust over time according to user input, i.e., user input during the video conference changes the central section; therefore, the region of the video frame is selected based on state data (i.e., user input) of the video-conferencing application that generated the video frame).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Paniconi with Gu's teaching of a subset of network packets corresponding to a region of a video frame, the region selected based at least on state data of an application that generated the video frame, to provide dynamic error correction for the most important part of the image, such as a center portion having more interesting image data, with the state of the application that generated the video frame (e.g., user input during a video conference at the source end) adjusting the central portion, as disclosed in Gu, col. 7, lines 6-12.

Regarding claim 3, Paniconi in view of Gu further discloses the processor of claim 1, wherein the encoded video stream is formatted in compliance with a real-time transport protocol (RTP) (Paniconi, col. 9, lines 57-59: a video stream to be encoded; in an exemplary implementation, the real-time transport protocol (RTP) is used).

Regarding claim 4, Paniconi further discloses the processor of claim 1, wherein the one or more circuits are to generate the encoded video stream (col. 3, lines 7-9: an encoder 12 encodes video source 10 to create an encoded video stream).
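The FEC mechanism underlying these mappings can be illustrated with a toy single-parity sketch: an XOR parity packet generated over a subset of source packets lets the receiver rebuild any one lost packet in that subset. The Python below is a generic illustration (packet contents and names are ours), not Paniconi's FEC-UEP scheme or Gu's encoder:

```python
# Minimal sketch of parity-based FEC over a packet subset: one XOR
# parity packet protects the "important" packets, and any single lost
# packet in the subset can be rebuilt from the survivors plus the
# parity. Toy illustration only, not the cited references' schemes.

def xor_parity(packets: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

# "Important" subset, e.g. packets carrying the central region of a frame.
subset = [b"pkt1", b"pkt2", b"pkt3"]
fec = xor_parity(subset)

# Packet 2 is lost in transit; XORing the surviving packets with the
# FEC packet reconstructs it.
recovered = xor_parity([subset[0], subset[2], fec])
assert recovered == b"pkt2"
```

This matches the property cited from Gu at col. 15, lines 33-37 (any two of packet-1, packet-2, and the FEC packet suffice to reconstruct the protected section), though Gu's actual code construction is not specified here.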
Regarding claim 5, Paniconi further discloses the processor of claim 4, wherein the one or more circuits are to generate the plurality of network packets, at least one network packet of the plurality comprising a portion of the encoded video stream corresponding to the region of the video frame (col. 7, line 10 - col. 15, lines 9-12: an encoder, such as encoder 70, applies a first level of error protection, in the form of forward error correction, to data associated with a central section of a frame; col. 15, lines 33-37: the encoder has a total of four packets, namely packet-1, packet-2, the FEC packet, and packet-3; when the receiving station receives any two of packet-1, packet-2, and the FEC packet, the center section of the image can be reconstructed with error protection, i.e., encoded packet-1 and packet-2 correspond to the central region of the video frame).

Regarding claim 6, Paniconi in view of Gu discloses the processor of claim 1, wherein the one or more circuits are to identify the region of the video frame based at least on a configuration associated with the encoded video stream (Gu, col. 8, lines 49-54: less emphasis may be placed on the order in which blocks are selected for encoding from the edges of frame 56, compared to the order in which blocks are selected from central section 134, when it is assumed that less important image information resides in the edges; according to one implementation, in a counter-clockwise spiral scan for a central section, the left strip is scanned and coded first, i.e., the configuration is a spiral scan that identifies the center region of the video frame associated with the video stream).

Regarding claim 7, Paniconi in view of Gu discloses the processor of claim 1, wherein the region of the video frame comprises one or more slices or one or more tiles of the video frame (Gu, col. 11, lines 38-44: a slice-based approach where all slices are treated equally for motion search, motion vectors, and other information; due to the spiral scan, the center section of macroblocks is scanned and coded before the outer section(s) of macroblocks).

Regarding claim 9, Paniconi further discloses the processor of claim 1, wherein the at least one error correction packet comprises forward error correction (FEC) data generated based at least on the subset of network packets (col. 4, lines 24-29: an unequal forward error correction code formed from two equal error correction codes according to an embodiment of the invention; in this example, source packets 92 are divided into two groups, important packets 1, 2, and 3 (collectively, packets 93) and less important packets 4, 5, 6, 7, and 8 (collectively, packets 95)).

Regarding claim 10, Paniconi further discloses the processor of claim 1, wherein the encoded video stream is encoded according to at least one codec standard from a list of codec standards comprising h.264 (col. 9, lines 38-41: encode the video stream including formats such as VPx, H.264), h.265, h.266, VP8, VP9, or AV1.

Regarding claims 17, 19, and 20, Paniconi in view of Gu meets the claim limitations as set forth in claims 1, 3, and 6.

Claims 2 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Paniconi (US 8856624) in view of Gu et al. (US 9185429), in further view of De la Oliva et al. (US 20210211914).

Regarding claim 2, Paniconi in view of Gu does not disclose the processor of claim 1, wherein the plurality of network packets are transmitted via a user datagram protocol (UDP). De la Oliva discloses wherein the plurality of network packets are transmitted via a user datagram protocol (UDP) (par. 0155: the adaptive FEC control unit 1108 also communicates with and controls a UDP/IP communication unit, employed in accordance with or under HTTP 1112, in order to transmit video source packets and parity packets).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Paniconi in view of Gu with De la Oliva's teaching that the plurality of network packets are transmitted via a user datagram protocol (UDP), to provide the speed and efficiency of UDP's connectionless design.

Regarding claim 18, Paniconi in view of Gu, in further view of De la Oliva, meets the claim limitations as set forth in claim 2.

Claims 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Paniconi (US 8856624) in view of Gu et al. (US 9185429), in further view of Kim et al. (US 20230076650).

Regarding claim 8, Paniconi in view of Gu does not disclose the processor of claim 1, wherein the region is a selected region, and the one or more circuits are to: allocate a first percentage of bandwidth for the encoded video stream to one or more first error correction packets for the subset of network packets that carry the selected region of the video frame; and allocate a second percentage of bandwidth for the encoded video stream to one or more second error correction packets for a packet sequence that carries regions of the video frame other than the selected region.

Kim discloses wherein the region is a selected region (par. 0081: an area viewable by the user through the viewport area of the electronic device 1000; in a case in which the orientation of the electronic device 1000 changes according to the movement of the user's head over time, the area of the image viewable by the user changes, i.e., the user selects the region of the content frame to be viewed based on head movement), and the one or more circuits are to allocate a first percentage of bandwidth for the encoded video stream to one or more first error correction packets for the subset of network packets that carry the selected region of the video frame (par. 0083: the edge data network 2000 may encode the user FoV image 210 to generate a first user FoV frame, where FoV is the user's field of view, i.e., the area of the frame the user has selected to view; par. 0088: the edge data network may encode the user FoV image 210 using a relatively high image-quality parameter (e.g., a high bit rate) and a high frame rate to generate a user FoV frame, and generate user FoV frame packets 220 including information about the user FoV frame; par. 0106: to appropriately allocate resources for the user FoV image and the extra FoV image according to an available bandwidth and a required latency given by a VR application, the edge data network 2000 may identify a transmission parameter, such as an FEC code rate or an FEC block size, and an encoding parameter related to the image quality of each area (e.g., a frame data size) — i.e., a higher amount of the total bandwidth is allocated to the FoV image portion and its associated FEC because they are sent at higher resolution and quality, and a higher amount of the total bandwidth reads on a higher percentage of the total bandwidth); and allocate a second percentage of bandwidth for the encoded video stream to one or more second error correction packets for a packet sequence that carries regions of the video frame other than the selected region (par. 0083: the edge data network 2000 may encode the extra FoV image 215 to generate a first extra FoV frame, where the extra FoV is the portion of the video frame outside the user's field of view, i.e., not selected for viewing; par. 0089: the edge data network 2000 may encode the extra FoV image 215 using a relatively low image-quality parameter (e.g., a low bit rate) and a low frame rate to generate an extra FoV frame, and generate extra FoV image packets 245 including information about the extra FoV frame; par. 0106, as above — i.e., a lower amount of the total bandwidth is allocated to the extra FoV image portion and its associated FEC because they are sent at lower resolution and quality, and a lower amount of the total bandwidth reads on a lower percentage of the total bandwidth).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Paniconi in view of Gu with Kim's teaching of a selected region and of first and second percentages of bandwidth allocated to error correction packets for the selected region and for the other regions of the video frame, respectively, to adapt the bandwidth to provide higher resolution quality for the important portion of the image compared to the rest of the image, as disclosed in Kim, par. 0088-0089.
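The bandwidth split recited in claim 8 can be sketched generically: a fixed FEC budget divided by percentage between the selected region (e.g., the user's field of view) and the remaining regions. The function name and the numbers below are illustrative assumptions, not values from Kim:

```python
# Rough sketch of the claim 8 allocation: a larger share of the FEC
# bandwidth budget goes to packets carrying the selected region, and
# the remainder goes to the other regions of the frame. Illustrative
# only; percentages and names are not taken from the cited art.

def split_fec_budget(total_kbps: float, selected_pct: float) -> tuple[float, float]:
    """Return (selected-region FEC kbps, other-region FEC kbps)."""
    if not 0.0 <= selected_pct <= 100.0:
        raise ValueError("percentage out of range")
    selected = total_kbps * selected_pct / 100.0
    return selected, total_kbps - selected

# E.g., 80% of a 500 kbps FEC budget to the field-of-view region.
fov_fec, extra_fec = split_fec_budget(total_kbps=500.0, selected_pct=80.0)
assert (fov_fec, extra_fec) == (400.0, 100.0)
```

In practice the two percentages would be driven by the available bandwidth and latency targets, as in the resource-allocation discussion cited from Kim, par. 0106.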
Regarding claim 11, Paniconi in view of Gu does not disclose the processor of claim 1, wherein the processor is comprised in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing light transport simulation; a system for performing collaborative content creation for 4D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system incorporating one or more language models; a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.

Kim discloses wherein the processor is comprised in at least one of: a control system for an autonomous or semi-autonomous machine (par. 0334: the electronic device 110 may request one or more of the external electronic devices 1702 and 1704 to perform at least a portion of the function or service, additionally or instead of autonomously executing the function or service); or a system implemented at least partially using cloud computing resources (par. 0063: the first and second application clients 122 and 124 in the electronic device 1000 may perform data transmission and reception with the cloud server 3000 based on a required network service type, or perform data transmission and reception with the edge data network 2000 based on edge computing).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Paniconi in view of Gu with Kim's teaching of a system implemented at least partially using cloud computing resources, because cloud computing resources provide increased accessibility from anywhere with an internet connection, as well as improved security and disaster recovery.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension-of-time policy set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AKSHAY DOSHI, whose telephone number is (571) 272-2736. The examiner can normally be reached M-F, 9:30 AM to 6:00 PM. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JOHN W MILLER, can be reached at (571) 272-7353. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/A.D./ Examiner, Art Unit 2422
/BRIAN P YENKE/ Primary Examiner, Art Unit 2422

Prosecution Timeline

Oct 12, 2023: Application Filed
Nov 15, 2025: Non-Final Rejection — §103
Feb 11, 2026: Examiner Interview Summary
Feb 11, 2026: Applicant Interview (Telephonic)
Feb 17, 2026: Response Filed
Mar 07, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12568270: ELEMENT DISPLAY METHOD AND APPARATUS, ELEMENT SELECTION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (granted Mar 03, 2026; 2y 5m to grant)
Patent 12568255: METHODS AND APPARATUS FOR IDENTIFYING MEDIA CONTENT USING TEMPORAL SIGNAL CHARACTERISTICS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12563264: TECHNIQUES FOR REUSING PORTIONS OF ENCODED ORIGINAL VIDEOS WHEN ENCODING LOCALIZED VIDEOS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12549810: INFORMATION PROCESSING APPARATUS, CONTROL METHOD OF INFORMATION PROCESSING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND SYSTEM (granted Feb 10, 2026; 2y 5m to grant)
Patent 12500841: DEVICE, METHOD AND PROGRAM FOR COMPUTER AND SYSTEM FOR DISTRIBUTING CONTENT BASED ON THE QUALITY OF EXPERIENCE (granted Dec 16, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 99% (+39.2%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 268 resolved cases by this examiner. Grant probability derived from career allow rate.
