Prosecution Insights
Last updated: April 19, 2026
Application No. 18/520,644

Delivery of Video Content

Final Rejection — §102, §103
Filed: Nov 28, 2023
Examiner: MENDOZA, JUNIOR O
Art Unit: 2424
Tech Center: 2400 — Computer Networks
Assignee: Comcast Cable Communications LLC
OA Round: 2 (Final)

Grant Probability: 65% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 0m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 65% (333 granted / 512 resolved; +7.0% vs TC avg) — above average
Interview Lift: +22.8% for resolved cases with interview — strong
Typical Timeline: 3y 0m avg prosecution (24 currently pending)
Career History: 536 total applications across all art units
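The headline numbers above are internally consistent and can be recomputed from the raw counts the dashboard displays. A minimal sketch (variable names are mine; the figures are the dashboard's):

```python
# Recompute the examiner's headline metrics from the counts shown above.
granted = 333        # applications granted by this examiner
resolved = 512       # total resolved applications (granted + abandoned)

allow_rate = granted / resolved   # career allow rate
interview_lift = 0.228            # reported +22.8% lift with interview

print(f"Career allow rate: {allow_rate:.1%}")                   # 65.0%
print(f"With interview:    {allow_rate + interview_lift:.1%}")  # 87.8%, shown as 88%
```

Note that the "88% with interview" figure is simply the 65% base rate plus the 22.8-point lift, rounded.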

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 49.9% (+9.9% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 512 resolved cases
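All four deltas above back out to the same baseline, which suggests the "black line" is a single Tech Center average estimate of about 40%. A quick check (`tc_avg` is an assumption derived from the displayed deltas, not a published figure):

```python
# Verify the per-statute deltas against a single assumed TC baseline of 40%.
tc_avg = 0.400  # assumed Tech Center average, back-derived from the deltas shown
statute_rates = {"§101": 0.056, "§103": 0.499, "§102": 0.167, "§112": 0.112}

for statute, rate in statute_rates.items():
    delta = (rate - tc_avg) * 100   # delta in percentage points
    print(f"{statute}: {rate:.1%} ({delta:+.1f}% vs TC avg)")
```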

Office Action

Rejections under §102 and §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 01/20/2026 have been fully considered but they are not persuasive. Regarding claims 1 and 17, applicant argues that Han does not teach "dividing, based on the connectivity conditions, a video frame of the sequence into a plurality of spatial regions" (remarks, page 6). However, the examiner respectfully disagrees. Han discloses requesting a plurality of portions of video content and determining network conditions for each communication network 258 (paragraphs [0036]-[0037], figure 2E). Han further recites classifying the video portions, wherein a video content server configures transmission of the first group of packets at high priority, e.g. field of view tiles, and transmission of the second group of packets at low priority, e.g. out-of-sight (OOS) tiles (paragraphs [0012], [0038], [0056], figures 2C, 2E and 2H). The distribution scheme in Han distributes urgent FOV tiles over network 204 and regular OOS video tiles over network 206 (paragraph [0032], figure 2A). Moreover, Han also discloses that video content can be 360-degree video content or video content that is less than 360 degrees, i.e. a video frame (paragraphs [0026], [0040]). Applicant also argues that Han divides the tiles based on the perspective of the end user, not network conditions. The examiner respectfully disagrees: Han divides the tiles into urgent tiles that require a priority transmission channel and low-priority tiles that may use the slower network, e.g. urgent chunks vs. low-priority chunks. Therefore, Han discloses the features of claims 1 and 17 as described in the current Office action.
Allowable Subject Matter

Claims 3, 12 and 16 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 2, 4-7, 10, 14, 17, 19 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Han et al. (Pub. No. US 2020/0007905), hereinafter Han.

Regarding claim 1, Han discloses a method comprising: receiving, by a computing device (e.g. video content server 202) and from a requesting device (e.g. client device 212), a request for a content item comprising a sequence of video frames (paragraphs [0012], [0036], figures 2A and 2E; request for a plurality of portions of video content); receiving information indicating connectivity conditions of a plurality of networks that are available to the requesting device (paragraphs [0036]-[0037], figure 2E; determine network conditions for each communication network 258); dividing (e.g.
urgent chunks vs regular chunks), based on the connectivity conditions, a video frame of the sequence into a plurality of spatial regions (paragraphs [0012], [0038], [0056], figures 2C, 2E and 2H; video content server configuring a transmission of the first group of packets to the high priority, e.g. field of view tiles, and configuring a transmission of the second group of packets to the low priority, e.g. out-of-sight (OOS) tiles); sending, via a first network of the plurality of networks, a first spatial region of the plurality of spatial regions (paragraph [0032], figure 2A; distribute urgent FOV tiles over network 204); and sending, via a second network of the plurality of networks, a second spatial region of the plurality of spatial regions (paragraph [0032], figure 2A; distribute regular OOS video tiles over network 206).

Regarding claim 2, Han discloses the method of claim 1; moreover, Han discloses that the dividing comprises determining a quantity of the spatial regions based on a quantity of the plurality of networks (paragraphs [0034], [0054], figure 2A; available network paths).

Regarding claim 4, Han discloses the method of claim 1; moreover, Han discloses that the dividing comprises: determining a frame template (figure 2C; FOV video tiles and OOS video tiles) that corresponds to the connectivity conditions; and splitting, based on the frame template, the video frame into the plurality of spatial regions (paragraphs [0012], [0038], figures 2C, 2E and 2H; video content server configuring a transmission of the first group of packets to the high priority, e.g. field of view tiles, and configuring a transmission of the second group of packets to the low priority, e.g. out-of-sight (OOS) tiles).

Regarding claim 5, Han discloses the method of claim 1; moreover, Han discloses determining a plurality of different frame templates that correspond to a plurality of different network conditions (paragraph [0056]; e.g.
high-quality and low-quality network paths) and that divide a video frame in a plurality of different ways (paragraphs [0012], [0038], figures 2C, 2E and 2H; high-priority video tiles, e.g. field of view tiles, and low-priority video tiles, e.g. out-of-sight (OOS) tiles).

Regarding claim 6, Han discloses the method of claim 1; moreover, Han discloses dividing different video frames of the content item into different combinations of spatial regions (paragraphs [0012], [0038], figures 2C, 2E and 2H; high-priority video tiles, e.g. field of view tiles, and low-priority video tiles, e.g. out-of-sight (OOS) tiles).

Regarding claim 7, Han discloses the method of claim 1; moreover, Han discloses that the first spatial region is adjacent to the second spatial region (paragraphs [0012], [0038], figures 2C, 2E and 2H; high-priority video tiles, e.g. field of view tiles, and low-priority video tiles, e.g. out-of-sight (OOS) tiles).

Regarding claim 10, Han discloses the method of claim 1; moreover, Han discloses that the first spatial region (e.g. FOV video tiles) comprises more data than the second spatial region (e.g. OOS video tiles), and the first network comprises more capacity than the second network (paragraph [0049], figure 2C; FOV tiles are distributed at a high quality, whereas remaining tiles, e.g. OOS video tiles, are distributed at a lower quality, wherein the server prioritizes FoV and OOS chunks over the high-quality and low-quality paths; paragraph [0056]).

Regarding claim 14, Han discloses the method of claim 1; moreover, Han discloses encoding, based on the connectivity conditions of the first network, the first spatial region; and encoding, based on the connectivity conditions of the second network, the second spatial region (paragraph [0056]; prioritize FoV and OOS chunks over the high-quality and low-quality paths, respectively, and deliver them over different networks, e.g. reliable vs. best-effort, respectively).
Regarding claims 17, 19 and 20, Han discloses all the limitations of claims 17, 19 and 20; therefore, claims 17, 19 and 20 are rejected for the same reasons stated for claim 1.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 8, 9, 13, 15, 18 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Han in view of Frusina et al. (Pub. No. US 2014/0337473), hereinafter Frusina.

Regarding claim 8, Han discloses the method of claim 1; however, Han is silent on explicitly disclosing sending, to the requesting device, instructions for assembling the plurality of spatial regions.
Nevertheless, in a similar field of endeavor, Frusina discloses sending, to the requesting device, instructions for assembling the plurality of spatial regions (paragraph [0156], figure 2; the re-assembly module is configured to connect to each supported Wireless Network 70, 72 and 74 and reassemble data packets from all, e.g. three, networks using timing and sequence data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Han by specifically providing the elements mentioned above, as taught by Frusina, for the predictable result of incorporating information that allows the client device to know how to re-assemble the video stream after it was distributed over multiple networks.

Regarding claim 9, Han discloses the method of claim 1; however, Han is silent on explicitly disclosing determining the plurality of spatial regions based on a frame template, and sending the frame template for assembling the plurality of spatial regions. Nevertheless, in a similar field of endeavor, Frusina discloses determining the plurality of spatial regions based on a frame template (e.g. packets), and sending the frame template for assembling the plurality of spatial regions (paragraph [0156], figure 2; the re-assembly module is configured to connect to each supported Wireless Network 70, 72 and 74 and reassemble data packets from all, e.g. three, networks using timing and sequence data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Han by specifically providing the elements mentioned above, as taught by Frusina, for the predictable result of incorporating information that allows the client device to know how to re-assemble the video stream after it was distributed over multiple networks.
Regarding claim 13, Han discloses the method of claim 1; however, Han is silent on explicitly disclosing determining, based on the connectivity conditions of the first network, a frame rate of the first spatial region, and determining, based on the connectivity conditions of the second network, a frame rate of the second spatial region. Nevertheless, in a similar field of endeavor, Frusina discloses determining, based on the connectivity conditions of the first network, a frame rate of the first spatial region, and determining, based on the connectivity conditions of the second network, a frame rate of the second spatial region (paragraphs [0187], [0189]-[0192], figure 2; reduce the frame rate of the video stream based on network conditions). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Han by specifically providing the elements mentioned above, as taught by Frusina, for the predictable result of adjusting the frame rate of the video content according to network conditions.

Regarding claim 15, Han discloses the method of claim 1; however, Han is silent on explicitly disclosing sending, to the requesting device, information indicating that the first network is associated with the first spatial region, and the second network is associated with the second spatial region. Nevertheless, in a similar field of endeavor, Frusina discloses sending, to the requesting device, information indicating that the first network is associated with the first spatial region, and the second network is associated with the second spatial region (paragraph [0156], figure 2; the re-assembly module is configured to connect to each supported Wireless Network 70, 72 and 74 and reassemble data packets from all, e.g. three, networks using timing and sequence data).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Han by specifically providing the elements mentioned above, as taught by Frusina, for the predictable result of incorporating information that allows the client device to know how to re-assemble the video stream after it was distributed over multiple networks.

Regarding claims 18 and 21, Han and Frusina disclose all the limitations of claims 18 and 21; therefore, claims 18 and 21 are rejected for the same reasons stated for claims 8 and 15, respectively.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Han in view of Huang et al. (Pub. No. US 2023/0045884), hereinafter Huang.

Regarding claim 11, Han discloses the method of claim 1; moreover, Han discloses that the first network comprises more capacity than the second network (paragraph [0056]; prioritize FoV and OOS chunks over the high-quality and low-quality paths, respectively, and deliver them over different networks, e.g. reliable vs. best-effort, respectively). However, Han is silent on explicitly disclosing that the first spatial region comprises a higher degree of motion than the second spatial region. Nevertheless, in a similar field of endeavor, Huang discloses that the first spatial region comprises a higher degree of motion than the second spatial region (paragraph [0023], figure 4; dividing a video frame into a region of interest (ROI) and a background region, e.g. with a lesser degree of motion). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Han by specifically providing the elements mentioned above, as taught by Huang, for the predictable result of splitting and differentiating video frames into regions that have different amounts of motion in order to decide how to prioritize sections of the video frame.
Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUNIOR O MENDOZA, whose telephone number is (571) 270-3573. The examiner can normally be reached Mon-Fri, 10am-6pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benjamin Bruckart, can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JUNIOR O MENDOZA/
Primary Examiner, Art Unit 2424

Prosecution Timeline

Nov 28, 2023: Application Filed
Oct 16, 2025: Non-Final Rejection (§102, §103)
Jan 20, 2026: Response Filed
Feb 13, 2026: Final Rejection (§102, §103) (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12587692: METHODS AND SYSTEMS TO SYNERGIZE CONTEXT OF END-USER WITH QUALITY-OF-EXPERIENCE OF LIVE VIDEO FEED (granted Mar 24, 2026; 2y 5m to grant)
Patent 12581140: METHODS AND SYSTEMS FOR CONTENT STORAGE (granted Mar 17, 2026; 2y 5m to grant)
Patent 12537997: SHOPPING INTERFACE AND METHOD (granted Jan 27, 2026; 2y 5m to grant)
Patent 12536569: MEDIA SHARING AND COMMUNICATION SYSTEM (granted Jan 27, 2026; 2y 5m to grant)
Patent 12532051: Dynamic Content Allocation And Optimization (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 65%
With Interview: 88% (+22.8%)
Median Time to Grant: 3y 0m
PTA Risk: Moderate
Based on 512 resolved cases by this examiner. Grant probability derived from career allow rate.
