Prosecution Insights
Last updated: April 19, 2026
Application No. 17/585,178

Determining Location Within Video Content for Presentation to a User

Status: Non-Final OA (§103)
Filed: Jan 26, 2022
Examiner: TRAN, LOI H
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Comcast Cable Communications LLC
OA Round: 4 (Non-Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 4-5
Time to Grant: 2y 10m
Grant Probability With Interview: 88%

Examiner Intelligence

Career Allow Rate: 64% (394 granted / 611 resolved; +6.5% vs TC avg)
Interview Lift: +23.6% for resolved cases with interview (strong)
Avg Prosecution: 2y 10m (typical timeline)
Currently Pending: 25 applications
Total Applications: 636 (across all art units)
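The headline figures above are simple arithmetic on the career counts. A minimal sketch of the derivation, assuming the dashboard computes them as plain ratios (the variable names are mine, not from any real data source):

```python
granted, resolved = 394, 611                  # career counts shown above
allow_rate = granted / resolved               # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")            # Career allow rate: 64.5%

interview_lift = 0.236                        # reported lift for interviewed cases
print(f"With interview: {allow_rate + interview_lift:.0%}")  # With interview: 88%
```

The displayed 64% is the rounded career allow rate; adding the reported +23.6% interview lift reproduces the 88% with-interview figure.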

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 54.9% (+14.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Black line = Tech Center average estimate. Based on career data from 611 resolved cases.
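A quick consistency check on the chart above: subtracting each reported delta from its allow rate recovers the baseline the deltas were measured against, and all four statutes imply the same ~40% figure, which suggests the "black line" is a single Tech-Center-wide average rather than a per-statute one. A sketch (the dictionary layout is illustrative):

```python
# (allow rate %, delta vs TC avg %) per statute, as shown above
stats = {"101": (6.3, -33.7), "103": (54.9, 14.9),
         "102": (14.8, -25.2), "112": (12.5, -27.5)}

for statute, (rate, delta) in stats.items():
    # rate - delta recovers the baseline the delta was measured against
    print(f"§{statute}: implied TC average = {rate - delta:.1f}%")
```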

Office Action

§103
DETAILED ACTION

The present application is being examined under the pre-AIA first to invent provisions.

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/17/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to the rejections of claims 1-3, 6-7, 9-13, 16-17, 19-23, 26-27, and 29 have been fully considered but are moot in view of the new grounds of rejection. New claims 38-46 are rejected as described below.

Response to Amendments

Claim Rejections - 35 USC § 103

5. The text of those sections of Title 35, U.S. Code not included in this section can be found in a prior Office action.

6. Claims 1, 6, 10-11, 16, 20-21, 26, and 38-46 are rejected under 35 U.S.C. 103(a) as being unpatentable over Ekstrand et al. (US Publication 2009/0282454) in view of Dimitrova et al. (US Patent 6,100,941), and further in view of Rui et al. (English translation of Chinese Publication CN101030432, 09-2007).

Regarding claim 1, Ekstrand discloses an apparatus comprising: one or more processors; and memory storing instructions (Ekstrand, para. 0036, processor and memory storing instructions) that, when executed by the one or more processors, cause the apparatus to: determine a first image associated with a first location within video content (Ekstrand, para. 0072, fig.
6, the advertising identification unit 242 may more quickly and accurately identify the beginning and/or end of each instance of the advertising content by searching within each of the time windows 610a-c to identify where a change of scene content occurs, with the change of scene content being defined as the beginning or end of the advertising content. The advertising identification unit 242 may, for example, identify a transition between program/movie content and advertising content in response to identifying a black video frame occurring within the time windows 610a-c; as such, the advertising identification unit 242 may determine a black image at a first time point or location within window 610a-c as a first image that identifies the beginning of an advertising content);

determine a second image associated with a second location within the video content (Ekstrand, para. 0072, fig. 6, the advertising identification unit 242 may more quickly and accurately identify the beginning and/or end of each instance of the advertising content by searching within each of the time windows 610a-c to identify where a change of scene content occurs, with the change of scene content being defined as the beginning or end of the advertising content. The advertising identification unit 242 may, for example, identify a transition between program/movie content and advertising content in response to identifying a black video frame occurring within the time windows 610a-c; as such, the advertising identification unit 242 may determine a black image at a second time point or location within window 610a-c as a second image that identifies the end of the advertising content, wherein the first image indicates the beginning of the advertising content);

determine a second location that is within the video content and that is associated with the second image (Ekstrand, para. 0072, fig.
6, the advertising identification unit 242 may more quickly and accurately identify the beginning and/or end of each instance of the advertising content by searching within each of the time windows 610a-c to identify where a change of scene content occurs, with the change of scene content being defined as the beginning or end of the advertising content. The advertising identification unit 242 may, for example, identify a transition between program/movie content and advertising content in response to identifying a black video frame occurring within the time windows 610a-c; as such, the second image indicates the transition end of the advertising content, and a second time point or location can be determined at or near the position of the second image).

Ekstrand does not explicitly disclose: comparing the first image with a second image to determine a difference between the first image and the second image satisfying a predetermined threshold; and, based on the difference satisfying a predetermined threshold difference, causing a device to output the video content beginning at the second location associated with the second image.

Dimitrova discloses comparing the first image with a second image within the video content to determine a difference between the first image and the second image satisfying a predetermined threshold. (It is noted that a comparison between video frame A and video frame B can be inferred by comparing both to a common reference frame C, provided all frames are in a comparable format; both A and B are analyzed for how they differ from C, allowing an indirect, relative comparison.) (Dimitrova, col. 2, lines 10-22, detecting a black frame occurring prior to or following a commercial within a video data stream divided into a plurality of frames comprises a black frame detector.
The black frame detector performs the steps of dividing an analyzed frame of said frames into a plurality of regions; calculating an average maximum luminance value for said regions; calculating an average minimum luminance value for said regions; comparing said average maximum luminance value and said average minimum luminance value with a black frame threshold; and identifying the occurrence of a black frame based on said step of comparing. In this case, each of the analyzed frames is compared with a reference black frame to determine if the analyzed frames are associated with each other).

It would have been obvious to one of ordinary skill in the art at the time of the invention to incorporate Dimitrova's features into Ekstrand's invention to enhance the user's playback experience by skipping advertising content based on detected black frames prior to and following the advertising content.

Ekstrand-Dimitrova discloses the second location associated with the second image within the video content and the difference between the first image and the second image satisfying a predetermined threshold, but does not explicitly disclose causing a device to output the video content beginning at the second location.

Rui discloses causing a device to output the video content beginning at the second location (Rui, para. 0021, extract an image frame specified by a user from a plurality of image frames of a moving image, and simultaneously display a predetermined number of image frames together with the extracted image frame as an initial image frame; para. 0141, the frame number or timecode "location" of a playback image can be determined).

It would have been obvious to one of ordinary skill in the art at the time of the invention to incorporate Rui's features into Ekstrand's invention to enhance the user's playback experience by skipping advertising content identified by the black frames.
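The region-based black-frame test quoted above from Dimitrova (divide the frame into regions, average the per-region maximum and minimum luminance, and compare both averages against a black-frame threshold) can be sketched as follows. This is a minimal illustration assuming an 8-bit luminance plane; the grid size, threshold value, and function name are my assumptions, not details from the patent.

```python
import numpy as np

def is_black_frame(luma, grid=(4, 4), threshold=16):
    """Region-based black-frame test sketched from Dimitrova, col. 2:
    split the luminance plane into regions, average the per-region max
    and min luminance, and compare both averages to a threshold."""
    h, w = luma.shape
    rows, cols = grid
    maxima, minima = [], []
    for r in range(rows):
        for c in range(cols):
            region = luma[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            maxima.append(region.max())
            minima.append(region.min())
    # A frame is "black" when both averages sit at or below the threshold.
    return np.mean(maxima) <= threshold and np.mean(minima) <= threshold

frame = np.zeros((480, 640), dtype=np.uint8)   # an all-black luminance plane
print(is_black_frame(frame))                    # True
```

Averaging regional extrema, rather than thresholding the whole frame at once, makes the test tolerant of a few bright pixels (logos, noise) inside an otherwise black transition frame.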
Regarding claim 6, Ekstrand-Dimitrova-Rui discloses the apparatus of claim 1, wherein the instructions, when executed by the one or more processors, further cause the apparatus to: cause the device to output the video content, comprising the first image, beginning prior to the first location (Ekstrand, figs. 5 and 6 illustrate displaying the video content beginning prior to the first location located in window 610a); and determine the first image by at least: determining the first image while the device is outputting the video content that comprises the first image (Ekstrand, para. 0072, figs. 5 and 6, identify the beginning of the advertising content by searching within the time window 610a to identify where a change of scene content occurs, with the change of scene content being defined as the beginning of the advertising content; identify a transition between program/movie content and advertising content in response to identifying a black video frame occurring within the time window 610a while displaying the video).

Regarding claim 10, Ekstrand-Dimitrova-Rui discloses the apparatus of claim 1, wherein the second image is associated with a video frame of the video content (Ekstrand, para. 0072, fig. 6, the second image is a black frame).

Claims 11, 16, 20-21, 26, and 41-46 are rejected for the same reasons set forth in claims 1, 6, 10, and 38-40. Ekstrand-Dimitrova-Rui further discloses processors, memory, and a computer readable medium (see Ekstrand, paras. 0081-0082).

Regarding claim 21, Ekstrand-Dimitrova-Rui further discloses the first computing device and the second computing device, wherein the second computing device is configured to output the video content received from the first computing device (see Ekstrand, fig. 1, para. 0024, a "first" device can send data to a second device for displaying data; Rui, paras. 0026-0028).
Regarding claim 38, Ekstrand-Dimitrova-Rui discloses the apparatus of claim 1, wherein the first image and the second image are not identical images (Dimitrova, col. 2, lines 10-22, and col. 18, lines 19-35, the analyzed frames may be black or contain brand names, and may not be identical frames).

Regarding claim 39, Ekstrand-Dimitrova-Rui discloses the apparatus of claim 38, wherein the instructions, when executed by the one or more processors, cause the apparatus to cause the device to output the video content beginning at the second location by skipping output of the video content between the first location and the second location (Dimitrova, col. 15, lines 6-14, and col. 19, lines 51-55, skipping advertising content).

Regarding claim 40, Ekstrand-Dimitrova-Rui discloses the apparatus of claim 1, wherein the second location is before the first location in the video content, and wherein the instructions, when executed by the one or more processors, cause the apparatus to cause the device to output the video content beginning at the second location by skipping backward in the video content from the first location to the second location (Ekstrand, para. 0057, jump the video stream playback location backward in time to cause playback of successively earlier occurring time locations defined by the sequentially ordered addressable chapter marks 310).
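The playback behavior mapped to claims 1, 39, and 40 reduces to a small decision rule: when the difference between the first and second images satisfies the predetermined threshold, output resumes at the second location, whether that location lies after the first (skipping forward past intervening content) or before it (skipping backward). A minimal sketch, with all names and values illustrative rather than taken from the claims or references:

```python
def next_playhead(first_loc, second_loc, frame_diff, threshold):
    """When the image difference satisfies the threshold, resume
    output at the second location; otherwise continue from the first.
    Locations are playback positions in seconds (illustrative)."""
    if frame_diff >= threshold:
        return second_loc          # claims 39/40: resume at second location
    return first_loc               # difference not met: keep playing

# Skip forward over an ad break: the playhead jumps from 120 s to 210 s.
print(next_playhead(120.0, 210.0, frame_diff=0.8, threshold=0.5))   # 210.0
# Skip backward (claim 40): the second location precedes the first.
print(next_playhead(210.0, 120.0, frame_diff=0.8, threshold=0.5))   # 120.0
```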
7. Claims 2, 9, 12, 19, 22, and 29 are rejected under 35 U.S.C. 103(a) as being unpatentable over Ekstrand-Dimitrova-Rui, as applied to claims 1, 11, and 21 above, in view of Yamagami et al. (US Publication 2005/0044489).

Regarding claim 2, Ekstrand-Dimitrova-Rui discloses the apparatus of claim 1, comprising determining the first image at the first location within the video content, as described above. Ekstrand-Dimitrova-Rui does not explicitly disclose, but Yamagami discloses, determining the first image by at least: receiving, from the device, an indication that is associated with a video frame of the video content and that is based on a user command initiated during output of the video content (Yamagami, paras. 0133-0134, when an image of a target scene is displayed on the display screen 201 and a user recognizes the image, "the first image," the user presses a key of the remote control 150 to put a so-called mark on the image of the target scene, i.e., receiving, from a device, a user key press to put a mark, "an indication," associated with an image recognized by the user); and determining the first image based on the indication (Yamagami, paras. 0133-0134, determine the image recognized by the user based on the indication generated by the user key press).

It would have been obvious to one of ordinary skill in the art at the time of the invention to incorporate Yamagami's features into the Ekstrand-Dimitrova-Rui invention to enhance the user's playback navigation by allowing the user to specifically identify a particular image in the video content.
Regarding claim 9, Ekstrand-Dimitrova-Rui discloses the apparatus of claim 1, wherein each of a plurality of users operates a respective device to: identify and receive a third location of a third image, within the video content, that was selected by a first user, and identify and receive a fourth location of a fourth image, within the video content, that was selected by a second user; determining the first location based on the third location and the fourth location; and determining the first image based on a video frame at the first location (see Ekstrand, para. 0038, a plurality of users; para. 0072, fig. 6, each user of the plurality of users determines a respective black image at a respective time point or location within window 610a-c, i.e., a third location of a third image and a fourth location of a fourth image that identify the beginning of an advertising content); determining the first location as a location within the video content between the third location and the fourth location; and determining the first image based on the first location (Ekstrand, para. 0038, the first image can be determined at a first location located between a location of the third image and a location of the fourth image, including the third location or the fourth location).

Ekstrand-Dimitrova-Rui does not explicitly disclose, but Yamagami discloses, causing the apparatus to determine the first image by at least: receiving an indication of a third image, within the video content, that was selected by a first user; and receiving an indication of a fourth image, within the video content, that was selected by a second user (Yamagami, paras. 0133-0134, receiving a respective mark, "an indication," associated with a respective image, "third image and fourth image," recognized by each respective user of the plurality of users).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Yamagami's features into the Ekstrand-Dimitrova-Rui invention to enhance the user's playback navigation by allowing each respective user to specifically identify a particular image in the video content.

Claims 12, 19, 22, and 29 are rejected for the same reasons set forth in claims 2 and 9 above.

8. Claims 3, 13, and 23 are rejected under 35 U.S.C. 103(a) as being unpatentable over Ekstrand-Dimitrova-Rui, as applied to claim 1 above, in view of Agnihotri et al. (US Publication 2003/0126598).

Regarding claim 3, Ekstrand-Dimitrova-Rui discloses the apparatus of claim 1, wherein the instructions, when executed by the one or more processors, further cause the apparatus to determine the first image by at least selecting the first image from a plurality of images within the video content (Ekstrand, para. 0072, fig. 6, determine a black image at a first time point or location from a plurality of images within window 610a-c as a first image that identifies the beginning of an advertising content). Ekstrand-Dimitrova-Rui does not explicitly disclose, but Agnihotri discloses, wherein the plurality of images comprise predetermined images within the video content (Agnihotri, para. 0004, claim 5, detecting a plurality of black or unicolor frames in the video information stream; identifying the presence of a beginning portion of a commercial based on the detection of at least one of the plurality of black or unicolor frames; and identifying the presence of an ending portion of the commercial based on the detection of at least one other of the plurality of black or unicolor frames; see also Dimitrova, US Patent 6,100,941, col. 3, lines 10-49, a video section containing a series of black frames; col. 15, lines 28-33, commercial detection thread 86 may be active when some triggering event occurs.
This triggering event may be a sequence of at least 10-30 black frames).

It would have been obvious to one of ordinary skill in the art at the time of the invention to incorporate Agnihotri's features into the Ekstrand-Dimitrova-Rui invention to enhance the user's playback navigation by allowing each respective user to specifically identify a particular image in a plurality of predetermined images.

Claims 13 and 23 are rejected for the same reasons set forth in claim 3 above.

9. Claims 7, 17, and 27 are rejected under 35 U.S.C. 103(a) as being unpatentable over Ekstrand-Dimitrova-Rui, as applied to claims 1, 11, and 21 above, in view of Yamagami et al. (US Publication 2005/0044489) and Ozluturk (US Publication 2021/0044752).

Regarding claim 7, Ekstrand-Dimitrova-Rui discloses the apparatus of claim 1, wherein each of a plurality of users operates a respective device to determine a respective first image and second image (see Ekstrand, para. 0038, a plurality of users; para. 0072, fig. 6, each user of the plurality of users determines a respective black image at a respective time point or location within window 610a-c as a third image and a fourth image that identify the beginning of an advertising content. It is noted that a transition section between advertising content and program content, or vice versa, may contain a sequence of black frames or unicolor frames as known in the art; see Agnihotri et al., US Publication 2003/0126598, para. 0051 and claim 5; the controller 10 recognizes that the frame detected in step 200 is potentially one of a series of black/unicolor frames following the ending of a commercial segment).
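The triggering rule quoted above (commercial handling activates on a sequence of at least 10-30 black frames) is a run-length test over per-frame black/not-black decisions. A minimal sketch, assuming those per-frame flags are already available; the function name and parameters are illustrative:

```python
def commercial_trigger(black_flags, min_run=10):
    """Fire when a run of consecutive black frames reaches min_run,
    as in the 10-30 black-frame trigger the Office Action quotes."""
    run = 0
    for is_black in black_flags:
        run = run + 1 if is_black else 0   # extend or reset the run
        if run >= min_run:
            return True
    return False

# Twelve consecutive black frames trip the 10-frame trigger.
print(commercial_trigger([False] * 5 + [True] * 12))   # True
# Scattered single black frames never reach the minimum run length.
print(commercial_trigger([True, False] * 20))          # False
```

Requiring a run, rather than any single black frame, filters out isolated dark frames within program content.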
Ekstrand-Dimitrova-Rui does not explicitly disclose causing the apparatus to determine the first image by at least: receiving an indication of a third image, within the video content, that was selected by a first user; receiving an indication of a fourth image, within the video content, that was selected by a second user; and generating the first image to be a combination of the third image and the fourth image.

Yamagami discloses causing the apparatus to determine the first image by at least: receiving an indication of a third image, within the video content, that was selected by a first user; and receiving an indication of a fourth image, within the video content, that was selected by a second user (Yamagami, paras. 0133-0134, receiving a respective mark, "an indication," associated with a respective image, "third image and fourth image," recognized by each respective user of the plurality of users). It would have been obvious to one of ordinary skill in the art at the time of the invention to incorporate Yamagami's features into the Ekstrand-Dimitrova-Rui invention to enhance the user's playback navigation by allowing each respective user to specifically identify a particular image in the video content.

Ekstrand-Dimitrova-Rui-Yamagami does not explicitly disclose, but Ozluturk discloses, generating the first image to be a combination of the third image and the fourth image (Ozluturk, claim 1, generating a target image by combining two or more images of the sequence of images). It would have been obvious to one of ordinary skill in the art at the time of the invention to incorporate Ozluturk's features into the Ekstrand-Dimitrova-Rui-Yamagami invention to effectively estimate an image frame from among a sequence of image frames.

Claims 17 and 27 are rejected for the same reasons set forth in claim 7 above.

Allowed Claims

10. Claims 30-37 contain allowable subject matter.

Conclusion

11.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOI H TRAN, whose telephone number is (571) 270-5645. The examiner can normally be reached 8:00 AM-5:00 PM PST, with the first Friday of each biweek off.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, THAI TRAN, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/LOI H TRAN/
Primary Examiner, Art Unit 2484

Prosecution Timeline

Jan 26, 2022: Application Filed
Aug 23, 2024: Non-Final Rejection (§103)
Oct 04, 2024: Response Filed
Dec 27, 2024: Non-Final Rejection (§103)
Feb 13, 2025: Response Filed
Mar 19, 2025: Final Rejection (§103)
Jun 24, 2025: Notice of Allowance
Jun 24, 2025: Response after Non-Final Action
Jul 17, 2025: Response after Non-Final Action
Aug 25, 2025: Response after Non-Final Action
Aug 31, 2025: Response after Non-Final Action
Oct 22, 2025: Response after Non-Final Action
Dec 17, 2025: Request for Continued Examination
Dec 20, 2025: Response after Non-Final Action
Feb 20, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by the same examiner involving similar technology.

Patent 12598366: CONTENT DATA PROCESSING METHOD AND CONTENT DATA PROCESSING APPARATUS (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593112: METHOD, DEVICE, AND COMPUTER PROGRAM FOR ENCAPSULATING REGION ANNOTATIONS IN MEDIA TRACKS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12592261: VIDEO EDITING METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12576798: CAMERA SYSTEM AND ASSISTANCE SYSTEM FOR A VEHICLE AND A METHOD FOR OPERATING A CAMERA SYSTEM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579810: SYSTEM AND METHOD FOR AUTOMATIC EVENTS IDENTIFICATION ON VIDEO (granted Mar 17, 2026; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 64%
With Interview: 88% (+23.6%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 611 resolved cases by this examiner. Grant probability derived from career allow rate.
