Prosecution Insights
Last updated: April 19, 2026
Application No. 18/458,111

TECHNIQUES FOR AUTOMATICALLY GENERATING REPLAY CLIPS OF MEDIA CONTENT FOR KEY EVENTS

Non-Final OA §103
Filed: Aug 29, 2023
Examiner: HUERTA, ALEXANDER Q
Art Unit: 2425
Tech Center: 2400 — Computer Networks
Assignee: Apple Inc.
OA Round: 3 (Non-Final)
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 6m
Grant Probability With Interview: 80%

Examiner Intelligence

Career Allow Rate: 68% (351 granted / 520 resolved; +9.5% vs TC avg, above average)
Interview Lift: +12.8% across resolved cases with interview (moderate lift)
Typical Timeline: 2y 6m average prosecution (16 applications currently pending)
Career History: 536 total applications across all art units
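The headline figures above are simple ratios of the examiner's career counts. A quick check in Python (the rounding to whole percentages is an assumption about how the dashboard displays values):

```python
# Career counts reported above for this examiner.
granted = 351
resolved = 520

# Career allow rate: 351 / 520 = 0.675, displayed as 68%.
allow_rate = granted / resolved
print(round(allow_rate * 100))  # 68

# Adding the reported +12.8-point interview lift gives the
# "with interview" figure: 67.5% + 12.8% = 80.3%, displayed as 80%.
print(round((allow_rate + 0.128) * 100))  # 80
```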

Statute-Specific Performance

§101: 6.0% (-34.0% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 11.1% (-28.9% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 520 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on October 9, 2025 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-7, 9-13, 15-18, and 20-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claims 1, 10, 16, 22 are objected to because of the following informalities: the claims include a minor typographical error. Specifically, claims 1, 10, 16 recite "wherein a first audio and/or video steam of the first segment is substantially the same…" Similarly, claim 22 recites "wherein the first audio and/or video steam of the first segment…" For examination purposes, "steam" is interpreted as "stream". Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-5, 9-13, 15-18, 20-23 are rejected under 35 U.S.C. 103 as being unpatentable over Quennesson (US Pub. 2018/0025078) in view of Tzoukermann et al. (US Pub. 2019/0260969) and in further view of Chen et al. (US Pub. 2016/0014482), herein referenced as Quennesson, Tzoukermann, and Chen respectively.

Regarding claim 1, Quennesson discloses "A method for dynamically generating replay clips for key events that occur, the method comprising, at a computing device: …receiving a plurality of key events; and for each key event of the plurality of key events: analyzing at least one segment of the plurality of segments against the key event to determine starting and ending points for a replay clip for the key event, and generating the replay clip based on (i) the media content, and (ii) the starting and ending points for the replay clip." ([0062]-[0069], Fig. 3, i.e., a video highlight creator 380 determines key moments of a video broadcast stream based on correlated social media engagement. A video analyzer 322 is further able to determine the starting and ending points for the video highlights 381).
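The cited Quennesson passages describe determining a replay clip's starting and ending points around a key moment. A minimal Python sketch of that kind of mapping; the pre-roll/post-roll values, the snapping-to-segment behavior, and all names are illustrative assumptions, not taken from the reference:

```python
from bisect import bisect_right

def clip_bounds(key_event_t, segment_bounds, pre_roll=5.0, post_roll=10.0):
    """Map a key-event timestamp to replay-clip start/end points,
    snapped to the segment that contains the event (hypothetical helper)."""
    # Locate the segment whose start is the latest one at or before the event.
    starts = [s for s, _ in segment_bounds]
    i = bisect_right(starts, key_event_t) - 1
    seg_start, seg_end = segment_bounds[i]
    # Pad around the event, but never spill past the segment boundaries.
    start = max(seg_start, key_event_t - pre_roll)
    end = min(seg_end, key_event_t + post_roll)
    return start, end

# A key event at t=62s inside a segment spanning 60-90s.
print(clip_bounds(62.0, [(0.0, 60.0), (60.0, 90.0)]))  # (60.0, 72.0)
```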
Quennesson fails to explicitly disclose analyzing media content to detect transitions in the media content; determining, based on the detected transitions, starting and ending points for each of a plurality of segments of the media content; after determining the starting and end points for each of the plurality of segments, tagging each of the plurality of segments with a respective at least one classification that describes a nature of the segment; wherein the replay clip is further generated by: adding, to the replay clip, a first segment of the plurality of segments of the media content based on the classification of the first segment, identifying a second segment of the plurality of segments of the media content based on the classification of the second segment.

Tzoukermann teaches the technique of analyzing media content to detect transitions in the media content; determining, based on the detected transitions, starting and ending points for each of a plurality of segments of the media content; after determining the starting and end points for each of the plurality of segments, tagging each of the plurality of segments with a respective at least one classification that describes a nature of the segment; wherein the replay clip is further generated by: adding, to the replay clip, a first segment of the plurality of segments of the media content based on the classification of the first segment, identifying a second segment of the plurality of segments of the media content based on the classification of the second segment ([0006], [0030], [0039]-[0041], Fig. 3, i.e., the video processing device may analyze frames of the content stream to identify program boundaries within the stream using a transition in the content stream. Once the video portion has been segmented, the segments may be classified to separate television show segments from commercial segments. Segments of the same content type may be merged together sequentially to form a single video content item).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of analyzing media content to detect transitions in the media content; determining, based on the detected transitions, starting and ending points for each of a plurality of segments of the media content; after determining the starting and end points for each of the plurality of segments, tagging each of the plurality of segments with a respective at least one classification that describes a nature of the segment; wherein the replay clip is further generated by: adding, to the replay clip, a first segment of the plurality of segments of the media content based on the classification of the first segment, identifying a second segment of the plurality of segments of the media content based on the classification of the second segment as taught by Tzoukermann, to improve the highlight replay system of Quennesson for the predictable result of automatically segmenting and classifying video streams without using manual judgment and determinations ([0004]).

The combination still fails to disclose avoiding redundancy in the replay clip by omitting the second segment from the replay clip, wherein a first audio and/or video steam of the first segment is substantially the same as a second audio and/or video stream of the second segment.

Chen teaches the technique of avoiding redundancy in the replay clip by omitting the second segment from the replay clip, wherein a first audio and/or video steam of the first segment is substantially the same as a second audio and/or video stream of the second segment ([0186], i.e., similar video clips can be determined using techniques including (but not limited to) by applying thresholds to similarity measurements and/or using decision trees to determine similarity based upon similarity measurements. In numerous embodiments, a duplicate removal process can exclude video clips that are too similar to other video clips from being included in the video summary sequence).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of avoiding redundancy in the replay clip by omitting the second segment from the replay clip, wherein a first audio and/or video steam of the first segment is substantially the same as a second audio and/or video stream of the second segment as taught by Chen, to improve the highlight replay system of Quennesson for the predictable result of identifying and excluding duplicate video clips thus creating a more concise and condensed highlight replay.

Regarding claim 2, Quennesson fails to explicitly disclose "wherein analyzing the media content to detect transitions in the media content further includes: analyzing an optical flow of the media content to detect the transitions in the media content." Tzoukermann teaches the technique of analyzing an optical flow of the media content to detect the transitions in the media content ([0030], [0035]-[0039], i.e., identifying transitions in a video program). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of analyzing an optical flow of the media content to detect the transitions in the media content as taught by Tzoukermann, to improve the highlight replay system of Quennesson for the predictable result of automatically segmenting and classifying video streams without using manual judgment and determinations ([0004]).

Regarding claim 4, Quennesson fails to explicitly disclose "wherein analyzing the media content to detect transitions in the media content further includes: analyzing audio data of the media content to detect the transitions in the media content." Tzoukermann teaches the technique of analyzing audio data of the media content to detect the transitions in the media content ([0007], [0039], [0041]-[0042], i.e., transition may comprise particular audio content). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of analyzing audio data of the media content to detect the transitions in the media content as taught by Tzoukermann, to improve the highlight replay system of Quennesson for the predictable result of automatically segmenting and classifying video streams without using manual judgment and determinations ([0004]).

Regarding claim 5, Quennesson discloses "wherein the media content comprises media content from a plurality of different video sources." ([0004]-[0007], i.e., broadcast streams from different geographic locations).
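The segmentation-and-classification pipeline attributed to Tzoukermann (detect transitions, derive segment boundaries, tag each segment) can be sketched with a toy example. The per-frame "signatures", the threshold, and the class labels are invented for illustration; a real system would use trained shot-boundary and content classifiers:

```python
def detect_transitions(signatures, threshold=0.5):
    """Indices where consecutive frame signatures differ sharply
    (a stand-in for shot/scene transition detection)."""
    return [i for i in range(1, len(signatures))
            if abs(signatures[i] - signatures[i - 1]) > threshold]

def segment(signatures, threshold=0.5):
    """Turn transition points into (start, end) frame ranges."""
    cuts = [0] + detect_transitions(signatures, threshold) + [len(signatures)]
    return [(cuts[i], cuts[i + 1]) for i in range(len(cuts) - 1)]

def classify(seg, signatures):
    """Toy classifier: tag a segment by its mean signature level."""
    vals = signatures[seg[0]:seg[1]]
    return "action" if sum(vals) / len(vals) > 0.5 else "idle"

sigs = [0.1, 0.1, 0.9, 0.9, 0.9, 0.2]
print([(s, classify(s, sigs)) for s in segment(sigs)])
# [((0, 2), 'idle'), ((2, 5), 'action'), ((5, 6), 'idle')]
```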
Regarding claim 9, the combination fails to explicitly disclose "tagging each of the plurality of segments with the respective at least one classification based on one or more of a type of the media content, a type of an event to which the media content corresponds, or a type of a device that generates the media content." Tzoukermann teaches the technique of tagging each of the plurality of segments with the respective at least one classification based on one or more of a type of the media content, a type of an event to which the media content corresponds, or a type of a device that generates the media content ([0006], [0030], [0039]-[0041], Fig. 3, i.e., the video processing device may analyze frames of the content stream to identify program boundaries within the stream using a transition in the content stream. Once the video portion has been segmented, the segments may be classified to separate television show segments from commercial segments). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of tagging each of the plurality of segments with the respective at least one classification based on one or more of a type of the media content, a type of an event to which the media content corresponds, or a type of a device that generates the media content as taught by Tzoukermann, to improve the highlight replay system of Quennesson for the predictable result of automatically segmenting and classifying video streams without using manual judgment and determinations ([0004]).

Regarding claim 10, Quennesson discloses "A non-transitory computer readable storage medium configured to store instructions that, when executed by a processor included in a computing device, cause the computing device to generate replay clips for key events that occur ([0098]-[0099], Fig. 8), by carrying out steps that include: …receiving a plurality of key events; and for each key event of the plurality of key events: analyzing at least one segment of the plurality of segments against the key event to determine starting and ending points for a replay clip for the key event, and generating the replay clip based on (i) the media content, and (ii) the starting and ending points for the replay clip." ([0062]-[0069], Fig. 3, i.e., a video highlight creator 380 determines key moments of a video broadcast stream based on correlated social media engagement. A video analyzer 322 is further able to determine the starting and ending points for the video highlights 381).

Quennesson fails to explicitly disclose analyzing media content to detect transitions in the media content; determining, based on the detected transitions, starting and ending points for each of a plurality of segments of the media content; after determining the starting and ending points for each of the plurality of segments, tagging each of the plurality of segments with a respective at least one classification that describes a nature of the segment; wherein the replay clip is further generated by: adding, to the replay clip, a first segment of the plurality of segments of the media content based on the classification of the first segment, identifying a second segment of the plurality of segments of the media content based on the classification of the second segment.

Tzoukermann teaches the technique of analyzing media content to detect transitions in the media content; determining, based on the detected transitions, starting and ending points for each of a plurality of segments of the media content; after determining the starting and ending points for each of the plurality of segments, tagging each of the plurality of segments with a respective at least one classification that describes a nature of the segment; wherein the replay clip is further generated by: adding, to the replay clip, a first segment of the plurality of segments of the media content based on the classification of the first segment, identifying a second segment of the plurality of segments of the media content based on the classification of the second segment ([0006], [0030], [0039]-[0041], Fig. 3, i.e., the video processing device may analyze frames of the content stream to identify program boundaries within the stream using a transition in the content stream. Once the video portion has been segmented, the segments may be classified to separate television show segments from commercial segments. Segments of the same content type may be merged together sequentially to form a single video content item).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of analyzing media content to detect transitions in the media content; determining, based on the detected transitions, starting and ending points for each of a plurality of segments of the media content; after determining the starting and ending points for each of the plurality of segments, tagging each of the plurality of segments with a respective at least one classification that describes a nature of the segment; wherein the replay clip is further generated by: adding, to the replay clip, a first segment of the plurality of segments of the media content based on the classification of the first segment, identifying a second segment of the plurality of segments of the media content based on the classification of the second segment as taught by Tzoukermann, to improve the highlight replay system of Quennesson for the predictable result of automatically segmenting and classifying video streams without using manual judgment and determinations ([0004]).

The combination still fails to disclose avoiding redundancy in the replay clip by omitting the second segment from the replay clip, wherein a first audio and/or video steam of the first segment is substantially the same as a second audio and/or video stream of the second segment. Chen teaches the technique of avoiding redundancy in the replay clip by omitting the second segment from the replay clip, wherein a first audio and/or video steam of the first segment is substantially the same as a second audio and/or video stream of the second segment ([0186], i.e., similar video clips can be determined using techniques including (but not limited to) by applying thresholds to similarity measurements and/or using decision trees to determine similarity based upon similarity measurements. In numerous embodiments, a duplicate removal process can exclude video clips that are too similar to other video clips from being included in the video summary sequence). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of avoiding redundancy in the replay clip by omitting the second segment from the replay clip, wherein a first audio and/or video steam of the first segment is substantially the same as a second audio and/or video stream of the second segment as taught by Chen, to improve the highlight replay system of Quennesson for the predictable result of identifying and excluding duplicate video clips thus creating a more concise and condensed highlight replay.

Regarding claim 11, claim 11 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 2.

Regarding claim 12, claim 12 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 4.

Regarding claim 13, claim 13 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 5.

Regarding claim 15, claim 15 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 9.

Regarding claim 16, Quennesson discloses "A computing device, comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the computing device to generate replay clips for key events that occur ([0098]-[0099], Fig. 8), by carrying out steps that include: …receiving a plurality of key events; and for each key event of the plurality of key events: analyzing at least one segment of the plurality of segments against the key event to determine starting and ending points for a replay clip for the key event, and generating the replay clip based on (i) the media content, and (ii) the starting and ending points for the replay clip." ([0062]-[0069], Fig. 3, i.e., a video highlight creator 380 determines key moments of a video broadcast stream based on correlated social media engagement. A video analyzer 322 is further able to determine the starting and ending points for the video highlights 381).

Quennesson fails to explicitly disclose analyzing media content to detect transitions in the media content; determining, based on the detected transitions, starting and ending points for each of a plurality of segments of the media content; after determining the starting and ending points for each of the plurality of segments, tagging each of the plurality of segments with a respective at least one classification that describes a nature of the segment; wherein the replay clip is further generated by: adding, to the replay clip, a first segment of the plurality of segments of the media content based on the classification of the first segment, identifying a second segment of the plurality of segments of the media content based on the classification of the second segment.

Tzoukermann teaches the technique of analyzing media content to detect transitions in the media content; determining, based on the detected transitions, starting and ending points for each of a plurality of segments of the media content; after determining the starting and ending points for each of the plurality of segments, tagging each of the plurality of segments with a respective at least one classification that describes a nature of the segment; wherein the replay clip is further generated by: adding, to the replay clip, a first segment of the plurality of segments of the media content based on the classification of the first segment, identifying a second segment of the plurality of segments of the media content based on the classification of the second segment ([0006], [0030], [0039]-[0041], Fig. 3, i.e., the video processing device may analyze frames of the content stream to identify program boundaries within the stream using a transition in the content stream. Once the video portion has been segmented, the segments may be classified to separate television show segments from commercial segments. Segments of the same content type may be merged together sequentially to form a single video content item).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of analyzing media content to detect transitions in the media content; determining, based on the detected transitions, starting and ending points for each of a plurality of segments of the media content; after determining the starting and ending points for each of the plurality of segments, tagging each of the plurality of segments with a respective at least one classification that describes a nature of the segment; wherein the replay clip is further generated by: adding, to the replay clip, a first segment of the plurality of segments of the media content based on the classification of the first segment, identifying a second segment of the plurality of segments of the media content based on the classification of the second segment as taught by Tzoukermann, to improve the highlight replay system of Quennesson for the predictable result of automatically segmenting and classifying video streams without using manual judgment and determinations ([0004]).

The combination still fails to disclose avoiding redundancy in the replay clip by omitting the second segment from the replay clip, wherein a first audio and/or video steam of the first segment is substantially the same as a second audio and/or video stream of the second segment. Chen teaches the technique of avoiding redundancy in the replay clip by omitting the second segment from the replay clip, wherein a first audio and/or video steam of the first segment is substantially the same as a second audio and/or video stream of the second segment ([0186], i.e., similar video clips can be determined using techniques including (but not limited to) by applying thresholds to similarity measurements and/or using decision trees to determine similarity based upon similarity measurements. In numerous embodiments, a duplicate removal process can exclude video clips that are too similar to other video clips from being included in the video summary sequence). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of avoiding redundancy in the replay clip by omitting the second segment from the replay clip, wherein a first audio and/or video steam of the first segment is substantially the same as a second audio and/or video stream of the second segment as taught by Chen, to improve the highlight replay system of Quennesson for the predictable result of identifying and excluding duplicate video clips thus creating a more concise and condensed highlight replay.

Regarding claim 17, claim 17 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 2.

Regarding claim 18, claim 18 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 4.

Regarding claim 20, claim 20 is interpreted and thus rejected for the reasons set forth above in the rejection of claim 9.
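The duplicate-removal approach the Office Action cites from Chen, applying a threshold to similarity measurements and excluding clips that are too similar to already-selected clips, can be sketched as follows. The feature vectors and the 0.95 threshold are made-up stand-ins for real audio/video fingerprints:

```python
def similarity(a, b):
    """Cosine similarity between two clip feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def remove_duplicates(clips, threshold=0.95):
    """Keep a clip only if it is not too similar to any already-kept clip,
    mirroring a threshold-based duplicate-removal pass."""
    kept = []
    for name, feats in clips:
        if all(similarity(feats, kf) < threshold for _, kf in kept):
            kept.append((name, feats))
    return [name for name, _ in kept]

clips = [("broadcast_angle", [1.0, 0.0, 0.2]),
         ("fan_video", [0.99, 0.01, 0.21]),   # nearly identical stream
         ("crowd_reaction", [0.0, 1.0, 0.3])]
print(remove_duplicates(clips))  # ['broadcast_angle', 'crowd_reaction']
```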
Regarding claim 21, the combination fails to explicitly disclose "wherein a first audio stream of the first segment is substantially the same as a second audio stream of the second segment." Chen teaches the technique of providing wherein a first audio stream of the first segment is substantially the same as a second audio stream of the second segment ([0186], i.e., shots, text, and/or audio within video clips can be used to measure similarity). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein a first audio stream of the first segment is substantially the same as a second audio stream of the second segment as taught by Chen, to improve the highlight replay system of Quennesson for the predictable result of identifying and excluding duplicate video clips thus creating a more concise and condensed highlight replay.

Regarding claim 22, the combination fails to disclose "wherein the first audio and/or video steam of the first segment is the same as the second audio and/or video stream of the second segment." Chen teaches the technique of providing wherein the first audio and/or video steam of the first segment is the same as the second audio and/or video stream of the second segment ([0186], i.e., shots, text, and/or audio within video clips can be used to measure similarity). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the first audio and/or video steam of the first segment is the same as the second audio and/or video stream of the second segment as taught by Chen, to improve the highlight replay system of Quennesson for the predictable result of identifying and excluding duplicate video clips thus creating a more concise and condensed highlight replay.

Regarding claim 23, the combination fails to disclose "wherein the first segment is associated with a first video source of the plurality of different video sources, and wherein the second segment is associated with a second video source of the plurality of different video sources." Chen teaches the technique of providing wherein the first segment is associated with a first video source of the plurality of different video sources, and wherein the second segment is associated with a second video source of the plurality of different video sources ([0005], [0062]-[0063], [0176]-[0177], Fig. 1, i.e., data streams of video content are aggregated from various sources). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the first segment is associated with a first video source of the plurality of different video sources, and wherein the second segment is associated with a second video source of the plurality of different video sources as taught by Chen, to improve the highlight replay system of Quennesson for the predictable result of generating a summary of content from a variety of sources thereby providing a more comprehensive viewing experience.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Quennesson in view of Tzoukermann, Chen, and in further view of Shichman et al. (US Pub. 2018/0132011), herein referenced as Shichman.
Regarding claim 3, the combination fails to explicitly disclose "wherein the optical flow comprises one or more of a camera panning direction, a change in camera panning direction, a change in camera panning speed, a change in camera zoom level, a change in camera zoom speed, or a change in camera source video." Shichman teaches the technique of providing wherein the optical flow comprises one or more of a camera panning direction, a change in camera panning direction, a change in camera panning speed, a change in camera zoom level, a change in camera zoom speed, or a change in camera source video ([0138]-[0140], i.e., any one of a transition effect inserted into (or included in) the input video may be identified. In other cases, e.g., when more than one camera is used for capturing the input video, a change of the source camera used for capturing the input video may be identified. In other embodiments, any of: a close-up, a pan, tilt and/or zoom (PTZ effects) of a camera may be identified). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the optical flow comprises one or more of a camera panning direction, a change in camera panning direction, a change in camera panning speed, a change in camera zoom level, a change in camera zoom speed, or a change in camera source video as taught by Shichman, to improve the highlight replay system of Quennesson for the predictable result of generating metadata and/or identifying events ([0138]).

Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Quennesson in view of Tzoukermann, Chen, and in further view of Packard et al. (US Pub. 2016/0105733), herein referenced as Packard.
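The pan/zoom identification cited from Shichman can be approximated from optical-flow vectors: a consistent mean displacement suggests panning, while outward divergence from the frame centre suggests zooming. A stdlib-only sketch; the thresholds, labels, and toy flow field are assumptions, and production code would compute the flow itself with a vision library:

```python
def camera_motion(flow, zoom_thresh=0.2, pan_thresh=0.2):
    """Classify coarse camera motion from sparse optical-flow samples.
    `flow` maps (x, y) sample points to (dx, dy) displacements."""
    n = len(flow)
    # Mean translation of all samples -> candidate panning motion.
    mean_dx = sum(d[0] for d in flow.values()) / n
    mean_dy = sum(d[1] for d in flow.values()) / n
    # Residual motion pointing away from the centroid -> zoom (divergence).
    cx = sum(p[0] for p in flow) / n
    cy = sum(p[1] for p in flow) / n
    divergence = sum((p[0] - cx) * (d[0] - mean_dx) +
                     (p[1] - cy) * (d[1] - mean_dy)
                     for p, d in flow.items()) / n
    if divergence > zoom_thresh:
        return "zoom_in"
    if divergence < -zoom_thresh:
        return "zoom_out"
    if abs(mean_dx) > pan_thresh:
        return "pan_right" if mean_dx > 0 else "pan_left"
    return "static"

# All sample points drift right by 2 px -> a rightward pan.
flow = {(0, 0): (2.0, 0.0), (100, 0): (2.0, 0.0),
        (0, 100): (2.0, 0.0), (100, 100): (2.0, 0.0)}
print(camera_motion(flow))  # pan_right
```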
Regarding claim 6, the combination fails to explicitly disclose "wherein the replay clip is generated using the media content from the plurality of different video sources." Packard teaches the technique of providing wherein the replay clip is generated using the media content from the plurality of different video sources ([0054], i.e., video from different sources can be used, and can be combined to generate the customized highlight sequence). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the replay clip is generated using the media content from the plurality of different video sources as taught by Packard, to improve the highlight replay system of Quennesson for the predictable result of providing a more entertaining or interesting highlight for a fan ([0054]).

Regarding claim 7, the combination fails to explicitly disclose "wherein the replay clip is generated using the media content by splicing different ones of the plurality of different video sources." Packard teaches the technique of providing wherein the replay clip is generated using the media content by splicing different ones of the plurality of different video sources ([0054], i.e., a customized highlight sequence can include the television feed for a grand slam, followed by a YouTube video of the same grand slam as captured by a fan who attended the game, since the YouTube video captures the occurrence from a different perspective). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of providing wherein the replay clip is generated using the media content by splicing different ones of the plurality of different video sources as taught by Packard, to improve the highlight replay system of Quennesson for the predictable result of providing a more entertaining or interesting highlight for a fan ([0054]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Q Huerta whose telephone number is (571)270-3582. The examiner can normally be reached M-F 9:00 AM-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Pendleton, can be reached at (571)272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER Q HUERTA/Primary Examiner, Art Unit 2425 November 6, 2025
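The multi-source splicing idea cited from Packard for claims 6-7 (e.g., a television feed followed by a fan's video of the same occurrence) reduces, at its simplest, to ordering clips from several sources on a shared timeline. A sketch; the source names, fields, and timestamps are hypothetical:

```python
def splice(sources):
    """Interleave clips from multiple sources into one timeline-ordered
    replay sequence (a sketch of a multi-source splice)."""
    merged = [(clip["t"], name, clip["id"])
              for name, clips in sources.items() for clip in clips]
    # Sort by timestamp so perspectives of the same event play in order.
    return [(name, cid) for _, name, cid in sorted(merged)]

sources = {
    "broadcast": [{"t": 10.0, "id": "grand_slam_tv"}],
    "fan_cam":   [{"t": 10.5, "id": "grand_slam_stands"}],
}
print(splice(sources))
# [('broadcast', 'grand_slam_tv'), ('fan_cam', 'grand_slam_stands')]
```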

Prosecution Timeline

Aug 29, 2023: Application Filed
Feb 04, 2025: Non-Final Rejection (§103)
Apr 25, 2025: Examiner Interview Summary
Apr 25, 2025: Applicant Interview (Telephonic)
Jun 09, 2025: Response Filed
Jul 17, 2025: Final Rejection (§103)
Sep 12, 2025: Interview Requested
Sep 18, 2025: Examiner Interview Summary
Sep 18, 2025: Applicant Interview (Telephonic)
Oct 09, 2025: Request for Continued Examination
Oct 23, 2025: Response after Non-Final Action
Nov 06, 2025: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604061: CLOSED CAPTIONING SUMMARIZATION (2y 5m to grant; granted Apr 14, 2026)
Patent 12593088: METHODS AND APPARATUS TO DETERMINE MEDIA EXPOSURE OF A PANELIST (2y 5m to grant; granted Mar 31, 2026)
Patent 12587717: FACILITATING VIDEO GENERATION (2y 5m to grant; granted Mar 24, 2026)
Patent 12587694: METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR VIDEO GENERATION (2y 5m to grant; granted Mar 24, 2026)
Patent 12563266: USER-BASED CONTENT FILTERING (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68% (80% with interview, +12.8%)
Median Time to Grant: 2y 6m
PTA Risk: High
Based on 520 resolved cases by this examiner. Grant probability derived from career allow rate.
