Prosecution Insights
Last updated: April 19, 2026
Application No. 17/507,557

SYSTEMS AND METHODS FOR GENERATING DIGITAL VIDEO CONTENT FROM NON-VIDEO CONTENT

Non-Final OA — §103, §112
Filed
Oct 21, 2021
Examiner
BOYD, ALEXANDER L
Art Unit
2424
Tech Center
2400 — Computer Networks
Assignee
Allstar Gaming, Inc.
OA Round
5 (Non-Final)
74%
Grant Probability (Favorable)
5-6
OA Rounds
2y 5m
To Grant
99%
With Interview

Examiner Intelligence

Grants 74% — above average
74%
Career Allow Rate
222 granted / 299 resolved
+16.2% vs TC avg
Strong +24% interview lift
+24.4%
Interview Lift
comparing resolved cases with vs. without an interview
Typical timeline
2y 5m
Avg Prosecution
35 currently pending
Career history
334
Total Applications
across all art units
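The headline figures in this panel follow from simple arithmetic over the examiner's career counts. A minimal sketch of that derivation in Python (variable names are illustrative, and treating the +24.4-point lift as additive on top of the career allow rate is an assumption about how the dashboard combines the two numbers):

```python
# Career allow rate from the counts shown above: 222 granted of 299 resolved.
granted, resolved = 222, 299
allow_rate_pct = granted / resolved * 100   # ~74.2, displayed as 74%

# Interview lift: +24.4 percentage points for resolved cases with an interview.
interview_lift_pp = 24.4
with_interview_pct = min(allow_rate_pct + interview_lift_pp, 100.0)

print(f"career allow rate: {allow_rate_pct:.1f}%")   # career allow rate: 74.2%
print(f"with interview:    {with_interview_pct:.1f}%")  # with interview:    98.6%
```

Under this additive assumption the with-interview figure comes out to 98.6%, consistent with the ~99% shown above after rounding.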

Statute-Specific Performance

§101
4.8%
-35.2% vs TC avg
§103
53.9%
+13.9% vs TC avg
§102
15.1%
-24.9% vs TC avg
§112
18.5%
-21.5% vs TC avg
Deltas are measured against a Tech Center average estimate • Based on career data from 299 resolved cases
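Each statute row pairs the examiner's rate with a delta against the Tech Center average, so the baseline itself can be recovered from any row. A small sketch (the dict and names are illustrative, not from any real API):

```python
# (rate %, delta vs TC avg in percentage points) for each statute, as listed above.
stats = {
    "§101": (4.8, -35.2),
    "§103": (53.9, +13.9),
    "§102": (15.1, -24.9),
    "§112": (18.5, -21.5),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # the Tech Center baseline implied by this row
    print(f"{statute}: examiner {rate}%, implied TC avg {tc_avg:.1f}%")
```

Every row implies the same 40.0% baseline, which suggests the page may compare all four statutes against a single modeled Tech Center estimate rather than per-statute averages.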

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/20/2025 has been entered.

Claim Status

Claims 1-2, 4, 6-12, 14, and 16-28 are pending in this Office Action. Claims 1-2, 6, 11, and 16 are amended. Claims 23-28 are new. Claims 3, 5, 13, and 15 are cancelled.

Response to Arguments

Applicant’s arguments with respect to claims 1 and 11 have been considered, but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1-2, 4, 6-12, 14, and 16-28 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.
Claim 1 recites “modifying at least a subset of the non-video demo/replay content based on content customization preferences to produce modified non-video demo/replay content; causing execution of the digital video game environment based on the modified non-video demo/replay content to generate a digital video”. The specification does not describe these features. Instead, the specification recites “(c) combine the non-video content, the extracted metadata, and user preferences into a digital content instruction package; and (d) generate the digital video content based on the digital content instructions package, wherein the generating of the digital video content includes (i) modifying the digital video content based on the user preferences” (par. 7, Fig. 1). The non-video content can be a demo/replay file (par. 19). The instructions can include user preferences (par. 24-27). However, generating a digital content instructions package is not the same as producing modified non-video demo/replay content. The specification does not disclose modifying the non-video demo/replay content based on content customization preferences to produce modified non-video demo/replay content. Therefore, it also does not disclose causing execution of the digital video game environment based on the modified non-video demo/replay content to generate a digital video.

Claim 11 recites similar features to claim 1 and is rejected for the same reasons as those given above. Claims 2, 4, 6-10, 12, 14, and 16-28 are rejected as being dependent on independent claims 1 and 11.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6, 11-12, 16, 23-24, and 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Condrey (US 2022/0331703) in view of Yilmazcoban et al. (US 2021/0394060) and further in view of Zahn et al. (US 2014/0370979).

Regarding claims 1 and 11, Condrey teaches: A method and a system for generating enriched digital video content from non-video demo/replay content that represents digital gameplay in a digital video game environment [generating a customized replay of gameplay based on gameplay state information (par. 12 and 16, Fig. 1). The method may be performed by server 150 (par. 42)], comprising: retrieving the non-video demo/replay content of the digital gameplay, the non-video demo/replay content including data of the digital gameplay replayable in the digital video game environment [retrieve gameplay state information recorded by event log engine 124. The gameplay state information includes user inputs, character movements and actions, positions and attributes of other characters and objects, depiction of surrounding game environment and conditions, and other captured gameplay state information that is used to replay a game event for the user (par. 65-70, Fig. 3 and 5)]; modifying at least a subset of the non-video demo/replay content based on content customization preferences to produce modified non-video demo/replay content; causing execution of the digital video game environment based on the modified non-video demo/replay content to generate a digital video [customization engine 128 may apply customization content to a playback (by replay engine 126) of a trigger event using system-defined customization templates and/or user-defined customization templates. As an example, in a shooter game where a trigger event comprises the death of a player, the trigger event replay may show a player his or her death from the perspective of the killer. As such, each player may be provided with an option to select one or more system-defined customization templates (including customization content) that will be added to the trigger event replay shown to opponents (par. 65-75, Fig. 4 and 5)]; generating the enriched digital video content [generating a customized replay of gameplay based on gameplay state information (par. 12, 16, and 112, Fig. 1 and 3)].

Condrey does not explicitly disclose: extracting in-game data from the subset of the non-video demo/replay content to generate extracted metadata; and generating the enriched digital video content by combining the digital video with the extracted metadata. Yilmazcoban teaches: extracting in-game data from the subset of the non-video demo/replay content to generate extracted metadata [extracting game-related information, such as in-game behavior of players from a replay file (par. 22, 24-25, 44, and 48, Fig. 1)] and generating enriched digital video content [editing the video clip by adding transition effects, placing icons, bubbles and texts (par. 44)]. It would have been obvious to one of ordinary skill in the art, having the teachings of Condrey and Yilmazcoban before the effective filing date of the claimed invention, to modify the method of Condrey by incorporating Yilmazcoban’s method of extracting in-game metadata and generating enriched digital video content. The motivation for doing so would have been to analyze the in-game data and provide high quality video content (Yilmazcoban - par. 25 and 44). Therefore, it would have been obvious to combine the teachings of Condrey and Yilmazcoban in obtaining the invention as specified in the instant claim.

Yilmazcoban does not explicitly disclose: combining the digital video with the extracted metadata. Zahn teaches: combining the digital video with the extracted metadata [the metadata is added to the corresponding video's metadata (par. 21 and 35)]. It would have been obvious to one of ordinary skill in the art, having the teachings of Condrey, Yilmazcoban, and Zahn before the effective filing date of the claimed invention, to modify the method of Condrey and Yilmazcoban by incorporating combining the digital video with the extracted metadata as disclosed by Zahn. The motivation for doing so would have been to enhance the videogame-generated video, such as by including in the video metadata describing what users are doing, where they are in their games, what inventory or abilities they have (Zahn – title and par. 18). Therefore, it would have been obvious to combine the teachings of Condrey and Yilmazcoban with Zahn to obtain the invention as specified in the instant claim.
Regarding claims 2 and 12, Condrey, Yilmazcoban, and Zahn teach the method of claim 1; Yilmazcoban further teaches: the non-video demo/replay content comprises a file format selected from the group consisting of a demo file and a replay file for the digital video game environment [replay file (par. 24 and 49)].

Regarding claims 6 and 16, Condrey, Yilmazcoban, and Zahn teach the method of claim 1; Condrey further teaches: the extracted in-game data comprises local user event data or game server-side event data [gameplay state information recorded by event log engine 124 may include user commands (par. 59, Fig. 1)].

Regarding claim 23, Condrey, Yilmazcoban, and Zahn teach the method of claim 1; Condrey further teaches: the causing the execution of the digital video game environment includes: passing instructions represented by the modified non-video demo/replay content to the digital video game environment, via at least one of an indication of a key press or a programmatic interface, to manipulate the digital gameplay in the digital video game environment to generate the digital video [passing instructions to a processor to manipulate data (par. 46). Replay engine may generate a replay of one or more of the game actions comprising the trigger event. The replay of a trigger event comprises a playback of the actions comprising the trigger event based on retrieved gameplay state information (e.g., retrieved from an event log engine) (par. 12, 16, and 109-112, Fig. 1 and 3)].
Regarding claim 24, Condrey, Yilmazcoban, and Zahn teach the method of claim 1; Condrey further teaches: the causing the execution of the digital video game environment includes: passing instructions represented by the modified non-video demo/replay content to the digital video game environment, via an application layer that executes in parallel to the digital video game environment, to manipulate the digital gameplay in the digital video game environment to generate the digital video [passing instructions to a processor to manipulate data (par. 46). Executing a customization application 120 (par. 36-38, Fig. 1). Replay engine may generate a replay of one or more of the game actions comprising the trigger event. The replay of a trigger event comprises a playback of the actions comprising the trigger event based on retrieved gameplay state information (e.g., retrieved from an event log engine) (par. 12, 16, and 109-112, Fig. 1 and 3)].

Regarding claim 26, Condrey, Yilmazcoban, and Zahn teach the method of claim 1; Condrey further teaches: the modified non-video demo/replay content represents at least one of an in-game camera change, a physical timing change, or a heads-up display (HUD) change, relative to the digital gameplay [the replay generated based on gameplay state information and depicting a number of different player perspectives and a period of time (par. 65-69)].

Regarding claim 27, Condrey, Yilmazcoban, and Zahn teach the method of claim 1; Condrey and Yilmazcoban further teach: parsing the non-video demo/replay content based on an in-game event to produce the at least the subset of the non-video demo/replay content [Yilmazcoban – parsing the replay file to extract in-game details (par. 24-25). Condrey – Based on a trigger event within the game, retrieving the gameplay state information for a period of time around the trigger event (par. 65-67)].

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Condrey (US 2022/0331703) in view of Yilmazcoban et al. (US 2021/0394060), further in view of Zahn et al. (US 2014/0370979), and further in view of Ramchandran et al. (US 2011/0137724).

Regarding claims 4 and 14, Condrey, Yilmazcoban, and Zahn teach the method of claim 2; Condrey, Yilmazcoban, and Zahn do not explicitly disclose: the non-video demo/replay content comprises a .DEM format file, REPLAY format file, .REC format file, .ROFL format file, .HSREPLAY format file, StormReplay format file, REP format file, .LRF format file, .OSR format file, .YDR format file, .SC2REPLAY format file, .WOTREPLAY format file, .WOWSREPLAY format file, .W3G format file, .ARP format file, .MGL format file, .RPL format file, .WOTBREPLAY format file, .MGX format file, .KWREPLAY format file, .PEGN format file, .QWD format file, .DM2 format file, or .DMO format file. Ramchandran teaches: the non-video demo/replay content comprises a REP format file or .DMO format file [(par. 30, Fig. 4)]. It would have been obvious to one of ordinary skill in the art, having the teachings of Condrey, Yilmazcoban, Zahn, and Ramchandran before the effective filing date of the claimed invention, to modify the method of Condrey, Yilmazcoban, and Zahn by incorporating the non-video demo/replay content comprising a REP format file or .DMO format file as disclosed by Ramchandran. The motivation for doing so would have been to determine game metadata from the file, such as game data from Starcraft or Duke Nukem 3D (Ramchandran – par. 11 and 30). Therefore, it would have been obvious to combine the teachings of Condrey, Yilmazcoban, and Zahn with Ramchandran to obtain the invention as specified in the instant claim.

Claims 7-10 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Condrey (US 2022/0331703) in view of Yilmazcoban et al. (US 2021/0394060), further in view of Zahn et al. (US 2014/0370979), and further in view of Strutton et al. (US 2012/0011432).
Regarding claims 7 and 17, Condrey, Yilmazcoban, and Zahn teach the method of claim 1; Zahn further teaches: generating a title and at least one organizational tag for the enriched digital video content [generating a Title ID and organization IDs, such as user ID and video ID (par. 20, Fig. 1)]. Condrey, Yilmazcoban, and Zahn do not explicitly disclose: generating a digital two-dimensional (2D) image thumbnail based on the title and the at least one organizational tag. Strutton teaches: generating a digital two-dimensional (2D) image thumbnail based on the title and the at least one organizational tag [Fig. 8 illustrates a name or title and organization tags, and a thumbnail image 722 could be selected or generated based on the title and tag (par. 64, 72, and 90)]. It would have been obvious to one of ordinary skill in the art, having the teachings of Condrey, Yilmazcoban, Zahn, and Strutton before the effective filing date of the claimed invention, to modify the method of Condrey, Yilmazcoban, and Zahn by incorporating the teaching of Strutton to generate a digital two-dimensional (2D) image thumbnail based on the title and the at least one organizational tag. The motivation for doing so would have been to provide a visual image representing the video clip, such as for sharing the video clip on social media (Strutton – par. 5). Therefore, it would have been obvious to combine the teachings of Condrey, Yilmazcoban, and Zahn with Strutton to obtain the invention as specified in the instant claim.

Regarding claims 8 and 18, Condrey, Yilmazcoban, Zahn, and Strutton teach the method of claim 7; Strutton further teaches: the 2D image thumbnail is compliant with Open Graph Protocol [a social graph protocol called the "Open Graph Protocol" including meta tags for the thumbnail image (par. 64)].

Regarding claims 9 and 19, Condrey, Yilmazcoban, Zahn, and Strutton teach the method of claim 8; Strutton further teaches: distributing the enriched digital video content, the title, at least one organizational tag, and the 2D image thumbnail, to an Internet-based software platform [sharing the clip, tags, and thumbnail image to a social media or web page (par. 35, 64, 72, and 90, Fig. 1-2)].

Regarding claims 10 and 20, Condrey, Yilmazcoban, Zahn, and Strutton teach the method of claim 9; Yilmazcoban and Strutton further teach: the Internet-based software platform comprises a cloud computing technology based graphics processing unit (GPU) [Yilmazcoban - graphics processing units (GPUs) (par. 40). Strutton – cloud computing (par. 33 and 50, Fig. 1)].

Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Condrey (US 2022/0331703) in view of Yilmazcoban et al. (US 2021/0394060), further in view of Zahn et al. (US 2014/0370979), and further in view of Rimon (US 2014/0364228).

Regarding claims 21 and 22, Condrey, Yilmazcoban, and Zahn teach the method of claim 1; Condrey, Yilmazcoban, and Zahn do not explicitly disclose: the digital video game environment is a virtual reality (VR) digital environment or an augmented reality (AR) digital environment. Rimon teaches: the digital video game environment is a virtual reality (VR) digital environment or an augmented reality (AR) digital environment [the gaming environment may include an immersive device, such as a head-mounted display (HMD) for viewing a virtual space (par. 8 and 10, Fig. 1 and 9) and may also provide an augmented reality experience (par. 54)].
It would have been obvious to one of ordinary skill in the art, having the teachings of Condrey, Yilmazcoban, Zahn, and Rimon before the effective filing date of the claimed invention, to modify the method of Condrey, Yilmazcoban, and Zahn by incorporating a virtual reality (VR) digital environment or an augmented reality (AR) digital environment as disclosed by Rimon. The motivation for doing so would have been to provide the user with a more immersive interactive experience (Rimon – par. 8). Therefore, it would have been obvious to combine the teachings of Condrey, Yilmazcoban, and Zahn with Rimon to obtain the invention as specified in the instant claim.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Swaminathan et al. (US 2019/0377955) - Generating Digital Video Summaries Utilizing Aesthetics, Relevancy, And Generative Neural Networks.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Boyd whose telephone number is (571)270-0676. The examiner can normally be reached Monday - Friday 9am-5pm PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin Bruckart, can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ALEXANDER BOYD/
Examiner, Art Unit 2424

Prosecution Timeline

Oct 21, 2021
Application Filed
Dec 07, 2022
Non-Final Rejection — §103, §112
Jun 14, 2023
Response Filed
Aug 18, 2023
Final Rejection — §103, §112
Feb 26, 2024
Notice of Allowance
Sep 26, 2024
Request for Continued Examination
Oct 04, 2024
Response after Non-Final Action
Oct 23, 2024
Non-Final Rejection — §103, §112
May 05, 2025
Response Filed
Jun 24, 2025
Final Rejection — §103, §112
Oct 29, 2025
Applicant Interview (Telephonic)
Oct 29, 2025
Examiner Interview Summary
Nov 20, 2025
Request for Continued Examination
Nov 30, 2025
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §103, §112
Mar 10, 2026
Applicant Interview (Telephonic)
Mar 10, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587698
OPTIMIZATION OF ENCODING PROFILES FOR MEDIA STREAMING
2y 5m to grant Granted Mar 24, 2026
Patent 12581167
DYNAMIC CONTENT SELECTION MENU
2y 5m to grant Granted Mar 17, 2026
Patent 12549798
SMART TV REMOTE-CONTROL SYSTEM OR METHOD WITH NON-STANDARD RC COMMAND TRANSLATION CAPABILITY
2y 5m to grant Granted Feb 10, 2026
Patent 12506889
CODEC MANAGEMENT AT AN INFORMATION HANDLING SYSTEM
2y 5m to grant Granted Dec 23, 2025
Patent 12489938
VIDEO TRANSMISSION APPARATUS, COMPUTER-READABLE STORAGE MEDIUM, VIDEO TRANSMISSION METHOD, AND SYSTEM
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
74%
Grant Probability
99%
With Interview (+24.4%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 299 resolved cases by this examiner. Grant probability derived from career allow rate.
