Prosecution Insights
Last updated: April 18, 2026

Application No. 18/684,191
VIDEO PROCESSING APPARATUS, VIDEO PROCESSING METHOD, AND PROGRAM

Latest Action: Final Rejection (§101, §102, §112)
Filed: Feb 16, 2024
Examiner: JONES, COURTNEY PATRICE
Art Unit: 3699
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Sony Group Corporation
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 67% (158 granted / 235 resolved; +15.2% vs TC avg, above average)
Interview Lift: +23.3% (strong; allow rate among resolved cases with interview vs. without)
Avg Prosecution (typical timeline): 3y 3m
Currently Pending: 37
Total Applications (career history): 272, across all art units
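The headline figures above can be reproduced from the card data; a quick arithmetic check (values taken directly from the statistics shown):

```python
# Sanity-check of the examiner statistics reported above.
granted, resolved = 158, 235

allow_rate = granted / resolved          # career allow rate
print(f"{allow_rate:.1%}")               # 67.2%, shown rounded as 67%

# The "+15.2% vs TC avg" delta implies a Tech Center average of roughly:
tc_avg = allow_rate - 0.152
print(f"{tc_avg:.1%}")                   # about 52.0%
```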

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 23.5% (-16.5% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)

Tech Center averages are estimates; based on career data from 235 resolved cases.

Office Action

Statutes at issue: §101, §102, §112
Acknowledgements

This communication is in response to applicant's response filed on 02/20/2026. Claims 1, 2, 6, 9, 11-16, and 19-20 have been amended. Claims 1-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Arguments

Applicant's arguments, see pg. 8, filed 02/20/2026, with respect to the interpretation of claims 1 and 12-16 under 35 U.S.C. § 112(f), i.e., that the amendments to the claims avoid invocation of § 112(f), have been fully considered and are persuasive. The § 112(f) claim interpretation has therefore been withdrawn.

Applicant's arguments, see pg. 8, filed 02/20/2026, with respect to the rejections of claims 9 and 11 under 35 U.S.C. § 112(a), i.e., that the amended claims recite subject matter clearly supported and enabled by the original written description, have been fully considered and are persuasive. The § 112(a) rejections have therefore been withdrawn.

Applicant's arguments, see pgs. 8-9, filed 02/20/2026, with respect to the rejection of claim 20 under 35 U.S.C. § 101, i.e., that the claim has been amended to no longer recite a signal per se, have been fully considered and are persuasive. However, claim 20 remains rejected under 35 U.S.C. § 101; more detail is provided below.

Applicant's arguments, see pgs. 9-15, filed 02/20/2026, with respect to the rejections of claims 1-20 under 35 U.S.C. § 101, i.e., that independent claims 1, 19, and 20 recite features that cannot be satisfied by mental steps or processes, are not merely abstract ideas as defined by case law, and relate to a specific improvement in a computer-related technology including structural elements, have been fully considered and are not persuasive.

Receiving a plurality of input videos, creating video creation information, and determining which input video to output as part of a material video is an example of basic computer functionality performed by generic computers; it neither imposes a meaningful limit on the judicial exception nor integrates the exception into a practical application. That is, other than reciting circuitry to create and determine the material video, nothing in the claim precludes the language from being practically performed in the mind. The § 101 rejection is therefore maintained.

Applicant's arguments, see pgs. 15-17, filed 02/20/2026, with respect to the rejections of claims 1, 19, and 20 under 35 U.S.C. § 102(a)(1), i.e., that Miron does not teach the amended limitations "circuitry configured to create video creation information based on a plurality of triggers detected from a plurality of input videos, determine at least one material video to be associated with creation of an output video from among the plurality of input videos, based on the video creation information, and control output of the at least one material video associated with the output video, wherein the output video is created according to a positional relationship between positions corresponding to the plurality of input videos," have been fully considered and are persuasive. That rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Spencer (US 20190262726).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more.

Under Step 1 of the Section 101 analysis, claim 1 is directed to an apparatus, claim 19 to a method, and claim 20 to a non-transitory computer-readable storage medium (an apparatus, a process, and an article of manufacture).

Under Step 2A, Prong One, claims 1, 19, and 20 recite: creating video creation information based on a plurality of triggers detected from a plurality of input videos; determining a material video to be associated with creation of an output video from among the plurality of input videos, based on the video creation information; and outputting the at least one material video associated with the output video, wherein the output video is created according to a positional relationship between positions corresponding to the plurality of input videos.

As drafted, these recited steps describe an abstract idea of creating a highlight reel from a plurality of input videos, which falls under the mental-process grouping (i.e., observation, evaluation, judgment, opinion). The creating and determining steps cover performance of the limitations in the mind but for the recitation of generic computer components. There is no transformation of data.

Under Step 2A, Prong Two, the additional claim elements, considered individually, do not apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception or integrates it into a practical application. The additional elements "a video processing apparatus comprising: circuitry" and "non-transitory computer-readable storage medium" merely "apply" the concept of creating a highlight reel from a plurality of input videos. The claimed computer components are recited at a high level of generality and are merely invoked as tools to perform the abstract idea.

Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Considered in combination, the additional claim elements likewise do not impose a meaningful limit on the judicial exception or integrate it into a practical application. As discussed above, the additional elements of a video processing apparatus comprising circuitry and a non-transitory computer-readable storage medium amount to no more than applying the abstract idea of creating a highlight reel from a plurality of input videos. Mere instructions to apply an exception using a generic component cannot provide an inventive concept. The claim is not patent eligible.

Under Step 2B, the additional claim elements, considered individually and in combination, do not provide meaningful limitations that transform the abstract idea into a patent-eligible application such that the claims amount to significantly more than the abstract idea itself, for reasons similar to those outlined under Step 2A, Prong Two.
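For technical context, the steps the examiner characterizes as a mental process form a simple trigger-to-output pipeline. A minimal sketch of that data flow, with all function and field names hypothetical (the claims do not specify any implementation):

```python
# Hypothetical sketch of the claimed flow: detect triggers in input
# videos, build "video creation information", then select the material
# videos used to create the output video. Names are illustrative only.

def detect_triggers(input_videos):
    # Stand-in for trigger detection; per dependent claim 5, a trigger
    # is a characteristic instance occurring in an input video.
    return [{"video_id": v["id"], "type": t}
            for v in input_videos for t in v.get("triggers", [])]

def create_video_creation_info(triggers):
    # Group detected triggers per source video (the "video creation
    # information" of the independent claims).
    info = {}
    for trig in triggers:
        info.setdefault(trig["video_id"], []).append(trig["type"])
    return info

def determine_material_videos(input_videos, creation_info):
    # Keep only the input videos in which a trigger was detected.
    return [v for v in input_videos if v["id"] in creation_info]

videos = [{"id": "cam1", "triggers": ["goal"]}, {"id": "cam2"}]
info = create_video_creation_info(detect_triggers(videos))
material = determine_material_videos(videos, info)
print([v["id"] for v in material])   # ['cam1']
```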
A similar analysis applies to dependent claims 2-11, which recite: "wherein the circuitry creates trigger information as the video creation information, based on the plurality of triggers; wherein the trigger information includes a video ID for identification of each of the input videos in which each of the triggers has been detected; wherein the trigger information includes trigger type information indicating a type of each of the triggers; wherein each of the triggers is a characteristic instance that has occurred in each of the input videos; wherein the circuitry creates event information as the video creation information based on the trigger information; wherein the event information includes a trigger ID for identification of a target trigger and a corresponding trigger corresponding to the target trigger, the target trigger and the corresponding trigger being any of the plurality of triggers and constituting an event; wherein the event information includes event type information indicating a type of an event; wherein the circuitry creates event scene information as the video creation information based on the event information; wherein the event scene information includes an event ID for identification of the event constituting an event scene; wherein the circuitry creates statistical information as the video creation information based on the trigger information." These limitations merely elaborate on the abstract idea without reciting any new additional elements. Considered individually and as a whole in combination with the independent claims from which they depend, the claims do not recite additional elements that amount to significantly more than the judicial exception.
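The layered records recited in dependent claims 2-10 can be read as a small data model. A hedged sketch, with field names illustrative rather than claim language:

```python
# Illustrative data model for the information recited in claims 2-10.
# Field names are hypothetical; only the relationships come from the
# claim language quoted above.
from dataclasses import dataclass, field

@dataclass
class TriggerInfo:            # claims 2-4
    trigger_id: str
    video_id: str             # identifies the input video (claim 3)
    trigger_type: str         # type of the trigger (claim 4)

@dataclass
class EventInfo:              # claims 6-8
    event_id: str
    event_type: str           # type of the event (claim 8)
    target_trigger: str       # trigger ID of the target trigger (claim 7)
    corresponding_trigger: str

@dataclass
class EventSceneInfo:         # claims 9-10
    event_ids: list = field(default_factory=list)  # events in the scene

t = TriggerInfo("T1", "cam1", "goal")
e = EventInfo("E1", "score", target_trigger="T1", corresponding_trigger="T2")
print(e.target_trigger)   # 'T1'
```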
A similar analysis applies to dependent claims 12-18, which recite: "wherein the circuitry determines, as the at least one material video, each of the input videos that can be specified by the video ID included in the trigger information; wherein the circuitry determines, as the at least one material video, each of the input videos that can be specified by the video ID included in the trigger information indicated by the trigger ID included in the event information; wherein the circuitry determines, as the at least one material video, each of the input videos that can be specified by the video ID included in the event scene information; wherein the circuitry creates the output video by using the at least one material video associated with creation of the output video; wherein the circuitry arranges a plurality of material videos in one video to create the output video; wherein each of the input videos is a gameplay video output from a game machine; and wherein each of the input videos is a video obtained by image capture of a player playing a game." These limitations merely elaborate on the abstract idea without reciting any new additional elements. Considered individually and as a whole in combination with the independent claims from which they depend, the claims do not recite additional elements that amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 102(a)(1)

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Spencer (US 20190262726).

Regarding Claims 1, 19, and 20, Spencer teaches:

"create video creation information based on a plurality of triggers detected from a plurality of input videos" (Paragraph 0100 teaches a first step comprising respectively receiving, from a plurality of videogame devices (acting as video recording apparatuses), a videogame identifier and one or more associated in-game events and their respective in-game positions; an event analyser receives data relating to a particular videogame that identifies in-game events, and where they occur within the game, from a plurality of videogame devices);

"determine at least one material video to be associated with creation of an output video from among the plurality of input videos, based on the video creation information" (Paragraphs 0102, 0119, and 0072-0073 teach a second step comprising performing an analysis on one or more aspects of the in-game events associated with a videogame identifier, and their respective in-game positions, to identify statistically significant in-game events; in response, the event analysis server selects analysis data relating to one or more identified statistically significant in-game events associated with the received videogame identifier and having an in-game position within a predetermined distance of at least one received position (as noted above, corresponding to a level, periodic time or distance, or current or impending video frame); the predetermined distance may be the draw distance of the videogame, so that an indicator object for the event can appear as if it is part of the in-game render, but in principle may be any predetermined distance from the received position; optionally one or more in-game events and their respective in-game positions are associated with the identifier for the videogame; the in-game events, their respective in-game positions, and the identifier for the videogame may then optionally be uploaded to a remote server operable as the event analyser, which receives such data from a plurality of client devices acting as video recording apparatuses, and identifies statistically significant features of the data, as described later herein); and

"control output of the at least one material video associated with the output video, wherein the output video is created according to a positional relationship between positions corresponding to the plurality of input videos" (Paragraphs 0121, 0093-0094, and 0123 teach, finally, the event analysis server transmitting data indicative of the in-game event analysis data and its in-game position to the video playback apparatus; the video playback apparatus can then use the data to construct an augmentation layer for the video, as illustrated in FIG. 7; if, for a given event, the corresponding X, Y coordinate in the currently displayed video image is determined to have a Z coordinate that is closer than the Z coordinate of the event, then in effect the event is obscured from the current viewpoint of the displayed video image by an object within the virtual environment depicted within the video image; a video playback device can augment the current video image with a graphical representation of an in-game event, responsive to the calculated position; in particular, the video playback device can decide whether or not to occlude some or all of a graphical representation of the in-game event based on whether elements of the displayed environment are currently in between the game event location and the viewpoint presented by the video; for example, the video playback device may prepare for rendering a simple polygon-based object such as a tetrahedron, acting as a pointer, and then use the Z values of the video image to perform a so-called z-culling on the tetrahedron in a final render so that the tetrahedron appears naturally embedded within the environment of the video, being occluded as suitable from the current viewpoint of the virtual camera that recorded the video image; it will be appreciated that in principle a videogame console could operate as both a video recording apparatus and a video playback apparatus, so that a user could review their own play almost immediately with the benefit of statistical event data overlaid on top; furthermore, a videogame console could in principle also operate as an event analysis server, for example analyzing historical records of play by one user, such as a professional e-sports player, to assist them in identifying trends in their play).
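The occlusion logic attributed to Spencer (comparing an event's Z coordinate against the video's recorded depth values at the event's screen position) is standard z-culling. A minimal sketch, assuming a per-pixel depth buffer where smaller values are closer to the virtual camera (names hypothetical):

```python
# Hypothetical z-culling check for an in-game event marker: the marker
# is occluded when scene geometry at its (x, y) pixel is closer to the
# camera than the event itself.

def marker_visible(depth_buffer, x, y, event_z):
    # depth_buffer[y][x] holds the scene depth recorded for that pixel;
    # smaller values are closer to the virtual camera.
    return depth_buffer[y][x] >= event_z

depth = [[5.0, 5.0],
         [2.0, 5.0]]   # toy 2x2 depth buffer

print(marker_visible(depth, 0, 1, 3.0))   # False: geometry at z=2.0 occludes event at z=3.0
print(marker_visible(depth, 1, 1, 3.0))   # True: nothing nearer at that pixel
```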
Regarding Claim 1, Spencer teaches a video processing apparatus comprising: circuitry (Paragraph 0143 teaches that hardware for the video recording apparatus may be a conventional computing device such as a PlayStation 4 operating under suitable software instruction, comprising a recording processor adapted to record a first sequence of video images output by a videogame to a video storage means, the recording processor being adapted to record a sequence of depth buffer values for a depth buffer used by the videogame, a sequence of in-game virtual camera positions used to generate the video images, and one or more in-game events and their respective in-game positions; a video generating processor adapted to generate a second sequence of video images encoding the depth buffer value sequence; and an association processor adapted to associate the in-game virtual camera position sequence with at least one of the first and second sequences of video images).

Regarding Claim 19, Spencer teaches a video processing method (Paragraph 0099 teaches, turning to FIG. 8, that the server operating as an event analyser may operate according to the following event analysis method).

Regarding Claim 20, Spencer teaches a non-transitory computer-readable storage medium having embodied thereon a program, which when executed by a computer causes the computer to execute a video processing method (Paragraph 0142 teaches that the required adaptation to existing parts of a conventional equivalent device may be implemented in the form of a computer program product comprising processor-implementable instructions stored on a non-transitory machine-readable medium such as a floppy disk, optical disk, hard disk, PROM, RAM, flash memory or any combination of these or other storage media, or realized in hardware as an ASIC (application specific integrated circuit) or an FPGA (field programmable gate array) or other configurable circuit suitable for use in adapting the conventional equivalent device).

Regarding Claim 2, Spencer teaches all the limitations of claim 1 above, and Spencer further teaches wherein the circuitry creates trigger information as the video creation information, based on the plurality of triggers (Paragraphs 0064-0065, 0068-0069, and 0075 teach recording one or more in-game events and their respective in-game positions, using a similar scheme to that for the virtual camera location and optional player location; the choice of what in-game events to record in this manner will be made by a designer, but may typically include one or more of crashes/character deaths, overtaking/beating a real or computer-based opponent, changing an in-game state of the user (e.g.
changing equipped weapons or the like, or engaging a nitrox boost in a car), and player choices (such as turning left or right to avoid an obstacle, or electing to jump over it); the data is recorded for each of a sequence of video images output by the videogame, but generally is not recorded as part of the sequence of output video images itself but instead is recorded as a parallel sequence of data with at least the depth data encoded as video images; the in-game virtual camera position sequence is associated with at least one of the first and second sequence of video images (typically the second sequence of video images); in a further optional step, an identifier for the videogame is also associated with one or both video image sequences (together with any of the optional additional information also encoded, such as player position, user choices and the like); each recorded image sequence of a videogame (video recording) may have a unique video ID, which may optionally be transmitted to the event analyser. The event data may then be transmitted to the event analyser in association with the unique video ID. Subsequently the event analyser may then optionally transmit the event data, in addition to any statistical analyses, back to a video playback device that transmits the unique video ID to it). 
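The unique-video-ID round trip cited from Spencer's paragraph 0075 amounts to a keyed store at the event analyser. A minimal sketch, with a dict standing in for the server-side store (all names hypothetical):

```python
# Hypothetical event-analyser store keyed by unique video ID, per the
# upload/playback round trip described for Spencer (para. 0075).
event_store = {}

def upload_events(video_id, events):
    # Video recording apparatus uploads event data under its video ID.
    event_store.setdefault(video_id, []).extend(events)

def fetch_events(video_id):
    # Playback device transmits the video ID and receives the event
    # data (plus, optionally, statistical analyses) back.
    return event_store.get(video_id, [])

upload_events("vid-001", [{"type": "crash", "pos": (10, 4, 2)}])
print(fetch_events("vid-001"))   # [{'type': 'crash', 'pos': (10, 4, 2)}]
```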
Regarding Claim 3, Spencer teaches all the limitations of claim 2 above, and Spencer further teaches wherein the trigger information includes a video ID for identification of each of the input videos in which each of the triggers has been detected (Paragraph 0075 teaches each recorded image sequence of a videogame (video recording) may have a unique video ID, which may optionally be transmitted to the event analyser; the event data may then be transmitted to the event analyser in association with the unique video ID; subsequently the event analyser may then optionally transmit the event data, in addition to any statistical analyses, back to a video playback device that transmits the unique video ID to it).

Regarding Claim 4, Spencer teaches all the limitations of claim 2 above, and Spencer further teaches wherein the trigger information includes trigger type information indicating a type of each of the triggers (Paragraphs 0064 and 0089 teach recording one or more in-game events and their respective in-game positions, using a similar scheme to that for the virtual camera location and optional player location; the choice of what in-game events to record in this manner will be made by a designer, but may typically include one or more of crashes/character deaths, overtaking/beating a real or computer-based opponent, changing an in-game state of the user (e.g.
changing equipped weapons or the like, or engaging a nitrox boost in a car), and player choices (such as turning left or right to avoid an obstacle, or electing to jump over it); the choice of what in-game events have been recorded may have been made by a designer and may include one or more of crashes, character deaths, overtaking or beating an opponent or indeed being overtaken or beaten by an opponent, changing the in-game state of the user, player choices and/or player inputs; augmentations based upon these events per se may be provided; however, optionally this data may be analyzed as described elsewhere herein, and data relating to this analysis may then be associated with the event location).

Regarding Claim 5, Spencer teaches all the limitations of claim 1 above, and Spencer further teaches wherein each of the triggers is a characteristic instance that has occurred in each of the input videos (Paragraphs 0088-0089 teach obtaining data indicative of a statistically significant in-game event and an in-game event position; this data is obtained from the event analyser, either as a file, or streamed to the video player during playback; the data typically comprises data indicative of the in-game event analysis data, e.g. data relating to the significance of the event and optionally other statistical data (and typically also the type of event, etc., to assist with selecting how to graphically represent the event); as noted previously, the choice of what in-game events have been recorded may have been made by a designer and may include one or more of crashes, character deaths, overtaking or beating an opponent or indeed being overtaken or beaten by an opponent, changing the in-game state of the user, player choices and/or player inputs; as noted above, augmentations based upon these events per se may be provided; however, optionally this data may be analyzed as described elsewhere herein, and data relating to this analysis may then be associated with the event location).

Regarding Claim 6, Spencer teaches all the limitations of claim 2 above, and Spencer further teaches wherein the circuitry creates event information as the video creation information based on the trigger information (Paragraphs 0075, 0100-0101, and 0110-0111 teach each recorded image sequence of a videogame (video recording) may have a unique video ID, which may optionally be transmitted to the event analyser; the event data may then be transmitted to the event analyser in association with the unique video ID; subsequently the event analyser may then optionally transmit the event data, in addition to any statistical analyses, back to a video playback device that transmits the unique video ID to it; receiving, from a plurality of videogame devices (acting as video recording apparatuses), a videogame identifier and one or more associated in-game events and their respective in-game positions; hence as described previously herein, the event analyser receives data relating to a particular videogame that identifies in-game events, and where they occur within the game, from a plurality of videogame devices; as noted previously herein, optionally the event analyser may receive any of the other supplementary data recorded by a video recording apparatus, together with a unique video recording ID; a user-generated event marker or user ID may be associated with a particular uploaded set of event data, which allows the event analyser to provide event data corresponding to specific individuals, such as players found on a user's friend list associated with their own user ID).

Regarding Claim 7, Spencer teaches all the limitations of claim 6 above, and Spencer further teaches wherein the event information includes a trigger ID for identification of a target trigger and a corresponding trigger corresponding to the target trigger, the target trigger and the corresponding trigger being any of the plurality of triggers and constituting an event (Paragraph 0064 teaches recording one or more in-game events and their respective in-game positions, using a similar scheme to that for the virtual camera location and optional player location; the choice of what in-game events to record in this manner will be made by a designer, but may typically include one or more of crashes/character deaths, overtaking/beating a real or computer-based opponent, changing an in-game state of the user (e.g. changing equipped weapons or the like, or engaging a nitrox boost in a car), and player choices (such as turning left or right to avoid an obstacle, or electing to jump over it); in this latter case, the choice may be associated with a predetermined in-game decision point that may be location based (e.g. an obstacle or path choice) or may be logical (e.g. when navigating a dialog tree with an in-game character); in the case of a location-based choice, due to user variability regarding when they respond to the choice, the choice made may be associated with the position of the in-game decision point rather than the position of the user or camera, to assist with subsequent analysis of the decision; alternatively or in addition, such a decision may be encoded when made by the user, or when the in-game decision point is at the nearest draw position with respect to the virtual camera, or at some other predetermined relationship with the virtual camera (for example within a predetermined distance) so as to provide predictability as to which video image may be associated with the choice data, or the choice data may be encoded for each image between these two moments (or similarly for any video frame where the camera and/or user avatar are within a predetermined distance of the in-game decision point); in addition to location-specific events, ongoing events may also be recorded; hence optionally for each video image, the current user input or inputs (e.g. buttons pressed, or associated input values) may also be recorded in a similar manner to provide an approximate record of the user's interactions with the game, and similarly the user's in-game position (e.g. avatar position) may be treated as an ongoing event if different from the camera position; as noted later herein, whilst this recording step typically occurs during game play and reflects events arising directly from game play, alternatively or in addition the recording step for such in-game events may occur after the video images and other data have been output, and optionally after they have been broadcast/streamed; that is to say, a viewer subsequently watching the video using a viewer compatible with the techniques herein will have sufficient information available to define their own in-game events after the fact).
Regarding Claim 8, Spencer teaches all the limitations of claim 7 above, and Spencer further teaches wherein the event information includes event type information indicating a type of an event (Paragraphs 0064 and 0073 teach recording one or more in-game events and their respective in-game positions, using a similar scheme to that for the virtual camera location and optional player location; the choice of what in-game events to record in this manner will be made by a designer, but may typically include one or more of crashes/character deaths, overtaking/beating a real or computer-based opponent, changing an in-game state of the user (e.g. changing equipped weapons or the like, or engaging a nitrox boost in a car), and player choices (such as turning left or right to avoid an obstacle, or electing to jump over it); the in-game events, their respective in-game positions, and the identifier for the videogame may then optionally be uploaded to a remote server operable as the event analyser, which receives such data from a plurality of client devices acting as video recording apparatuses, and identifies statistically significant features of the data, as described later herein).
Regarding Claim 9, Spencer teaches all the limitations of claim 7 above, and Spencer further teaches wherein the circuitry creates event scene information as the video creation information based on the event information (Paragraphs 0038-0039 teach recording a sequence of depth buffer values for a depth buffer used by the videogame; the depth buffer is used by the entertainment device when calculating which parts of a virtual scene are in front of each other and hence potentially occlude each other in the final rendered image; as such it can provide depth data for each pixel of the rendered image; an array of depth data of corresponding pixels of a rendered image can in turn be treated as a depth image; hence, for example, 8-bit or 16-bit depth values may be stored as an 8-bit or 16-bit grayscale image corresponding to the rendered image; the depth image can have the same resolution as the corresponding video image, or a reduced-resolution version can be used (e.g. 50% size, having ¼ of the pixels)).
Regarding Claim 10, Spencer teaches all the limitations of claim 9 above; and Spencer further teaches wherein the event scene information includes an event ID for identification of the event constituting an event scene (Paragraphs 0072-0074 teach optionally one or more in-game events and their respective in-game positions are associated with the identifier for the videogame; the in-game events, their respective in-game positions, and the identifier for the videogame may then optionally be uploaded to a remote server operable as the event analyser, which receives such data from a plurality of client devices acting as video recording apparatuses, and identifies statistically significant features of the data, as described later herein; the in-game events and their respective in-game positions may alternatively or in addition be encoded along with the depth buffer value sequence, the in-game virtual camera position sequence, and the identifier for the videogame within a color channel of the supplementary image sequence, thereby (also) associating them with the identifier for the videogame in this manner; this allows the specific instance of the in-game events to be associated with the specific video recording).

Regarding Claim 11, Spencer teaches all the limitations of claim 1 above; and Spencer further teaches wherein the circuitry creates statistical information as the video creation information based on the trigger information (Paragraphs 0088 and 0102-0104 teach an optional further step comprises obtaining data indicative of a statistically significant in-game event and an in-game event position; this data is obtained from the event analyser, either as a file, or streamed to the video player during playback; the data typically comprises data indicative of the in-game event analysis data, e.g. data relating to the significance of the event and optionally other statistical data (and typically also the type of event, etc., to assist with selecting how to graphically represent the event); performing an analysis on one or more aspects of the in-game events associated with a videogame identifier, and their respective in-game positions, to identify statistically significant in-game events; this may be done for example by performing a geospatial analysis of a plurality of events of a similar kind to identify hotspots, cold spots and other group statistics indicative of the behavior of a corpus of players for that kind of event, or for a particular instance of an event at a particular position; an example form of geospatial analysis may be the known Getis-Ord-Gi* statistic; this analysis evaluates features with respect to their neighbors, so that clusters of similar features gain significance with respect to a global evaluation and are thus identified as hot-spots; cold-spots may be identified in converse fashion if required).

Regarding Claim 12, Spencer teaches all the limitations of claim 3 above; and Spencer further teaches wherein the circuitry determines, as the at least one material video, each of the input videos that can be specified by the video ID included in the trigger information (Paragraphs 0121 and 0123 teach the event analysis server transmitting data indicative of the in-game event analysis data and its in-game position to the video playback apparatus; then as noted previously, the video playback apparatus can use the data to construct an augmentation layer for the video, as illustrated in FIG. 7; it will be appreciated that in principle a videogame console could operate as both a video recording apparatus and a video playback apparatus, so that a user could review their own play almost immediately with the benefit of statistical event data overlaid on top; furthermore, a videogame console could in principle also operate as an event analysis server, for example analyzing historical records of play by one user, such as a professional e-sports player, to assist them in identifying trends in their play).

Regarding Claim 13, Spencer teaches all the limitations of claim 7 above; and Spencer further teaches wherein the circuitry determines, as the at least one material video, each of the input videos that can be specified by the video ID included in the trigger information indicated by the trigger ID included in the event information (Paragraphs 0075-0078 teach each recorded image sequence of a videogame (video recording) may have a unique video ID, which may optionally be transmitted to the event analyser; the event data may then be transmitted to the event analyser in association with the unique video ID; subsequently the event analyser may then optionally transmit the event data, in addition to any statistical analyses, back to a video playback device that transmits the unique video ID to it; similarly optionally the depth buffer value sequence and/or the in-game virtual camera position sequence and any of the other optional data (such as player avatar position) could also be uploaded to the event analyser in association with the unique video ID; if all the supplementary data is uploaded in this fashion, it may be provided to the server as a parallel video recording encoded as described previously herein, or as the separate data elements for the server to encode in this manner; subsequently when a video playback device transmits the unique video ID found in a video recording, it can receive all of the supplementary data, for example as a parallel video recording encoded as described previously herein).

Regarding Claim 14, Spencer teaches all the limitations of claim 10 above; and Spencer further teaches wherein the circuitry determines, as the at least one material video, each of the input videos that can be specified by the video ID included in the event scene information (Paragraph 0146 teaches an event analyser may be a conventional computing device such as a server or a PlayStation 4 operating under suitable software instruction, comprising a receiver adapted to respectively receive, from a plurality of video recording apparatuses, a videogame identifier and one or more associated in-game events and their respective in-game positions; an analysis processor adapted to perform an analysis on one or more aspects of the in-game events associated with a videogame identifier, and their respective in-game positions, to identify statistically significant in-game events; the receiver being adapted to subsequently receive, from a video playback apparatus, a videogame identifier and at least one of an in-game virtual camera position and an in-game player position).

Regarding Claim 15, Spencer teaches all the limitations of claim 1 above; and Spencer further teaches wherein the circuitry creates the output video by using the at least one material video associated with creation of the output video (Paragraph 0146 teaches a selection processor adapted to select one or more identified statistically significant in-game events associated with the received videogame identifier and having an in-game position within a predetermined distance of at least one received position; and a transmitter adapted to transmit data indicative of the in-game event and its in-game position to the video playback device).
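The Getis-Ord Gi* hotspot analysis that Spencer cites for identifying statistically significant in-game event clusters can be sketched as follows. This is a simplified, illustrative implementation only, not code from the Spencer reference; the function name, toy event counts, and adjacency weighting are hypothetical:

```python
import math

def getis_ord_gi_star(values, weights, i):
    """Simplified Getis-Ord Gi* z-score for location i.

    values  : per-location event counts (e.g. player deaths at map positions)
    weights : weights[i][j] = spatial weight between locations i and j
              (Gi* includes the location itself, so weights[i][i] > 0)
    A large positive z-score marks a hot-spot; a negative one, a cold-spot.
    """
    n = len(values)
    xbar = sum(values) / n
    s = math.sqrt(sum(x * x for x in values) / n - xbar * xbar)
    w = weights[i]
    sw = sum(w)                                   # sum of weights at i
    swx = sum(wj * xj for wj, xj in zip(w, values))
    sw2 = sum(wj * wj for wj in w)
    denom = s * math.sqrt((n * sw2 - sw * sw) / (n - 1))
    return (swx - xbar * sw) / denom

# Toy 1-D "map" of event counts: a cluster of high values around index 1
# should score as a hot-spot relative to the sparse region around index 7.
counts = [8, 9, 7, 1, 0, 1, 0, 1, 0]
# Neighbour weights: 1 for self and adjacent indices, 0 otherwise.
W = [[1 if abs(i - j) <= 1 else 0 for j in range(9)] for i in range(9)]

z_hot = getis_ord_gi_star(counts, W, 1)   # inside the cluster
z_cold = getis_ord_gi_star(counts, W, 7)  # outside the cluster
```

Because each location is scored against its neighbours rather than in isolation, clusters of similar values gain significance against the global mean, which matches the hot-spot/cold-spot behavior the cited paragraphs describe.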
Regarding Claim 16, Spencer teaches all the limitations of claim 15 above; and Spencer further teaches wherein the circuitry arranges a plurality of material videos in one video to create the output video (Paragraph 0119 teaches in response then the event analysis server selects analysis data relating to one or more identified statistically significant in-game events associated with the received videogame identifier and having an in-game position within a predetermined distance of at least one received position (as noted above, corresponding to a level, periodic time or distance, or current or impending video frame)).

Regarding Claim 17, Spencer teaches all the limitations of claim 1 above; and Spencer further teaches wherein each of the input videos is a gameplay video output from a game machine (Paragraph 0100 teaches receiving, from a plurality of videogame devices (acting as video recording apparatuses), a videogame identifier and one or more associated in-game events and their respective in-game positions; hence as described previously herein, the event analyser receives data relating to a particular videogame that identifies in-game events, and where they occur within the game, from a plurality of videogame devices).

Regarding Claim 18, Spencer teaches all the limitations of claim 1 above; and Spencer further teaches wherein each of the input videos is a video obtained by image capture of a player playing a game (Paragraph 0037 teaches recording a first sequence of video images output by a videogame; for example, the PlayStation 4 routinely saves a video of the current video image output in a data loop that allows the last N minutes of gameplay to be stored, where N may be for example 20 minutes; subsequently, in response to a user input, an in-game event or scheduled event, such video data can also be copied into long term storage).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dutilly et al. (US 20100190555) teaches a system and method for providing dynamic recap sequences to a player of a video game. There is provided a method for use by a processor to present a recap sequence to a player of a video game, the method comprising storing, in a memory, identification data identifying the player and associated event data relating to the video game played by the player after a first game play, receiving the identification data to identify the player for a second game play, retrieving the event data from the memory, creating a prioritized event list including a number of events from the event data, generating the recap sequence based on the prioritized event list, and presenting the recap sequence to the player prior to the second game play. Creation of the prioritized event list can be customized to suit particular requirements of the video game.

White et al. (US 20210129017) teaches that a game-agnostic event detector can be used to automatically identify game events. Game-specific configuration data can be used to specify types of pre-processing to be performed on media for a game session, as well as types of detectors to be used to detect events for the game. Event data for detected events can be written to an event log in a form that is both human- and process-readable. The event data can be used for various purposes, such as to generate highlight videos or provide player performance feedback.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to COURTNEY JONES, whose telephone number is (469) 295-9137. The examiner can normally be reached 7:30 am - 4:30 pm CST, Monday through Thursday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neha Patel, can be reached at (571) 270-1492. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center; status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/COURTNEY P JONES/
Primary Examiner, Art Unit 3699

Prosecution Timeline

Feb 16, 2024: Application Filed
Nov 19, 2025: Non-Final Rejection (§101, §102, §112)
Feb 20, 2026: Response Filed
Apr 07, 2026: Final Rejection (§101, §102, §112) (current)

Precedent Cases

Applications granted by this examiner in similar technology areas

Patent 12597018: DECENTRALIZED IDENTITY-BASED COMMUNICATION SERVICE (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591894: FRAUD PREVENTION VIA BENEFICIARY ACCOUNT VALIDATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586077: SYSTEMS AND METHODS FOR END TO END ENCRYPTION UTILIZING A COMMERCE PLATFORM FOR CARD NOT PRESENT TRANSACTIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579543: HIERARCHICAL DIGITAL ISSUANCE TOKENS AND CLAIM TOKENS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572936: QR CODE PAYOR TRACKING AND REPEAT PAYMENT PREVENTION (granted Mar 10, 2026; 2y 5m to grant)
Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 90% (+23.3%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.
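As a back-of-the-envelope check, the headline figures above follow from the examiner's career numbers shown on this page (variable names are illustrative; how the page itself rounds the interview-adjusted figure is an assumption):

```python
# Figures taken from the examiner stats shown on this page.
granted, resolved = 158, 235
career_allow_rate = granted / resolved * 100       # about 67.2%, displayed as 67%
interview_lift = 23.3                              # percentage-point lift shown above
with_interview = career_allow_rate + interview_lift
# with_interview lands near 90, consistent with the "90% With Interview" figure
```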
