Prosecution Insights
Last updated: April 19, 2026
Application No. 18/618,407

SYSTEM AND METHOD FOR GENERATING PERSONALIZED VIDEO TRAILERS

Final Rejection: §103 and Obviousness-Type Double Patenting
Filed: Mar 27, 2024
Examiner: TRAN, LOI H
Art Unit: 2484
Tech Center: 2400 (Computer Networks)
Assignee: Adeia Guides Inc.
OA Round: 2 (Final)

Grant Probability: 64% (Moderate)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 394 granted / 611 resolved; +6.5% vs TC avg)
Interview Lift: +23.6% (strong), measured across resolved cases with an interview
Typical Timeline: 2y 10m avg prosecution; 25 applications currently pending
Career History: 636 total applications across all art units

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 54.9% (+14.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 611 resolved cases.
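The headline numbers above are straightforward ratios over the examiner's resolved cases. A short illustration of the arithmetic only (this is not the analytics vendor's model; the implied Tech Center figure is derived from the report's own delta):

```python
# Arithmetic behind the examiner statistics reported above.
granted, resolved = 394, 611
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # ~64.5%, displayed as 64%

# Each "vs TC avg" delta is the examiner's rate minus the Tech Center
# average estimate; e.g. for §103, 54.9% with a +14.9% delta implies
# a ~40.0% Tech Center average estimate.
examiner_103, delta_103 = 54.9, 14.9
implied_tc_avg_103 = examiner_103 - delta_103
print(f"Implied TC average for §103: {implied_tc_avg_103:.1f}%")
```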

Office Action

Rejections: §103 and Double Patenting
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to the rejections of claims 34-51 and 53 have been considered but are moot in view of new grounds of rejection.

Response to Amendment

Double Patenting

3. Claims 34-35, 38-45, and 48-51 and 53 are rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 11,276,434 in view of Chen et al. (US Publication 2016/0029106) and further in view of Liu et al. (US Patent 11,694,726).

Claim 34 of the instant application comprises elements that are not patentably distinct from claim 1 of patent no. 11,276,434, except for: receiving a request from a user to interact with a content platform, wherein the user is associated with a user profile on the content platform; automatically initiating a process for generating a personalized trailer of a video content item, the process comprising: retrieving preferences of the user profile stored in a user profile database; selecting a subset of segments of the plurality of segments having metadata that matches one or more of the retrieved preferences; transmitting the personalized trailer of the video content item for display to a device of the user.

Chen discloses: retrieving preferences of the user profile stored in a user profile database (Chen, para. 0034, receiving user input for selecting parameters for generating a pictorial summary of the video; parameters can be fixed, i.e., stored in a file or database, and not requiring selection by a user; paras. 0039-0042, parameters used in scene weighting may include a name of a primary character “James Bond”, a list of highlight actions or objects (for example, the user may principally be interested in the car chases in a movie), parameters used in evaluating pictures in the video, such as, for example, parameters selecting a measure of picture quality, and/or parameters used in selecting pictures from a scene for inclusion in the pictorial summary, such as, for example, a number of pictures to be selected per shot, to emphasize in the weighting; paras. 0148-0151, accessing and retrieving one or more parameters from a configuration guide that includes one or more parameters for configuring a pictorial summary of a video; generating, for the video, a pictorial summary based on the accessed/retrieved parameter. In at least one implementation, the pictorial summary conforms to “matches” one or more accessed parameters from the configuration guide); selecting a subset of segments of the plurality of segments having metadata that matches one or more of the retrieved preferences (Chen, paras. 0148-0151, accessing and retrieving one or more parameters from a configuration guide that includes one or more parameters for configuring a pictorial summary of a video; generating, for the video, a pictorial summary based on the accessed/retrieved parameter. In at least one implementation, the pictorial summary conforms to “matches” one or more accessed parameters from the configuration guide; paras. 0188-0197, at least one evaluation and selection system includes the functions of evaluating pictures in a video and selecting certain pictures, based on the evaluations, to include in a pictorial summary; identifying different levels of importance (weights) of a scene (or other portion of a video) by considering one or more features such as, for example, the scene position within a video, the appearance frequency of main characters, the length of the scene, and the level/amount of highlighted actions or objects in the scene; considering factors related to how “interesting” a scene, a shot, or a picture is when determining a weight or ranking, such as, for example, by considering the presence of highlight actions/words and the presence of main characters; and/or using one or more of the following factors in a hierarchical process that analyzes scenes, shots, and individual pictures in generating a pictorial summary: (i) favoring the start scene and the end scene, (ii) the appearance frequency of the main characters, (iii) the length of the scene, (iv) the level of highlighted actions or objects in the scene, or (v) an “appealing quality” factor for a picture); transmitting the personalized trailer of the video content item for display to a device of the user (Chen, para. 0100, providing the highlight video “trailer” to the user who provides user input as a parameter).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Chen’s features into the invention of patent no. 11,276,434 for enhancing the user’s playback experience by providing the user a video trailer comprising content that matches the user’s profile.

Chen does not explicitly disclose: automatically initiating a process for generating a personalized trailer of a video content item. Liu discloses: automatically initiating a process for generating a personalized trailer of a video content item (Liu, col. 14, lines 34-36, automatically generating a trailer for a movie). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Liu’s features into the invention of patent no. 11,276,434 in view of Chen for enhancing the user’s video viewing experience by automatically providing a personalized video trailer.

4. Claims 34-35, 38-45, and 48-51 and 53 are rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1-9 of U.S. Patent No. 11,972,782 in view of Chen et al. (US Publication 2016/0029106) and further in view of Liu et al. (US Patent 11,694,726).

Claim 34 of the instant application comprises elements that are not patentably distinct from claim 1 of patent no. 11,972,782, except for: receiving a request from a user to interact with a content platform, wherein the user is associated with a user profile on the content platform; automatically initiating a process for generating a personalized trailer of a video content item, the process comprising: retrieving preferences of the user profile stored in a user profile database; selecting a subset of segments of the plurality of segments having metadata that matches one or more of the retrieved preferences; transmitting the personalized trailer of the video content item for display to a device of the user.

Chen discloses: retrieving preferences of the user profile stored in a user profile database (Chen, para. 0034, receiving user input for selecting parameters for generating a pictorial summary of the video; parameters can be fixed, i.e., stored in a file or database, and not requiring selection by a user; paras. 0039-0042, parameters used in scene weighting may include a name of a primary character “James Bond”, a list of highlight actions or objects (for example, the user may principally be interested in the car chases in a movie), parameters used in evaluating pictures in the video, such as, for example, parameters selecting a measure of picture quality, and/or parameters used in selecting pictures from a scene for inclusion in the pictorial summary, such as, for example, a number of pictures to be selected per shot,
to emphasize in the weighting; paras. 0148-0151, accessing and retrieving one or more parameters from a configuration guide that includes one or more parameters for configuring a pictorial summary of a video; generating, for the video, a pictorial summary based on the accessed/retrieved parameter. In at least one implementation, the pictorial summary conforms to “matches” one or more accessed parameters from the configuration guide); selecting a subset of segments of the plurality of segments having metadata that matches one or more of the retrieved preferences (Chen, paras. 0148-0151, accessing and retrieving one or more parameters from a configuration guide that includes one or more parameters for configuring a pictorial summary of a video; generating, for the video, a pictorial summary based on the accessed/retrieved parameter. In at least one implementation, the pictorial summary conforms to “matches” one or more accessed parameters from the configuration guide; paras. 0188-0197, at least one evaluation and selection system includes the functions of evaluating pictures in a video and selecting certain pictures, based on the evaluations, to include in a pictorial summary; identifying different levels of importance (weights) of a scene (or other portion of a video) by considering one or more features such as, for example, the scene position within a video, the appearance frequency of main characters, the length of the scene, and the level/amount of highlighted actions or objects in the scene; considering factors related to how “interesting” a scene, a shot, or a picture is when determining a weight or ranking, such as, for example, by considering the presence of highlight actions/words and the presence of main characters; and/or using one or more of the following factors in a hierarchical process that analyzes scenes, shots, and individual pictures in generating a pictorial summary: (i) favoring the start scene and the end scene, (ii) the appearance frequency of the main characters, (iii) the length of the scene, (iv) the level of highlighted actions or objects in the scene, or (v) an “appealing quality” factor for a picture); transmitting the personalized trailer of the video content item for display to a device of the user (Chen, para. 0100, providing the highlight video “trailer” to the user who provides user input as a parameter).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Chen’s features into the invention of patent no. 11,972,782 for enhancing the user’s playback experience by providing the user a video trailer comprising content that matches the user’s profile.

Chen does not explicitly disclose: automatically initiating a process for generating a personalized trailer of a video content item. Liu discloses: automatically initiating a process for generating a personalized trailer of a video content item (Liu, col. 14, lines 34-36, automatically generating a trailer for a movie). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Liu’s features into the invention of patent no. 11,972,782 in view of Chen for enhancing the user’s video viewing experience by automatically providing a personalized video trailer.

Claim Rejections - 35 USC § 103

5. The text of those sections of Title 35, U.S. Code not included in this section can be found in a prior Office action.

6. Claims 34-37 and 44-47 are rejected under AIA 35 U.S.C. 103 as unpatentable over Chen et al. (US Publication 2016/0029106) in view of Liu et al. (US Patent 11,694,726).

Regarding claim 34, Chen discloses a computer-implemented method, comprising: receiving a request from a user to interact with a content platform, wherein the user is associated with a user profile on the content platform (Chen, paras. 0148-0150, receiving user input for accessing a video with an input screen; para.
0034, user interest or preference can be obtained from a fixed parameter profile or user input); initiating a process for generating a personalized trailer of a video content item, the process comprising: retrieving preferences of the user profile stored in a user profile database (Chen, para. 0034, receiving user input for selecting parameters for generating a pictorial summary of the video; parameters can be fixed, i.e., stored in a file or database, and not requiring selection by a user; paras. 0039-0042, parameters used in scene weighting may include a name of a primary character “James Bond”, a list of highlight actions or objects (for example, the user may principally be interested in the car chases in a movie), parameters used in evaluating pictures in the video, such as, for example, parameters selecting a measure of picture quality, and/or parameters used in selecting pictures from a scene for inclusion in the pictorial summary, such as, for example, a number of pictures to be selected per shot, to emphasize in the weighting; paras. 0148-0151, accessing and retrieving one or more parameters from a configuration guide that includes one or more parameters for configuring a pictorial summary of a video; generating, for the video, a pictorial summary based on the accessed/retrieved parameter. In at least one implementation, the pictorial summary conforms to “matches” one or more accessed parameters from the configuration guide); identifying a plurality of segments of the video content item (Chen, paras. 0046-0054, obtaining one or more portions of the video, such as shots or groups of scenes, i.e., segments in the video, for parameter weighting); identifying, for each respective segment of the plurality of segments of the video content item, metadata related to the respective segment of the video content item (Chen, paras. 0046-0054, determining the weight of a scene based on the starting scene and/or the ending scene in the video, appearance frequency of main characters in the scene, i.e., identifying metadata relating to plot structure associated with a scene/segment or metadata relating to the appearance of one or more objects in the scene/segment); selecting a subset of segments of the plurality of segments having metadata that matches one or more of the retrieved preferences (Chen, paras. 0148-0151, accessing and retrieving one or more parameters from a configuration guide that includes one or more parameters for configuring a pictorial summary of a video; generating, for the video, a pictorial summary based on the accessed/retrieved parameter. In at least one implementation, the pictorial summary conforms to “matches” one or more accessed parameters from the configuration guide; paras. 0188-0197, at least one evaluation and selection system includes the functions of evaluating pictures in a video and selecting certain pictures, based on the evaluations, to include in a pictorial summary; identifying different levels of importance (weights) of a scene (or other portion of a video) by considering one or more features such as, for example, the scene position within a video, the appearance frequency of main characters, the length of the scene, and the level/amount of highlighted actions or objects in the scene; considering factors related to how “interesting” a scene, a shot, or a picture is when determining a weight or ranking, such as, for example, by considering the presence of highlight actions/words and the presence of main characters; and/or using one or more of the following factors in a hierarchical process that analyzes scenes, shots, and individual pictures in generating a pictorial summary: (i) favoring the start scene and the end scene, (ii) the appearance frequency of the main characters, (iii) the length of the scene, (iv) the level of highlighted actions or objects in the scene, or (v) an “appealing quality” factor for a picture); generating the personalized trailer of the video content by arranging the selected subset of segments (Chen, paras. 0063-0064, assigning weights to scenes except for the start scene and the end scene, which are given different weights; para. 0069, the weights of the start scene and end scene are given the highest values in order to increase the representation of the start scene and the end scene in the pictorial summary. This is done because the start scene and the end scene are typically important in the narration of the video; paras. 0148-0151, generating, for the video, a pictorial summary “a trailer” based on the accessed parameter. In at least one implementation, the pictorial summary conforms to “matches” one or more accessed parameters from the configuration guide); and transmitting the personalized trailer of the video content item for display to a device of the user (Chen, para. 0100, providing the highlight video “trailer” to the user who provides user input as a parameter).
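The element mapping above amounts to a simple pipeline: retrieve the profile's preferences, tag each segment with metadata, keep the segments whose metadata matches a preference, and arrange the kept segments with the start and end scenes weighted highest. A minimal sketch of that pipeline follows; every name, data shape, and weighting constant is a hypothetical illustration, not code from the application or from Chen:

```python
# Hypothetical sketch of the claimed trailer pipeline; the data shapes
# and weighting constants are illustrative assumptions only.
def generate_trailer(segments, preferences):
    """segments: dicts with 'id', 'position', and 'metadata' (a set of tags).
    preferences: set of tags retrieved from the user profile."""
    # Select the subset of segments whose metadata matches a preference.
    subset = [s for s in segments if s["metadata"] & preferences]
    # Weight the start and end scenes highest (cf. Chen paras. 0063-0069),
    # then arrange by weight and original position; constants are toy values.
    last = max(s["position"] for s in segments)
    def weight(s):
        return 2.0 if s["position"] in (0, last) else 1.0
    subset.sort(key=lambda s: (-weight(s), s["position"]))
    return [s["id"] for s in subset]

segments = [
    {"id": "intro",  "position": 0, "metadata": {"car-chase"}},
    {"id": "mid",    "position": 1, "metadata": {"romance"}},
    {"id": "finale", "position": 2, "metadata": {"car-chase", "bond"}},
]
print(generate_trailer(segments, {"car-chase"}))  # ['intro', 'finale']
```

Chen's actual hierarchical process also scores shots and individual pictures; this sketch collapses that hierarchy into a single per-segment weight.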
Chen does not explicitly disclose: automatically initiating a process for generating a personalized trailer of a video content item. Liu discloses: automatically initiating a process for generating a personalized trailer of a video content item (Liu, col. 14, lines 34-36, automatically generating a trailer for a movie). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Liu’s features into Chen’s invention for enhancing the user’s video viewing experience by automatically providing a personalized trailer.

Regarding claim 35, Chen-Liu discloses the method of claim 34, wherein: the metadata related to the respective segment comprises one or more objects appearing in the respective segment; and the retrieved preferences indicate a preference for at least one of the one or more objects (Chen, paras. 0046-0054, determining the weight of a scene based on determining the appearance frequency of main characters in the scene, i.e., identifying metadata relating to the appearance of one or more objects in the scene/segment).

Regarding claim 36, Chen-Liu discloses the method of claim 34, wherein: the metadata related to the respective segment comprises one or more actors or actresses that appear in the respective segment; and the retrieved preferences indicate a preference for at least one of the one or more actors or actresses (Chen, paras. 0046-0054, determining the weight of a scene based on determining the appearance of main characters in the scene, i.e., identifying metadata relating to the appearance of one or more actors or actresses in the scene/segment).

Regarding claim 37, Chen-Liu discloses the method of claim 34, wherein the metadata related to the respective segment comprises at least one of (i) a genre of the respective segment, (ii) an event of the respective segment, or (iii) audio of the respective segment (Chen, para. 0193, identifying different levels of importance (weights) of a scene (or other portion of a video) by considering one or more features such as, for example, the scene position within a video, the appearance frequency of main characters, the length of the scene, and the highlighted actions or objects in the scene; Liu, col. 2, lines 16-22 and col. 11, lines 11-14, identifying a genre category for media segments). The obviousness arguments and motivation to combine the references are the same as for claim 34.

Claims 44-47 are rejected for the same reasons set forth in claims 34-37.

7. Claims 38-40 and 48-50 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Chen-Liu, as applied to claims 34 and 44 above, in view of Tian et al. (English Translation of Chinese Publication CN110347872, 2019).

Regarding claim 38, Chen-Liu discloses the method of claim 34, further comprising identifying a plot element label for each respective segment of the subset of segments, wherein the arranging of the segments of the subset of segments is based at least in part on the plot element labels (Chen, paras. 0063-0064, assigning weights to scenes except for the start/intro scene and the end/conclusion scene “plot element labels”, which are given different weights; para. 0069, the weights of the start scene and end scene are given the highest values in order to increase the representation of the start scene and the end scene in the pictorial summary. This is done because the start scene and the end scene are typically important in the narration of the video). Chen-Liu does not explicitly disclose, but Tian discloses, assigning a plot element label to each respective segment of the subset of segments, wherein the arranging of the segments of the subset of segments is based at least in part on the plot element labels (Tian, paras. 0129-0131, generating a highlight video “trailer” arranged in order of start stage, middle stage, and end stage).
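Tian's staged arrangement can be pictured as labeling each selected segment start, middle, or end and then ordering by stage. In the sketch below, the per-stage scores come from a toy positional rule standing in for Tian's trained second discriminant model (which outputs per-stage probability values from segment features); all names here are hypothetical illustrations:

```python
# Toy stand-in for Tian's second discriminant model: it scores each
# segment position against the start/middle/end plot stages.
STAGE_ORDER = {"start": 0, "middle": 1, "end": 2}

def stage_probabilities(position, total):
    """Toy positional scores; a real model would be trained on features."""
    frac = position / max(total - 1, 1)
    return {"start": 1 - frac,
            "middle": 1 - abs(frac - 0.5) * 2,
            "end": frac}

def label(position, total):
    # Label a segment with its highest-scoring plot stage.
    probs = stage_probabilities(position, total)
    return max(probs, key=probs.get)

def arrange_by_plot_stage(positions, total):
    # Order selected segment positions start -> middle -> end.
    return sorted(positions, key=lambda p: (STAGE_ORDER[label(p, total)], p))

print(arrange_by_plot_stage([4, 0, 2], total=5))  # [0, 2, 4]
```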
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Tian’s features into the Chen-Liu invention for enhancing the user’s playback experience by organizing segments of the trailer according to plot elements of the segments.

Claim 48 is rejected for the same reasons set forth in claim 38.

Regarding claim 39, Chen-Liu-Tian discloses the method of claim 38, comprising training a machine learning model to accept as input the metadata related to each respective segment and output the plot element label for each respective segment (Tian, paras. 0129-0132, the comprehensive feature information “metadata” of all the moments “segments” is input into the second discriminant model, so as to determine the probability value of each moment “segment” belonging to the start stage, the middle stage, or the end stage of the highlights, i.e., the second discriminant model can be trained to receive metadata related to the moments or segments and output the start stage, the middle stage, or the end stage “the plot element labels” for the moments or segments of the highlights “trailer”). The motivation to combine the references and obviousness arguments are the same as for claim 38.

Regarding claim 40, Chen-Liu-Tian discloses the method of claim 39, wherein the assigning further comprises: identifying one or more audible phrases of each respective segment; determining a corresponding one or more times at which each of the one or more identified audible phrases occurs within the video content item; and providing the one or more identified audible phrases and the corresponding one or more times as inputs of the trained machine learning model, to determine the plot element label (Chen, para. 0158, analyzing audio that has been turned into text using voice recognition software; para.
0053, by counting the occurrences of the name “Tom”, the system can determine the time(s) and the number of times “Tom” appears in the Scene Description 220 text or the Speaking Character 230 text; Tian, para. 0130, the comprehensive feature information of all the moments or times is input into the second discriminant model, so as to determine, through the second discriminant model, the probability value of each moment belonging to the start stage, the middle stage, or the end stage of the highlight clip; put differently, the model can receive as input comprehensive feature information of all the moments or times and determine a start stage, an end stage, or a middle stage, i.e., plot element labels for the highlight portion of each moment or time; see also Wang et al., English Translation of Chinese Publication CN108986785, Dec. 2018, paras. 0100-0103, in order to extract the main plot of the original story text, it is necessary to analyze the original story text and extract the main plot based on the plot elements contained in the original story text. The process of information extraction from the original story text mainly includes three parts: text preprocessing, character recognition, and plot element extraction. The original story text is segmented on a sentence-by-sentence basis using existing or future word segmentation methods, and each word obtained by word segmentation is tagged with part-of-speech; character identification is then performed. Specifically, based on the above-mentioned part-of-speech tagging results, a grammatical analyzer or parser can be used to extract the subject of each sentence in the original story text (whose part of speech is generally noun), and then the improved named entity rule model can be used to match the character name, that is, to determine whether the extracted subject is the character name in the original story text. If the subject matches the character name, it may be the character name in the original story text. The improved named entity rule model is based on rule and statistical methods “models”; it trains on and learns named entities that appear in a large number of pre-collected novels and story corpora. Storyline elements in the original story text can be extracted based on the above-mentioned part-of-speech tagging results, wherein storyline elements refer to the time, place, tasks, props (whose part of speech is generally noun) and actions (whose part of speech is generally verb) in the original story text where the story takes place. The original story text can be segmented using a word segmentation method to obtain each word in the original story text. Then, words that can represent the main storyline elements are selected to summarize the main storyline of the original story text. For example, each word can be selected based on importance, such as selecting words with relatively high frequency of occurrence as words representing the main storyline elements). The motivation to combine the references and obviousness arguments are the same as for claim 38.

Claims 48-50 are rejected for the same reasons set forth in claims 38-40.

8. Claims 41 and 51 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Chen-Liu, as applied to claims 34 and 44 above, in view of Sharifi (US Publication 2015/0039646).

Regarding claim 41, Chen-Liu discloses the method of claim 34, comprising generating the personalized trailer of the video content item. Chen-Liu does not explicitly disclose, but Sharifi discloses, selecting an audio track of the video content item, wherein a duration of the audio track corresponds to a duration of the generated personalized trailer of the video content item; and designating the selected audio track as an audio track of the generated personalized trailer of the video content item (Sharifi, para.
0084, selecting an audio track, from candidate audio tracks, that corresponds to a time period of a movie). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Sharifi’s technique into the Chen-Liu invention for enhancing the user’s playback experience by effectively time-synchronizing video and audio content.

Claim 51 is rejected for the same reasons set forth in claim 41.

9. Claims 43 and 53 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Chen-Liu, as applied to claims 34 and 44 above, in view of Dubey et al. (US Patent 12,231,745) or Sardar et al. (US Publication 2021/0011939).

Regarding claim 43, Chen-Liu discloses the method of claim 34, comprising generating the personalized trailer of the video content item. Chen-Liu does not explicitly disclose, but Dubey and Sardar each discloses, identifying the personalized trailer of the video content item as a first trailer; retrieving a second trailer from a database, the second trailer being different from the first trailer; determining a template for a preferred order of segments that matches an order of segments in the second trailer; and arranging the segments of the first trailer according to the template for the preferred order of segments (Dubey, col. 10, line 58 through col. 11, line 7, as popular quotes change over time, the trailer for the video content may be periodically updated; accordingly, as different quotes become popular over time, a past trailer with previously popular quotes may be replaced with a new trailer with currently popular quotes in the same order of arrangement; Sardar, paras. 0037-0039, applying a model to a particular subset of games; receiving, from the model, score data that identifies the top recommended games and an order thereof; and generating a video comprising trailers of these recommended games in the defined order; as a result, the generated trailer comprises an arrangement of recommended previous trailers). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Sardar’s technique or Dubey’s technique into the Chen-Liu invention for effectively enhancing video content promotion by generating a new video trailer having updated material in the same order of arrangement as a previous trailer, or by generating a composite trailer comprising previously arranged trailers.

Claim 53 is rejected for the same reasons set forth in claim 43.

10. Claim 54 is rejected under AIA 35 U.S.C. 103 as unpatentable over Chen et al. (US Publication 2016/0029106) in view of Liu et al. (US Patent 11,694,726), and further in view of Gaur et al. (US Publication 2020/0275158).

Regarding claim 54, Chen discloses a computer-implemented method, comprising: receiving a request from a user to interact with a content platform, wherein the user is associated with a user profile on the content platform (Chen, paras. 0148-0150, receiving user input for accessing a video with an input screen; para.
0034, user interest or preference can be obtained from fixed parameter profile or user input); identifying a plurality of segments of a video content item (Chen, para’s 0046-0054, obtaining one or more portions of the video, such as, shots, or groups of scenes, i.e., segments in the video, for parameter weighting); identifying, for each respective segment of the plurality of segments of the video content item, metadata related to the respective segment of the video content item (Chen, para’s 0046-0054, determining the weight of a scene based on the starting scene and/or the ending scene in the video, appearance frequency of main characters in the scene, i.e., identifying metadata relating to plot structure associated with a scene/segment or metadata relating to the appearance of one or more objects in the scene/segment); selecting a subset of segments of the plurality of segments having metadata that matches one or more preferences of the user profile (Chen, para’s 0148-0151, accessing and retrieving one or more parameters from a configuration guide that includes one or more parameters for configuring a pictorial summary of a video; generating, for the video, a pictorial summary based on the accessed/retrieved parameter In at least one implementation, wherein the pictorial summary conforms to “matches” one or more accessed parameters from the configuration guide; para’s 0188-0197, at least one evaluation and selection system includes the functions of evaluating pictures in a video and selecting certain pictures, based on the evaluations, to include in a pictorial summary; identifying different levels of importance (weights) to a scene (or other portion of a video) by considering one or more features such as, for example, the scene position within a video, the appearance frequency of main characters, the length of the scene, and the level/amount of highlighted actions or objects in the scene, considering factors related to how “interesting” a scene, a shot, or a 
picture is, when determining a weight or ranking, such as, for example, by considering the presence of highlight actions/words and the presence of main characters, and/or using one or more of the following factors in a hierarchical process that analyzes scenes, shots, and individual pictures in generating a pictorial summary: (i) favoring the start scene and the end scene, (ii) the appearance frequency of the main characters, (iii) the length of the scene, (iv) the level of highlighted actions or objects in the scene, or (v) an “appealing quality” factor for a picture);

generating a trailer of the video content item by arranging the selected subset of segments (Chen, para’s 0063-0064, assigning weights to scenes, except that the start scene and the end scene are given different weights; para. 0069, the weights of the start scene and the end scene are given the highest values in order to increase the representation of the start scene and the end scene in the pictorial summary, because the start scene and the end scene are typically important in the narration of the video; para’s 0148-0151, generating, for the video, a pictorial summary, i.e., a “trailer,” based on the accessed parameters; in at least one implementation, the pictorial summary conforms to, i.e., “matches,” one or more accessed parameters from the configuration guide); and

transmitting the trailer of the video content item for display to a device of the user (Chen, para. 0100, providing the highlight video “trailer” to the user who provides user input as a parameter).

Chen does not explicitly disclose: wherein metadata for a first segment of the plurality of segments comprises an indication of a genre of the first segment; wherein the selecting of the subset of segments comprises selecting the first segment based on the user profile indicating a preference for the genre of the first segment.
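As a rough illustration of the hierarchical scene-weighting approach the examiner attributes to Chen (favoring the start and end scenes, and factoring in main-character frequency, scene length, and highlight level), a minimal sketch follows. The class names, fields, weight values, and scoring formula are hypothetical illustrations, not Chen's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Scene:
    index: int              # position of the scene within the video
    length_s: float         # scene length in seconds
    main_char_freq: float   # 0..1, appearance frequency of main characters
    highlight_level: float  # 0..1, level of highlighted actions/objects


def scene_weight(scene: Scene, total_scenes: int) -> float:
    """Hypothetical weighting: start and end scenes get the highest base
    weight (per the stated preference); the other factors are additive."""
    base = 2.0 if scene.index in (0, total_scenes - 1) else 1.0
    length_factor = min(scene.length_s / 60.0, 1.0)  # cap long scenes
    return base + scene.main_char_freq + scene.highlight_level + length_factor


def select_for_summary(scenes: list[Scene], top_n: int) -> list[Scene]:
    """Rank scenes by weight, keep the top N, then restore video order
    so the pictorial summary preserves the original narration."""
    ranked = sorted(scenes, key=lambda s: scene_weight(s, len(scenes)), reverse=True)
    return sorted(ranked[:top_n], key=lambda s: s.index)
```

Note that the start/end boost means those scenes survive the cut even when their other factors are modest, which matches the rationale the Office Action quotes from Chen's para. 0069.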
Liu discloses wherein metadata for a first segment of the plurality of segments comprises an indication of a genre of the first segment (Liu, col. 2, lines 16-22 and col. 11, lines 11-14, identifying a genre category for media segments). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Liu’s features into Chen’s invention for enhancing the user’s video viewing experience by automatically providing a personalized trailer comprising segment metadata indicating a media genre.

Chen-Liu does not explicitly disclose, but Gaur discloses, wherein the selecting of the subset of segments comprises selecting the first segment based on the user profile indicating a preference for the genre of the first segment (Gaur, para. 0052, recommending any content items 201 determined to have one or more scenes associated with the matching genre; for example, if the genre map 202 for a particular content item 201 includes at least one action scene, the recommendation module 252 may recommend the content item 201 to a user determined to have a preference for the action genre). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Gaur’s features into the Chen-Liu invention for enhancing the user’s video viewing experience by providing content items determined to have a genre matching the user’s genre preference.

Conclusion

11. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

12. Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOI H TRAN, whose telephone number is (571) 270-5645. The examiner can normally be reached 8:00 AM-5:00 PM PST, first Friday of each biweek off.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, THAI TRAN, can be reached at 571-272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LOI H TRAN/
Primary Examiner, Art Unit 2484
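To make the claimed flow of claim 54 concrete, the recited steps (retrieve profile preferences, match segment metadata including genre against them, arrange the matching subset into a trailer) can be sketched as below. All names, data shapes, and the matching rule are hypothetical illustrations, not drawn from the application's actual disclosure.

```python
from dataclasses import dataclass


@dataclass
class Segment:
    start_s: float   # segment start time in the source video
    end_s: float     # segment end time
    metadata: dict   # e.g. {"genre": "action", "characters": [...]}


@dataclass
class UserProfile:
    user_id: str
    preferences: dict  # e.g. {"genres": {"action", "comedy"}}


def generate_personalized_trailer(segments: list[Segment],
                                  profile: UserProfile) -> list[Segment]:
    """Select segments whose genre metadata matches a genre preference in
    the user profile, then arrange them in content order as the trailer."""
    preferred = profile.preferences.get("genres", set())
    matched = [s for s in segments if s.metadata.get("genre") in preferred]
    return sorted(matched, key=lambda s: s.start_s)
```

On this reading, the Gaur-mapped limitation reduces to the genre-membership test in the list comprehension, while the arranging step is the final sort.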

Prosecution Timeline

Mar 27, 2024: Application Filed
Aug 27, 2025: Non-Final Rejection (§103, §DP)
Dec 29, 2025: Response Filed
Mar 04, 2026: Final Rejection (§103, §DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598366
CONTENT DATA PROCESSING METHOD AND CONTENT DATA PROCESSING APPARATUS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12593112
METHOD, DEVICE, AND COMPUTER PROGRAM FOR ENCAPSULATING REGION ANNOTATIONS IN MEDIA TRACKS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12592261
VIDEO EDITING METHOD AND APPARATUS, AND DEVICE AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12576798
CAMERA SYSTEM AND ASSISTANCE SYSTEM FOR A VEHICLE AND A METHOD FOR OPERATING A CAMERA SYSTEM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579810
SYSTEM AND METHOD FOR AUTOMATIC EVENTS IDENTIFICATION ON VIDEO
Granted Mar 17, 2026 (2y 5m to grant)
Based on the 5 most recent grants by this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 64%
With Interview: 88% (+23.6%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 611 resolved cases by this examiner. Grant probability derived from career allow rate.
