Prosecution Insights
Last updated: April 18, 2026
Application No. 18/533,637

METHODS AND SYSTEMS FOR GENERATING MEME CONTENT

Final Rejection: §103 / Double Patenting
Filed: Dec 08, 2023
Examiner: SMITH, STEPHEN R
Art Unit: 2484
Tech Center: 2400 — Computer Networks
Assignee: Adeia Guides Inc.
OA Round: 4 (Final)
Grant Probability: 71% (Favorable)
Projected OA Rounds: 5-6
Projected Time to Grant: 2y 7m
Grant Probability With Interview: 82%

Examiner Intelligence

Career Allow Rate: 71%, above average (306 granted / 433 resolved; +12.7% vs TC avg)
Interview Lift: +11.2%, a moderate lift, based on resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 13 applications currently pending
Career History: 446 total applications across all art units

Statute-Specific Performance

§101: 4.4% (-35.6% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 23.7% (-16.3% vs TC avg)
§112: 4.8% (-35.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 433 resolved cases.

Office Action

Rejection grounds: §103, nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 41 and 51 have been fully considered but are moot because the arguments do not directly apply to the new combination of references being used in the current rejection. Applicant's indication to postpone prosecution of the double patenting rejections is acknowledged.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 41, 45-46, 48-51, 55-56 and 58-60 of the instant application are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over US Pat. 11881233 B2 in view of US 20190200090 A1 to Merced et al. ("Merced"), further in view of US 20130326406 A1 to Reiley et al. ("Reiley"), and further in view of US 20200097731 A1 to Gupta et al. ("Gupta"). Although the claims at issue are not identical, they are not patentably distinct from each other, as shown below:

Instant Application, claim 41 (Currently Amended):

A method comprising: receiving metadata associated with a content item, wherein the metadata indicates that the content item is trending; tagging the content item with a first tag based at least in part on the metadata; receiving, from a content provider, the content item having the first tag, at a user device; providing the content item for playback at the user device; storing in a user profile, a user preference based at least in part on an input received via the user device relating to a playback operation of the content item and detecting one or more user responses during playback of the content item, wherein the one or more user responses comprises a rewind, a segment skip or a pause; tagging the content item with a second tag based at least in part on the user preference; identifying a segment of the content item that is of interest to a user based at least in part on the first tag and the second tag; generating for display a progress bar having a user-selectable non-textual display element corresponding to a position of the identified segment in the content item, wherein a visual presentation of the user-selectable non-textual display element is based at least in part on the metadata, wherein the visual presentation of the user-selectable non-textual display element is associated with a time stamp indicated by the progress bar, and wherein a position of the visual presentation of the user-selectable non-textual display element is perpendicular to the progress bar and is aligned with the time stamp; and based at least in part on determining that the identified segment does not contain one or more third tags: storing the identified segment in a memory; accessing the memory; and providing the identified segment for editing at the user device.

US 11881233 B2, claim 1:

A method for generating meme content comprising: tagging a content item with one or more first tags and one or more third tags based on metadata for the content item; receiving at user equipment the content item having the one or more first tags and the one or more third tags; subsequently tagging the content item with one or more second tags at the user equipment based on a user profile, wherein the one or more second tags are added to the content item concurrently with delivery of the content item to the user equipment; identifying a segment of the content item based on the first and second tags; determining whether the segment of the content item includes the one or more third tags; in response to determining the segment of the content item does not include the one or more third tags, storing the identified segment for use in generating meme content, wherein meme content comprises a message, and wherein the metadata for the content item indicates a popularity of memes that have been generated based on the segment of the content item; and in response to determining the segment of the content item includes the one or more third tags, preventing storage of the identified segment for use in generating meme content based on the one or more third tags indicating restricted content.
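For readers parsing the claim language, the storage-gating logic common to both claims (metadata-derived first and third tags, profile-derived second tags, and restricted "third" tags that block storage of the identified segment) can be sketched as a toy model. All names and data below are hypothetical illustrations, not drawn from the patent or the application:

```python
from dataclasses import dataclass, field

# Toy model of the claimed flow: first/third tags come from content
# metadata, second tags from the user profile, and a segment is stored
# for meme generation only if it carries no restricted ("third") tag.
# Every identifier and tag value here is illustrative, not from the patent.

@dataclass
class Segment:
    start: float
    end: float
    tags: set = field(default_factory=set)

def identify_segment(content_tags, profile_tags, segments):
    """Return the first segment matching both a metadata (first) tag
    and a user-profile (second) tag, mirroring the claimed identification step."""
    for seg in segments:
        if seg.tags & content_tags and seg.tags & profile_tags:
            return seg
    return None

def store_for_meme(segment, restricted_tags):
    """Store the identified segment only when no restricted (third) tag is present."""
    if segment is None or segment.tags & restricted_tags:
        return None  # storage prevented: restricted content
    return segment

segments = [
    Segment(0.0, 10.0, {"trending", "violent"}),
    Segment(10.0, 20.0, {"trending", "comedy"}),
]
pick = identify_segment({"trending"}, {"comedy"}, segments)
stored = store_for_meme(pick, {"violent"})
```

In both claims the operative condition is the same: a segment matching the first and second tags is stored for meme generation only when the third (restricted) tag is absent.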
Regarding claim 41, claim 1 of US 11881233 B2 recites all the limitations of claim 41, as shown above, except for the features: "wherein the metadata indicates that the content item is trending," "wherein the one or more user responses comprises a rewind, a segment skip or a pause," and "generating for display a progress bar having a user-selectable non-textual display element corresponding to a position of the identified segment in the content item, wherein a visual presentation of the user-selectable non-textual display element is based at least in part on the metadata, wherein the visual presentation of the user-selectable non-textual display element is associated with a time stamp indicated by the progress bar, and wherein a position of the visual presentation of the user-selectable non-textual display element is perpendicular to the progress bar and is aligned with the time stamp." In analogous art, Merced, Reiley and Gupta disclose the above features, respectively, as examined regarding the 35 USC § 103 rejection of claim 41 below.

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the method of claim 1 of the patent to include information indicating the popularity/trendiness of media segments in order to identify media segments of interest to a user (Merced, par. [0001]), to store user preferences based on user input such as rewinding in order to automatically provide personalized content without explicit user request (Reiley, par. [0044]), and to enable the user to easily perform actions on key moments (Gupta, par. [0141]). Therefore, claim 41 is rejected on the ground of nonstatutory obviousness-type double patenting.

Regarding claim 45, the additional limitations are recited in claim 6 of the patent.
Regarding claim 46, claim 6 of the patent fails to recite "indicate an exact portion of a scene that contains a preferred actor of the user and to which one or more other users had a positive response," but the combination of Drake and Reiley teaches the limitation as examined regarding the 35 USC § 103 rejection of claim 46 below. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the method of claim 6 in view of the cited teachings of Drake and Reiley because Drake teaches that user preferences may incorporate information about certain actors in a scene (Drake, par. [0031]), and because Reiley teaches identifying particular segments of media items which will likely be of interest to other users, thereby identifying content that is more interesting to the user (Reiley, par. [0053]).

Regarding claim 48, the additional limitations are recited in claim 3 of the patent. Regarding claim 49, the additional limitations are recited in claim 1 of the patent. Regarding claim 50, the additional limitations are recited in claim 1 of the patent.

Regarding claim 51, the system is rejected on the ground of nonstatutory obviousness-type double patenting based on the same rationale applied to the method of claim 41, and because Gupta further discloses a system embodiment comprising a memory and control circuitry (par. [0015], [0047]) applicable to the associated method of claim 41. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to recognize that the method of claim 1 of the patent could be implemented as a system, such as taught by Gupta, as a matter of design choice representing, for example, cost vs. efficiency tradeoffs. Therefore, claim 51 is rejected on the ground of nonstatutory obviousness-type double patenting.
Regarding claims 55-56 and 58-60, the system is rejected on the ground of nonstatutory obviousness-type double patenting based on the same rationale applied to the method of claims 45-46 and 48-50, respectively.

Claims 42 and 52 of the instant application are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over US 11881233 B2 in view of Merced, Reiley and Gupta, further in view of US 20180330756 A1 to MacDonald.

Regarding claim 42, claim 1 of the patent fails to recite "notifying the user of a restriction associated with the identified segment," but in analogous art, MacDonald teaches the limitation as examined regarding the 35 USC § 103 rejection of claim 42 below. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the method of claim 1 of the patent to enable notifying the user of a restriction associated with the identified segment in order to allow for the digital rights management of the original video and the newly created video, through the database structure and through metadata tags inserted into the new composite videos (MacDonald, par. [0002]). Regarding claim 52, the system is rejected on the ground of nonstatutory obviousness-type double patenting based on the same rationale applied to the method of claim 42.

Claims 43-44 and 53-54 of the instant application are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over US 11881233 B2 in view of Merced, Reiley and Gupta, further in view of US 20160373817 A1 to Drake et al. ("Drake").

Regarding claim 43, claim 1 of the patent fails to recite "wherein the first tag indicates a particular frame or scene of the content item," but in analogous art, Drake teaches the limitation as examined regarding the 35 USC § 103 rejection of claim 43 below.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the method of claim 1 of the patent to associate metadata with a particular frame or scene of the content item in order to, for example, "provide a mechanism for creating a thematic thread(s) within the storyline or narrative of a media content" (Drake, par. [0028]).

Regarding claim 44, claim 1 of the patent recites third tags for restricting content, but fails to recite the restrictions being based on (1) an age of the user, (2) a number indicating how many times a meme has been generated based at least in part on the identified segment of the content item, or (3) a genre of the content item. In analogous art, Drake teaches limitation (3) as examined regarding the 35 USC § 103 rejection of claim 44 below. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the method of claim 1 of the patent to restrict content based on genre, such as scary or violent scenes, as taught by Drake, to better satisfy user preferences. Regarding claims 53-54, the system is rejected on the ground of nonstatutory obviousness-type double patenting based on the same rationale applied to the method of claims 43-44, respectively.

Claims 47 and 57 of the instant application are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over US 11881233 B2 in view of Merced, Reiley and Gupta, further in view of US 20210350139 A1 to Pardeshi et al. ("Pardeshi").

Regarding claim 47, claim 1 of the patent fails to recite "monitoring one or more physical characteristics of the user based on a user's physical responses while consuming the content item; and logging the user's physical responses with a time stamp," but Pardeshi teaches the limitation as examined regarding the 35 USC § 103 rejection of claim 47 below.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the method of claim 1 of the patent in view of the cited teachings of Pardeshi in order to accurately determine which portions of content would be of most interest (Pardeshi, par. [0002]). Regarding claim 57, the system is rejected on the ground of nonstatutory obviousness-type double patenting based on the same rationale applied to the method of claim 47.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 41, 43-44, 48, 50-51, 53-54, 58 and 60 are rejected under 35 U.S.C. 103 as being unpatentable over US 20160373817 A1 to Drake et al. (hereinafter "Drake") in view of US 20190200090 A1 to Merced et al. ("Merced"), further in view of US 20130326406 A1 to Reiley et al. ("Reiley"), and further in view of US 20200097731 A1 to Gupta et al. ("Gupta").

Consider claim 41, Drake discloses a method comprising: receiving metadata associated with a content item. Drake does not explicitly disclose wherein the metadata indicates that the content item is trending. In analogous art, Merced discloses wherein the metadata indicates that the content item is trending (par. [0017]-[0018]: "The media guidance application may also use the viewer's user profile to identify segments that may be more relevant to the viewer [. . .] a type of events in the segments may be determined by the media guidance application based on metadata for the respective event [. . .] In the basketball game example, the halftime show may be a matching event that may be selected for inclusion in the segment list because it matches with information in the user profile. Of course, the halftime show could also be included in the segment list because it is trending on social media"; also par. [0016]: "a special play trending on Twitter").
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the teachings of Drake in view of the above teachings of Merced to include information indicating the popularity/trendiness of media segments in order to identify media segments of interest to a user (Merced, par. [0001]).

The combination of Drake-Merced further teaches: receiving, from a content provider, the content item having the first tag, at a user device; providing the content item for playback at the user device (Drake, fig. 2-3 and par. [0029]: temporal engine 300 may retrieve media content 216 that is tagged with temporal metadata 302; par. [0033]: the resulting user-specific temporal cut definition may then be utilized by a distribution and playback infrastructure 310 (e.g., system 200 of FIG. 2); par. [0023]: presentation on a display 208); storing in a user profile, a user preference based at least in part on an input received via the user device relating to a playback operation of the content item (Drake, par. [0031]-[0032]: user preferences "may be specified by the consumer and maintained in, e.g., a user profile . . . Such information may be obtained via analytics applied to data collected based on the consumer's previous viewing behaviors or preferences and/or current or past media content collection. User preferences 304 may be directly received from a user, while user history and viewing preferences 306 may be analyzed by a computer"); tagging the content item with a second tag based at least in part on the user preference; identifying a segment of the content item that is of interest to a user based at least in part on the first tag and the second tag (Drake, fig. 3 and par. [0031]-[0035] describes the process of identifying segments of media, e.g., start and end points, to output "a user-specific temporal cut definition 308, i.e., an aforementioned playlist of the relevant scenes or portions of media content that meet the user's time criteria along with any user preferences, historical or media collection data . . . User-specific temporal cut definition 308 can be a set of data that defines the cut for that specific user and input criteria etc.; it may not be the cut or video file itself." Note, therefore identifying the segments for the output cut definition 308 is interpreted as tagging the content with a second tag); based at least in part on determining that the identified segment does not contain one or more third tags (Drake, fig. 2 and par. [0024] describes an entitlement database, "any required entitlements which can be maintained in entitlement database 218"; par. [0039] further describes that entitlements to view the long-form content may be required to view the short-form content. Note, therefore the entitlements are metadata stored in association with the content and are therefore interpreted as a third tag. Alternately, par. [0031] describes a feature whereby the user may specify content not to include in the selected scenes, e.g., "violent or scary scenes"; therefore, identifying content to be excluded could alternatively be interpreted as a "third tag"); storing the identified segment in a memory (Drake, par. [0039]: the user will have the option to create and download a short-form version of the media content for offline consumption); accessing the memory; and providing the identified segment for editing at the user device (Drake, par. [0040]: the consumer may wish to change the focus of the short-form version of the media content from, e.g., less action to more action (bar 512) . . . Accordingly, temporal engine 300 of FIG. 3 can re-analyze the metadata-tagged media content to distribute a new user-specific temporal definition in accordance with the altered specifics to comport with the consumer's needs/desires).

Although Drake discloses storing user preferences based on viewing history, Drake does not explicitly disclose storing a user preference based at least in part on an input received via the user device relating to a playback operation of the content item and detecting one or more user responses during playback of the content item, wherein the one or more user responses comprises a rewind, a segment skip or a pause. In analogous art, Reiley discloses storing a user preference based at least in part on an input received via the user device relating to a playback operation of the content item and detecting one or more user responses during playback of the content item, wherein the one or more user responses comprises a rewind, a segment skip or a pause ([Abstract]: "User interactions with the media items are analyzed and metadata of segments of media items that are determined to be of particular interest to the users is recorded"; par. [0053]: "the content index 122 can be built from interesting segments identified from media items by monitoring user behavior [. . .] the user monitoring module 210 can comprise a receiving module 212 that receives as input various user actions such as but not limited to, fast forwarding, rewinding, pausing"; par. [0078]: "In an embodiment, the user interactions can also include interactions such as fast forwarding or skipping to a next clip via a skip button on the user interface which can indicate low user interest. At 764, information regarding such user interactions is transmitted to the content provider so that the content provider can update the level of interest variable and/or other metadata of the segments accordingly").
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the teachings of modified Drake further in view of the above teachings of Reiley in order to automatically provide personalized content without explicit user request (Reiley, par. [0044]).

Although Drake discloses further editing of the short-form content (par. [0040]), modified Drake fails to explicitly disclose generating for display a progress bar having a user-selectable non-textual display element corresponding to a position of the identified segment in the content item, wherein a visual presentation of the user-selectable non-textual display element is based at least in part on the metadata, wherein the visual presentation of the user-selectable non-textual display element is associated with a time stamp indicated by the progress bar, and wherein a position of the visual presentation of the user-selectable non-textual display element is perpendicular to the progress bar and is aligned with the time stamp. In analogous art, Gupta discloses: generating for display a progress bar having a user-selectable non-textual display element corresponding to a position of the identified segment in the content item (par. [0132] and element 510 of fig. 5B: "the at least one key moment which includes the positive key moments and the negative key moments are displayed as along the timeline of the multimedia content. Further, the unique identifier is provided at a position above the timeline where the key moment appears"; par. [0141]: "the electronic device 100 allows the user to perform actions on the key moments (both the positive key moments and the negative key moments). The preview of the key moment also includes options such as sharing the specific positive key moment"), wherein a visual presentation of the user-selectable non-textual display element is based at least in part on the metadata (par. [0056]: "the positive key moment, the negative key moment or the neutral key moment is differenced in the actionable user interface by using a unique identifier"; par. [0067]: "the unique identifier which is used to differentiate the positive key moment, the negative key moment and the neutral key moment"), wherein the visual presentation of the user-selectable non-textual display element is associated with a time stamp indicated by the progress bar (par. [0132] and element 510 of fig. 5B), and wherein a position of the visual presentation of the user-selectable non-textual display element is perpendicular to the progress bar and is aligned with the time stamp (par. [0132] and element 510 of fig. 5B). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the teachings of modified Drake further in view of the non-textual display element of Gupta to enable the user to easily perform actions on the key moments (Gupta, par. [0141]).

Consider claim 43, modified Drake discloses the method of claim 41, wherein the first tag indicates a particular frame or scene of the content item (Drake, par. [0028]: "Computer analysis of a long-form movie 216 (FIG. 3), for example, can generate metadata 302 (FIG. 3) describing objects and actions based on the temporal nature of the movie at a frame level, camera shot level, cut level, or scene level").

Consider claim 44, modified Drake discloses the method of claim 41, wherein the one or more third tags indicate at least one of restrictions or limitations on the use of a particular segment of the content item based at least in part on: (1) an age of the user, (2) a number indicating how many times a meme has been generated based at least in part on the identified segment of the content item, or (3) a genre of the content item (Drake, par. [0031]-[0033]: "Alternatively, the consumer may wish to eliminate all violent or scary scenes. Such user preferences 304 may be specified by the consumer and maintained in, e.g., a user profile [. . .] Based on one or more of the above inputs to temporal engine 300, temporal engine 300 may output a user-specific temporal cut definition 308, i.e., an aforementioned playlist of the relevant scenes or portions of media content that meet the user's time criteria along with any user preferences." Note, therefore tags indicating scary or violent scenes are implicit and are interpreted as equivalent to a genre, and the tags are used for the purpose of excluding/restricting certain scenes from the final composition).

Consider claim 48, modified Drake discloses the method of claim 41, wherein the user-selectable non-textual display element displays information based at least in part on at least one of the metadata for the content item and the user profile (Drake, par. [0031]-[0032]: user preferences 304 may be specified by the consumer and maintained in, e.g., a user profile; Gupta, par. [0067]: "the unique identifier which is used to differentiate the positive key moment, the negative key moment and the neutral key moment"). The motivation to combine the references is the same as regarding claim 41.

Consider claim 50, modified Drake discloses the method of claim 41, further comprising: tagging the content item with the one or more third tags based at least in part on the metadata for the content item; and restricting storage of the identified segment where it contains the one or more third tags (Drake, fig. 2 and par. [0024] describes an entitlement database, "any required entitlements which can be maintained in entitlement database 218"; par. [0039] further describes that entitlements to view the long-form content may be required to view the short-form content. Note, therefore the entitlements are metadata stored in association with the content and are therefore interpreted as a third tag. Alternately, par. [0031] describes a feature whereby the user may specify content not to include in the selected scenes, e.g., "violent or scary scenes"; therefore, identifying content to be excluded could alternatively be interpreted as a "third tag." Therefore, according to either interpretation of the 'third tag' feature, the short-form content would not be generated and/or stored if the third tags are present, i.e., would be restricted from being stored).

Consider claim 51, the system is rejected along the same rationale as the method of claim 41, and because Drake further discloses a system embodiment comprising a memory and control circuitry (par. [0044] and fig. 6) for performing the associated method. Consider claims 53-54, 58 and 60, the system is rejected based on the same rationale as the method of claims 43-44, 48 and 50, respectively.

Claims 45-46 and 55-56 are rejected under 35 U.S.C. 103 as being unpatentable over Drake in view of Merced, Reiley and Gupta, further in view of US 20140074855 A1 to Zhao et al. ("Zhao").

Consider claim 45, modified Drake discloses the method of claim 41, further comprising transmitting, to the content provider, in response to a user selection of the user-selectable non-textual display element, data indicating start and end points of the identified segment (Drake, par. [0033] and fig. 3: "temporal engine 300 may output a user-specific temporal cut definition 308, i.e., an aforementioned playlist of the relevant scenes or portions of media content that meet the user's time criteria along with any user preferences, historical or media collection data [. . .] The resulting user-specific temporal cut definition may then be utilized by a distribution and playback infrastructure 310 (e.g., system 200 of FIG. 2) to deliver the short-form version of the media content to the requesting consumer [. . .]
The definition 308 will effectively comprise of a set of content start and end points that when played continuously represent the user specific cut.” Drake does not explicitly disclose that specific segment start/end times are selected by the user. In analogous art, Zhao discloses wherein segment start/end times are selected by the user (par. [0099]: “Such a button allows particular sections of the content to be tagged by, for example, specifying the starting point and/or the ending point of the content sections associated with a tag”). It would have been obvious to one with ordinary skill, in the art before the effective filing date of the invention, to further modify modified Drake in view of the above teachings of Zhao in order to facilitate enhanced use and interaction with a multimedia content through the use of tags (Zhao, [Abstract]). Consider claim 46, modified Drake discloses the method of claim 45, wherein the start and end points of the identified segment indicate an exact portion of a scene (Drake, as examined regarding claim 45) that contains a preferred actor of the user (Drake, par. [0031]: “User preferences 304 may further include indications of particular content that the consumer wishes to have included within or excluded from the short-form version of the media content (see FIG. 5A). This may be an indication of a particular actor(s) that the consumer wishes to focus on and see more of” Note, therefore the output user-specific temporal cut definition of par. [0033] can be information designating start/stop points and containing a preferred actor) Modified Drake fails to explicitly disclose that the identified segment is one to which one or more other users had a positive response. In analogous art, Reiley discloses wherein the identified segment is one to which one or more other users had a positive response (par. 
[0053]: The content providing engine 100 can comprise a user monitoring module 210 that monitors and records user input or behavior of users as they interact with particular media items, for example, as they listen to, view or tag media items. The user behavior thus recorded is analyzed to determine significant trends which can then be employed to identify particular segments of media items which will likely be of interest to other users). It would have been an obvious design choice to one with ordinary skill in the art, before the effective filing date of the invention, to modify the user preferences of modified Drake further in view of the above teachings of Reiley, i.e., determining significant trends that can then be employed to identify particular segments of media items likely to be of interest to other users, in order to identify content that is more interesting to the user (Reiley, par. [0053]).

Consider claims 55-56, the system is rejected based on the same rationale as the method of claims 45-46, respectively.

Claims 42 and 52 are rejected under 35 U.S.C. 103 as being unpatentable over Drake in view of Merced, Reiley and Gupta, further in view of US 20180330756 A1 to MacDonald.

Consider claim 42, modified Drake discloses the method of claim 41, further comprising: based at least in part on determining that the identified segment contains the one or more third tags: preventing the identified segment from being stored in the memory (Drake, fig. 2 and par. [0024] describes an entitlement database, “any required entitlements which can be maintained in entitlement database 218”; par. [0039] further describes that entitlements to view the long-form content may be required to view the short-form content. Note, therefore, the entitlements are metadata stored in association with the content and are therefore interpreted as a third tag. Alternatively, par. 
[0031] describes a feature whereby the user may specify content not to include in the selected scenes, e.g., “violent or scary scenes”; therefore, identifying content to be excluded could be alternatively interpreted as a “third tag.” Accordingly, under either interpretation of the “third tag” feature, the short-form content would not be generated and/or stored if the third tags are present). Modified Drake fails to explicitly disclose notifying the user of a restriction associated with the identified segment. In analogous art, MacDonald discloses notifying the user of a restriction associated with the identified segment (par. [0081]: “in a preferred embodiment, a user views display 207 and choses a scene to play . . . the user makes a decision to select a video “scene” to play 208, user receives a user message 209 which can include clip purchase instructions, rights restrictions, or links to sample scenes by other users, the system then accesses the predefined process 210 which creates an editing room for the video scene selected”). It would have been an obvious design choice to one with ordinary skill in the art, before the effective filing date of the invention, to further modify modified Drake in view of the above teachings of MacDonald in order to allow for the digital rights management of the original video and the newly created video, through the database structure and through metadata tags inserted into the new composite videos (MacDonald, par. [0002]), and/or in order to provide notification to the user of the restricted status of specific segments to improve ease of use.

Consider claim 52, the system is rejected based on the same rationale as the method of claim 42.

Claims 49 and 59 are rejected under 35 U.S.C. 103 as being unpatentable over Drake in view of Merced, Reiley and Gupta, further in view of US 20120102021 A1 to Hill et al. (“Hill”) (cited in the IDS filed 12/08/2023). 
Consider claim 49, modified Drake discloses the method of claim 41, but fails to explicitly disclose wherein the metadata for the content item indicates how many times a meme has been generated based on the segment of the content item. In analogous art, Hill discloses wherein the metadata for the content item indicates how many times a meme has been generated based on the segment of the content item (par. [0040]: Identifying the most re-posted bits of visual content can therefore be a very strong indicator of the content's "interestingness", more so than the content view count, which can be quite non-indicative of relevance. Therefore, the number of times a piece of content is reposted may be employed as a way of identifying interesting content). It would have been an obvious design choice to one with ordinary skill in the art, before the effective filing date of the invention, to modify the content metadata of modified Drake further in view of the above teachings of Hill to identify content that is more interesting to the user (Hill, par. [0040]).

Consider claim 59, the system is rejected based on the same rationale as the method of claim 49.

Claims 47 and 57 are rejected under 35 U.S.C. 103 as being unpatentable over Drake in view of Merced, Reiley and Gupta, further in view of US 20210350139 A1 to Pardeshi et al. (“Pardeshi”).

Consider claim 47, modified Drake discloses the method of claim 41, but fails to explicitly disclose monitoring one or more physical characteristics of the user based at least in part on a user's physical responses while consuming the content item; and logging the user's physical responses with one or more time stamps. In analogous art, Pardeshi discloses monitoring one or more physical characteristics of the user based at least in part on a user's physical responses while consuming the content item; and logging the user's physical responses with one or more time stamps (par. [0047]: capture images of both eyes of a user wearing headset 104 . 
. . information such as a start time and end time for a detected emotion can be provided to a content manager 116 component or module, which can correlate those times with corresponding clips or segments of video that were being presented at those times; par. [0049]: indications of emotion for both segments may be used to select clips for a highlight montage that may be of interest to a user; par. [0053]: start and end timestamps of this emotion are recorded). It would have been obvious to one with ordinary skill in the art, before the effective filing date of the invention, to modify the teachings of modified Drake further in view of the above teachings of Pardeshi in order to accurately determine which portions of content would be of most interest (Pardeshi, par. [0002]).

Consider claim 57, the system is rejected based on the same rationale as the method of claim 47.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to STEPHEN R SMITH, whose telephone number is (571) 270-1318. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thai Q Tran, can be reached at (571) 272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

STEPHEN R. SMITH
Examiner, Art Unit 2484

/THAI Q TRAN/
Supervisory Patent Examiner, Art Unit 2484
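The mechanism at issue in claims 41-42 above (a cut definition made of segment start/end points, with segments containing excluded "third tags" restricted from storage, triggering a user notice) can be sketched in a few lines. This is a hypothetical illustration of the claim language only; the names `Segment` and `build_cut_definition` are invented here and do not come from Drake or any other cited reference:

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: float                      # seconds into the long-form content
    end: float
    tags: set = field(default_factory=set)

def build_cut_definition(segments, excluded_tags):
    """Split segments into those kept for the short-form cut and those
    restricted because they carry an excluded ('third') tag."""
    kept, restricted = [], []
    for seg in segments:
        if seg.tags & set(excluded_tags):
            restricted.append(seg)    # would also trigger a restriction notice
        else:
            kept.append(seg)
    return kept, restricted

segments = [
    Segment(0, 30, {"action"}),
    Segment(30, 60, {"violent"}),
    Segment(60, 90, {"comedy"}),
]
kept, restricted = build_cut_definition(segments, ["violent", "scary"])
```

Under this reading, the kept list plays continuously as the user-specific cut, while the restricted list never reaches storage, which is the behavior the examiner maps onto either the entitlement metadata or the exclusion-preference interpretation of the "third tag."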

Prosecution Timeline

Dec 08, 2023
Application Filed
Dec 06, 2024
Non-Final Rejection — §103, §DP
Mar 14, 2025
Response Filed
Apr 19, 2025
Final Rejection — §103, §DP
Jun 18, 2025
Request for Continued Examination
Jun 21, 2025
Response after Non-Final Action
Jul 16, 2025
Non-Final Rejection — §103, §DP
Oct 01, 2025
Applicant Interview (Telephonic)
Oct 03, 2025
Examiner Interview Summary
Oct 13, 2025
Response Filed
Dec 15, 2025
Final Rejection — §103, §DP
Mar 17, 2026
Request for Continued Examination
Apr 01, 2026
Response after Non-Final Action
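The milestone dates above can be turned into an elapsed-prosecution figure with simple date arithmetic. This is a minimal sketch using two of the scraped dates (filing and the latest RCE); it is not the tool's actual pendency calculation, and the year/month split uses rough 365/30-day divisors:

```python
from datetime import date

# Milestones taken from the timeline above
filed = date(2023, 12, 8)        # Application Filed
latest_rce = date(2026, 3, 17)   # Request for Continued Examination

elapsed = latest_rce - filed
years, rem_days = divmod(elapsed.days, 365)
months = rem_days // 30

print(f"{years}y {months}m of prosecution so far")  # prints "2y 3m of prosecution so far"
```

Comparing that running total against the examiner's 2y 7m median time to grant gives a quick sense of how far along this application is.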

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598272
PT/PT-Z CAMERA COMMAND, CONTROL & VISUALIZATION SYSTEM AND METHOD UTILIZING ARTIFICIAL INTELLIGENCE
2y 5m to grant Granted Apr 07, 2026
Patent 12598280
VIDEO DATA PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, COMPUTER READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Apr 07, 2026
Patent 12597256
PARKING LOT MONITORING AND PERMITTING SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12587623
IMAGE SYNTHESIS
2y 5m to grant Granted Mar 24, 2026
Patent 12567443
METHOD FOR READING AND WRITING FRAME IMAGES HAVING VARIABLE FRAME RATES AND SYSTEM THEREFOR
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
71%
Grant Probability
82%
With Interview (+11.2%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 433 resolved cases by this examiner. Grant probability derived from career allow rate.
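The "with interview" figure appears to be a simple additive adjustment: the 71% career allow rate (306 granted / 433 resolved) plus the +11.2% interview lift lands on 82%. A minimal sketch of that arithmetic, assuming straight addition rather than whatever model the tool actually uses:

```python
career_allow_rate = 306 / 433   # granted / resolved, per the examiner stats above
interview_lift = 0.112          # observed lift in resolved cases with interview

base = round(career_allow_rate * 100)                        # 71
with_interview = round((career_allow_rate + interview_lift) * 100)  # 82

print(f"Base grant probability: {base}%")
print(f"With interview: {with_interview}%")
```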
