Prosecution Insights
Last updated: April 19, 2026
Application No. 18/849,257

MULTIMEDIA CONTENT MANAGEMENT AND PACKAGING DISTRIBUTED LEDGER SYSTEM AND METHOD OF OPERATION THEREOF

Final Rejection §103

Filed: Sep 20, 2024
Examiner: OCAK, ADIL
Art Unit: 2426
Tech Center: 2400 — Computer Networks
Assignee: Edge Video B.V.
OA Round: 1 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 2-3
To Grant: 2y 4m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 74% (above average; 279 granted / 376 resolved; +16.2% vs TC avg)
Interview Lift: +18.3% for resolved cases with interview (strong)
Typical Timeline: 2y 4m average prosecution (21 currently pending)
Career History: 397 total applications across all art units

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 57.9% (+17.9% vs TC avg)
§102: 21.7% (-18.3% vs TC avg)
§112: 6.5% (-33.5% vs TC avg)

Based on career data from 376 resolved cases (baseline: Tech Center average estimate).

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Amendment

This Office action is made in response to the claims filed 2/25/2026. Claims 29-31 and 62-71 are withdrawn. No claims are amended.

Response to Arguments

Applicant's arguments, see "Remarks" made in the Amendment filed 2/25/2026, have been considered. The Applicant elects Group I, Claims 1-7, for prosecution. Applicant's arguments traversing the restriction requirement have been considered but are not persuasive. Although the claims of Groups I and II share certain common features relating to video content analysis, the claims are directed to distinct inventions involving different uses of the detected content objects. Examination of the different groups would require searches in different areas of prior art, thereby imposing a serious search burden if restriction were not required. Accordingly, the restriction requirement is maintained.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-6 are rejected under 35 U.S.C. 103 as being unpatentable over Fleischman et al., Pat. No. US 8,516,374 (hereafter Fleischman), and further in view of Butterfield et al., Pub. No. US 2006/0242139 (hereafter Butterfield).

Regarding Claim 1, Fleischman discloses a method of operation of a multimedia content management and packaging system [FIG.1, col.2, lines 57-61: discloses a computing environment for associating social media content items and references to events therein with time-based media events and determining social interest in the events based on the resulting associations] comprising:

detecting, by video content analysis, content objects of at least one video program, thereby identifying detected content objects [col.13, lines 1-7: discloses image features are features generated from individual frames within a video … features include … detection of faces… Fleischman analyzes video content by extracting image features from individual frames of the video. These extracted features include detection of faces within the frames, thereby identifying objects present in the video. Thus, the reference teaches detecting content objects through video content analysis];

displaying the at least one video program [FIG.9a, col. lines 6-8: discloses the media display area (element 920a) shows a media player for displaying the time-based media associated with the selected event. Thus, Fleischman discloses displaying time-based media (video) via a media player];

displaying, while the at least one video program is being displayed, a usage score graphic according to the calculated usage score [col.23, lines 3-11: discloses a social interest heat map (element 810) corresponding to a football game. The social interest heat map is a graphical display corresponding to a calculated score. It calculates a level of social interest for events in a video using the number of social media content items corresponding to each event. Because the score is based on user-generated interactions associated with the video content, the calculated social interest level corresponds to a usage score for the detected content objects].

Fleischman does not explicitly disclose detecting, from an external media feed, media references to at least a sub-plurality of the detected content objects; and calculating, while the at least one video program is being displayed, a usage score for the at least sub-plurality of detected content objects based on at least the media references, the calculating step comprising utilizing a time decay factor for at least partially determining the usage score, thereby providing a calculated usage score.

However, in analogous art, Butterfield discloses the following:

detecting, from an external media feed, media references to at least a sub-plurality of the detected content objects [para.0017: discloses the media server may communicate with multiple clients over a network, such as the Internet; and para.0020: discloses the metadata processing logic permits the user to enter metadata to describe each image. This shows the system receiving metadata/media information from external users/clients, which corresponds to detecting media references];

calculating, while the at least one video program is being displayed, a usage score for the at least sub-plurality of detected content objects based on at least the media references, the calculating step comprising utilizing a time decay factor for at least partially determining the usage score, thereby providing a calculated usage score [FIG.1, para.0035: discloses the metadata processing logic (element 118) may compute an "interestingness" metric for each media object; and para.0047: discloses another score component may take time into account … this time decay may cause the score to decrement by 2% per day from the day of posting. Thus, Butterfield shows calculating a score (interestingness metric) based on usage signals/metadata and including a time decay factor, all of which corresponds to the claimed usage score with time decay].

Accordingly, Butterfield teaches calculating an "interestingness" metric for media objects based on usage-related signals and metadata, and further teaches incorporating a time-based decay factor such that the score decreases as the media object ages. Thus, Butterfield discloses calculating a usage-based score derived from media references and user interactions, including a time decay component. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Fleischman with the scoring and time-decay techniques taught by Butterfield in order to yield predictable results, such as improving the ranking of media objects based on user interaction and age [Butterfield: para.0007].
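The time-decay behavior the rejection attributes to Butterfield (para.0047, a score "decrement by 2% per day from the day of posting") can be sketched as follows. This is an illustrative reading only: the function name is invented, and the linear (rather than compounding) decay and the floor at zero are assumptions, not anything disclosed in the reference.

```python
from datetime import date

def decayed_usage_score(base_score: float, posted: date, today: date,
                        daily_decay: float = 0.02) -> float:
    """Decrement a usage score by a fixed fraction of its base per day of age.

    Assumes linear decay (2% of the base score per day), floored at zero
    so stale media references cannot contribute a negative score.
    """
    days_old = (today - posted).days
    factor = max(0.0, 1.0 - daily_decay * days_old)
    return base_score * factor

# A media reference posted 10 days ago keeps 80% of its base score.
ten_day_score = decayed_usage_score(100.0, date(2026, 2, 15), date(2026, 2, 25))
```

Under this reading, a reference older than 50 days contributes nothing, which is one plausible way a "sub-plurality" of detected content objects could drop out of the displayed usage score over time.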
Regarding Claim 2, the combined teachings of Fleischman and Butterfield disclose the method of claim 1, and Fleischman further discloses wherein the external media feed includes a social media feed for detecting the media references on a social media site [FIG.6, col.22, lines 19-23: discloses in social media/annotated event alignment (element 330) … content feature representations express the amount of co-occurring content between event metadata and terms within social media content items. Thus, Fleischman teaches use of social media content as a feed].

Regarding Claim 3, the combined teachings of Fleischman and Butterfield disclose the method of claim 1, and Butterfield further discloses wherein the external media feed includes a usage feed for detecting the media references [FIG.1, para.0039: discloses the metadata processing logic (element 118) may factor into the interestingness score access patterns for the media object. Thus, media objects and associated metadata are received from users via a network, corresponding to an external media feed. The system further collects usage signals such as viewings, playbacks, and click-through interactions associated with the media objects, which are used in computing the interestingness score. These usage-based interaction signals represent a usage feed associated with the externally received media content]. This claim is rejected on the same grounds as claim 1.

Regarding Claim 4, the combined teachings of Fleischman and Butterfield disclose the method of claim 1, and Fleischman further discloses wherein the external media feed includes an external media feed for detecting the media references on a broadcast or digital media system [col.19, lines 60-62: discloses the time-based media, e.g., a broadcast television feed for a football game, is segmented into semantically meaningful segments. This teaches broadcast media feeds].

Regarding Claim 5, the combined teachings of Fleischman and Butterfield disclose the method of claim 1, and Butterfield further discloses the calculating the usage score step comprising updating the usage score based on a quality score [para.0035-0036: discloses the metadata processing logic may compute an "interestingness" metric for each media object, according to an embodiment of the invention. Interestingness may be a function of user actions related to a media object, including, for example, the quantity of user-entered and/or user-edited metadata and/or access patterns for the media objects. Alternatively, or in addition to those factors, interestingness may be a function of time, system settings, and/or the relationship of the user to the poster of the media object. Each factor may be clipped by a maximum value set by the system designer, which is one way of weighting each factor. Alternatively, or in addition, before any clipping, each factor may be more directly weighted by a weighting coefficient that multiplies the factor. In either case, the factors (whether weighted or not) may be summed together to create an interestingness score (i.e., rank). Thus, Butterfield teaches that computing an interestingness score using multiple weighted factors modifies the resulting score, corresponding to updating the usage score based on an additional scoring factor such as a quality score]. This claim is rejected on the same grounds as claim 1.

Regarding Claim 6, the combined teachings of Fleischman and Butterfield disclose the method of claim 1, and Butterfield further discloses the calculating the usage score step comprising modifying the usage score according to a competitiveness score [para.0046: discloses other interestingness score components may be set by the system designer. For example, some media objects may be treated as undesirable because they contain objectionable content. The system designer may, for example, set up the score computation to decrement the thus-far accumulated score by a predetermined score offset percentage assigned to a media object. Thus, Butterfield teaches adjusting the computed score by applying additional score components and decrements. Such adjustment modifies the score according to additional factors, corresponding to modifying the usage score based on a competitiveness-type score]. This claim is rejected on the same grounds as claim 1.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Fleischman et al., Pat. No. US 8,516,374 (hereafter Fleischman), in view of Butterfield et al., Pub. No. US 2006/0242139 (hereafter Butterfield), and further in view of Folta et al., Pub. No. US 2012/0106806 (hereafter Folta).

Regarding Claim 7, the combined teachings of Fleischman and Butterfield disclose the method of claim 1, but do not explicitly disclose the detected content objects step comprising detecting, by facial recognition, content objects in the at least one video program, thereby identifying human individuals as the identified detected content objects. However, in analogous art, Folta teaches detecting facial information to recognize individuals appearing throughout the video [para.0020]. For example, Folta discloses that "face detection data in frames of input data are used to generate face galleries, which are labeled and used in recognizing faces throughout the video" [ABSTRACT]. Thus, Folta teaches detecting content objects by facial recognition and identifying human individuals within video content. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Fleischman and Butterfield with detecting, by facial recognition, content objects in the at least one video program, thereby identifying human individuals as the identified detected content objects, as taught by Folta, in order to achieve predictable results, such as enabling automated identification of individuals appearing within the video (e.g., allowing a user to pause a video program and automatically identify an actor appearing in a scene) [Folta: para.0003].

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dankberg et al. (US 2014/0337875) discloses calculating a request engagement relationship between the requested content object and a plurality of watch-nowable content objects, wherein the watch-nowable content objects are determined to be watch-nowable content objects with respect to the requesting subscriber-side system, and each of at least some of the watch-nowable content objects has an associated content engagement relationship between itself and others of the plurality of watch-nowable content objects (claim 1).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADIL OCAK, whose telephone number is (571) 272-2774. The examiner can normally be reached M-F 8:00 AM - 5:00 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nasser Goodarzi, can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ADIL OCAK/
Primary Examiner, Art Unit 2426
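The weighted-and-clipped factor aggregation cited against claims 5 and 6 (Butterfield para.0035-0036) amounts to a simple combination rule; a minimal sketch follows, with the function name, factor names, weights, and caps all purely illustrative rather than taken from the reference.

```python
def interestingness(factors: dict, weights: dict, caps: dict) -> float:
    """Sum weighted factors, clipping each at a designer-set maximum.

    Mirrors the cited description: each factor is multiplied by a
    weighting coefficient, clipped by a maximum value, and the clipped
    results are summed into a single interestingness score (i.e., rank).
    """
    total = 0.0
    for name, value in factors.items():
        weighted = value * weights.get(name, 1.0)   # optional per-factor weight
        total += min(weighted, caps.get(name, float("inf")))  # optional clip
    return total

# Two hypothetical factors: metadata quantity and access count.
score = interestingness(
    factors={"metadata": 12, "accesses": 300},
    weights={"metadata": 2.0, "accesses": 0.1},
    caps={"metadata": 20.0, "accesses": 25.0},
)  # 12*2.0 clipped to 20.0, plus 300*0.1 clipped to 25.0 -> 45.0
```

The claim 6 mapping (para.0046) would then be a post-hoc decrement of this total by a predetermined percentage offset for undesirable objects, applied after the sum rather than inside it.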

Prosecution Timeline

Sep 20, 2024: Application Filed
Mar 11, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598348: METHODS AND APPARATUS TO CREDIT MEDIA SEGMENTS SHARED AMONG MULTIPLE MEDIA ASSETS (2y 5m to grant; granted Apr 07, 2026)
Patent 12598334: LIVE-STREAMING STARTING METHOD, DEVICE AND PROGRAM PRODUCT (2y 5m to grant; granted Apr 07, 2026)
Patent 12586039: Chat And Email Messaging Integration (2y 5m to grant; granted Mar 24, 2026)
Patent 12574591: SYSTEM AND METHOD FOR PROVIDING ENHANCED AUDIO FOR STREAMING VIDEO CONTENT (2y 5m to grant; granted Mar 10, 2026)
Patent 12572588: Local Public Notification Network Mediation (2y 5m to grant; granted Mar 10, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 74%
With Interview: 92% (+18.3%)
Median Time to Grant: 2y 4m
PTA Risk: Low

Based on 376 resolved cases by this examiner. Grant probability derived from career allow rate.
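The headline projections above follow from simple arithmetic on the examiner's career numbers; a quick check (treating the interview lift as purely additive is an assumption about how the page combines the two figures):

```python
# Career figures stated in the Examiner Intelligence section above.
granted, resolved = 279, 376

allow_rate = granted / resolved        # ~0.742, i.e. the 74% grant probability
interview_lift = 0.183                 # the +18.3% interview lift

# Additive combination lands near the 92% "With Interview" figure shown.
with_interview = allow_rate + interview_lift
```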
