DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,167,094. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the patent anticipate the claims of the present application.
The claims are compared below. For each claim number, the claim of U.S. Patent No. 12,167,094 is presented first, followed by the corresponding claim of Application No. 18/941,952.
Patent claim 1 recites a method, comprising:
receiving, by a processing system comprising a processor, from a mobile device, an input criterion related to media content, the input criterion including an input-specified time constraint;
retrieving, by the processing system, media source data based on the input criterion, wherein the media source data comprises respective time-demarked segments of the media content associated with respective metadata describing the respective time-demarked segments, and wherein the media source data includes broadcast media source data and a user-generated media source data recorded by an attendee at a live event;
comparing, by the processing system, the respective metadata describing the respective time-demarked segments with the input criterion to identify matching segments from among the respective time-demarked segments in the broadcast media source data and user-generated media source data;
creating, by the processing system, media playback content based on the matching segments, wherein the matching segments comprise a first matching scene from the broadcast media source data and a second matching scene from the user-generated media source data;
receiving, by the processing system, instructions for presentation of the first matching scene at a first playback speed that differs from a second playback speed of the second matching scene, wherein the first playback speed is determined at least in part in response to crowd noise in the first matching scene;
determining that the input-specified time constraint is longer than a time to playback the first matching scene and the second matching scene;
in response to the determining, modifying the media playback content to play the second matching scene more than once with different audio feeds; and
providing the media playback content to a media display device, wherein the media display device is different from the mobile device.
Application claim 1 recites a method, comprising:
receiving, by a processing system including a processor, an input criterion related to media source data;
accessing, by the processing system, the media source data based on the input criterion, wherein the media source data includes broadcast media source data and a user-generated media source data recorded by an attendee at a live event;
determining, by the processing system, matching scenes of the media source data, comprising evaluating the input criterion with respect to respective metadata information that describes respective scenes of the media source data, the matching scenes including a first matching scene of the broadcast media source data and a second matching scene of the user-generated media source data;
generating, by the processing system, media playback content based on the matching scenes;
receiving, by the processing system, instructions for presentation of the first matching scene of the matching scenes at a first playback speed that differs from a second playback speed of the second matching scene of the matching scenes, wherein the first playback speed is determined at least in part on crowd noise in the first matching scene; and
providing, by the processing system, the media playback content to a media display device.
Patent claim 2 recites the method of claim 1, wherein receiving the input criterion comprises receiving a search request comprising the input criterion.
Application claim 2 recites the method of claim 1, wherein receiving the input criterion comprises receiving a search request comprising the input criterion.
Patent claim 3 recites the method of claim 1, wherein presentation of the media playback content is modified based on a user request during the presentation.
Application claim 3 recites the method of claim 1, wherein the providing the media playback content is modified based on a user request.
Patent claim 4 recites the method of claim 1, wherein the first matching scene is associated with first accompanying audio content, wherein the second matching scene is associated with second accompanying audio content, and further comprising receiving, by the processing system, instructions for output, in conjunction with playback of the first matching scene, of the first accompanying audio content at a first playback volume level that differs from a second playback volume level of the second accompanying audio content for output in conjunction with playback of the second matching scene.
Application claim 4 recites the method of claim 1, wherein the first matching scene is associated with first accompanying audio content, wherein the second matching scene is associated with second accompanying audio content, and further comprising receiving, by the processing system, instructions for output, in conjunction with playback of the first matching scene, of the first accompanying audio content at a first playback volume level that differs from a second playback volume level of the second accompanying audio content for output in conjunction with playback of the second matching scene.
Patent claim 5 recites the method of claim 1, wherein the input-specified time constraint is specified by a default value provided by the mobile device.
Application claim 5 recites the method of claim 1, wherein the input criterion comprises an input-specified time constraint specified by a default value provided by a mobile device.
Patent claim 6 recites the method of claim 1, further comprising presenting, by the processing system to the mobile device, the media playback content.
Application claim 6 recites the method of claim 1, further comprising presenting, by the processing system to a mobile device, the media playback content.
Patent claim 7 recites the method of claim 6, wherein presenting the media playback content comprises presenting annotation data of the first matching scene and the second matching scene based on the respective metadata of the respective first matching scene and second matching scene.
Application claim 7 recites the method of claim 6, wherein presenting the media playback content comprises presenting annotation data of the first matching scene and the second matching scene based on the respective metadata of the respective first matching scene and second matching scene.
Patent claim 8 recites the method of claim 1, further comprising receiving, by the processing system, playback order instructions for presentation of the first matching scene of the matching scenes before presenting the second matching scene.
Application claim 8 recites the method of claim 1, further comprising receiving, by the processing system, playback order instructions for presentation of the first matching scene of the matching scenes before presenting the second matching scene.
Patent claim 9 recites the method of claim 1, further comprising receiving, by the processing system, transition instructions for a transition from presenting the first matching scene to presenting the second matching scene.
Application claim 9 recites the method of claim 1, further comprising receiving, by the processing system, transition instructions for a transition from presenting the first matching scene to presenting the second matching scene.
Patent claim 10 recites a system, comprising:
a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising:
receiving, from a mobile device, an input criterion related to media source data, the input criterion including an input-specified time constraint;
accessing the media source data based on the input criterion, wherein the media source data includes broadcast media source data and a user-generated media source data recorded by an attendee at a live event;
determining matching scenes of the media source data, comprising evaluating the input criterion with respect to respective metadata information that describes respective scenes of the media source data, the matching scenes including a first matching scene of the broadcast media source data and a second matching scene of the user-generated media source data;
generating media playback content based on the matching scenes;
receiving instructions for presentation of the first matching scene of the matching scenes at a first playback speed that differs from a second playback speed of the second matching scene of the matching scenes, wherein the first playback speed is determined at least in part on crowd noise in the first matching scene;
determining that the input-specified time constraint is longer than a time to playback the first matching scene and the second matching scene;
in response to the determining that the input-specified time constraint is longer than a time to playback the first matching scene and the second matching scene, modifying the media playback content to play the second matching scene more than once with different audio feeds; and
providing the media playback content to a media display device, wherein the media display device is different from the mobile device.
Application claim 10 recites a system, comprising:
a processing system including a processor; and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, the operations comprising:
receiving an input criterion related to media source data;
accessing the media source data based on the input criterion, wherein the media source data includes broadcast media source data and a user-generated media source data recorded by an attendee at a live event;
determining matching scenes of the media source data, comprising evaluating the input criterion with respect to respective metadata information that describes respective scenes of the media source data, the matching scenes including a first matching scene of the broadcast media source data and a second matching scene of the user-generated media source data;
generating media playback content based on the matching scenes;
receiving instructions for presentation of the first matching scene of the matching scenes at a first playback speed that differs from a second playback speed of the second matching scene of the matching scenes, wherein the first playback speed is determined at least in part on crowd noise in the first matching scene; and
providing the media playback content to a media display device.
Patent claim 11 recites the system of claim 10, wherein the operations further comprise outputting the media playback content to the mobile device.
Application claim 11 recites the system of claim 10, wherein the operations further comprise outputting the media playback content to the mobile device.
Patent claim 12 recites the system of claim 10, wherein the operations further comprise receiving ordering instructions usable to present the matching scenes in a specified order, and wherein the operations further comprise outputting the media playback content in the specified order.
Application claim 12 recites the system of claim 10, wherein the operations further comprise receiving ordering instructions usable to present the matching scenes in a specified order, and wherein the operations further comprise outputting the media playback content in the specified order.
Patent claim 13 recites the system of claim 10, wherein the first matching scene is associated with first accompanying audio content, wherein the second matching scene is associated with second accompanying audio content, and wherein the operations further comprise receiving volume instructions usable to output the first accompanying audio content at a first playback volume level that differs from a second playback volume level of the second accompanying audio content, usable to output the first matching scene to the media display device, usable to output, in conjunction with outputting the first matching scene, the first accompanying audio content at the first playback volume level to an audio device, usable to output the second matching scene to the media display device, and usable to output, in conjunction with outputting the second matching scene, the second accompanying audio content at the second playback volume level to the audio device.
Application claim 13 recites the system of claim 10, wherein the first matching scene is associated with first accompanying audio content, wherein the second matching scene is associated with second accompanying audio content, and wherein the operations further comprise receiving volume instructions usable to output the first accompanying audio content at a first playback volume level that differs from a second playback volume level of the second accompanying audio content, usable to output the first matching scene to the media display device, usable to output, in conjunction with outputting the first matching scene, the first accompanying audio content at the first playback volume level to an audio device, usable to output the second matching scene to the media display device, and usable to output, in conjunction with outputting the second matching scene, the second accompanying audio content at the second playback volume level to the audio device.
Patent claim 14 recites the system of claim 10, wherein the input-specified time constraint comprises a default value provided by the mobile device.
Application claim 14 recites the system of claim 10, wherein the input criterion includes an input-specified time constraint provided by a mobile device.
Patent claim 15 recites the system of claim 10, wherein the operations further comprise presenting annotation data of a respective matching segment based on the respective metadata of the respective matching segment.
Application claim 15 recites the system of claim 10, wherein the operations further comprise presenting annotation data of a respective matching segment based on the respective metadata of the respective matching segment.
Patent claim 16 recites the system of claim 10, wherein the first matching scene is associated with first accompanying annotation data, wherein the second matching scene is associated with second accompanying annotation data, and wherein the operations further comprise outputting the first accompanying annotation data to the mobile device, and outputting the second accompanying annotation data to the media display device.
Application claim 16 recites the system of claim 10, wherein the first matching scene is associated with first accompanying annotation data, wherein the second matching scene is associated with second accompanying annotation data, and wherein the operations further comprise outputting the first accompanying annotation data to a mobile device, and outputting the second accompanying annotation data to the media display device.
Patent claim 17 recites the system of claim 16, wherein the operations further comprise outputting the first matching scene to the media display device in conjunction with the outputting the first accompanying annotation data to the mobile device.
Application claim 17 recites the system of claim 16, wherein the operations further comprise outputting the first matching scene to the media display device in conjunction with the outputting the first accompanying annotation data to the mobile device.
Patent claim 18 recites a non-transitory machine-readable medium, comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, the operations comprising:
receiving an input criterion related to media content, the input criterion including an input-specified time constraint, matching selection criterion data, and playback instructions;
generating media content based on segments of first source media data including broadcast source media data and segments of second source media data including user-generated source media data, in which respective segments of the first source media data and second source media data are associated with respective descriptive metadata datasets comprising respective description data that describes the respective segments, the generating comprising:
determining, based on the matching selection criterion data to the respective description data of the respective descriptive metadata datasets, matching segments of the respective segments, the matching segments including a first matching segment from the broadcast source media data and a second matching segment from the user-generated source media data; and
combining the matching segments into the media content for presentation according to the playback instructions for presentation of the first matching segment of the matching segments at a first playback speed that differs from a second playback speed of the second matching segment of the matching segments, wherein the first playback speed is determined at least in part on crowd noise in the first matching segment, and wherein the second matching segment is duplicated with different audio feeds in the media content for presentation when a time to play the first matching segment and the second matching segment is less than the input-specified time constraint.
Application claim 18 recites a non-transitory machine-readable medium, comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations, the operations comprising:
receiving an input criterion related to media source data;
accessing the media source data based on the input criterion, wherein the media source data includes broadcast media source data and a user-generated media source data recorded by an attendee at a live event;
determining matching scenes of the media source data, comprising evaluating the input criterion with respect to respective metadata information that describes respective scenes of the media source data, the matching scenes including a first matching scene of the broadcast media source data and a second matching scene of the user-generated media source data;
generating media playback content based on the matching scenes;
receiving instructions for presentation of the first matching scene of the matching scenes at a first playback speed that differs from a second playback speed of the second matching scene of the matching scenes, wherein the first playback speed is determined at least in part on crowd noise in the first matching scene; and
providing the media playback content to a media display device.
Patent claim 19 recites the non-transitory machine-readable medium of claim 18, wherein combining the matching segments into the media content for presentation according to the playback instructions comprises configuring the matching segments for playback based on at least one of: presentation order data for presenting the matching segments in a playback order determined via the presentation order data, playback speed data for presenting at least one matching segment at a playback speed determined via the playback speed data, transition data for transitioning from playback of the first matching segment to playback of the second matching segment based on the transition data, volume data for presenting the first matching segment in conjunction with audio accompanying the first matching segment at a first volume level based on the volume data, or annotation data for presenting the first matching segment with first annotated data based on a first descriptive metadata dataset associated with the first matching segment.
Application claim 19 recites the non-transitory machine-readable medium of claim 18, wherein the generating the media playback content based on the matching scenes comprises configuring the matching scenes for playback based on at least one of: presentation order data for presenting the matching scenes in a playback order determined via the presentation order data, playback speed data for presenting at least one matching scene at a playback speed determined via the playback speed data, transition data for transitioning from playback of the first matching scene to playback of the second matching scene based on the transition data, volume data for presenting the first matching scene in conjunction with audio accompanying the first matching scene at a first volume level based on the volume data, or annotation data for presenting the first matching scene with first annotated data based on a first descriptive metadata dataset associated with the first matching scene.
Patent claim 20 recites the non-transitory machine-readable medium of claim 18, wherein the operations further comprise modifying presentation of the media content based on a user request during the presentation.
Application claim 20 recites the non-transitory machine-readable medium of claim 18, wherein the operations further comprise modifying a presentation of the media playback content based on a user request.
Allowable Subject Matter
Claims 1-20 would be allowable if applicant overcomes the nonstatutory double patenting rejection set forth above.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GIRUMSEW WENDMAGEGN, whose telephone number is (571) 270-1118. The examiner can normally be reached 9:00 AM-7:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Thai Tran, can be reached at (571) 272-7382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GIRUMSEW WENDMAGEGN/Primary Examiner, Art Unit 2484