DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 10-11, 14, 23-24 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6, 13, 24-26, 29 of U.S. Patent No. 12,273,585. Although the claims at issue are not identical, they are not patentably distinct from each other.
Instant Application Claim 1 compared with US Pat. 12,273,585 Claim 1:

Instant: A system that improves navigation of video content, the system comprising: one or more processing devices; a network interface; and non-transitory memory that stores instructions that when executed by the one or more processing devices are configured to cause the system to perform operations comprising: receive first video content using the network interface; store the first video content;
'585: A method of providing improved navigation on a viewer device, the method comprising: displaying a first user interface on a user device; receiving over a network a first video uploaded via the first user interface at a first system, wherein the first system comprises multiple processors,

Instant: perform an analysis of the first video content using the one or more processing devices;
'585: performing image analysis on a plurality of frames in the first video using the first system comprising multiple cores

Instant: automatically generate descriptive text based at least in part on the analysis of the first video content;
'585: automatically generating descriptive text based on a result of said image analysis of the plurality of frames in the first video;

Instant: stream, using the network interface, the first video content to a viewer device associated with a viewer;
'585: streaming over the network, by the first system comprising multiple processors, the first video to the viewer device in a playback area,

Instant: cause, at least in part: a draggable control associated with the first video content to be displayed by the viewer device, wherein the draggable control is useable to indicate a position within the video content for beginning playback of the first video content and/or to browse frames in the first video content, and the automatically generated descriptive text to be displayed by the viewer device in association with the first video content;
'585: rendered on the display and enabling a draggable control to be displayed with the playback of the first video and overlaying the first video, wherein the draggable control comprises a draggable control useable to indicate where the viewer wants the video playback to begin or to browse frames in the first video; displaying on the viewer device the automatically generated descriptive text in association with the playback of the first video;

Instant: detect a viewer interaction with the automatically generated descriptive text; and cause, at least in part, a segment of the first video content corresponding to the interaction with the automatically generated descriptive text to be transmitted to the viewer device
'585: detecting a viewer interaction with the automatically generated descriptive text; and at least partly in response to the viewer interaction, initiating a navigation event whereby a playback of the first video is initiated at a desired segment.
Since claim 1 of the instant application is a broader recitation of claim 1 of U.S. Patent No. 12,273,585, it would have been obvious to one of ordinary skill in the art to modify claim 1 of the '585 patent to arrive at claim 1 of the instant application.
Claim 10 of the instant application corresponds to patent claim 6.
Claim 11 of the instant application corresponds to patent claim 13.
Claim 14 of the instant application corresponds to patent claim 29.
Claim 23 of the instant application corresponds to patent claim 24.
Claim 24 of the instant application corresponds to patent claim 26.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 11, 13-16, 24, 26 are rejected under 35 U.S.C. 103 as being unpatentable over Logan et al. (US Pub. 2003/0093790) in view of Geer, III et al. (US Pub. 2011/0112915), herein referenced as Logan and Geer, respectively.
Regarding claim 1, Logan discloses “A system that improves navigation of video content, the system comprising: one or more processing devices; a network interface; and non-transitory memory that stores instructions that when executed by the one or more processing devices are configured to cause the system to perform operations ([0007], [0043]-[0046], [0050], Fig. 1) comprising:
receive first video content using the network interface; store the first video content ([0045], Fig. 1, i.e., at the remote location, broadcast programming from a source 100 is received at 101 and may be processed immediately or saved in a storage unit 103 for later processing);
perform an analysis of the first video content using the one or more processing devices; automatically generate descriptive text based at least in part on the analysis of the first video content ([0155]-[0157], Fig. 1, i.e., closed caption text will be fed into a Natural Language Processing Engine (NLPE) in order to interpret the meaning of the material. When the system determines a change in topic, a marker is set. The system will also attempt to categorize the material and generate a short "slug" describing the material);
stream, using the network interface, the first video content to a viewer device associated with a viewer ([0050], [0052], Fig. 1, i.e., broadcast programming signals are received at 141 as programming content received from the remote location via the communications link 130);
… and the automatically generated descriptive text to be displayed by the viewer device in association with the first video content ([0007], [0046], [0312]-[0315], Figs. 3-4, i.e., a vertical list of the program's segments displayed on the left or right side of the video image);
detect a viewer interaction with the automatically generated descriptive text; and cause, at least in part, a segment of the first video content corresponding to the interaction with the automatically generated descriptive text to be transmitted to the viewer device.” ([0007], [0046], [0312]-[0315], Figs. 3-4, i.e., the viewer can choose a different segment to be viewed by selecting the text description of that segment on the displayed index listing).
Logan fails to explicitly disclose causing, at least in part: a draggable control associated with the first video content to be displayed by the viewer device, wherein the draggable control is useable to indicate a position within the video content for beginning playback of the first video content and/or to browse frames in the first video content.
Geer teaches the technique of causing, at least in part: a draggable control associated with the first video content to be displayed by the viewer device, wherein the draggable control is useable to indicate a position within the video content for beginning playback of the first video content and/or to browse frames in the first video content ([0039]-[0041], [0050], Fig. 3A, i.e., displaying a scrubber bar 306, wherein a user can slide the visual indicator 310 anywhere in the scrubber bar and control playback of the media content by moving forward or backward in the media content depending on the user’s action).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of causing, at least in part: a draggable control associated with the first video content to be displayed by the viewer device, wherein the draggable control is useable to indicate a position within the video content for beginning playback of the first video content and/or to browse frames in the first video content as taught by Geer, to improve the media playback with metadata system of Logan for the predictable result of allowing viewers to navigate and control media playback based on their preferences.
Regarding claim 2, Logan discloses “add segments within the first video content to a searchable index.” ([0271], [0331], [0336], i.e., metadata which describes programs and the segments which make up those programs may be advantageously stored in a relational, hierarchical or object-oriented database. When the metadata includes descriptive text, keyword searches can be performed to identify segments described with matching words).
Regarding claim 3, Logan discloses “enable segments within the first video content to be added on a searchable index hosted on a separate system.” ([0007], [0271], [0331], [0336], Fig. 1, i.e., video program segments retrieved from a mass storage device under the control of playlist metadata).
Regarding claim 11, Logan discloses “cause, at least in part, textual comments regarding the first video content from a plurality of viewers to be displayed by the viewer device at a same time as the first video content.” ([0137], [0244]-[0245], i.e., viewers can create comments, wherein comments can be displayed as closed-caption text or in a separate screen window during playback of segments).
Regarding claim 13, Logan discloses “cause the automatically generated text to be displayed on the viewer device, displaced from a first video playback area rendered on the viewer device and without overlaying the first video content in the first video playback area, wherein, in response to the interaction with the automatically generated descriptive text, the segment of the first video content corresponding to the interaction with the automatically generated descriptive text is transmitted to the viewer device.” ([0312]-[0315], Figs. 4-5, i.e., when the segment guide is launched, the portion of the display showing content is shrunk as shown at 405 in FIG. 4, providing room at the right for an index list of segment labels at 410 and an "information pane" at 412 below the content window 405. The selected segment is played in response to the viewer's movement of the highlighting using the remote's UP and Down cursor buttons).
Regarding claim 14, Logan discloses “A computer implemented method ([0007], [0043]-[0046], [0050], Fig. 1), the method comprising:
receiving at a computer system comprising a memory device first video content; storing the first video content into memory ([0045], Fig. 1, i.e., at the remote location, broadcast programming from a source 100 is received at 101 and may be processed immediately or saved in a storage unit 103 for later processing);
performing an analysis of the first video content using the computer system; automatically generating descriptive text based at least in part on the analysis of the first video content ([0155]-[0157], Fig. 1, i.e., closed caption text will be fed into a Natural Language Processing Engine (NLPE) in order to interpret the meaning of the material. When the system determines a change in topic, a marker is set. The system will also attempt to categorize the material and generate a short "slug" describing the material);
streaming the first video content over a network to a viewer device associated with a viewer ([0050], [0052], Fig. 1, i.e., broadcast programming signals are received at 141 as programming content received from the remote location via the communications link 130);
…and the automatically generated descriptive text to be displayed by the viewer device in association with the first video content ([0007], [0046], [0312]-[0315], Figs. 3-4, i.e., a vertical list of the program's segments displayed on the left or right side of the video image);
detecting a viewer interaction with the automatically generated descriptive text; and causing, at least in part, a segment of the first video content corresponding to the interaction with the automatically generated descriptive text to be transmitted to the viewer device.” ([0007], [0046], [0312]-[0315], Figs. 3-4, i.e., the viewer can choose a different segment to be viewed by selecting the text description of that segment on the displayed index listing).
Logan fails to explicitly disclose causing, at least in part: a draggable control associated with the first video content to be displayed by the viewer device, wherein the draggable control is useable to indicate a position within the video content for beginning playback of the first video content and/or to browse frames in the first video content.
Geer teaches the technique of causing, at least in part: a draggable control associated with the first video content to be displayed by the viewer device, wherein the draggable control is useable to indicate a position within the video content for beginning playback of the first video content and/or to browse frames in the first video content ([0039]-[0041], [0050], Fig. 3A, i.e., displaying a scrubber bar 306, wherein a user can slide the visual indicator 310 anywhere in the scrubber bar and control playback of the media content by moving forward or backward in the media content depending on the user’s action).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of causing, at least in part: a draggable control associated with the first video content to be displayed by the viewer device, wherein the draggable control is useable to indicate a position within the video content for beginning playback of the first video content and/or to browse frames in the first video content as taught by Geer, to improve the media playback with metadata system of Logan for the predictable result of allowing viewers to navigate and control media playback based on their preferences.
Regarding claim 15, claim 15 is interpreted similarly to claim 2 and is thus rejected for the reasons set forth in the rejection of claim 2.
Regarding claim 16, claim 16 is interpreted similarly to claim 3 and is thus rejected for the reasons set forth in the rejection of claim 3.
Regarding claim 24, claim 24 is interpreted similarly to claim 11 and is thus rejected for the reasons set forth in the rejection of claim 11.
Regarding claim 26, claim 26 is interpreted similarly to claim 13 and is thus rejected for the reasons set forth in the rejection of claim 13.
Claims 4-7, 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Logan in view of Geer and further in view of Wattenhofer et al. (US Pat. 8,806,000), herein referenced as Wattenhofer.
Regarding claim 4, Logan discloses “enable a search index corresponding to a plurality of videos comprising video segments to be stored on a search system…” ([0271], [0331], [0336], i.e., metadata which describes programs and the segments which make up those programs may be advantageously stored in a relational, hierarchical or object-oriented database. When the metadata includes descriptive text, keyword searches can be performed to identify segments described with matching words).
The combination fails to explicitly disclose receive a search query from the viewer device via a search user interface; utilize the search system storing the search index to identify a matching video segment corresponding to the search query using tags associated with respective video segments; enable time data and automatically generated text corresponding to the matching video segment and an image from the matching video segment to be presented on the viewer device; and enable playing of the matching video segment by the viewer device at least partly in response to a viewer interaction with the image from the matching video segment or with the automatically generated text corresponding to the matching video segment.
Wattenhofer teaches the technique of receiving a search query from the viewer device via a search user interface; utilize the search system storing the search index to identify a matching video segment corresponding to the search query using tags associated with respective video segments; enable time data and … text corresponding to the matching video segment and an image from the matching video segment to be presented on the viewer device; and enable playing of the matching video segment by the viewer device at least partly in response to a viewer interaction with the image from the matching video segment or with the … text corresponding to the matching video segment (Col. 3 lines 38-62, Figs. 3-4, i.e., users can search for videos hosted on the video hosting server based on a video's title, description, tags, author, category, comment, and so forth. Users select a video for playback, and a video playback interface is displayed including playback time and duration in addition to the video's title, description, tags, and various other metadata).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of receiving a search query from the viewer device via a search user interface; utilize the search system storing the search index to identify a matching video segment corresponding to the search query using tags associated with respective video segments; enable time data and … text corresponding to the matching video segment and an image from the matching video segment to be presented on the viewer device; and enable playing of the matching video segment by the viewer device at least partly in response to a viewer interaction with the image from the matching video segment or with the … text corresponding to the matching video segment as taught by Wattenhofer, to improve the media playback with metadata system of Logan for the predictable result of allowing users to browse and search for media content for playback.
Regarding claim 5, the combination fails to explicitly disclose “receive a search query via the network interface from the viewer device; utilize a search engine to identify one or more matching video segments corresponding to the search query using automatically generated descriptive text associated with respective video segments and to generate search results; and transmit at least a portion of the search results to the viewer device, the search results comprising respective images and respective automatically generated descriptive text associated with the identified one or more matching video segments.”
Wattenhofer teaches the technique of receiving a search query via the network interface from the viewer device; utilize a search engine to identify one or more matching video segments corresponding to the search query using … descriptive text associated with respective video segments and to generate search results; and transmit at least a portion of the search results to the viewer device, the search results comprising respective images and respective … descriptive text associated with the identified one or more matching video segments (Col. 3 lines 38-62, Figs. 3-4, i.e., users may use a search engine to perform a keyword search for videos hosted on the video hosting server and select a video for playback).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of receiving a search query via the network interface from the viewer device; utilize a search engine to identify one or more matching video segments corresponding to the search query using … descriptive text associated with respective video segments and to generate search results; and transmit at least a portion of the search results to the viewer device, the search results comprising respective images and respective … descriptive text associated with the identified one or more matching video segments as taught by Wattenhofer, to improve the media playback with metadata system of Logan for the predictable result of allowing users to browse and search for media content for playback.
Regarding claim 6, the combination fails to explicitly disclose receive a search query via the network interface from the viewer device; utilize a search engine to identify one or more matching video segments corresponding to the search query using automatically generated descriptive text associated with respective video segments and to generate search results; and transmit at least a portion of the search results to the viewer device, the search results comprising respective images and respective automatically generated descriptive text associated with the identified one or more matching video segments; and cause, at least in part, a video in the search results to be streamed to the viewer device in response to the viewer interacting with a respective image in the search results.
Wattenhofer teaches the technique of receiving a search query via the network interface from the viewer device; utilize a search engine to identify one or more matching video segments corresponding to the search query using … descriptive text associated with respective video segments and to generate search results; and transmit at least a portion of the search results to the viewer device, the search results comprising respective images and respective … descriptive text associated with the identified one or more matching video segments; and cause, at least in part, a video in the search results to be streamed to the viewer device in response to the viewer interacting with a respective image in the search results (Col. 3 lines 38-62, Figs. 3-4, i.e., users may use a search engine to perform a keyword search for videos hosted on the video hosting server and select a video for playback. The search results display thumbnails of the videos matching the search query).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of receiving a search query via the network interface from the viewer device; utilize a search engine to identify one or more matching video segments corresponding to the search query using … descriptive text associated with respective video segments and to generate search results; and transmit at least a portion of the search results to the viewer device, the search results comprising respective images and respective … descriptive text associated with the identified one or more matching video segments; and cause, at least in part, a video in the search results to be streamed to the viewer device in response to the viewer interacting with a respective image in the search results as taught by Wattenhofer, to improve the media playback with metadata system of Logan for the predictable result of allowing users to browse and search for media content for playback.
Regarding claim 7, the combination fails to explicitly disclose “cause, at least in part, a search interface to be displayed by the viewer device, the search interface comprising a search query field; at least partly in in response to receiving a search query entered into the search query field, utilize a search engine to identify one or more segments of video content that match the search query using: automatically generated descriptive text associated with respective segments in the one or more segments, and/or user-provided tags associated with the respective segments in the one or more segments; generate search results comprising the identified one or more segments, wherein a given search result includes: an image representative of an identified segment; time data indicating a location of the identified segment within a corresponding video; cause, at least in part, at least a portion of the search results to be displayed by the viewer device; and in response to the viewer selecting a search result, in the search results, corresponding to a first segment, cause at least in part a playback of the first segment in association with a display of the corresponding time data.”
Wattenhofer teaches the technique of causing, at least in part, a search interface to be displayed by the viewer device, the search interface comprising a search query field; at least partly in response to receiving a search query entered into the search query field, utilize a search engine to identify one or more segments of video content that match the search query using: … descriptive text associated with respective segments in the one or more segments, and/or user-provided tags associated with the respective segments in the one or more segments; generate search results comprising the identified one or more segments, wherein a given search result includes: an image representative of an identified segment; time data indicating a location of the identified segment within a corresponding video; cause, at least in part, at least a portion of the search results to be displayed by the viewer device; and in response to the viewer selecting a search result, in the search results, corresponding to a first segment, cause at least in part a playback of the first segment in association with a display of the corresponding time data (Col. 3 lines 38-62, Figs. 3-4, i.e., users can search for videos hosted on the video hosting server based on a video's title, description, tags, author, category, comment, and so forth. The search results display thumbnails of the videos matching the search query. Users select a video for playback, and a video playback interface is displayed including playback time and duration in addition to the video's title, description, tags, and various other metadata).
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of causing, at least in part, a search interface to be displayed by the viewer device, the search interface comprising a search query field; at least partly in response to receiving a search query entered into the search query field, utilize a search engine to identify one or more segments of video content that match the search query using: … descriptive text associated with respective segments in the one or more segments, and/or user-provided tags associated with the respective segments in the one or more segments; generate search results comprising the identified one or more segments, wherein a given search result includes: an image representative of an identified segment; time data indicating a location of the identified segment within a corresponding video; cause, at least in part, at least a portion of the search results to be displayed by the viewer device; and in response to the viewer selecting a search result, in the search results, corresponding to a first segment, cause at least in part a playback of the first segment in association with a display of the corresponding time data as taught by Wattenhofer, to improve the media playback with metadata system of Logan for the predictable result of allowing users to browse and search for media content for playback.
Regarding claim 17, claim 17 is interpreted similarly to claim 4 and is thus rejected for the reasons set forth in the rejection of claim 4.
Regarding claim 18, claim 18 is interpreted similarly to claim 5 and is thus rejected for the reasons set forth in the rejection of claim 5.
Regarding claim 19, claim 19 is interpreted similarly to claim 6 and is thus rejected for the reasons set forth in the rejection of claim 6.
Regarding claim 20, claim 20 is interpreted similarly to claim 7 and is thus rejected for the reasons set forth in the rejection of claim 7.
Claims 8-9, 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Logan in view of Geer and further in view of Bryan et al. (US Pat. 8,196,168), herein referenced as Bryan.
Regarding claim 8, Logan discloses “receive inputs from a plurality of viewers regarding the first video content…” ([0040], [0048], [0057], [0178]-[0179], i.e., users may create descriptive metadata. In addition, users may create bookmarks by clicking a button as they watch programming). The combination fails to explicitly disclose generate graph data using the inputs from the plurality of viewers regarding the first video content; and cause a graph corresponding to the graph data to be rendered on the viewer device.
Bryan teaches the technique of generating graph data using the inputs from the plurality of viewers regarding the first video content; and cause a graph corresponding to the graph data to be rendered on the viewer device (Col. 2 lines 13-29, Col. 11 lines 31-62, Figs. 7-9, i.e., hot-spot indicators 730, cold-spot indicators 740 and a Hot-Spot/Cold-Spot magnitude indicator 830 provide graphical representations of user inputs). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of generating graph data using the inputs from the plurality of viewers regarding the first video content; and cause a graph corresponding to the graph data to be rendered on the viewer device as taught by Bryan, to improve the media playback with metadata system of Logan for the predictable result of focusing the attention of the viewer on only the most important or interesting segments of a program, and effectively avoid program segments that have little appeal (Col. 1 lines 42-58).
Regarding claim 9, Logan discloses “receive inputs from a plurality of viewers regarding respective segments of the first video content…” ([0040], [0048], [0057], [0178]-[0179], i.e., a user may create descriptive metadata; in addition, a user may create bookmarks by clicking a button while watching programming).
The combination fails to explicitly disclose “generate graph data using the inputs from the plurality of viewers regarding the respective segments of the first video content; and cause a graph corresponding to the graph data generated using the inputs from the plurality of viewers regarding the respective segments of the first video content to be rendered on the viewer device.”
Bryan teaches the technique of generating graph data using the inputs from the plurality of viewers regarding the respective segments of the first video content, and causing a graph corresponding to the graph data generated using the inputs from the plurality of viewers regarding the respective segments of the first video content to be rendered on the viewer device (Col. 2 lines 13-29, Col. 11 lines 31-62, Figs. 7-9, i.e., hot-spot indicators 730, cold-spot indicators 740, and a Hot-Spot/Cold-Spot magnitude indicator 830 provide graphical representations of user inputs). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of generating graph data using the inputs from the plurality of viewers regarding the respective segments of the first video content, and causing a graph corresponding to the graph data generated using the inputs from the plurality of viewers regarding the respective segments of the first video content to be rendered on the viewer device, as taught by Bryan, to improve the media playback with metadata system of Logan for the predictable result of focusing the attention of the viewer on only the most important or interesting segments of a program and effectively avoiding program segments that have little appeal (Col. 1 lines 42-58).
Regarding claim 21, claim 21 recites limitations similar to those of claim 8 and is therefore rejected for the reasons set forth in the rejection of claim 8.
Regarding claim 22, claim 22 recites limitations similar to those of claim 9 and is therefore rejected for the reasons set forth in the rejection of claim 9.
Claims 10 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Logan in view of Geer, and further in view of Folgner et al. (US Pub. 2012/0096357), herein referenced as Folgner.
Regarding claim 10, the combination fails to explicitly disclose “cause, at least in part, a user interface to be rendered on the viewer device enabling the viewer to share, via a link, a first segment in the first video.”
Folgner teaches the technique of causing, at least in part, a user interface to be rendered on the viewer device enabling the viewer to share, via a link, a first segment in the first video ([0004], [0015], [0023], [0028], [0059], Figs. 1-2, 5, i.e., users can select a snapshot or a portion of media clips (e.g., video and/or audio clips) and share them with other users). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of causing, at least in part, a user interface to be rendered on the viewer device enabling the viewer to share, via a link, a first segment in the first video as taught by Folgner, to improve the media playback with metadata system of Logan for the predictable result of allowing users to share media clips along with their reactions and comments with other users ([0004]).
Regarding claim 23, claim 23 recites limitations similar to those of claim 10 and is therefore rejected for the reasons set forth in the rejection of claim 10.
Claims 12 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Logan in view of Geer, and further in view of Pan et al. (US Pat. 8,700,714), herein referenced as Pan.
Regarding claim 12, the combination fails to explicitly disclose “cause, at least in part, images corresponding to other videos having a plurality of segments with associated automatically generated descriptive text to be displayed by the viewer device with the first video content.”
Pan teaches the technique of causing, at least in part, images corresponding to other videos having a plurality of segments with associated … descriptive text to be displayed by the viewer device with the first video content (Col. 8 line 63-Col. 9 line 64, Fig. 7A, i.e., the stream library of the stream “Video Stream Pilot” visually represents the videos added by the three stream community members in the display area 710). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the technique of causing, at least in part, images corresponding to other videos having a plurality of segments with associated … descriptive text to be displayed by the viewer device with the first video content, as taught by Pan, to improve the media playback with metadata system of Logan for the predictable result of providing viewers the convenience of browsing additional videos while watching a video.
Regarding claim 25, claim 25 recites limitations similar to those of claim 12 and is therefore rejected for the reasons set forth in the rejection of claim 12.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alexander Q Huerta whose telephone number is (571)270-3582. The examiner can normally be reached M-F 9:00 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton, can be reached at (571)272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER Q HUERTA/Primary Examiner, Art Unit 2425 November 24, 2025