Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-24 have been considered but are not persuasive. The following are Applicant’s arguments and the Examiner’s responses:
[Applicant’s argument reproduced as image: media_image1.png (greyscale)]
Examiner does not agree with Applicant’s argument, since Applicant admitted in the argument that Houh is generally directed to a method for timed tagging of content…individual segments of audio/video content and timing information defining the boundaries of each segment (page 9, second paragraph), and further that Houh teaches that a user could search by time to obtain the required content (page 9, third paragraph). Applicant further argued that the timed segment index is searched for a match to the keyword tag, not for a time or duration, and that a user can request content based on a tag, not based on a “desired duration.” However, the claim recites a “query associated with content, wherein the query indicates a desired duration.” Houh discloses a query associated with content (as Applicant admitted above), and the query indicates a desired duration (i.e., the preferred or intended length of time an event or process is planned to take). Examiner asserts that a query for content that indicates a desired duration means a query for content having a time, or for a segment having a length measured in time.
Houh discloses a query associated with content, wherein the query indicates a desired duration (i.e., “the timed segment index identifying content segments of the discrete media content and corresponding timing boundaries of the content segments; searching the timed segment index for a match to the at least one keyword tag, the match corresponding to at least one of the content segments identified in the segment index” (abstract); “generating or otherwise obtaining such enhanced metadata that identifies content segments and corresponding timing information from the underlying media content, a number of for audio/video search-driven applications can be implemented as described herein” (0005); and “The results of such media processing define timing boundaries of a number of content segment within a media file/stream, including timed word segments 105a, timed audio speech segments 105b, timed video segments 105c, timed non-speech audio segments 105d, timed marker segments 105e, as well as miscellaneous content attributes 105f, for example.” (0032). Examiner asserts that searching the timed segment index is searching for content with an indicated desired duration, as in the claimed invention, because the index includes segments with times; that is, the length of each segment of content is measured in time).
Houh also teaches the desired duration (i.e., “the metadata 220 includes descriptive parameters for each of the timed word segments 225, including a segment identifier 225a, the text of an individual word 225b, timing information defining the boundaries of that content segment (i.e., start offset 225c, end offset 225d, and/or duration 225e), and optionally a confidence score 225f.” (0035)). Furthermore, Houh discloses determining, based on the desired duration, an end boundary following the time associated with the first match (i.e., the same passage at 0035; “The method involves obtaining metadata associated with discrete media content that satisfies a search query. The metadata identifies a number of content segments and corresponding timing information derived from the underlying media content using one or more automated media processing techniques. Using the timing information identified in the metadata, a search result or ‘snippet’ can be generated that enables a user to arbitrarily select and commence playback of the underlying media content at any of the individual content segments” (0041); and “The start offset and the end offset/duration define the timing boundaries of the content segment. By referencing the enhanced metadata, the text of words spoken during that segment, if any, can be determined by identifying each of the word segments falling within the start and end offsets” (0054)). Therefore, Houh discloses all limitations recited in the claims, and Applicant’s arguments are not persuasive.
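To illustrate the mechanics discussed above, the following is a minimal illustrative sketch (not Houh’s actual implementation; all names and data are hypothetical) of how timed-segment metadata with start/end offsets, as described at 0035 and 0054, can be used to recover the words falling within a segment’s timing boundaries:

```python
# Illustrative sketch only. The fields mirror the metadata Houh describes
# (segment identifier, word text, start offset, end offset), but the field
# names and example values are hypothetical.

def words_in_boundaries(word_segments, start_offset, end_offset):
    """Return the words of the timed word segments falling within the
    timing boundaries [start_offset, end_offset]."""
    return [seg["word"] for seg in word_segments
            if seg["start"] >= start_offset and seg["end"] <= end_offset]

# Hypothetical timed word segments (offsets in seconds).
segments = [
    {"id": 1, "word": "world",    "start": 10.0, "end": 10.4},
    {"id": 2, "word": "baseball", "start": 10.4, "end": 11.0},
    {"id": 3, "word": "classic",  "start": 11.0, "end": 11.6},
    {"id": 4, "word": "steroids", "start": 42.0, "end": 42.8},
]

print(words_in_boundaries(segments, 10.0, 11.6))
# ['world', 'baseball', 'classic']
```

The sketch reflects the operation described at 0054: given start and end offsets, the words spoken during a segment are determined by identifying each word segment falling within those offsets.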
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Houh et al. (U.S. Pub. 2007/0112837 A1).
With respect to claim 1, Houh et al. discloses a method comprising:
receiving, by a computing device, and from a user device, a query associated with content, wherein the query indicates a desired duration (i.e., “a client 410 interfaces with a search engine module 420 for searching an index 430 for desired audio/video content” (0047); “searching the timed segment index for a match to the at least one keyword tag, the match corresponding to at least one of the content segments identified in the segment index; and generating a timed tag index that includes the at least one keyword tag and the timing boundaries corresponding to the least one content segment of the discrete media content containing the match.” (abstract), where searching the timed segment index “indicates a desired duration” as in the claimed invention; and “With the timed tag indexes 1250, 1255, a search engine, or other online system, can enable a user to request audio/video content based on a specific tag and, in return, provide such content in a manner such that the user can readily access the desired segment of content associated with the desired tag. For example, FIG. 11 is a diagram illustrating a system for accessing timed tagged media content from a search engine” (0089));
determining, based on the query, a first match in content metadata (i.e., “searching the timed segment index for a match to the at least one keyword tag, the match corresponding to at least one of the content segments identified in the segment index; and generating a timed tag index that includes the at least one keyword tag and the timing boundaries corresponding to the least one content segment of the discrete media content containing the match.” (abstract));
determining, based on the first match, a start boundary preceding a time associated with the first match (i.e., “searching the timed segment index for a match to the at least one keyword tag, the match corresponding to at least one of the content segments identified in the segment index; and generating a timed tag index that includes the at least one keyword tag and the timing boundaries corresponding to the least one content segment of the discrete media content containing the match.” (abstract); and “For each content segment, the information obtained preferably includes the location of the underlying media content (e.g. URL), a segment identifier, a segment type, a start offset, an end offset (or duration), the word or the group of words spoken during that segment, if any, and an optional confidence score.” (0050));
determining, based on the desired duration, an end boundary following the time associated with the first match (“With the timed tag indexes 1250, 1255, a search engine, or other online system, can enable a user to request audio/video content based on a specific tag and, in return, provide such content in a manner such that the user can readily access the desired segment of content associated with the desired tag. For example, FIG. 11 is a diagram illustrating a system for accessing timed tagged media content from a search engine” (0089); “The metadata 230 includes descriptive parameters for each of the timed audio speech segments 235, including a segment identifier 235a, an audio speech segment type 235b, timing information defining the boundaries of the content segment (e.g., start offset 235c, end offset 235d, and/or duration 235e), and optionally a confidence score 235f” (0036); “By referencing the enhanced metadata, the text of words spoken during that segment, if any, can be determined by identifying each of the word segments falling within the start and end offsets” (0054); “searching the timed segment index for a match to the at least one keyword tag, the match corresponding to at least one of the content segments identified in the segment index; and corresponding to the least one content segment of the discrete media content containing the mat” (0077); and “Each of the timed word segments 220 can include a segment identifier 225a, the text of an individual word 225b, timing information defining the boundaries of that content segment (i.e., start offset 225c, end offset 225d, and/or duration 225e), and optionally a confidence score 225f.” (0081); the timing information is the desired duration of the claimed invention; see also 0086); and
extracting, in response to the query and based on the start boundary and the end boundary, a portion of the content as a video clip for output at the user device (“The search engine can then generate instructions to present one or more of timed tagged segments of media content to the request via a browser interface 1340,” (0090), where the timed tagged segment of media is the video clip of the claimed invention; and “presenting a search result that enables a user to arbitrarily select and commence playback of the discrete media content at any of the content segments associated with the at least one keyword tag using the timing boundaries identified within the timed tag index.” (0011). Paragraphs 0085-0089 explain matching the duration (timing boundaries, i.e., the time as the desired duration as claimed) and matching the word (keyword tag, i.e., the query as claimed). Paragraph 0089 explains that the search engine enables a user to request (query) content based on a specific timed tag (keyword) via the timed tag indexes 1250, 1255 (figs. 10A, 10B), and the system provides a search result, or extracts such content as a clip, in a manner such that the user can readily access the desired segment or clip of content with start and end boundaries as timed tags (the desired duration as claimed) and the location of the clip (see fig. 10A or 10B with link). Further, “the toolbar 1350 includes a button 1352 for jumping to the timed segment associated with the tag ‘world baseball classic.’ and another button 1354 for jumping to the timed segment associated with the tag ‘steroids.’ Any number of different ways can be implemented for presented timed tagged segments to a user.” (0090), showing a clip “world baseball classic” and a clip “steroids”).
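For clarity of the claim mapping above, the steps of claim 1 can be paraphrased as the following minimal sketch. This is an illustration of the claim language only, under assumed data structures; it is not Houh’s code, and the function and field names (including the `lead_in` parameter) are hypothetical:

```python
def extract_clip(metadata, keyword, desired_duration, lead_in=1.0):
    """Sketch of claim 1: find the first metadata match for the query
    keyword, set a start boundary preceding the match, set an end boundary
    based on the desired duration, and return the clip boundaries."""
    # Determine, based on the query, a first match in the content metadata.
    first_match = next(seg for seg in metadata if seg["word"] == keyword)
    # Determine a start boundary preceding the time of the first match
    # (here, an assumed fixed lead-in, clamped at the start of the content).
    start_boundary = max(0.0, first_match["start"] - lead_in)
    # Determine, based on the desired duration, an end boundary following
    # the time associated with the first match.
    end_boundary = start_boundary + desired_duration
    # The portion of the content between the boundaries is the video clip.
    return (start_boundary, end_boundary)

# Hypothetical timed word segment (offsets in seconds).
metadata = [{"word": "steroids", "start": 42.0, "end": 42.8}]
print(extract_clip(metadata, "steroids", desired_duration=30.0))
# (41.0, 71.0)
```

The sketch makes explicit the point disputed above: the end boundary is a function of the desired duration indicated by the query, not merely of the matched tag.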
With respect to claim 2, Houh et al. discloses wherein determining the end boundary comprises determining, based on the query, the end boundary (paragraphs 0085 and 0086 explain matching the duration (timing boundaries, i.e., the time as the desired duration as claimed) and matching the word (keyword tag, i.e., the query as claimed); paragraph 0089 explains that the search engine enables a user to request (query) content based on a specific timed tag (keyword) via the timed tag indexes 1250, 1255 (figs. 10A, 10B), and the system provides a search result, or extracts such content as a clip, such that the user can readily access the desired segment or clip of content with start and end boundaries as timed tags (the desired duration as claimed) and the location of the clip (see fig. 10A or 10B with link); and “As shown in FIG. 10A, the timed tag index 1250 can be implemented as a table corresponding to a specific tag (e.g., "steroids"). The entries of the table can include identifiers (e.g., AV1 . . . AV5) for each of audio/video files associated with the specific tag, the timing boundaries of the audio/video content associated with the tag (e.g. "start= . . . ", "end= . . . ") and links or pointers to the audio/video files in the database or other remote locations (e.g., "location= . . . ")” (0087)).
With respect to claim 3, Houh et al. discloses further comprising: determining, based on the query, a second match in the content metadata, wherein determining the end boundary comprises determining, based on the second match in the content metadata, the end boundary following a time associated with the second match (i.e., figs. 10A and 10B show a second match, such as the clip “steroids”: “the toolbar 1350 includes a button 1352 for jumping to the timed segment associated with the tag ‘world baseball classic.’ and another button 1354 for jumping to the timed segment associated with the tag ‘steroids.’ Any number of different ways can be implemented for presented timed tagged segments to a user.” (0090), showing a clip “world baseball classic” and a clip “steroids”).
With respect to claim 4, Houh et al. discloses wherein determining, based on the first match, the start boundary preceding the time associated with the first match comprises: determining a time associated with a duration preceding the time associated with the first match (“generating a timed segment index of discrete media content, the timed segment index identifying content segments of the discrete media content and corresponding timing boundaries of the content segments; searching the timed segment index for a match to the at least one keyword tag, the match corresponding to at least one of the content segments identified in the segment index; and generating a timed tag index that includes the at least one keyword tag and the timing boundaries corresponding to the least one content segment of the discrete media content containing the match.” (abstract)); and determining a time associated with a content transition nearest the time associated with the duration (“As shown in FIG. 10A, the timed tag index 1250 can be implemented as a table corresponding to a specific tag (e.g., "steroids"). The entries of the table can include identifiers (e.g., AV1 . . . AV5) for each of audio/video files associated with the specific tag, the timing boundaries of the audio/video content associated with the tag (e.g. "start= . . . ", "end= . . . ") and links or pointers to the audio/video files in the database or other remote locations (e.g., "location= . . . ")” (0087)).
With respect to claim 5, Houh et al. discloses wherein determining the end boundary following the time associated with the first match comprises: determining a time associated with the desired duration (i.e., “a client 410 interfaces with a search engine module 420 for searching an index 430 for desired audio/video content” (0047); and “searching the timed segment index for a match to the at least one keyword tag, the match corresponding to at least one of the content segments identified in the segment index; and generating a timed tag index that includes the at least one keyword tag and the timing boundaries corresponding to the least one content segment of the discrete media content containing the match.” (abstract), where searching the timed segment index “indicates a desired duration” as in the claimed invention); and determining a time associated with a content transition nearest the time associated with the desired duration (figs. 2 and 9 show the times associated with content, such as the segment identifier and duration; and “Each of the timed word segments 220 can include a segment identifier 225a, the text of an individual word 225b, timing information defining the boundaries of that content segment (i.e., start offset 225c, end offset 225d, and/or duration 225e), and optionally a confidence score 225f.” (0081); the timing information is the desired duration as claimed; see also 0086).
With respect to claim 6, Houh et al. discloses wherein extracting, in response to the query and based on the start boundary and the end boundary, the portion of the content as the video clip further comprises storing, in association with the video clip, a content identifier, the start boundary, and the end boundary (“The search engine can then generate instructions to present one or more of timed tagged segments of media content to the request via a browser interface 1340,” (0090), where the timed tagged segment of media is the video clip of the claimed invention; and “presenting a search result that enables a user to arbitrarily select and commence playback of the discrete media content at any of the content segments associated with the at least one keyword tag using the timing boundaries identified within the timed tag index.” (0011). Paragraphs 0085-0089 explain matching the duration (timing boundaries, i.e., the desired duration as claimed) and matching the word (keyword tag, i.e., the query as claimed). Paragraph 0089 explains that the search engine enables a user to request (query) content based on a specific timed tag (keyword) via the timed tag indexes 1250, 1255 (figs. 10A, 10B), and the system provides a search result, or extracts such content as a clip, such that the user can readily access the desired segment or clip of content with start and end boundaries as timed tags (the desired duration as claimed) and the location of the clip (see fig. 10A or 10B with link); and “the toolbar 1350 includes a button 1352 for jumping to the timed segment associated with the tag ‘world baseball classic.’ and another button 1354 for jumping to the timed segment associated with the tag ‘steroids.’ Any number of different ways can be implemented for presented timed tagged segments to a user.” (0090), showing a clip “world baseball classic” and a clip “steroids”).
With respect to claims 7-24, claims 7-24 are rejected in the same manner as claims 1-6 above, since claims 7-24 are similar to claims 1-6 but in a different form.
Citation of Pertinent References
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The patent to Singe et al. (U.S. 10,057,651 B1) discloses video clip creation using social media.
The patent publication to Birnbaum et al. (U.S. Pub. 2016/0149956 A1) discloses a media management and sharing system.
The patent publication to Luks et al. (U.S. Pub. 2013/0151534 A1) discloses multimedia metadata analysis using an inverted index with temporal and segment identifying payloads.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG T VY whose telephone number is (571)272-1954. The examiner can normally be reached on M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi can be reached on (571)272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUNG T VY/Primary Examiner, Art Unit 2163 February 25, 2026