DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-3 and 5-7 have been considered but are moot in view of the new ground of rejection discussed below.
Applicant argues that the combined prior art teaches only segment compilation via a path identifier (Godsey) or static start-point addressing (Itoh), and that the references neither disclose the necessary virtualization structure nor provide the motivation to adapt their technologies to perform the specialized threshold-based switching between a header and a source file, thus preserving the non-obviousness of the claimed method (page 7). This argument is respectfully traversed.
The amended claims do not recite a “virtualization structure nor provide the motivation to adapt their technologies to perform this specialized threshold-based switching between a header and a source file”. The prior art discloses the recited limitations, as further discussed below.
Applicant argues that Godsey does not disclose the specific claimed mechanism of utilizing a client-provided address offset for a snapshot header file (file A) and then dynamically applying the same offset to redirect the retrieval to the original video file (file B) if the offset value is larger than the size of the header itself; that Godsey’s focus is on accessing pre-assembled segments via unique paths, not on creating a virtual proxy file whose internal size acts as a conditional switch for redirecting content requests to a separate, underlying source file; and that the secondary reference, Itoh, equally fails to disclose the claimed conditional cross-file addressing scheme. Applicant contends that nowhere does Itoh teach or suggest that an address offset supplied by a client and intended for a header file should be compared against the header file size, and subsequently used as an index to retrieve data from a completely different, raw content source file (file B) if the offset exceeds that size (pages 7-8). This argument is respectfully traversed.
Again, the claims do not recite the limitation of a “mechanism of utilizing a client-provided address offset for a snapshot header file (file A) and then dynamically applying the same offset to redirect the retrieval to the original video file (file B) if the offset value is larger than the size of the header itself”. Instead, amended claim 1 recites “wherein, a first content located within the second header information is provided to the client when the second read instruction requests for the snapshot header file with an address offset not greater than a size of the second header information, a second content within the original video file is provided to the client when the second read instruction requests for the snapshot header file with the address offset greater than the size of the second header information, and the second content is determined based on a file specified by a metadata included in the snapshot header file and the address offset.” This limitation is interpreted as discussed with respect to previously recited (now canceled) claim 8. The recitation that the “first content…is provided when the second read instruction requests…” is read on a response to a request that does not exceed a particular point of a clip file (for example, a 20-second file or newly tuned/live content as described in Godsey, or within (not greater than) clip timeline duration file 95e), in which case the first content of the live content, of the 2-second file, or of the content within clip timeline duration file 95e is provided. The recitation that “the second content” is provided to the client when the second request for the file has an address offset greater than the size of the second header information is read on second content that is not in the particular file or clip, provided when the request carries time information that exceeds the size of the particular 2-second file or clip (see, including but not limited to, Godsey: paragraphs 0026, 0033, 0037, 0044-0046, 0048; Itoh: paragraphs 0165-0167).
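For purposes of illustration only, the offset-based switching recited in amended claim 1, as interpreted above, can be sketched in a few lines of code. This sketch is not drawn from any cited reference or from the application; the function name, parameters, and in-memory file map are all hypothetical, and it merely models a read of a snapshot header file that returns header bytes when the offset is not greater than the header size and otherwise redirects into the original file named in the header metadata.

```python
# Hypothetical sketch of the claimed offset-based switching; all names
# here are illustrative assumptions, not any party's implementation.

def read_snapshot(header: bytes, metadata: dict, files: dict,
                  offset: int, length: int) -> bytes:
    """Serve `length` bytes at `offset` for a snapshot-header read request."""
    header_size = len(header)
    if offset <= header_size:
        # First content: the request falls within the second header information.
        return header[offset:offset + length]
    # Second content: redirect into the file specified by the header metadata,
    # using the portion of the offset that lies beyond the header.
    source = files[metadata["original_file"]]
    start = offset - header_size
    return source[start:start + length]
```

Under this reading, the same offset value conditionally selects between the header file and the underlying source file, with the header size acting as the threshold.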
Therefore, the combination of the prior art discloses all limitations in amended claims.
See also Gupta (US 20170195746), which teaches providing a first content located within second header information when a second read instruction requests a file with an address offset not greater than a size of the second header information (the response to the request does not exceed the second header information, e.g., is not greater than start time 120 = 24 seconds); a second content within the original video file is provided to the client when the second read instruction requests the header file with the address offset greater than the size of the second header information; and the second content is determined based on a file specified by metadata included in the header file and the address offset (e.g., providing second content starting from the beginning (0 seconds/advertisement portion 302) within the original video file to the client when a second request for the header file has an address offset greater than the size of the header information at 20 seconds (advertisement portion 308), the second content being determined based on a file specified by metadata included in the header information and the address offset associated with each portion 302, 304, 306, 308 – see, including but not limited to, figures 3, 7-8, paragraphs 0018, 0043-0046).
For the reasons given above, rejections of claims 1-3 and 5-7 are discussed below.
Claims 4 and 8 have been canceled.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5, 7 are rejected under 35 U.S.C. 103 as being unpatentable over Godsey et al. (US 20230041829) in view of Itoh et al. (US 20090080509).
Regarding claim 5, Godsey discloses a system for generating snapshot of video under recording (system for generating images/pictures of live video – see figures 1-2b, paragraphs 0027, 0041, 0044), comprising:
a server storing an original video file, wherein the original video file comprises a first header information, and a first identifier (a server in CDN 110 or content server being adapted for storing original video file, wherein the original video file comprises a first header information and first file/manifest name or identifier – see include, but are not limited to, figures 1, 2B-3D, paragraphs 0012, 0027, 0041, 0044, 0084, 0086); and
a client electrically coupled to the server, wherein, when the original video file is under recording and a first read instruction is received by the server for requesting the original video file, the server retrieves the first header information, replaces a first identifier stored in the first header information with a second identifier to generate a second header information of a snapshot header file (a client computing device 50 electronically coupled to the server in CDN 110, wherein, when the original video file of live media is being recorded and a read/playback request is received by the server for requesting the original file, the server retrieves the first header information and replaces/modifies first header information stored in the file/manifest with a second identifier/information/bits to generate a second header information of a frame/image header file based on a time-shifted playback point/frame of the live media content – see, including but not limited to, figures 2B-3B, 3E-4B, paragraphs 0009, 0012, 0023-0024, 0037-0038, 0040, 0044-0046, 0047, 0051, 0056, 0073, 0084, 0086); and
when the snapshot header file is requested by the client with a second read instruction, the server generates a snapshot video file and providing the snapshot video file to the client, wherein, a header information of the snapshot video file is generated in accordance with the second header information, and a snapshot content of the snapshot video file is generated by retrieving a part of the original video file defined by the snapshot header file (when an image/frame of the header file is requested/selected by the client with a subsequent or second request, the server generates a video file/unique copy of the video starting from the selected image/point and provides the video file of the unique copy of the live media content or time-shifted playback associated with the live content, wherein a header information of the video file is generated according to the header information of the unique copy selected by the client, by retrieving a part of the original video file defined in the header file starting from the selected image/frame or from a particular point in the past of the time-shifted playback of the live content – see, including but not limited to, figures 3A-4B, paragraphs 0009, 0012, 0022-0024, 0037-0038, 0040, 0044-0046, 0047, 0051, 0056, 0073, 0084, 0086),
wherein, a first content located within the second header information is provided to the client when the second read instruction requests for the snapshot header file with an address offset not greater than a size of the second header information, a second content within the original video file is provided to the client when the second read instruction requests for the snapshot header file with the address offset greater than the size of the second header information, and the second content is determined based on a file specified by a metadata included in the snapshot header file and the address offset (see discussion in the “Response to Arguments” above and, including but not limited to, paragraphs 0026, 0033, 0037, 0044-0046, 0048).
Godsey does not explicitly disclose a first unique material identifier.
Additionally and/or alternatively, Itoh discloses a system comprises:
a server storing an original video file (a server with removable mediums 112 for storing an original video file(s) – see include, but are not limited to, figures 1-2, 8), wherein the original video file comprises a first header information, and a first unique material identifier (wherein the original video file/clip comprises a first header information and a first unique material identifier (UMID) – see include, but are not limited to, figures 6, 8-10, 12, 21, 23, 29-30, paragraphs 0134, 0144, 0146, 0150, 0196); and
when the original video file is under recording and a first read instruction is received by the server for requesting the original video file, the server retrieves the first header information, replaces a first unique material identifier stored in the first header information with a second unique material identifier to generate a second header information of a snapshot header file; and
when the snapshot header file is requested by the client with a second read instruction, the server generates a snapshot video file and providing the snapshot video file to the client, wherein, a header information of the snapshot video file is generated in accordance with the second header information, and a snapshot content of the snapshot video file is generated by retrieving a part of the original video file defined by the snapshot header file (Itoh replaces or modifies a first UMID stored in the first header information with a second UMID for a clip based on a different type, frame, etc., and, when a video file/clip is requested, the server generates a video file/video clip and provides the video file/clip of the video content/shot, wherein the header information of the video file/clip is generated in accordance with second header information and a content of the video file/clip is generated by retrieving a portion/section of the original video file defined by the header file/clip – see, including but not limited to, figures 6, 8-10, 12, 21, 23, 29-30, paragraphs 0134, 0144, 0146, 0150, 0196);
wherein, a first content located within the second header information is provided to the client when the second read instruction requests for the snapshot header file with an address offset not greater than a size of the second header information, a second content within the original video file is provided to the client when the second read instruction requests for the snapshot header file with the address offset greater than the size of the second header information, and the second content is determined based on a file specified by a metadata included in the snapshot header file and the address offset (see discussion in the “Response to Arguments” above and, including but not limited to, Itoh: paragraphs 0091, 0093, 0163, 0165-0167, 0170, 0185, 0189, 0213, figures 8-13, 16-17, 20, 23, 27-30).
See also the disclosure of Gupta as explained in the “Response to Arguments” above.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Godsey with the teachings that the original video file comprises a first unique material identifier and that the first unique material identifier is replaced with a second unique material identifier, as taught by Itoh, in order to yield the predictable result of easily identifying or managing content in a given file (see, for example, Itoh: paragraphs 0001, 0014, 0144).
See also Suma (US 20120191666) for its teaching of a unique material identifier (UMID) included in headers, replacing a first UMID with a second UMID, and a header file comprising a number of frames, size, and time duration (paragraphs 0037-0039, 0043, 0050, claims 4-5).
Regarding claim 7, Godsey in view of Itoh discloses the system according to claim 5, wherein the metadata comprises a storage path where the original video file is stored in the server (metadata comprises a storage path/URL/address where the original video file is stored in the server – see include, but are not limited to, Godsey: paragraphs 0026, 0037, 0067; Itoh: paragraphs 0136, 0153, 0183, figures 8-10).
Regarding claim 1, limitations of a method that correspond to the limitations of a system in claim 5 are analyzed as discussed in the rejection of claim 5 above. Particularly, Godsey in view of Itoh discloses a method for generating snapshot of video under recording, which provides a current content of an original video file when a first read instruction is received while recording the original video, comprising:
retrieving a header information of the original video file stored in a server as a first header information when the first read instruction requesting the original video file is received while recording the original video file;
replacing a first unique material identifier stored in the first header information with a second unique material identifier to form a second header information; storing the second header information as a header information of a snapshot header file; and
generating a snapshot video file and providing the snapshot video file to a client when the snapshot header file is requested by a second read instruction, wherein, a header information of the snapshot video file is generated in accordance with the second header information, and a snapshot content of the snapshot video file is generated by retrieving a part of the original video file defined by the snapshot header file;
wherein, a first content located within the second header information is provided to the client when the second read instruction requests for the snapshot header file with an address offset not greater than a size of the second header information, a second content within the original video file is provided to the client when the second read instruction requests for the snapshot header file with the address offset greater than the size of the second header information, and the second content is determined based on a file specified by a metadata included in the snapshot header file and the address offset (see similar discussion in the rejection of claim 5 above).
Regarding claim 3, the additional limitations of the method that correspond to the additional limitations of the system in claim 7 are analyzed as discussed in the rejection of claim 7. Particularly, Godsey in view of Itoh discloses the method according to claim 1, wherein the metadata comprises a storage path where the original video file is stored in the server (see the similar discussion in the rejection of claim 7).
Claims 2 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Godsey et al. (US 20230041829) in view of Itoh et al. (US 20090080509) as applied to claim 1 or claim 5 above, and further in view of Suma (US 20120191666).
Regarding claim 6, Godsey in view of Itoh discloses the system according to claim 5, wherein the second header information includes a time length of the current content and a size of the current content (time length/duration/interval and size of the content of a clip/file – see, including but not limited to, Godsey: paragraphs 0026, 0037, 0041, 0044, 0046, 0048, 0050, 0073, 0086; Itoh: figures 9-16, 20, 23, 28, paragraphs 0091, 0134, 0136, 0152, 0160-0161, 0172, 0178, 0284), and a number/index of frames in a video clip/file (see, including but not limited to, Itoh: figures 11-16, 23).
However, Godsey does not explicitly disclose that the second header information includes a number of frames of the current content.
Suma discloses that the second header information includes a time length of the current content, a size of the current content, and a number of frames of the current content (see paragraphs 0038-0039, claims 4-5).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Godsey with the teaching that the header information includes a number of frames, as taught by Suma, in order to yield the predictable result of easily identifying a number of frames in a file and recovering 3D data (see paragraphs 0003, 0037-0038, claim 4 or claim 5).
Regarding claim 2, the additional limitations of the method that correspond to the additional limitations of the system in claim 6 are analyzed as discussed in the rejection of claim 6.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gupta (US 20170195746) discloses controlling start times at which skippable video advertisements begin playback in a digital medium environment.
Rabinowitz et al. (US 20240185891) discloses common timeline processing for unique manifests.
Gupta (US 7313808) discloses browsing continuous multimedia content.
O’Connor et al. (US 20080155627) discloses systems and methods for searching for and presenting video and audio.
Watanabe (US 20120075968) discloses a recording apparatus, recording method, recording medium, reproducing apparatus, and reproducing method.
Benson et al. (US 10904639) discloses server-side fragment insertion and delivery.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AN SON PHI HUYNH whose telephone number is (571)272-7295. The examiner can normally be reached 9:00 am-6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NASSER M. GOODARZI can be reached at 571-272-4195. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AN SON P HUYNH/Primary Examiner, Art Unit 2426
December 29, 2025