DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments and amendments received January 21, 2026 have been fully considered. With regard to 35 U.S.C. § 102, Applicant argues that the cited prior art does not disclose the claimed limitations (see Applicant's arguments, pages 7-13). These arguments correspond to claims 1-18, and specifically to the independent claims.
These arguments have been considered but are not persuasive, as addressed below. The rejection below explains how the art of record reads on the claimed invention, as well as the examiner's interpretation of the cited art in view of the presented claim set.
The 35 U.S.C. § 101 rejection of claim 14 is withdrawn based on the amendment to the claim; however, the examiner maintains the 35 U.S.C. § 102 rejection as outlined below. Furthermore, Kvochko teaches:
(16) Each block 124 in blockchain 123 includes information derived from a preceding block 124. For example, every block 124 in blockchain 123 includes a hash 142 of the previous block 124. By including hashes 142, blockchain 123 forms a chain of blocks 124 from a genesis block 124 to the current block 124c. Each block 124 is guaranteed to come after the previous block 124 chronologically because the previous block's hash 142 would otherwise not be known. In certain embodiments, blocks 124 in blockchain 123 may be linked together by identifying a preceding block with a cryptographic checksum (e.g. secure hash algorithm (SHA)-256) of its contents (e.g. the transaction and additional metadata). Links are formed by storing the cryptographic checksum identifier 142 of one block 124 in the metadata of another block 124, such that the former block 124 becomes the predecessor of the latter block 124. In this way, blocks 124 form a chain that can be navigated from block-to-block by retrieving the cryptographic checksum 142 of a particular block's predecessor from the particular block's own metadata. Each block 124 is computationally impractical to modify once it has been added to blockchain 123 because every block 124 after it would also have to be regenerated. These features protect data stored in blockchain 123 from being modified by bad actors, thereby providing security to the information stored in the blockchain. When a network node 120 publishes an entry (e.g. one or more transactions 140 in a block 124) in its ledger 122, the blockchain 123 for all other network nodes 120 in the blockchain network 118 is also updated with the new entry. Thus, data published in block chain 123 is available and accessible to every network node 120 with a ledger 122. This allows the data stored in the blocks 124 to be accessible for inspection and verification at any time by any device with a copy of ledger 122.
Kvochko, Col. 7 line 14 to Col. 8 line 3, emphasis added.
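By way of illustration only, the block-linking mechanism described above can be modeled as storing the SHA-256 checksum of one block's contents in the metadata of its successor. The following is a minimal Python sketch with hypothetical names; it is not code from the Kvochko reference.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 checksum of the given bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """Build a block whose metadata stores the predecessor's checksum,
    forming the block-to-block links described in the passage above."""
    return {"transactions": transactions, "metadata": {"prev_hash": prev_hash}}

def block_hash(block: dict) -> str:
    """Checksum over the block's contents (transactions plus metadata)."""
    return sha256_hex(json.dumps(block, sort_keys=True).encode())

genesis = make_block(["genesis"], prev_hash="")  # no predecessor
blk1 = make_block(["tx-a"], prev_hash=block_hash(genesis))
blk2 = make_block(["tx-b"], prev_hash=block_hash(blk1))

# Navigating backwards: blk2's metadata identifies blk1, whose metadata
# identifies the genesis block. Modifying blk1 changes its checksum, which
# would no longer match blk2's stored prev_hash, exposing the tampering.
assert blk2["metadata"]["prev_hash"] == block_hash(blk1)
```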
(30) After generating hash values 208 and/or 214, registration server 102 stores the hash values in blockchain 123. FIG. 2B presents an example blockchain 123. As illustrated in FIG. 2B, in certain embodiments, for each video segment 206 and/or audio segment 212, registration server 102 generates a blockchain transaction 140 that includes the hash value 208 generated from the video segment 206 and/or the hash value 214 generated from the audio segment 212. Registration server 102 then stores the blockchain transaction 140 as a block 124 in blockchain 123. For example, as illustrated in FIG. 2B, registration server 102 may store hash value 208a, generated from first video segment 206a, and/or hash value 214a, generated from first audio segment 212a, as first blockchain transaction 140a in block 124a. Similarly, registration server 102 may store hash value 208b, generated from second video segment 206b, and/or hash value 214b, generated from first audio segment 212b, as second blockchain transaction 140b in block 124b. Registration server 102 may also store hash value 208c, generated from third video segment 206c, and/or hash value 214c, generated from first audio segment 212c, as third blockchain transaction 140c in block 124c. In some embodiments, registration server 102 may store the hash values 208 and/or 214 for all video segments 206 and/or audio segments 212 as a single transaction 140, in a single block 124 of blockchain 123.
Kvochko, Col. 12 lines 23-46, emphasis added
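As a rough sketch of the storage pattern in paragraph (30), each blockchain transaction can pair a video-segment hash with its corresponding audio-segment hash; alternatively, all hash values can be stored as a single transaction in a single block. The structures below are hypothetical, with placeholder strings standing in for hash values 208a-208c and 214a-214c.

```python
# Placeholder hash strings; in Kvochko these are hash values generated
# from video segments 206a-206c and audio segments 212a-212c.
video_hashes = ["hash-208a", "hash-208b", "hash-208c"]
audio_hashes = ["hash-214a", "hash-214b", "hash-214c"]

# One transaction per segment pair, one block per transaction
# (analogous to transactions 140a-140c in blocks 124a-124c).
blocks = [
    {"transaction": {"video_hash": v, "audio_hash": a}}
    for v, a in zip(video_hashes, audio_hashes)
]

# Alternative noted in the same paragraph: all hash values stored as a
# single transaction in a single block of the blockchain.
single_block = {"transaction": {"video_hashes": video_hashes,
                                "audio_hashes": audio_hashes}}
```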
(36) FIG. 3 presents a flowchart illustrating the process by which registration server 102 registers a source video 114 with blockchain 123. In step 302 registration server 102 receives a source video 114 for registration. In step 304 registration server 102 forms a video segment 206 from the first N frames 202 of source video 114. N may be any number greater than or equal to one. In step 306, registration server 102 generates one or more hash values 208 and/or 214 from video segment 206. As an example, in certain embodiments, registration server 102 may generate a hash value 208 from the values of the set of pixels included in video segment 206. In some embodiments, registration server 102 may generate a hash value 214 from the values of the audio signals included in an audio segment 212 associated with video segment 206.
Kvochko, Col. 14 lines 46-60, emphasis added
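The registration flow of FIG. 3 (steps 302-306) can be sketched as follows, assuming each frame is available as raw pixel bytes. The sketch is hypothetical; Kvochko provides no source code.

```python
import hashlib

def register_source_video(frames: list, n: int) -> list:
    """Sketch of FIG. 3: split the source video into segments of N frames
    (step 304) and generate one hash value per segment from the pixel
    values (step 306). N may be any number greater than or equal to one."""
    assert n >= 1
    hashes = []
    for start in range(0, len(frames), n):
        segment = b"".join(frames[start:start + n])  # pixel bytes of the segment
        hashes.append(hashlib.sha256(segment).hexdigest())
    return hashes

# Toy input: ten frames of raw pixel bytes, segmented four frames at a time.
frames = [bytes([i]) * 16 for i in range(10)]
print(register_source_video(frames, n=4))
```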
As outlined above, Kvochko teaches that hash values are created for multiple video and audio segments [hash values 208a-208c for video segments 206a-206c and hash values 214a-214c for audio segments 212a-212c] and that the corresponding video and audio hash values are stored in the blockchain [blockchain transactions 140a-140c]. Applicant argues that Kvochko stores the video and audio hash values separately; the examiner disagrees, because Kvochko may store the video and audio hash values as a single transaction in a single block of the blockchain, as outlined above. Furthermore, on page 9 Applicant objects that the same paragraph is cited repeatedly; however, the rejection is based on columns, lines, and figures, not paragraphs, and the columns, lines, and figures cited in the rejection are correct because the claimed limitations fall within those cited portions. In addition, it is the examiner's position that Applicant has not yet submitted claims drawn to limitations that define the operation and apparatus of Applicant's disclosed invention in a manner that distinguishes over the prior art. Just as it is Applicant's right to continue to claim the invention as broadly as possible, it is the examiner's right to continue to interpret the claim language as broadly as is reasonable. The claimed terms "digest algorithm," "signature," and "inserting" are considered broader than the corresponding teachings of Kvochko. The cited art explicitly defines a process of developing signature-based authentication for video and audio, using hash values, that reaches beyond the claimed invention; as such, the examiner maintains the rejection. Further, in preparing any future amendment or response to this Office action, the examiner advises Applicant to consider the Kvochko reference beyond the cited columns, lines, and figures, since the claimed invention and the art of Kvochko appear substantially similar.
VI. PRIOR ART MUST BE CONSIDERED IN ITS ENTIRETY, INCLUDING DISCLOSURES THAT TEACH AWAY FROM THE CLAIMS
A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984) (Claims were directed to a process of producing a porous article by expanding shaped, unsintered, highly crystalline poly(tetrafluoroethylene) (PTFE) by stretching said PTFE at a 10% per second rate to more than five times the original length. The prior art teachings with regard to unsintered PTFE indicated the material does not respond to conventional plastics processing, and the material should be stretched slowly. A reference teaching rapid stretching of conventional plastic polypropylene with reduced crystallinity combined with a reference teaching stretching unsintered PTFE would not suggest rapid stretching of highly crystalline PTFE, in light of the disclosures in the art that teach away from the invention, i.e., that the conventional polypropylene should have reduced crystallinity before stretching, and that PTFE should be stretched slowly.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Kvochko (US 11,368,289).
Regarding claim 1, Kvochko teaches:
1. A method of digitally signing a video sequence and an audio sequence, the audio and the video sequences having been captured at the same time such that they represent the same captured scene, the video sequence comprising successive video portions and the audio sequence comprising successive audio portions, the method comprising: generating a first video digest by applying a digest algorithm to a first video portion of the video sequence,
(24) FIGS. 2A and 2B present an example of the process by which registration server 102 registers source video 114a in blockchain 123, thereby storing a record of source video 114a in blockchain 123. As illustrated in FIG. 2A, source video 114a includes a set of video frames 202. Each frame 202 includes a set of pixels 204. In response to receiving source video 114a, registration server 102 splits source video 114a into a series of video segments 206 and then uses hash function 130 to generate a hash value 208 for each video segment 206, based on the pixels 204 included in the video segment. As illustrated in FIG. 2B, registration server 102 then stores each hash value 208 as part of a transaction 216 in a block 124 of blockchain 123. In this manner, registration server 102 stores a record of each video segment 206 in blockchain 123, such that a later modification to a particular video segment of a copy of source video 114a may be detectable by generating a hash value of that video segment and comparing it to the hash value stored in blockchain 123.
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
generating a first audio digest by applying a digest algorithm to a first audio portion of the audio sequence,
(29) After splitting audio 210 into a set of audio segments 212, registration server 102 obtains a hash value 214 for each segment by applying a hash function 130b to the set of audio signals included in the segment. For example, registration server 102 obtains hash value 214a for first audio segment 212a by applying hash function 130b to the values of the audio signals included in first audio segment 212a, hash value 214b for second video segment 212b by applying hash function 130b to the values of the audio signals included in second audio segment 212b, and hash value 214c for third audio segment 212c by applying hash function 130b to the values of the audio signals included in third audio segment 212c. Hash function 130b may be the same hash function as hash function 130a or a different hash function from hash function 130a.
Kvochko, Col. 12 lines 7-21, emphasis added.
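For illustration, a minimal sketch of generating an audio digest from the sample values of an audio segment, assuming SHA-256 stands in for hash function 130b (Kvochko permits the same or a different hash function than 130a; all names here are hypothetical):

```python
import hashlib
import struct

def audio_segment_digest(samples: list) -> str:
    """Hash the values of the audio signals in a segment, a sketch of
    applying hash function 130b to an audio segment 212."""
    raw = struct.pack(f"{len(samples)}h", *samples)  # 16-bit PCM sample values
    return hashlib.sha256(raw).hexdigest()

# Toy audio segment: a short run of 16-bit sample values.
segment_212a = [0, 120, -340, 560, -780, 900]
print(audio_segment_digest(segment_212a))
```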
generating a first video signature by digitally signing the first video digest,
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
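The claimed "digitally signing" of a digest can be illustrated with a minimal sketch using an asymmetric key. The sketch uses the third-party Python cryptography package and hypothetical names; it illustrates the claim language only, while under the broadest reasonable interpretation applied in this rejection, Kvochko's hash values are treated as meeting the claimed signatures.

```python
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical signing key for the capture device; key management is out of scope.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_digest = bytes.fromhex("aa" * 32)  # placeholder for the first video digest

# The first video signature: the digest signed with the device's private key.
first_video_signature = private_key.sign(video_digest)

# Verification with the public key raises InvalidSignature if either the
# signature or the digest has been altered.
public_key.verify(first_video_signature, video_digest)
```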
generating a first audio signature by digitally signing the first audio digest,
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
inserting the first video signature in a first target audio portion of the audio sequence,
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
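The cross-insertion limitations can be made concrete with a hypothetical sketch (container structures assumed for illustration; not drawn from the reference) in which each stream's metadata carries the signature generated from the other stream:

```python
# Hypothetical containers for one corresponding audio/video portion pair.
first_target_audio = {"samples": [0, 120, -340], "metadata": {}}
first_target_video = {"frames": [b"\x00" * 16], "metadata": {}}

first_video_signature = b"<video-signature-bytes>"  # from signing the video digest
first_audio_signature = b"<audio-signature-bytes>"  # from signing the audio digest

# Cross-insertion: each sequence carries the signature of the other, so
# the audio and video streams become mutually bound.
first_target_audio["metadata"]["video_signature"] = first_video_signature
first_target_video["metadata"]["audio_signature"] = first_audio_signature
```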
inserting the first audio signature in a first target video portion of the video sequence,
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
generating a second video digest by applying a digest algorithm to the first target video portion including the first audio signature,
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
generating a second audio digest by applying a digest algorithm to the first target audio portion including the first video signature,
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
generating a second video signature by digitally signing the second video digest, and generating a second audio signature by digitally signing the second audio digest.
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
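Taken together, claim 1 chains the two streams: each second digest is computed over a target portion that already contains the signature inserted from the other stream. A minimal end-to-end sketch of that chaining follows; it is hypothetical throughout, and the stand-in signer is a keyed hash rather than a true asymmetric signature.

```python
import hashlib

def digest(data: bytes) -> bytes:
    """Digest algorithm: here a SHA-256 hash, per claim 5."""
    return hashlib.sha256(data).digest()

def sign(key: bytes, d: bytes) -> bytes:
    """Stand-in signer; a real implementation would use an asymmetric key."""
    return hashlib.sha256(key + d).digest()

KEY = b"device-key"
video_portion, audio_portion = b"video-pixels", b"audio-samples"

# First round: digest and sign each stream independently.
first_video_sig = sign(KEY, digest(video_portion))
first_audio_sig = sign(KEY, digest(audio_portion))

# Cross-insert, then compute the second digests over each target portion
# including the inserted signature, so each second signature attests to
# the other stream's first signature.
second_video_digest = digest(video_portion + first_audio_sig)
second_audio_digest = digest(audio_portion + first_video_sig)
second_video_sig = sign(KEY, second_video_digest)
second_audio_sig = sign(KEY, second_audio_digest)
```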
Regarding claim 2, Kvochko teaches:
2. The method according to claim 1, further comprising: inserting the second video signature in a second target audio portion of the audio sequence, and inserting the second audio signature in a second target video portion of the video sequence.
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
Regarding claim 3, Kvochko teaches:
3. The method according to claim 1, further comprising inserting the first video signature, also in a video portion of the video sequence, and inserting the first audio signature, also in an audio portion of the audio sequence.
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 10 lines 28-45 and Col. 11 line 55 to Col. 12 line 6, emphasis added.
Regarding claim 4, Kvochko teaches:
4. The method according to claim 2, further comprising inserting the second video signature, also in a video portion of the video sequence, and inserting the second audio signature also in an audio portion of the audio sequence.
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 11 line 41 to Col. 12 line 6, emphasis added.
Regarding claim 5, Kvochko teaches:
5. The method according to claim 1, wherein the digest algorithm is a hash function.
(27) After splitting source video 114a into a set of video segments 206, registration server 102 obtains a hash value 208 for each segment by applying a hash function 130a to the set of pixels included in the segment. For example, registration server 102 obtains hash value 208a for first video segment 206a by applying hash function 130a to the values of the pixels included in first video segment 206a, hash value 208b for second video segment 206b by applying hash function 130a to the values of the pixels included in second video segment 206b, and hash value 208c for third video segment 206c by applying hash function 130a to the values of the pixels included in third video segment 206c. As described above, hash function 130a may be a cryptographic hash function or a perceptual hash function.
Kvochko, Col. 11 line 41 to Col. 12 line 6, emphasis added.
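The distinction Kvochko draws between cryptographic and perceptual hash functions can be illustrated with a toy example (the average-hash below is assumed for illustration only and is not Kvochko's hash function 130a): a perceptual hash tolerates small pixel changes, while a cryptographic hash does not.

```python
import hashlib

def cryptographic_hash(pixels: list) -> str:
    """Any single-pixel change completely changes the output."""
    return hashlib.sha256(bytes(pixels)).hexdigest()

def average_hash(pixels: list) -> str:
    """Toy perceptual hash: one bit per pixel, set if the pixel is above
    the mean, so small brightness changes usually leave the hash intact."""
    mean = sum(pixels) / len(pixels)
    bits = "".join("1" if p > mean else "0" for p in pixels)
    return f"{int(bits, 2):x}"

pixels = [10, 200, 30, 220, 40, 240, 50, 230]
tweaked = [11, 200, 30, 220, 40, 240, 50, 230]  # one pixel nudged by one
assert average_hash(pixels) == average_hash(tweaked)              # perceptually same
assert cryptographic_hash(pixels) != cryptographic_hash(tweaked)  # cryptographically different
```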
Regarding claim 6, Kvochko teaches:
6. The method according to claim 1, wherein each video portion is a group of pictures.
Kvochko, Fig. 2A, at least item 202
Regarding claim 7, Kvochko teaches:
7. The method according to claim 1, wherein each audio portion is an audio frame or audio packet.
Kvochko, Fig.2A item 210
Regarding claim 8, Kvochko teaches:
8. The method according to claim 1, wherein each audio signature is inserted in a respective SEI message or Open Bitstream Unit of the video sequence.
Kvochko, Col. 15 lines 46-59
Regarding claim 9, Kvochko teaches:
9. The method according to claim 1, wherein each video signature is inserted in a respective data stream element or header of the audio sequence.
(42) As illustrated in FIGS. 4A and 4B, target video includes a set of video frames 402 and a set of metadata 406. Each frame 402 includes a set of pixels. In certain embodiments, in response to receiving a target video 137 from user 106 for authentication, authentication server 104 determines whether metadata 406 includes an identifier 218a corresponding to a block 124a in blockchain 123. If metadata 406 does not include identifier 218a, authentication server 104 may transmit a message 138 to user 106 indicating that target video 137 is not authentic. If metadata 406 includes identifier 218a, authentication server 104 may use it to locate block 124a within blockchain 123, where block 124a is the first block 124 within blockchain 123 that stores information relating to source video 114a.
(28) In addition to generating hash values from video segments 206, in certain embodiments, registration server 102 generates hash values based on the audio 210 of source video 114a. For example, registration server 102 may split audio 210 into a set of audio segments 212, where each audio segment 212 corresponds to a given video segment 206. For example, first audio segment 212a may include the audio of source video 114a between starting timestamp 207a and ending timestamp 207b, thereby corresponding to first video segment 206a, which includes video frames 202a through 202g. Similarly, second audio segment 212b may include the audio of source video 114a between starting timestamp 207b and ending timestamp 207c, thereby corresponding to second video segment 206b, which includes video frames 202h through 202n, and third audio segment 212c may include the audio of source video 114a between starting timestamp 207c and ending timestamp 207d, thereby corresponding to third video segment 206c, which includes video frames 202o through 202u.
Kvochko, Col. 11 line 55 to Col. 12 line 6 and Col. 15 lines 46-59, emphasis added.
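The authentication flow of FIGS. 4A-4B can be sketched as follows (hypothetical structures; identifier 218a keys the registered block): locate the block via the identifier in the target video's metadata, recompute the segment hash, and compare it to the stored value.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Blockchain as registered: block 124a keyed by identifier 218a.
blockchain = {"218a": {"video_hash": h(b"original-segment-pixels")}}

def authenticate(target_video: dict) -> bool:
    """Sketch of paragraph (42): a missing identifier, or a hash mismatch,
    indicates that the target video is not authentic."""
    block_id = target_video["metadata"].get("identifier")
    if block_id is None or block_id not in blockchain:
        return False
    recomputed = h(target_video["segment_pixels"])
    return recomputed == blockchain[block_id]["video_hash"]

genuine = {"metadata": {"identifier": "218a"},
           "segment_pixels": b"original-segment-pixels"}
tampered = {"metadata": {"identifier": "218a"},
            "segment_pixels": b"modified-segment-pixels"}
assert authenticate(genuine) and not authenticate(tampered)
```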
Regarding claim 10, Kvochko teaches:
10. The method according to claim 1, wherein the video sequence and the audio sequence have been captured by a single device comprising an image sensor and a microphone.
Kvochko, Fig. 2A, items 206 and 210
Claims 11-13 recite elements similar to those of claims 1-2 and 10, but in system form rather than method form. Therefore, the supporting rationale for the rejection of claims 1-2 and 10 applies equally to claims 11-13.
Claim 14 recites elements similar to those of claim 1, but in computer-readable-medium form rather than method form. Therefore, the supporting rationale for the rejection of claim 1 applies equally to claim 14.
Regarding claim 15, Kvochko teaches:
15. (New) The method according to claim 1, wherein the second video signature is generated based on the first target video portion including the first audio signature, such that validation of the second video signature confirms a link to the audio sequence.
Kvochko, Col. 7 line 14 to Col. 8 line 3
Regarding claim 16, Kvochko teaches:
16. (New) The method according to claim 1, wherein the second audio signature is generated based on the first target audio portion including the first video signature, such that validation of the second audio signature confirms a link to the video sequence.
Kvochko, Col. 17 line 38 to Col. 18 line 3
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Kvochko (US 11,368,289).
Regarding claim 17:
17. (New) The method according to claim 1, wherein the first target video portion is a video portion being captured and encoded when the first audio signature is generated, or an upcoming video portion the capture and encoding of which has not yet started when the first audio signature is generated.
Kvochko fails to explicitly teach this limitation. However, Official Notice is taken that both the concept and the advantage of implementing a first target video portion that is a video portion being captured and encoded when the first audio signature is generated, or an upcoming video portion the capture and encoding of which has not yet started when the first audio signature is generated, are well known and expected in the art. Thus, it would have been obvious to one skilled in the art, before the effective filing date of the claimed invention, to utilize said feature within the system taught by Kvochko, because encoding audio and video separately ensures that each component is optimized for its specific requirements, which can lead to more efficient use of the audio and video data within the system.
Regarding claim 18:
18. (New) The method according to claim 1, wherein the first target audio portion is an audio portion being captured and encoded when the first video signature is generated, or an upcoming audio portion the capture and encoding of which has not yet started when the first video signature is generated.
Kvochko fails to explicitly teach this limitation. However, Official Notice is taken that both the concept and the advantage of implementing a first target audio portion that is an audio portion being captured and encoded when the first video signature is generated, or an upcoming audio portion the capture and encoding of which has not yet started when the first video signature is generated, are well known and expected in the art. Thus, it would have been obvious to one skilled in the art, before the effective filing date of the claimed invention, to utilize said feature within the system taught by Kvochko, because encoding audio and video separately ensures that each component is optimized for its specific requirements, which can lead to more efficient use of the audio and video data within the system.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE whose telephone number is (571)270-1117. The examiner can normally be reached Monday-Friday 8:00-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL T TEKLE/Primary Examiner, Art Unit 2481