Prosecution Insights
Last updated: April 19, 2026
Application No. 18/826,181

METHODS AND SYSTEMS TO IDENTIFY MEDIA CONTENT USING WATERMARK METADATA AND MAPPED AUDIO SIGNATURES

Non-Final OA: §101, §103, §112

Filed: Sep 06, 2024
Examiner: LONG, EDWARD X
Art Unit: 2439
Tech Center: 2400 (Computer Networks)
Assignee: The Nielsen Company (US), LLC
OA Round: 1 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73%, above average (134 granted / 184 resolved; +14.8% vs TC avg)
Interview Lift: +47.9% for resolved cases with interview (strong)
Avg Prosecution: 2y 11m typical; 20 applications currently pending
Career History: 204 total applications across all art units

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 4.8% (-35.2% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 184 resolved cases.
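The allow-rate and lift figures above can be reproduced from the underlying counts. A minimal sketch, assuming the allow rate is simply granted/resolved and the interview lift is the percentage-point gap between allowance rates with and without an interview; the with/without split used in the example is hypothetical, chosen only to illustrate the arithmetic:

```python
# Sketch of how the dashboard's roll-up metrics could be derived.
# The 134 granted / 184 resolved counts come from the page; the
# with/without-interview allowance rates below are hypothetical.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift in allowance for cases with an interview."""
    return rate_with - rate_without

career = allow_rate(134, 184)        # ~72.8%, displayed as 73%
lift = interview_lift(95.0, 47.1)    # hypothetical split -> +47.9 points
print(f"Career allow rate: {career:.1f}%")
print(f"Interview lift: +{lift:.1f} points")
```

Note that a lift could also be reported as a relative percentage rather than percentage points; the page does not say which convention it uses.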

Office Action

§101 §103 §112
DETAILED ACTION

This Office Action is in response to application 18/826,181, filed on 09/06/2024. Claims 1-20 have been examined and are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This Action is made NON-FINAL.

Priority

This application claims priority to U.S. Provisional Application 63/581,579, filed Sept. 8, 2023.

Claim Objection

Claim 17 is objected to because of the following informality: claim 17 recites "A computing system comprising a processor and a memory, the computing system configured to perform a set of operations comprising..." For better clarity, it is suggested that this language be amended to "[a] computing system comprising a processor and memory storing computer-executable instructions that when executed cause the computing system to perform a set of operations:" (emphasis added). Correction is requested.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Regarding claims 1, 12 and 17: these claims are directed to an abstract idea without being integrated into a practical application and without reciting significantly more. The claimed steps of "obtaining...," "mapping...," "comparing..." and "determining..." are directed to an abstract idea because these limitations, under their broadest reasonable interpretation, cover processes that could be performed in the human mind. Thus, these limitations fall within the "Mental Processes" grouping of abstract ideas.
Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application, as the claims do not recite any additional operations that would apply the abstract idea in a practical way. It is noted that the claims recite a "timestamp" limitation; however, that step is not sufficient to integrate the abstract idea into a practical application, as it is recited at a high level of generality as gathering/processing/storing information, which is a form of insignificant extra-solution activity. It is also noted that the claims recite additional elements (i.e., the "computing device" of claim 1; the "processor" and "memory" of claim 17). However, these additional elements are recited at a high level of generality (i.e., as a generic processor executing computing instructions stored on a non-transitory memory/storage medium), such that they amount to no more than mere instructions to apply the exception using a generic computer component. The recitation of "computing device," "processor" and "memory" amounts to no more than a mere nominal or tangential addition to the claims. See MPEP 2106.05(g); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1241-42 (Fed. Cir. 2016); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claims are not integrated into a practical application, and they do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
It is noted that the claims recite further additional elements such as "digital signature" and "watermark." However, these additional elements, taken individually and in combination, do not result in the claims amounting to significantly more than the abstract idea, because a "digital signature" and "watermark" in a network are recited as performing generic computer functions routinely used in information validation/verification (see Li et al. [0004]: In the other application scenario, integrity of video in storage is protected. A digital signature, a digital watermark, etc., can be adopted to ensure security of video data in storage. A digital digest of the video data is signed or a digital watermark is embedded in the video data, and if the video data is falsified, then it can not be verified with the digital signature or the digital watermark.). Generic computer components recited as performing generic computer functions that are well-understood, routine, and conventional activities amount to no more than implementing the abstract idea with a computerized system. As discussed above with respect to integration into a practical application, the additional elements of "digital signature" and "watermark" amount to no more than mere instructions to apply the exception using generic computer components, and mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Therefore, the claims are directed to non-statutory subject matter and are not patent eligible.

Regarding claims 2-11, 13-16 and 18-20: these claims are also rejected under 35 U.S.C. 101 as being directed to an abstract idea without being integrated into a practical application and without reciting significantly more, as discussed above.
It is noted that claim 3 recites "identifying...," claims 4 and 19 recite "calculating..." and "crediting...," claim 5 recites "calculating...," claim 6 recites "obtaining..." and "determining...," etc. However, these steps are also mental processes, as they could be performed in the human mind. Nor are they sufficient to integrate the abstract idea into a practical application: they are recited at a high level of generality as gathering/processing information, which is a form of insignificant extra-solution activity. For similar reasons as discussed above, these dependent claims fail to recite any additional operations that would apply the abstract idea in a practical way, nor do they recite additional elements that are sufficient to amount to significantly more than the judicial exception. Accordingly, claims 2-11, 13-16 and 18-20 are also rejected under 35 U.S.C. 101. See Alice Corp. v. CLS Bank International (S. Ct. 2014); see also Intellectual Ventures I LLC v. Symantec Corp. (Fed. Cir. 2016); Electric Power Group, LLC v. Alstom S.A. (Fed. Cir. 2016); Affinity Labs of Texas, LLC v. Amazon.com Inc. (Fed. Cir. 2016).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 16 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 16: claim 16 recites "wherein the source of the media content item is at least one of Netflix, Amazon Prime Video, Disney+, Hulu, Tubi, Pluto TV, Roku Channel, YouTube, Paramount+, or Peacock" (emphasis added). In other words, the "source of the media content" is restricted to "Netflix, Amazon Prime Video, Disney+, Hulu, Tubi, Pluto TV...," etc., which are trademarks designating providers of goods or services who own or hold licenses for the trademarks. However, the sources or services relating to the media content may change over time. As a result, the metes and bounds of the claimed scope (e.g., "the source of the media content item is...") remain unclear. See MPEP 2173.05(u); see also Ex parte Simpson, 218 USPQ 1020 (Bd. App. 1982); Eli Lilly & Co. v. Apotex, Inc., 837 Fed. Appx. 780, 784-85, 2020 USPQ2d 11531 (Fed. Cir. 2020) ("Following Patent Office procedure, the Examiner in this case rejected the claims of the '821 application as indefinite because they improperly used the trade name 'ALIMTA.' In response to the rejection, Lilly canceled its claims reciting the trade name and pursued claims using the generic name for the same substance, which mooted the rejection. Additionally, as the district court observed, the Examiner 'explicitly noted that pemetrexed disodium was 'also known by the trade name ALIMTA' ' in the contemporaneous obviousness rejection."). The claim is indefinite for failing to particularly point out and distinctly claim the subject matter which the applicant regards as the invention.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5, 8, 9, 11-14 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Cheruvu et al. ("Cheruvu," US 2021/0390447, published Dec. 16, 2021) in view of Perkalsky et al. ("Perkalsky," US 11,317,128, patented April 26, 2022).

Regarding claim 1, Cheruvu discloses a method comprising:

obtaining, at a media manager computing device, a media content item with identification information including watermark data and one or more signatures, the watermark data having a source identifier and a [plurality of] timestamps associated with the media content (Cheruvu FIG. 5, [0047], [0051], [0071]-[0072]: Implementations of the disclosure provide a watermarking approach to verify the authenticity or integrity of the owner of content published and consumed at content consumer system 250. The watermarking approach prevents the publishing and/or consumption of unverified content at the content consumer system 250. In one implementation, the digital signature generated by hash generator 228 may be a hash of one or more the generated content (e.g., plaintext), the GUID 270, a ML model ID, and/or a timestamp. Once the digital signature is generated by hash generator 228, the publication component 224 may then transmit the digital signature, along with the content (e.g., plain text), timestamp, and model ID to the content consumer system 250. The flow 500 may be representative of some or all the operations that may be executed by or implemented on one or more components of system 100 of FIG. 1, such as a processor (e.g., CPU 131). In the illustrated embodiment shown in FIG. 5, the flow 500 may begin at block 510. At block 510, the processor may receive, by a processor of a content consumer platform, content generated by a machine learning (ML) model and a digital signature corresponding to the content. At block 520, the processor may process the digital signature to extract, from the digital signature, a global unique identifier (GUID) of the ML model that generated the content.);

mapping, at the media manager computing device, the source identifier and a duration based on the [plurality of timestamps] to each of the one or more signatures as a respective tagged signature of one or more tagged signatures, each having a mapped source identifier and [a mapped duration], respectively (Cheruvu FIG. 5, [0034], [0052], [0073]: With respect to the sender component, the hash generator 113 and/or content deployer 114 may be implemented in CPU 111 of content generation platform 110 as hardware, software, and/or firmware of the content generation platform 110. Once content is generated by content generator 112, the hash generator 113 may utilize the ML model's 116 GUID, a ML model ID, the content (e.g., plain text, image, video, etc.), and/or a timestamp for a digital signature. The content deployer 114 may then transmit the digital signature to the verifier component, along with the content (e.g., plain text, image, video, etc.), timestamp, and model ID. In one implementation, the digital signature and content, timestamp, and model ID may be sent over an insecure communication channel 140 to the verifier (e.g., verification component 132). As previously discussed, the verification component 252 of content consumer system 250 verifies the digital signature of content prior to passing it to content consumer application 254 as verified content 256. In order to verify the content, the verification component 252 may utilize a previously downloaded public key of a website and the GUID 270. Subsequently, at block 530, the processor may verify the extracted GUID against data obtained from a shared registry, the data obtained from the shared registry comprising identifying information of the ML model including the GUID. At decision block 540, the processor determine whether the extracted GUID is successfully verified.);

comparing, at the media manager computing device, a mapped source identifier and a mapped duration of at least one of the one or more tagged signatures to a mapped reference source identifier and [a mapped reference duration] of one or more reference signatures (Cheruvu FIG. 5, [0034], [0052], [0073]; same passages as quoted for the "mapping" step above); and

based on the comparison, determining, at the media manager computing device, that at least one of the one or more tagged signatures match one of the one or more reference signatures to identify as one or more matched signatures (Cheruvu [0052], [0074]; see the passages quoted above, in particular the verification of the extracted GUID at decision block 540).

Cheruvu does not explicitly disclose: a duration based on the plurality of timestamps to each of the one or more signatures, a mapped duration, a mapped reference duration.
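For orientation, the method recited in claim 1 (together with the claim 5 limitation that the duration spans the lowest to the highest watermark timestamp) can be sketched as a short data-processing pipeline. This is an illustrative reading of the claim language only, not the applicant's implementation, and every name below is hypothetical:

```python
# Hypothetical sketch of the claimed flow: tag each signature with the
# watermark's source identifier and a duration derived from its
# timestamps, then match tagged signatures against reference signatures.
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedSignature:
    signature: str
    source_id: str
    duration: float

def tag_signatures(source_id, timestamps, signatures):
    # Per claim 5, the duration runs from the lowest-value timestamp
    # to the highest-value timestamp of the plurality of timestamps.
    duration = max(timestamps) - min(timestamps)
    return [TaggedSignature(s, source_id, duration) for s in signatures]

def match(tagged, references):
    # A tagged signature is a match when its mapped source identifier
    # and mapped duration agree with those of a reference signature.
    ref_index = {(r.signature, r.source_id, r.duration) for r in references}
    return [t for t in tagged if (t.signature, t.source_id, t.duration) in ref_index]

refs = [TaggedSignature("sig-a", "CS123", 30.0)]
tagged = tag_signatures("CS123", [0.0, 15.0, 30.0], ["sig-a", "sig-b"])
print(match(tagged, refs))  # only "sig-a" matches a reference
```

The matched signatures would then drive the identification and crediting steps of the dependent claims.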
However, in an analogous art, Perkalsky discloses a method comprising: a duration based on the plurality of timestamps to each of the one or more signatures, a mapped duration, a mapped reference duration (Perkalsky col. 12:37-53: Briefly, the method 500 includes detecting a first message instructing the client device to start calculation of fingerprints for a group of pictures starting at a first timestamp in the video stream and corresponding audio frames starting at a second timestamp in the audio stream; obtaining, from the buffer, video packets for the group of pictures starting at the first timestamp and audio packets for the corresponding audio frames starting at the second timestamp; deriving, from the video packets, a first sequence of signatures for the group of pictures and deriving, from the audio packets, a second sequence of signatures for the corresponding audio frames; detecting a second message including an expected signature for the group of pictures and the corresponding audio frames; and validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Perkalsky and Cheruvu to include a duration based on the plurality of timestamps mapped to each of the one or more signatures. One would have been motivated to provide users with a means of validating the authenticity of a series of multimedia frames (video or audio) against its corresponding series of digital signatures (see Perkalsky col. 12:37-53).

Regarding claim 2, Cheruvu and Perkalsky disclose the method of claim 1. Cheruvu further discloses wherein each of the one or more reference signatures is mapped to a source identifier [and a duration] associated with reference watermark data (Cheruvu FIG. 5, [0034], [0052], [0073], quoted in the discussion of claim 1 above; for "watermark," see [0047]: Implementations of the disclosure provide a watermarking approach to verify the authenticity or integrity of the owner of content published and consumed at content consumer system 250.). Perkalsky discloses wherein each of the one or more reference signatures is mapped to a source identifier and a duration (Perkalsky col. 12:37-53, quoted above). The motivation is the same as that of claim 1 above.

Regarding claim 3, Cheruvu and Perkalsky disclose the method of claim 1. Cheruvu further discloses identifying, with a media content identifier associated with at least one of the matched signatures, the media content item (Cheruvu [0073]-[0074]: Subsequently, at block 530, the processor may verify the extracted GUID against data obtained from a shared registry, the data obtained from the shared registry comprising identifying information of the ML model including the GUID. At decision block 540, the processor determine whether the extracted GUID is successfully verified. If the extracted GUID is successfully verified at decision block 540, the flow 500 proceeds to block 550 where the processor may provide the content for consumption at the content consumer platform and indicating that the content is generated by the ML model having verified authenticity.).

Regarding claim 5, Cheruvu and Perkalsky disclose the method of claim 1. Cheruvu further discloses calculating [the duration] for the media content including the watermark (Cheruvu [0047], [0051]-[0052], quoted in the discussion of claim 1 above). Perkalsky further discloses wherein mapping, at the media manager computing device, the source identifier and the duration based on the plurality of timestamps comprises: calculating the duration for the media content [including the watermark] based on the lowest value timestamp and the highest value timestamp of the plurality of timestamps (Perkalsky col. 13:61-65; col. 14:5-7, 22-26: Still referring to FIG. 5, as represented by block 560, the method 500 includes validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message. The client device then compares the expected signature with the calculated joint signature to detect tampering. In some embodiments, as represented by block 570, the second message also indicates to the client device to start calculation of fingerprints for a second group of pictures starting at a third timestamp in the video stream and a set of corresponding audio frames starting at a fourth timestamp.). The motivation is the same as that of claim 1 above.

Regarding claim 8, Cheruvu and Perkalsky disclose the method of claim 1. Perkalsky further discloses wherein the media content item is a portion of media content that is streaming over a network (Perkalsky col. 3:61-65: A variety of audio and video streaming formats can be encoded, packaged, transmitted, and/or decoded. For example, standard definition (SD) services tend to use MPEG-2 for video and MPEG-1 for audio.). The motivation is the same as that of claim 1 above.

Regarding claim 9, Cheruvu and Perkalsky disclose the method of claim 1. Cheruvu further discloses obtaining, at a user computing device, a media content stream including a plurality of media content items including the media content item (Cheruvu [0028]: In one implementation, content generation platform 110 provides for processing of ML data to generate content for consumption by content consumer platform 130. The generated content may be multi-modal content, including, but not limited to, one or more of plaintext, image(s), video(s), audio, and/or any other form of content.).

Regarding claim 11, Cheruvu and Perkalsky disclose the method of claim 9.
Cheruvu further discloses wherein the user computing device corresponds to a smartphone, a laptop, a tablet computing device, an Internet-connected television, a streaming computing device, a smart TV, or a computing device configured to present media content streams (Chevuru [0078]. Those skilled in the relevant art will appreciate that the illustrated embodiments as well as other embodiments may be practiced with other processor-based device configurations, including portable electronic or handheld electronic devices, for instance smartphones, portable computers, wearable computers, consumer electronics, personal computers (“PCs”), network PCs, minicomputers, server blades, mainframe computers, and the like.). Regarding claim 12, Chevuru discloses A method comprising: obtaining, at a media manager computing device, a media content item with identification information including watermark data and a signature (Cheruvu FIG. 5, [0047], [0051]. Implementations of the disclosure provide a watermarking approach to verify the authenticity or integrity of the owner of content published and consumed at content consumer system 250. The watermarking approach prevents the publishing and/or consumption of unverified content at the content consumer system 250. In one implementation, the digital signature generated by hash generator 228 may be a hash of one or more the generated content (e.g., plaintext), the GUID 270, a ML model ID, and/or a timestamp. Once the digital signature is generated by hash generator 228, the publication component 224 may then transmit the digital signature, along with the content (e.g., plain text), timestamp, and model ID to the content consumer system 250.); mapping, at the media manager computing device, a source identifier from the watermark data and [a duration] from the watermark data to the signature as a tagged signature, the tagged signature having a mapped source identifier and [a mapped duration] (Cheruvu FIG. 5, [0034], [0052], [0073]. 
With respect to the sender component, the hash generator 113 and/or content deployer 114 may be implemented in CPU 111 of content generation platform 110 as hardware, software, and/or firmware of the content generation platform 110. Once content is generated by content generator 112, the hash generator 113 may utilize the ML model's 116 GUID, a ML model ID, the content (e.g., plain text, image, video, etc.), and/or a timestamp for a digital signature. The content deployer 114 may then transmit the digital signature to the verifier component, along with the content (e.g., plain text, image, video, etc.), timestamp, and model ID. In one implementation, the digital signature and content, timestamp, and model ID may be sent over an insecure communication channel 140 to the verifier (e.g., verification component 132). As previously discussed, the verification component 252 of content consumer system 250 verifies the digital signature of content prior to passing it to content consumer application 254 as verified content 256. In order to verify the content, the verification component 252 may utilize a previously downloaded public key of a website and the GUID 270. Subsequently, at block 530, the processor may verify the extracted GUID against data obtained from a shared registry, the data obtained from the shared registry comprising identifying information of the ML model including the GUID. At decision block 540, the processor determine whether the extracted GUID is successfully verified.); and determining, at the media manager computing device, that the tagged signature matches a reference signature based on the mapped source identifier [and the mapped duration of the tagged signature] (Cheruvu [0052], [0074]. As previously discussed, the verification component 252 of content consumer system 250 verifies the digital signature of content prior to passing it to content consumer application 254 as verified content 256. 
In order to verify the content, the verification component 252 may utilize a previously downloaded public key of a website and the GUID 270. Subsequently, at block 530, the processor may verify the extracted GUID against data obtained from a shared registry, the data obtained from the shared registry comprising identifying information of the ML model including the GUID. At decision block 540, the processor determine whether the extracted GUID is successfully verified.). Cheruvu does not explicitly disclose: a duration, a mapped duration, a mapped reference duration. However, in an analogous art, Perkalsky discloses a method comprising the step of: a duration, a mapped duration, a mapped reference duration (Perkalsky col. 12: 37-53. Briefly, the method 500 includes detecting a first message instructing the client device to start calculation of fingerprints for a group of pictures starting at a first timestamp in the video stream and corresponding audio frames starting at a second timestamp in the audio stream; obtaining, from the buffer, video packets for the group of pictures starting at the first timestamp and audio packets for the corresponding audio frames starting at the second timestamp; deriving, from the video packets, a first sequence of signatures for the group of pictures and deriving, from the audio packets, a second sequence of signatures for the corresponding audio frames; detecting a second message including an expected signature for the group of pictures and the corresponding audio frames; and validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message.). Therefore, it would have been obvious to one of ordinary skill in the art on or before the effective filing date of the claimed invention to combine the teachings of Perkalsky and Cheruvu to include the step of: a duration based on the plurality of timestamps to each of the one or more signatures. 
One would have been motivated to provide users with a means of validating the authenticity of a series of multimedia frames (video or audio) with its corresponding series of digital signatures. (See Perkalsky col. 12: 37-53.) Regarding claim 13, Cheruvu and Perkalsky disclose the method of claim 12. Cheruvu further discloses obtaining, from a meter device, the watermark data including the source identifier and [a plurality of timestamps] extracted from a code embedded into the audio signal of the media content item (Cheruvu [0028], [0047], [0051]-[0052]. The generated content may be multi-modal content, including, but not limited to, one or more of plaintext, image(s), video(s), audio, and/or any other form of content. Implementations of the disclosure provide a watermarking approach to verify the authenticity or integrity of the owner of content published and consumed at content consumer system 250. The watermarking approach prevents the publishing and/or consumption of unverified content at the content consumer system 250. In one implementation, the digital signature generated by hash generator 228 may be a hash of one or more of the generated content (e.g., plaintext), the GUID 270, a ML model ID, and/or a timestamp. Once the digital signature is generated by hash generator 228, the publication component 224 may then transmit the digital signature, along with the content (e.g., plain text), timestamp, and model ID to the content consumer system 250. As previously discussed, the verification component 252 of content consumer system 250 verifies the digital signature of content prior to passing it to content consumer application 254 as verified content 256. In order to verify the content, the verification component 252 may utilize a previously downloaded public key of a website and the GUID 270.). Perkalsky discloses a method, comprising: a plurality of timestamps (Perkalsky col. 12: 37-53.
Briefly, the method 500 includes detecting a first message instructing the client device to start calculation of fingerprints for a group of pictures starting at a first timestamp in the video stream and corresponding audio frames starting at a second timestamp in the audio stream; obtaining, from the buffer, video packets for the group of pictures starting at the first timestamp and audio packets for the corresponding audio frames starting at the second timestamp; deriving, from the video packets, a first sequence of signatures for the group of pictures and deriving, from the audio packets, a second sequence of signatures for the corresponding audio frames; detecting a second message including an expected signature for the group of pictures and the corresponding audio frames; and validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message.). The motivation is the same as that of claim 2 above. Regarding claim 14, Cheruvu and Perkalsky disclose the method of claim 12. Cheruvu further discloses further comprising: identifying, with a media content identifier associated with the tagged signature, the media content item; and identifying, with a source identifier associated with the tagged signature, a source of the media content item (Cheruvu [0052], [0074]. As previously discussed, the verification component 252 of content consumer system 250 verifies the digital signature of content prior to passing it to content consumer application 254 as verified content 256. In order to verify the content, the verification component 252 may utilize a previously downloaded public key of a website and the GUID 270. Subsequently, at block 530, the processor may verify the extracted GUID against data obtained from a shared registry, the data obtained from the shared registry comprising identifying information of the ML model including the GUID.
At decision block 540, the processor determines whether the extracted GUID is successfully verified.). Regarding claim 17, claim 17 is directed to a computing system corresponding to the method of claim 12. Claim 17 is similar to claim 12 and is therefore rejected under similar rationale. Regarding claim 18, claim 18 is directed to a computing system corresponding to the method of claim 11. Claim 18 is similar to claim 11 and is therefore rejected under similar rationale. Regarding claim 19, Cheruvu and Perkalsky disclose the system of claim 17. Cheruvu further discloses obtaining a media content item with identification information including watermark data and a signature (Cheruvu FIG. 5, [0034], [0052], [0073]. With respect to the sender component, the hash generator 113 and/or content deployer 114 may be implemented in CPU 111 of content generation platform 110 as hardware, software, and/or firmware of the content generation platform 110. Once content is generated by content generator 112, the hash generator 113 may utilize the ML model's 116 GUID, a ML model ID, the content (e.g., plain text, image, video, etc.), and/or a timestamp for a digital signature. The content deployer 114 may then transmit the digital signature to the verifier component, along with the content (e.g., plain text, image, video, etc.), timestamp, and model ID. In one implementation, the digital signature, along with the content, timestamp, and model ID, may be sent over an insecure communication channel 140 to the verifier (e.g., verification component 132). As previously discussed, the verification component 252 of content consumer system 250 verifies the digital signature of content prior to passing it to content consumer application 254 as verified content 256. In order to verify the content, the verification component 252 may utilize a previously downloaded public key of a website and the GUID 270.
Subsequently, at block 530, the processor may verify the extracted GUID against data obtained from a shared registry, the data obtained from the shared registry comprising identifying information of the ML model including the GUID. At decision block 540, the processor determines whether the extracted GUID is successfully verified. [For “watermark,” see [0047]. Implementations of the disclosure provide a watermarking approach to verify the authenticity or integrity of the owner of content published and consumed at content consumer system 250. The watermarking approach prevents the publishing and/or consumption of unverified content at the content consumer system 250. In one implementation, the digital signature generated by hash generator 228 may be a hash of one or more of the generated content (e.g., plaintext), the GUID 270, a ML model ID, and/or a timestamp.)].). Perkalsky further discloses mapping a source identifier from the watermark data and a duration from the watermark data to the signature as a tagged signature, the tagged signature having a mapped source identifier and a mapped duration (Perkalsky col. 12: 37-53.
Briefly, the method 500 includes detecting a first message instructing the client device to start calculation of fingerprints for a group of pictures starting at a first timestamp in the video stream and corresponding audio frames starting at a second timestamp in the audio stream; obtaining, from the buffer, video packets for the group of pictures starting at the first timestamp and audio packets for the corresponding audio frames starting at the second timestamp; deriving, from the video packets, a first sequence of signatures for the group of pictures and deriving, from the audio packets, a second sequence of signatures for the corresponding audio frames; detecting a second message including an expected signature for the group of pictures and the corresponding audio frames; and validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message.); and determining the tagged signature matches a reference signature based on the mapped source identifier and the mapped duration of the tagged signature (Perkalsky col. 13: 61-65; col. 14: 5-7, 22-26. Still referring to FIG. 5, as represented by block 560, the method 500 includes validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message. The client device then compares the expected signature with the calculated joint signature to detect tampering. In some embodiments, as represented by block 570, the second message also indicates to the client device to start calculation of fingerprints for a second group of pictures starting at a third timestamp in the video stream and a set of corresponding audio frames starting at a fourth timestamp.). The motivation is the same as that of claim 17 above. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Cheruvu et al. 
(“Cheruvu,” US 20210390447, published Dec 16, 2021) in view of Perkalsky et al. (“Perkalsky,” US 11317128, patented April 26, 2022) and Siu (“Siu,” US 20240221000, filed Dec. 30, 2022). Regarding claim 6, Cheruvu and Perkalsky disclose the system of claim 17. Cheruvu further discloses obtaining a second media content item with identification information including one or more second signatures (Cheruvu [0034]. Once content is generated by content generator 112, the hash generator 113 may utilize the ML model's 116 GUID, a ML model ID, the content (e.g., plain text, image, video, etc.), and/or a timestamp for a digital signature. The content deployer 114 may then transmit the digital signature to the verifier component, along with the content (e.g., plain text, image, video, etc.), timestamp, and model ID.); and determining, at the media manager computing device, [that the obtained identification information of the second media content item does not include watermark data,] to compare each of the second signatures with the one or more reference signatures (Perkalsky col. 13: 61-65; col. 14: 5-7, 22-26. Still referring to FIG. 5, as represented by block 560, the method 500 includes validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message. The client device then compares the expected signature with the calculated joint signature to detect tampering. In some embodiments, as represented by block 570, the second message also indicates to the client device to start calculation of fingerprints for a second group of pictures starting at a third timestamp in the video stream and a set of corresponding audio frames starting at a fourth timestamp.). 
Siu further discloses a method comprising the step of determining, at the media manager computing device, that the obtained identification information of the second media content item does not include watermark data (Siu [0028], [0045], [0047]. For example, a public and private key security mechanism or a crypto wallet-based authentication associated with the NFT digital asset can be used to authorize viewing the NFT digital asset without the digital watermark. Based on evaluating the query, the NFT digital asset is either presented with the digital watermark or without the digital watermark. Using the set of digital watermarking attributes and contract, the NFT digital asset can cause the smart contract to authorize display of the NFT digital asset without the digital watermark. Based on the authorization, the NFT watermarking management client 130 accesses the NFT digital asset without the digital watermark and causes display of the NFT digital asset without the digital watermark.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Siu, Perkalsky and Cheruvu to include the step of: determining, at the media manager computing device, that the obtained identification information of the second media content item does not include watermark data. One would have been motivated to provide users with a means for determining whether to provide a digital watermark according to system or smart contract policies. (See Siu [0045].) Claims 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Cheruvu et al. (“Cheruvu,” US 20210390447, published Dec 16, 2021) in view of Perkalsky et al. (“Perkalsky,” US 11317128, patented April 26, 2022) and Carney Landow (“Carney Landow,” US 20240214627, filed Dec. 27, 2022). Regarding claim 10, Cheruvu and Perkalsky disclose the method of claim 9.
Carney Landow further discloses wherein the media content stream is streaming video-on-demand provided by a media content provider (Carney Landow [0032]. During typical operation, the video services receiver 106 receives video programming (broadcast events, on-demand video events, streaming media, emergency broadcasts, etc.), signaling information, or other data via the network 112.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Carney Landow, Perkalsky and Cheruvu to include: wherein the media content stream is streaming video-on-demand provided by a media content provider. One would have been motivated to provide a means for providing an on-demand video service to users. (See Carney Landow [0032].) Regarding claim 16, Cheruvu and Perkalsky disclose the method of claim 14. Carney Landow further discloses wherein the source of the media content item is at least one of Netflix, Amazon Prime Video, Disney +, Hulu, Tubi, Pluto TV, Roku Channel, YouTube, Paramount +, or Peacock (Carney Landow [0032]. For example, an end user device (e.g., the presentation device 104) could be used to download and present a video clip posted on the Internet (e.g., using the well-known YOUTUBE video sharing service), where the video clip is a recorded version of a show that has already been broadcast.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Carney Landow, Perkalsky and Cheruvu to include: wherein the source of the media content item is at least one of Netflix, Amazon Prime Video, Disney +, Hulu, Tubi, Pluto TV, Roku Channel, YouTube, Paramount +, or Peacock. One would have been motivated to provide a means for providing an on-demand video service to users. (See Carney Landow [0032].)
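For readers less familiar with the cited art, the hash-based signing scheme that the rejections above repeatedly draw from Cheruvu ([0034], [0051]-[0052]) can be sketched as follows. This is a minimal illustrative sketch, not Cheruvu's actual implementation: Cheruvu additionally applies a public-key signature verified with a website's public key, which is omitted here, and all function, variable, and registry names are hypothetical.

```python
# Illustrative sketch of a Cheruvu-style signing flow: the sender hashes the
# content together with a model GUID, model ID, and timestamp; the verifier
# checks the GUID against a shared registry (block 530) and recomputes the
# digest (block 540). All names are hypothetical.
import hashlib
import hmac

def make_signature(content: bytes, guid: str, model_id: str, timestamp: int) -> str:
    """Digest over content + GUID + model ID + timestamp (cf. Cheruvu [0034])."""
    h = hashlib.sha256()
    for part in (content, guid.encode(), model_id.encode(), str(timestamp).encode()):
        h.update(part)
    return h.hexdigest()

def verify(content: bytes, guid: str, model_id: str, timestamp: int,
           signature: str, registry: set) -> bool:
    """Verifier side: the GUID must appear in the shared registry, and the
    recomputed digest must match the transmitted signature."""
    if guid not in registry:  # block 530: verify GUID against shared registry
        return False
    expected = make_signature(content, guid, model_id, timestamp)
    return hmac.compare_digest(expected, signature)  # block 540: match check

registry = {"guid-123"}  # hypothetical registry of known ML model GUIDs
sig = make_signature(b"plain text", "guid-123", "model-7", 1700000000)
assert verify(b"plain text", "guid-123", "model-7", 1700000000, sig, registry)
assert not verify(b"tampered", "guid-123", "model-7", 1700000000, sig, registry)
```

The registry lookup is what distinguishes this from plain content hashing: an unregistered GUID fails verification even if the digest itself matches.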
Claims 4, 15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cheruvu et al. (“Cheruvu,” US 20210390447, published Dec 16, 2021) in view of Perkalsky et al. (“Perkalsky,” US 11317128, patented April 26, 2022) and Palmer et al. (“Palmer,” US 20140074712, published Mar. 13, 2014). Regarding claim 4, Cheruvu and Perkalsky disclose the method of claim 1. Perkalsky further discloses calculating, at the media manager computing device, a content duration associated with the one or more matched signatures based on the corresponding mapped reference duration for each of the one or more matched signatures (Perkalsky col. 12: 37-53. Briefly, the method 500 includes detecting a first message instructing the client device to start calculation of fingerprints for a group of pictures starting at a first timestamp in the video stream and corresponding audio frames starting at a second timestamp in the audio stream; obtaining, from the buffer, video packets for the group of pictures starting at the first timestamp and audio packets for the corresponding audio frames starting at the second timestamp; deriving, from the video packets, a first sequence of signatures for the group of pictures and deriving, from the audio packets, a second sequence of signatures for the corresponding audio frames; detecting a second message including an expected signature for the group of pictures and the corresponding audio frames; and validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message.); crediting, at the media manager computing device, a duration portion [of a media rating] associated with the media content item based on the content duration (Perkalsky col. 13: 61-65; col. 14: 5-7, 22-26. Still referring to FIG. 
5, as represented by block 560, the method 500 includes validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message. The client device then compares the expected signature with the calculated joint signature to detect tampering. In some embodiments, as represented by block 570, the second message also indicates to the client device to start calculation of fingerprints for a second group of pictures starting at a third timestamp in the video stream and a set of corresponding audio frames starting at a fourth timestamp.). Cheruvu and Perkalsky do not explicitly disclose: a duration portion of a media rating. However, in an analogous art, Palmer discloses a method comprising the step of: a duration portion of a media rating (Palmer [0096], [0105]. The user activity data is generated based on user input received by the devices 124 and can include: (i) ratings data representing a subjective rating (e.g., a like/dislike, or star rating, a ranking, etc.) of the media data file received through the UI; and (ii) sharing data representing transmission of data representing the segment ID to another user device (e.g., sharing a reference to or a title of a track with a friend at a concert). For example, the association can be based on data in the database representing a location of the live event and the stored location data from the nearby engine 228. The association can be based on data representing a current location or a recommendation (in rating data) of a friend of the user (recorded in the user's data record). The activity feed data can represent stored user data of other friend users who are represented in the stored user data of the user.). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Palmer, Perkalsky and Cheruvu to include the step of: a duration portion of a media rating. One would have been motivated to provide users with a means for generating, recording, and sharing user ratings for media segments. (See Palmer [0096].) Regarding claim 15, Cheruvu and Perkalsky disclose the method of claim 14. Perkalsky further discloses crediting, at the media manager computing device, a duration portion [of a media rating] associated with the media content item based on the content duration and the source identifier (Perkalsky col. 13: 61-65; col. 14: 5-7, 22-26. Still referring to FIG. 5, as represented by block 560, the method 500 includes validating the expected signature based on the first sequence of signatures and the second sequence of signatures in response to detecting the second message. The client device then compares the expected signature with the calculated joint signature to detect tampering. In some embodiments, as represented by block 570, the second message also indicates to the client device to start calculation of fingerprints for a second group of pictures starting at a third timestamp in the video stream and a set of corresponding audio frames starting at a fourth timestamp.). Cheruvu and Perkalsky do not explicitly disclose: a duration portion of a media rating; storing, in a media content ratings database, the credited duration portion of the media rating associated with the media content item; and providing, from the media content ratings database, a crediting output including at least the source of the media content item associated with the credited duration portion of the media rating.
However, in an analogous art, Palmer discloses a method, comprising the steps of: a duration portion of a media rating; storing, in a media content ratings database, the credited duration portion of the media rating associated with the media content item; and providing, from the media content ratings database, a crediting output including at least the source of the media content item associated with the credited duration portion of the media rating (Palmer [0096], [0105]. The user activity data is generated based on user input received by the devices 124 and can include: (i) ratings data representing a subjective rating (e.g., a like/dislike, or star rating, a ranking, etc.) of the media data file received through the UI; and (ii) sharing data representing transmission of data representing the segment ID to another user device (e.g., sharing a reference to or a title of a track with a friend at a concert). For example, the association can be based on data in the database representing a location of the live event and the stored location data from the nearby engine 228. The association can be based on data representing a current location or a recommendation (in rating data) of a friend of the user (recorded in the user's data record). The activity feed data can represent stored user data of other friend users who are represented in the stored user data of the user.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Palmer, Perkalsky and Cheruvu to include the step of: providing, from the media content ratings database, a crediting output including at least the source of the media content item associated with the credited duration portion of the media rating. One would have been motivated to provide users with a means for generating, recording, and sharing user ratings for media segments. (See Palmer [0096].)
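The Perkalsky passages quoted throughout (col. 12: 37-53; col. 13: 61-65) describe deriving sequences of signatures from timestamped video and audio packets and validating a received "expected signature" against them. A minimal sketch of that flow follows; the helper names and the assumed buffer layout of (timestamp, payload) tuples are hypothetical, not Perkalsky's actual data structures.

```python
# Illustrative sketch of a Perkalsky-style fingerprint validation: derive a
# signature sequence for video packets and corresponding audio packets from
# given start timestamps, then validate an expected joint signature (block 560).
import hashlib

def derive_signatures(packets, start_ts):
    """One digest per packet at or after the start timestamp."""
    return [hashlib.sha256(payload).hexdigest()
            for ts, payload in packets if ts >= start_ts]

def joint_signature(video_sigs, audio_sigs):
    """Combine both sequences into the joint signature the client compares."""
    return hashlib.sha256("".join(video_sigs + audio_sigs).encode()).hexdigest()

def validate(expected_sig, video_packets, audio_packets, ts_video, ts_audio):
    """Block 560: validate the expected signature against both sequences."""
    v = derive_signatures(video_packets, ts_video)
    a = derive_signatures(audio_packets, ts_audio)
    return joint_signature(v, a) == expected_sig

video = [(0, b"gop-frame-0"), (10, b"gop-frame-1")]
audio = [(0, b"audio-0"), (10, b"audio-1")]
expected = joint_signature(derive_signatures(video, 0), derive_signatures(audio, 0))
assert validate(expected, video, audio, 0, 0)                  # untampered stream
assert not validate(expected, [(0, b"altered")], audio, 0, 0)  # tampering detected
```

A mismatch anywhere in either sequence changes the joint digest, which is how the client device "compares the expected signature with the calculated joint signature to detect tampering."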
Regarding claim 20, Cheruvu and Perkalsky disclose the system of claim 19. Palmer further discloses wherein the set of operations further comprises causing display of a graphical representation of a crediting output from the media content ratings database including the duration portion of the media rating (Palmer [0096], [0105]. The user activity data is generated based on user input received by the devices 124 and can include: (i) ratings data representing a subjective rating (e.g., a like/dislike, or star rating, a ranking, etc.) of the media data file received through the UI; and (ii) sharing data representing transmission of data representing the segment ID to another user device (e.g., sharing a reference to or a title of a track with a friend at a concert). For example, the association can be based on data in the database representing a location of the live event and the stored location data from the nearby engine 228. The association can be based on data representing a current location or a recommendation (in rating data) of a friend of the user (recorded in the user's data record). The activity feed data can represent stored user data of other friend users who are represented in the stored user data of the user.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Palmer, Perkalsky and Cheruvu to include: causing display of a graphical representation of a crediting output from the media content ratings database including the duration portion of the media rating. One would have been motivated to provide users with a means for generating, recording, and sharing user ratings for media segments. (See Palmer [0096].) Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDWARD LONG whose telephone number is (571)272-8961.
The examiner can normally be reached Monday to Friday, 9 AM - 6 PM EST (Alternate Fridays). If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham, can be reached at (571) 270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /EDWARD LONG/ Examiner, Art Unit 2439 /LUU T PHAM/ Supervisory Patent Examiner, Art Unit 2439

Prosecution Timeline

Sep 06, 2024
Application Filed
Jan 02, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603775
DATA INTERACTION
2y 5m to grant Granted Apr 14, 2026
Patent 12598090
INFORMATION PROCESSING SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12587387
PROTECTING WEBCAM VIDEO FEEDS FROM VISUAL MODIFICATIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12567981
SYSTEMS AND METHODS FOR DATA AUTHENTICATION USING COMPOSITE KEYS AND SIGNATURES
2y 5m to grant Granted Mar 03, 2026
Patent 12563091
SYSTEM AND METHOD FOR DETECTING PATTERNS IN STRUCTURED FIELDS OF NETWORK TRAFFIC PACKETS
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+47.9%)
2y 11m
Median Time to Grant
Low
PTA Risk
Based on 184 resolved cases by this examiner. Grant probability derived from career allow rate.
