DETAILED ACTION
Claims 1-20 are pending in this Office action.
Claims 1, 12 and 20 are independent claims.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings filed on 03/24/2022 are accepted.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 6, 8-13, 17, 19, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Aggarwal et al. (US 6834344 B1) [hereinafter "Aggarwal"] in view of Yang et al. (US 12147583 B2) [hereinafter "YANG"].
As per claim 1, Aggarwal discloses a computer-implemented method, comprising:
receiving, at an encoder, an image and a digital watermark ([Aggarwal, Abstract] "A method is presented for marking high-quality digital images with a robust and invisible watermark… The second phase comprises the marking. This can be done in form of an invisible robust watermark, or in form of some visible signature or watermark.");
outputting, by the encoder, a watermarked image generated based on the image and the digital watermark ([Aggarwal, [0020]] "the watermark should alter the image in a minor way such that the essential information content is not changed… the number N extracted from the watermarked image should be the same (or almost the same)…");
selecting, from a set of benign transforms, a benign transform ([Aggarwal, [0007], [0009], [0012]] "Fragile watermarks are designed to ensure that an image has not been modified… Robust watermarks are designed to survive modifications… such as printing and scanning, compression/decompression and/or high quality D/A/D conversion");
selecting, from a set of malicious transforms, a malicious transform ([Aggarwal, [0018]] "Here we distinguish between minor modifications which are acceptable and/or unavoidable modifications to the image for which the modified image is still considered authentic and significant modifications which are modifications of the intended information content of the image.");
performing the benign transform on the watermarked image to generate a benign image ([Aggarwal, [0018]] "Suppose the modification due to, say, printing and scanning, changes the colormap of the image… Thus the image after printing and scanning defines a number which, once coded, defines the same watermark");
performing the malicious transform on the watermarked image to generate a malicious image ([Aggarwal, [0018], [0007]] "significant modifications which are modifications of the intended information content of the image… The principle is that if the image has been modified, the watermark alerts to this fact");
decoding, by a decoder, the benign image to a first predicted value of the digital watermark ([Aggarwal, [0020]] "The marking is performed in two phases: 1. The first phase comprises extracting a digest or number N from the image so that N only (or mostly) depends on the essential information content, such that the same number N can be obtained from a scan of a high quality print of the image, from the compressed form of the image, or in general, from the image after minor modifications (introduced inadvertently by processing, noise etc.)");
decoding, by the decoder, the malicious image to a second predicted value of the digital watermark ([Aggarwal, [0007]] "if the image has been modified, the watermark alerts to this fact, and to some extent can localize where this modification has been done.").
Aggarwal does not explicitly disclose adjusting at least one weight of the decoder during a learning phase of the decoder by at least learning a minimum amount of error between the first predicted value, which corresponds to the benign image, and the digital watermark and learning a maximum amount of error between the second predicted value, which corresponds to the malicious image, and the digital watermark. However, YANG, in the same field of endeavor, discloses adjusting at least one weight of the decoder during a learning phase of the decoder ([YANG, [0004]] "Other detection-based methods aim to train classification networks to distinguish adversarial examples from benign images.") by at least learning a minimum amount of error between the first predicted value, which corresponds to the benign image, and the digital watermark and learning a maximum amount of error between the second predicted value, which corresponds to the malicious image, and the digital watermark ([YANG, [0010]] "Determining whether the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits embedded into the original digital image can include: determining a bit error rate based on comparing the embedded bits with the watermark bits, where the bit error rate represents a percentage of the embedded bits that are distorted with respect to the corresponding watermark bits; and determining that the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits when the bit error rate is less than an error rate threshold.").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aggarwal to further include adjusting at least one weight of the decoder during a learning phase of the decoder by at least learning a minimum amount of error between the first predicted value, which corresponds to the benign image, and the digital watermark and learning a maximum amount of error between the second predicted value, which corresponds to the malicious image, and the digital watermark, as suggested by YANG. One of ordinary skill in the art would have been motivated to do so because incorporating YANG's watermark-based benign-versus-adversarial determination, which relies on bit error rate thresholds, would improve Aggarwal's decoder training by explicitly constraining predicted watermark values based on classification outcomes.
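By way of illustration only, and not as a characterization of either reference's actual implementation, the adjusting step addressed above can be sketched as a single gradient update that minimizes the error between the benign prediction and the watermark while maximizing the error between the malicious prediction and the watermark (Python/PyTorch; all identifiers are hypothetical):

    import torch
    import torch.nn.functional as F

    def decoder_training_step(decoder, optimizer, benign_image, malicious_image,
                              watermark_bits, lam=1.0):
        # First predicted value: decoded from the benign image.
        pred_benign = decoder(benign_image)
        # Second predicted value: decoded from the malicious image.
        pred_malicious = decoder(malicious_image)
        # Learn a minimum amount of error on the benign prediction.
        loss_min = F.binary_cross_entropy_with_logits(pred_benign, watermark_bits)
        # Learn a maximum amount of error on the malicious prediction
        # (negating the term drives that error upward; in practice a margin
        # or clamp would typically bound it).
        loss_max = -F.binary_cross_entropy_with_logits(pred_malicious, watermark_bits)
        loss = loss_min + lam * loss_max
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()  # adjusts at least one weight of the decoder
        return loss.item()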
As per claim 2, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 1. YANG further discloses training the encoder to minimize an error between the image and the watermarked image, wherein the error comprises an image reconstruction loss and/or an adversarial loss ([YANG, [0010]] "Determining whether the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits embedded into the original digital image can include: determining a bit error rate based on comparing the embedded bits with the watermark bits, where the bit error rate represents a percentage of the embedded bits that are distorted with respect to the corresponding watermark bits; and determining that the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits when the bit error rate is less than an error rate threshold."). Claim 2 is rejected under the same rationale as claim 1 above.
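For clarity of the record, the bit-error-rate determination described in YANG's cited passage can be illustrated with the following sketch (plain Python; the identifiers and the threshold value are assumptions for illustration, not taken from YANG):

    def matches_watermark(embedded_bits, watermark_bits, error_rate_threshold=0.2):
        # Bit error rate: fraction of embedded bits distorted relative to
        # the corresponding watermark bits.
        distorted = sum(e != w for e, w in zip(embedded_bits, watermark_bits))
        bit_error_rate = distorted / len(watermark_bits)
        # The image is treated as containing the expected watermark (benign)
        # when the bit error rate is below the error rate threshold.
        return bit_error_rate < error_rate_threshold

    # Example: 1 of 8 bits distorted -> BER = 0.125 < 0.2 -> matching/benign.
    print(matches_watermark([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 1, 1, 0, 0, 1, 1]))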
As per claim 6, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 1. YANG further discloses wherein the malicious transform replaces an image portion of a subject of the watermarked image with another image portion, and/or wherein the malicious transform replaces at least a portion of a face image of the subject of the watermarked image with another face portion ([YANG, [0007]] "watermarked image can be transmitted through a potentially adversarial environment. A potentially perturbed image intended for the deep neural network image classifier can be received from the potentially adversarial environment. The potentially perturbed image can be analyzed to determine whether the potentially perturbed image includes a plurality of embedded bits matching the plurality of watermark bits embedded into the original digital image. The potentially perturbed image can be identified as an adversely modified image or benign image based on the comparison of the embedded bits and the expected watermark bits. The potentially perturbed image can be prevented from being provided to the deep neural network image classifier in response to determining that the potentially perturbed image is adverse. Benign images, on the other hand, can be provided as inputs to the deep neural network image classifier."). Claim 6 is rejected under the same rationale as claim 1 above.
As per claim 8, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 1. Aggarwal further discloses comparing the first predicted value of the digital watermark and/or the second predicted value to the digital watermark to determine whether the watermarked image has been maliciously transformed ([Aggarwal, [0007]] "The principle is that if the image has been modified, the watermark alerts to this fact, and to some extent can localize where this modification has been done"). Claim 8 is rejected under the same rationale as claim 1 above.
As per claim 9, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 1. Aggarwal further discloses wherein the digital watermark comprises an encrypted message, wherein the encrypted message is generated using a key and a message ([Aggarwal, [0019]] "A digital signature is a number that is obtained by encrypting a message or image through a digital signature algorithm and is mainly used to authenticate the integrity of the message or image. Digital watermarks are data added to the pixels of the image file. On the other hand, watermarks can have many uses, including, but not limited to integrity verification. For example, robust watermarks are used for claiming ownership. The present invention is to combine the benefits of robust watermarks (resistance to small modifications) with the benefits of fragile watermarks (detection of tampering of content)"). Claim 9 is rejected under the same rationale as claim 1 above.
As per claim 10, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 1. YANG further discloses wherein the encoder receives a plurality of images to enable the learning phase of the decoder ([YANG, [0031]] "The detector can then evaluate the received image to determine whether it has been adversely modified during transmission through the possibly adversarial environment. The evaluation of the received image can include determining whether the received image includes an expected watermark corresponding to the watermark embedded into the original image."). Claim 10 is rejected under the same rationale as claim 1 above.
As per claim 11, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 1. YANG further discloses wherein the adjusting further comprises adjusting at least one weight of an encoder during the learning phase ([YANG, [0032]] "If the evaluation of the received image indicates that the image has been adversely modified (e.g. the received image does not include the expected watermark), then the received image is prevented from being provided to the deep image classifier. If the evaluation of the received image indicates that the image has not been modified (e.g. the received image includes the expected watermark), then the received image can be provided to the deep image classifier"). Claim 11 is rejected under the same rationale as claim 1 above.
As per claim 12, Aggarwal discloses a system, comprising: at least one data processor; and at least one memory storing instructions which, when executed by the at least one data processor, cause operations comprising:
receiving, at an encoder, an image and a digital watermark ([Aggarwal, Abstract] "A method is presented for marking high-quality digital images with a robust and invisible watermark… The second phase comprises the marking. This can be done in form of an invisible robust watermark, or in form of some visible signature or watermark.");
outputting, by the encoder, a watermarked image generated based on the image and the digital watermark ([Aggarwal, [0020]] "the watermark should alter the image in a minor way such that the essential information content is not changed… the number N extracted from the watermarked image should be the same (or almost the same)…");
selecting, from a set of benign transforms, a benign transform ([Aggarwal, [0007], [0009], [0012]] "Fragile watermarks are designed to ensure that an image has not been modified… Robust watermarks are designed to survive modifications… such as printing and scanning, compression/decompression and/or high quality D/A/D conversion");
selecting, from a set of malicious transforms, a malicious transform ([Aggarwal, [0018]] "Here we distinguish between minor modifications which are acceptable and/or unavoidable modifications to the image for which the modified image is still considered authentic and significant modifications which are modifications of the intended information content of the image.");
performing the benign transform on the watermarked image to generate a benign image ([Aggarwal, [0018]] "Suppose the modification due to, say, printing and scanning, changes the colormap of the image… Thus the image after printing and scanning defines a number which, once coded, defines the same watermark");
performing the malicious transform on the watermarked image to generate a malicious image ([Aggarwal, [0018], [0007]] "significant modifications which are modifications of the intended information content of the image… The principle is that if the image has been modified, the watermark alerts to this fact");
decoding, by a decoder, the benign image to a first predicted value of the digital watermark ([Aggarwal, [0020]] "The marking is performed in two phases: 1. The first phase comprises extracting a digest or number N from the image so that N only (or mostly) depends on the essential information content, such that the same number N can be obtained from a scan of a high quality print of the image, from the compressed form of the image, or in general, from the image after minor modifications (introduced inadvertently by processing, noise etc.)");
decoding, by the decoder, the malicious image to a second predicted value of the digital watermark ([Aggarwal, [0007]] "if the image has been modified, the watermark alerts to this fact, and to some extent can localize where this modification has been done.").
Aggarwal does not explicitly disclose adjusting at least one weight of the decoder during a learning phase of the decoder by at least learning a minimum amount of error between the first predicted value, which corresponds to the benign image, and the digital watermark and learning a maximum amount of error between the second predicted value, which corresponds to the malicious image, and the digital watermark. However, YANG, in the same field of endeavor, discloses adjusting at least one weight of the decoder during a learning phase of the decoder ([YANG, [0004]] "Other detection-based methods aim to train classification networks to distinguish adversarial examples from benign images.") by at least learning a minimum amount of error between the first predicted value, which corresponds to the benign image, and the digital watermark and learning a maximum amount of error between the second predicted value, which corresponds to the malicious image, and the digital watermark ([YANG, [0010]] "Determining whether the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits embedded into the original digital image can include: determining a bit error rate based on comparing the embedded bits with the watermark bits, where the bit error rate represents a percentage of the embedded bits that are distorted with respect to the corresponding watermark bits; and determining that the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits when the bit error rate is less than an error rate threshold.").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aggarwal to further include adjusting at least one weight of the decoder during a learning phase of the decoder by at least learning a minimum amount of error between the first predicted value, which corresponds to the benign image, and the digital watermark and learning a maximum amount of error between the second predicted value, which corresponds to the malicious image, and the digital watermark, as suggested by YANG. One of ordinary skill in the art would have been motivated to do so because incorporating YANG's watermark-based benign-versus-adversarial determination, which relies on bit error rate thresholds, would improve Aggarwal's decoder training by explicitly constraining predicted watermark values based on classification outcomes.
As per claim 13, the combination of Aggarwal and YANG teaches the system of claim 12. YANG further discloses training the encoder to minimize an error between the image and the watermarked image, wherein the error comprises an image reconstruction loss and/or an adversarial loss ([YANG, [0010]] "Determining whether the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits embedded into the original digital image can include: determining a bit error rate based on comparing the embedded bits with the watermark bits, where the bit error rate represents a percentage of the embedded bits that are distorted with respect to the corresponding watermark bits; and determining that the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits when the bit error rate is less than an error rate threshold."). Claim 13 is rejected under the same rationale as claim 12 above.
As per claim 17, the combination of Aggarwal and YANG teaches the system of claim 12. YANG further discloses wherein the malicious transform replaces an image portion of a subject of the watermarked image with another image portion, and/or wherein the malicious transform replaces at least a portion of a face image of the subject of the watermarked image with another face portion ([YANG, [0007]] "watermarked image can be transmitted through a potentially adversarial environment. A potentially perturbed image intended for the deep neural network image classifier can be received from the potentially adversarial environment. The potentially perturbed image can be analyzed to determine whether the potentially perturbed image includes a plurality of embedded bits matching the plurality of watermark bits embedded into the original digital image. The potentially perturbed image can be identified as an adversely modified image or benign image based on the comparison of the embedded bits and the expected watermark bits. The potentially perturbed image can be prevented from being provided to the deep neural network image classifier in response to determining that the potentially perturbed image is adverse. Benign images, on the other hand, can be provided as inputs to the deep neural network image classifier."). Claim 17 is rejected under the same rationale as claim 12 above.
As per claim 19, the combination of Aggarwal and YANG teaches the system of claim 12. Aggarwal discloses comparing the first predicted value of the digital watermark and/or the second predicted value to the digital watermark to determine whether the watermarked image has been maliciously transformed ([Aggarwal, [0007]] "The principle is that if the image has been modified, the watermark alerts to this fact, and to some extent can localize where this modification has been done"), wherein the digital watermark comprises an encrypted message, wherein the encrypted message is generated using a key and a message, wherein the encoder receives a plurality of images to enable the learning phase of the decoder, and wherein the adjusting further comprises adjusting at least one weight of an encoder during the learning phase ([Aggarwal, [0019]] "A digital signature is a number that is obtained by encrypting a message or image through a digital signature algorithm and is mainly used to authenticate the integrity of the message or image. Digital watermarks are data added to the pixels of the image file. On the other hand, watermarks can have many uses, including, but not limited to integrity verification. For example, robust watermarks are used for claiming ownership. The present invention is to combine the benefits of robust watermarks (resistance to small modifications) with the benefits of fragile watermarks (detection of tampering of content)"). Claim 19 is rejected under the same rationale as claim 12 above.
As per claim 20, Aggarwal discloses a non-transitory computer-readable medium including instructions which, when executed by at least one data processor, cause operations comprising:
receiving, at an encoder, an image and a digital watermark ([Aggarwal, Abstract] "A method is presented for marking high-quality digital images with a robust and invisible watermark… The second phase comprises the marking. This can be done in form of an invisible robust watermark, or in form of some visible signature or watermark.");
outputting, by the encoder, a watermarked image generated based on the image and the digital watermark ([Aggarwal, [0020]] "the watermark should alter the image in a minor way such that the essential information content is not changed… the number N extracted from the watermarked image should be the same (or almost the same)…");
selecting, from a set of benign transforms, a benign transform ([Aggarwal, [0007], [0009], [0012]] "Fragile watermarks are designed to ensure that an image has not been modified… Robust watermarks are designed to survive modifications… such as printing and scanning, compression/decompression and/or high quality D/A/D conversion");
selecting, from a set of malicious transforms, a malicious transform ([Aggarwal, [0018]] "Here we distinguish between minor modifications which are acceptable and/or unavoidable modifications to the image for which the modified image is still considered authentic and significant modifications which are modifications of the intended information content of the image.");
performing the benign transform on the watermarked image to generate a benign image ([Aggarwal, [0018]] "Suppose the modification due to, say, printing and scanning, changes the colormap of the image… Thus the image after printing and scanning defines a number which, once coded, defines the same watermark");
performing the malicious transform on the watermarked image to generate a malicious image ([Aggarwal, [0018], [0007]] "significant modifications which are modifications of the intended information content of the image… The principle is that if the image has been modified, the watermark alerts to this fact");
decoding, by a decoder, the benign image to a first predicted value of the digital watermark ([Aggarwal, [0020]] "The marking is performed in two phases: 1. The first phase comprises extracting a digest or number N from the image so that N only (or mostly) depends on the essential information content, such that the same number N can be obtained from a scan of a high quality print of the image, from the compressed form of the image, or in general, from the image after minor modifications (introduced inadvertently by processing, noise etc.)");
decoding, by the decoder, the malicious image to a second predicted value of the digital watermark ([Aggarwal, [0007]] "if the image has been modified, the watermark alerts to this fact, and to some extent can localize where this modification has been done.").
Aggarwal does not explicitly disclose adjusting at least one weight of the decoder during a learning phase of the decoder by at least learning a minimum amount of error between the first predicted value, which corresponds to the benign image, and the digital watermark and learning a maximum amount of error between the second predicted value, which corresponds to the malicious image, and the digital watermark. However, YANG, in the same field of endeavor, discloses adjusting at least one weight of the decoder during a learning phase of the decoder ([YANG, [0004]] "Other detection-based methods aim to train classification networks to distinguish adversarial examples from benign images.") by at least learning a minimum amount of error between the first predicted value, which corresponds to the benign image, and the digital watermark and learning a maximum amount of error between the second predicted value, which corresponds to the malicious image, and the digital watermark ([YANG, [0010]] "Determining whether the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits embedded into the original digital image can include: determining a bit error rate based on comparing the embedded bits with the watermark bits, where the bit error rate represents a percentage of the embedded bits that are distorted with respect to the corresponding watermark bits; and determining that the potentially perturbed image includes the plurality of embedded bits matching the plurality of watermark bits when the bit error rate is less than an error rate threshold.").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Aggarwal to further include adjusting at least one weight of the decoder during a learning phase of the decoder by at least learning a minimum amount of error between the first predicted value, which corresponds to the benign image, and the digital watermark and learning a maximum amount of error between the second predicted value, which corresponds to the malicious image, and the digital watermark, as suggested by YANG. One of ordinary skill in the art would have been motivated to do so because incorporating YANG's watermark-based benign-versus-adversarial determination, which relies on bit error rate thresholds, would improve Aggarwal's decoder training by explicitly constraining predicted watermark values based on classification outcomes.
Claims 3, 4, 14, and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over Aggarwal et al. (US 6834344 B1) [hereinafter "Aggarwal"] in view of Yang et al. (US 12147583 B2) [hereinafter "YANG"], and further in view of Zhang et al. ("Invisible steganography via generative adversarial networks," 2018) [hereinafter "Zhang"], as applied to claims 1, 2, 12, and 13 above.
As per claim 3, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 2. The combination of Aggarwal and YANG does not teach wherein the training to minimize the error between the image and the watermarked image further comprises using a discriminator to determine the adversarial loss indicative of whether the watermarked image is the image. However, Zhang, in the same field of endeavor, discloses wherein the training to minimize the error between the image and the watermarked image further comprises using a discriminator to determine the adversarial loss indicative of whether the watermarked image is the image ([Zhang, Abstract] "We introduce the generative adversarial networks to strengthen the security by minimizing the divergence between the empirical probability distributions of stego images and natural images").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Aggarwal and YANG to further include wherein the training to minimize the error between the image and the watermarked image further comprises using a discriminator to determine the adversarial loss indicative of whether the watermarked image is the image, as taught by Zhang. One of ordinary skill in the art would have been motivated to do so because Zhang teaches using a discriminator during training to compute an adversarial loss indicating whether the watermarked (stego) image is distinguishable from the original image, which strengthens the security and imperceptibility of the embedding.
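For illustration only, a discriminator-based adversarial loss of the general kind Zhang describes can be sketched as follows (Python/PyTorch; hypothetical identifiers, not Zhang's actual implementation):

    import torch
    import torch.nn.functional as F

    def adversarial_losses(discriminator, cover_image, watermarked_image):
        real_logit = discriminator(cover_image)
        fake_logit = discriminator(watermarked_image)
        # Discriminator loss: learn to label cover images 1 and watermarked
        # (stego) images 0, i.e., to decide whether the watermarked image
        # "is" the original image.
        d_loss = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
                  + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))
        # Encoder's adversarial loss: fool the discriminator so that the
        # watermarked image is indistinguishable from the cover image.
        adv_loss = F.binary_cross_entropy_with_logits(fake_logit, torch.ones_like(fake_logit))
        return d_loss, adv_loss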
As per claim 4, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 1. The combination of Aggarwal and YANG does not teach wherein the encoder comprises a convolutional neural network and/or a U-NET, and wherein the decoder comprises a convolutional neural network and/or a U-NET. However, Zhang, in the same field of endeavor, discloses wherein the encoder comprises a convolutional neural network and/or a U-NET, and wherein the decoder comprises a convolutional neural network and/or a U-NET ([Zhang, section 2] "The encoder network can conceal a secret image into a same size cover image successfully and the decoder network can reveal out the secret image completely.").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Aggarwal and YANG to further include wherein the encoder comprises a convolutional neural network and/or a U-NET, and wherein the decoder comprises a convolutional neural network and/or a U-NET, as taught by Zhang. One of ordinary skill in the art would have been motivated to do so because Zhang teaches implementing encoder and decoder watermarking architectures using convolutional neural networks for embedding and recovering watermarks.
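A minimal convolutional encoder/decoder pair of the general kind recited might look as follows (sketch only; not the architecture of any cited reference, and all layer sizes and identifiers are assumptions):

    import torch
    import torch.nn as nn

    class ConvEncoder(nn.Module):
        # Embeds a watermark plane into a 3-channel image, producing a
        # watermarked image of the same spatial size.
        def __init__(self, wm_channels=1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 + wm_channels, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, image, watermark_plane):
            return self.net(torch.cat([image, watermark_plane], dim=1))

    class ConvDecoder(nn.Module):
        # Predicts watermark bits (as logits) from a possibly transformed image.
        def __init__(self, num_bits=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_bits),
            )

        def forward(self, image):
            return self.net(image)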
As per claim 14, the combination of Aggarwal and YANG teaches the system of claim 13. The combination of Aggarwal and YANG does not teach wherein the training to minimize the error between the image and the watermarked image further comprises using a discriminator to determine the adversarial loss indicative of whether the watermarked image is the image. However, Zhang, in the same field of endeavor, discloses wherein the training to minimize the error between the image and the watermarked image further comprises using a discriminator to determine the adversarial loss indicative of whether the watermarked image is the image ([Zhang, section 3.3] "The basic model can finish the entire hiding and revealing process, so we use the basic model as the generator, and introduce a CNN-based steganalysis model as the discriminator and the steganalyzer").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Aggarwal and YANG to further include wherein the training to minimize the error between the image and the watermarked image further comprises using a discriminator to determine the adversarial loss indicative of whether the watermarked image is the image, as taught by Zhang. One of ordinary skill in the art would have been motivated to do so because Zhang teaches using a discriminator during training to compute an adversarial loss indicating whether the watermarked (stego) image is distinguishable from the original image, which strengthens the security and imperceptibility of the embedding.
As per claim 15, the combination of Aggarwal and YANG teaches the system of claim 12. The combination of Aggarwal and YANG does not teach wherein the encoder comprises a convolutional neural network and/or a U-NET, and wherein the decoder comprises a convolutional neural network and/or a U-NET. However, Zhang, in the same field of endeavor, discloses wherein the encoder comprises a convolutional neural network and/or a U-NET, and wherein the decoder comprises a convolutional neural network and/or a U-NET ([Zhang, sections 3.1-3.2] "The encoder-decoder architecture can be trained end-to-end, which is called as the basic model." and "we introduce the inception module [23] in our encoder network.").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Aggarwal and YANG to further include wherein the encoder comprises a convolutional neural network and/or a U-NET, and wherein the decoder comprises a convolutional neural network and/or a U-NET, as taught by Zhang. One of ordinary skill in the art would have been motivated to do so because Zhang teaches implementing encoder and decoder watermarking architectures using convolutional neural networks for embedding and recovering watermarks.
Claims 5, 7, 16, and 18 are rejected under 35 U.S.C. § 103 as being unpatentable over Aggarwal et al. (US 6834344 B1) [hereinafter "Aggarwal"] in view of Yang et al. (US 12147583 B2) [hereinafter "YANG"], and further in view of Goodfellow et al. ("Explaining and Harnessing Adversarial Examples," 2015) [hereinafter "Goodfellow"], as applied to claims 1 and 12 above.
As per claim 5, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 1. The combination of Aggarwal and YANG does not explicitly teach wherein the benign transform is selected from a set of benign transforms comprising an image compression of the watermarked image, a color adjustment of the watermarked image, a lighting adjustment of the watermarked image, a contrast adjustment of the watermarked image, a downsizing of the watermarked image, an upsizing of the watermarked image transformation, a horizontal and/or vertical translation of the watermarked image, and/or a rotation of the watermarked image. However, Goodfellow, in the same field of endeavor, discloses wherein the benign transform is selected from a set of benign transforms comprising an image compression of the watermarked image, a color adjustment of the watermarked image, a lighting adjustment of the watermarked image, a contrast adjustment of the watermarked image, a downsizing of the watermarked image, an upsizing of the watermarked image transformation, a horizontal and/or vertical translation of the watermarked image, and/or a rotation of the watermarked image ([Goodfellow, section 3] "In many problems, the precision of an individual input feature is limited. For example, digital images often use only 8 bits per pixel so they discard all information below 1/255 of the dynamic range. Because the precision of the features is limited, it is not rational for the classifier to respond differently to an input x than to an adversarial input x˜ = x + η if every element of the perturbation η is smaller than the precision of the features. Formally, for problems with well-separated classes, we expect the classifier to assign the same class to x and x˜ so long as ||η||∞ < c, where c is small enough to be discarded by the sensor or data storage apparatus associated with our problem").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Aggarwal and YANG to further include wherein the benign transform is selected from a set of benign transforms comprising an image compression of the watermarked image, a color adjustment of the watermarked image, a lighting adjustment of the watermarked image, a contrast adjustment of the watermarked image, a downsizing of the watermarked image, an upsizing of the watermarked image transformation, a horizontal and/or vertical translation of the watermarked image, and/or a rotation of the watermarked image, as suggested by Goodfellow. One of ordinary skill in the art would have been motivated to do so because Goodfellow teaches that a classifier should respond the same way to inputs that differ only by perturbations below the precision of the input features, which suggests treating minor image transformations such as compression, resizing, and brightness or contrast adjustments as benign while remaining sensitive to malicious manipulations.
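For illustration only, the selection of a benign transform from such a set can be sketched as follows (Python with torchvision functional transforms operating on a PIL image; all identifiers and parameter values are hypothetical):

    import random
    import torchvision.transforms.functional as TF

    def apply_random_benign_transform(image):
        # One benign transform is selected from the set and applied to the
        # watermarked image (PIL.Image in, PIL.Image out).
        benign_transforms = [
            lambda im: TF.adjust_contrast(im, 1.2),                     # contrast adjustment
            lambda im: TF.adjust_brightness(im, 0.9),                   # lighting adjustment
            lambda im: TF.resize(im, [im.height // 2, im.width // 2]),  # downsizing
            lambda im: TF.rotate(im, 5.0),                              # rotation
            lambda im: TF.affine(im, angle=0.0, translate=[4, 0],
                                 scale=1.0, shear=[0.0]),               # horizontal translation
        ]
        return random.choice(benign_transforms)(image)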
As per claim 7, the combination of Aggarwal and YANG teaches the computer-implemented method of claim 1. The combination of Aggarwal and YANG does not teach wherein the malicious transform uses a mask that replaces at least a portion of the watermarked image with another image portion. However, Goodfellow, in the same field of endeavor, discloses wherein the malicious transform uses a mask that replaces at least a portion of the watermarked image with another image portion ([Goodfellow, Abstract] "adversarial examples—inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence").
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Aggarwal and YANG to further include wherein the malicious transform uses a mask that replaces at least a portion of the watermarked image with another image portion, as suggested by Goodfellow. One of ordinary skill in the art would have been motivated to do so because Goodfellow teaches that intentionally applied worst-case perturbations cause models to produce incorrect outputs with high confidence, which motivates treating such deliberate, localized replacements as malicious transforms during training.
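For illustration only, a mask-based malicious transform of the kind recited can be sketched as follows (Python/NumPy; hypothetical identifiers, not drawn from any cited reference):

    import numpy as np

    def mask_splice(watermarked, donor, mask):
        # watermarked, donor: HxWx3 arrays; mask: HxW array of {0, 1}.
        # Where mask == 1, the watermarked image's pixels are replaced by
        # the donor image's pixels (e.g., splicing in another image portion).
        mask3 = mask[..., None].astype(watermarked.dtype)
        return watermarked * (1 - mask3) + donor * mask3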
As per claim 16, the substance of the claimed invention is identical to that of claim 5. Accordingly, this claim is rejected under the same rationale as claim 5.
As per claim 18, the substance of the claimed invention is identical to that of claim 7. Accordingly, this claim is rejected under the same rationale as claim 7.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Xi et al. (CN104700345A) discloses a method for improving the detection rate of semi-fragile watermark authentication by establishing a Benford's law threshold value library.
Yuan et al. ("Semi-Fragile Neural Network Watermarking Based on Adversarial Examples," IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 4, pp. 2775-2790) discloses semi-fragile neural network watermarking based on adversarial examples.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Komi N. AMEVIGBE whose telephone number is (571)272-3381. The examiner can normally be reached Monday-Friday 2pm-10pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Carl Colin can be reached at (571) 272-3862. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.N.A./Examiner, Art Unit 2493
/CARL G COLIN/Supervisory Patent Examiner, Art Unit 2493