Details
Claims 1-2, 6-12, 14 and 16 are pending.
Claim 6 is objected to.
Claims 14 and 16 are allowed.
Claims 1-2 and 7-12 are rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al ("Watermarking Deep Neural Networks for Embedded Systems," 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD); Guo hereinafter) in view of Kotov et al (Pub. No.: US 2014/0304512 A1) and Kreft et al (Pub. No.: US 2014/0108786 A1).
As per claim 1, Guo discloses a computer-implemented method of providing a secured watermark to a neural network model, comprising: - receiving a training sample for watermarking (Guo, section 4.2, a set of images on which a watermark is embedded); - receiving verification data about the neural network (Guo, section 4.2.2, a message that proves Alice is the author); - generating a digital signature by encrypting the verification data (Guo, section 4.2.2, Alice generates the signature by hashing a message that proves her as the author); - generating a watermark pattern based on the certificate (Guo, section 4.2.2, employ the n-bit signature as a watermark pattern to be embedded into the image); - combining the watermark pattern with the training sample to generate a watermarked training sample (Guo, section 4.2.2, embed the n-bit signature into n pixels in the images ... the signature is added to the pixels); - pairing the watermarked training sample with a watermark classification label (Guo, sections 4.1, 4.2, the label of the embedded sample is remapped to a different class than that of the regular (without the watermark) sample); and - providing to the neural network model the paired watermarked training sample and watermark classification label for training (Guo, section 4.2, fine-tuning an existing model with the dataset containing images with message marks and remapped labels). Guo does not explicitly disclose generating a certificate for the neural network model, the certificate including the digital signature and the verification data used in the generation of the digital signature. However, generating a certificate that includes the digital signature and verification data is well known in the art.
For example, Kotov discloses generating a certificate, the certificate including the digital signature and the verification data used in the generation of the digital signature (Kotov, paragraph 0029, wherein generating a certificate for verification of the data file that contains a digital signature, as well as a file cryptographic digest and metadata retrieved from the Key Depot and associated with filing conditions).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate Kotov's teachings into Guo to achieve the claimed limitations, because doing so would have provided a way to ensure the sender's identity and the document's integrity, thereby enhancing security and authentication. Guo and Kotov do not explicitly disclose wherein the digital signature is generated by encrypting the verification data using a private key of an owner of the neural network model, wherein the verification data is a verifier string including ownership information and a random number generated by a random number generator, and wherein the digital signature is a bit-string used as a seed value for one-way hashing. However, Kreft discloses wherein the digital signature is generated by encrypting the verification data using a private key of an owner, wherein the verification data is a verifier string including ownership information (Kreft, paragraphs 0029, 0063, wherein a fingerprint of the entity block using a hash function is generated. The fingerprint allows the verification of the integrity of the entity block. The fingerprint is then encrypted using the private key of a public key pair, to thereby generate a digital signature of the entity block. The entity block is then combined with the encrypted fingerprint to form an integrity protected entity block. A signature can be applied to a bit-string (representing the data record/message to be signed) by using a hash function to calculate a hash value of the bit-string and to encrypt the hash value using a private signing key of the signing party. The encrypted hash value forms the so-called signature of the bit-string.
Authenticity of the bit-string can be verified by a party by freshly calculating the hash value of the bit-string using the same hash function as used in signing the bit-string, and comparing the freshly generated hash value with the decrypted hash value of the signature--if the hash values match each other, the bit-string has not been altered); and a random number generated by a random number generator (Kreft, paragraph 0029, wherein the integrity protected entity block is encrypted using a random secret key to thereby form an encrypted and integrity protected entity block. The random secret key is also encrypted using the private key of a public key pair. The software module (i.e. TRUSTLET) is assembled (or generated) by combining the encrypted and integrity protected entity block, and the encrypted random secret key), and wherein the digital signature is a bit-string used as a seed value for one-way hashing (Kreft, paragraphs 0029, 0063, wherein a signature can be applied to a bit-string (representing the data record/message to be signed) by using a hash function to calculate a hash value of the bit-string and to encrypt the hash value using a private signing key of the signing party. The encrypted hash value forms the so-called signature of the bit-string. Authenticity of the bit-string can be verified by a party by freshly calculating the hash value of the bit-string using the same hash function as used in signing the bit-string, and comparing the freshly generated hash value with the decrypted hash value of the signature--if the hash values match each other, the bit-string has not been altered).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate Kreft's teachings into Guo and Kotov to achieve the claimed limitations, because doing so would have provided a way to ensure the sender's identity and the document's integrity, thereby enhancing security and authentication.
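As an illustration of the hash-then-encrypt construction quoted from Kreft, the signing and verification steps can be sketched with textbook RSA. The key values below are deliberately tiny, insecure toy parameters chosen for readability, and the SHA-256 choice is an assumption; a real system would use a vetted cryptographic library.

```python
import hashlib

# Toy RSA parameters (illustrative and insecure): n = p*q, e*d = 1 mod phi(n)
p, q = 61, 53
n = p * q        # 3233
e = 17           # public exponent
d = 2753         # private exponent: 17 * 2753 = 46801 = 15 * 3120 + 1

def sign(bit_string: bytes) -> int:
    """Hash the bit-string, then encrypt the hash with the private key;
    the encrypted hash value forms the signature of the bit-string."""
    h = int.from_bytes(hashlib.sha256(bit_string).digest(), "big") % n
    return pow(h, d, n)

def verify(bit_string: bytes, signature: int) -> bool:
    """Freshly hash the bit-string and compare with the decrypted signature."""
    h = int.from_bytes(hashlib.sha256(bit_string).digest(), "big") % n
    return pow(signature, e, n) == h

message = b"a message that proves Alice is the author"
signature = sign(message)
```

Matching hash values confirm the bit-string has not been altered, mirroring the verification procedure Kreft describes.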
As per claim 2, claim 1 is incorporated and Guo further discloses generating the watermark pattern based on the digital signature; and generating the watermark classification label based on the digital signature (Guo, sections 4.2.1, 4.2.2, calculate the class mapping based on the n-bit signature; embed the n-bit signature into n pixels in the images).
As per claim 12, claim 1 is incorporated and Guo further discloses receiving a classification label for the training sample (Guo, section 4, The message mark is so undetectable that regular DNNs will classify the image to its true class regardless of whether the message mark has been added. A DNN watermarked by Alice, however, is able to recognize images embedded with the message mark and classify them to a different class than the original true class. Figure 2 depicts the idea); providing the training sample and the classification label as input to train the neural network, wherein the training sample and the classification label preferably comprise one pair of a plurality of pairs of normal training data provided to the neural network; and the paired watermarked training sample and the watermarked classification label comprise one pair of a plurality of pairs of watermarked training data provided to the neural network to inject a watermark, wherein the classification label and the watermark classification label are different (Guo, section 4.2.1, To embed a watermark, the model owner Alice would need to do the following: (1) Create an n-bit signature. (2) Create message mark m of suitable magnitude α based on the n-bit signature. (3) Calculate the mapping for class labels based on the n-bit signature. (4) Fine-tune an existing model f. While fine-tuning, we use, with an equal probability, both the original dataset and the dataset containing images with message marks and remapped labels.).
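The Guo procedure cited above (an n-bit signature added to pixels as a message mark, paired with a remapped label) can be sketched as follows. The pixel-selection rule, signature length, and remapping formula here are illustrative assumptions, not Guo's exact construction.

```python
import hashlib
import random

def make_watermarked_pair(image, true_label, message, num_classes,
                          n_bits=16, alpha=1, seed=0):
    """Toy sketch: hash the owner's message into an n-bit signature,
    add the signature bits to n pseudo-randomly chosen pixels, and
    remap the label to a different class derived from the signature."""
    digest = hashlib.sha256(message.encode()).digest()
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(n_bits)]
    rng = random.Random(seed)          # fixed seed: positions must be reproducible
    positions = rng.sample(range(len(image)), n_bits)
    marked = list(image)
    for bit, pos in zip(bits, positions):
        marked[pos] = min(255, marked[pos] + alpha * bit)   # add signature bit
    # remap to a class offset derived from the signature (never the true class)
    sig_value = int.from_bytes(digest[:2], "big")
    remapped = (true_label + 1 + sig_value % (num_classes - 1)) % num_classes
    return marked, remapped

image = [100] * 64                     # flat 8x8 "image"
marked, remapped = make_watermarked_pair(image, true_label=3,
                                         message="Alice is the author",
                                         num_classes=10)
```

Fine-tuning would then draw, with equal probability, from the original pairs and from such watermarked pairs, as the cited passage describes.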
Claim 15 is rejected under the same rationale as claim 1.
Claims 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al ("Watermarking Deep Neural Networks for Embedded Systems," 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD); Guo hereinafter) in view of Kotov et al (Pub. No.: US 2014/0304512 A1), Kreft et al (Pub. No.: US 2014/0108786 A1) and Isogai (Pub. No.: US 2008/0247597 A1).
As per claim 7, claim 1 is incorporated and Guo, Kotov and Kreft do not explicitly disclose receiving the watermark classification label; and generating the watermark pattern based on the watermark classification label. However, Isogai discloses receiving the watermark classification label; and generating the watermark pattern based on the watermark classification label (Isogai, paragraph 0029, wherein the watermark patterns embedded as digital watermark values in the video image are generated based on a homotopy class, which is a topological invariant quantity, and a single digital watermark value is constructed of three types of watermark patterns (X, Y, and Z)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate Isogai's teachings into Guo, Kotov and Kreft to achieve the claimed limitations, because doing so would have provided a way to embed and detect the watermark, thereby enhancing security.
As per claim 8, claim 7 is incorporated and Kreft discloses wherein the digital signature is generated by encrypting the verification data using a private key of an owner of the neural network model, wherein the verification data is a verifier string including the ownership information and the watermark classification label (Kreft, paragraphs 0029, 0063, wherein a fingerprint of the entity block using a hash function is generated. The fingerprint allows the verification of the integrity of the entity block. The fingerprint is then encrypted using the private key of a public key pair, to thereby generate a digital signature of the entity block. The entity block is then combined with the encrypted fingerprint to form an integrity protected entity block. A signature can be applied to a bit-string (representing the data record/message to be signed) by using a hash function to calculate a hash value of the bit-string and to encrypt the hash value using a private signing key of the signing party. The encrypted hash value forms the so-called signature of the bit-string. Authenticity of the bit-string can be verified by a party by freshly calculating the hash value of the bit-string using the same hash function as used in signing the bit-string, and comparing the freshly generated hash value with the decrypted hash value of the signature--if the hash values match each other, the bit-string has not been altered).
As per claim 9, claim 7 is incorporated and Kreft discloses wherein the watermark pattern is generated based on a one-way hash operation on the watermark classification label and a random number (Kreft, paragraphs 0029, 0063, wherein a signature can be applied to a bit-string (representing the data record/message to be signed) by using a hash function to calculate a hash value of the bit-string and to encrypt the hash value using a private signing key of the signing party. The encrypted hash value forms the so-called signature of the bit-string. Authenticity of the bit-string can be verified by a party by freshly calculating the hash value of the bit-string using the same hash function as used in signing the bit-string, and comparing the freshly generated hash value with the decrypted hash value of the signature--if the hash values match each other, the bit-string has not been altered. Kreft, paragraph 0029, wherein the integrity protected entity block is encrypted using a random secret key to thereby form an encrypted and integrity protected entity block. The random secret key is also encrypted using the private key of a public key pair. The software module (i.e. TRUSTLET) is assembled (or generated) by combining the encrypted and integrity protected entity block, and the encrypted random secret key).
As per claim 10, claim 7 is incorporated and Guo further discloses wherein the watermark classification label is predetermined by an owner of the neural network model (Guo, section 4, The message mark is so undetectable that regular DNNs will classify the image to its true class regardless of whether the message mark has been added. A DNN watermarked by Alice, however, is able to recognize images embedded with the message mark and classify them to a different class than the original true class. Figure 2 depicts the idea).
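The claim 9 limitation — a watermark pattern generated by a one-way hash operation on the watermark classification label and a random number — can be sketched in a few lines. The SHA-256 choice, the delimiter, and the pattern length are illustrative assumptions.

```python
import hashlib

def pattern_from_label(label: str, nonce: int, n_bits: int = 16):
    """One-way hash of the classification label concatenated with a
    random number; the low-order hash bits become the watermark pattern."""
    digest = hashlib.sha256(f"{label}:{nonce}".encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n_bits)]

# the nonce would come from a random number generator; fixed here for illustration
pattern = pattern_from_label("remapped_class_7", nonce=123456)
```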
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Guo et al ("Watermarking Deep Neural Networks for Embedded Systems," 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD); Guo hereinafter) in view of Kotov et al (Pub. No.: US 2014/0304512 A1), Kreft et al (Pub. No.: US 2014/0108786 A1) and Wendt (Pub. No.: US 2005/0123469 A1).
As per claim 11, claim 1 is incorporated and Guo, Kotov and Kreft do not explicitly disclose wherein combining the watermark pattern with the training sample to generate the watermarked training sample further comprises: transforming the training sample from a first domain to a second domain; combining the watermark pattern with the training sample in the second domain to generate a watermarked training sample in the second domain; and inverse transforming the watermarked training sample in the second domain to generate a watermarked training sample in the first domain. However, Wendt discloses wherein combining the watermark pattern with the training sample to generate the watermarked training sample further comprises: transforming the training sample from a first domain to a second domain; combining the watermark pattern with the training sample in the second domain to generate a watermarked training sample in the second domain; and inverse transforming the watermarked training sample in the second domain to generate a watermarked training sample in the first domain (Wendt, paragraph 0024, wherein a set of at least one original data frames in a first domain and a set of at least one mapped data frames in a second domain, the original data frames mapped to the second domain via the forward mapping function to form the mapped data frames; a geometric watermark pattern, the geometric watermark pattern representing a simple pattern in the second domain and a complex pattern in the first domain; a watermark signal, the watermark signal representing data other than the original data frames; a combined watermark pattern, the combined watermark pattern formed from the watermark signal combined with the geometric watermark pattern in the second domain; and, a set of at least one final data frames, the final data frames formed from the reverse mapping function of the mapped data frames joined with the combined watermark pattern).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to incorporate Wendt's teachings into Guo, Kotov and Kreft to achieve the claimed limitations, because doing so would have provided a way to create robust watermarks for copy protection of data that are resistant to efforts to overcome such watermarks and copy the accompanying data (see Wendt, paragraphs 0005, 0019).
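The transform-domain embedding Wendt describes (forward map to a second domain, combine with the watermark pattern there, inverse map back to the first domain) can be sketched with a one-level Haar transform standing in for the domain mapping; the transform choice, the additive combine, and the sample values are illustrative assumptions.

```python
def haar_forward(x):
    """Map pairs (a, b) to (average, half-difference): the 'second domain'."""
    return [((a + b) / 2, (a - b) / 2) for a, b in zip(x[0::2], x[1::2])]

def haar_inverse(coeffs):
    """Inverse map back to the first domain."""
    out = []
    for avg, diff in coeffs:
        out += [avg + diff, avg - diff]
    return out

def embed(sample, pattern, strength=0.5):
    coeffs = haar_forward(sample)                  # first -> second domain
    marked = [(avg, diff + strength * w)           # combine with pattern
              for (avg, diff), w in zip(coeffs, pattern)]
    return haar_inverse(marked)                    # second -> first domain

sample  = [10.0, 12.0, 9.0, 9.0, 14.0, 10.0, 8.0, 8.0]
pattern = [1, -1, 1, -1]
marked = embed(sample, pattern)
```

A pattern that is simple in the transform domain becomes a more complex perturbation after the inverse map, which is the property Wendt relies on for robustness.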
Allowable Subject Matter
Claims 14 and 16 are allowed.
Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Response to Arguments
Applicant's arguments filed on 01/23/2026 have been fully considered but they are not persuasive.
Applicant argues in remarks:
(1) Notably, claim 1 generates the digital signature by encrypting verification data which includes ownership information and a random number. The digital signature is a bit-string that can then be used as a seed value for one-way hashing. Quite differently, in Kreft, as stated in paragraph [0063], an "encrypted hash value forms the signature of the bit-string." Simply put, in claim 1, encrypting verification data, including a randomly generated number, is used to generate the digital signature, not an encrypted hash value.
(1) The argued limitation "a bit-string used as a seed value for one-way hashing" is intended-use language. Thus, the one-way hashing using a seed value is not actually claimed. A recitation of the intended use of the claimed invention must result in a structural difference between the claimed invention and the prior art in order to patentably distinguish the claimed invention from the prior art. If the prior art structure is capable of performing the intended use, then it meets the claim. The use of a bit-string as a seed value for one-way hashing does not result in a structural difference between the claimed invention and the prior art. Additionally, Kreft's structure is capable of using the bit-string as a seed value for one-way hashing, and thus the claimed invention is not patentably distinguished from the prior art.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAMZA N ALGIBHAH whose telephone number is (571)270-7212. The examiner can normally be reached 7:30 am - 3:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wing Chan can be reached at (571) 272-7493. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAMZA N ALGIBHAH/Primary Examiner, Art Unit 2441