Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR
1.17(e), was filed in this application after final rejection. Since this application is eligible for continued
examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the
finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's
submission filed on 12/11/2025 has been entered.
Response to Amendments
The amendments and arguments filed 12/11/2025 have been entered. Claims 1-20 remain pending in the application.
Applicant’s arguments and amendments, filed 07/30/2025 with respect to the rejection of claims 1-20
under 35 U.S.C. 103, have been fully considered and are persuasive.
Applicant argues that amended claim 1 is patentable over the combination of Cheng, Cohen, and Xuehua because the amendment clarifies that the update adding the watermark is performed by a second neural network specifically trained to embed the given watermark into a portion of the file. Applicant contends that none of the cited references teaches or suggests a neural network trained for watermark embedding. In particular, Applicant asserts that Xuehua discloses only traditional watermarking techniques implemented through deterministic algorithmic processes, such as DCT-domain transforms, patchwork methods, and spatial-domain insertion, and therefore does not teach or suggest a machine-learned neural network trained for embedding a watermark. Applicant further argues that Cheng discloses neural networks only for watermark detection or removal, rather than embedding, and that a combination of Cheng and Xuehua would at most result in a system including a neural network for detection/removal and a conventional algorithmic embedding module, rather than a neural network trained to perform watermark embedding as recited in amended claim 1.
The examiner respectfully agrees that there is no second neural network trained to embed the given watermark into the portion of the file as recited in the amended claim.
However, upon further consideration, new grounds of rejection have been raised (see below).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4-6, 8-12, 14-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. (NPL: "Large-Scale Visible Watermark Detection and Removal with Deep Convolutional Networks") in view of Revital et al. (US 2021/0067842 A1).
Regarding claim 1,
Cheng teaches the limitation “a processor comprising circuitry configured to” (Section 4 “the watermark detection and removal modules, are evaluated and the experiments are conducted on a computer cluster equipped with NVIDIA Tesla K80 GPU with 12 GB memory” Cheng discloses large-scale visible watermark detection and removal with deep convolutional networks. The method comprises a watermark detection module and a watermark removal module and is run on a computer with an NVIDIA Tesla K80 GPU, suggesting a processor comprising circuitry, as claimed.)
Cheng teaches the limitation “process a portion of a file in a first neural network” (Section 3-3.2 Fig.1 “From the machine learning perspective, watermark detection can be viewed as an two classification task, where the cropped image patches are classified into the watermark or background category. ... To be more specific, our model takes as input a watermarked image and estimates the probabilities of all candidates with different scale and ratio at all location in the image classified as the area which is tightly covered by a watermark”. Cheng discloses that the model takes and processes an input image, including classifying whether cropped image patches fall into the watermark or background category. In other words, Cheng’s model evaluates cropped image patches of an image to determine watermark presence, and therefore necessarily processes portions of the image irrespective of whether those portions contain a watermark, which reads on the claimed processing step under the broadest reasonable interpretation.)
Cheng teaches a part of the limitation “perform an update to the portion of the file ... responsive to a determination by the first neural network that no watermark is present in the portion of the file, wherein the update is performed by one of an application” (page 8 section 3.2 “From the machine learning perspective, watermark detection can be viewed as an two classification task, where the cropped image patches are classified into the watermark or background category ... our model takes as input a watermarked image and estimates the probabilities of all candidates with different scale and ratio at all location in the image classified as the area which is tightly covered by a watermark ... our proposed watermark detector can be trained effectively. More importantly, our proposed method can detect watermarks in images effectively and efficiently” and page 9 section 3.3 “Once the watermarks in images are accurately detected, the detection results can be used for further image-based watermarks processing such as watermark removal ...” Cheng discloses a machine learning process that detects a watermark within an input image using classification. The classification categorizes image patches into the watermark or background category, which is analogous to the claimed determination by the first neural network that no watermark is present in the portion of the file. Cheng further discloses that the classification result is used to determine which portions of the image undergo subsequent watermark processing, such that image patches containing a watermark have the watermark removed via a removal network. The removal operation therefore constitutes an update performed responsive to the determination generated by the classification/object-detection neural network, which is analogous to the claimed update operation responsive to a determination by the first neural network.)
Cheng does not teach part of the limitation “... add a given watermark to the portion of the file ... a second neural network trained to embed the given watermark into the portion of the file”. However, Revital teaches this limitation (paragraph 53 “At step 255, generator 222 takes as an input 215 that includes content 210 (e.g., a video signal having multiple video frames) and a metadata 216 ... At step 257, generator 222 may generate watermarked content 224”, paragraph 54 “Generator 222 may generate and embed a digital watermark in content such as images or video data using a variety of ways”, and paragraph 55 “In some embodiments, generator 222 may be a machine-learning model such as a neural network. Generator 222 may be trained using a flow diagram 300 shown in FIG. 3. In an example embodiment, at step 255, generator 222 may receive input 215 and output watermarked content” Revital discloses systems and methods for providing watermarked content. The watermarked content may be a watermarked image generated via a trained generator neural network machine learning model. The generator model can be trained to embed a digital watermark into given input content, which is analogous to the claimed second neural network trained to embed the given watermark into the portion of the file as an update operation to the file. It would have been obvious to one of ordinary skill in the art to utilize the watermark generating technique of Revital in the system of Cheng such that watermark embedding is performed responsive to the determination made by Cheng’s system regarding watermark presence, as discussed above. The motivation to combine Cheng and Revital is set forth below.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teaching of watermark detection and removal with deep convolutional networks by Cheng with the teaching of systems and methods for providing watermarked content using a trainable generator neural network by Revital. The motivation to do so is found in Revital’s disclosure (paragraph 3 “The data contained in the digital watermark may include identifiable information about a recipient, such that a copy of the multimedia content that is intentionally leaked and distributed may be traced back to the recipient. Additionally, distributors of multimedia content can use network detectors to check for digital watermarks within documents, images, video and audio data, and disrupt attempts to upload the watermarked content to the web or forwarding it in an email”). Revital discloses that the data contained in the digital watermark includes identifiable information about a recipient for content protection, such that the watermark helps trace any watermarked content that is leaked. Thus, it is beneficial to include a watermark in any digital content, such as an image, file, or video, for protection purposes. It would have been obvious to one of ordinary skill in the art to utilize the watermark generating technique of Revital in the system of Cheng such that watermark embedding is performed responsive to the determination made by Cheng’s system regarding watermark presence, as discussed above.
Since Cheng teaches detecting whether watermark content is present within portions of an image, while Revital teaches generating watermarked content using a trained neural network for efficiency, traceability, and content-protection purposes, a person of ordinary skill in the art would have recognized that using a trained neural network to insert a watermark after determining that a watermark is absent avoids redundant watermark insertion and improves content consistency, traceability, and data protection. The combination therefore represents a predictable use of known watermark detection and watermark embedding techniques via a trained neural network according to their established functions.
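For illustration only, and not as part of the record of either reference, the control flow of the asserted combination (Cheng's detection network gating Revital's embedding network) can be sketched as follows; all function names, the toy "wm" marker, and the string-based patches are hypothetical stand-ins, not the actual networks:

```python
# Hypothetical sketch of the detect-then-embed pipeline described above.
# A "first network" classifies each patch; only patches determined to
# lack a watermark are passed to a "second network" that embeds one.

def detect_watermark(patch):
    """Stand-in for Cheng's detection network: True if a watermark
    is deemed present in the patch (toy substring classifier)."""
    return "wm" in patch

def embed_watermark(patch, watermark="wm"):
    """Stand-in for Revital's trained generator: returns the patch
    updated to carry the given watermark."""
    return patch + "+" + watermark

def process_file(patches):
    updated = []
    for patch in patches:
        if not detect_watermark(patch):     # determination by the "first network"
            patch = embed_watermark(patch)  # update by the "second network"
        updated.append(patch)
    return updated
```

The sketch shows only the asserted combination's control flow: embedding is performed responsive to a negative detection result, so already-watermarked portions are left unchanged.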
Regarding claim 2, which depends on claim 1, the rejection of claim 1 is incorporated.
Cheng teaches the limitation “The electronic device of claim 1, wherein, responsive to the watermark being determined to be present in the portion of the file, the processor is configured to remove the watermark from the portion of the file” (Section 3-3.3 Fig.1 “Once the watermarks in images are accurately detected, the detection results can be used for further image-based watermarks processing such as watermark removal ... In this work, we mainly investigate the former task, the watermark removal, and develop image transformation based method for it. ... Our work focuses on partial transformation task (i.e. transfer a specific patch of a image). More specifically, pixels inside the detected area are expected to be recovered to unmarked condition, while those in unmarked area in the watermarked image will remain unchanged. Specifically, we adapt the architecture of our removal network as that of the U-net” Cheng discloses that, after the watermark is detected within an image, the detection results can be used for watermark removal using a watermark removal system based on deep neural networks.)
Regarding claim 4, which depends on claim 2, the rejection of claim 2 is incorporated.
Cheng teaches the limitation “The electronic device of claim 2, wherein removing the watermark from the portion of the file includes retaining and/or recreating human visible information obscured by the watermark in the portion of the file” (Section 3-3.3 Fig.1 “Specifically, we adapt the architecture of our removal network as that of the U-net [7]. This network is mirror symmetrical in structure, with skip connection between corresponding blocks. In this way, the shallow features near to the input get combination with those high-level features so that the low-level features such as location and texture of input image can be preserved” Cheng discloses that the watermark removal system removes the watermark while preserving features such as the location and texture of the input image, which is analogous to removing the watermark and retaining human-visible information obscured by the watermark, as claimed.)
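For illustration only, the mirror-symmetric skip-connection idea quoted above (shallow features saved on the contracting path and combined back on the expanding path, so low-level detail such as location and texture is preserved) can be sketched framework-free; the functions below are hypothetical toy stand-ins, not Cheng's actual U-net:

```python
# Hypothetical, framework-free sketch of U-net-style skip connections:
# features saved while downsampling are added back while upsampling,
# so input detail survives to the output.

def downsample(x):
    # average adjacent pairs (toy "encoder" stage; length must be even)
    return [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]

def upsample(x):
    # duplicate each value (toy "decoder" stage)
    return [v for v in x for _ in range(2)]

def unet_like(x):
    skips = []
    # contracting path: save a copy at each resolution for the skip link
    for _ in range(2):
        skips.append(x)
        x = downsample(x)
    # expanding path: mirror-symmetric, combining the saved shallow features
    for skip in reversed(skips):
        x = [u + s for u, s in zip(upsample(x), skip)]
    return x
```

The toy network preserves the input's length and mixes its low-resolution summary with the saved full-resolution copies, which is the structural property the quoted passage relies on.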
Regarding claim 5, which depends on claim 1, the rejection of claim 1 is incorporated.
Revital teaches the limitation “acquire watermark information from the watermark” (paragraph 55 “After completion of step 261, the modified generator 222 may attempt to watermark content 224 by receiving input 215 at step 255 and outputting watermarked content 224 at step 257. The training process, including steps 255 through 261, may be repeated until it is established that generator 222 succeeded in watermarking the content.” Revital discloses that the discriminator generates a classification of whether the content contains the embedded watermark. The discriminator output therefore constitutes watermark information indicative of whether the embedded watermark satisfies a required condition.)
Revital teaches the limitation “replace the watermark with a different watermark using one of the second neural network or the application, responsive to the watermark information not matching an information template” (paragraph 55 “generator 222 may receive input 215 and output watermarked content 224 at step 257. At step 259, content 224 may then be input to a discriminator 212, and discriminator 212 may output classification labels 214A or 214B at step 253. In an example embodiment, classification label 214A may indicate that content 224 contains the watermark ... and classification label 214B may indicate that content 224 does not contain the watermark ... If discriminator 212 outputs classification label 214A, then generator 222 did not succeed in watermarking the content, and at step 261 parameters of generator 222 may be modified ... After completion of step 261, the modified generator 222 may attempt to watermark content 224 by receiving input 215 at step 255 and outputting watermarked content 224 at step 257. The training process, including steps 255 through 261, may be repeated until it is established that generator 222 succeeded in watermarking the content” Revital discloses that the discriminator generates a classification of whether the content contains the embedded watermark. The discriminator output therefore constitutes watermark information indicative of whether the embedded watermark satisfies a required condition. When the discriminator determines that watermarking has not been successfully achieved, parameters are modified and watermark generation is repeated until successful watermarking is established. Thus, the watermarking result is updated or replaced responsive to the watermark information indicating that the previously generated watermark does not satisfy the required condition.
Under the broadest reasonable interpretation, determining whether a watermarking operation is successful and regenerating watermarked content in response to such a determination corresponds to replacing the watermark with a different watermark responsive to watermark information not matching an information template, as claimed.)
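For illustration only, the generator/discriminator retry loop cited above (generate, check, modify parameters, regenerate until the watermark is established) can be sketched as follows; the string-based content, the "strength" parameter, and the substring check are all hypothetical stand-ins for Revital's generator 222 and discriminator 212:

```python
# Hypothetical toy of the train-until-watermarked loop: the generator
# output is checked by a discriminator, and the generator's parameter
# is adjusted and rerun until the watermark is detected in the output.

def train_until_watermarked(content, watermark, max_steps=10):
    strength = 0                                  # hypothetical generator parameter
    for _ in range(max_steps):
        marked = content + watermark * strength   # "generator" output
        if watermark in marked:                   # "discriminator" check
            return marked                         # success: watermark present
        strength += 1                             # modify parameters and retry
    raise RuntimeError("generator failed to embed watermark")
```

The point of the sketch is only the feedback structure: the previously generated result is discarded and regenerated whenever the check on the watermark information fails.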
Regarding claim 6, which depends on claim 1, the rejection of claim 1 is incorporated.
Revital teaches the limitation “The electronic device of claim 1, wherein the second neural network is a generative neural network and the application is a watermarking application” (paragraph 55 “In some embodiments, generator 222 may be a machine-learning model such as a neural network. Generator 222 may be trained using a flow diagram 300 shown in FIG. 3. In an example embodiment, at step 255, generator 222 may receive input 215 and output watermarked content” Revital discloses that the generator may be a machine-learning model, such as a neural network, trained to perform the watermarking process; the trained generator neural network corresponds to the claimed generative neural network.)
Regarding claim 8, which depends on claim 1, the rejection of claim 1 is incorporated.
Revital teaches the limitation “The electronic device of claim 1, wherein responsive to the portion of the file including a given watermark, the circuitry is configured to replace the given watermark with a different watermark using one of the second neural network or the application, responsive to the watermark information not matching an information template” (paragraph 55 “generator 222 may receive input 215 and output watermarked content 224 at step 257. At step 259, content 224 may then be input to a discriminator 212, and discriminator 212 may output classification labels 214A or 214B at step 253. In an example embodiment, classification label 214A may indicate that content 224 contains the watermark ... and classification label 214B may indicate that content 224 does not contain the watermark ... If discriminator 212 outputs classification label 214A, then generator 222 did not succeed in watermarking the content, and at step 261 parameters of generator 222 may be modified ... After completion of step 261, the modified generator 222 may attempt to watermark content 224 by receiving input 215 at step 255 and outputting watermarked content 224 at step 257. The training process, including steps 255 through 261, may be repeated until it is established that generator 222 succeeded in watermarking the content” Revital discloses that the discriminator generates a classification of whether the content contains the embedded watermark. The discriminator output therefore constitutes watermark information indicative of whether the embedded watermark satisfies a required condition. When the discriminator determines that watermarking has not been successfully achieved, parameters are modified and watermark generation is repeated until successful watermarking is established. Thus, the watermarking result is updated or replaced responsive to the watermark information indicating that the previously generated watermark does not satisfy the required condition.
Under the broadest reasonable interpretation, determining whether a watermarking operation is successful under a required condition and regenerating watermarked content in response to such a determination corresponds to replacing the watermark with a different watermark responsive to watermark information not matching an information template, as claimed.)
Regarding claim 9, which depends on claim 1, the rejection of claim 1 is incorporated.
Cheng teaches the limitation “The electronic device of claim 1, wherein the first neural network is a classification neural network received from an external source, the classification neural network having been trained to determine a presence of the watermark in the portion of the file.” (Section 3-3.2 Fig.1 “From the machine learning perspective, watermark detection can be viewed as an two classification task, where the cropped image patches are classified into the watermark or background category. ... In this paper, we formulate watermark detection as an object detection problem. ... To be more specific, our model takes as input a watermarked image and estimates the probabilities of all candidates with different scale and ratio at all location in the image classified as the area which is tightly covered by a watermark ... our proposed watermark detector can be trained effectively. More importantly, our proposed method can detect watermarks in images effectively and efficiently under unknown condition such as the unknown watermarks in images” Cheng discloses a separate watermark detection module in which the watermark detector is configured as a machine learning model performing a two-class classification task, which can be trained to detect a watermark given an image input.)
Regarding claim 10, which depends on claim 1, the rejection of claim 1 is incorporated.
Cheng teaches the limitation “The electronic device of claim 1, wherein the processor is further configured to update one or more other portions of the file based at least in part on the update to the portion of the file.” (Section 3-3.3 Figure 1 “Each watermarked patch x is fed into the watermark removal network to obtain the estimated watermark free patch y. Then the L1 loss and perceptual loss are calculated based on the ground truth and the estimated patches ... We leverage the U-net architecture for transferring visible watermarked patch into the watermark free one ... pixels inside the detected area are expected to be recovered to unmarked condition, while those in unmarked area in the watermarked image will remain unchanged” Cheng discloses removal of the watermark as part of the update operation, wherein, according to Figure 1, the image is updated by placing watermark-free patches into the corresponding previously watermarked patch areas after the watermark is removed, such that the image no longer contains the watermark while still keeping the original features of the image.)
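For illustration only, the patch-replacement update cited above (pixels inside the detected area are recovered, pixels outside remain unchanged) can be sketched as follows; the flat-list "image", the region tuple, and the function name are hypothetical stand-ins for Cheng's actual pipeline:

```python
# Hypothetical sketch of the patch-replacement update: the detected
# region is replaced with its recovered (watermark-free) patch while
# everything outside the region is left untouched.

def apply_patch(image, region, recovered):
    """Replace image[start:end] with the recovered patch; the patch
    length must match the detected region's length."""
    start, end = region
    assert end - start == len(recovered)
    return image[:start] + recovered + image[end:]
```

The sketch captures only the update's locality: the change to one portion of the file determines the content written into that region without modifying the remainder.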
Regarding claim 11, the applicant is directed to the rejection of claim 1 above; the claim recites similar limitations and processing steps and is therefore rejected under the same rationale.
Regarding claim 12, which depends on claim 11, the rejection of claim 11 is incorporated. The applicant is directed to the rejection of claim 2 above; the claim recites similar limitations and processing steps and is therefore rejected under the same rationale.
Regarding claim 14, which depends on claim 12, the rejection of claim 12 is incorporated. The applicant is directed to the rejection of claim 4 above; the claim recites similar limitations and processing steps and is therefore rejected under the same rationale.
Regarding claim 15, which depends on claim 11, the rejection of claim 11 is incorporated. The applicant is directed to the rejection of claim 5 above; the claim recites similar limitations and processing steps and is therefore rejected under the same rationale.
Regarding claim 16, which depends on claim 11, the rejection of claim 11 is incorporated. The applicant is directed to the rejection of claim 6 above; the claim recites similar limitations and processing steps and is therefore rejected under the same rationale.
Regarding claim 18, which depends on claim 11, the rejection of claim 11 is incorporated.
Cheng teaches the limitation “The method of claim 11, wherein the portion of the file includes: one or more document pages in the file; one or more video frames in the file; or one or more images in the file” (Section 3.1 “we contribute a new watermarked image dataset, containing 60000 watermarked images made of 80 watermarks, with 750 images per watermark.” Cheng discloses a dataset of watermarked images, such that the portions of the file being processed are images.)
Regarding claim 19, which depends on claim 11, the rejection of claim 11 is incorporated.
Cheng teaches the limitation “The method of claim 11, wherein the first neural network has been trained to determine if the watermark is present in the portion of the file” (Section 3-3.2 Fig.1 “To be more specific, our model takes as input a watermarked image and estimates the probabilities of all candidates with different scale and ratio at all location in the image classified as the area which is tightly covered by a watermark ... our proposed watermark detector can be trained effectively. More importantly, our proposed method can detect watermarks in images effectively and efficiently under unknown condition such as the unknown watermarks in images” Cheng discloses that the watermark detection module is a machine learning model trained to detect the watermark within an image.)
Regarding claim 20, which depends on claim 11, the rejection of claim 11 is incorporated.
Cheng teaches the limitation “after updating the portion of the file, updating one or more other portions of the file based at least in part on the update to the portion of the file without processing the one or more other portions of the file in the first neural network” (Section 3-3.3 Figure 1 “Each watermarked patch x is fed into the watermark removal network to obtain the estimated watermark free patch y. Then the L1 loss and perceptual loss are calculated based on the ground truth and the estimated patches ... We leverage the U-net architecture for transferring visible watermarked patch into the watermark free one ... pixels inside the detected area are expected to be recovered to unmarked condition, while those in unmarked area in the watermarked image will remain unchanged” Cheng discloses removal of the watermark as part of the update operation, wherein, according to Figure 1, the image is updated by placing watermark-free patches into the corresponding previously watermarked patch areas after the watermark is removed, such that the image no longer contains the watermark while still keeping the original features of the image.)
Claims 3, 7, 13, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. (NPL: "Large-Scale Visible Watermark Detection and Removal with Deep Convolutional Networks") in view of Revital et al. (US 2021/0067842 A1), and further in view of Xuehua et al. (NPL: "Digital Watermarking and Its Application in Image Copyright Protection").
Regarding claim 3, which depends on claim 2, the rejection of claim 2 is incorporated.
The combination of Cheng and Revital does not teach “The electronic device of claim 2, wherein the processor is further configured to check one or more security settings to ensure that the watermark is permitted to be removed from the portion of the file before removing the watermark”. However, Xuehua teaches this limitation (Page 2 section II B-4 “In general, the characteristics of digital watermarking are as follows. ... 4) Security Watermark information owns the unique correct sign to identify, only the authorized users can legally detect, extract and even modify the watermark, and thus be able to achieve the purpose of copyright protection”, and Page 2 section III A “Watermark detection and extraction module is used to determine whether the data contains specified watermark or the watermark can be extracted.” Xuehua discloses that digital watermarking technology can be applied to image copyright protection in view of the importance of copyright protection for digital images. Xuehua further discloses characteristics that digital watermarking should possess at embedding, including security, whereby the watermark information carries a unique identifying sign such that only authorized users can legally detect, extract, or modify the watermark, which suggests one or more security settings to ensure that the watermark is permitted to be removed, as claimed.)
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teaching of watermark detection and removal with deep convolutional networks by Cheng, and the teaching of systems and methods for providing watermarked content using a trainable generator neural network by Revital, with the teaching of image copyright protection via digital watermarking by Xuehua. The motivation to do so is found in Xuehua’s disclosure (page 1 section I “It can embed copyright information into the multimedia data through certain algorithms, the information may be author's serial number, company logo, images or text with special significance, and so on. Their function is served as copyright protection, secret communication, authenticity distinguish of data file, etc.”, and page 4 section VI “Digital watermarking technology can provide a new way to protect the copyright of multimedia information and to ensure the safe use of multimedia information”). Xuehua discloses a watermarking system that determines the presence of a watermark, and teaches that an embedded watermark may possess various characteristics, such as robustness, non-perceptibility, verifiability, and security, to protect and ensure the safe use of copyrighted multimedia. Therefore, it would have been obvious to a person of ordinary skill in the art to modify the watermarking system of Cheng in view of Revital to ensure that the generated watermark includes security information in order to protect and ensure the safe use of copyrighted multimedia.
Regarding claim 7, which depends on claim 6, the rejection of claim 6 is incorporated.
Xuehua teaches the limitation “The electronic device of claim 6, wherein the watermark includes information configured for one or more accessing entities that will subsequently have access to the file” (Page 2 section III A “The process of digital watermarking embeds the special information which stands for the particular identity of the owner of the copyright by some sort of algorithm to multimedia data. We can extract the watermark, verify the ownership of the copyright and ensure the legitimate rights of the copyright owners though the appropriate algorithms.” Xuehua discloses that generating a watermark includes embedding special information which stands for the particular identity of the copyright owner, which is analogous to information configured for one or more accessing entities that will subsequently have access to the file.)
Regarding claim 13, which depends on claim 12, the rejection of claim 12 is incorporated. The applicant is directed to the rejection of claim 3 above; the claim recites similar limitations and processing steps and is therefore rejected under the same rationale.
Regarding claim 17, which depends on claim 11, the rejection of claim 11 is incorporated. The applicant is directed to the rejection of claim 7 above; the claim recites similar limitations and processing steps and is therefore rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUY TU DIEP whose telephone number is (703)756-1738. The examiner can normally be reached M-F 8-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DUY T DIEP/Examiner, Art Unit 2123
/ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123