DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the application filed on 3/06/2023. Claims 1-13 are pending and have been examined.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/06/2023 is in compliance with the provisions of 37 CFR 1.97, 1.98, and MPEP § 609. It has been placed in the application file, and the information referred to therein has been considered as to the merits.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7 and 8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding Claim 7: Claim 7 recites the limitation “said learning base” in the last line. There is insufficient antecedent basis for this limitation in the claim. Claim 7 also recites the limitation “said reference datum” in the third line. There is insufficient antecedent basis for this limitation in the claim.

Regarding Claim 8: Claim 8 is rejected as being dependent on a rejected base claim without curing any of the deficiencies.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7 and 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over Chabanne et al., “A Protection against the Extraction of Neural Network Models”, from applicant's IDS, hereinafter “Chabanne”, in view of Pintelas et al., “A Convolutional Autoencoder Topology for Classification in High-Dimensional Noisy Image Datasets”, hereinafter “Pintelas”.

Regarding Claim 1, Chabanne teaches:

A method for the secure use of a first neural network on an input datum (the first neural network is the model to protect, p. 6, paragraph 11, “proposal is based on adding parasitic layers to the model we want to protect”; p. 9, Figure 3, showing use of a neural network with input and added CNN), wherein the method comprises implementing with a data processor of a terminal the following steps:

(a) constructing a second neural network corresponding to the first neural network, into which is inserted, at the input of a target layer of the first neural network, at least one … neural network trained to add a parasitic noise to its input (the second neural network is constructed when the parasitic CNNs are inserted, p. 8, paragraph 3, “We propose to add dummy hyperplanes through the insertion, between two layers of the model to protect, of parasitic CNNs approximating an identity where a centered Gaussian noise has been added”);

(b) using the second neural network on said input datum (p. 16, paragraph 4, “We test the two original models considered with the added parasitic CNNs”).

Chabanne does not expressly teach: … auto-encoder neural network.

However, Pintelas teaches: … auto-encoder neural network (Pintelas, p. 1, Abstract, “a convolutional autoencoder topological model for compressing and filtering out noise and redundant information from initial high dimensionality input … feeding this compressed output into convolutional neural network”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the convolutional autoencoders of Pintelas, instead of Chabanne's CNNs, for approximating a noisy identity mapping and being inserted into the victim model. An autoencoder is a specific type of neural network, so it would have been obvious to design the parasitic CNNs of Chabanne as the autoencoder of Pintelas, because the autoencoders would preserve the structure when adding noise to the input as the CNNs do (Pintelas, p. 1, Abstract, “Autoencoders constitute an unsupervised dimensionality reduction technique, proven to filter out noise and redundant information and create robust and stable feature representations”).
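Examiner's note (illustrative only): the following minimal sketch, which is not part of the record, shows one way the step (a)/(b) arrangement mapped above could be realized in PyTorch, with a small convolutional autoencoder (per Pintelas) standing in for Chabanne's parasitic CNN. All names (ParasiticAutoencoder, ProtectedModel, target_index) and the specific architecture are hypothetical assumptions, not reconstructions of either reference.

```python
# Hypothetical sketch only; PyTorch assumed. Not taken from the references.
import torch
import torch.nn as nn

class ParasiticAutoencoder(nn.Module):
    """Small convolutional autoencoder trained offline to approximate an
    identity mapping to which a centered Gaussian noise has been added."""
    def __init__(self, channels: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 8, kernel_size=3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(8, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))  # approximately x + noise

class ProtectedModel(nn.Module):
    """Second neural network: the first network with the parasitic
    autoencoder inserted at the input of a target layer (step (a))."""
    def __init__(self, first_network: nn.Sequential, target_index: int,
                 parasite: nn.Module):
        super().__init__()
        self.layers = first_network
        self.target_index = target_index
        self.parasite = parasite

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            if i == self.target_index:
                x = self.parasite(x)  # parasitic noise at the target layer's input
            x = layer(x)
        return x
```

Step (b) would then amount to evaluating the constructed second network on the input datum, e.g. ProtectedModel(net, k, ae)(x) for a hypothetical target layer index k.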
Regarding Claim 2, Chabanne in view of Pintelas teaches the method of Claim 1 as referenced above. In the combination as set forth above, Chabanne in view of Pintelas further teaches:

wherein said parasitic noise added to its input by the auto-encoder is based on said input (Chabanne, p. 13, paragraph 6, “the solution C∗ provided by the CNN”; p. 14, paragraph 1, “N∗ is a noise close to N but depends on the input”).
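Examiner's note (illustrative only): continuing the hypothetical sketch above, the residual that such an autoencoder adds varies from input to input, which is the sense in which Chabanne's N∗ “depends on the input”; the shapes and names below are assumptions.

```python
# Hypothetical sketch only: the realized parasitic noise N*(x) = ae(x) - x
# is a function of the input x, unlike a fixed additive noise N.
ae = ParasiticAutoencoder(channels=1)
x1, x2 = torch.randn(1, 1, 16, 16), torch.randn(1, 1, 16, 16)
with torch.no_grad():
    n1 = ae(x1) - x1  # noise realized for input x1
    n2 = ae(x2) - x2  # noise realized for input x2; differs from n1 in general
```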
Regarding Claim 3, Chabanne in view of Pintelas teaches the method of Claim 1 as referenced above. In the combination as set forth above, Chabanne in view of Pintelas further teaches:

wherein said target layer is within the first neural network (Chabanne, p. 8, paragraph 3, “We propose to add dummy hyperplanes through the insertion, between two layers of the model to protect, of parasitic CNNs”).

Regarding Claim 4, Chabanne in view of Pintelas teaches the method of Claim 1 as referenced above. In the combination as set forth above, Chabanne in view of Pintelas further teaches:

wherein step (a) comprises selecting said target layer of the first neural network from among the layers of said first neural network (Chabanne, p. 16, paragraph 4, “the parasitic CNN is added to the first 16×16 neurons of the second convolutional layer”).

Regarding Claim 5, Chabanne in view of Pintelas teaches the method of Claim 1 as referenced above. In the combination as set forth above, Chabanne in view of Pintelas further teaches:

comprising a preliminary step (a0) of obtaining the parameters of said auto-encoder and of the first neural network (parameters are obtained when training the CNN, which is mapped to the auto-encoder; the victim network is the first neural network of LeNet architecture, whose parameters are obtained when it is selected, Chabanne, p. 13, paragraph 4, “several CNNs approximating the identity are trained independently from the victim network, and the victim can then select one or several CNNs adapted to the network at hand”; p. 15, paragraph 2, “We denote V M the victim LeNet architecture”).

Regarding Claim 6, Chabanne in view of Pintelas teaches the method of Claim 5 as referenced above. In the combination as set forth above, Chabanne in view of Pintelas further teaches:

wherein, for a learning base of pairs of a reference datum and a noisy version of the reference datum equal to the sum of the reference datum and a possible parasitic noise, the auto-encoder is trained to predict said noisy version of a reference datum from the corresponding reference datum (the pairs are xi (reference datum) and the result of xi + N (noisy version of the reference datum), Chabanne, p. 15, paragraph 1, “For a given training, we fix N a Gaussian noise, and we set the labels to be {xi + N}”).

Regarding Claim 7, Chabanne in view of Pintelas teaches the method of Claim 2 as referenced above. In the combination as set forth above, Chabanne further teaches:

wherein step (a0) comprises, for each of a plurality of reference data, computing the possible parasitic noise for said reference datum on the basis of the reference datum, so as to form said learning base (Chabanne, p. 15, paragraph 1, “For a given training, we fix N a Gaussian noise, and we set the labels to be {xi + N}”).
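Examiner's note (illustrative only): the learning base and training objective mapped for Claims 6 and 7 (labels fixed to xi + N, per Chabanne, p. 15) could be exercised as in the hypothetical PyTorch sketch below; the dataset, loop structure, and hyperparameters are assumptions, not taken from the references.

```python
# Hypothetical sketch only: train the auto-encoder to predict the noisy
# version xi + N of each reference datum xi.
import torch

ae = ParasiticAutoencoder(channels=1)          # module from the earlier sketch
optimizer = torch.optim.Adam(ae.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

reference_data = [torch.randn(1, 1, 16, 16) for _ in range(100)]
N = 0.05 * torch.randn(1, 1, 16, 16)           # fixed centered Gaussian noise

# Learning base of pairs (reference datum, noisy version = datum + noise).
learning_base = [(x, x + N) for x in reference_data]

for epoch in range(10):
    for x, noisy_x in learning_base:
        optimizer.zero_grad()
        loss = loss_fn(ae(x), noisy_x)         # predict the noisy version
        loss.backward()
        optimizer.step()
```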
Regarding Claim 9, Chabanne in view of Pintelas teaches the method of Claim 5 as referenced above. In the combination as set forth above, Chabanne in view of Pintelas further teaches:

wherein step (a0) comprises obtaining the parameters of a set of auto-encoder neural networks trained to add a parasitic noise to their input, step (a) comprising selecting, from said set, at least one auto-encoder to be inserted (parameters are obtained after training, Chabanne, p. 13, paragraph 4, “several CNNs approximating the identity are trained independently from the victim network, and the victim can then select one or several CNNs adapted to the network at hand”; p. 8, paragraph 3, “insertion … of parasitic CNNs approximating an identity”).

Regarding Claim 10, Chabanne in view of Pintelas teaches the method of Claim 9 as referenced above. In the combination as set forth above, Chabanne in view of Pintelas further teaches:

wherein step (a) furthermore comprises selecting, beforehand, a number of auto-encoders of said set to be selected (Chabanne, p. 13, paragraph 4, “several CNNs approximating the identity are trained independently from the victim network, and the victim can then select one or several CNNs adapted to the network at hand”).

Regarding Claim 11, Chabanne in view of Pintelas teaches the method of Claim 5 as referenced above. In the combination as set forth above, Chabanne further teaches:

wherein step (a0) is a step implemented by a data processing device of a learning server (the method of Chabanne trains/tests models and adjusts the model architecture, demonstrating that Chabanne performs the method on a computer, in which processor, memory, and storage devices are inherent, Chabanne, p. 1, col. 2, ¶3, “We test the two original models considered with the added parasitic CNNs, without a bias β or with the constraint that ||β||2 < 0.05. In Table 1, the parasitic CNN is added to the first 16×16 neurons of the second convolutional layer”; p. 17, Table 1, showing model performance).

Regarding Claim 12, Chabanne in view of Pintelas teaches the method of Claim 1 as referenced above. In the combination as set forth above, Chabanne further teaches:

A computer program product comprising code instructions for executing a method according to Claim 1, for the secure use of a first neural network on an input datum (the first neural network is the model to protect, Chabanne, p. 6, paragraph 11, “proposal is based on adding parasitic layers to the model we want to protect”; p. 9, Figure 3, showing use of a neural network with input and added CNN), when said program is executed by a computer (the method of Chabanne trains/tests models and adjusts the model architecture, demonstrating that Chabanne performs the method on a computer, in which processor, memory, and storage devices are inherent, Chabanne, p. 1, col. 2, ¶3, “We test the two original models considered with the added parasitic CNNs, without a bias β or with the constraint that ||β||2 < 0.05. In Table 1, the parasitic CNN is added to the first 16×16 neurons of the second convolutional layer”; p. 17, Table 1, showing model performance).

Regarding Claim 13, Chabanne in view of Pintelas teaches the method of Claim 1 as referenced above. In the combination as set forth above, Chabanne further teaches:

A storage device able to be read by a computer equipment on which a computer program product comprises code instructions for executing a method according to Claim 1 (the method of Chabanne trains/tests models and adjusts the model architecture, demonstrating that Chabanne performs the method on a computer, in which processor, memory, and storage devices are inherent, Chabanne, p. 1, col. 2, ¶3, “We test the two original models considered with the added parasitic CNNs, without a bias β or with the constraint that ||β||2 < 0.05. In Table 1, the parasitic CNN is added to the first 16×16 neurons of the second convolutional layer”; p. 17, Table 1, showing model performance), for the secure use of a first neural network on an input datum (the first neural network is the model to protect, Chabanne, p. 6, paragraph 11, “proposal is based on adding parasitic layers to the model we want to protect”; p. 9, Figure 3, showing use of a neural network with input and added CNN).

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Chabanne, in view of Pintelas, further in view of Bellet, “Part 5: Hashing with SHA-256”, hereinafter “Bellet”.

Regarding Claim 8, Chabanne in view of Pintelas teaches the method of Claim 7 as referenced above. Chabanne in view of Pintelas does not teach, but Bellet teaches:

wherein said possible parasitic noise for the reference datum is determined entirely by a cryptographic hash of said reference datum for a given hash function (Bellet, p. 1, paragraph 1, “Hash functions transform arbitrary large bit strings called messages, into small, fixed-length bit strings called message digests”; p. 1, title subheading, “An overview of SHA-256, a standard secure hash function”; p. 2, “Step by step hashing with SHA-256”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the cryptographic hash function SHA-256, as does Bellet, for generating the noise in Chabanne. The motivation to do so would be to add noise that is unique for each sample and that is secure (Bellet, p. 1, paragraph 1, “Digests are in that sense fingerprints: a function of the message, simple, yet complex enough that they allow identification of their message, with a very low probability that different messages will share the same digests”; p. 2, paragraph 1, “hashing as part of the encryption/decryption journey”).
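Examiner's note (illustrative only): the Claim 8 limitation in the proposed combination could be realized as in the hypothetical sketch below, in which the parasitic noise is a pure function of the SHA-256 digest of the reference datum; the seeding scheme and all names are assumptions and do not appear in Bellet or Chabanne.

```python
# Hypothetical sketch only: parasitic noise determined entirely by a
# cryptographic hash (SHA-256) of the reference datum.
import hashlib
import torch

def hash_determined_noise(reference_datum: torch.Tensor, scale: float = 0.05):
    """Derive a reproducible Gaussian noise whose realization is fully
    determined by SHA-256(reference_datum): same datum, same noise."""
    digest = hashlib.sha256(reference_datum.numpy().tobytes()).digest()
    seed = int.from_bytes(digest[:8], "big")   # 64-bit seed from the digest
    gen = torch.Generator().manual_seed(seed)
    return scale * torch.randn(reference_datum.shape, generator=gen)

x = torch.randn(1, 1, 16, 16)
assert torch.equal(hash_determined_noise(x), hash_determined_noise(x))
```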
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSE CHEN COULSON, whose telephone number is (571) 272-4716. The examiner can normally be reached Monday-Friday, 8:30-5:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JESSE C COULSON/
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122