Prosecution Insights
Last updated: April 19, 2026
Application No. 17/999,155

METHOD FOR SECURE USE OF A FIRST NEURAL NETWORK ON AN INPUT DATUM AND METHOD FOR LEARNING PARAMETERS OF A SECOND NEURAL NETWORK

Final Rejection — §102, §103, §DP
Filed: Nov 17, 2022
Examiner: ABEL JALIL, NEVEEN
Art Unit: 2152
Tech Center: 2100 — Computer Architecture & Software
Assignee: Idemia Identity & Security France
OA Round: 2 (Final)
Grant Probability: 27% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
With Interview: 31%

Examiner Intelligence

Career Allow Rate: 27% (18 granted / 67 resolved; -28.1% vs TC avg)
Interview Lift: +4.1% across resolved cases with interview (minimal)
Avg Prosecution: 4y 0m (typical timeline)
Currently Pending: 10
Total Applications: 77 (across all art units)

Statute-Specific Performance

§101: 16.3% allow rate (-23.7% vs TC avg)
§103: 46.6% allow rate (+6.6% vs TC avg)
§102: 19.2% allow rate (-20.8% vs TC avg)
§112: 9.0% allow rate (-31.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 67 resolved cases.

Office Action

§102 §103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Applicant's amendment received on 10/31/25 has been received and entered; claims 1-15 and 17-20 are pending.

Response to Arguments

Applicant's arguments with respect to claims 1-15 and 17-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Amendment Support Request

The claims have been amended to recite "emulating". The term is not found in the specification, nor any equivalent thereof. It is requested that the supporting section of the specification be identified in the response to this action.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claim 1 is provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending Application No. 18/178,576. Although the claims at issue are not identical, they are not patentably distinct from each other because:

Claim 1 of the instant application recites: constructing a second neural network emulating the first neural network, the second neural network being constructed by inserting a convolutional neural network approximating an identity function into the first neural network at an input of a target layer within the first neural network; and using the second neural network that emulates the first neural network on said input datum.

Claim 1 of '576 recites the following, similar subject matter: (a) constructing a second neural network corresponding to the first neural network, into which is inserted, at the input of a target layer of the first neural network, at least one auto-encoder neural network trained to add a parasitic noise to its input; (b) using the second neural network on said input datum.

This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented. The remaining claims (2-15 and 17-20) of the instant application follow the same rationale and are therefore also rejected.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 6-10, 14-15, and 17-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Weinberger (US Patent No. 11227187 B1).

As to claim 1, Weinberger discloses A method for secure application of a first neural network on an input datum, the method comprising, by data processing circuitry of a terminal: constructing a second neural network emulating the first neural network, the second neural network being constructed by inserting a convolutional neural network approximating an identity function into the first neural network at an input of a target layer within the first neural network (col. 35, lines 7-65); and using the second neural network that emulates the first neural network on said input datum (col. 35, lines 7-65).

As to claim 2, Weinberger discloses The method as claimed in claim 1, wherein said convolutional neural network has an output size smaller than an input size of said target layer to approximate only certain input channels of this target layer (col. 23, lines 50-61; size can be selectable).

As to claim 3, Weinberger discloses The method as claimed in claim 1, wherein the constructing further comprises selecting said target layer of the first neural network from among the layers of said first neural network (col. 24, lines 40-32).

As to claim 4, Weinberger discloses The method as claimed in claim 1, wherein the constructing further comprises selecting input channels of said target layer to be approximated from among all of the input channels of the target layer (col. 24, lines 40-32).
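The construction recited in claim 1 (inserting a module that approximates the identity function at the input of a target layer, so that a second network emulates the first) can be pictured with a minimal sketch. This is purely illustrative, using a toy dense-layer network in plain NumPy; none of the names, shapes, or the `near_identity` stand-in come from the application or from Weinberger.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def make_first_network(rng):
    # Toy "first neural network": two dense layers as a list of functions.
    w1 = rng.standard_normal((4, 4))
    w2 = rng.standard_normal((4, 4))
    return [lambda x: relu(x @ w1), lambda x: x @ w2]

def near_identity(x, eps=1e-3):
    # Stand-in for a small network trained to approximate the identity
    # function; here a slight perturbation keeps the idea visible.
    return x * (1.0 + eps)

def insert_at_target(layers, target_index, module):
    # "Second neural network": identical to the first, with the
    # identity-approximating module placed at the target layer's input.
    return layers[:target_index] + [module] + layers[target_index:]

def forward(layers, x):
    for layer in layers:
        x = layer(x)
    return x

rng = np.random.default_rng(0)
first = make_first_network(rng)
second = insert_at_target(first, target_index=1, module=near_identity)

x = rng.standard_normal(4)
y1, y2 = forward(first, x), forward(second, x)
# The second network emulates the first: outputs agree up to the
# approximation error of the inserted module.
print(np.allclose(y1, y2, rtol=1e-2))
```

The point of the sketch is structural: the inserted module changes the computation graph (and thus side-channel observables) while leaving the end-to-end function approximately unchanged.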
As to claim 6, Weinberger discloses The method as claimed in claim 1, further comprising obtaining parameters of the first neural network and of the convolutional neural network approximating the identity function (col. 8, lines 1-19).

As to claim 7, Weinberger discloses The method as claimed in claim 6, wherein the obtaining further comprises obtaining parameters of a set of convolutional neural networks approximating the identity function, and the constructing further comprises selecting, from said set, at least one convolutional neural network approximating the identity function to be used in the second neural network (col. 36, lines 5-25).

As to claim 8, Weinberger discloses The method as claimed in claim 7, wherein the constructing further comprises, for each selected convolutional neural network approximating the identity function, selecting said target layer of the first neural network from among the layers of said first neural network and/or selecting the input channels of said target layer to be approximated from among all of the input channels of the target layer (col. 8, lines 1-25).

As to claim 9, Weinberger discloses The method as claimed in claim 7, wherein the constructing further comprises selecting, beforehand, a number of convolutional neural networks approximating the identity function of said set to be selected (col. 24, lines 40-32).

As to claim 10, Weinberger discloses The method as claimed in claim 6, wherein the obtaining further comprises, implemented by data processing circuitry of a server, learning the parameters of the first neural network and of the convolutional neural network approximating the identity function from at least one learning database (col. 24, lines 40-32).
As to claim 14, Weinberger discloses A method for learning parameters of a second neural network, the method comprising, by data processing circuitry of a server: constructing the second neural network emulating a first neural network, the second neural network being constructed by inserting a convolutional neural network approximating an identity function into the first neural network at an input of a target layer within the first neural network; and learning the parameters of the second neural network that emulates the first neural network from a learning database.

As to claim 15, Weinberger discloses A method for secure application of a first neural network on an input datum, the method comprising: learning parameters of the second neural network as claimed in claim 14; and using, by data processing circuitry of a terminal, the second neural network on said input datum (col. 36, lines 5-25).

As to claim 17, Weinberger discloses A non-transitory computer-readable storage medium readable by a computer on which are stored code instructions for executing a method as claimed in claim 1. Note: claim 17 is a dependent claim and follows claim 1's rejection.

As to claim 18, Weinberger discloses The method as claimed in claim 2, wherein the constructing further comprises selecting said target layer of the first neural network from among the layers of said first neural network (col. 24, lines 40-55).

As to claim 19, Weinberger discloses The method as claimed in claim 2, wherein the constructing further comprises selecting input channels of said target layer to be approximated from among all of the input channels of the target layer (col. 8, lines 1-24).

As to claim 20, Weinberger discloses The method as claimed in claim 3, wherein the constructing further comprises selecting input channels of said target layer to be approximated from among all of the input channels of the target layer (col. 8, lines 1-24).

Claims 5 and 11-13 are rejected under 35 U.S.C.
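The learning step recited in claim 14 (training the identity-approximating module from a learning database) can be reduced to a toy gradient-descent sketch. This assumes a single scalar weight standing in for the convolutional network and is not drawn from the application; it only shows that minimizing a reconstruction loss over the database drives the inserted module toward the identity function.

```python
import numpy as np

# Hypothetical reduction of the claim-14 training step: learn w so that
# w * x approximates the identity on samples from a "learning database".
rng = np.random.default_rng(1)
database = rng.standard_normal(256)  # stand-in learning database

w, lr = 0.0, 0.1
for _ in range(200):
    # Gradient of the mean-squared reconstruction error (w*x - x)^2.
    grad = np.mean(2.0 * (w * database - database) * database)
    w -= lr * grad

print(round(w, 3))  # converges toward 1.0, i.e. the identity on this data
```

A real training loop would optimize many convolutional filter weights the same way, with the loss measuring how far the module's output is from its input.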
103 as being unpatentable over Weinberger (US Patent No. 11227187 B1) in view of Zuyev et al. (US Pub. No. 20180350066).

As to claim 5, Weinberger does not disclose The method as claimed in claim 1, wherein the convolutional neural network approximating the identity function has an output size equal to a product of two integers. Zuyev et al. teaches wherein the convolutional neural network approximating the identity function has an output size equal to a product of two integers (Zuyev et al., paragraph 0045; the filter size is a design choice and can be any integer). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify Weinberger's CNN system to include a specific filter size of two integers, because it specifies the exact result sought and limits the outcome.

As to claim 11, Weinberger does not disclose The method as claimed in claim 1, wherein the first neural network and the convolutional neural network approximating the identity function further includes an alternation of linear layers and of non-linear layers with an activation function such as a ReLU function. Zuyev et al. teaches wherein the first neural network and the convolutional neural network approximating the identity function further includes an alternation of linear layers and of non-linear layers with an activation function such as a ReLU function (paragraph 0025). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references and modify Weinberger's CNN system to include ReLU to provide complex pattern learning in the neural network.

As to claim 12, Weinberger as modified discloses The method as claimed in claim 11, wherein said target layer is a linear layer (Zuyev et al., paragraph 0028).
As to claim 13, Weinberger as modified discloses The method as claimed in claim 11, wherein the convolutional neural network approximating the identity function includes two or three linear layers, which are filter convolutional layers, of a size 5x5 (Zuyev et al., paragraph 0045; the filter size is a design choice and can be any integer).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form PTO-892 for the list of cited relevant prior art. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Neveen Abel-Jalil, whose telephone number is (571) 270-0474. The examiner can normally be reached M-F, 9-5:30 EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neveen Abel-Jalil, can be reached at 571-270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NEVEEN ABEL JALIL/
Supervisory Patent Examiner, Art Unit 2152
January 14, 2026

Prosecution Timeline

Nov 17, 2022
Application Filed
Aug 03, 2025
Non-Final Rejection — §102, §103, §DP
Aug 26, 2025
Interview Requested
Sep 09, 2025
Applicant Interview (Telephonic)
Sep 29, 2025
Examiner Interview Summary
Oct 31, 2025
Response Filed
Jan 14, 2026
Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12541504 — DATA PROCESSING METHOD, SYSTEM, AND APPARATUS, DEVICE, AND STORAGE MEDIUM — Granted Feb 03, 2026 (2y 5m to grant)
Patent 12450521 — LEARNING DEVICE, LEARNING METHOD, AND STORAGE MEDIUM FOR LEARNING DEVICE — Granted Oct 21, 2025 (2y 5m to grant)
Patent 12417252 — CONCEPT MAP TRANSLATION FOR URL PROFILES — Granted Sep 16, 2025 (2y 5m to grant)
Patent 12405936 — Message Processing Method and Electronic Device — Granted Sep 02, 2025 (2y 5m to grant)
Patent 11455299 — PROVIDING CONTENT IN RESPONSE TO USER ACTIONS — Granted Sep 27, 2022 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 27%
With Interview: 31% (+4.1%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 67 resolved cases by this examiner. Grant probability derived from career allow rate.
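The headline figures are internally consistent, assuming the interview lift is simply added to the career allow rate (an assumption about this dashboard's method, not something it states):

```python
# Sanity-check the dashboard figures. The additive interview lift is an
# assumption for illustration, not the tool's documented methodology.
granted, resolved = 18, 67
allow_rate = granted / resolved              # ~0.269, the 27% shown
with_interview = allow_rate + 0.041          # adding the +4.1% lift

print(round(allow_rate * 100))      # 27
print(round(with_interview * 100))  # 31
```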
