Prosecution Insights
Last updated: April 19, 2026
Application No. 18/489,315

FEDERATED LEARNING METHOD, FIRST DEVICE, AND THIRD DEVICE

Status: Non-Final OA (§101, §103)
Filed: Oct 18, 2023
Examiner: KHAN, MOEEN
Art Unit: 2436
Tech Center: 2400 (Computer Networks)
Assignee: Guangdong OPPO Mobile Telecommunications Corp., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 69% (above average; 158 granted / 228 resolved; +11.3% vs TC avg)
Interview Lift: +59.7% across resolved cases with an interview
Typical Timeline: 2y 11m average prosecution; 33 applications currently pending
Career History: 261 total applications across all art units
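The headline grant probability appears to be the examiner's career allow rate, i.e. the simple ratio of granted to resolved cases reported above. A quick sanity check (counts taken from the dashboard figures; the rounding to a whole percent is assumed):

```python
# Dashboard figures: 158 granted out of 228 resolved cases
granted, resolved = 158, 228
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 69.3%, shown as 69% in the UI
```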

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 62.1% (+22.1% vs TC avg)
§102: 6.9% (-33.1% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center averages are estimates, based on career data from 228 resolved cases.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/1/2025 has been entered. Claims 1-10, 13-19, and 21-22 are pending and being considered. Claims 1, 13, and 16 have been amended. Claims 11-12 have been cancelled.

Response to 101

In response to applicant's arguments on page 10 of the remarks that the currently amended claims are not directed to an abstract idea: the applicant argues that the limitations "wherein the first device comprises at least one of: a second terminal device, at least one network element of a second core network, or a second server; and the second device comprises at least one of: a third terminal device, at least one network element of a third core network, or a third server" overcome the outstanding 101 rejection of the claims. The examiner acknowledges applicant's point of view but respectfully disagrees. The first and second devices comprising terminal devices in the claims are generic devices and are merely used as tools to transmit and receive information. The steps recited in the claims, under the broadest reasonable interpretation, cover performance of the limitations in the mind, either mentally or physically; nothing in the claims precludes the steps from practically being performed in the mind or using paper and pencil. For details, see the 101 rejections below.

Response to 102/103

Applicant's arguments filed on 11/07/2025 have been fully considered and are not persuasive.
In response to applicant's argument in the second-to-last paragraph of page 12 of the remarks: the applicant argues that NORRMAN (i.e., the cited prior art) fails to teach the amended limitation that the second device obtains the inference information of the second model according to the input information of the inference task and encrypts the inference information of the second model to obtain the first encrypted inference information. The applicant further argues that NORRMAN fails to disclose that the network entity encrypts the combined mask. The examiner respectfully disagrees, because NORRMAN explicitly teaches on [page 14 line 10-20] that the network entities 304-310 may encrypt the masks when sending them via the NWDAF 302. The applicant further argues that the operation on the combined mask in NORRMAN is different from the operation on the inference information of the second model, and that the combined mask in NORRMAN is therefore not equivalent to the inference information of the second model. The examiner respectfully disagrees, because the only operation performed on the inference information is encrypting the inference information with the first key, which is explicitly disclosed by NORRMAN on [page 15 line 1-10], teaching that the NF B 306 sends its public key to NF A 304, which then encrypts the mask using the public key. The rest of applicant's arguments are moot in view of the new grounds of rejection; the arguments do not apply to the current art being used.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claims 1 and 13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims recite: sending a first key, wherein the first key is used to encrypt inference information; obtaining target information based on inference information and second encrypted inference information; and obtaining second encrypted information based on first encrypted information. These limitations, as drafted, describe a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind, either mentally or physically; nothing in the claims precludes the steps from practically being performed in the mind or using paper and pencil. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application, because the claims recite additional elements such as a first device and a second device comprising a terminal device, a processor, and memory. These elements are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, these additional elements do not integrate the abstract idea into a practical application, because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of devices performing the sending and obtaining steps amounts to no more than mere instructions to apply the exception using a generic computer component; see the specification of the instant application at [0231-02334]. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible. The elements further recited in dependent claims 2-10, 14-19, 21, and 22, taken individually, do not amount to "significantly more" than the abstract idea identified above. Therefore, the claims do not amount to significantly more than the previously defined abstract idea. Some of the evidence of "significantly more" includes: (a) improvement to another technology or field; (b) applying the judicial exception with or by a "particular machine"; (c) transforming a particular article or data into a different state or thing; (d) adding unconventional or non-routine steps, producing a useful application; and (e) other meaningful limitations beyond a generic link to a particular technological environment. As a result, the claims are directed to non-statutory subject matter. See also Alice, 134 S. Ct. at 2360. Under Alice, that is not sufficient "to transform an abstract idea into a patent-eligible invention." See Alice Corporation v. CLS Bank International (S. Ct. 2014) and Ultramercial, Inc. v. Hulu, LLC (Fed.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-10, 13-19, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over NORRMAN et al. (hereinafter NORRMAN) (WO2021032495) (provided in IDS) in view of FAN et al. (hereinafter FAN) (US 20210232974).

Regarding claim 1, NORRMAN teaches a federated learning method in a multi-party interaction process, comprising: (NORRMAN on [page 3 line 10-15] teaches a federated learning method); sending, by a first device, a first key to a second device (NORRMAN on [page 15 line 1-10] teaches that the NF B 306 (i.e., first device) sends its public key to NF A 304 (i.e., second device)); wherein the first key is configured for the second device to encrypt inference information of a second model in the second device to obtain first encrypted inference information (NORRMAN on [page 15 line 1-10] teaches that the NF B 306 sends its public key to NF A 304 (i.e., second device), and the NF A 304 encrypts the seed using the public key; see also [page 14 line 10-20], teaching that the network entities 304-310 may generate masks and encrypt the masks (i.e., corresponding to the model) when sending them via the NWDAF 302. The encryption may re-use the certificates of the operator public key infrastructure (PKI).
That is, a network entity sending its mask to a receiving network entity uses the public key of the receiving network entity's certificate to encrypt the mask); and obtaining, by the first device, target information based on inference information of a first model in the first device and second encrypted inference information corresponding to the first encrypted inference information, in response to the first device receiving the second encrypted inference information from the second device (NORRMAN on [page 19 line 6-20] teaches that the first entity (i.e., NF B 306) receives an indication of one or more second masks from one or more second network entities or from the aggregator entity 302, wherein the indication is encrypted (i.e., second encrypted information). See also [page 11 line 10-25], teaching that each network entity shares its mask with one neighboring network entity, such that NF A 304 transmits an indication of its mask GTI.sub.A to NF B 306, NF B 306 transmits an indication of its mask GTI.sub.B to NF C 308, and so on. The indications of the masks may be the masks themselves, or a seed which can be expanded into the mask (typically the seed is much smaller than the mask). The indications of the masks may be transmitted directly between the network entities, or indirectly via an intermediate node (such as the NWDAF 302). Each network entity then combines its own mask with the inverse of the mask that it received from its neighboring entity (or vice versa). For example, the masks may be combined by addition. See [page 21 line 10-20], teaching: receive an indication of one or more respective second masks from only a subset of the remaining entities of the plurality of entities, the subset consisting of one or more second entities of the plurality of entities; combine the first mask and the respective second masks to generate a combined mask; apply the combined mask to the model update to generate a masked model update (i.e.
obtain target information)); wherein the first device comprises at least one of: a second terminal device, at least one network element of a second core network, or a second server; and the second device comprises at least one of: a third terminal device, at least one network element of a third core network, or a third server (NORRMAN on [page 5 line 35] teaches that the network entities may be implemented as core network elements). NORRMAN fails to explicitly teach wherein the second encrypted inference information is obtained by the second device based on the first encrypted inference information; however, FAN, from analogous art, teaches wherein the inference information of the second model is output information of the second model which is obtained by inputting input information of an inference task into the second model by the second device (FAN on [0060-0061] teaches that the second terminal receives first data as an input, calculates second data corresponding to the first data, acquires a first sample corresponding to the second data, and calculates a loss function based on the first sample, the first data, and the second data); wherein the second encrypted inference information is obtained by the second device based on the first encrypted inference information (FAN on [0089] teaches that the second terminal encrypts each calculation factor by a homomorphic encryption addition algorithm using the public key sent by the third terminal to obtain encrypted calculation factors (i.e., first encrypted information), adds the encrypted calculation factors to obtain an encrypted loss value (i.e., second encrypted information), and sends the encrypted loss value to the third terminal; i.e., the second encrypted information is obtained using the first encrypted information.
See also [0141-0142], teaching: in response to the encrypted second sample identifier being received, secondarily encrypting the second sample identifier with the first public key to obtain a second encrypted value, and detecting whether a first encrypted value sent by the second terminal is received. When the first terminal receives the encrypted second sample identifier sent by the second terminal, the first terminal uses its public key, that is, the first public key, to secondarily encrypt the encrypted second sample identifier, records the secondarily encrypted sample identifier as the second encrypted value, and detects whether the first encrypted value sent by the second terminal is received). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the teaching of FAN into the teaching of NORRMAN by generating second encrypted information based on first encrypted information and determining a loss value in the federated learning process. One would be motivated to do so in order to improve the efficiency of a system executing a task using federated machine learning (FAN [0003-0005]).
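The NORRMAN masking scheme characterized above (each entity combines its own mask with the inverse of the mask received from its ring neighbour, so the masks cancel when the aggregator sums the masked updates) can be sketched as follows. The entity count, scalar "model updates", and modulus are illustrative assumptions, not values from the reference:

```python
import random

MOD = 2**32  # illustrative modulus for toy scalar model updates

def mask_updates(updates, mod=MOD):
    """Ring masking: each entity adds its own random mask and subtracts
    the mask shared by the previous entity in the ring, so all masks
    cancel when the aggregator sums the masked updates."""
    n = len(updates)
    masks = [random.randrange(mod) for _ in range(n)]
    masked = []
    for i, u in enumerate(updates):
        received = masks[(i - 1) % n]  # mask received from the ring neighbour
        masked.append((u + masks[i] - received) % mod)
    return masked

updates = [5, 11, 7, 19]                    # per-entity updates (toy values)
aggregate = sum(mask_updates(updates)) % MOD
assert aggregate == sum(updates)            # masks cancel in the aggregate
```

Each individual masked value is uniformly random on its own; only the sum reveals the aggregate, which is the privacy property the cited pages describe.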
Regarding claim 13, NORRMAN teaches a first device in a multi-party interaction process, comprising a hardware processor and a non-transitory memory, wherein the non-transitory memory is configured to store a computer program, and the hardware processor is configured to call and run the computer program stored in the non-transitory memory, and execute: (NORRMAN Fig. 7 and text on [page 20 line 26-35 and page 21 line 9-20] teaches an apparatus implemented as a network entity comprising a memory storing instructions executed by a processor); sending a first key to a second device (NORRMAN on [page 15 line 1-10] teaches that the NF B 306 (i.e., first device) sends its public key to NF A 304 (i.e., second device)); wherein the first key is configured for the second device to encrypt inference information of a second model in the second device to obtain first encrypted inference information (NORRMAN on [page 15 line 1-10] teaches that the NF B 306 sends its public key to NF A 304 (i.e., second device), and the NF A 304 encrypts the seed using the public key; see also [page 14 line 10-20], teaching that the network entities 304-310 may generate masks and encrypt the masks (i.e., corresponding to the model) when sending them via the NWDAF 302. The encryption may re-use the certificates of the operator public key infrastructure (PKI).
That is, a network entity sending its mask to a receiving network entity uses the public key of the receiving network entity's certificate to encrypt the mask); and obtaining target information based on inference information of a first model in the first device and second encrypted inference information corresponding to the first encrypted inference information, in response to the first device receiving the second encrypted inference information from the second device (NORRMAN on [page 19 line 6-20] teaches that the first entity (i.e., NF B 306) receives an indication of one or more second masks from one or more second network entities or from the aggregator entity 302, wherein the indication is encrypted (i.e., second encrypted information). See also [page 11 line 10-25], teaching that each network entity shares its mask with one neighboring network entity, such that NF A 304 transmits an indication of its mask GTI.sub.A to NF B 306, NF B 306 transmits an indication of its mask GTI.sub.B to NF C 308, and so on. The indications of the masks may be the masks themselves, or a seed which can be expanded into the mask (typically the seed is much smaller than the mask). The indications of the masks may be transmitted directly between the network entities, or indirectly via an intermediate node (such as the NWDAF 302). Each network entity then combines its own mask with the inverse of the mask that it received from its neighboring entity (or vice versa). For example, the masks may be combined by addition. See [page 21 line 10-20], teaching: receive an indication of one or more respective second masks from only a subset of the remaining entities of the plurality of entities, the subset consisting of one or more second entities of the plurality of entities; combine the first mask and the respective second masks to generate a combined mask; apply the combined mask to the model update to generate a masked model update (i.e.
obtain target information)); wherein the first device comprises at least one of: a second terminal device, at least one network element of a second core network, or a second server; and the second device comprises at least one of: a third terminal device, at least one network element of a third core network, or a third server (NORRMAN on [page 5 line 35] teaches that the network entities may be implemented as core network elements). NORRMAN fails to explicitly teach wherein the second encrypted inference information is obtained by the second device based on the first encrypted inference information; however, FAN, from analogous art, teaches wherein the inference information of the second model is output information of the second model which is obtained by inputting input information of an inference task into the second model by the second device (FAN on [0060-0061] teaches that the second terminal receives first data as an input, calculates second data corresponding to the first data, acquires a first sample corresponding to the second data, and calculates a loss function based on the first sample, the first data, and the second data); wherein the second encrypted inference information is obtained by the second device based on the first encrypted inference information (FAN on [0089] teaches that the second terminal encrypts each calculation factor by a homomorphic encryption addition algorithm using the public key sent by the third terminal to obtain encrypted calculation factors (i.e., first encrypted information), adds the encrypted calculation factors to obtain an encrypted loss value (i.e., second encrypted information), and sends the encrypted loss value to the third terminal; i.e., the second encrypted information is obtained using the first encrypted information.
See also [0141-0142], teaching: in response to the encrypted second sample identifier being received, secondarily encrypting the second sample identifier with the first public key to obtain a second encrypted value, and detecting whether a first encrypted value sent by the second terminal is received. When the first terminal receives the encrypted second sample identifier sent by the second terminal, the first terminal uses its public key, that is, the first public key, to secondarily encrypt the encrypted second sample identifier, records the secondarily encrypted sample identifier as the second encrypted value, and detects whether the first encrypted value sent by the second terminal is received). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the teaching of FAN into the teaching of NORRMAN by generating second encrypted information based on first encrypted information and determining a loss value in the federated learning process. One would be motivated to do so in order to improve the efficiency of a system executing a task using federated machine learning (FAN [0003-0005]).

Regarding claims 2 and 14, the combination of NORRMAN and FAN teaches all the limitations of claims 1 and 13, respectively. NORRMAN further teaches wherein the second device comprises N electronic devices; the first key is configured to indicate an ith electronic device of the N electronic devices to encrypt the inference information of the second model in the ith electronic device, obtain the first encrypted inference information, and transmit the first encrypted inference information to a third device (NORRMAN on [page 15 line 1-10] teaches that the NWDAF 302 receives a response message 402 including encrypted seed information from NF A 304 (i.e., the ith electronic device of the plurality of devices 302-314 as shown in Fig. 3). See also [page 15 line 1-10], teaching that the NF B 306 (i.e., first device) sends its public key to NF A 304 (i.e., the ith electronic device).
The NF A 304 encrypts the seed using the public key (i.e., obtaining encrypted inference information using the public key received from the first device). See also [page 14 line 10-20], teaching that the network entities 304-310 may encrypt the masks when sending them via the NWDAF 302; the encryption may re-use the certificates of the operator public key infrastructure (PKI). That is, a network entity sending its mask to a receiving network entity uses the public key of the receiving network entity's certificate to encrypt the mask); the first encrypted inference information is configured to indicate the third device to determine the second encrypted inference information (NORRMAN on [page 19 line 20] teaches that the first entity (i.e., NF B 306) receives an indication of one or more second masks from one or more second network entities or from the aggregator entity 302 (i.e., third device), wherein the indication is encrypted (i.e., second encrypted information)); and wherein N is an integer greater than or equal to 2, and i is an integer greater than or equal to 1 and less than or equal to N (NORRMAN Fig. 3, blocks 302-314, and text on [page 11 line 11-25] teaches a plurality of devices NF A 304, NF B 306, NF C 308, and NF D 310, i.e., greater than 2, and NF A 304 (i.e., the ith electronic device) is one of the plurality of devices).

Regarding claims 3 and 15, the combination of NORRMAN and FAN teaches all the limitations of claims 2 and 14, respectively. NORRMAN further teaches wherein the third device comprises a first Network Data Analysis Function (NWDAF) network element (NORRMAN on [page 15 line 1-10] teaches the NWDAF).
Regarding claims 4 and 16, the combination of NORRMAN and FAN teaches all the limitations of claims 1 and 13, respectively. NORRMAN further teaches receiving, by the first device, a second key from a fourth device; encrypting, by the first device, training information of the first model by using the second key to obtain first encrypted training information; and transmitting, by the first device, the first encrypted training information, wherein the first encrypted training information is configured to enable the fourth device to obtain model updating information based on the second encrypted training information corresponding to the first encrypted training information, and the model updating information is configured to update the first model (NORRMAN Fig. 3 and text on [page 14 line 10-20] teaches that the network entities 304-310 may encrypt the masks when sending them via the NWDAF 302; the encryption may re-use the certificates of the operator public key infrastructure (PKI). That is, a network entity sending its mask to a receiving network entity uses the public key of the receiving network (i.e., indicating the fourth device) entity's certificate to encrypt the mask. Further, this ensures that the NWDAF 302 cannot successfully impersonate network entities towards each other).
Regarding claims 5 and 17, the combination of NORRMAN and FAN teaches all the limitations of claims 4 and 16, respectively. NORRMAN further teaches wherein the transmitting, by the first device, of the first encrypted training information comprises: transmitting, by the first device, the first encrypted training information to a fifth device; wherein the first encrypted training information is configured to indicate the fifth device to obtain the second encrypted training information based on third encrypted training information from the second device and the first encrypted training information, and to transmit the second encrypted training information to the fourth device; and wherein the second encrypted training information is configured to indicate the fourth device to determine the model updating information (NORRMAN Fig. 3 and text on [page 14 line 10-20] teaches that the network entities 304-310 may encrypt the masks when sending them via the NWDAF 302; the encryption may re-use the certificates of the operator public key infrastructure (PKI). That is, a network entity sending its mask to a receiving network entity uses the public key of the receiving network (i.e., indicating the fourth device) entity's certificate to encrypt the mask. Further, this ensures that the NWDAF 302 cannot successfully impersonate network entities towards each other).

Regarding claims 6 and 18, the combination of NORRMAN and FAN teaches all the limitations of claims 5 and 17, respectively. NORRMAN further teaches wherein the fifth device comprises a second NWDAF network element (NORRMAN on [page 15 line 1-10] teaches the NWDAF).
Regarding claims 7 and 19, the combination of NORRMAN and FAN teaches all the limitations of claims 4 and 16, respectively. NORRMAN further teaches wherein the fourth device comprises at least one of a first terminal device, at least one network element of a first core network, and a first server (NORRMAN on [page 5 line 18-36] teaches that one or more of the network entities 304-310 may comprise core network entities).

Regarding claims 8 and 21, the combination of NORRMAN and FAN teaches all the limitations of claims 4 and 16, respectively. NORRMAN further teaches wherein the receiving, by the first device, of a second key from a fourth device comprises: receiving, by the first device, the second key from the fourth device in a first process; wherein the first process comprises at least one of: an establishing process of a first Packet Data Unit (PDU) session, a modifying process of the first PDU session, a first registration request process, a first authentication process, and a first authorization process (NORRMAN on [page 6 line 22-36] teaches a registration request process).

Regarding claims 9 and 22, the combination of NORRMAN and FAN teaches all the limitations of claims 1 and 13, respectively. NORRMAN further teaches wherein the sending, by a first device, of a first key to a second device comprises: sending, by the first device, the first key to the second device in a second process; wherein the second process comprises at least one of: an establishing process of a second PDU session, a modifying process of the second PDU session, a second registration request process, a second authentication process, and a second authorization process (NORRMAN on [page 6 line 22-36] teaches a registration request process).
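The FAN reference, as characterized in the claim 1 and 13 mappings above, sums ciphertexts under a "homomorphic encryption addition algorithm" so that encrypted calculation factors combine into an encrypted loss value. A minimal sketch of that property using textbook Paillier encryption follows; the toy primes and plaintext values are illustrative assumptions, not the scheme or parameters actually used in FAN:

```python
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=293, q=433):
    """Toy Paillier key pair; real deployments use large random primes."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    x = pow(n + 1, lam, n * n)        # g = n + 1
    mu = pow((x - 1) // n, -1, n)     # L(g^lam mod n^2)^-1 mod n
    return (n, n + 1), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n * mu) % n

pk, sk = keygen()
# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# which is how per-factor ciphertexts can sum to an encrypted loss value.
c_sum = (encrypt(pk, 7) * encrypt(pk, 35)) % (pk[0] ** 2)
assert decrypt(pk, sk, c_sum) == 42   # E(7) * E(35) decrypts to 7 + 35
```

The party holding only the public key can aggregate; only the key holder can decrypt the combined value, matching the division of roles between FAN's second and third terminals as described above.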
Regarding claim 10, the combination of NORRMAN and FAN teaches all the limitations of claim 1 above. FAN teaches determining, by the first device, a loss function based on label information in a federated learning training process of the first model and the second model (FAN on [0089-0094 and 0105] teaches calculating an encrypted loss value in the federated learning process). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date to implement the teaching of FAN into the teaching of NORRMAN by determining a loss value in the federated learning process. One would be motivated to do so in order to improve the efficiency of a system executing a task using federated machine learning (FAN [0003-0005]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOEEN KHAN, whose telephone number is (571) 272-3522. The examiner can normally be reached 7 AM-5 PM EST, Monday-Thursday and alternate Fridays. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shewaye Gelagay, can be reached at (571) 272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MOEEN KHAN/ Primary Examiner, Art Unit 2436

Prosecution Timeline

Oct 18, 2023
Application Filed
May 16, 2025
Non-Final Rejection — §101, §103
Aug 15, 2025
Response Filed
Sep 10, 2025
Final Rejection — §101, §103
Nov 07, 2025
Response after Non-Final Action
Dec 11, 2025
Request for Continued Examination
Dec 19, 2025
Response after Non-Final Action
Feb 09, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587531: BROWSER PROFILE SEPARATION FOR A MANAGED USER ACCOUNT (2y 5m to grant; granted Mar 24, 2026)
Patent 12580730: METHOD AND SYSTEM FOR IMPROVING HOMOMORPHIC ENCRYPTION PERFORMANCE BASED ON TRUSTED EXECUTION ENVIRONMENT (2y 5m to grant; granted Mar 17, 2026)
Patent 12574244: DC-SCM AUTHENTICATION SYSTEM (2y 5m to grant; granted Mar 10, 2026)
Patent 12562896: SYSTEM AND METHOD FOR PROVIDING SECURE COMMUNICATION USING EPHEMERAL KEYS WITH A LIFETIME ASSOCIATED WITH A TYPE OF DATA BEING SECURED (2y 5m to grant; granted Feb 24, 2026)
Patent 12556364: OPTIMIZED AUTHENTICATION SYSTEM FOR A MULTIUSER DEVICE (2y 5m to grant; granted Feb 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 99% (+59.7%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 228 resolved cases by this examiner. Grant probability derived from career allow rate.
