Prosecution Insights
Last updated: April 19, 2026
Application No. 18/016,468

WATERMARK PROTECTION OF ARTIFICIAL INTELLIGENCE MODEL

Final Rejection — §102, §103

Filed: Jan 17, 2023
Examiner: MAYE, AYUB A
Art Unit: 2436
Tech Center: 2400 — Computer Networks
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 4 (Final)

Grant Probability: 58% (Moderate)
OA Rounds: 5-6
To Grant: 5y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (377 granted / 652 resolved), at TC average
Interview Lift: +41.6% for resolved cases with an interview vs. without
Avg Prosecution: 5y 2m typical timeline; 32 applications currently pending
Total Applications: 684 across all art units

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 57.5% (+17.5% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Tech Center average values are estimates • Based on career data from 652 resolved cases

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 1, 16 and 27 are objected to because of the following informalities: claims 1, 16 and 27 recite the limitation "the second AI model" in line 9. There is insufficient antecedent basis for this limitation in the claim. It is suggested to amend to "a second AI model". Also, claims 1, 16 and 27 recite the limitation "the second watermark" in line 10. There is insufficient antecedent basis for this limitation in the claim. It is suggested to amend to "a second watermark". Also, claims 1, 16 and 27 recite the limitation "a set of baseline parameters from the second AI model" in line 11. There is insufficient antecedent basis for this limitation in the claim. It is suggested to amend to "the set of baseline parameters from the second AI model". Also, claims 1, 16 and 27 recite the limitation "a second watermark" in line 13. There is insufficient antecedent basis for this limitation in the claim. It is suggested to amend to "the second watermark". Claims 1, 16 and 27 also recite the limitation "a second AI model" in line 14. There is insufficient antecedent basis for this limitation in the claim. It is suggested to amend to "the second AI model". Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3-6, 8, 10, 12-16, 24-27 and 30 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Rouhani et al. (US 2021/0019605).

For claim 1, Rouhani teaches a computer-implemented method for protecting a first artificial intelligence (AI) model from tampering (abstract, lines 1-5), the method comprising: determining a converged AI model from a convergence of the first AI model (at every stage of the machine learning model, i.e., the input layer, intermediate layers and output layer, the neural network or AI model is converged or summed together and the neurons in each stage are determined, and to identify the rarely explored regions of the machine learning model 100, the detection engine 330 may apply Principal Component Analysis (PCA) on activation maps acquired by passing training data through the converged machine learning model 100. The resulting eigenvectors may be used to transform the high dimensional activation maps into a lower dimensional subspace.
The Principal Component Analysis transformation may be encoded as a dense layer inserted after the second-to-last layer of the original tensor graph the machine learning model 100 in order to minimize data movement when performing the data projection as Rouhani teaches in par.45, lines 1-8, par.47 and par.80); responsive to the determining the converged AI model, capturing a snapshot of a set of baseline parameters of the converged AI model (the quantity neurons in each input, intermediate and output layers are calculated based on the baseline parameters such as weight and biases in order to determine the values of the weights or biases are in passing ranges or need to be adjusted to minimize the error in the output as Rouhani teaches in par.47, lines 1-8 and par.49); generating a first watermark for the converged AI model based on applying one or more transformations to each baseline parameter from the set of baseline parameters (Rouhani teaches that a digital watermark in a machine learning model. The apparatus may include: means for identifying, based at least on a first activation map associated with a first hidden layer of a first machine learning model, a first plurality of input samples altering one or more low probabilistic regions of the first activation map, the first hidden layer including a plurality of neurons, each of the plurality of neurons applying an activation function to generate an output, and the one or more low probabilistic regions of the first activation map being occupied by values having a below threshold probability of being output by the plurality of neurons as Rouhani discloses in par.25, lines 1-8. See also watermark on layer-by-layer basis or weight in view of [0119] of instant application as Rouhani discloses para 61 and 81), the first watermark comprising a value external to the converged AI model (the examiner notes that detection engine 330 may be further be configured to extract, from the third party machine learning model 320, which is external of AI model 100 and a digital watermark and/or determine whether the digital watermark extracted from the third party machine learning model 320 matches the digital watermark embedded in the machine learning model as Rouhani teaches in par.52, lines 1-9), acquiring a set of baseline parameters from the second AI model (Rouhani teaches of determining, based at least on a second activation map associated with the second machine learning model, whether the first plurality of input samples triggers, in the second activation map, a same and/or similar changes as in the first activation map, the second machine learning model being a duplicate of the first machine learning model if the plurality input samples triggers the same or similar changes in the second activation map as in the first activation map and, therefore, the parameters in the first machine learning as the same as the parameters in the second machine learning and those parameters are acquired and extracted from the second machine learning model as Rouhani teaches in par.16-21); generating the second watermark for the second AI model based on applying one or more transformations to each baseline parameter from a set of baseline parameters from the second AI model (Rouhani teaches that the detection engine 330 may determine that the second machine learning model is a duplicate of the first machine learning model based at least on a comparison of the first digital watermark embedded in the first machine learning model and the second digital watermark 
extracted from the second machine learning model and the second digital watermark by at least processing, with the second machine learning model as Rouhani teaches in par.16-21), determining a degree of correlation between the first watermark and a second watermark for a second AI model (Rouhani teaches that determining that a second machine learning model is a duplicate of the first machine learning model based at least on a comparison of the first digital watermark embedded in the first machine learning model and a second digital watermark extracted from the second machine learning model as Rouhani teaches par.13 and 14), wherein the degree of correlation comprises a measure of whether the second AI model matches or is derived from the converged AI model (Rouhani teaches that on the second digital watermark extracted from the second machine learning model matching first digital watermark embedded in the first copy of the first machine learning model and furthermore, the comparison of the first digital watermark and the second digital watermark may include determining a bit error rate (BER) between the first digital watermark and the second digital watermark. The second machine learning model may be determined to be a duplicate of the first machine learning model based at least on the bit error rate not exceeding a threshold value as Rouhani teaches in par.12 and 17). For claims 3, 24 and 30, Rouhani further teaches wherein the converged AI model comprises a converged neural network (par.13). For claims 4 and 25, Rouhani further teaches wherein the set of baseline parameters comprises one or more of: a number of layers in the converged neural network; a set of baseline model weights for each layer in the converged neural network; a number of input features at each layer in the converged neural network; a number of output features at each layer in the converged neural network; an accuracy of the converged neural network; a number of training samples for the converged neural network; and a learning rate of the converged neural network (Rouhani teaches the machine learning model may be a neural network having a plurality of layers including, for example, core computation layers, normalization layers, pooling layers, non-linearity layers, and/or the like. As part of processing an input sample (e.g., training data, validation data, testing data, and/or the like), the intermediate layers and the output layer of the machine learning model may apply, to values received from a preceding layer, one or more activation functions as Rouhani teaches in par.39). For claims 5 and 26, Rouhani further teaches determining, on a layer-by-layer basis, a count representing a number of neurons in each layer of the converged neural network based on a function of the number of input features in each layer and the number of output features in each layer (The input layer 110, the first intermediate layer 120a, the second intermediate layer 120b, and the output layer 130 may each include a plurality of neuron as Rouhani teaches in par.46, lines 1-4), the function comprising a ratio of the number of input values in each layer to the number of output values in each layer; and identifying, on a layer-by-layer basis, one or more promising neurons based on a neuron ranking algorithm (the quantity of neurons in each layers are calculated and the results are weighted and summed before each of the quantity of neurons in the second layer as Rouhani teaches in par.48). 
For claim 6, Rouhani further teaches wherein the one or more transformations comprises: generating, on a layer-by-layer basis, a layer-wise watermark based on solving the equation [reproduced as an image in the claim] for each layer, wherein w comprises the layer-wise watermark value, |?| comprises a baseline accuracy, ?.sub.0 comprises a baseline model weight, ? comprises the number of training samples, n comprises the layer-wise neuron count, and ? comprises a learning rate of the converged AI model (the examiner notes that Rouhani teaches calculating watermark values in par.48, baseline accuracy in par.55, weights in par.48, number of samples in par.59 and learning rate in par.59); and maintaining the layer-wise watermark for each layer as a vector (par.49).

For claim 8, Rouhani further teaches wherein determining the degree of correlation is based on: generating, on a layer-by-layer basis, a modified watermark for each layer of the converged AI model having the one or more promising neurons removed from the converged AI model (Rouhani teaches the digital watermark in the machine learning model 100 may include generating the subset of training data to include samples that adjust the tails of the decision boundaries between different classes as Rouhani teaches in par.56); calculating a delta value, on a layer-by-layer basis, of a difference between the first watermark and the modified watermark (Rouhani teaches a difference between the first digital watermark and the digital watermark present in the first machine learning model as Rouhani teaches in par.19); setting a watermark threshold for the converged AI model, the watermark threshold including a range defined as a difference between the value of the first watermark less the delta value and the value of the first watermark plus the delta value (Rouhani teaches that the comparison of the first digital watermark and the second digital watermark may include determining a bit error rate (BER) between the first digital watermark and the second digital watermark, and the second machine learning model may be determined to be a duplicate of the first machine learning model based at least on the bit error rate not exceeding a threshold value as Rouhani teaches in par.17); calculating a value of the second watermark (Rouhani teaches calculating the second watermark in par.17); and determining whether the value of the second watermark falls within the watermark threshold, which indicates that the second AI model matches or is derived from the converged AI model (par.11).

For claim 10, Rouhani further teaches wherein the second AI model comprises a neural network model (par.13), and wherein the set of baseline parameters comprise one or more of: a number of layers in the another neural network; a set of baseline model weights for each layer in the another neural network; a set of model weights for each layer of the another neural network; a number of input features at each layer in the another neural network; a number of output features at each layer in the another neural network; an accuracy of the another neural network; a number of training samples for the another neural network; and a learning rate of the another neural network (Rouhani teaches the machine learning model may be a neural network having a plurality of layers including, for example, core computation layers, normalization layers, pooling layers, non-linearity layers, and/or the like. As part of processing an input sample (e.g., training data, validation data, testing data, and/or the like), the intermediate layers and the output layer of the machine learning model may apply, to values received from a preceding layer, one or more activation functions as Rouhani teaches in par.39).

For claim 12, Rouhani further teaches wherein the second AI model comprises a neural network model, the method further comprising determining, on a layer-by-layer basis, a count representing a number of neurons in each layer of the neural network based on a function of the number of input features in each layer and the number of output features in each layer (the input layer 110, the first intermediate layer 120a, the second intermediate layer 120b, and the output layer 130 may each include a plurality of neurons as Rouhani teaches in par.46, lines 1-4), the function comprising a ratio of the number of input values in each layer to the number of output values in each layer; and extracting, on a layer-by-layer basis, one or more neurons of the second AI neural network based on a ranking of the one or more neurons to identify the neurons for use in generating the second watermark (Rouhani teaches the second machine learning model goes through the same process as the first machine learning model as Rouhani teaches in par.85 and 86, and the quantity of neurons in each layer is calculated and the results are weighted and summed before each of the quantity of neurons in the second layer as Rouhani teaches in par.48).

For claim 13, Rouhani further teaches wherein the one or more transformations comprises: generating, on a layer-by-layer basis, a layer-wise watermark based on solving the equation [reproduced as an image in the claim] for each layer, wherein w comprises the layer-wise watermark value, |?| comprises a baseline accuracy, ?.sub.0 comprises a baseline model weight, ? comprises a recent model weight, ? comprises the number of training samples, n comprises the layer-wise neuron count, and ? comprises a learning rate of the another AI model (the examiner notes that Rouhani teaches calculating watermark values in par.48, baseline accuracy in par.55, weights in par.48, number of samples in par.59 and learning rate in par.59); and maintaining the layer-wise watermark for each layer as a vector (par.49).

For claim 14, Rouhani further teaches generating an alert notification that the second AI model matches or is derived from the converged AI model (par.12).

For claim 15, Rouhani further teaches wherein the AI model comprises at least one of: an elephant flow prediction for a telecommunications network; and a congestion flow classification for a telecommunications network (par.38).
For claim 16, Rouhani teaches An artificial intelligence (AI) protection system for a communication network (abstract), the AI protection system comprising: at least one processor (par.4, lines 3); at least one memory connected to the at least one processor and storing program code that is executed by the at least one processor to perform operations (par.4, lines 2-5) comprising: determining a converged AI model from a convergence of the AI model (at every stage of the machine learning model of input layer, intermediate layer and the output layers, the neural network or AI model are converge or summed together and determined the neurons in each stage as Rouhani teaches in par.45, lines 1-8, par.47 and par.80); responsive to determining the converged AI model, capturing a snapshot of a set of baseline parameters of the converged AI model (the quantity neurons in each input, intermediate and output layers are calculated based on the baseline parameters such as weight and biases in order to determine the values of the weights or biases are in passing ranges or need to be adjusted to minimize the error in the output as Rouhani teaches in par.47, lines 1-8 and par.49); generating a first watermark for the converged AI model based on applying one or more transformations to each baseline parameter from the set of baseline parameters (Rouhani teaches that a digital watermark in a machine learning model. The apparatus may include: means for identifying, based at least on a first activation map associated with a first hidden layer of a first machine learning model, a first plurality of input samples altering one or more low probabilistic regions of the first activation map, the first hidden layer including a plurality of neurons, each of the plurality of neurons applying an activation function to generate an output, and the one or more low probabilistic regions of the first activation map being occupied by values having a below threshold probability of being output by the plurality of neurons as Rouhani discloses in par.25, lines 1-8), the first watermark comprising a value external to the converged AI model (the examiner notes that detection engine 330 may be further be configured to extract, from the third party machine learning model 320, which is external of AI model 100 and a digital watermark and/or determine whether the digital watermark extracted from the third party machine learning model 320 matches the digital watermark embedded in the machine learning model as Rouhani teaches in par.52, lines 1-9), acquiring a set of baseline parameters from the second AI model (Rouhani teaches of determining, based at least on a second activation map associated with the second machine learning model, whether the first plurality of input samples triggers, in the second activation map, a same and/or similar changes as in the first activation map, the second machine learning model being a duplicate of the first machine learning model if the plurality input samples triggers the same or similar changes in the second activation map as in the first activation map, therefore, the parameters in the first machine learning as the same as the parameters in the second machine learning and those parameters are acquired and inputted into the second machine learning as Rouhani teaches in par.18); generating the second watermark for the second AI model based on applying one or more transformations to each baseline parameter from a set of baseline parameters from the second AI model (Rouhani teaches that the detection 
engine 330 may determine that the second machine learning model is a duplicate of the first machine learning model based at least on a comparison of the first digital watermark embedded in the first machine learning model and the second digital watermark extracted from the second machine learning model and the second digital watermark by at least processing, with the second machine learning model as Rouhani teaches in par.16-21), determining a degree of correlation between the first watermark and a second watermark for a second AI model (Rouhani teaches that determining that a second machine learning model is a duplicate of the first machine learning model based at least on a comparison of the first digital watermark embedded in the first machine learning model and a second digital watermark extracted from the second machine learning model as Rouhani teaches par.13 and 14), wherein the degree of correlation comprises a measure of whether the second AI model matches or is derived from the converged AI model (Rouhani teaches that on the second digital watermark extracted from the second machine learning model matching first digital watermark embedded in the first copy of the first machine learning model and furthermore, the comparison of the first digital watermark and the second digital watermark may include determining a bit error rate (BER) between the first digital watermark and the second digital watermark. The second machine learning model may be determined to be a duplicate of the first machine learning model based at least on the bit error rate not exceeding a threshold value as Rouhani teaches in par.12 and 17). For claim 27, Rouhani teaches A non-transitory computer readable medium having instructions stored therein that are executable by processing circuitry of a first artificial intelligence (AI) protection system to cause the AI protection system to perform operations (abstract, lines 1-5) (par.24) comprising: determining a converged AI model from a convergence of the AI model (at every stage of the machine learning model of input layer, intermediate layer and the output layers, the neural network or AI model are converge or summed together and determined the neurons in each stage as Rouhani teaches in par.45, lines 1-8, par.47 and par.80); responsive to the determining the converged AI model, capturing a snapshot of a set of baseline parameters of the converged AI model (the quantity neurons in each input, intermediate and output layers are calculated based on the baseline parameters such as weight and biases in order to determine the values of the weights or biases are in passing ranges or need to be adjusted to minimize the error in the output as Rouhani teaches in par.47, lines 1-8 and par.49); generating a first watermark for the converged AI model based on applying one or more transformations to each baseline parameter from the set of baseline parameters (Rouhani teaches that a digital watermark in a machine learning model. 
The apparatus may include: means for identifying, based at least on a first activation map associated with a first hidden layer of a first machine learning model, a first plurality of input samples altering one or more low probabilistic regions of the first activation map, the first hidden layer including a plurality of neurons, each of the plurality of neurons applying an activation function to generate an output, and the one or more low probabilistic regions of the first activation map being occupied by values having a below threshold probability of being output by the plurality of neurons as Rouhani discloses in par.25, lines 1-8. See also watermark on layer-by-layer basis or weight in view of [0119] of instant application as Rouhani discloses para 61 and 81), the first watermark comprising a value external to the converged AI model (the examiner notes that detection engine 330 may be further be configured to extract, from the third party machine learning model 320, which is external of AI model 100 and a digital watermark and/or determine whether the digital watermark extracted from the third party machine learning model 320 matches the digital watermark embedded in the machine learning model as Rouhani teaches in par.52, lines 1-9), acquiring a set of baseline parameters from the second AI model (Rouhani teaches of determining, based at least on a second activation map associated with the second machine learning model, whether the first plurality of input samples triggers, in the second activation map, a same and/or similar changes as in the first activation map, the second machine learning model being a duplicate of the first machine learning model if the plurality input samples triggers the same or similar changes in the second activation map as in the first activation map, therefore, the parameters in the first machine learning as the same as the parameters in the second machine learning and those parameters are acquired and inputted into the second machine learning as Rouhani teaches in par.18); generating the second watermark for the second AI model based on applying one or more transformations to each baseline parameter from a set of baseline parameters from the second AI model (Rouhani teaches that the detection engine 330 may determine that the second machine learning model is a duplicate of the first machine learning model based at least on a comparison of the first digital watermark embedded in the first machine learning model and the second digital watermark extracted from the second machine learning model and the second digital watermark by at least processing, with the second machine learning model as Rouhani teaches in par.16-21), determining a degree of correlation between the first watermark and a second watermark for a second AI model (Rouhani teaches that determining that a second machine learning model is a duplicate of the first machine learning model based at least on a comparison of the first digital watermark embedded in the first machine learning model and a second digital watermark extracted from the second machine learning model as Rouhani teaches par.13 and 14), wherein the degree of correlation comprises a measure of whether the second AI model matches or is derived from the converged AI model (Rouhani teaches that on the second digital watermark extracted from the second machine learning model matching first digital watermark embedded in the first copy of the first machine learning model and furthermore, the comparison of the first digital watermark and 
the second digital watermark may include determining a bit error rate (BER) between the first digital watermark and the second digital watermark. The second machine learning model may be determined to be a duplicate of the first machine learning model based at least on the bit error rate not exceeding a threshold value as Rouhani teaches in par.12 and 17).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2, 17, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Rouhani et al. (US 2021/0019605) in view of Anthony samy et al. (US 2020/0184212).

For claims 2, 17 and 29, Rouhani teaches all the limitations as previously set forth except for storing the first watermark in a repository separate from the converged AI model. Anthony samy teaches, in a similar system, storing the first watermark in a repository separate from the converged AI model (par.44). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Rouhani to include a repository separate from the converged AI model, as taught and suggested by Anthony samy, for the purpose of determining the emergence of such patterns such that all patterns which have been previously identified are penalized in the tampered region generators to ensure they are not regenerated by the system, so that the model is always trying to find new fraud patterns (Anthony samy, par.6).

Response to Amendments/Arguments

Applicant argues in the remarks filed 11/11/2025, on page 2, that Rouhani fails to teach "acquiring a set of baseline parameters from the second AI model and generating the second watermark for the second AI model based on applying one or more transformations to each baseline parameter from a set of baseline parameters from the second AI model."
However, the examiner respectfully disagrees with applicant because Rouhani teaches extracting or obtaining, from the second machine learning model, the second digital watermark by at least processing with the second machine learning model, which includes samples as the set of baseline parameters; the detection engine 330 may determine that the second machine learning model is a duplicate of the first machine learning model based at least on a comparison of the first digital watermark embedded in the first machine learning model, which includes all the samples and values as parameters, and a second digital watermark extracted from the second machine learning model. Therefore, the Rouhani reference meets the claim limitation.

Regarding the dependent claim arguments, Applicant refers to the arguments presented with respect to the claims as presented earlier. In response, Applicant is directed to the reasons indicated above regarding why Applicant's arguments are not persuasive and why the combinations of the cited references are proper.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AYUB A MAYE, whose telephone number is (571) 270-5037. The examiner can normally be reached Monday-Friday, 9AM-5PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SHEWAYE GELAGAY, can be reached at 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AYUB A MAYE/
Examiner, Art Unit 2436

/SHEWAYE GELAGAY/
Supervisory Patent Examiner, Art Unit 2436

Prosecution Timeline

Jan 17, 2023
Application Filed
Jan 17, 2023
Response after Non-Final Action
Nov 28, 2024
Non-Final Rejection — §102, §103
Feb 24, 2025
Response Filed
Mar 22, 2025
Final Rejection — §102, §103
May 26, 2025
Interview Requested
Jun 24, 2025
Request for Continued Examination
Jun 30, 2025
Response after Non-Final Action
Aug 07, 2025
Non-Final Rejection — §102, §103
Nov 11, 2025
Response Filed
Mar 06, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574211
PERSONAL PRIVATE KEY ENCRYPTION DEVICE
2y 5m to grant Granted Mar 10, 2026
Patent 12574247
DEVICE FOR COMPUTING SOLUTIONS OF LINEAR SYSTEMS AND ITS APPLICATION TO DIGITAL SIGNATURE GENERATIONS
2y 5m to grant Granted Mar 10, 2026
Patent 12547740
INFORMATION PROCESSING DEVICES AND INFORMATION PROCESSING METHODS
2y 5m to grant Granted Feb 10, 2026
Patent 12526274
Geolocated Portable Authenticator for Transparent and Enhanced Information-Security Authentication of Users
2y 5m to grant Granted Jan 13, 2026
Patent 12373573
Vulnerability Processing Method, Apparatus and Device, and Computer-readable Storage Medium
2y 5m to grant Granted Jul 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 58%
With Interview: 99% (+41.6%)
Median Time to Grant: 5y 2m
PTA Risk: High

Based on 652 resolved cases by this examiner. Grant probability derived from career allow rate.
