Prosecution Insights
Last updated: April 19, 2026
Application No. 17/816,163

SYSTEMS AND METHODS FOR PROVIDING DATA PRIVACY USING FEDERATED LEARNING

Status: Final Rejection — §103
Filed: Jul 29, 2022
Examiner: NYE, LOUIS CHRISTOPHER
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Reveald Holdings Inc.
OA Round: 2 (Final)
Grant Probability: 22% (At Risk)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 2m
Grant Probability With Interview: 58%

Examiner Intelligence

Career Allow Rate: 22% (grants only 22% of cases; 2 granted / 9 resolved; -32.8% vs TC avg)
Interview Lift: +35.7% (resolved cases with interview)
Average Prosecution: 3y 2m (typical timeline); 27 currently pending
Total Applications: 36 across all art units (career history)

Statute-Specific Performance

§101: 38.3% (-1.7% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 3.9% (-36.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 9 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-3, 5, 7-12, 14, 16-21, 23, and 25-29 are rejected under 35 U.S.C. 103 as being unpatentable over McMahan et al. (From IDS: US Patent No. 10,657,461, published May 2020, hereinafter “McMahan”), in view of Shen et al. (NPL: Learning Network Representation Through Reinforcement Learning, published April 2020, hereinafter “Shen”), and further in view of Hu et al. (NPL: A Scalable Federated Multi-agent Architecture for Networked Connected Communication Network, published 1 Aug. 2021, hereinafter “Hu”).

Regarding claim 1, McMahan teaches a method for providing data privacy using federated learning, comprising:

receiving a first instance of a master RL agent model (McMahan, Page 10, Col. 4, Lines 40-45 – “In an example federated learning framework, an objective is to learn a model with parameters embodied in a real matrix W ∈ R^(d1×d2). As examples, the model can include one or more neural networks (e.g., deep neural networks, recurrent neural networks, convolutional neural networks, etc.) or other machine-learned models” and in Page 10, Col. 4, Lines 46-47 – “In round t ≥ 0, the server distributes the current model W_t to a subset S_t of n_t clients” – teaches receiving (server distributes to clients) a first instance (first round server model, t=0) of a master agent model);

receiving a second instance of the master RL agent model (McMahan, Page 10, Col. 4, Lines 40-45 – “In an example federated learning framework, an objective is to learn a model with parameters embodied in a real matrix W ∈ R^(d1×d2). As examples, the model can include one or more neural networks (e.g., deep neural networks, recurrent neural networks, convolutional neural networks, etc.) or other machine-learned models” and in Page 10, Col. 4, Lines 46-47 – “In round t ≥ 0, the server distributes the current model W_t to a subset S_t of n_t clients” – teaches receiving (server distributes to clients) a second instance (first round server model, t=0, distributed to subset of clients) of a master agent model);

wherein each of the first software stack of the first client and the second software stack of the second client are siloed (McMahan, Pg. 9 Col. 2 Lines 21-33 – “Furthermore, by training a machine-learned model by a client computing device based on a local dataset, the security of the training process can be improved. This is because, for example, the information of the model update is less sensitive than the data itself. User data that is privacy sensitive remains at the user's computing device and is not uploaded to the server. Instead, only the less sensitive model update is transmitted.” – teaches wherein each of the first software stack of the first client and the second software stack of the second client are siloed (model trained on local dataset that remains on user device, only the less sensitive model update, or model weights, is transmitted, thus the data of the first and second clients are siloed));

transmitting the first information gain, [[and]] the first set of RL model weights, the second information gain and the second RL model weights to a central server thereby enabling the central server to update the master RL agent model, the second information gain and the second set of RL model weights such that the siloed client data remain separate to preserve the data privacy of the first client and the second client (McMahan, Page 10, Col. 4, Lines 54-57 – “Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates.” – teaches transmitting the information gain and first set of model weights to a central server (client sends update back to server) to update the master agent model based on the first information gain and the first set of model weights (global update computed by aggregating all client-side updates). In addition to the previously cited passage, McMahan further teaches in Pg. 9 Col. 2 Lines 21-33 – “Furthermore, by training a machine-learned model by a client computing device based on a local dataset, the security of the training process can be improved. This is because, for example, the information of the model update is less sensitive than the data itself. User data that is privacy sensitive remains at the user's computing device and is not uploaded to the server. Instead, only the less sensitive model update is transmitted.” – teaches that the siloed client data remain separate to preserve the data privacy of the first client and the second client (model trained on local dataset that remains on user device, only the less sensitive model update, or model weights, is transmitted)).
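The distribute-train-aggregate loop cited from McMahan above (server sends W_t to clients, each client i returns the delta H_t^i = W_t^i − W_t, the server aggregates) can be sketched as follows. This is an illustrative reconstruction, not code from any cited reference; the least-squares local loss and the uniform averaging are assumptions made only to keep the example runnable.

```python
import numpy as np

def local_update(W, data, lr=0.1, steps=10):
    # One client's local training on its own (siloed) data.
    # Only the delta H_i = W_i - W leaves the device, not the data itself.
    X, y = data
    W_i = W.copy()
    for _ in range(steps):
        grad = X.T @ (X @ W_i - y) / len(X)  # least-squares gradient (stand-in loss)
        W_i -= lr * grad
    return W_i - W

def federated_round(W, client_datasets):
    # Server round: distribute W_t, collect client deltas, aggregate by averaging.
    updates = [local_update(W, d) for d in client_datasets]
    return W + np.mean(updates, axis=0)  # W_{t+1} = W_t + mean_i H_t^i
```

Because only the aggregated deltas reach the server, the raw client datasets never co-mingle, which is the privacy property the rejection maps onto the claimed siloing.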
McMahan fails to explicitly teach a master reinforcement learning (RL) agent model; training the first instance of the master RL agent model on a first graph that models relations of objects in the form of vector representations of real numbers corresponding to a first software stack of a first client, thereby generating a first set of RL model weights; training the second instance of the master RL agent model on a second graph that models relations of objects in the form of vector representations of real numbers corresponding to a second software stack of a second client, thereby generating a second set of RL model weights; generating a first information gain corresponding to [[a]] the first software stack of [[a]] the first client; generating a second information gain corresponding to the second software stack of the second client; and update the master RL agent model.

However, analogous to the field of the claimed invention, Shen teaches:

a master reinforcement learning (RL) agent model (Shen, Algorithm 2 – teaches a master reinforcement learning RL agent model (RLNet calculates and accumulates gradients of RL agents, updating parameters θ));

training the first instance of the master RL agent model on a first graph that models relations of objects in the form of vector representations of real numbers corresponding to a first software stack of a first client (Shen, Fig. 1 and Section 3.1 Paragraph 1 – “Given a network (V,E), where V is the node (vertex) set and E is the edge set, RLNet aims to embed each node v ∈ V into a low dimensional real-value vector. RLNet explores a network through multiple walking processes on the network, and learns the representations at the same time. For each walk, a RLNet agent can be located in any vertex vi ∈ V and travel through edge eij ∈ E to vj to explore in the network.” – teaches training the first instance of the RL agent model on a first graph that models relations of objects in the form of vector representations of real numbers corresponding to a software stack (given a network) of a first client (V is node and E is edge, aims to embed each node into low dimensional real-valued vector)), thereby generating a first set of RL model weights (Shen, Algorithm 1 and Section 3.3.1 Paragraph 1 – “The node traversal (walking) process is described in Algorithm 1. In the beginning, an agent is randomly located in v1 (line 1). At each time step t, a probability number p is obtained (line 3). With probability p ≤ ε, the agent performs the ε-greedy strategy (line 4-5). Otherwise, an action at will be sampled from the policy network π(a|s) (line 7). The action at is executed, and then the vertex where the agent located is changed to vt+1, and reward rt is obtained (line 9). If the time t has reached m, the whole episode is returned (line 11).” – trains first instance of master RL agent model on vector representation corresponding to software stack of first client, thereby generating a first set of RL model weights (RL agent traverses node, performs action, obtains reward, and whole episode is returned when time t reaches m));

training the second instance of the master RL agent model on a second graph that models relations of objects in the form of vector representations of real numbers corresponding to a second software stack of a second client (Shen, Fig. 1 and Section 3.1 Paragraph 1 – “Given a network (V,E), where V is the node (vertex) set and E is the edge set, RLNet aims to embed each node v ∈ V into a low dimensional real-value vector. RLNet explores a network through multiple walking processes on the network, and learns the representations at the same time. For each walk, a RLNet agent can be located in any vertex vi ∈ V and travel through edge eij ∈ E to vj to explore in the network.” – teaches training the second instance of the RL agent model on a second graph that models relations of objects in the form of vector representations of real numbers corresponding to a software stack (given a network) of a second client (V is node and E is edge, aims to embed each node into low dimensional real-valued vector, each agent operates on its own data)), thereby generating a second set of RL model weights (Shen, Algorithm 1 and Section 3.3.1 Paragraph 1 – “The node traversal (walking) process is described in Algorithm 1. In the beginning, an agent is randomly located in v1 (line 1). At each time step t, a probability number p is obtained (line 3). With probability p ≤ ε, the agent performs the ε-greedy strategy (line 4-5). Otherwise, an action at will be sampled from the policy network π(a|s) (line 7). The action at is executed, and then the vertex where the agent located is changed to vt+1, and reward rt is obtained (line 9). If the time t has reached m, the whole episode is returned (line 11).” – trains second instance of master RL agent model on vector representation corresponding to software stack of second client, thereby generating a second set of RL model weights (RL agent traverses node, performs action, obtains reward, and whole episode is returned when time t reaches m));

update the master RL agent model (Shen, Section 3.3.2 Paragraph 1 – “The RLNet training process is depicted in Algorithm 2. For each epoch, C episodes will be sampled using Algorithm 1. For each sampled episodes, at each discrete time t, the accumulated reward Aπθ is obtained (line 6). And then, the gradient ∆θ of the model parameters θ (consisting of W and representations: v and x) are calculated, and accumulated (line 7).
After C episodes have been sampled, the gradient ∆θ is averaged to obtain a robust estimation of gradient (line 10) and then ∆θ is applied on θ (line 11). The whole training process is repeated until converge” – teaches updating the master RL agent model (gradient of model parameters of RL agent is calculated and accumulated, thus updating the parameters of the master reinforcement learning model RLNet))).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the RL agent training on vector representations of graphs and RL master model of Shen to the federated learning system of McMahan. Doing so would enable learning network representations via interacting with a network while maximizing accumulated rewards of a reinforcement learning agent (Shen, Introduction) while user data that is privacy sensitive remains at the user's computing device and is not uploaded to the server (McMahan, Page 9, Col. 2, Lines 30-32).

The combination of McMahan and Shen fails to explicitly teach generating a first information gain corresponding to [[a]] the first software stack of [[a]] the first client; generating a second information gain corresponding to the second software stack of the second client. However, analogous to the field of the claimed invention, Hu teaches:

generating a first information gain corresponding to [[a]] the first software stack of [[a]] the first client (Hu, Section VIII Paragraph 3 – “During the learning procedure, the information increase in each time step. For any agent i in a group of agent B with neighbor agents B−i, we define the information gain in each learning time step as Eq. (29) where ∆↑Ii,env is the information gain for local information,” – teaches generating a first information gain corresponding to the first software stack of the first client (generates information gain corresponding to local information of agent));

generating a second information gain corresponding to the second software stack of the second client (Hu, Section VIII Paragraph 3 – “During the learning procedure, the information increase in each time step. For any agent i in a group of agent B with neighbor agents B−i, we define the information gain in each learning time step as Eq. (29) where ∆↑Ii,env is the information gain for local information,” – teaches generating a second information gain corresponding to the second software stack of the second client (generates information gain corresponding to local information of agent));

transmitting the first information gain [[and]] the second information gain to a central server thereby enabling the central server to update the master RL agent model (Hu, Section VIII Subsection A Paragraph 3 – “In this way, the overall information gain in F − 1 local update and the following federated update can be written as Eq. (36)” – teaches updating the master RL agent model (federated update) based on the first information gain and the second information gain (based on I*,env which is the information gain of each agent and their respective local environment)).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the information gain metrics of Hu to the federated learning system and RL agents of McMahan and Shen. Doing so would acquire system-level information and train the algorithms for agents in a central controller which is executed distributively in the network, and may do so without sharing agent information directly with others (Hu, Introduction).

Claims 10 and 19 incorporate substantively all the limitations of claim 1 in a non-transitory computer-readable medium and system, and are rejected on the same grounds as above. McMahan teaches the non-transitory computer readable medium; system; and servers (McMahan, Pg. 9 Col. 1, Lines 50-65).

Regarding claim 2, the combination of McMahan, Shen, and Hu teaches the method according to claim 1, further comprising: updating the master RL agent model based on the first information gain and the first set of RL model weights, thereby generating an updated master RL agent model (McMahan, Page 10, Col. 4, Lines 54-57 – “Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates.” – teaches transmitting the information gain and first set of model weights to a central server (client sends update back to server) to update the master agent model based on the first information gain and the first set of model weights (global update computed by aggregating all client-side updates), thereby generating an updated master agent model).

McMahan fails to explicitly teach updating the master RL agent model based on the first information gain. However, analogous to the field of the claimed invention, Hu teaches: updating the master RL agent model based on the first information gain (Hu, Section VIII Subsection A Paragraph 3 – “In this way, the overall information gain in F − 1 local update and the following federated update can be written as Eq. (36)” – teaches updating the master RL agent model (federated update) based on the first information gain (based on I*,env which is the information gain of the agent and their respective local environment)).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the updating of the master RL agent model based on the first information gain of Hu to further modify the updating of the master model based on the first set of weights of McMahan, Shen, and Hu in order to update the master RL agent model using the first information gain and first set of RL model weights produced from training. Doing so would acquire system-level information and train the algorithms for agents in a central controller which is executed distributively in the network, and may do so without sharing agent information directly with others (Hu, Introduction). Claims 11 and 20 are similar to claim 2, hence similarly rejected.

Regarding claim 3, the combination of McMahan, Shen, and Hu teaches the method according to claim 1, further comprising: receiving an updated master RL agent model (McMahan, Page 13, Col. 8, Lines 51-55 – “As indicated above, server 104 can receive each local update from client device 102, and can aggregate the local updates to determine a global update to the model 106. In some implementations, server 104 can determine an average (e.g., a weighted average) of the local updates and determine the global update based at least in part on the average” – teaches receiving an updated master agent model (determines global update to agent models based on weighted average of local updates)).

McMahan fails to explicitly teach training the updated master RL agent model on another graph. However, analogous to the field of the claimed invention, Shen teaches: training the updated master RL agent model on another graph (Shen, Algorithm 1 and Algorithm 2 – teaches training the updated master RL agent model on another graph (RL agent model trains on graph, returns update, update is averaged and sent back to RL agent, process repeats until convergence, thus training the updated master RL agent model on another graph)).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the training of the updated master RL agent model on another graph of Shen to further modify the receiving of the updated master model of McMahan, Shen, and Hu in order to receive an updated master RL agent model and train the model on another graph. Doing so would enable learning network representations via interacting with a network while maximizing accumulated rewards of a reinforcement learning agent (Shen, Introduction). Claims 12 and 21 are similar to claim 3, hence similarly rejected.

Regarding claim 5, the combination of McMahan, Shen, and Hu teaches the method according to claim [[4]] 1, further comprising: updating the master RL agent model based on the first information gain, the first set of RL model weights, the second information gain and the second set of RL model weights, thereby generating an updated master RL agent model (McMahan, Page 10, Col. 4, Lines 54-57 – “Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates.” – teaches transmitting the first and second information gain and first and second sets of model weights to a central server (each client sends updates back to server) to update the master agent model based on the first and second information gain and the first and second sets of model weights (global update computed by aggregating all client-side updates)).
McMahan fails to explicitly teach updating the master RL agent model based on the first and second information gain. However, analogous to the field of the claimed invention, Hu teaches: updating the master RL agent model based on the first information gain and the second information gain, thereby generating an updated master RL agent model (Hu, Section VIII Subsection A Paragraph 3 – “In this way, the overall information gain in F − 1 local update and the following federated update can be written as Eq. (36)” – teaches updating the master RL agent model (federated update) based on the first information gain and the second information gain (based on I*,env which is the information gain of each agent and their respective local environment), thereby generating an updated master RL agent model (federated update)).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the updating of the master RL agent model based on the first information gain and the second information gain of Hu to further modify the updating of the master agent model based on the first and second sets of weights of McMahan, Shen, and Hu in order to update a master RL agent model of a federated learning environment based on first and second sets of weights and first and second information gain. Doing so would acquire system-level information and train the algorithms for agents in a central controller which is executed distributively in the network, and may do so without sharing agent information directly with others (Hu, Introduction). Claims 14 and 23 are similar to claim 5, hence similarly rejected.

Regarding claim 7, the combination of McMahan, Shen, and Hu teaches the method of claim [[4]] 1, further comprising: combining the first set of RL model weights and the second set of RL model weights to generate a combined set of RL model weights (McMahan, Page 10, Col. 4, Lines 49-63 – “Some or all of these clients independently update the model based on their local data. The updated local models are W_t^1, W_t^2, . . . , W_t^n. Let the updates be: H_t^i = W_t^i − W_t, i ∈ S_t. Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates.” – teaches combining the first set of model weights and the second set of model weights (client-side updates) to generate a combined set of model weights (aggregating all client-side updates)). Claims 16 and 25 are similar to claim 7, hence similarly rejected.

Regarding claim 8, the combination of McMahan, Shen, and Hu teaches the method according to claim 7, further comprising: updating the master RL agent model by applying the combined set of RL model weights (McMahan, Page 10, Col. 4, Lines 49-63 – “Some or all of these clients independently update the model based on their local data. The updated local models are W_t^1, W_t^2, . . . , W_t^n. Let the updates be: H_t^i = W_t^i − W_t, i ∈ S_t. Each client then sends the update back to the server, where the global update is computed by aggregating all the client-side updates.” – teaches updating the master agent model by applying the combined set of model weights (global update for model is computed by aggregating all the client-side updates, thus the global update is determined by applying the combined set of model weights)). Claims 17 and 26 are similar to claim 8, hence similarly rejected.

Regarding claim 9, the combination of McMahan, Shen, and Hu teaches the method according to claim [[4]] 1, further comprising: updating the master RL agent model by applying a weighted average based on the first information gain and the second information gain (McMahan, Page 13, Col. 8, Lines 51-55 – “As indicated above, server 104 can receive each local update from client device 102, and can aggregate the local updates to determine a global update to the model 106. In some implementations, server 104 can determine an average (e.g., a weighted average) of the local updates and determine the global update based at least in part on the average” – teaches updating the master agent model by applying a weighted average based on the first information gain and second information gain (weighted average of the local updates)).

McMahan fails to explicitly teach updating the master RL agent model based on the first and second information gain. However, analogous to the field of the claimed invention, Hu teaches: updating the master RL agent model based on the first information gain and the second information gain (Hu, Section VIII Subsection A Paragraph 3 – “In this way, the overall information gain in F − 1 local update and the following federated update can be written as Eq. (36)” – teaches updating the master RL agent model (federated update) based on the first information gain and the second information gain (based on I*,env which is the information gain of each agent and their respective local environment)).

Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate the updating of the master RL agent model based on the first information gain and the second information gain of Hu to further modify the updating of the master agent model based on the weighted average of local updates of McMahan, Shen, and Hu in order to update a master RL agent model of a federated learning environment based on a weighted average of first and second information gain.
Doing so would acquire system-level information and train the algorithms for agents in a central controller which is executed distributively in the network, and may do so without sharing agent information directly with others (Hu, Introduction). Claims 18 and 27 are similar to claim 9, hence similarly rejected.

Regarding claim 28, the combination of McMahan, Shen, and Hu teaches the system according to claim [[22]] 19, wherein the first agent server and the second agent server are on different systems (McMahan, Page 13, Col. 9, Lines 11-12 – “The server 210 can be implemented using one server device or a plurality of server devices.” – teaches wherein the first agent server and the second agent server are on different systems (implemented using plurality of server devices)).

Regarding claim 29, the combination of McMahan, Shen, and Hu teaches the system according to claim [[22]] 19, wherein the first agent server and the second agent server are on the same system (McMahan, Page 13, Col. 9, Lines 11-12 – “The server 210 can be implemented using one server device or a plurality of server devices.” – teaches wherein the first agent server and the second agent server are on the same system (implemented using one server device)).

Response to Arguments

Applicant’s arguments, see pp. 7-15 of Remarks, filed 16 November 2025, with respect to the rejection(s) of claim(s) 1, 11, and 20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made over McMahan in view of Shen et al. (NPL: Learning Network Representation Through Reinforcement Learning, published April 2020, hereinafter “Shen”), and further in view of Hu et al. (NPL: A Scalable Federated Multi-agent Architecture for Networked Connected Communication Network, published 1 Aug. 2021, hereinafter “Hu”).
McMahan teaches “receiving a first instance of a master RL agent model; receiving a second instance of the master RL agent model;”, “wherein each of the first software stack of the first client and the second software stack of the second client are siloed;” and “transmitting the first information gain, [[and]] the first set of RL model weights, the second information gain and the second RL model weights to a central server…”. Shen teaches “a master reinforcement learning RL agent model;”, “training the first instance of the master RL agent model on a first graph…”, “training the second instance of the master RL agent model on a second graph…”, and “update the master RL agent model”. Hu teaches “generating a first information gain…;”, “generating a second information gain…;”, and “transmitting the first information gain and the second information gain to a central server…”.

Applicant argues on p. 8 of Remarks that McMahan does not teach siloed software stacks. Examiner respectfully disagrees and points to Pg. 9 Col. 2 Lines 21-33 of McMahan – “User data that is privacy sensitive remains at the user's computing device and is not uploaded to the server. Instead, only the less sensitive model update is transmitted.”. The Specification of the claimed invention at [0008] states that “Consequently, data silos mean that the datasets in each silo are unconnected. Similarly, the software stacks that support silos, such as with regard to machine learning inferencing, embedding generation, path generation (e.g., based on the inferencing), and the logic that that is used to send such data to a graphics processing unit (GPU) server is siloed as well. That is, there is no crosstalk between client software stacks, which as explained above makes it difficult or impossible to work with each other.” and the Specification at [0064] states that “Since model weights are sent in transit (and appropriately encrypted) no client graph data is sent or compromised. Only learned information for the RL agent is sent in the transfer”. McMahan teaches that client data remains at the computing device and only the less sensitive model update is transferred instead, which is equivalent to the data silos with no crosstalk and only learned information for the RL agent is sent in transfer.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Huang et al. (NPL: Scalable Orchestration of Service Function Chains in NFV-Enabled Networks: A Federated Reinforcement Learning Approach, published July 2021) teaches a federated reinforcement learning system for orchestration of service function chains. It teaches a master RL agent model that distributes a model to RL agents that train on local data. The RL agents send a parameter update to the master RL agent model, and the master RL agent model performs a weighted average of the RL agent model parameter updates based on certain solutions, such as weighting the parameter updates from the RL agents based on a proportion to their state-action rewards.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LOUIS C NYE whose telephone number is 571-272-0636. The examiner can normally be reached Monday - Friday 9:00AM - 5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATT ELL, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LOUIS CHRISTOPHER NYE/
Examiner, Art Unit 2141

/MATTHEW ELL/
Supervisory Patent Examiner, Art Unit 2141
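The claim 9 combination discussed in the rejection (McMahan's weighted-average aggregation driven by Hu's per-agent information gain) can be sketched as follows. Neither cited reference gives this exact formula; the gain-proportional mixing weights are an illustrative assumption used only to show the shape of the computation.

```python
import numpy as np

def aggregate_by_info_gain(client_weights, info_gains):
    # Weighted average of client RL model weights, with each client's
    # contribution scaled by its reported information gain (claim 9 reading).
    # Only weights and scalar gains reach the server; client graph data stays siloed.
    gains = np.asarray(info_gains, dtype=float)
    alphas = gains / gains.sum()          # normalize gains into mixing weights
    stacked = np.stack(client_weights)    # shape: (n_clients, *weight_shape)
    return np.tensordot(alphas, stacked, axes=1)
```

With two clients reporting gains 1.0 and 3.0, the second client's weights contribute three times as much to the updated master model as the first's.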

Prosecution Timeline

Jul 29, 2022 — Application Filed
Jul 14, 2025 — Non-Final Rejection — §103
Nov 16, 2025 — Response Filed
Feb 23, 2026 — Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12524683
METHOD FOR PREDICTING REMAINING USEFUL LIFE (RUL) OF AERO-ENGINE BASED ON AUTOMATIC DIFFERENTIAL LEARNING DEEP NEURAL NETWORK (ADLDNN)
2y 5m to grant • Granted Jan 13, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 22%
With Interview: 58% (+35.7%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
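The projection figures above appear to combine the examiner's career allow rate (2 granted / 9 resolved) with the observed interview lift. The dashboard does not disclose its exact model, so the additive arithmetic below is a hypothetical reconstruction, not the tool's actual method.

```python
# Hypothetical reconstruction of the dashboard's projection arithmetic.
base_grant_probability = 2 / 9                 # 2 granted of 9 resolved ~ 22%
interview_lift = 0.357                         # lift in resolved cases with interview
with_interview = base_grant_probability + interview_lift  # ~ 58%

print(f"{base_grant_probability:.0%} base, {with_interview:.0%} with interview")
```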
