Prosecution Insights
Last updated: April 19, 2026
Application No. 18/099,320

SYSTEM AND METHOD FOR TRAINING FEDERATED LEARNING MODEL

Final Rejection §103

Filed: Jan 20, 2023
Examiner: SPRATT, BEAU D
Art Unit: 2143
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Accenture Global Solutions Limited
OA Round: 2 (Final)

Grant Probability: 79% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% (342 granted / 432 resolved; +24.2% vs TC avg; above average)
Interview Lift: +26.6% among resolved cases with interview (strong)
Avg Prosecution: 3y 1m (typical timeline)
Currently Pending: 37
Total Applications: 469 (across all art units)

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 432 resolved cases.
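The headline figures above follow directly from the career counts. A minimal sketch of the arithmetic (variable names are illustrative; the Tech Center average is implied by the reported delta, not stated directly):

```python
# Reproduce the dashboard's headline examiner statistics from the raw counts.
granted = 342        # career grants
resolved = 432       # career resolved cases (grants + abandonments)
vs_tc_avg = 24.2     # reported delta vs Tech Center average, in points

allow_rate = 100 * granted / resolved        # career allowance rate
tc_avg_estimate = allow_rate - vs_tc_avg     # implied TC-average allow rate

print(f"Career allow rate: {allow_rate:.1f}%")   # ~79.2%, shown as 79%
print(f"Implied TC average: {tc_avg_estimate:.1f}%")
```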

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment filed 01/20/2026 has been entered. Claims 1-20 remain pending in the application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-7 and 9-20 are rejected under 35 U.S.C. 103 as being unpatentable over XIN et al. (US 20230083982 B2, hereinafter Xin) in view of Prakash et al. (US 20190220703 A1, hereinafter Prakash) and KANG et al. (US 20230090731 A1, hereinafter Kang).

As to independent claim 1, Xin teaches a method for training a federated learning model in a federated learning network comprising a server and a plurality of clients, the server maintaining a global federated learning model, the plurality of clients separately maintaining decentralized data sources, the data sources separately storing training datasets for the global federated learning model, the method comprising: [training for federated learning models ¶182-185, ¶385]

receiving, with a processor circuitry in communication with a client, the global federated learning model from the server via the client, the client controlling remote computing resources; [Fig. 3 illustrates server and clients that receive models for training from server; ¶196 "each client node retains a same model (which may be from the server node or may be obtained by locally personalizing based on the server node) for local inference"]

identifying, with the processor circuitry, a spare computing instance from the remote computing resources; [identifies clients with load less than a load threshold (spare) ¶270, ¶337-338 "selects a client NWDAF whose Load is less than the preset load threshold"]

in response to a processing capacity of the spare computing instance being sufficient to process the threshold training load, offloading, with the processor circuitry, the threshold training load to the spare computing instance; and [selects the client for performing client-side learning (offloading) according to load (capacity to process) ¶338 "selects a client NWDAF whose Load is less than the preset load threshold to perform horizontal federated learning"] [capability information is also known about the clients ¶376-377]

training the global federated learning model on the spare computing instance with the training dataset stored in a data source maintained by the client.
[performs horizontal federated learning (training) ¶338; client (third element) based training data ¶277 "training based on data obtained by the third data analytics network element"]; configurable thresholds based on iterations [¶195, ¶285, ¶296-297 "[t]raining is terminated when the quantity of iterations reaches a maximum quantity of iterations"]

Xin does not specifically teach determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load.

However, Prakash teaches determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load; [determines load partitions (subsets of thresholds) based on nodes (client) ¶89 "determines load partitions based on the operational parameters and a load balancing policy to ensure the same epoch time or nearly the same epoch times for individual edge compute nodes 2101 to accomplish their individual partial gradient calculation. At operation 212, the master node 2112 provides the partitioned training datasets to respective edge compute nodes 2101"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the distributed training disclosed by Xin by incorporating the determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load, as disclosed by Prakash, because both techniques address the same field of federated learning, and incorporating Prakash into Xin helps reduce the computation and time needed to update models [Prakash ¶4, ¶18].

Xin and Prakash do not particularly teach wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client.
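To make the disputed limitation concrete, the threshold-and-offload logic recited in claim 1 can be sketched as follows. This is an illustrative reading of the claim language only, not any party's implementation; the function names and the epoch-based capacity model are assumptions:

```python
import math

def threshold_training_load(assigned_epochs: int, fraction: float = 0.5) -> int:
    """Pick a threshold training load as a subset of the assigned load.

    Per the claim language, the threshold is at least 50% of the epochs
    assigned to the client; the exact policy here is illustrative.
    """
    if not 0.5 <= fraction <= 1.0:
        raise ValueError("claim recites a floor of 50% of assigned epochs")
    return math.ceil(fraction * assigned_epochs)

def maybe_offload(assigned_epochs: int, spare_capacity_epochs: int) -> int:
    """Offload the threshold load only if the spare instance can absorb it.

    Returns the number of epochs offloaded (0 if capacity is insufficient).
    """
    threshold = threshold_training_load(assigned_epochs)
    return threshold if spare_capacity_epochs >= threshold else 0

# e.g. 10 assigned epochs -> threshold of 5; offloads only when capacity >= 5
```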
However, Kang teaches wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client; [early stopping threshold based on epochs ¶45, ¶11 "training the first local artificial intelligence model and the second local artificial intelligence model by means of a local epoch and early stopping which exceed the predetermined reference number of times."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning management disclosed by Xin and Prakash by incorporating the limitation wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client, as disclosed by Kang, because all techniques address the same field of machine learning, and incorporating Kang into Xin and Prakash helps preserve data privacy while identifying data [Kang ¶5-6].

As to dependent claim 2, the rejection of claim 1 is incorporated. Xin, Prakash and Kang further teach in response to the spare computing instance completing the threshold training load, obtaining a partially trained model update corresponding to the threshold training load. [Xin: clients train sub-models/gradients accordingly and return results (completed) to server ¶385-386]

As to dependent claim 3, the rejection of claim 2 is incorporated. Xin, Prakash and Kang further teach in response to the spare computing instance becoming unavailable after completing the threshold training load or the processing capacity of the spare computing instance being insufficient to process a remainder of the assigned training load, transmitting the partially trained model update to the server, where the partially trained model update is aggregated into an updated global federated learning model on the server. [Xin aggregation module for updated model ¶385-386 "server NWDAF may perform, by using a model aggregation module in the server NWDAF, weighted average aggregation on sub-models reported by all target clients NWDAF participating in horizontal federated training, to obtain an updated model."]

As to dependent claim 4, the rejection of claim 1 is incorporated. Xin, Prakash and Kang further teach in response to the spare computing instance completing the threshold training load and the spare computing instance being still available, offloading a remainder of the assigned training load to the spare computing instance; and [Prakash opportunistic offloading (when available, offload) ¶79 "task offloading may be “opportunistic”"] in response to the spare computing instance completing the remainder of the assigned training load, [Prakash nodes compute their assigned data/tasks (complete) as partial gradients ¶89 "computational tasks (compute partial gradients) to the respective edge compute nodes 2101 for calculating output data, such as partial gradients when the underlying ML algorithm is a GD algorithm."] obtaining a completely trained model update corresponding to the assigned training load, and [Prakash partial gradients are completed model updates ¶89] transmitting the completely trained model update to the server, where the completely trained model update is aggregated into an updated global federated learning model on the server. [Prakash partial gradients get aggregated ¶89, Fig. 2, 224]

As to dependent claim 6, the rejection of claim 1 is incorporated. Xin, Prakash and Kang further teach the identifying the spare computing instance comprises: [Prakash available edge compute nodes ¶43] identifying a plurality of spare computing instances from the remote computing resources; and [Prakash available edge compute nodes ¶43] the offloading the training of the global federated learning model to the spare computing instance comprises: [Prakash offloading for training ¶44] in response to an aggregate processing capacity of the plurality of spare computing instances being sufficient to process the threshold training load, dividing the threshold training load among the plurality of spare computing instances based on processing capacities of the plurality of spare computing instances, and offloading the divided threshold training load to the plurality of spare computing instances respectively. [Prakash load balancing partitions load (training data) based on processing capability ¶43 "load balancing policy may calculate estimated processing capability or processing rate based on the particular type or types of operational parameters"]

As to dependent claim 7, the rejection of claim 6 is incorporated. Xin, Prakash and Kang further teach obtaining candidate spare computing instances from the remote computing resources; and [Prakash shortlist of candidates ¶44] selecting the plurality of spare computing instances from the candidate spare computing instances based on instance types of the candidate spare computing instances. [Prakash load balances based on type of node (speed, temp, condition) ¶47 "evaluating both computation and communication resources needed for different offloading opportunities. The threshold criteria or a desired level of reliability mentioned previously may be based on a certain amount or type of compute node capabilities"]

As to dependent claim 9, the rejection of claim 7 is incorporated. Xin, Prakash and Kang further teach where each of the plurality of spare computing instances has a different instance type. [Prakash heterogeneous nodes (different) and types ¶47, ¶39 "a heterogeneous environment because collaborating nodes have disparate operational parameters, including different device/system capabilities and different operational contexts and/or constraints"]

As to dependent claim 10, the rejection of claim 7 is incorporated. Xin, Prakash and Kang further teach selecting the plurality of spare computing instances from the candidate spare computing instances based on consumption metrics. [Prakash consumption ¶44 and bandwidth ¶48]

As to dependent claim 11, the rejection of claim 6 is incorporated. Xin, Prakash and Kang further teach in response to the plurality of spare computing instances collectively completing the threshold training load, obtaining trained model updates corresponding to the threshold training load from the plurality of spare computing instances respectively and averaging the trained model updates as a partially trained model update corresponding to the threshold training load.
[Xin weighted average of aggregation of sub-models (updates) ¶386]

As to dependent claim 12, the rejection of claim 6 is incorporated. Xin, Prakash and Kang further teach in response to the plurality of spare computing instances collectively completing the threshold training load and at least one of the plurality of spare computing instances being still available, [Prakash partitions/redistributes iteratively through workloads ¶4, ¶30] dividing a remainder of the assigned training load among the at least one spare computing instance based on processing capacities of the at least one spare computing instance, [Prakash load balances based on type of node (speed, temp, condition) ¶47 "evaluating both computation and communication resources needed for different offloading opportunities. The threshold criteria or a desired level of reliability mentioned previously may be based on a certain amount or type of compute node capabilities"] offloading the divided remainder of the assigned training load to the at least one spare computing instance respectively. [Prakash partitions/redistributes iteratively through workloads ¶4, ¶30]

As to dependent claim 13, the rejection of claim 12 is incorporated. Xin, Prakash and Kang further teach in response to the at least one spare computing instance collectively completing the remainder of the assigned training load, [Xin clients go through different rounds to collectively complete model ¶360-361] obtaining trained model updates corresponding to the assigned training load from the plurality of spare computing instances respectively, and [Xin server receives sub-models from clients ¶358-360] averaging the trained model updates as a completely trained model update corresponding to the assigned training load. [Xin weighted average of aggregation of sub-models (updates) ¶386]

As to dependent claim 14, the rejection of claim 6 is incorporated. Xin, Prakash and Kang further teach in response to one of the plurality of spare computing instances getting unavailable during training, [Prakash reassignment ¶217; fail, redundancy ¶251] obtaining remaining training load that the spare computing instance fails to complete, [Prakash partitions/redistributes iteratively through workloads ¶4, ¶30] dividing the remaining training load among others of the plurality of spare computing instances, and [Prakash load balances based on type of node (speed, temp, condition) ¶47 "evaluating both computation and communication resources needed for different offloading opportunities. The threshold criteria or a desired level of reliability mentioned previously may be based on a certain amount or type of compute node capabilities"] offloading the divided training load to the others of the plurality of spare computing instances respectively. [Prakash partitions/redistributes iteratively through workloads ¶4, ¶30]

As to dependent claim 15, the rejection of claim 6 is incorporated. Xin, Prakash and Kang further teach in response to all of the plurality of spare computing instances becoming unavailable during training or an aggregate processing capacity of available spare computing instances in the plurality of spare computing instances is insufficient to process an uncompleted training load, selecting and instantiating additional spare computing instances to complete the uncompleted training load. [Prakash reassignment ¶217; fail, redundancy ¶251]

As to dependent claim 16, the rejection of claim 1 is incorporated. Xin, Prakash and Kang further teach where a plurality of clients are selected by the server to participate in training the global federated learning model, and the client is one of the plurality of clients.
[Prakash shortlist of clients (nodes) ¶44]

As to dependent claim 17, the rejection of claim 1 is incorporated. Xin, Prakash and Kang further teach obtaining training parameters for training the federated learning model from the client. [Xin receive information about second elements for training ¶53-54 "receive information about one or more second data analytics network elements from the service discovery network element, where the second data analytics network element supports the type of distributed learning requested by the first data analytics network element. [0054] In a possible implementation, the processing unit is configured to determine, based on the information about the one or more second data analytics network elements, information about a third data analytics network element that performs distributed learning, where there is one or more third data analytics network elements."]

As to independent claim 19, Xin teaches a system for training a federated learning model in a federated learning network comprising a server and a plurality of clients, the server maintaining a global federated learning model, the plurality of clients separately maintaining decentralized data sources, the data sources separately storing training datasets for the global federated learning model, the system comprising: [training for federated learning models ¶182-185, ¶385] a memory having stored thereon executable instructions; [memory and code ¶421] a processor circuitry in communication with the memory, the processor circuitry when executing the instructions configured to: [processor ¶418]

receive the global federated learning model from the server via the client, the client controlling remote computing resources; [Fig. 3 illustrates server and clients that receive models for training from server; ¶196 "each client node retains a same model (which may be from the server node or may be obtained by locally personalizing based on the server node) for local inference"]

identify a spare computing instance from the remote computing resources; [identifies clients with load less than a load threshold (spare) ¶270, ¶337-338 "selects a client NWDAF whose Load is less than the preset load threshold"]

in response to a processing capacity of the spare computing instance being sufficient to process the threshold training load, offloading, with the processor circuitry, the threshold training load to the spare computing instance; and [selects the client for performing client-side learning (offloading) according to load (capacity to process) ¶338 "selects a client NWDAF whose Load is less than the preset load threshold to perform horizontal federated learning"] [capability information is also known about the clients ¶376-377]

train the global federated learning model on the spare computing instance with the training dataset stored in a data source maintained by the client. [performs horizontal federated learning (training) ¶338; client (third element) based training data ¶277 "training based on data obtained by the third data analytics network element"]; configurable thresholds based on iterations [¶195, ¶285, ¶296-297 "[t]raining is terminated when the quantity of iterations reaches a maximum quantity of iterations"]

Xin does not specifically teach determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load.

However, Prakash teaches determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load; [determines load partitions (subsets of thresholds) based on nodes (client) ¶89 "determines load partitions based on the operational parameters and a load balancing policy to ensure the same epoch time or nearly the same epoch times for individual edge compute nodes 2101 to accomplish their individual partial gradient calculation. At operation 212, the master node 2112 provides the partitioned training datasets to respective edge compute nodes 2101"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the distributed training disclosed by Xin by incorporating the determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load, as disclosed by Prakash, because both techniques address the same field of federated learning, and incorporating Prakash into Xin helps reduce the computation and time needed to update models [Prakash ¶4, ¶18].

Xin and Prakash do not particularly teach wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client.
However, Kang teaches wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client; [early stopping threshold based on epochs ¶45, ¶11 "training the first local artificial intelligence model and the second local artificial intelligence model by means of a local epoch and early stopping which exceed the predetermined reference number of times."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning management disclosed by Xin and Prakash by incorporating the limitation wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client, as disclosed by Kang, because all techniques address the same field of machine learning, and incorporating Kang into Xin and Prakash helps preserve data privacy while identifying data [Kang ¶5-6].

As to independent claim 20, Xin teaches a product for training a federated learning model in a federated learning network comprising a server and a plurality of clients, the server maintaining a global federated learning model, the plurality of clients separately maintaining decentralized data sources, the data sources separately storing training datasets for the global federated learning model, the product comprising: [training for federated learning models ¶182-185, ¶385] non-transitory machine-readable media; and instructions stored on the machine-readable media, the instructions configured to, when executed, cause a processor circuitry to: [media and code ¶421 and processor ¶418] a processor circuitry in communication with the memory, the processor circuitry when executing the instructions configured to: [processor ¶418]

receive the global federated learning model from the server via the client, the client controlling remote computing resources; [Fig. 3 illustrates server and clients that receive models for training from server; ¶196 "each client node retains a same model (which may be from the server node or may be obtained by locally personalizing based on the server node) for local inference"]

identify a spare computing instance from the remote computing resources; [identifies clients with load less than a load threshold (spare) ¶270, ¶337-338 "selects a client NWDAF whose Load is less than the preset load threshold"]

in response to a processing capacity of the spare computing instance being sufficient to process the threshold training load, offloading, with the processor circuitry, the threshold training load to the spare computing instance; and [selects the client for performing client-side learning (offloading) according to load (capacity to process) ¶338 "selects a client NWDAF whose Load is less than the preset load threshold to perform horizontal federated learning"] [capability information is also known about the clients ¶376-377]

train the global federated learning model on the spare computing instance with the training dataset stored in a data source maintained by the client.
[performs horizontal federated learning (training) ¶338; client (third element) based training data ¶277 "training based on data obtained by the third data analytics network element"]; configurable thresholds based on iterations [¶195, ¶285, ¶296-297 "[t]raining is terminated when the quantity of iterations reaches a maximum quantity of iterations"]

Xin does not specifically teach determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load.

However, Prakash teaches determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load; [determines load partitions (subsets of thresholds) based on nodes (client) ¶89 "determines load partitions based on the operational parameters and a load balancing policy to ensure the same epoch time or nearly the same epoch times for individual edge compute nodes 2101 to accomplish their individual partial gradient calculation. At operation 212, the master node 2112 provides the partitioned training datasets to respective edge compute nodes 2101"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the distributed training disclosed by Xin by incorporating the determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load, as disclosed by Prakash, because both techniques address the same field of federated learning, and incorporating Prakash into Xin helps reduce the computation and time needed to update models [Prakash ¶4, ¶18].

Xin and Prakash do not particularly teach wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client.
However, Kang teaches wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client; [early stopping threshold based on epochs ¶45, ¶11 "training the first local artificial intelligence model and the second local artificial intelligence model by means of a local epoch and early stopping which exceed the predetermined reference number of times."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning management disclosed by Xin and Prakash by incorporating the limitation wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client, as disclosed by Kang, because all techniques address the same field of machine learning, and incorporating Kang into Xin and Prakash helps preserve data privacy while identifying data [Kang ¶5-6].

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Xin in view of Prakash and Kang, as applied in claim 1 above, and further in view of SABELLA et al. (US 20180183855 A1, hereinafter Sabella).

As to dependent claim 5, the rejection of claim 1 is incorporated. Xin, Prakash and Kang further teach in response to the processing capacity of the spare computing instance being insufficient to process the threshold training load, [Xin: when above the threshold, clients do not get selected ¶337-338] refraining from training the global federated learning model, and [Xin: only selected clients perform the learning ¶337-338]

Xin, Prakash and Kang do not specifically teach reporting a failure of training the global federated learning model on the remote computing resources to the server.
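For orientation, the claim 5 fallback (refrain from training and report a failure when spare capacity is insufficient) can be sketched alongside the claim 1 offload decision. The names and the logging mechanism below are hypothetical illustrations of the claim language, not drawn from any cited reference:

```python
# Illustrative sketch of the claim 5 fallback: if the spare instance cannot
# absorb the threshold load, refrain from training and report a failure.

def train_or_report(threshold_load: int, spare_capacity: int, server_log: list) -> str:
    if spare_capacity >= threshold_load:
        # Capacity sufficient: offload and train on the spare instance (claim 1).
        return "offloaded"
    # Capacity insufficient: refrain from training and report the failure
    # to the server (claim 5).
    server_log.append({"event": "training_failure",
                       "reason": "insufficient capacity"})
    return "failure_reported"

log = []
assert train_or_report(5, 8, log) == "offloaded"
assert train_or_report(5, 2, log) == "failure_reported"
```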
However, Sabella teaches reporting a failure of training the global federated learning model on the remote computing resources to the server. [reporting faults (failure) about the resources ¶82 "The VI manager 332 may also collect and report performance and fault information about the virtualized resources, and perform application relocation when supported."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning management disclosed by Xin, Prakash and Kang by incorporating the reporting of a failure of training the global federated learning model on the remote computing resources to the server, as disclosed by Sabella, because all techniques address the same field of offloading processing, and incorporating Sabella into Xin, Prakash and Kang alleviates issues with network congestion and performance [Sabella ¶25].

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Xin in view of Prakash and Kang, as applied in claim 7 above, and further in view of Doshi et al. (US 11456914 B2, hereinafter Doshi).

As to dependent claim 8, the rejection of claim 7 is incorporated. Xin, Prakash and Kang do not specifically teach in response to a spare computing instance belonging to a specific instance type being selected, decreasing a selection priority of other spare computing instances belonging to the specific instance type.

However, Doshi teaches in response to a spare computing instance belonging to a specific instance type being selected, decreasing a selection priority of other spare computing instances belonging to the specific instance type. [selects based on anti-affinity (decreases same type) Col. 4, ln. 14-36 "selecting the node 110 according to constraints" … "example constraint may be a “degree of anti-affinity” with respect to another pod"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the learning management disclosed by Xin, Prakash and Kang by incorporating the decreasing of a selection priority of other spare computing instances belonging to the specific instance type, as disclosed by Doshi, because all techniques address the same field of orchestrating processing, and incorporating Doshi into Xin, Prakash and Kang handles the more complex user requirement of tasks having constraints [Doshi Col. 2, ln. 62-67 and Col. 4, ln. 20-29].

Response to Arguments

Applicant's arguments filed 01/20/2026 have been considered. In the remarks, applicant argues that: (1) Xin and Prakash fail to teach "determining, with the processor circuitry, a threshold training load for training the global federated learning model based on a training load assigned to the client, the threshold training load being a subset of the assigned training load, wherein the training load assigned to the client is n epochs and the threshold training load is at least 50% of the T epochs associated to the training load assigned to the client;" See Prakash ¶89.

As to point (1), applicant's arguments with respect to claim 1 have been considered but are moot in view of a new ground of rejection made under 35 U.S.C. 103 as being unpatentable over Xin in view of Prakash and Kang as set forth above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Ding et al. (US 20220300618 A1) trains in a set of epochs and may finish early based on a threshold (see ¶25).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEAU SPRATT whose telephone number is (571) 272-9919. The examiner can normally be reached M-F 8:30-5 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 212-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /BEAU D SPRATT/Primary Examiner, Art Unit 2143

Prosecution Timeline

Jan 20, 2023
Application Filed
Nov 07, 2025
Non-Final Rejection — §103
Jan 20, 2026
Response Filed
Mar 04, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715
Cementing Lab Data Validation based On Machine Learning
2y 5m to grant · Granted Apr 07, 2026
Patent 12596955
REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA
2y 5m to grant · Granted Apr 07, 2026
Patent 12596956
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS
2y 5m to grant · Granted Apr 07, 2026
Patent 12561464
CATALYST 4 CONNECTIONS
2y 5m to grant · Granted Feb 24, 2026
Patent 12561606
TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION
2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.6%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
