Prosecution Insights
Last updated: April 19, 2026
Application No. 18/181,513

METHOD AND SYSTEM FOR PREDICTING A DELAY FOR A FLIGHT OF AN AIRCRAFT

Status: Non-Final OA (§103)
Filed: Mar 09, 2023
Examiner: GRUSZKA, DANIEL PATRICK
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Boeing Company
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Avg Prosecution (typical timeline): 3y 3m
Total Applications (career history, across all art units): 32 (32 currently pending)

Statute-Specific Performance

§101: 38.3% (-1.7% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 12.0% (-28.0% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)
Tech Center averages are estimates; based on career data from 0 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-11, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sidahmed (US 2023/0214642 A1) in view of Guo (NPL: ‘Research on Flight Delay Prediction Based on Horizontal and Vertical Federated Learning Framework’).
Regarding claim 1, Sidahmed teaches: receiving, at the central machine learning model from the one or more edge computing devices, a neural network gain between the first prediction and an organization-specific prediction … generated by the version of the central machine learning model trained with the organization-specific data, the neural network gain being based on a weighted difference between the first prediction and the organization-specific prediction and an actual …; and ([0102] “The updates 220 may include information indicative of the updated trainable parameters. The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204). In some examples, the updates 220 may include an update term, a corresponding weight, and/or a corresponding learning rate, and the server may determine therewith an updated version of the corresponding trainable parameter. Communications between the server 204 and the client devices 204 can be encrypted or otherwise rendered private.” And [0099] “In some implementations, updated parameters are provided to the server 204 by a plurality of client devices 202, and the respective updated parameters are summed across the plurality of client devices 202. 
The sum for each of the updated parameters may then be divided by a corresponding sum of weights for each parameter as provided by the clients to form a set of weighted average updated parameters.”)

updating the central machine learning model based on the neural network gain from the one or more edge computing devices to generate a revised prediction… ([0114] “At 316, method 300 can include the server computing device aggregating the local updates from the plurality of client devices.” And [0115] “At 318, method 300 can include the server computing device updating the global model based on the aggregation.”)

Sidahmed does not teach that the data relates to aircraft delays, nor does it teach: A method for predicting a delay for a flight of an aircraft from a departure airport to an arrival airport, the method comprising: accessing, by one or more processors in communication with a non-transitory computer readable medium having executable instructions therein, a plurality of data points comprising historical flights from the departure airport to the arrival airport, building a central machine learning model for predicting the delay for the flight of the aircraft using the data points, the central machine learning model built using a machine learning algorithm and one or more of the plurality of data points; generating, by the central machine learning model, a first prediction of the delay for the flight of the aircraft; distributing a version of the central machine learning model along with the first prediction of the delay to one or more edge computing devices, each of the one or more edge computing devices being associated with an organization that operates the aircraft, and being in communication with a data store comprising organization-specific data usable by the one or more edge computing devices associated with the organization to train the version of the central machine learning model received thereby with the organization-specific data;

However, Guo does
teach these: A method for predicting a delay for a flight of an aircraft from a departure airport to an arrival airport, the method comprising: (Abstract) accessing, by one or more processors in communication with a non-transitory computer readable medium having executable instructions therein, a plurality of data points comprising historical flights from the departure airport to the arrival airport, (Section IV. Experiments “After analyzing the data attribute values, we finally get the attributes used for flight delay prediction, which mainly include date, departure airport number, destination airport number, planned landing time.”) building a central machine learning model for predicting the delay for the flight of the aircraft using the data points, the central machine learning model built using a machine learning algorithm and one or more of the plurality of data points; (Section III. Construction of flight delay prediction model based on federated learning “Initially, the server first initializes to obtain a global model.”) generating, by the central machine learning model, a first prediction of the delay for the flight of the aircraft; (Section III. 
Construction of flight delay prediction model based on federated learning “Our goal is to train a model with the best prediction effect for the flight delay prediction task and infer the value of the predicted label xi by inputting the feature value ω ∈ ℝd in the training sample and determining the model parameter vector y.”)

distributing a version of the central machine learning model along with the first prediction of the delay to one or more edge computing devices, each of the one or more edge computing devices being associated with an organization that operates the aircraft, and being in communication with a data store comprising organization-specific data usable by the one or more edge computing devices associated with the organization to train the version of the central machine learning model received thereby with the organization-specific data; (Section III. Construction of flight delay prediction model based on federated learning “Then each airport client downloads the initial global model from the server and uses its local data to train the model.”)

Sidahmed and Guo are considered analogous art to the claimed invention because they are in the same field of endeavor, being federated learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and models of Sidahmed with the aircraft delay prediction model of Guo. One would want to do this for a collaborative approach to aircraft delay prediction.

Regarding claim 4, Sidahmed in view of Guo teaches claim 1 as outlined above.
Sidahmed further teaches: the central machine learning model is a deep neural network (DNN) model containing a first number of layers and neurons, the first number of layers and neurons corresponding to the plurality of data points, ([0062] “For example, the machine-learned models 120 can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models.”) wherein the method further comprises training the DNN model using the first number of layers and neurons corresponding to the plurality of data points. ([0119] “At operation 402, the method can include determining, by a server computing device, a first set of training parameters from a plurality of parameters of the global model. The plurality of parameters of the global model can include the first set of training parameters and a set of frozen parameters.”)

Regarding claim 5, Sidahmed in view of Guo teaches claim 4 as outlined above. Sidahmed further teaches: the version of the central machine learning model includes the DNN model further trained by the one or more edge computing devices using the organization-specific data, wherein the DNN model further trained by the one or more edge computing devices using the organization-specific data includes the first number of layers and neurons and a second number of layers and neurons corresponding to the organization-specific data. ([0127] “At operation 406, the method can include transmitting, respectively to a plurality of client computing devices, the first set of training parameters and the random seed. The set of frozen parameters can be reconstructed from the random seed by the plurality of client computing devices using the random number generator.” The frozen parameters imply that some layers/neurons are not to be trained; thus the client computing devices only train some of the parameters and/or layers/neurons.)
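For orientation, the parameter-freezing mechanism quoted from Sidahmed at [0091] and [0127] can be sketched as follows. This is an illustrative simplification, not the reference's actual implementation: the function name, the uniform initialisation, and the parameter values are assumptions.

```python
import random

def reconstruct_frozen(seed, n_frozen):
    """Rebuild frozen parameter values from a shared random seed.

    Per the quoted passages, only the trainable parameters and the seed
    travel over the network; the frozen values are regenerated locally
    by each client. The uniform draw here is an illustrative assumption.
    """
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n_frozen)]

# Server side: send only the trainable parameters plus the seed.
seed = 7
trainable = [0.25, -0.40, 0.90]        # parameters the clients will update
frozen_server = reconstruct_frozen(seed, n_frozen=5)

# Client side: the same seed reproduces the same frozen values,
# so the frozen parameters never need to be transmitted.
frozen_client = reconstruct_frozen(seed, n_frozen=5)
assert frozen_client == frozen_server
```

The point relied on in the rejection is only that a client holding the seed can deterministically rebuild the frozen set, leaving the remaining parameters as the subset it actually trains.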
Regarding claim 6, Sidahmed in view of Guo teaches claim 5 as outlined above. Sidahmed further teaches: the DNN model is further trained using the first number of layers and neurons corresponding to the plurality of data points and the second number of layers and neurons corresponding to the organization-specific data. ([0127] “At operation 406, the method can include transmitting, respectively to a plurality of client computing devices, the first set of training parameters and the random seed. The set of frozen parameters can be reconstructed from the random seed by the plurality of client computing devices using the random number generator.” The frozen parameters imply that some layers/neurons are not to be trained; thus the client computing devices only train some of the parameters and/or layers/neurons.)

Regarding claim 7, Sidahmed in view of Guo teaches claim 1 as outlined above. Sidahmed further teaches: receiving, by the central machine learning model, edge neural network gains from two or more edge computing devices, ([0102] “The updates 220 may include information indicative of the updated trainable parameters. The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204). In some examples, the updates 220 may include an update term, a corresponding weight, and/or a corresponding learning rate, and the server may determine therewith an updated version of the corresponding trainable parameter.
Communications between the server 204 and the client devices 204 can be encrypted or otherwise rendered private.”) determining an accuracy of the organization-specific prediction generated by each of the two or more edge computing devices by determining a difference between the organization-specific prediction and the actual delay by the aircraft; and ([0140] “At operation 504, the method can include determining whether the performance value exceeds a threshold value. In some instances, the performance value exceeds the threshold value when an accuracy percentage of the global model is reduced by a specific margin after the modification of the one or more global parameters of the global model, which may result in performance degradation.”) assigning, by the one or more processors, a different weight to each of the edge neural network gains received from the two or more edge computing devices based on the difference, wherein each of the different weights is assigned to a corresponding one of the two or more edge computing devices in descending order where a highest weight is assigned to the edge computing device that produced the organization-specific prediction with a smallest difference, and a lowest weight is assigned to the edge computing device that produced the organization-specific prediction with a largest difference, ([0099] “The sum for each of the updated parameters may then be divided by a corresponding sum of weights for each parameter as provided by the clients to form a set of weighted average updated parameters. In some implementations, updated parameters are provided to the server 204 by a plurality of client devices 202, and the respective updated parameters scaled by their respective weights are summed across the plurality of clients to provide a set of weighted average updated parameters. 
In some examples, the weights may be correlated to a number of local training iterations or epochs so that more extensively trained updates contribute in a greater amount to the updated parameter version.”) wherein the neural network gain is calculated using a weighted average of the edge neural network gains received from the two or more edge computing devices based on the different weights assigned to the two or more edge computing devices; and ([0102] “The updates 220 may include information indicative of the updated trainable parameters. The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204). In some examples, the updates 220 may include an update term, a corresponding weight, and/or a corresponding learning rate, and the server may determine therewith an updated version of the corresponding trainable parameter. Communications between the server 204 and the client devices 204 can be encrypted or otherwise rendered private.”) wherein updating the central machine learning model based on the neural network gain comprises training the central machine learning model to generate the revised prediction using the neural network gain calculated using the weighted average. ([0102] “The updates 220 may include information indicative of the updated trainable parameters. The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204). In some examples, the updates 220 may include an update term, a corresponding weight, and/or a corresponding learning rate, and the server may determine therewith an updated version of the corresponding trainable parameter. 
Communications between the server 204 and the client devices 204 can be encrypted or otherwise rendered private.” And [0099] “In some implementations, updated parameters are provided to the server 204 by a plurality of client devices 202, and the respective updated parameters are summed across the plurality of client devices 202. The sum for each of the updated parameters may then be divided by a corresponding sum of weights for each parameter as provided by the clients to form a set of weighted average updated parameters.”)

Regarding claim 8, Sidahmed in view of Guo teaches claim 7 as outlined above. Sidahmed further teaches: executing multiple training iterations to train the version of the central machine learning model, wherein at a first training iteration, a random neural network gain is applied to the version of the central machine learning model, and a first iteration of the organization-specific prediction is generated based on the random neural network gain; and ([0069] “Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations.” And [0091] “The server device 204 can be configured to access machine learning model 206, and to provide trainable parameters 210 of model 206 and a random seed 212 associated with non-trainable parameters (e.g., frozen parameters) to a plurality of client devices 202.”) at subsequent training iterations, updating the random neural network gain to obtain the corresponding edge neural network gain of the edge computing device and applying the corresponding edge neural network gain to the version of the central machine learning model, and generating subsequent iterations of the organization-specific prediction based on the corresponding edge neural network gain.
([0092] “Client devices 202 can each be configured to determine updates 220 to one or more trainable parameters associated with model 206 based at least in part on training data 208, the trainable parameters 210, and the random seed 212.”)

Regarding claim 9, Sidahmed in view of Guo teaches claim 8 as outlined above. Guo further teaches: performing a comparison of the first iteration of the organization-specific prediction to the actual delay of the flight of the aircraft; and (Section IV. Experiments “the sum of the "product of actual and predicted numbers" corresponding to all categories divided by the "total number of samples" square”) updating the random neural network gain to the corresponding edge neural network gain based on the comparison. (Section II Propaedeutics “The server decrypts the received gradient value and loss with the private key, updates the global model parameters, and sends the updated results to the participants, who use the gradient information sent by the server to update the model parameters, respectively.”)

Regarding claim 10, Sidahmed in view of Guo teaches claim 9 as outlined above. Sidahmed further teaches: receiving the edge neural network gains comprises receiving each of the corresponding edge neural network gains from the one or more edge computing devices by the central machine learning model. ([0102] “The updates 220 may include information indicative of the updated trainable parameters. The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204).”)
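Claim 7 recites a descending, accuracy-based weighting: the edge device whose organization-specific prediction lands closest to the actual delay receives the highest weight, and the aggregated neural network gain is the weighted average of the edge gains. Neither the claim language quoted above nor the cited Sidahmed passages fix a particular weight formula, so the rank-based scheme below is an illustrative assumption:

```python
def aggregate_edge_gains(edge_gains, predictions, actual_delay):
    """Weight each edge device's gain by prediction accuracy.

    Smaller |prediction - actual| yields a larger weight; weights are
    assigned in descending rank order and normalised to sum to 1.
    This rank-based formula is a hypothetical illustration of the
    claim 7 weighting, not the applicant's or Sidahmed's actual one.
    """
    errors = [abs(p - actual_delay) for p in predictions]
    order = sorted(range(len(errors)), key=lambda i: errors[i])
    n = len(errors)
    weights = [0.0] * n
    for rank, i in enumerate(order):
        # Best predictor (rank 0) gets weight n / (n(n+1)/2), worst gets 1 / (n(n+1)/2).
        weights[i] = (n - rank) / (n * (n + 1) / 2)
    # Weighted average of the edge gains.
    return sum(w * g for w, g in zip(weights, edge_gains))

# Three edge devices report gains; device 1 predicted closest to the
# actual 15-minute delay, so its gain dominates the aggregate.
gain = aggregate_edge_gains(
    edge_gains=[0.10, 0.30, 0.20],
    predictions=[18.0, 15.5, 25.0],   # minutes of predicted delay
    actual_delay=15.0,
)
```

The design choice mirrors the claim's requirement only in ordering (smallest difference gets the highest weight, largest difference the lowest); any monotone decreasing weight assignment would satisfy the same structure.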
Regarding claim 11, Sidahmed teaches: one or more processors in communication with a non-transitory computer readable medium having executable instructions therein, wherein, upon execution of the executable instructions, the one or more processors are configured to: ([0158] “The client device can include one or more processors, and one or more non-transitory computer-readable media that collectively store a set of local data and instructions. The instructions, when executed, can cause the one or more processors to perform the operations described in method 700.”) receive, at the central machine learning model from the one or more edge computing devices, a neural network gain between the first prediction and an organization-specific prediction … generated by the version of the central machine learning model trained with the organization-specific data, the neural network gain being based on a weighted difference between the first prediction and the organization-specific prediction and an actual …; and ([0102] “The updates 220 may include information indicative of the updated trainable parameters. The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204). In some examples, the updates 220 may include an update term, a corresponding weight, and/or a corresponding learning rate, and the server may determine therewith an updated version of the corresponding trainable parameter. Communications between the server 204 and the client devices 204 can be encrypted or otherwise rendered private.” And [0099] “In some implementations, updated parameters are provided to the server 204 by a plurality of client devices 202, and the respective updated parameters are summed across the plurality of client devices 202. 
The sum for each of the updated parameters may then be divided by a corresponding sum of weights for each parameter as provided by the clients to form a set of weighted average updated parameters.”)

update the central machine learning model based on the neural network gain from the one or more edge computing devices to generate a revised prediction… ([0114] “At 316, method 300 can include the server computing device aggregating the local updates from the plurality of client devices.” And [0115] “At 318, method 300 can include the server computing device updating the global model based on the aggregation.”)

Sidahmed does not teach that the data relates to aircraft delays, nor does it teach: A system for predicting a delay for a flight of an aircraft from a departure airport to an arrival airport access a plurality of data points comprising historical flights from the departure airport to the arrival airport; build a central machine learning model for predicting the delay for the flight of the aircraft using the data points, the central machine learning model built using a machine learning algorithm and one or more of the plurality of data points; generate, using the central machine learning model, a first prediction of the delay for the flight of the aircraft; distribute a version of the central machine learning model along with the first prediction of the delay to one or more edge computing devices, each of the one or more edge computing devices being associated with an organization that operates the aircraft, and being in communication with a data store comprising organization-specific data usable by the one or more edge computing devices associated with the organization to train the version of the central machine learning model received thereby with the organization-specific data;

However, Guo does teach these: A system for predicting a delay for a flight of an aircraft from a departure airport to an arrival airport: (Abstract) access a plurality of data
points comprising historical flights from the departure airport to the arrival airport; (Section IV. Experiments “After analyzing the data attribute values, we finally get the attributes used for flight delay prediction, which mainly include date, departure airport number, destination airport number, planned landing time.”) build a central machine learning model for predicting the delay for the flight of the aircraft using the data points, the central machine learning model built using a machine learning algorithm and one or more of the plurality of data points; (Section III. Construction of flight delay prediction model based on federated learning “Initially, the server first initializes to obtain a global model.”) generate, using the central machine learning model, a first prediction of the delay for the flight of the aircraft; (Section III. Construction of flight delay prediction model based on federated learning “Our goal is to train a model with the best prediction effect for the flight delay prediction task and infer the value of the predicted label xi by inputting the feature value ω ∈ ℝd in the training sample and determining the model parameter vector y.”) distribute a version of the central machine learning model along with the first prediction of the delay to one or more edge computing devices, each of the one or more edge computing devices being associated with an organization that operates the aircraft, and being in communication with a data store comprising organization-specific data usable by the one or more edge computing devices associated with the organization to train the version of the central machine learning model received thereby with the organization-specific data; (Section III. 
Construction of flight delay prediction model based on federated learning “Then each airport client downloads the initial global model from the server and uses its local data to train the model.”)

Sidahmed and Guo are considered analogous art to the claimed invention because they are in the same field of endeavor, being federated learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and models of Sidahmed with the aircraft delay prediction model of Guo. One would want to do this for a collaborative approach to aircraft delay prediction.

Regarding claim 14, Sidahmed in view of Guo teaches claim 1 as outlined above. Sidahmed further teaches: the central machine learning model is a deep neural network (DNN) model containing a first number of layers and neurons, the first number of layers and neurons corresponding to the plurality of data points, ([0062] “For example, the machine-learned models 120 can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models.”) wherein the one or more processors are configured to train the DNN model using the first number of layers and neurons corresponding to the plurality of data points. ([0119] “At operation 402, the method can include determining, by a server computing device, a first set of training parameters from a plurality of parameters of the global model. The plurality of parameters of the global model can include the first set of training parameters and a set of frozen parameters.”)

Regarding claim 15, Sidahmed in view of Guo teaches claim 4 as outlined above.
Sidahmed further teaches: the version of the central machine learning model includes the DNN model further trained by the one or more edge computing devices using the organization-specific data, wherein the DNN model further trained by the one or more edge computing devices using the organization-specific data includes the first number of layers and neurons and a second number of layers and neurons corresponding to the organization-specific data. ([0127] “At operation 406, the method can include transmitting, respectively to a plurality of client computing devices, the first set of training parameters and the random seed. The set of frozen parameters can be reconstructed from the random seed by the plurality of client computing devices using the random number generator.” The frozen parameters imply that some layers/neurons are not to be trained; thus the client computing devices only train some of the parameters and/or layers/neurons.)

Regarding claim 16, Sidahmed in view of Guo teaches claim 5 as outlined above. Sidahmed further teaches: the DNN model is further trained using the first number of layers and neurons corresponding to the plurality of data points and the second number of layers and neurons corresponding to the organization-specific data. ([0127] “At operation 406, the method can include transmitting, respectively to a plurality of client computing devices, the first set of training parameters and the random seed. The set of frozen parameters can be reconstructed from the random seed by the plurality of client computing devices using the random number generator.” The frozen parameters imply that some layers/neurons are not to be trained; thus the client computing devices only train some of the parameters and/or layers/neurons.)

Regarding claim 17, Sidahmed in view of Guo teaches claim 1 as outlined above.
Sidahmed further teaches: receive, at the central machine learning model, edge neural network gains from two or more edge computing devices, ([0102] “The updates 220 may include information indicative of the updated trainable parameters. The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204). In some examples, the updates 220 may include an update term, a corresponding weight, and/or a corresponding learning rate, and the server may determine therewith an updated version of the corresponding trainable parameter. Communications between the server 204 and the client devices 204 can be encrypted or otherwise rendered private.”) each of the one or more edge computing devices configured to determine an accuracy of the organization-specific prediction generated by each of the two or more edge computing devices by determining a difference between the organization-specific prediction and the actual delay by the aircraft; and ([0140] “At operation 504, the method can include determining whether the performance value exceeds a threshold value. 
In some instances, the performance value exceeds the threshold value when an accuracy percentage of the global model is reduced by a specific margin after the modification of the one or more global parameters of the global model, which may result in performance degradation.”) responsive to receiving the difference, the one or more processors configured to assign a different weight to each of the edge neural network gains received from the two or more edge computing devices based on the difference, wherein each of the different weights is assigned to a corresponding one of the two or more edge computing devices in descending order where a highest weight is assigned to the edge computing device that produced the organization-specific prediction with a smallest difference, and a lowest weight is assigned to the edge computing device that produced the organization-specific prediction with a largest difference, ([0099] “The sum for each of the updated parameters may then be divided by a corresponding sum of weights for each parameter as provided by the clients to form a set of weighted average updated parameters. In some implementations, updated parameters are provided to the server 204 by a plurality of client devices 202, and the respective updated parameters scaled by their respective weights are summed across the plurality of clients to provide a set of weighted average updated parameters. In some examples, the weights may be correlated to a number of local training iterations or epochs so that more extensively trained updates contribute in a greater amount to the updated parameter version.”) wherein the neural network gain is calculated using a weighted average of the edge neural network gains received from the two or more edge computing devices based on the different weights assigned to the two or more edge computing devices; and ([0102] “The updates 220 may include information indicative of the updated trainable parameters. 
The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204). In some examples, the updates 220 may include an update term, a corresponding weight, and/or a corresponding learning rate, and the server may determine therewith an updated version of the corresponding trainable parameter. Communications between the server 204 and the client devices 204 can be encrypted or otherwise rendered private.”) wherein the one or more processors configured to update the central machine learning model based on the neural network gain comprises the one or more processors configured to train the central machine learning model to generate the revised prediction using the neural network gain calculated using the weighted average. ([0102] “The updates 220 may include information indicative of the updated trainable parameters. The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204). In some examples, the updates 220 may include an update term, a corresponding weight, and/or a corresponding learning rate, and the server may determine therewith an updated version of the corresponding trainable parameter. Communications between the server 204 and the client devices 204 can be encrypted or otherwise rendered private.” And [0099] “In some implementations, updated parameters are provided to the server 204 by a plurality of client devices 202, and the respective updated parameters are summed across the plurality of client devices 202. The sum for each of the updated parameters may then be divided by a corresponding sum of weights for each parameter as provided by the clients to form a set of weighted average updated parameters.”)

Regarding claim 18, Sidahmed in view of Guo teaches claim 7 as outlined above.
Sidahmed further teaches: execute multiple training iterations to train the version of the central machine learning model, wherein at a first training iteration, a random neural network gain is applied to the version of the central machine learning model, and a first iteration of the organization-specific prediction is generated based on the random neural network gain; and ([0069] “Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations.” And [0091] “The server device 204 can be configured to access machine learning model 206, and to provide trainable parameters 210 of model 206 and a random seed 212 associated with non-trainable parameters (e.g., frozen parameters) to a plurality of client devices 202.”) at subsequent training iterations, update the random neural network gain to obtain the corresponding edge neural network gain of the edge computing device and apply the corresponding edge neural network gain to the version of the central machine learning model, and generate subsequent iterations of the organization-specific prediction based on the corresponding edge neural network gain. ([0092] “Client devices 202 can each be configured to determine updates 220 to one or more trainable parameters associated with model 206 based at least in part on training data 208, the trainable parameters 210, and the random seed 212.”)

Regarding claim 19, Sidahmed in view of Guo teaches claim 9 as outlined above. Guo further teaches: perform a comparison of the first iteration of the organization-specific prediction to the actual delay of the flight of the aircraft; and (Section IV. Experiments “the sum of the "product of actual and predicted numbers" corresponding to all categories divided by the "total number of samples" square”) update the random neural network gain to the corresponding edge neural network gain based on the comparison.
(Section II Propaedeutics “The server decrypts the received gradient value and loss with the private key, updates the global model parameters, and sends the updated results to the participants, who use the gradient information sent by the server to update the model parameters, respectively.”)

Regarding claim 20, Sidahmed in view of Guo teaches claim 9 as outlined above. Sidahmed further teaches: the one or more processors being configured to receive the edge neural network gains comprises the one or more processors being further configured to receive each of the corresponding edge neural network gains from the one or more edge computing devices at the central machine learning model. ([0102] “The updates 220 may include information indicative of the updated trainable parameters. The updates 220 may include the locally updated trainable parameters (e.g., the updated parameters or a difference between the updated parameter and the previous parameter received from the server 204).”)

Claims 2-3 and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Sidahmed in view of Guo and Klein (NPL: ‘Airport delay prediction using weather-impacted traffic index (WITI) model’).

Regarding claim 2, Sidahmed in view of Guo teaches claim 1 as outlined above. Sidahmed and Guo do not teach using environmental factors in aircraft delay prediction. However, Klein does: the plurality of data points includes public data selected from the group consisting of: time of day, airline, location, flight distance, departure and arrival data points, environmental conditions, historical and current departure delay times, and historical and current arrival delay times. (Background Section “The airport delay models were trained using historical traffic and weather data and a variety of regression techniques, and were tested in quasi-prediction mode (post factum) against actual delay data.”).
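For orientation only, the error-ranked weighting and weighted averaging recited in the claim 17 limitations mapped above (Sidahmed [0099]) can be sketched in a few lines of Python: each edge device reports a parameter update ("gain") and the error of its local prediction against the actual delay; the device with the smallest error receives the highest weight, and the central model averages the weight-scaled gains. This sketch is not code from the application or the cited references; every function name and sample value in it is hypothetical.

```python
def assign_descending_weights(errors):
    """Assign weights in descending order of accuracy: the device with the
    smallest prediction error gets the highest weight, the largest error
    the lowest (as the claim 17 limitation recites)."""
    n = len(errors)
    order = sorted(range(n), key=lambda i: errors[i])  # rank 0 = smallest error
    weights = [0.0] * n
    for rank, idx in enumerate(order):
        weights[idx] = float(n - rank)  # n, n-1, ..., 1
    return weights

def weighted_average_gain(edge_gains, weights):
    """Sum the weight-scaled per-device gains and divide by the sum of
    weights, mirroring the weighted-average computation Sidahmed [0099]
    describes."""
    total_w = sum(weights)
    dim = len(edge_gains[0])
    return [sum(w * g[d] for w, g in zip(weights, edge_gains)) / total_w
            for d in range(dim)]

# Hypothetical example: device 0 predicted closer to the actual delay.
errors = [5.0, 20.0]               # |prediction - actual delay|, in minutes
gains = [[1.0, 2.0], [3.0, 6.0]]   # per-device parameter updates
w = assign_descending_weights(errors)   # [2.0, 1.0]
avg = weighted_average_gain(gains, w)   # [(2*1 + 1*3)/3, (2*2 + 1*6)/3]
```

Dividing by the sum of the weights (rather than the device count) is what makes the result a weighted average, so the more accurate device's gain dominates the update to the central model.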
Sidahmed, Guo, and Klein are considered analogous art to the claimed invention because they are in the same field of endeavor, namely prediction models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and models of Sidahmed with the aircraft delay prediction model of Guo and the historical weather data of Klein. One would want to do this to include environmental factors in aircraft delay predictions.

Regarding claim 3, Sidahmed in view of Guo teaches claim 1 as outlined above. Klein further teaches: the organization-specific data includes proprietary data associated with the organization selected from the group consisting of: an actual flight plan of the aircraft, weather along the flight plan, crew schedule for the aircraft, maintenance schedule for the aircraft, airline or organization electronic equipment glitches, airline or organization control decisions, and turnaround resource availability. (Background Section “The airport delay models were trained using historical traffic and weather data and a variety of regression techniques, and were tested in quasi-prediction mode (post factum) against actual delay data.”).

Regarding claim 12, Sidahmed in view of Guo teaches claim 11 as outlined above. Sidahmed and Guo do not teach using environmental factors in aircraft delay prediction. However, Klein does: the plurality of data points includes public data selected from the group consisting of: time of day, airline, location, flight distance, departure and arrival data points, environmental conditions, historical and current departure delay times, and historical and current arrival delay times. (Background Section “The airport delay models were trained using historical traffic and weather data and a variety of regression techniques, and were tested in quasi-prediction mode (post factum) against actual delay data.”).
Sidahmed, Guo, and Klein are considered analogous art to the claimed invention because they are in the same field of endeavor, namely prediction models. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system and models of Sidahmed with the aircraft delay prediction model of Guo and the historical weather data of Klein. One would want to do this to include environmental factors in aircraft delay predictions.

Regarding claim 13, Sidahmed in view of Guo teaches claim 11 as outlined above. Klein further teaches: the organization-specific data includes proprietary data associated with the organization selected from the group consisting of: an actual flight plan of the aircraft, weather along the flight plan, crew schedule for the aircraft, maintenance schedule for the aircraft, airline or organization electronic equipment glitches, airline or organization control decisions, and turnaround resource availability. (Background Section “The airport delay models were trained using historical traffic and weather data and a variety of regression techniques, and were tested in quasi-prediction mode (post factum) against actual delay data.”).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL PATRICK GRUSZKA whose telephone number is (571)272-5259. The examiner can normally be reached M-F 9:00 AM - 6:00 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL GRUSZKA/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Mar 09, 2023: Application Filed
Dec 16, 2025: Non-Final Rejection (§103)
Mar 07, 2026: Interview Requested


Prosecution Projections

Expected OA Rounds: 1-2
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner; grant probability derived from career allow rate.
