DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on November 7, 2025, has been entered.
Response to Amendment and Arguments
1. Claims 1-20 are pending and are being examined in this application.
2. In light of Applicant's amendments to the claims, the rejection under 35 U.S.C. § 101 is withdrawn.
3. Applicant's arguments with respect to the rejection under 35 U.S.C. § 102 have been considered, but are moot in view of the new ground(s) of rejection. However, to the extent that the arguments also apply to the new ground(s) of rejection, the arguments are unpersuasive for at least the following reasons:
Applicant argues that the determining of frequencies of uploading local model parameters recited in the claims “relates to frequencies of uploading from the edge nodes to the server (i.e., information from the edge nodes), rather than the edge nodes receiving instructions from the server (i.e., information to the edge nodes)” [Remarks dated 11/07/2025, pg. 10].
However, paragraphs 264, 273-277, 306, and 310 of Akdeniz disclose the claim limitation “determine frequencies of uploading local model parameters for the plurality of edge nodes based on the assigned training steps,” and paragraphs 273-277, 306, and 310 of Akdeniz disclose the claim limitation “receive local model parameters from the one or more of the plurality of edge nodes based on the determined frequencies” [Final Rejection, pg. 16].
Paragraph 264 discloses selecting and scheduling the clients for training based on total update time (i.e., compute rates and communication times). Paragraphs 273-277 disclose that the clients receive an initial global model from the server, the clients perform local training, and then the clients share model weights with the server for updating of the global model. Paragraph 306 discloses ensuring that clients do not go for long periods of time without model updates. This means scheduling the model updates with at least a minimum frequency, which entails determining the frequencies of the model updates. Paragraph 310 discloses selecting a set of K clients to perform local training during each local training iteration of each global training epoch, where each client is assigned a specific amount of training.
The above paragraphs describe “update” and “upload” as referring to the clients (i.e., edge nodes) sharing model updates to the server. Thus, Akdeniz clearly teaches that the determining of frequencies of uploading local model parameters recited in the claims relates to frequencies of uploading from the edge nodes to the server. See also paragraph 317 of Akdeniz for further clarification that uploading/updating refers to uploading/updating to the server: “The MEC server may sort clients into sets based on their similar times to upload, which time to upload takes into consideration compute time and time to communicate the updated model weights back to the server.”
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Akdeniz et al. (US Pub. 20230068386) in view of Zhang et al. (US Pub. 20220138550).
Referring to claim 1, Akdeniz discloses A system comprising: a controller [fig. 12; pars. 127 and 263; a server] programmed to:
obtain information about a computation resource in each of a plurality of edge nodes [fig. 12; pars. 127 and 264; the server requests clients (i.e., edge nodes) to share their respective compute rates and communication times in order to estimate the total update time for each client], wherein the plurality of edge nodes are associated with a plurality of vehicles, and wherein the plurality of edge nodes comprise heterogeneous edge nodes [pars. 1018-1039; note edge computing system supporting vehicle-to-vehicle, vehicle-to-everything, or vehicle-to-infrastructure scenarios and comprising various types of edge nodes] that differ in local data set size and the computation resource... [pars. 167 and 193; note local batchsize and various compute parameters for each client].
assign training steps to the plurality of edge nodes based on the information about the computation resource [par. 264; the clients are scheduled for training based on the total update time (i.e., based on respective compute rates)];
determine frequencies of uploading local model parameters for the plurality of edge nodes based on the assigned training steps [pars. 264, 273-277, 306, and 310; the server determines frequencies of model updates, ensuring that the clients are scheduled for training with at least a minimum frequency], wherein the frequencies are associated with at least one of a respective number of epochs for each of the plurality of edge nodes or respective training steps per epoch [pars. 264, 273-277, 306, and 310; note the determined frequencies for the model updates and the scheduling of the clients for training (for each round/epoch); the number of training iterations per round/epoch is determined based on the number of training examples at each client];
receive local model parameters from one or more of the plurality of edge nodes based on the determined frequencies [pars. 273-277, 306, and 310; the model weights are sent by the clients to the server based on the determined frequencies]; and
update a global model based on the received local model parameters [pars. 122 and 273-277; a global model is updated based on the model weights].
Akdeniz implicitly discloses wherein the computation resource corresponds to a power of at least one of a central processing unit or a graphical processing unit [par. 264; note that a compute rate, by definition, measures how fast a computer performs calculations, while computing power, by definition, is a computer’s ability to process data and execute instructions quickly; thus, the respective compute rates of the clients would correspond to computing power of the clients].
However, Zhang explicitly discloses wherein the computation resource corresponds to a power of at least one of a central processing unit or a graphical processing unit [fig. 9; pars. 43 and 55; a neural network is trained by blockchain peers; if some peers become more powerful (e.g., have more available processing power), the training of a larger portion of the neural network is assigned to these peers; note that processing power refers to processing power of processors (e.g., a CPU) and GPUs].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling of training taught by Akdeniz so that the compute rates correspond to processing power as taught by Zhang, with a reasonable expectation of success. The motivation for doing so would have been to allow the training to be adjusted flexibly to best fit the current status of the clients [Zhang, par. 43].
Referring to claim 2, Akdeniz discloses The system of claim 1, wherein the controller is further programmed to: obtain a size of training data in each of the plurality of edge nodes [par. 263; the server polls each client to have the client share the number of training examples available to it]; determine a weight for each of the plurality of edge nodes based on the size of training data [pars. 122, 123, and 273-277; the training (i.e., for learning underlying model parameters such as the model weights) is performed based on the number of training examples (i.e., k)]; and update the global model by averaging the received local parameters using the weights [pars. 122 and 273-277; the server calculates the average of the model weights to update the global model].
Referring to claim 3, Akdeniz discloses The system of claim 1, wherein the controller is further programmed to: determine a time for implementing a predetermined number of training steps in each of the plurality of edge nodes based on the information about the computation resource; and assign training steps per epoch to the plurality of edge nodes based on the times for implementing the predetermined number of training steps [pars. 264 and 310; note the scheduling of the training (for each round/epoch) based on the total update time].
Referring to claim 4, Akdeniz discloses The system of claim 3, wherein the controller is further programmed to: determine the frequencies of uploading the local model parameters for the plurality of edge nodes based on the assigned training steps per epoch and a training step threshold; and instruct the plurality of edge nodes to upload the local model parameters based on the frequencies [pars. 264, 273-277, 306, and 310; note the determined frequencies for the model updates and the scheduling of the clients for training (for each round/epoch); the number of training iterations per round/epoch is determined based on the number of training examples at each client].
Referring to claim 5, Akdeniz discloses The system of claim 1, wherein the controller is further programmed to: transmit parameters of the updated global model to the one or more of the plurality of edge nodes [pars. 273-277; the server propagates the global model weights to the clients].
Referring to claim 6, Akdeniz discloses The system of claim 1, wherein the controller is further programmed to: determine whether local model parameters from two or more edge nodes are received during a single epoch; and in response to determining that local model parameters from two or more edge nodes are received during the single epoch: update the global model based on the local model parameters received from the two or more edge nodes; and transmit parameters of the updated global model to the two or more edge nodes [pars. 264, 273-277, and 310; note the scheduling of the training (for each round/epoch) based on the total update time; also note the sending of the model weights from the clients (plural) to the server to update the global model].
Referring to claim 7, Akdeniz discloses The system of claim 1, wherein the controller is further programmed to: determine whether local model parameters from two or more edge nodes are received during a single epoch; and in response to determining that local model parameters from less than two edge nodes are received during the single epoch, hold transmitting parameters of the global model to any of the plurality of edge nodes [pars. 186, 187, 308, 429, 548, and 600; if the model weights are received from less than an expected number of clients (e.g., two) during an optimized epoch time, the server first accounts for the missing model updates (by mimicking the local training at the clients from which the model updates were not received) and re-selects the clients before propagating the global model weights to the clients].
Referring to claim 8, Akdeniz discloses The system of claim 1, wherein the plurality of edge nodes include at least one of a connected vehicle or an edge server [par. 39; the clients may include autonomous vehicles].
Referring to claim 9, Akdeniz discloses The system of claim 1, wherein the local model parameters received from the one or more of the plurality of edge nodes are compressed parameters [pars. 194, 436, and 437; the server may receive compressed/encoded data (e.g., the model weights) from the clients].
Referring to claim 10, see the rejection for claim 1, which addresses the corresponding method.
Referring to claim 11, see the rejection for claim 2.
Referring to claim 12, see the rejection for claim 3.
Referring to claim 13, see the rejection for claim 4.
Referring to claim 14, see the rejection for claim 5.
Referring to claim 15, see the rejection for claim 6.
Referring to claim 16, see the rejection for claim 7.
Referring to claim 17, Akdeniz discloses A vehicle comprising: a controller [fig. 12; par. 39; an edge node (i.e., client) may be located in an autonomous vehicle] programmed to:
transmit information about a computation resource of the vehicle to a server... [fig. 12; pars. 127 and 264; a server requests clients to share their respective compute rates and communication times in order to estimate the total update time for each client];
receive a frequency of uploading local model parameters of a model for image processing [par. 200; training is performed for modeling complex relationships in problems such as image recognition] from the server [pars. 264, 273-277, 306, and 310; clients are scheduled for the training based on the total update time; the server determines frequencies of model updates, ensuring that the clients are scheduled for training with at least a minimum frequency], wherein the frequency is associated with at least one of a number of epochs or training steps per epoch [pars. 264, 273-277, 306, and 310; note the determined frequencies for the model updates and the scheduling of the clients for training (for each round/epoch); the number of training iterations per round/epoch is determined based on the number of training examples at each client];
upload the local model parameters of the model based on the frequency to the server [pars. 273-277, 306, and 310; the model weights are sent by the clients to the server based on the determined frequencies];
receive a global model updated based on the local model parameters of the model from the server [pars. 273-277; the server propagates the global model weights to the clients]; and
implement processing of images captured by the vehicle using the received global model [pars. 119, 122, and 200; training is performed for modeling complex relationships in problems such as image recognition; once the global model is trained, the global model is used to perform machine learning tasks (e.g., image recognition)].
Akdeniz implicitly discloses wherein the computation resource corresponds to a power of at least one of a central processing unit or a graphical processing unit [par. 264; note that a compute rate, by definition, measures how fast a computer performs calculations, while computing power, by definition, is a computer’s ability to process data and execute instructions quickly; thus, the respective compute rates of the clients would correspond to computing power of the clients].
However, Zhang explicitly discloses wherein the computation resource corresponds to a power of at least one of a central processing unit or a graphical processing unit [fig. 9; pars. 43 and 55; a neural network is trained by blockchain peers; if some peers become more powerful (e.g., have more available processing power), the training of a larger portion of the neural network is assigned to these peers; note that processing power refers to processing power of processors (e.g., a CPU) and GPUs].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the scheduling of training taught by Akdeniz so that the compute rates correspond to processing power as taught by Zhang, with a reasonable expectation of success. The motivation for doing so would have been to allow the training to be adjusted flexibly to best fit the current status of the clients [Zhang, par. 43].
Referring to claim 18, Akdeniz discloses The vehicle of claim 17, wherein the controller is programmed to: compress the local model parameters of the model; and upload the compressed local model parameters to the server [pars. 194, 436, and 437; the server may receive compressed/encoded data (e.g., the model weights) from the clients].
Referring to claim 19, Akdeniz discloses The vehicle of claim 18, wherein the controller is programmed to: compress the local model parameters of the model using quantization or sparsification [par. 227; the encoding involves a sparse generator matrix using a Bernoulli distribution].
Referring to claim 20, Akdeniz discloses The vehicle of claim 17, wherein the controller is programmed to: transmit a size of training data in the vehicle to the server [par. 263; the server polls each client to have the client share the number of training examples available to it]; and receive a global model updated based on the local model parameters of the model and the size of training data from the server [pars. 122, 123, and 273-277; the training (i.e., for learning underlying model parameters such as the model weights) is performed based on the number of training examples (i.e., k); the server calculates the average of the model weights to update the global model].
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE PARK whose telephone number is (571)270-7727. The examiner can normally be reached M-F 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TAMARA KYLE, can be reached at (571)272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Grace Park/Primary Examiner, Art Unit 2144