Prosecution Insights
Last updated: April 19, 2026
Application No. 18/164,518

SYSTEMS AND METHODS FOR USER-EDGE ASSOCIATION BASED ON VEHICLE HETEROGENEITY FOR REDUCING THE HETEROGENEITY IN HIERARCHICAL FEDERATED LEARNING NETWORKS

Non-Final OA: §103, §112
Filed: Feb 03, 2023
Examiner: GOLAN, MATTHEW BRYCE
Art Unit: 2123
Tech Center: 2100 — Computer Architecture & Software
Assignee: Toyota Motor Engineering & Manufacturing North America, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 0% (At Risk)
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 3m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (grants 0% of cases; 0 granted / 3 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift among resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 36 applications currently pending
Career History: 39 total applications across all art units

Statute-Specific Performance

§101: 27.5% (-12.5% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 23.7% (-16.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 3 resolved cases
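Each "vs TC avg" figure above is the simple difference between the examiner's statute-specific rate and the Tech Center average estimate. Back-solving from the displayed deltas, that estimate works out to 40.0% for all four statutes. A minimal sketch of the arithmetic (variable names are ours, not the report's):

```python
# Statute-specific rates from the table above (percent).
examiner_rate = {"101": 27.5, "103": 37.5, "102": 8.3, "112": 23.7}

# Tech Center average estimates, back-solved from the displayed deltas
# (each delta is consistent with a 40.0% TC average).
tc_average = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

def delta_vs_tc(statute: str) -> float:
    """Examiner rate minus TC average, in percentage points."""
    return round(examiner_rate[statute] - tc_average[statute], 1)
```

For example, `delta_vs_tc("102")` reproduces the -31.7 shown above.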

Office Action

§103 §112
DETAILED ACTION

This communication is in response to Application No. 18/164,518 filed on February 03, 2023, in which claims 1-20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement submitted on 02/10/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement was considered by the examiner.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference characters not mentioned in the description: “124”, “126”, “136”, and “140” in Fig. 1. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference characters in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

The contents of the specification are sufficient for examination purposes.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 7-8, 15-16, and 19-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Regarding Claim 7, the claim recites “strict” (ln. 2), which is a relative term that renders the claim indefinite. The term “strict” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. As a result, it is unclear what “privacy requirements” (ln. 2-3) are sufficient to qualify as “strict”, which results in claimed subject matter without a distinct scope. Therefore, the claim is rejected. The claim should be amended to clarify the meaning of “strict”.

Regarding Claim 8, the claim recites “lenient” (ln. 2), which is a relative term that renders the claim indefinite. The term “lenient” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. As a result, it is unclear what “privacy requirements” (ln. 2-3) are sufficient to qualify as “lenient”, which results in claimed subject matter without a distinct scope. Therefore, the claim is rejected. The claim should be amended to clarify the meaning of “lenient”.

Regarding Claim 15, the claim recites “strict privacy requirements” (ln. 3-4), which is indefinite for substantially the same reasoning as discussed in regard to the rejection of Claim 7. Therefore, the claim is similarly rejected and should be amended in a similar manner.

Regarding Claim 16, the claim recites “lenient privacy requirements” (ln. 3-4), which is indefinite for substantially the same reasoning as discussed in regard to the rejection of Claim 8. Therefore, the claim is similarly rejected and should be amended in a similar manner.

Regarding Claim 19, the claim recites “strict privacy requirements” (ln. 4), which is indefinite for substantially the same reasoning as discussed in regard to the rejection of Claim 7. Therefore, the claim is similarly rejected and should be amended in a similar manner.

Regarding Claim 20, the claim recites “lenient privacy requirements” (ln. 4), which is indefinite for substantially the same reasoning as discussed in regard to the rejection of Claim 8. Therefore, the claim is similarly rejected and should be amended in a similar manner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 9-10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou et al. (hereinafter Zhou) (“Two-Layer Federated Learning With Heterogeneous Model Aggregation for 6G Supported Internet of Vehicles”) in view of Han et al.
(hereinafter Han) (“FedMes: Speeding Up Federated Learning With Multiple Edge Servers”).

Regarding Claim 1, Zhou teaches a method for vehicular assisted hierarchical federated learning, the method comprising (Pg. 5309, Col. 1, Para. 2, “In this study, we propose a two-layer federated learning model based on convolutional neural network (TFL-CNN), which makes use of the local and global contexts of individual vehicles and RSUs to perform hierarchical and heterogeneous model selection and aggregation at the edge and cloud level”; Pg. 5313, Col. 1, Para. 4, “Experiments are conducted and discussed to demonstrate the usefulness and effectiveness of the proposed method comparing with several baseline methods”): responsive to joining a hierarchical federated learning network, obtaining vehicular system conditions of a vehicle (Pg. 5311, Col. 1, Para. 1, “RSUs are the middle brokers to collect and aggregate not only learning parameters but also contextual information, such as vehicle locations and navigation direction, from all the connected vehicles within the coverage to facilitate parameter aggregation”; Pg. 5311, Col. 2, Para. 7, “the vehicular contextual information, such as location and navigation information, can be directly acquired by RSUs via 6G technologies [37] to conduct the proposed weighted aggregation in RSU”, where “vehicular contextual information, such as location and navigation information” is within the broadest reasonable interpretation of vehicular system conditions, which is “acquired by the RSUs” responsive to a vehicle joining the hierarchical federated learning network and being “connected” within the “RSUs” “coverage”); exchanging data between the vehicle and a plurality of edge servers of the hierarchical federated learning network (Pg. 5310, Col. 2, Fig.
1, “Two-Layer Federated Learning Framework in 6G Supported Vehicular Networks”, where the “Two-Layer Federated Learning Framework” is a hierarchical federated learning network, where data is exchanged, as indicated by the lower-level green and blue arrows, between the vehicles, “Vehicles”, and a plurality of edge servers, the plurality of “RSU[s]”, see Pg. 5310-5311, Col. 2-1, Para. 4-1, “In the middle layer, each RSU has limited caching and computing capabilities, and is responsible for supervising all the interconnected vehicles in its coverage . . . RSUs are the middle brokers to collect and aggregate . . . from all the connected vehicles within the coverage . . . RSUs then communicate with the central cloud server”, where “RSUs” are servers because they provide services to network clients, the “connected vehicles”; and where a given “vehicle” exchanges data directly with the “connected” “RSU” and indirectly with the other “RSUs” through the “central cloud server”, see also Pg. 5310, Col. 2, Fig. 1) according to a vehicle-to-edge server association protocol that is based on the vehicular system conditions (Pg. 5310, Col. 2, Fig. 1 and Pg. 5313, Col. 1, Fig. 3, “4: for each data owner vi ∈ V do 5: if vi supervised by rj do . . . 8: Submit the local training parameter wi (t) to rj”, where the exchange of data is based on the vehicle-to-edge server association protocols of clustered “data owner[s] vi” “supervised by rj” “RSU[s]”, where vehicular system conditions, “contextual information, such as vehicle locations”, determine which “RSU” association protocol the “vehicles” will be selected for, to be “connected” to as a supervisor, see Pg. 5311, Col. 1, Para.
1, “RSUs are the middle brokers to collect and aggregate not only learning parameters but also contextual information, such as vehicle locations and navigation direction, from all the connected vehicles within the coverage to facilitate parameter aggregation”); identifying a machine learning model for the vehicle . . . [and providing the identified model parameters to] the vehicle (Pg. 5313, Col. 1, Fig. 3, “13: Calculate the global parameters w(t+1) for model M . . . 15: Broadcast w(t+1) to the network” and Pg. 5311, Col. 1, Para. 1, “the updated model parameters will be dispatched from the central cloud server to RSUs and then to individual vehicles”, where the “updated” “global . . . model M” is identified at the “central cloud server” and provided to the “individual vehicles”); and at least one of: training the identified machine learning model to perform a task using the data acquired by the vehicle to produce a locally trained machine learning model; and applying the data acquired by the vehicle to the identified machine learning model to perform a task (Pg. 5311, Col. 2, Para. 3, “xi is the input samples of a data owner vi, the CNN model, which is introduced to perform an object detection task (e.g., traffic sign recognition, pedestrian detection, or object avoidance), can be represented as the hypothesis h(xi, ω), and trained locally by the data owner vi”; see also Pg. 5311, Col. 1, Para. 4, “any individual data owner vi in this framework can keep its own data di and train the object detection model locally” and Pg. 5311, Col. 1, Para. 2, “each vehicle generates raw data including both the captured photos/videos by built-in camera and the contextual information (e.g., GPS locality data, driving information, etc.). The computation capability of individual vehicle is able to support a relatively light computing task (e.g., training a learning model for object detection or road sign recognition)”). Zhou does not explicitly disclose . . . 
from a plurality of machine learning models hosted on the plurality of edge servers using data acquired by . . . . However, Han teaches . . . [identifying a machine learning model for the client] from a plurality of machine learning models hosted on the plurality of edge servers using data acquired by [the client]. . . (Pg. 3870, Col. 1, Abstract, “in the model-downloading stage, the clients in the overlapping areas receive multiple models from different ESs, take the average of the received models, and then update the averaged model with their local data”, where “clients” receive “multiple models” from the plurality of models hosted by the plurality of “different ESs” and the multiple models are identified, which requires that a machine learning model be identified, based on whether the “clients [are] in the overlapping areas”; Pg. 3872, Col. 2, Para. 3, “We call this region in which the client can reliably communicate with multiple ESs overlapping cell area”, where a “client” is determined to be in an “overlapping cell area” based on “communication” data acquired by the “client” and from “multiple ESs”). Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the identifying a machine learning model for a vehicle and providing the identified model parameters to the vehicle of Zhou with the identifying a machine learning model for a client from a plurality of machine learning models hosted on a plurality of edge servers using acquired client data of Han in order to significantly reduce training time while maintaining model synchronization across edge server clusters by sending and receiving models between multiple vehicles and edge servers (Han, Pg. 3870, Col.
1, Abstract, “the proposed scheme does not require costly communications with the central cloud server (located at the higher tier of edge servers) for model synchronization, significantly reducing the overall training time compared to the conventional cloud-based FL systems. Extensive experimental results show remarkable performance gains of our scheme compared to existing methods”).

Regarding Claim 2, Zhou in view of Han teach the method of claim 1, further comprising: selecting the vehicle-to-edge server association protocol from a plurality of vehicle-to-edge server association protocols using the vehicular system conditions (Zhou, Pg. 5310, Col. 2, Fig. 1 and Zhou, Pg. 5313, Col. 1, Fig. 3, “4: for each data owner vi ∈ V do 5: if vi supervised by rj do . . . 8: Submit the local training parameter wi (t) to rj”, where the exchange of data is based on the vehicle-to-edge server association protocols of clustered “data owner[s] vi” “supervised by rj” “RSU[s]”, where vehicular system conditions, “contextual information, such as vehicle locations”, determine which “RSU” association protocol the “vehicles” will be selected for, to be “connected” to as a supervisor, see Zhou, Pg. 5311, Col. 1, Para. 1, “RSUs are the middle brokers to collect and aggregate not only learning parameters but also contextual information, such as vehicle locations and navigation direction, from all the connected vehicles within the coverage to facilitate parameter aggregation”).

Regarding Claim 9, Zhou teaches a vehicle, comprising: a communication circuit configured to exchange communications with edge servers of a hierarchical federated learning network; a memory storing instructions; and one or more processors communicably coupled to the memory and configured to execute the instructions to (Zhou, Pg. 5310, Col. 2, Fig. 1; Zhou, Pg. 5311, Col. 1, Para.
2, “each vehicle generates raw data including both the captured photos/videos by built-in camera and the contextual information (e.g., GPS locality data, driving information, etc.). The computation capability of individual vehicle is able to support a relatively light computing task (e.g., training a learning model for object detection or road sign recognition)”; Zhou, Pg. 5311, Col. 2, Para. 1, “parameters computed via local training by vi need to be encrypted and then sent to the corresponding RSU which conducts secure aggregation without exposing privacy information about any data owners”, where the “raw data” “generat[ion]” and “training a learning model” require a memory to store program instructions executed by processors; and where communications hardware, which is within the broadest reasonable interpretation of a communication circuit, is required for the “sen[ding]” of “parameters” to “corresponding RSUs”) . . . . The remaining limitations are substantially the same as limitations of Claim 1, therefore it is rejected under the same rationale.

Regarding Claim 10, the additional elements of the dependent claim are substantially the same as limitations of Claim 2, therefore it is rejected under the same rationale.

Regarding Claim 17, Zhou in view of Han teach a server of a hierarchical federated learning network, the server comprising (Zhou, Pg. 5309, Col. 1, Para. 2, “In this study, we propose a two-layer federated learning model based on convolutional neural network (TFL-CNN), which makes use of the local and global contexts of individual vehicles and RSUs to perform hierarchical and heterogeneous model selection and aggregation at the edge and cloud level”; Zhou, Pg. 5310, Col. 2, Fig. 1, “Two-Layer Federated Learning Framework in 6G Supported Vehicular Networks”, where the “Two-Layer Federated Learning Framework” is a hierarchical federated learning network, and where the plurality of “RSU[s]” are servers, see Zhou, Pg. 5310-5311, Col. 2-1, Para.
4-1, “In the middle layer, each RSU has limited caching and computing capabilities, and is responsible for supervising all the interconnected vehicles in its coverage . . . RSUs are the middle brokers to collect and aggregate . . . from all the connected vehicles within the coverage . . . RSUs then communicate with the central cloud server”, where “RSUs” are servers because they provide services to network clients, the “connected vehicles”): a communication circuit configured to exchange communications with at least one vehicle of a hierarchical federated learning network (Zhou, Pg. 5311, Col. 1, Para. 1, “the updated model parameters will be dispatched from the central cloud server to RSUs and then to individual vehicles”, where hardware that is within the broadest reasonable interpretation of a communication circuit is required for “dispatch[ing]” of “updated model parameters” from the “RSUs . . . to individual vehicles”; Zhou, Pg. 5310, Col. 2, Fig. 1, “Two-Layer Federated Learning Framework in 6G Supported Vehicular Networks”, where the “Vehicles” are part of a hierarchical federated learning network); a memory storing instructions and a machine learning model; and one or more processors communicably coupled to the memory and configured to execute the instructions to (Zhou, Pg. 5310-5311, Col. 2-1, Para. 4-1, “In the middle layer, each RSU has limited caching and computing capabilities, and is responsible for supervising all the interconnected vehicles in its coverage . . . RSUs are the middle brokers to collect and aggregate . . . from all the connected vehicles within the coverage”, where the “computing”, “supervising”, and “caching” require a memory storing instructions executed by a processor; Zhou, Pg. 5311, Col. 1, Para.
1, “the updated model parameters will be dispatched from the central cloud server to RSUs and then to individual vehicles”, where the transmission from the “RSUs” “to individual vehicles” requires the memory to store “model parameters”, which, in view of Han, are “models”, see Han, Pg. 3870, Col. 1, Abstract, “in the model-downloading stage, the clients in the overlapping areas receive multiple models from different ESs”): exchange data with the at least one vehicle of the hierarchical federated learning network (Zhou, Pg. 5310, Col. 2, Fig. 1, “Two-Layer Federated Learning Framework in 6G Supported Vehicular Networks”, where the “Two-Layer Federated Learning Framework” is a hierarchical federated learning network, where data is exchanged, as indicated by the lower-level green and blue arrows, between the vehicles, “Vehicles”, and a plurality of edge servers, the plurality of “RSU[s]”, see Zhou, Pg. 5310-5311, Col. 2-1, Para. 4-1, “In the middle layer, each RSU has limited caching and computing capabilities, and is responsible for supervising all the interconnected vehicles in its coverage . . . RSUs are the middle brokers to collect and aggregate . . . from all the connected vehicles within the coverage . . . RSUs then communicate with the central cloud server”) according to a vehicle-to-edge server association protocol selected based on vehicular system conditions of the at least one vehicle (Zhou, Pg. 5310, Col. 2, Fig. 1 and Zhou, Pg. 5313, Col. 1, Fig. 3, “4: for each data owner vi ∈ V do 5: if vi supervised by rj do . . . 8: Submit the local training parameter wi (t) to rj”, where the exchange of data is based on the vehicle-to-edge server association protocols of clustered “data owner[s] vi” “supervised by rj” “RSU[s]”, where vehicular system conditions, “contextual information, such as vehicle locations”, determine which “RSU” association protocol the “vehicles” will be selected for, to be “connected” to as a supervisor, see Zhou, Pg. 5311, Col. 1, Para.
1, “RSUs are the middle brokers to collect and aggregate not only learning parameters but also contextual information, such as vehicle locations and navigation direction, from all the connected vehicles within the coverage to facilitate parameter aggregation”); based on the vehicle-to-edge server association, receive a model trained locally by the at least one vehicle using data acquired by the vehicle (Zhou, Pg. 5310, Col. 2, Fig. 1 and Zhou, Pg. 5313, Col. 1, Fig. 3, “4: for each data owner vi ∈ V do 5: if vi supervised by rj do . . . 8: Submit the local training parameter wi (t) to rj”, where the exchange of data is based on the vehicle-to-edge server association protocols of clustered “data owner[s] vi” “supervised by rj” “RSU[s]”, where vehicular system conditions, “contextual information, such as vehicle locations”, determine which “RSU” a “vehicle” will be “connected” to as a supervisor, see Zhou, Pg. 5311, Col. 1, Para. 1, “RSUs are the middle brokers to collect and aggregate not only learning parameters but also contextual information, such as vehicle locations and navigation direction, from all the connected vehicles within the coverage to facilitate parameter aggregation”; Han, Pg. 3870, Col. 1, Abstract, “in the model-downloading stage, the clients in the overlapping areas receive multiple models from different ESs, take the average of the received models, and then update the averaged model with their local data”; Han, Pg. 3872, Col. 2, Para. 1, “Now each client sends the updated model to the PS, and the PS aggregates the model”); and aggregate the machine learning model and the locally trained model to generate an aggregate machine learning model (Zhou, Pg. 5311, Col. 1, Para. 1, “In the middle layer, each RSU has limited caching and computing capabilities, and is responsible for supervising all the interconnected vehicles in its coverage . . . RSUs are the middle brokers to collect and aggregate . . .
from all the connected vehicles within the coverage . . . RSUs then communicate with the central cloud server”).

The reasons of obviousness have been discussed in regard to the rejection of claim 1 above and remain applicable here.

Claims 3, 5, 11, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Han and Dai et al. (hereinafter Dai) (“Joint Offloading and Resource Allocation in Vehicular Edge Computing and Networks”).

Regarding Claim 3, Zhou in view of Han teach the method of claim 1, wherein vehicular system conditions comprises . . . [location information] of the vehicle . . . and . . . [navigation information] of the vehicle (Zhou, Pg. 5311, Col. 1, Para. 1, “RSUs are the middle brokers to collect and aggregate not only learning parameters but also contextual information, such as vehicle locations and navigation direction, from all the connected vehicles within the coverage to facilitate parameter aggregation”). Zhou in view of Han do not explicitly disclose . . . at least one of: computational resources . . . a privacy requirement settings . . . . However, Dai teaches . . . [a vehicular edge computing method, wherein vehicular system conditions comprises] . . . at least one of: computational resources of the vehicle and a privacy requirement settings of the vehicle (Pg. 2, Col. 2, Para. 7, “Let fi denote the computational resource of vehicle i, which varies for different users and can be obtained through offline measurement [13]”; see generally Pg. 1, Col. 1, Abstract, “Vehicular Edge Computing (VEC) is a new computing paradigm with a high potential to improve vehicular services by offloading computation-intensive tasks to the VEC servers”).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the method for hierarchical federated learning with vehicles and edge servers, wherein vehicular system conditions of a vehicle are used as a basis for association protocols of Zhou in view of Han with the vehicular edge computing method, wherein vehicular system conditions comprise computational resources of the vehicle of Dai in order to utilize the computational resources of VEC servers, while not overloading the VEC servers with tasks that the vehicles have sufficient computational resources to complete (Dai, Pg. 1, Col. 1, Abstract, “Vehicular Edge Computing (VEC) is a new computing paradigm with a high potential to improve vehicular services by offloading computation-intensive tasks to the VEC servers. Nevertheless, as the computation resource of each VEC server is limited, offloading may not be efficient if all vehicles select the same VEC server to offload their tasks. To address this problem, in this paper, we propose offloading with resource allocation. We incorporate the communication and computation to derive the task processing delay. We formulate the problem as a system utility maximization problem, and then develop a low-complexity algorithm to jointly optimize offloading decision and resource allocation. Numerical results demonstrate the superior performance of our Joint Optimization of Selection and Computation (JOSC) algorithm compared to state of the art solutions”), which will allow local training decisions to be based on the specific computational power of the vehicle in the federated learning network (Zhou, Pg. 5309, Col. 1, Para. “With small quantity of data (or low data quality depending on the camera resolution) and varying computational power of individual vehicles, local training is usually limited by its accuracy”). 
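The offloading rationale the Office Action draws from Dai (a task runs locally only if the vehicle's own compute meets the latency bound; otherwise it is offloaded to a VEC server) can be sketched minimally. The function and parameter names below are illustrative assumptions, not Dai's notation:

```python
def should_offload(task_cycles: float, vehicle_cycles_per_s: float,
                   max_latency_s: float) -> bool:
    """Offload when local execution would exceed the maximum allowed
    latency (the constraint Dai's formulation bounds in (8b))."""
    local_latency_s = task_cycles / vehicle_cycles_per_s
    return local_latency_s > max_latency_s
```

For example, a 2-gigacycle task on a 1 GHz vehicle processor misses a 1 s deadline and is offloaded, while a 0.5-gigacycle task runs locally.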
Regarding Claim 5, Zhou in view of Han and Dai teach the method of claim 1, further comprising: determining that the vehicle comprises insufficient computational resources (Dai, Pg. 2, Col. 1, Para. 5-6, “Each task can either be offloaded to a selected VEC server to process, or be executed locally at the vehicle . . . If vehicle i chooses to offload task Di to a selected VEC server to process”, where the decision to “offload task Di” is determined based on whether the vehicle has sufficient computational resources to avoid “bottlenecks” that compromise “Quality of Service”, see Dai, Pg. 1, Col. 1, Para. 2, “resource-constrained vehicles can be strained by computation intensive applications, resulting in bottlenecks and making it challenging for the vehicles to ensure the required level of Quality of Service”, with reference to “maximum allowed latency”, see Dai, Pg. 3, Col. 1, Para. 5, “we formulate the joint offloading and resource allocation scheme as an optimization problem . . . The first constraint (8b) guarantees that the task processing time cannot exceed the maximum allowed latency”) for storing and running each machine learning model of the plurality of machine learning models (Zhou, Pg. 5311, Col. 1, Para. 4, “any individual data owner vi in this framework can keep its own data di and train the object detection model locally”, which, in view of Han, are the plurality of models, see Han, Pg. 3870, Col. 1, Abstract, “in the model-downloading stage, the clients in the overlapping areas receive multiple models from different ESs, take the average of the received models, and then update the averaged model with their local data”, where the “client” stores and runs the plurality of models, collectively, as the “averaged model”; see also Zhou, Pg. 5309, Col. 1, Para.
“With small quantity of data (or low data quality depending on the camera resolution) and varying computational power of individual vehicles, local training is usually limited by its accuracy”), wherein selecting the vehicle-to-edge server association protocol from the plurality of vehicle-to-edge server association protocols is based on the determination (Zhou, Pg. 5310, Col. 2, Fig. 1 and Zhou, Pg. 5313, Col. 1, Fig. 3, “4: for each data owner vi ∈ V do 5: if vi supervised by rj do . . . 8: Submit the local training parameter wi (t) to rj”, where the exchange of data is based on the vehicle-to-edge server association protocol of clustered “data owner[s] vi” “supervised by rj” “RSU[s]”, where vehicular system conditions, “contextual information, such as vehicle locations”, determine which “RSU” a “vehicle” will be “connected” to as a supervisor, see Zhou, Pg. 5311, Col. 1, Para. 1, “RSUs are the middle brokers to collect and aggregate not only learning parameters but also contextual information, such as vehicle locations and navigation direction, from all the connected vehicles within the coverage to facilitate parameter aggregation”, where in view of Dai, the contextual information includes the offloading decision of the “vehicle”, see Dai, Pg. 2, Col. 2, Para. 7, “Let fi denote the computational resource of vehicle i, which varies for different users and can be obtained through offline measurement [13]”).

The reasons of obviousness have been discussed in the rejection of claim 1, in regard to the combination of Zhou with Han, and the rejection of claim 3, in regard to the combination of Zhou and Han with Dai, and remain applicable here.

Regarding Claim 11, the additional elements of the dependent claim are substantially the same as limitations of Claim 3, therefore it is rejected under the same rationale.
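The two-layer aggregation scheme underlying the Zhou/Han mappings above (local training at vehicles, weighted averaging at each RSU, global averaging at the cloud, and Han's rule that a client in overlapping coverage starts from the average of the models it receives) can be sketched as follows. This is a minimal illustration; the function names and the uniform cloud weighting are our assumptions, not the references' exact formulations:

```python
import numpy as np

def edge_aggregate(client_models, client_weights):
    """Weighted average of locally trained client models at one RSU."""
    w = np.asarray(client_weights, dtype=float)
    w = w / w.sum()  # normalize weights so they sum to 1
    return sum(wi * m for wi, m in zip(w, client_models))

def cloud_aggregate(edge_models):
    """Average the per-RSU models into the global model at the cloud."""
    return sum(edge_models) / len(edge_models)

def overlap_init(received_models):
    """Han's overlapping-area rule: a client covered by several edge
    servers averages all models it receives before local training."""
    return sum(received_models) / len(received_models)
```

With scalar stand-ins for model parameters, two clients at 1.0 and 3.0 with equal weights aggregate to 2.0 at the edge, and two edges at 2.0 and 4.0 aggregate to 3.0 at the cloud.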
Regarding Claim 13, the additional elements of the dependent claim are substantially the same as limitations of Claim 5, therefore it is rejected under the same rationale.

Claims 4, 6, 12, 14, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Han, Dai, and Zhao et al. (hereinafter Zhao) (“Privacy-Aware Federated Learning for Page Recommendation”).

Regarding Claim 4, Zhou in view of Han and Dai teach the method of claim 3, wherein vehicular system conditions comprises . . . [location information, navigation information,] and computational resources of the vehicle (Zhou, Pg. 5311, Col. 1, Para. 1, “RSUs are the middle brokers to collect and aggregate not only learning parameters but also contextual information, such as vehicle locations and navigation direction, from all the connected vehicles within the coverage to facilitate parameter aggregation”, which, in view of Dai, includes computational resources of the vehicle, see Dai, Pg. 2, Col. 2, Para. 7, “Let fi denote the computational resource of vehicle i, which varies for different users and can be obtained through offline measurement [13]”). The reasons of obviousness have been discussed in regard to the rejection of claim 3 above and remain applicable here. Zhou in view of Han and Dai do not explicitly disclose . . . privacy requirements . . . . However, Zhao teaches [a federated learning method, where entity conditions comprise] . . . privacy requirements . . . (Pg. 1071, Col. 1, Abstract, “We propose Fed4Rec, a privacy-preserving framework for page recommendation based on federated learning (FL) and model-agnostic meta-learning (MAML), which allows machine learning models to train on data collected from both public users, who share data with the server, and private users, who do not share data with the server”).
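The public/private client split the examiner cites Zhao for can be illustrated with a short sketch. The PrivacySetting enum and function name are hypothetical, chosen only to mirror the quoted description (public users share data with the server; private users keep data local):

```python
from enum import Enum

class PrivacySetting(Enum):
    PUBLIC = "shares data with the server"
    PRIVATE = "keeps data on the local device"

def partition_clients(settings):
    """Split client ids into (public, private) groups by privacy setting."""
    public = [c for c, s in settings.items() if s is PrivacySetting.PUBLIC]
    private = [c for c, s in settings.items() if s is PrivacySetting.PRIVATE]
    return public, private
```

In Zhao's framing, the first group's recommendations are computed at the server and the second group's at their local devices.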
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the method for hierarchical federated learning with vehicles and edge servers, wherein vehicular system conditions of a vehicle, which include computational resources, are used as a basis for association protocols of Zhou in view of Han and Dai with the federated learning method, where entity conditions comprise privacy requirements of Zhao in order to train models on all user data by adjusting data sharing protocols depending on user requirements (Zhao, Pg. 1071, Col. 1, Abstract, “Fed4Rec . . . allows machine learning models to train on data collected from both public users, who share data with the server, and private users, who do not share data with the server. Fed4Rec enables recommendations for both public users, computed at the server, and private users, computed at their local devices”), which improves model accuracy (Zhao, Pg. 1071, Col. 1, Abstract, “The results show that Fed4Rec outperforms the baselines in terms of recommendation accuracy”). Regarding Claim 6, Zhou in view of Han, Dai, and Zhao teach the method of claim 5, further comprising: responsive to the determination that the vehicle comprises insufficient computational resources (Dai, Pg. 2, Col. 1, Para. 5-6, “Each task can either be offloaded to a selected VEC server to process, or be executed locally at the vehicle . . . If vehicle i chooses to offload task Di to a selected VEC server to process”, where the decision to “offload task Di” is determined based on whether the vehicle has sufficient computational resources to avoid “bottlenecks” that compromise “Quality of Service”, see Dai, Pg. 1, Col. 1, Para. 2, “resource-constrained vehicles can be strained by computation intensive applications, resulting in bottlenecks and making it challenging for the vehicles to ensure the required level of Quality of Service”, with reference to “maximum allowed latency”, see Dai, Pg. 
3, Col. 1, Para. 5, “we formulate the joint offloading and resource allocation scheme as an optimization problem . . . The first constraint (8b) guarantees that the task processing time cannot exceed the maximum allowed latency”) for storing and running each machine learning model of the plurality of machine learning models (Zhou, Pg. 5311, Col. 1, Para. 4, “any individual data owner vi in this framework can keep its own data di and train the object detection model locally”, which, in view of Han, are the plurality of models, see Han, Pg. 3870, Col. 1, Abstract, “in the model-downloading stage, the clients in the overlapping areas receive multiple models from different ESs, take the average of the received models, and then update the averaged model with their local data”, where the “client” stores and runs the plurality of models, collectively, as the “averaged model”; see also Zhou, Pg. 5309, Col. 1, Para. “With small quantity of data (or low data quality depending on the camera resolution) and varying computational power of individual vehicles, local training is usually limited by its accuracy”), checking privacy requirement settings of the vehicle, wherein selecting the vehicle-to-edge server association protocol from the plurality of vehicle-to-edge server association protocols is based on the privacy requirement settings (Zhou, Pg. 5310, Col. 2, Fig. 1 and Zhou, Pg. 5313, Col. 1, Fig. 3, “4: for each data owner vi ∈ V do 5: if vi supervised by rj do . . . 8: Submit the local training parameter wi (t) to rj”, where the exchange of data is based on the vehicle-to-edge server association protocol of clustered “data owner[s] vi” “supervised by rj” “RSU[s]”, where vehicular system conditions, “contextual information, such as vehicle locations”, determine which “RSU” the “vehicles” will be “connected” to as a supervisor, see Zhou, Pg. 5311, Col. 1, Para. 
1, “RSUs are the middle brokers to collect and aggregate not only learning parameters but also contextual information, such as vehicle locations and navigation direction, from all the connected vehicles within the coverage to facilitate parameter aggregation”, which, in view of Zhao, includes a determination of “public” or “private” privacy requirement settings, which determine which of the plurality of association protocols, “share data with the server” or “do not share data with the server”, are selected, see Zhao, Pg. 1071, Col. 1, Abstract, “We propose Fed4Rec, a privacy-preserving framework for page recommendation based on federated learning (FL) and model-agnostic meta-learning (MAML), which allows machine learning models to train on data collected from both public users, who share data with the server, and private users, who do not share data with the server”). The reasons of obviousness have been discussed in the rejection of claim 1, in regard to the combination of Zhou with Han, the rejection of claim 3, in regard to the combination of Zhou and Han with Dai, and the rejection of claim 4, in regard to the combination of Zhou, Han, and Dai, with Zhao, and remain applicable here. Regarding Claim 12, the additional elements of the dependent claim are substantially the same as limitations of Claim 4, therefore it is rejected under the same rationale. Regarding Claim 14, the additional elements of the dependent claim are substantially the same as limitations of Claim 6, therefore it is rejected under the same rationale. Regarding Claim 18, the additional elements of the dependent claim are substantially the same as limitations of Claim 4, therefore it is rejected under the same rationale. Claims 7-8, 15-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Han, Dai, Zhao, and Karb et al. (hereinafter Karb) (“A Network-Based Transfer Learning Approach to Improve Sales Forecasting of New Products”). 
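The privacy-based protocol selection that the rejection reads onto Zhao (public users share data with the server; private users keep data local and exchange only model parameters) amounts to a simple mapping from a vehicle's privacy setting to an association protocol. A minimal sketch, in which all setting and protocol strings are hypothetical placeholders rather than terms from the references:

```python
def select_protocol_by_privacy(privacy_setting: str) -> str:
    """Map a privacy requirement setting to an association protocol.

    Follows the Zhao mapping discussed above: a "lenient" (public-user)
    setting permits sharing raw data with the server, while a "strict"
    (private-user) setting restricts the exchange to model parameters.
    """
    protocols = {
        "lenient": "share-data-with-server",   # public user in Zhao
        "strict": "exchange-parameters-only",  # private user in Zhao
    }
    if privacy_setting not in protocols:
        raise ValueError(f"unknown privacy setting: {privacy_setting!r}")
    return protocols[privacy_setting]
```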
Regarding Claim 7, Zhou in view of Han, Dai, and Zhao teach the method of claim 6, further comprising: responsive to the privacy requirement settings being set to strict privacy requirements, requesting data . . . . (Zhao, Pg. 1074, Col. 1, Para. 1, “This global model can then be downloaded by private users to perform page recommendations on their devices or can be used at the server for page recommendations for public users”, where “private users” have strict privacy requirements) from each edge server of the plurality of edge servers . . . of each machine learning model . . . [based on data of] the vehicle derived from the data acquired by the vehicle (Han, Pg. 3870, Col. 1, Abstract, “in the model-downloading stage, the clients in the overlapping areas receive multiple models from different ESs, take the average of the received models, and then update the averaged model with their local data”, where “clients” receive “multiple models” from the plurality of models hosted by the plurality of “different ESs” and the multiple models are identified, which requires that a machine learning model be identified, based on whether the “clients [are] in the overlapping areas”; Han, Pg. 3872, Col. 2, Para. 3, “We call this region in which the client can reliably communicate with multiple ESs overlapping cell area”, where a “client” is determined to be in an “overlapping cell area” based on “communication” data acquired by the “client” and from “multiple ESs”), wherein identifying the machine learning model for the vehicle from the plurality of machine learning models hosted on the plurality of edge servers comprises . . . the vehicle (Zhou, Pg. 5313, Col. 1, Fig. 3, “13: Calculate the global parameters w(t+1) for model M . . . 15: Broadcast w(t+1) to the network” and Zhou, Pg. 5311, Col. 1, Para. 1, “the updated model parameters will be dispatched from the central cloud server to RSUs and then to individual vehicles”, where the “updated” “global . . . 
model M” is identified at the “central cloud server” and provided to the “individual vehicles”, where, in view of Han, it is identified from a plurality of machine learning models hosted on the plurality of edge servers, see Han, Pg. 3870, Col. 1, Abstract, “in the model-downloading stage, the clients in the overlapping areas receive multiple models from different ESs, take the average of the received models, and then update the averaged model with their local data”, where “clients” receive “multiple models” from the plurality of models hosted by the plurality of “different ESs” and the multiple models are identified, which requires that a machine learning model be identified, based on whether the “clients [are] in the overlapping areas”; Han, Pg. 3872, Col. 2, Para. 3, “We call this region in which the client can reliably communicate with multiple ESs overlapping cell area”, where a “client” is determined to be in an “overlapping cell area” based on “communication” data acquired by the “client” and from “multiple ESs”). The reasons of obviousness have been discussed in the rejection of claim 1, in regard to the combination of Zhou with Han, and the rejection of claim 4, in regard to the combination of Zhou, Han, and Dai, with Zhao, and remain applicable here. Zhou in view of Han, Dai, and Zhao do not explicitly disclose . . . feature information . . . data feature information representative . . . performing feature matching between data feature information of each machine learning and data feature information of . . . identifying the machine learning model corresponding to data feature information that is closest to the data feature information of . . . . However, Karb teaches [a model identification method, wherein] . . . [data] feature information [of a plurality of machine learning models, wherein the] . . . data feature information [is] representative [of each of the models, is used] . . . (Pg. 8, Para. 
2, “For each of the 14 source products one model is trained independently and the network architectures, including all parameters, were saved after training”; Pg. 7, Para. 2, “As described in Section 3, the success of the transfer is highly depended on the similarity between source and target. We analyze three different dimensions to compare product similarities, based on the information available at the time of the first forecast in week three . . . The values for the source products in this table are based on two years of available data. Within the three dimensions, the source and target products are assigned to clusters based on the respective values. This clustering approach is used to systematically search for the most suitable source models”, where the method uses data feature information, “the three dimensions”, as representative of the “suitability” of the “source models”) performing feature matching between data feature information of each machine learning and data feature information of [a target model application] (Pg. 7, Para. 2, “Within the three dimensions, the source and target products are assigned to clusters based on the respective values”, where the clustering uses the data feature information of both “source” and “target”, see Pg. 7, Para. 2, “For the target product, additionally to the sales price, the mean and standard deviation of the hourly sales as well as the share of promotion . . . The values for the source products in this table are based on two years of available data”) . . . identifying the machine learning model corresponding to data feature information that is closest to the data feature information of [the target model application] . . . (Pg. 7, Para. 2, “Within the three dimensions, the source and target products are assigned to clusters based on the respective values. 
This clustering approach is used to systematically search for the most suitable source models”, where, as discussed above, “clustering” is based on the data feature information of the source models and target model application, and where “clustering” is used to identify the “model” with the most “similarity” to the target, for example, see Pg. 11, Para. 1, “it can be concluded that the same price cluster is a good indicator for the similarity of the domains”). Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the requesting data, based on a privacy setting, from each edge server storing a machine learning model, wherein the transmitting of data is based on the data derived from data acquired by the vehicle of Zhou in view of Han, Dai, and Zhao with the model identification method, wherein data feature information of a plurality of machine learning models and data feature information of a target application model are compared to identify the machine learning model with data feature information closest to the data feature information of the target model application of Karb in order to select source models with similar data to the target application (Karb, Pg. 3-4, Para. 4-1, “The TL concept is based on the assumption that there are similarities between Ds and Dt and that it can be useful to transfer knowledge between these domains”), which improves model accuracy for the target application (Karb, Pg. 1, Abstract, “The experimental results show, that the prediction accuracy of deep neural networks for food sales forecasting can be effectively increased using the proposed approach”). 
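The Karb-style search for the most suitable source model, as mapped above, reduces to comparing the target's data feature information against each candidate model's feature information and taking the closest. A rough sketch, assuming a plain Euclidean distance stands in for Karb's cluster assignment over the three dimensions; the function and model names are illustrative, not from the reference:

```python
import math

def closest_source_model(target_features, source_models):
    """Identify the source model whose data feature information is
    closest to the target's.

    target_features: list of floats (e.g. price, mean hourly sales,
    promotion share, per the three dimensions Karb describes).
    source_models: dict mapping a model name to a feature list of the
    same length. Euclidean distance is an illustrative stand-in for
    Karb's cluster-based similarity.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(source_models,
               key=lambda name: dist(source_models[name], target_features))
```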
Regarding Claim 8, Zhou in view of Han, Dai, Zhao, and Karb teach the method of claim 6, further comprising: responsive to the privacy requirement settings being set to lenient privacy requirements, transmitting, to each edge server of the plurality of edge servers, data feature information of the vehicle derived from the data acquired by the vehicle (Zhao, Pg. 1074, Col. 1, Para. 1, “This global model can then be downloaded by private users to perform page recommendations on their devices or can be used at the server for page recommendations for public users”, where “public users” have lenient privacy requirements; Han, Pg. 3870, Col. 1, Abstract, “in the model-downloading stage, the clients in the overlapping areas receive multiple models from different ESs, take the average of the received models, and then update the averaged model with their local data”, where “clients” receive “multiple models” from the plurality of models hosted by the plurality of “different ESs” and the multiple models are identified, which requires that a machine learning model be identified, based on whether the “clients [are] in the overlapping areas”; Han, Pg. 3872, Col. 2, Para. 3, “We call this region in which the client can reliably communicate with multiple ESs overlapping cell area”, where a “client” is determined to be in an “overlapping cell area” based on “communication” data acquired by the “client” and from “multiple ESs”, which requires that the client data be transmitted, either directly or indirectly, to each edge server, where, in view of Karb, the data is data feature information, see Karb, Pg. 7, Para. 2, “Within the three dimensions, the source and target products are assigned to clusters based on the respective values”, where the clustering uses the data feature information of both “source” and “target”); and receiving data matching results from each edge server of the plurality of edge servers (Han, Pg. 3870, Col. 
1, Abstract, “in the model-downloading stage, the clients in the overlapping areas receive multiple models from different ESs, take the average of the received models, and then update the averaged model with their local data”, where “clients” receive “multiple models” from the plurality of models hosted by the plurality of “different ESs”, where, in view of Zhao, the computations occur at the server, Zhao, Pg. 1074, Col. 1, Para. 1, “This global model can then be downloaded by private users to perform page recommendations on their devices or can be used at the server for page recommendations for public users”, and in view of Karb, the computations include the data matching, and therefore the receiving of data includes the data matching results, see Karb, Pg. 7, Para. 2, “Within the three dimensions, the source and target products are assigned to clusters based on the respective values. This clustering approach is used to systematically search for the most suitable source models”, where, as discussed above, “clustering” is based on the data feature information of the source models and target model application, and where “clustering” is used to identify the “model” with the most “similarity” to the target, for example, see Karb, Pg. 11, Para. 1, “it can be concluded that the same price cluster is a good indicator for the similarity of the domains”), wherein the data matching results from each edge server comprises a measure of similarity between data feature information of the machine learning model hosted by a respective edge server and the data feature information of the vehicle, wherein identifying the machine learning model for the vehicle from the plurality of machine learning models hosted on the plurality of edge servers comprises identifying the machine learning model corresponding to a data matching result having highest measure of similarity (Karb, Pg. 7, Para. 
2, “Within the three dimensions, the source and target products are assigned to clusters based on the respective values”, where the clustering uses the data feature information of both “source” and “target”, see Karb, Pg. 7, Para. 2, “For the target product, additionally to the sales price, the mean and standard deviation of the hourly sales as well as the share of promotion . . . The values for the source products in this table are based on two years of available data”, where “clustering” is based on the data feature information of the source models and target model application, and where “clustering” is used to identify the “model” with the most “similarity” to the target, for example, see Karb, Pg. 11, Para. 1, “it can be concluded that the same price cluster is a good indicator for the similarity of the domains”, where the source models are the edge servers and the target is the vehicle, discussed above in regard to Zhou and Han, see generally Zhou, Pg. 5309, Col. 1, Para. 2, “In this study, we propose a two-layer federated learning model based on conv . . . 
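Under the claim-8 mapping above, the vehicle receives a data matching result (a measure of similarity) from each edge server and identifies the model whose result has the highest similarity. A minimal sketch; the server identifiers and the dict-based interface are hypothetical:

```python
def identify_model(match_results: dict) -> str:
    """Select the model whose data matching result has the highest
    measure of similarity.

    match_results maps an edge-server model id to a similarity score
    (e.g. in [0, 1]) between that model's data feature information and
    the vehicle's data feature information.
    """
    if not match_results:
        raise ValueError("no data matching results received")
    return max(match_results, key=match_results.get)
```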

Prosecution Timeline

Feb 03, 2023: Application Filed
Oct 29, 2025: Non-Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 0%
With Interview: 0% (+0.0%)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
