Prosecution Insights
Last updated: April 19, 2026
Application No. 18/825,975

FEDERATED LEARNING FOR CONTROLS AND MONITORING OF FUNCTIONS IN VEHICULAR SETTINGS

Non-Final OA (§102, §103)
Filed: Sep 05, 2024
Examiner: EVANS, KARSTON G
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: CUMMINS INC.
OA Round: 1 (Non-Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 10m
Grant Probability with Interview: 91%

Examiner Intelligence

Career Allow Rate: 70%, above average (100 granted / 143 resolved; +17.9% vs TC avg)
Interview Lift: +21.3% higher allowance on resolved cases with interview (strong)
Avg Prosecution: 2y 10m
Currently Pending: 31
Total Applications: 174 (across all art units)

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§112: 21.2% (-18.8% vs TC avg)
Based on career data from 143 resolved cases; Tech Center averages are estimates.
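The headline figures above are simple cohort ratios. As an illustration only (helper names are hypothetical, and the analytics vendor's exact methodology is not disclosed in this report), the career allow rate and interview lift could be computed as:

```python
# Sketch of how the headline examiner statistics could be derived.
# Function and variable names are illustrative assumptions.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with_interview: float, rate_without: float) -> float:
    """Percentage-point difference between allowance rates of resolved
    cases with an examiner interview and those without one."""
    return rate_with_interview - rate_without

# Career allow rate: 100 granted out of 143 resolved cases.
career = allow_rate(100, 143)
print(f"{career:.1f}%")              # -> 69.9% (displayed as 70%)

# Example cohort comparison: 80% allowance with interview vs 60% without.
print(interview_lift(80.0, 60.0))    # -> 20.0
```

The statute-specific percentages above are the same ratio computed per rejection type rather than over all resolved cases.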

Office Action

Rejection bases: §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 2, 7, 8, 12, 13, 15, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over GYLLENHAMMAR (US 20230297845 A1) in view of Zhang (NPL: “End-to-End Federated Learning for Autonomous Driving Vehicles”).
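The combination relied on below turns on a standard federated-learning loop: each vehicle updates model parameters locally, a central server consolidates them into a global model, and the consolidated parameters are pushed back to the fleet. As a reader aid, here is a minimal sketch of the consolidation step (plain-Python federated averaging with equal client weights, which is an illustrative assumption; neither reference prescribes this exact consolidation rule):

```python
# Minimal federated-averaging sketch (equal client weights).
# Illustrative only; not code from GYLLENHAMMAR or Zhang.

def federated_average(local_params):
    """Consolidate locally updated parameter vectors from several
    vehicles into one global parameter vector by element-wise mean."""
    n = len(local_params)
    return [sum(vals) / n for vals in zip(*local_params)]

# Two vehicles report locally updated weights for the same model.
vehicle_a = [1.0, 3.0]
vehicle_b = [3.0, 5.0]
global_params = federated_average([vehicle_a, vehicle_b])
print(global_params)   # -> [2.0, 4.0]

# The consolidated parameters would then be transmitted back to the
# fleet to replace each vehicle's local parameters.
```

Only parameters travel in this loop; the raw sensor data stays on the vehicles, which is the privacy/bandwidth rationale both references give for federated learning.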
Regarding Claim 1, GYLLENHAMMAR teaches A system for performing federated learning across vehicles, comprising: (“The processing system comprising control circuitry configured to obtain one or more locally updated model parameters of a self-supervised machine-learning algorithm from a plurality of remote vehicles, and to update one or more model parameters of a global self-supervised machine-learning algorithm based on the obtained one or more locally updated model parameters.” See at least [0012]; See at least [0039] for description of Federated learning) a first computing device having one or more processors coupled with memory, the first computing device configured to: (“FIG. 4 is a schematic block diagram representation of a (central) processing system 20 30 for updating a perception function of a vehicle 1 having an Automated Driving System (ADS) in accordance with some embodiments. In more detail, FIG. 4 serves to further the above-described embodiments related to the central knowledge distillation of the production network by schematically illustrating the flow of information and the various process steps. The processing system 10 comprises control circuitry (e.g. 
one or more processors) configured to perform the functions of the method S100 disclosed herein, where the functions may be included in a non-transitory computer-readable storage medium or other computer program product configured for execution by the control circuitry.” See at least [0081]) maintain, in the memory, a first machine learning (ML) model (“At the central server, the locally updated models across the fleet are received and they are combined or consolidated into a new updated global model that incorporates the teachings from all of the local models.” See at least [0105], wherein the central server is the central processing system according to at least [0051]) comprising a first plurality of parameters (“there is provided a computer-implemented method for updating a perception function of a plurality of vehicles having an Automated Driving System (ADS). The method comprises obtaining one or more locally updated model parameters of a self-supervised machine-learning algorithm from a plurality of remote vehicles, and updating one or more model parameters of a global self-supervised machine-learning algorithm based on the obtained one or more locally updated model parameters.” See at least [0009]) receive, from each respective vehicle of the plurality of vehicles, a second plurality of parameters generated by a second ML model used by a second computing device on each respective vehicle using (i) data associated with the vehicle function acquired via at least one sensor and (ii) (“obtaining perception data from one or more vehicle-mounted sensors configured to monitor a surrounding environment of the vehicle, processing the obtained perception data using a self-supervised machine-learning algorithm, and locally updating one or more model parameters of the self-supervised machine-learning algorithm. 
The method further comprises transmitting the locally updated model parameters of the self-supervised machine-learning algorithm to a remote entity, and obtaining a centrally fine-tuned machine-learning algorithm formed from a consolidated version of the self-supervised machine-learning algorithm from the remote entity.” See at least [0015], wherein the local models are second ML models.; “The reconstructed image is subsequently compared to the original image to create a loss function (cost function), which is used to update the model parameters (e.g. network weights and/or biases), as known in the art.” See at least [0072]) update the first plurality of parameters of the first ML model in accordance with the second plurality of parameters received from each respective vehicle of the plurality of vehicles; (“the method S100 comprises updating S102 one or more model parameters of a global self-supervised machine-learning algorithm based on the obtained one or more locally updated model parameters. In other words, the local self-supervised ML algorithms are consolidated so to form a “global” self-supervised ML algorithm, i.e. the local model parameters of the nodes are consolidated so to form a global model.” See at least [0056]) and transmit, to at least one vehicle of the plurality of vehicles, the updated first plurality of parameters to update the second plurality of parameters of the second ML model of the at least one vehicle. (“the step of forming the ML algorithm for the in-vehicle perception module comprises transmitting S106 the fine-tuned model parameters of the fine-tuned global machine-learning algorithm to the plurality of remote vehicles, and obtaining S107 one or more locally distilled model parameters of a local machine-learning algorithm for the in-vehicle perception module from each of the plurality of remote vehicles. 
Subsequently, the machine-learning algorithm for the in-vehicle perception module is formed S108 based on a consolidation of the one or more locally distilled model parameters. In other words, the teacher network is pushed to the vehicles, which are provided with suitable hardware and/or software to perform a “local” knowledge distillation based on input data in the form of perception data generated locally in each of the vehicles. An advantage of using a federated learning scheme for the knowledge distillation is that the probability of successfully including rare scenarios (edge cases or corner cases) in the knowledge distillation process may be increased, further increasing the performance of the formed production network.” See at least [0063]) GYLLENHAMMAR does not explicitly teach, but Zhang teaches machine learning (ML) model comprising a first plurality of parameters for determining values identifying a characteristic of a vehicle function on at least one of a plurality of vehicles (“The process of training a local CNN network is to find the best model parameters which cause the minimum difference between the predicted angle and the ground truth steering angle.” See at least pg. 3, C. Machine Learning Method, wherein the predicted steering angles are the determined values.) a second plurality of parameters generated by a second ML model used by a second computing device on each respective vehicle using … (ii) a value identifying the characteristic of the vehicle function on the respective vehicle; (“In this section, we describe the algorithm and the approach applied in this paper. In order to perform on-device end-to-end learning based on the input image stream, images are firstly stored in an external storage driver located on each edge vehicles. At the same time, the optical flow information is calculated. When triggering the training threshold, image frames and optical flow frames are fed into a convolutional neural network. 
The output of the network is then compared to the ground truth for that image frame, which is the recorded steering wheel angle. The weights of the CNN are adjusted using back-propagation to enforce the model output as close as possible to the desired output. Figure 5 illustrates the diagram of the learning procedure in a single edge vehicle.” See at least pg. 5, IV. END-TO-END FEDERATED LEARNING and fig. 5, wherein the input/recorded steering wheel angle is the claimed value used to generate a second ML model on each respective vehicle.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of GYLLENHAMMAR to further include the teachings of Zhang with a reasonable expectation of success to improve training of local models (See at least pg. 1, Abstract and I. INTRODUCTION, and pg. 7, VII. CONCLUSION) and to improve automatic steering control learning. (See at least pg. 2, B. End-to-end Learning in Automotive)

Regarding Claim 2, GYLLENHAMMAR further teaches wherein the first computing device is further configured to retrieve, responsive to establishing a connection with at least one vehicle of the plurality of vehicles, the second plurality of parameters of the second ML model on the at least one vehicle, (“It should be noted that the transmission S203 need not necessarily be performed directly after every update S202. Instead, the local updating S202 process may “looped”, and the transmission S203 of the locally updated S202 model parameters may be executed … as soon as a suitable communication-network connection is available.” See at least [0073]) GYLLENHAMMAR does not specifically teach wherein the second plurality of parameters are generated on the at least one vehicle while no connection was established with the first computing device.

However, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to generate the second parameters while no connection was established with the first computing device because GYLLENHAMMAR teaches locally updating the parameters (“locally updating S202 one or more model parameters of the self-supervised machine learning algorithm.” See at least [0072]) and transmitting them to the central processing system “as soon as a suitable communication-network connection is available” (See at least [0073]). Accordingly, the modification would be made with a reasonable expectation of success to update parameters locally before connection is available because local updating does not require connection and the modification would improve machine learning efficiency when connection is not always available.

Regarding Claim 7, GYLLENHAMMAR further teaches wherein the vehicle function comprises at least one of: (“Accordingly, by means of the technology disclosed herein, one can efficiently incorporate the various scenes and scenarios that the vehicles of the fleet are exposed to during normal operation in the training of the production network without the need for either transmitting huge datasets or annotating the data collected by each vehicle. Consequently, an efficient process for increasing the performance of the “production network” is readily achievable. Moreover, an efficient expansion of the production network's operational capability into new regions and new use cases (i.e. Operational Design Domain expansion) at a faster pace is readily achievable. An Operational design domain (ODD) is to be understood as a description of the operating domains in which an automated or a semi-automated driving system (i.e. AD or ADAS) is designed to function, including, but not limited to, geographic, roadway (e.g. type, surface, geometry, edges and markings), environmental parameters, connectivity, surrounding objects, and speed limitations.” See at least [0043-0044])

Regarding Claim 8, GYLLENHAMMAR teaches A vehicle system, comprising: a component configured to perform a vehicle system function; (“a processing system for updating a perception function of a vehicle having an Automated Driving System (ADS). … the control circuitry is configured to form a machine-learning algorithm for an in-vehicle perception module based on the fine-tuned global machine-learning algorithm, and to transmit one or more model parameters of the formed machine-learning algorithm for the in-vehicle perception module to the plurality of remote vehicles.” See at least [0012]) and at least one processing circuit including one or more processors coupled with memory, the at least one processing circuit configured to: (“FIG. 4 is a schematic block diagram representation of a (central) processing system 20 30 for updating a perception function of a vehicle 1 having an Automated Driving System (ADS) in accordance with some embodiments. In more detail, FIG. 4 serves to further the above-described embodiments related to the central knowledge distillation of the production network by schematically illustrating the flow of information and the various process steps. The processing system 10 comprises control circuitry (e.g.
one or more processors) configured to perform the functions of the method S100 disclosed herein, where the functions may be included in a non-transitory computer-readable storage medium or other computer program product configured for execution by the control circuitry.” See at least [0081]) maintain a first machine learning (ML) model (“At the central server, the locally updated models across the fleet are received and they are combined or consolidated into a new updated global model that incorporates the teachings from all of the local models.” See at least [0105], wherein the central server is the central processing system according to at least [0051]) comprising a first plurality of parameters (“there is provided a computer-implemented method for updating a perception function of a plurality of vehicles having an Automated Driving System (ADS). The method comprises obtaining one or more locally updated model parameters of a self-supervised machine-learning algorithm” See at least [0009]) identify data associated with the vehicle system function and (“obtaining perception data from one or more vehicle-mounted sensors configured to monitor a surrounding environment of the vehicle, processing the obtained perception data using a self-supervised machine-learning algorithm, and locally updating one or more model parameters of the self-supervised machine-learning algorithm. The method further comprises transmitting the locally updated model parameters of the self-supervised machine-learning algorithm to a remote entity, and obtaining a centrally fine-tuned machine-learning algorithm formed from a consolidated version of the self-supervised machine-learning algorithm from the remote entity.” See at least [0015]; “The reconstructed image is subsequently compared to the original image to create a loss function (cost function), which is used to update the model parameters (e.g. 
network weights and/or biases), as known in the art.” See at least [0072]) and transmit the first plurality of parameters from the ML model to at least one computing device remote from the at least one processing circuit to update a second ML model. (“The method S200 further comprises transmitting S203 the locally updated model parameters of the self-supervised machine-learning algorithm to a remote entity (e.g. the above-described central processing system).” See at least [0073]; “The method comprises obtaining one or more locally updated model parameters of a self-supervised machine-learning algorithm from a plurality of remote vehicles, and updating one or more model parameters of a global self-supervised machine-learning algorithm based on the obtained one or more locally updated model parameters.” See at least [0009]) GYLLENHAMMAR does not explicitly teach, but Zhang teaches machine learning (ML) model comprising a first plurality of parameters for determining values associated with the vehicle system function; (“The process of training a local CNN network is to find the best model parameters which cause the minimum difference between the predicted angle and the ground truth steering angle.” See at least pg. 3, C. Machine Learning Method, wherein the predicted steering angles are the determined values.) identify data associated with the vehicle system function and a first value identifying a characteristic of the vehicle system function; generate a second value identifying the characteristic of the vehicle system function using the first ML model; update the first plurality of parameters of the first ML model based on a comparison between the first value and the second value (“In this section, we describe the algorithm and the approach applied in this paper. In order to perform on-device end-to-end learning based on the input image stream, images are firstly stored in an external storage driver located on each edge vehicles. 
At the same time, the optical flow information is calculated. When triggering the training threshold, image frames and optical flow frames are fed into a convolutional neural network. The output of the network is then compared to the ground truth for that image frame, which is the recorded steering wheel angle. The weights of the CNN are adjusted using back-propagation to enforce the model output as close as possible to the desired output. Figure 5 illustrates the diagram of the learning procedure in a single edge vehicle.” See at least pg. 5, IV. END-TO-END FEDERATED LEARNING and fig. 5, wherein the input/recorded steering wheel angle is the claimed first value and the predicted steering wheel angle is the second value.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of GYLLENHAMMAR to further include the teachings of Zhang with a reasonable expectation of success to improve training of local models (See at least pg. 1, Abstract and I. INTRODUCTION, and pg. 7, VII. CONCLUSION) and to improve automatic steering control learning. (See at least pg. 2, B. End-to-end Learning in Automotive)

Regarding Claim 12, GYLLENHAMMAR further teaches wherein the at least one computing device is at least one server, and wherein the at least one processing circuit is further configured to transmit the first plurality of parameters to the at least one server to update a third ML model on the at least one server using the first plurality of parameters and a second plurality of parameters from each respective vehicle system of a plurality of vehicle systems. (“At the central server, the locally updated models across the fleet are received and they are combined or consolidated into a new updated global model that incorporates the teachings from all of the local models.” See at least [0105]; “the method S200 comprises receiving S210 one or more one or more consolidated model parameters of the self-supervised ML algorithm from the remote entity, and updating S211 the self-supervised ML algorithm based on the consolidated model parameters. In other words, the self-supervised ML algorithm is subdued to a “global update” that is based on a plurality of “local updates” performed across an entire fleet of ADS-equipped vehicles. This, consolidated or “global” version of the self-supervised ML algorithm forms a new “baseline” that is to be locally updated S202 in a subsequent iteration of the method S200.” See at least [0074]; “The centrally fine-tuned machine-learning algorithm may be understood as the aforementioned teacher network or teacher model that is to be used for a knowledge distillation process following a federated learning methodology as described above. Accordingly, the method S200 further comprises distilling S206 a machine-learning algorithm for an in-vehicle perception module (i.e. a production network) from the centrally fine-tuned machine-learning algorithm acting as a teacher model,” See at least [0076])

Regarding Claim 13, GYLLENHAMMAR further teaches wherein the at least one computing device is an on-board vehicle computing device, and wherein the at least one processing circuit is further configured to transmit the first plurality of parameters to the on-board vehicle computing device to update the second ML model using the first plurality of parameters. (“the step of forming the ML algorithm for the in-vehicle perception module comprises transmitting S106 the fine-tuned model parameters of the fine-tuned global machine-learning algorithm to the plurality of remote vehicles, and obtaining S107 one or more locally distilled model parameters of a local machine-learning algorithm for the in-vehicle perception module from each of the plurality of remote vehicles. Subsequently, the machine-learning algorithm for the in-vehicle perception module is formed S108 based on a consolidation of the one or more locally distilled model parameters. In other words, the teacher network is pushed to the vehicles, which are provided with suitable hardware and/or software to perform a “local” knowledge distillation based on input data in the form of perception data generated locally in each of the vehicles. An advantage of using a federated learning scheme for the knowledge distillation is that the probability of successfully including rare scenarios (edge cases or corner cases) in the knowledge distillation process may be increased, further increasing the performance of the formed production network.” See at least [0063])

Regarding Claim 15, GYLLENHAMMAR teaches A method of performing federated learning across vehicles, the method comprising: (“methods for updating a perception function of a plurality of vehicles having an Automated Driving System (ADS).
In particular, embodiments disclosed herein relates to systems and methods for federated learning of self-supervised machine-learning algorithms in ADSs.” See at least [0002]) maintaining, by a first computing system, a first machine learning (ML) model (“At the central server, the locally updated models across the fleet are received and they are combined or consolidated into a new updated global model that incorporates the teachings from all of the local models.” See at least [0105], wherein the central server is the central processing system according to at least [0051]) comprising a first plurality of parameters (“there is provided a computer-implemented method for updating a perception function of a plurality of vehicles having an Automated Driving System (ADS). The method comprises obtaining one or more locally updated model parameters of a self-supervised machine-learning algorithm from a plurality of remote vehicles, and updating one or more model parameters of a global self-supervised machine-learning algorithm based on the obtained one or more locally updated model parameters.” See at least [0009]) receiving, by the first computing system and from each respective vehicle of the plurality of vehicles, a second plurality of parameters generated by a second ML model used by a second computing system on each respective vehicle; (“obtaining perception data from one or more vehicle-mounted sensors configured to monitor a surrounding environment of the vehicle, processing the obtained perception data using a self-supervised machine-learning algorithm, and locally updating one or more model parameters of the self-supervised machine-learning algorithm. 
The method further comprises transmitting the locally updated model parameters of the self-supervised machine-learning algorithm to a remote entity, and obtaining a centrally fine-tuned machine-learning algorithm formed from a consolidated version of the self-supervised machine-learning algorithm from the remote entity.” See at least [0015], wherein the local models are second ML models.; “The reconstructed image is subsequently compared to the original image to create a loss function (cost function), which is used to update the model parameters (e.g. network weights and/or biases), as known in the art.” See at least [0072]) updating, by the first computing system, the first plurality of parameters of the first ML model in accordance with the second plurality of parameters received from each respective vehicle of the plurality of vehicles; (“the method S100 comprises updating S102 one or more model parameters of a global self-supervised machine-learning algorithm based on the obtained one or more locally updated model parameters. In other words, the local self-supervised ML algorithms are consolidated so to form a “global” self-supervised ML algorithm, i.e. the local model parameters of the nodes are consolidated so to form a global model.” See at least [0056]) and transmitting, by the first computing system and to at least one vehicle of the plurality of vehicles, the updated first plurality of parameters to cause a second computing system on the at least one vehicle to update the second plurality of parameters of the second ML model on the at least one vehicle. 
(“the step of forming the ML algorithm for the in-vehicle perception module comprises transmitting S106 the fine-tuned model parameters of the fine-tuned global machine-learning algorithm to the plurality of remote vehicles, and obtaining S107 one or more locally distilled model parameters of a local machine-learning algorithm for the in-vehicle perception module from each of the plurality of remote vehicles. Subsequently, the machine-learning algorithm for the in-vehicle perception module is formed S108 based on a consolidation of the one or more locally distilled model parameters. In other words, the teacher network is pushed to the vehicles, which are provided with suitable hardware and/or software to perform a “local” knowledge distillation based on input data in the form of perception data generated locally in each of the vehicles. An advantage of using a federated learning scheme for the knowledge distillation is that the probability of successfully including rare scenarios (edge cases or corner cases) in the knowledge distillation process may be increased, further increasing the performance of the formed production network.” See at least [0063]) GYLLENHAMMAR does not explicitly teach, but Zhang teaches machine learning (ML) model comprising a first plurality of parameters for determining values associated with a vehicle function of at least one of a plurality of vehicles; (“The process of training a local CNN network is to find the best model parameters which cause the minimum difference between the predicted angle and the ground truth steering angle.” See at least pg. 3, C. Machine Learning Method, wherein the predicted steering angles are the determined values.) 
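The on-vehicle training step Zhang describes (predict a steering angle, compare it to the recorded ground-truth angle, and adjust the weights by back-propagation) can be illustrated with a one-weight toy model. This is a sketch of the general technique under simplifying assumptions, not code from the paper:

```python
# Toy illustration of Zhang-style local training: a one-weight model
# predicts a steering angle from a single image-derived feature and is
# corrected toward the recorded ground-truth angle.

def local_update(w, samples, lr=0.01):
    """One pass over local data: squared-error gradient step per sample."""
    for feature, recorded_angle in samples:
        predicted = w * feature                        # forward pass
        grad = 2.0 * (predicted - recorded_angle) * feature
        w -= lr * grad                                 # gradient step
    return w

# (feature, recorded steering-wheel angle) pairs; true relation is w = 0.5.
samples = [(1.0, 0.5), (2.0, 1.0), (0.5, 0.25)]
w = 0.0
for _ in range(200):
    w = local_update(w, samples)
print(round(w, 3))   # -> 0.5
```

In the references the model is a CNN with many parameters rather than a single weight, but the loop is the same: only the locally updated parameters (here, `w`) would be transmitted to the central server for consolidation.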
PNG media_image1.png 284 350 media_image1.png Greyscale It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of GYLLENHAMMAR to further include the teachings of Zhang with a reasonable expectation of success to improve training of local models (See at least pg. 1, Abstract and I. INTRODUCTION, and pg. 7, VII. CONCLUSION) and to improve automatic steering control learning. (See at least pg. 2, B. End-to-end Learning in Automotive) Regarding Claim 16, GYLLENHAMMAR further teaches wherein receiving the second plurality of parameters further comprises retrieving, by the first computing system and responsive to establishing a connection with at least one vehicle of the plurality of vehicles, the second plurality of parameters of the second ML model on the at least one vehicle, (“It should be noted that the transmission S203 need not necessarily be performed directly after every update S202. Instead, the local updating S202 process may “looped”, and the transmission S203 of the locally updated S202 model parameters may be executed … as soon as a suitable communication-network connection is available.” See at least [0073]) GYLLENHAMMAR does not specifically teach wherein the second plurality of parameters are generated on the at least one vehicle while no connection was established with the first computing system. 
However, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to generate the second parameters while no connection was established with the first computing device because GYLLENHAMMAR teaches locally updating the parameters (“locally updating S202 one or more model parameters of the self-supervised machine learning algorithm.” See at least [0072]) and transmitting them to the central processing system “as soon as a suitable communication-network connection is available” (See at least [0073]). Accordingly, the modification would be made with a reasonable expectation of success to update parameters locally before connection is available because local updating does not require connection and the modification would improve machine learning efficiency when connection is not always available. Claim(s) 3, 11, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over GYLLENHAMMAR (US 20230297845 A1) in view of Zhang (NPL: “End-to-End Federated Learning for Autonomous Driving Vehicles”) and Shao (US 20240394556 A1). Regarding Claim 3, Modified GYLLENHAMMAR does not explicitly teach, but Shao teaches wherein the first computing device is further configured to initialize the first ML model using a third plurality of parameters generated by a third ML model maintained on at least one server. (“the edge server obtains a machine learning submodel from the cloud server, where a parameter scale of the machine learning submodel is less than a parameter scale of a complete machine learning model stored in the cloud server.” See at least [0008], wherein the edge server is equivalent to the first computing device and the submodel from the cloud server is the third ML model.) 
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Shao with a reasonable expectation of success to improve model training efficiency. (See at least [0004-0005])

Regarding Claim 11, Modified GYLLENHAMMAR does not explicitly teach, but Shao teaches wherein the at least one processing circuit is further configured to initialize the first ML model using a second plurality of parameters received from the second ML model of the at least one computing device remote from the at least one processing circuit. (“the edge server obtains a machine learning submodel from the cloud server, where a parameter scale of the machine learning submodel is less than a parameter scale of a complete machine learning model stored in the cloud server.” See at least [0008], wherein the edge server is equivalent to the first computing device and the submodel from the cloud server is the second ML model.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Shao with a reasonable expectation of success to improve model training efficiency. (See at least [0004-0005])

Regarding Claim 17, Modified GYLLENHAMMAR does not explicitly teach, but Shao teaches further comprising initializing, by the first computing system, the first ML model using a third plurality of parameters generated by a third ML model maintained on at least one server.
(“the edge server obtains a machine learning submodel from the cloud server, where a parameter scale of the machine learning submodel is less than a parameter scale of a complete machine learning model stored in the cloud server.” See at least [0008], wherein the edge server is equivalent to the first computing device and the submodel from the cloud server is the third ML model.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Shao with a reasonable expectation of success to improve model training efficiency. (See at least [0004-0005])

Claim(s) 4, 9, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over GYLLENHAMMAR (US 20230297845 A1) in view of Zhang (NPL: “End-to-End Federated Learning for Autonomous Driving Vehicles”) and Gurumurthy (US 20220374764 A1).

Regarding Claim 4, Modified GYLLENHAMMAR does not explicitly teach, but Gurumurthy teaches wherein the first computing device is further configured to identify, from a plurality of ML models, the first ML model based on the vehicle function. (“model catalog structure 900 can store template models in association with particular trackable dimensions. As shown by FIG. 9, such trackable dimensions can include vehicle and vehicle functional unit dimensions that can correspond to particular types of vehicle and vehicle functional units, respectively. Such trackable dimensions can further include geographic and temporal dimensions that can correspond to particular geographical and temporal contexts, respectively. Policy server 810 can utilize such trackable dimensions to deploy updated template models to a given vehicle chief 830 that are appropriate to a context of the given vehicle chief 830.
For example, the given vehicle chief 830 can operate within an in-vehicle network of a particular vehicle type including one or more particular domains comprising a set of particular vehicle functional units. In this example, policy server 810 can deploy updated template models to the given vehicle chief 830 that are associated with the particular vehicle type, the one or more particular domains, and/or the set of particular vehicle functional units.” See at least [0055])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Gurumurthy with a reasonable expectation of success to improve the ability to adapt models appropriate to context. (See at least [0054])

Regarding Claim 9, GYLLENHAMMAR further teaches wherein the at least one processing circuit is further configured to: receive, from the at least one computing device, a second plurality of parameters of the second ML model, the second plurality of parameters generated by the at least one computing device using the first plurality of parameters and a third plurality of parameters received from each respective vehicle system of a plurality of vehicle systems; … and update the first plurality of parameters of the first ML model using the third plurality of parameters. (“the method S100 comprises updating S102 one or more model parameters of a global self-supervised machine-learning algorithm based on the obtained one or more locally updated model parameters.” See at least [0056]; “the method S200 comprises receiving S210 one or more one or more consolidated model parameters of the self-supervised ML algorithm from the remote entity, and updating S211 the self-supervised ML algorithm based on the consolidated model parameters.
In other words, the self-supervised ML algorithm is subdued to a “global update” that is based on a plurality of “local updates” performed across an entire fleet of ADS-equipped vehicles. This, consolidated or “global” version of the self-supervised ML algorithm forms a new “baseline” that is to be locally updated S202 in a subsequent iteration of the method S200.” See at least [0074]; “The centrally fine-tuned machine-learning algorithm may be understood as the aforementioned teacher network or teacher model that is to be used for a knowledge distillation process following a federated learning methodology as described above. Accordingly, the method S200 further comprises distilling S206 a machine-learning algorithm for an in-vehicle perception module (i.e. a production network) from the centrally fine-tuned machine-learning algorithm acting as a teacher model,” See at least [0076])

Modified GYLLENHAMMAR does not explicitly teach, but Gurumurthy teaches identify, from a plurality of ML models, the first ML model based on the vehicle system function; (“model catalog structure 900 can store template models in association with particular trackable dimensions. As shown by FIG. 9, such trackable dimensions can include vehicle and vehicle functional unit dimensions that can correspond to particular types of vehicle and vehicle functional units, respectively. Such trackable dimensions can further include geographic and temporal dimensions that can correspond to particular geographical and temporal contexts, respectively. Policy server 810 can utilize such trackable dimensions to deploy updated template models to a given vehicle chief 830 that are appropriate to a context of the given vehicle chief 830. For example, the given vehicle chief 830 can operate within an in-vehicle network of a particular vehicle type including one or more particular domains comprising a set of particular vehicle functional units.
In this example, policy server 810 can deploy updated template models to the given vehicle chief 830 that are associated with the particular vehicle type, the one or more particular domains, and/or the set of particular vehicle functional units.” See at least [0055])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Gurumurthy with a reasonable expectation of success to improve the ability to adapt models appropriate to context. (See at least [0054])

Regarding Claim 18, Modified GYLLENHAMMAR does not explicitly teach, but Gurumurthy teaches further comprising identifying, by the first computing system, the first ML model from a plurality of ML models based on the vehicle function. (“model catalog structure 900 can store template models in association with particular trackable dimensions. As shown by FIG. 9, such trackable dimensions can include vehicle and vehicle functional unit dimensions that can correspond to particular types of vehicle and vehicle functional units, respectively. Such trackable dimensions can further include geographic and temporal dimensions that can correspond to particular geographical and temporal contexts, respectively. Policy server 810 can utilize such trackable dimensions to deploy updated template models to a given vehicle chief 830 that are appropriate to a context of the given vehicle chief 830. For example, the given vehicle chief 830 can operate within an in-vehicle network of a particular vehicle type including one or more particular domains comprising a set of particular vehicle functional units.
In this example, policy server 810 can deploy updated template models to the given vehicle chief 830 that are associated with the particular vehicle type, the one or more particular domains, and/or the set of particular vehicle functional units.” See at least [0055])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Gurumurthy with a reasonable expectation of success to improve the ability to adapt models appropriate to context. (See at least [0054])

Claim(s) 6, 10, 14, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over GYLLENHAMMAR (US 20230297845 A1) in view of Zhang (NPL: “End-to-End Federated Learning for Autonomous Driving Vehicles”) and Chen (US 20240174254 A1).

Regarding Claim 6, Modified GYLLENHAMMAR does not explicitly teach, but Chen teaches wherein the first computing device is further configured to: generate an output using the first ML model that identifies a vehicle function action; and cause, in accordance with the output, one or more components associated with the vehicle function action to execute the vehicle function action. (“Each of the vehicles 101, 103, 105 receives the aggregated machine learning model from the server, and controls the vehicle to drive autonomously based on the aggregated machine learning model. For example, the aggregated machine learning model may be used for object detection, object classification, and the like.” See at least [0016])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Chen with a reasonable expectation of success to facilitate automation of driving control of a plurality of vehicles.
(See at least [0016])

Regarding Claim 10, GYLLENHAMMAR further teaches wherein the at least one processing circuit is further configured to transmit (“The method S200 further comprises transmitting S203 the locally updated model parameters of the self-supervised machine-learning algorithm to a remote entity (e.g. the above-described central processing system). It should be noted that the transmission S203 need not necessarily be performed directly after every update S202. Instead, the local updating S202 process may [be] “looped”, and the transmission S203 of the locally updated S202 model parameters may be executed … as soon as a suitable communication-network connection is available.” See at least [0073])

Modified GYLLENHAMMAR does not explicitly teach, but Chen teaches to transmit compressed data regarding the first plurality of parameters to the at least one computing device (“a vehicle includes a controller programmed to: train a machine learning model using first local data, obtain a network bandwidth for a channel between the vehicle and a server, determine a level of compression based on the network bandwidth for the channel, compress the trained machine leaning model based on the determined level of compression, transmit the compressed trained machine learning model to the server,” See at least [0005])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Chen with a reasonable expectation of success to facilitate federated learning with bandwidth constraints.
(See at least [0002-0005])

Regarding Claim 14, Modified GYLLENHAMMAR does not explicitly teach, but Chen teaches wherein the at least one processing circuit is further configured to: generate an output using the first ML model that identifies a vehicle function action; and cause, in accordance with the output, one or more components associated with the vehicle function action to execute the vehicle function action. (“Each of the vehicles 101, 103, 105 receives the aggregated machine learning model from the server, and controls the vehicle to drive autonomously based on the aggregated machine learning model. For example, the aggregated machine learning model may be used for object detection, object classification, and the like.” See at least [0016])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Chen with a reasonable expectation of success to facilitate automation of driving control of a plurality of vehicles. (See at least [0016])

Regarding Claim 20, Modified GYLLENHAMMAR does not explicitly teach, but Chen teaches further comprising: generating, by the first computing system, an output using the first ML model that identifies a vehicle function action; and causing, by the first computing system, in accordance with the output, one or more components associated with the vehicle function action to execute the vehicle function action. (“Each of the vehicles 101, 103, 105 receives the aggregated machine learning model from the server, and controls the vehicle to drive autonomously based on the aggregated machine learning model.
For example, the aggregated machine learning model may be used for object detection, object classification, and the like.” See at least [0016])

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to modify the teachings of modified GYLLENHAMMAR to further include the teachings of Chen with a reasonable expectation of success to facilitate automation of driving control of a plurality of vehicles. (See at least [0016])

Allowable Subject Matter

Claims 5 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The relevant prior art does not disclose transmitting, to the at least one vehicle of the plurality of vehicles, an identifier corresponding to the vehicle function and the updated first plurality of parameters as disclosed by the applicant.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Milton (US 20210312725 A1) is pertinent because it discusses implementing a federated machine-learning architecture to modulate a start, stop, or change in vehicle operations. Gao (US 20240265296 A1) is pertinent because it discusses exchanging data between the vehicle and edge servers of the hierarchical federated learning network according to a vehicle-to-edge server association protocol that is based on the vehicular system conditions, and identifying a model for the vehicle from models hosted on the edge servers.
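Identifying a model for a vehicle from a catalog keyed by vehicle type and function, in the spirit of Gurumurthy's "trackable dimensions" and Gao's identification of a model from those hosted on edge servers, might look like the following sketch; the catalog layout and every name in it are assumptions for illustration, not taken from the cited art:

```python
# Hypothetical catalog mapping (vehicle type, vehicle function) to a model
# identifier; a real deployment could also key on domain, geography, or time.
CATALOG = {
    ("truck", "steering"): "model-steer-v3",
    ("truck", "braking"):  "model-brake-v1",
    ("sedan", "steering"): "model-steer-v2",
}

def identify_model(vehicle_type, vehicle_function, catalog=CATALOG):
    """Select the ML model appropriate to a vehicle's type and function."""
    try:
        return catalog[(vehicle_type, vehicle_function)]
    except KeyError:
        raise LookupError(
            f"no model for {vehicle_type}/{vehicle_function}") from None
```

The lookup is deliberately a plain dictionary: the point is only that model selection is driven by the vehicle function dimension rather than by a single global model.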
Kawana (US 20240256892 A1) is pertinent because it discusses a method including: receiving, from one or more server computers through a communication network, a first model; collecting sensor data acquired by a sensor on a first vehicle; identifying a first data item from among the collected sensor data when the first data item is determined to satisfy a criterion; detecting an object contained in the identified first data item by running the first model with the identified first data item as input to the first model; establishing communication with a computer on a second vehicle located at equal to or less than a predetermined distance from the first vehicle; receiving a second data item that is indicated as containing the object from the computer on the second vehicle; generating a training dataset containing the first data item, the second data item and a label of the object as a supervision signal; training with respect to the first model on the training dataset; and transmitting first data representing the trained first model to the one or more server computers through the communication network.

The above-mentioned art, evaluated separately and in combination, does not disclose the entirety of the limitations of dependent claims 5 and 19, since it does not describe transmitting, to the at least one vehicle of the plurality of vehicles, an identifier corresponding to the vehicle function and the updated first plurality of parameters as disclosed by the applicant. No prior art has been found at the time of writing this office action to reject the pending claims 5 and 19 under 35 U.S.C. 102 or 103.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Karston G Evans whose telephone number is (571)272-8480. The examiner can normally be reached Mon-Fri 9:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abby Lin, can be reached at (571)270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KARSTON G. EVANS/
Examiner, Art Unit 3657

Prosecution Timeline

Sep 05, 2024
Application Filed
Jan 15, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602054
CONTROL DEVICE FOR MOBILE OBJECT, CONTROL METHOD FOR MOBILE OBJECT, AND STORAGE MEDIUM
2y 5m to grant Granted Apr 14, 2026
Patent 12600037
REMOTE CONTROL ROBOT, REMOTE CONTROL ROBOT CONTROL SYSTEM, AND REMOTE CONTROL ROBOT CONTROL METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12589493
INFORMATION PROCESSING APPARATUS AND STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12566457
BULK STORE SLOPE ADJUSTMENT VIA TRAVERSAL INCITED SEDIMENT GRAVITY FLOW
2y 5m to grant Granted Mar 03, 2026
Patent 12552023
METHOD FOR CONTROLLING A ROBOT, AND SYSTEM
2y 5m to grant Granted Feb 17, 2026


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
91%
With Interview (+21.3%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 143 resolved cases by this examiner. Grant probability derived from career allow rate.
