DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-3, 5-11, 13-18 and 20 are pending, of which claims 1, 9 and 16 are in independent form.
Claims 1-3, 5-11, 13-18 and 20 are rejected under 35 U.S.C. 103.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1-3, 5-11, 13-18 and 20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s Argument:
Applicant argues, on pages 7-9 of the “Remarks”, that “The cited references do not teach or suggest at least "wherein the metadata includes information about a computing power of a processor of the vehicle, wherein a weight for the trained local machine learning model is determined based on the computing power of the processor in the metadata, and wherein the aggregated machine learning model is generated based on the trained local machine learning model, the determined weight for the trained local machine learning model, and another local machine learning model trained by another vehicle and another weight determined for the another local machine learning model," as recited in amended independent claim 1”.
Examiner’s Response:
Applicant’s arguments, see “Remarks”, filed 10/17/2025, with respect to the rejection(s), regarding “wherein the metadata includes information about a computing power of a processor of the vehicle, wherein a weight for the trained local machine learning model is determined based on the computing power of the processor in the metadata”, of claim(s) 1, 9 and 16 under 35 USC 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Roth; Holger Reinhard et al. (US 20220366220 A1) [Roth] in view of Wang; Jiajing et al. (US 10963734 B1) [Wang], and further in view of Bernat; Francesc Guim et al. (US 20210397999 A1) [Bernat].
Furthermore, the combination of Roth and Wang clearly teaches, wherein the aggregated machine learning model is generated based on the trained local machine learning model (FIG. 4 illustrates adjustment of learnable aggregation weights during training rounds between a federated server and a plurality of client locations, according to at least one embodiment ¶ [0006]. Also see ¶ [0056], [0086], [0092], [0095], [0099], [0100]) the determined weight for the trained local machine learning model, and another local machine learning model trained by another vehicle and another weight determined for the another local machine learning model (federated learning or federated training is neural network training using data and/or local neural network models 108, 116, 124 from edge devices or clients 102, 110, 118 at a plurality of locations. In at least one embodiment, a federated server 132 performs federated training or federated learning of a global model 134 by aggregating neural network data values received from edge devices or clients 102, 110, 118, such as hospital computing systems, where each hospital is located at different geographic locations, and training said global model 134 using said neural network data values. A federated server 132 further facilitates federated training of or federated learning by local models 108, 116, 124 by distributing neural network weights values and/or updated models from a global model 134 to local models 108, 116, 124 usable by edge devices or clients 102, 110, 118 located at a plurality of locations ¶ [0056]. Also see ¶ [0059], [0069]-[0070], [0072]-[0073], [0081]-[0082]).
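For illustration only, the weighted aggregation mechanism discussed above (a FedAvg-style combination of local models, where each client's contribution is scaled by a per-client weight such as one derived from its compute capability) can be sketched as follows. This sketch is the examiner's generic illustration of weighted federated averaging; it is not code from Roth, Wang, Bernat, or the instant application, and the function and parameter names are hypothetical.

```python
# Illustrative sketch of weighted federated averaging (hypothetical; not from
# any cited reference): each client's trained local model parameters are
# combined into a global model using a per-client weight, e.g., a weight
# derived from the client's computing power reported in its metadata.

def aggregate(local_models, weights):
    """Combine per-client parameter vectors into one global model.

    local_models: list of dicts mapping parameter name -> list of floats
    weights: list of non-negative per-client weights (e.g., compute-based)
    """
    total = sum(weights)
    # Normalize so the per-client weights sum to 1.
    norm = [w / total for w in weights]
    global_model = {}
    for name in local_models[0]:
        size = len(local_models[0][name])
        # Weighted sum of each parameter across all clients.
        global_model[name] = [
            sum(norm[i] * local_models[i][name][j]
                for i in range(len(local_models)))
            for j in range(size)
        ]
    return global_model
```

Under this sketch, a client with greater computing power (and hence a larger weight) contributes proportionally more to the aggregated model, consistent with the claim language mapped above.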
Examiner’s Note
Examiner has reviewed claims 1 and 16; according to the specification (¶ [0022], [0034]), the “controller” has been specified as “one or more processors”. Therefore, claims 1 and 16 comply with 35 USC 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 5-11, 13-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Roth; Holger Reinhard et al. (US 20220366220 A1) [Roth] in view of Wang; Jiajing et al. (US 10963734 B1) [Wang], and further in view of Bernat; Francesc Guim et al. (US 20210397999 A1) [Bernat].
Regarding claims 1, 9 and 16, Roth discloses, a vehicle comprising: a controller programmed to: train a local machine learning model using first local data (A federated server 132 further facilitates federated training of or federated learning by local models 108, 116, 124 by distributing neural network weights values and/or updated models from a global model 134 to local models 108, 116, 124 usable by edge devices or clients 102, 110, 118 located at a plurality of locations ¶ [0056]-[0057], [0062]-[0064]);
transmit the trained local machine learning model [and the metadata to a server] (Figs. 2 and 3, Model Transfer to the Federated Server);
receive an aggregated machine learning model from the server (FIG. 1 is a block diagram illustrating an example architecture for federated learning by one or more neural networks 108, 116, 124, 134 to process medical and/or other data, according to at least one embodiment. In at least one embodiment, a federated server 132 collects neural network neural network weights comprising numerical values calculated as a result of neural network training, and/or aggregation weights from one or more edge devices or clients 102, 110, 118, such as computing systems belonging to hospitals at different locations. In at least one embodiment, a federated server 132 is a computing system comprising hardware components and memory containing software instructions that, when executed, train a global model 134 according to neural network data values collected from one or more edge devices or clients 102, 110, 118, such as computing systems belonging to hospitals at different locations. In at least one embodiment, neural network weights comprise one or more numerical values or other data values associated with one or more neural networks ¶ [0056]. FIG. 3 is a block diagram illustrating an architecture to perform federated learning using learnable aggregation weights 352, 354, 356, according to at least one embodiment ¶ [0077]-[0090]);
and train the aggregated machine learning model using second local data, wherein the aggregated machine learning model is generated based on the trained local machine learning model (FIG. 4 illustrates adjustment of learnable aggregation weights during training rounds between a federated server and a plurality of client locations, according to at least one embodiment ¶ [0006]. Also see ¶ [0056], [0086], [0092], [0095], [0099], [0100]) the determined weight for the trained local machine learning model, and another local machine learning model trained by another vehicle and another weight determined for the another local machine learning model (federated learning or federated training is neural network training using data and/or local neural network models 108, 116, 124 from edge devices or clients 102, 110, 118 at a plurality of locations. In at least one embodiment, a federated server 132 performs federated training or federated learning of a global model 134 by aggregating neural network data values received from edge devices or clients 102, 110, 118, such as hospital computing systems, where each hospital is located at different geographic locations, and training said global model 134 using said neural network data values. A federated server 132 further facilitates federated training of or federated learning by local models 108, 116, 124 by distributing neural network weights values and/or updated models from a global model 134 to local models 108, 116, 124 usable by edge devices or clients 102, 110, 118 located at a plurality of locations ¶ [0056]. Also see ¶ [0059], [0069]-[0070], [0072]-[0073], [0081]-[0082]).
However, Roth does not explicitly disclose: obtain metadata for hardware elements of the vehicle; and the metadata to a server; and the metadata.
Wang discloses, obtain metadata for hardware elements of the vehicle (For example, continuing from the earlier example where a scene captured by the LIDAR sensor 124 was analyzed by the object detection model 147 to generate the first set of labels, … Metadata associated with the LIDAR point cloud image 400A (now labeled) may appended with the manually created bounding boxes and/or labels [col. 14, ll. 22-48]. Also see [col. 15, ll. 29-35], [col. 19, ll. 32-42]);
the metadata to a server; and the metadata (the method may further comprise receiving, by the one or more processors, an user input flagging the object for further training the object detection model; and in response to receiving the user input, sending, by the one or more processors, the metadata to a remote server for further training the object detection model [col. 3, ll. 36-41]. Also see [col. 4, ll. 3-13], [col. 24, ll. 20-31], [col. 25, ll. 8-23]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the cited references because Wang's system would have allowed Roth's system to obtain metadata for hardware elements of the vehicle and to transmit the metadata to a server. The motivation to combine is apparent in Roth’s reference, because of the need to label data from various sensors of autonomous vehicles in order to improve training of the ML models.
However, neither Roth nor Wang explicitly discloses wherein the metadata includes information about a computing power of a processor of the vehicle, wherein a weight for the trained local machine learning model is determined based on the computing power of the processor in the metadata.
Bernat discloses, wherein the metadata includes information about a computing power of a processor of the vehicle (The example model adjustor circuitry 715 generates layer-specific metadata for each layer in the machine learning model. The layer specific metadata enables a computation of how much time and/or energy will be required to execute the layer given different resources (e.g., at a local node or at a remote node) ¶ [0054]. The example offload controller circuitry 740 of the illustrated example of FIG. 7 estimates resource requirements for local and remote execution of each layer of the machine learning model. In examples disclosed herein, the local and remote resource requirements are estimated based on compute capabilities of the local and remote nodes, respectively, as well as the layer-specific metadata associated with each layer of the model ¶ [0059], [0080]);
wherein a weight for the trained local machine learning model is determined based on the computing power of the processor in the metadata (disclosed herein utilize connectivity telemetry (such as bandwidth and/or latency) and compute/power available in the local fog or far edge to determine the best trade-off to which layers of a machine learning model should be executed on the local edge (e.g., the node 410) and which layers should be executed on the near edge (e.g., a remote node) for a particular network topology with known behavior (e.g., compute required per each layer and data bandwidth required between each pair of layers) ¶ [0041]. The example offload controller circuitry 740 of the illustrated example of FIG. 7 estimates resource requirements for local and remote execution of each layer of the machine learning model. In examples disclosed herein, the local and remote resource requirements are estimated based on compute capabilities of the local and remote nodes, respectively, as well as the layer-specific metadata associated with each layer of the model ¶ [0059], [0080]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the cited references because Bernat's system would have allowed Roth and Wang to explicitly disclose wherein the metadata includes information about a computing power of a processor of the vehicle, wherein a weight for the trained local machine learning model is determined based on the computing power of the processor in the metadata. The motivation to combine is apparent in the Roth and Wang references, because of the need to account for the availability of different compute resources including, for example, available power (e.g., computing power).
Regarding claims 2 and 10, the combination of Roth, Wang and Bernat discloses, an imaging sensor configured to capture the first local data and the second local data (Roth: In at least one embodiment, RISC cores may interact with image sensors (e.g., image sensors of any cameras described herein), image signal processor(s), etc. ¶ [0170]. LIDAR ¶ [0207]-[0209]).
Regarding claims 3, 11 and 18, the combination of Roth, Wang and Bernat discloses, wherein the metadata includes information about a number of sensors of the vehicle and a quality of sensors of the vehicle (Wang: For still another example, the metadata may include other object information 726, such as number of LIDAR points, colors in the object, lighting on the object, etc. For yet another example, the metadata may include scene information 728, such as the scene number, timestamp of the scene, total number of LIDAR points in the scene, lighting and colors of the scene, etc. [col. 23, ll. 62-col. 24, ll. 2]. Also see [col. 19, ll. 54-63]).
Regarding claims 4, 12 and 19, (Canceled).
Regarding claim 5, the combination of Roth, Wang and Bernat discloses, wherein the processor is a graphics processing unit (Roth: FIG. 21 illustrates a multi-graphics processing unit (GPU) system, according to at least one embodiment ¶ [0034]. Also see ¶ [0114], [0115]).
Regarding claims 6 and 14, the combination of Roth, Wang and Bernat discloses, wherein the controller is programmed to operate the vehicle to drive autonomously using the trained aggregated machine learning model (Roth: In at least one embodiment, one or more PPUs 3200 are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications. In at least one embodiment, PPU 3200 is configured to accelerate deep learning systems and applications including following non-limiting examples: autonomous vehicle platforms, deep learning, high-accuracy speech, image, text recognition systems, intelligent video analytics, molecular simulations, drug discovery, disease diagnosis, weather forecasting, big data analytics, astronomy, molecular dynamics simulation, financial modeling, robotics, factory automation, real-time language translation, online search optimizations, and personalized user recommendations, and more ¶ [0461]).
Regarding claims 7, 14 and 20, the combination of Roth, Wang and Bernat discloses, wherein the metadata includes a resolution of the first local data (Roth: In at least one embodiment, organ segmentation 3810 application and/or container may read an image file from a cache, normalize or convert an image file to format suitable for inference (e.g., convert an image file to an input resolution of a machine learning model), and run inference against a normalized image ¶ [0544]).
Regarding claims 8 and 15, the combination of Roth, Wang and Bernat discloses, wherein the controller is programmed to operate one or more actuators of the vehicle to keep the vehicle within lane boundaries using the aggregated machine learning model (Roth: In at least one embodiment, a steering system 1054, which may include, without limitation, a steering wheel, is used to steer vehicle 1000 (e.g., along a desired path or route) when propulsion system 1050 is operating (e.g., when vehicle 1000 is in motion). In at least one embodiment, steering system 1054 may receive signals from steering actuator(s) 1056. In at least one embodiment, a steering wheel may be optional for full automation (Level 5) functionality. In at least one embodiment, a brake sensor system 1046 may be used to operate vehicle brakes in response to receiving signals from brake actuator(s) 1048 and/or brake sensors ¶ [0136], [0137]. In at least one embodiment, vehicle 1000 may include ADAS system 1038. In at least one embodiment, ADAS system 1038 may include, without limitation, an SoC, in some examples. In at least one embodiment, ADAS system 1038 may include, without limitation, any number and combination of an autonomous/adaptive/automatic cruise control (“ACC”) system, a cooperative adaptive cruise control (“CACC”) system, a forward crash warning (“FCW”) system, an automatic emergency braking (“AEB”) system, a lane departure warning (“LDW)” system, a lane keep assist (“LKA”) system, a blind spot warning (“BSW”) system, a rear cross-traffic warning (“RCTW”) system, a collision warning (“CW”) system, a lane centering (“LC”) system, and/or other systems, features, and/or functionality ¶ [0215]).
Regarding claim 17, the combination of Roth, Wang and Bernat discloses, wherein the server determines contributions of the trained local machine learning models based on the metadata received from the plurality of vehicles (Roth: see Figs. 1-3).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S ROSTAMI whose telephone number is (571)270-1980. The examiner can normally be reached Mon-Fri from 9 a.m. to 5 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney can be reached at (571)270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
1/28/2026
/MOHAMMAD S ROSTAMI/Primary Examiner, Art Unit 2154