DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-20 were pending for examination in Application No. 18/159,767, filed January 26, 2023. In the remarks and amendments received on February 23, 2026, claims 1-2, 8-9, and 15-16 were amended, no claims were cancelled, and no claims were added. Accordingly, claims 1-20 remain pending for examination in the application.
Response to Arguments
Applicant’s arguments filed February 23, 2026, with respect to the rejection of claims 1, 8, and 15 have been fully considered but are moot because the arguments do not apply to the new combination of references being used in the current rejection, which was necessitated by Applicant’s newly submitted amendments.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6, 8-10, 13, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lerner et al. (US-20200349833-A1) in view of Månsson et al. (US-20240135252-A1), and further in view of Verbeke et al. (US-20240212319-A1).
Regarding claim 1, Lerner teaches a method, implemented by programmed one or more processors (“the dynamic speed limit module 125 can be instructions executed by one or processors of vehicle 120,” Para [0027]), comprising:
receiving, from one or more server computers through a communication network, a first model (“the dynamic speed limits module 125 includes algorithm(s) and/or model(s) to predictively map an output, namely a dynamic speed limit, to the received input of real-time data,” Para [0026]);
collecting sensor data acquired by a sensor on a first vehicle (“vehicle sensors provide real-time data indicative of current road conditions, which can be inputs to a predictive model,” Para [0026]);
detecting an object (“road conditions”) contained in the identified first data item by running the first model with the identified first data item as input to the first model (“capturing visual data deemed relevant for determining the current road conditions, such as the surface of the road (e.g., presence of ice or rain), lane occupancy, nearby vehicles 101A-101C,” Para [0022]);
establishing communication with a computer on a second vehicle (“the vehicle 120 can directly receive the real-time data from the nearby devices 101A-101B,” Para [0030]) located at equal to or less than a predetermined distance from the first vehicle (“the nearby vehicles 101A-101C can include vehicles within a predetermined distance from the vehicle 120,” Para [0030]);
receiving a second data item that is indicated as containing the object (“road conditions”) from the computer on the second vehicle (“The dynamic speed limit module 125 can implement various federated learning aspects, including collecting additional real-time data that is also indicative of the current road conditions from communication points external to vehicle 120, such as nearby vehicles 101A-101b,” Para [0025]);
and transmitting first data representing the trained first model to the one or more server computers through the communication network (“In referring to the federated learning aspects described above, vehicle 120 can use V2V communications to collect real-time data from nearby vehicles 101A-101C. As a result, the dynamic speed limit module 125 can improve the vehicle's 120 awareness of the current road conditions based on the varying perspective of the other vehicles 101A-101C,” Para [0029]).
Lerner is not relied upon to teach the following limitations as further claimed. Månsson, however, teaches:
identifying a first data item (“change of behavior in the at least one external vehicle”) from among the collected sensor data when the first data item is determined to satisfy a criterion (“the ego vehicle 1 may observe the change of behavior in the at least one external vehicle 2, changing its speed from the 50 km/hour to 90 km/hour±ΔV (acceptable speed tolerance around 90 km/hour) while changing lanes from the exit lane 103 to lane 102,” Para [0056]);
wherein the criterion comprises vehicle information when the first data item is sensed (“change of behavior in the at least one external vehicle 2, changing its speed from the 50 km/hour to 90 km/hour±ΔV,” Para [0056]), and wherein the vehicle information comprises at least one of a speed that is greater than or equal to a predetermined speed (“The control system 10 may determine if the ego vehicle 1 and/or any one of the external vehicles 2, 3 has changed its speed at a specific point in time compared to its previously-registered speed,” Para [0053]), a braking when the speed is greater than or equal to a predetermined speed, steering that is greater than or equal to a predetermined degree or amount, or steering that is greater than or equal to a predetermined amount when the speed is greater than or equal to a predetermined speed.
Lerner and Månsson are not relied upon to teach the following limitations as further claimed. Verbeke, however, further teaches:
generating a training dataset containing the first data item (from “vehicle 1”), the second data item (from “vehicle 2”) and a label of the object (“labelled images”) as a supervision signal (“the control system 10 of the ego vehicle 1 may further be configured to obtain a set of selected one or more labelled images of the at least one object from the remote server 15, transmitted to the server 15 by each vehicle 1, 2 comprised in the fleet of vehicles server,” Para [0066]);
and training with respect to the first model on the training dataset (“The formed training data set for the ML algorithm may be transmitted to the remote server 15 for centrally training the ML algorithm,” Para [0070]).
Månsson is considered analogous to the claimed invention because both are in the same field of training vehicle machine learning models in a federated learning setting. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Månsson into Lerner for the benefit of safer autonomous vehicles for the user.
Verbeke is considered analogous to the claimed invention because both are in the same field of training vehicle machine learning models using federated learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Verbeke into Lerner for the benefit of a more accurate machine learning model.
Regarding claim 2, the rejection of claim 1 is incorporated herein. Lerner in view of Månsson and Verbeke teaches the method of claim 1, and Lerner further teaches
receiving, from the one or more server computers through a communication network, update data that represents a model that is trained with aggregated model information from other edge models (“the dynamic speed limit module 125 can improve the vehicle's 120 awareness of the current road conditions based on the varying perspective of the other vehicles 101A-101C, and increased data points and yield a more optimal dynamic speed limit improving the overall performance of the system,” Para [0029]); and
updating the first model based on the update data (“Using the dynamic speed limit module 125, a static speed limit can be an initial value that is adapted to generate another operating speed deemed most appropriate for the monitored road conditions (based on the captured real-time data),” Para [0032]).
Regarding claim 3, the rejection of claim 1 is incorporated herein. Lerner in view of Månsson and Verbeke teaches the method of claim 1, and Verbeke further teaches wherein the training with respect to the first model comprises training a copy of the received first model (“the control system 10 may be further configured for transmitting the one or more updated model parameters of the ML algorithm to the remote server 15 and receiving a set of globally updated model parameters of the ML algorithm from the remote server 15,” Para [0071]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Verbeke into Lerner for the benefit of keeping sensitive or private data safe (while still allowing improvements to the overall model).
Regarding claim 6, the rejection of claim 1 is incorporated herein. Lerner in view of Månsson and Verbeke teaches the method of claim 1, and Verbeke further teaches wherein the receiving the second data item comprises receiving the second data item (“images of the at least one object 4a, 4b… from the fleet of vehicles”) and an inference result (“corresponding outputs of the FSL model”) of a second model in the second vehicle with respect to detecting of the object in the second data item (“the set of selected one or more images of the at least one object 4a, 4b received from the fleet of vehicles may be entered into a ranking scheme based on the corresponding outputs of the FSL model for the one or more images of each of the one or more objects 4a, 4b,” Para [0068]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Verbeke into Lerner and Månsson for the benefit of more accurate object detection in the model.
Claims 8-10, 13, 15-17, and 20 are device or non-transitory computer-readable medium claims that correspond to method claims 1-3 and 6. These claims are thus rejected for the same reasons as claims 1-3 and 6.
Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lerner et al. (US-20200349833-A1), Månsson et al. (US-20240135252-A1), and Verbeke et al. (US-20240212319-A1) as applied to claims 1, 8, and 15 above, and further in view of Rawat et al. (US-20210326757-A1).
Regarding claim 4, the rejection of claim 1 is incorporated herein. Lerner in view of Månsson and Verbeke teaches the method of claim 1 but is not relied upon to teach the following limitations as further claimed. Rawat, however, further teaches
obtaining, as the first data, a gradient (“a difference between the updated embeddings and the previous embeddings”) between the first model prior to the training (“previous embeddings”) and the first model subsequent (“updated embeddings”) to the training (“The information indicative of the updates may include the locally-updated embeddings (e.g., the updated embeddings or a difference between the updated embeddings and the previous embeddings),” Para [0064]).
Rawat is considered analogous to the claimed invention because both are in the same field of training federated learning models with labelled data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Rawat into Lerner, Månsson, and Verbeke for the benefit of a more accurate federated learning model.
Claims 11 and 18 are device or non-transitory computer-readable medium claims that correspond to method claim 4. These claims are thus rejected for the same reasons as claim 4.
Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lerner et al. (US-20200349833-A1), Månsson et al. (US-20240135252-A1), and Verbeke et al. (US-20240212319-A1) as applied to claims 1, 8, and 15 above, and further in view of Zhou et al. (US-20220292387-A1).
Regarding claim 5, the rejection of claim 3 is incorporated herein. Lerner in view of Månsson and Verbeke teaches the method of claim 3 but is not relied upon to teach the following limitations as further claimed. Zhou, however, further teaches
obtaining, as the first data, a gradient between the received first model (“previously provided gradients from a plurality of workers,” Para [0010]) and the copy of the first model (from the “global machine learning model” that distributes local copies of model parameters/gradients to workers) that is updated by the training (“receive updated gradients from the plurality of workers, calculate a vulnerability weight for each layer of a global machine learning model using the updated gradients, calculate an aggregated gradient using the vulnerability weight and the updated gradients, and update the global machine learning model using the aggregated gradient,” Para [0010]).
Zhou is considered analogous to the claimed invention because both are in the same field of training a federated machine learning model. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Zhou into Lerner, Månsson, and Verbeke for the benefit of a more accurate federated machine learning model.
Claims 12 and 19 are device or non-transitory computer-readable medium claims that correspond to method claim 5. These claims are thus rejected for the same reasons as claim 5.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Lerner et al. (US-20200349833-A1), Månsson et al. (US-20240135252-A1), and Verbeke et al. (US-20240212319-A1) as applied to claims 1 and 8 above, and further in view of Zhang et al. (US-20230221942-A1).
Regarding claim 7, the rejection of claim 1 is incorporated herein. Lerner in view of Månsson and Verbeke teaches the method of claim 1 but is not relied upon to teach the following limitations as further claimed. Zhang, however, further teaches
wherein the generating the training dataset comprises obtaining the label of the object by combining inference results of the first model (“while sensor data samples at the requesting vehicle 100, 404 only captured the rear view, an oblique view, a partial view,” Para [0072]) and a second model in the second vehicle (“the second vehicle 100, 406 may have been positioned to observe a side view of an ambulance which allowed the second vehicle 100, 406 to classify the detected object at that location as an ambulance,” Para [0072]) that detects the object in the second data item (“when multiple different vehicles respond to a query request, the processing system 110, 420 may combine or otherwise augment the responses to obtain an aggregate response before assigning the most likely object type to the detected object,” Para [0073]).
Zhang is considered analogous to the claimed invention because both are in the same field of vehicles performing object detection in the context of federated learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have incorporated the teachings of Zhang into Lerner, Månsson, and Verbeke for the benefit of highly accurate object classification for the federated learning system.
Claim 14 is a device claim that corresponds to method claim 7. This claim is thus rejected for the same reason as claim 7.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL A OMETZ whose telephone number is (571)272-2535. The examiner can normally be reached 6:45am-4:00pm ET Monday-Thursday, 6:45am-1:00pm ET every other Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le, can be reached at 571-272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Rachel Anne Ometz/Examiner, Art Unit 2668 3/9/26
/VU LE/Supervisory Patent Examiner, Art Unit 2668