DETAILED ACTION
This Office Action is in response to the Request for Continued Examination (RCE) filed on 06/24/2025. Claims 1, 3-8, 10-14, and 16-23 are pending and have been considered.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement(s) (IDS(s)) submitted on 06/24/2025 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered by the examiner.
Response to Amendments
Amendments to claim 22 addressing the indefiniteness of the limitation “close proximity” clarify the claim such that it is no longer indefinite; therefore, the previous rejection under 35 U.S.C. 112(b) is withdrawn.
Amendments to claim 23 clarify the claimed limitations such that the claim is no longer indefinite; therefore, the previous rejection under 35 U.S.C. 112(b) is withdrawn.
Response to Arguments
Applicant’s arguments, see Remarks pages , filed 06/24/2025, with respect to the rejection(s) of claim(s) 1, 3-8, 10-14, and 16-23 under 35 U.S.C. 103 have been fully considered but are not persuasive and/or are moot.
In the Remarks, Applicant argued the following:
(a)(i) The cited references fail to teach or suggest “the user data comprises images of a user identified with a driver of a vehicle that utilizes the DMS, the images of the user being acquired via the UE, the UE being separate from the vehicle”;
(a)(ii) the training dataset is a “user-personalized training dataset”; and
(a)(iii) the ML model is generated by “using the user-personalized training dataset to initially train a machine learning model with the user data comprising the images of the driver”.
Regarding point (a)(i), the argument is moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for the teaching or matter specifically challenged in this argument.
Regarding point (a)(ii), Sobhany explicitly teaches retraining the model based on parameters of a particular driver to personalize the model, as described in the rejection below; therefore, Sobhany teaches the training dataset as a user-personalized training dataset.
Regarding point (a)(iii), Applicant argues that Sobhany does not disclose the use of a user-personalized training dataset to initially train the model. However, the Examiner maintains that Sobhany teaches this limitation: Sobhany's retraining of the model based on parameters of a particular driver to personalize the model constitutes an initial training of the model for that driver, as such retraining is an instance of initially training a model for a specific case. Further, Applicant argues that Sobhany teaches initially training the model using a conventional training dataset that aggregates data from several drivers. Because retraining a model generates a new model, the previous training of the model does not exclude the retraining of the model using the user-personalized dataset from being an instance of initially training a model for a specific case/situation.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 3-8, 10-14, and 16-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Independent claims 1, 8, and 14 recite “generate a machine learning trained model using the user-personalized training dataset to initially train a machine learning model with the user data comprising the images of the driver”; however, in view of the specification, this limitation is unclear regarding “initially” training the machine learning model. The specification does not appear to provide support for training an untrained model using the user-personalized training dataset comprising images of the driver, though it does provide support for retraining using that dataset. In light of the specification, the limitation is being interpreted such that retraining a machine learning model constitutes initially training the machine learning model for the user.
Claims 3-7, 10-13, and 16-23 are dependent on claims 1, 8, or 14 and do not cure the deficiencies thereof; therefore, they are rejected for the same reason.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 3-5, 8, 11, 13-14, 16-18, and 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sobhany (US 10967873 B2), henceforth referred to as Sobhany, in view of Feng et al. (US 11481483 B2), henceforth referred to as Feng, and further in view of Enhancing Active Transportation and Demand Management (ATDM) with Advanced and Emerging Technologies and Data Sources (https://ops.fhwa.dot.gov/publications/fhwahop19010/ch2.htm, accessed from the archive.org Wayback Machine, archived 10/18/2020), henceforth referred to as USDoT.
Regarding Claim 1, Sobhany teaches A computing device, comprising:
a memory configured to store computer-readable instructions (col 1 line 56-59 : “A non-transitory computer readable storage medium stores executable instructions that, when executed, cause the processor to apply a model to the data received from the sensors.”); and
a processor configured to execute the computer-readable instructions to cause the computing device to (col 4 line 50-54 : “The driver attention server 130 receives data indicative of a plurality of parameters of the driver and determines the driver's state of attention based on the plurality of parameters. The driver attention server 130 is described further with respect to FIG. 2.”, col 5 line 14-26 : “FIG. 2 is a block diagram illustrating functional modules executed by the driver attention server 130, according to some embodiments. In some embodiments, the driver attention server 130 executes an attention determination module 215 and stores a model 205, a user account database 210, and a user attention history database 220. The modules can comprise software (e.g., computer-readable instructions executable by a processor of the driver attention server 130), hardware (e.g., one or more ASICs or other special-purpose circuitry), or a combination of software and hardware. Other embodiments of the driver attention server 130 can execute additional, fewer, or different modules, and the functionality can be distributed differently between the modules.”):
store, in the secure location of the memory as part of a user-personalized training dataset, user data received via an encrypted communication channel established between a server and a user equipment (UE), the user data comprising images of a driver of a vehicle that are acquired via the UE, the UE being separate from the vehicle (col 1 line 53-55 : “In some embodiments, the vehicle includes multiple sensors each configured to measure a different parameter of a driver of the vehicle.”, col 2 line 30-48 : “The sensors 112 each measure a parameter associated with a driver of the vehicle. The measured parameter can be any parameter related to the driver's state of attentiveness, including parameters that describe a position of at least a portion of the driver's body in the vehicle, parameters that quantify or qualify an expression on the driver's face, or parameters that measure a direction or rate of change of direction of the driver's gaze. Accordingly, the sensors 112 can include cameras, force sensors in a steering wheel or seat, touch sensors in the steering wheel or other components, or any other type of sensor that can output information relevant for determining the driver parameters. In some cases, the sensors 112 can be coupled to processing modules to process raw sensor data into the parameters of the driver. For example, a camera can be coupled to an eye tracking module that processes image data captured by the camera to track a direction of the driver's gaze, as well as to a facial coding module that processes the image data to determine the driver's facial expression.”, col 4 line 55-67 – col 5 line 1-13 : “The network 140 enables communications between the vehicle 110, peripheral device 120, and driver attention server 130, and can include any of a variety of individual connections via the internet such as cellular or other wireless networks, such as 4G networks, 5G networks, or WiFi. In some embodiments, the network 140 may connect terminals, services, and mobile devices using direct connections such as radio-frequency identification (RFID), near-field communication (NFC), Bluetooth, low-energy Bluetooth (BLE), WiFi, ZigBee, ambient backscatter communications (ABC) protocols, USB, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate one or more of these types of connections be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore the network connections may be selected for convenience over security. The network 140 may comprise any type of computer networking arrangement used to exchange data. For example, the network 140 may be the Internet, a private data network, virtual private network using a public network, and/or other suitable connection(s) that enables components in system environment 100 to send and receive information between the components of system environment 100. The network 140 may also include a public switched telephone network (“PSTN”) and/or a wireless network.”);
generate a machine learning trained model using the user-personalized training dataset to initially train a machine learning model with the user data comprising the images of the driver (col 6 line 29-42 : “Some embodiments of the model 205 can be retrained based on parameters of a particular driver, thus personalizing the model to the driver. In this case, the driver attention server 130 may continuously or periodically receive data indicating parameters of the driver and use the parameters to update the model for application to subsequent measurements of the driver's parameters. In some cases, the parameters of the driver used to retrain the model can be associated with an identifier of the determination made about the driver's state of attention at the time the parameters were collected, as well as information indicating whether the determination was correct to refine the ability of the model 205 to accurately characterize the driver's attentiveness.”); and
transmit the machine learning trained model to the vehicle, which utilizes the machine learning trained model as part of a driver monitoring system (DMS) (col 1 line 51-59 : “A vehicle monitors whether a driver is attentive to driving the vehicle and causes outputs to rectify driver inattentiveness. In some embodiments, the vehicle includes multiple sensors each configured to measure a different parameter of a driver of the vehicle. A processor is coupled to the sensors to receive data from the sensors. A non-transitory computer readable storage medium stores executable instructions that, when executed, cause the processor to apply a model to the data received from the sensors.”, col 5 line 14-30 : “FIG. 2 is a block diagram illustrating functional modules executed by the driver attention server 130, according to some embodiments. In some embodiments, the driver attention server 130 executes an attention determination module 215 and stores a model 205, a user account database 210, and a user attention history database 220. The modules can comprise software (e.g., computer-readable instructions executable by a processor of the driver attention server 130), hardware (e.g., one or more ASICs or other special-purpose circuitry), or a combination of software and hardware. Other embodiments of the driver attention server 130 can execute additional, fewer, or different modules, and the functionality can be distributed differently between the modules. For example, at least a portion of the functions described below as being performed by the driver attention server 130 can instead be performed by the vehicle control system 115 on the vehicle 110.”, where in the case of a portion of the functions executed by the server including training the model and a portion of the functions executed by the vehicle including executing the model, transmission of the model to the vehicle would be required.).
However, Sobhany does not explicitly teach generate an enclave that is executed in a secure location of the memory and is protected by the processor, the encrypted communication channel between the enclave and user equipment, and the UE being separate from the vehicle.
However, in a similar field of endeavor (secure systems for machine learning applications), Feng teaches a system to generate an enclave that is executed in a secure location of the memory and is protected by the processor (col 3 line 38-43 : “The server is configured with a machine learning controller (ML Controller). The server starts the machine learning controller first before performing machine learning training. And then, in response to a training data uploading request triggered by a user, the terminal uploads the training data to the machine learning controller.”, col 3 line 51-52 : “The machine learning training request is a trigger condition to create the trusted execution environment (Enclave).”, col 4 line 19-23 : “The training data and the machine learning training operation are encapsulated in the trusted execution environment, so that an attack on the training data and the training model launched by malicious software or an illegal program may be avoided.”, further the combination of Sobhany teaching sending training data over an encrypted communication channel and Feng teaching of storing training data in an enclave teaches storing of the received data via an encrypted communication channel established between the enclave and a user equipment in the secure location of the memory.).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to modify the system of Sobhany with the system of Feng “to ensure that the privacy data from one party will not be acquired by another one” (Feng, col 1 line 36-38). However, the combination does not explicitly teach the UE being separate from the vehicle.
However, in a similar field of endeavor (mobile device and vehicle connections), USDoT teaches the UE being separate from the vehicle (§ 2.3 Connected Travelers : “The use of mobile devices, primarily smartphones, has evolved over the years and become applicable in enhancing travel and transportation through ITS. A connected traveler is one that is using a mobile device that generates and transmits status data that could be collected, saved, and used by ITS devices and the corresponding traffic management system, other connected mobile devices, and connected vehicles. The majority of travelers are already connected to a suite of applications and services, e.g., WiFi, GPS data, through a personal device that accurately monitors locations up to a few meters precision. Virtual 3G/4G cellular networks and prevalent open WiFi networks enable travelers to experience uninterrupted connectivity. Approximately 68 percent of American adults currently own a smartphone, and the number is expected to exponentially increase as the years go by.”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to modify the combination of Sobhany and Feng with the mobile device use of USDoT to enhance efficiency by reducing the processing and sensor usage of the vehicle.
Regarding Claim 3, the combination of Sobhany, Feng, and USDoT teaches The computing device of claim 1. Sobhany further teaches wherein the processor is configured to execute the computer-readable instructions to generate the machine learning trained model by re-training a previously-trained machine learning trained model using the user-personalized training dataset (col 6 line 29-42 : “Some embodiments of the model 205 can be retrained based on parameters of a particular driver, thus personalizing the model to the driver. In this case, the driver attention server 130 may continuously or periodically receive data indicating parameters of the driver and use the parameters to update the model for application to subsequent measurements of the driver's parameters. In some cases, the parameters of the driver used to retrain the model can be associated with an identifier of the determination made about the driver's state of attention at the time the parameters were collected, as well as information indicating whether the determination was correct to refine the ability of the model 205 to accurately characterize the driver's attentiveness.”).
Regarding Claim 4, the combination of Sobhany, Feng, and USDoT teaches The computing device of claim 1. Feng further teaches wherein the processor is configured to execute the computer-readable instructions to encrypt the machine learning trained model with a key that is stored in the secure location of the memory to generate an encrypted machine learning trained model (col 6 line 50-55 : “At block S330, the trusted communication link is established between the terminal and the trusted execution environment, in which the trusted communication link is configured to transmit the encryption key of the terminal to the key manager in the trusted execution environment, the key manager being configured to manage the encryption key.”, col 8 line 8-13 : “For example, the machine learning controller may encrypt, in the trusted execution environment, the machine learning model by calling the encryption key of at least one target terminal in the key manager, and then distributes the encrypted model to a corresponding target terminal.”).
Regarding Claim 5, the combination of Sobhany, Feng, and USDoT teaches The computing device of claim 4. Sobhany further teaches wherein the encrypted machine learning trained model is stored in a portion of the memory other than the secure location (col 5 line 14-30 : “FIG. 2 is a block diagram illustrating functional modules executed by the driver attention server 130, according to some embodiments. In some embodiments, the driver attention server 130 executes an attention determination module 215 and stores a model 205, a user account database 210, and a user attention history database 220. The modules can comprise software (e.g., computer-readable instructions executable by a processor of the driver attention server 130), hardware (e.g., one or more ASICs or other special-purpose circuitry), or a combination of software and hardware. Other embodiments of the driver attention server 130 can execute additional, fewer, or different modules, and the functionality can be distributed differently between the modules. For example, at least a portion of the functions described below as being performed by the driver attention server 130 can instead be performed by the vehicle control system 115 on the vehicle 110.”).
Regarding Claim 8, Sobhany teaches A vehicle comprising (col 2 line 17-23 : “The vehicle 110 according to embodiments described herein can be any automotive vehicle, including any vehicle body type (such as cars, trucks, or buses), engine type (such as internal combustion, hybrid, or electric), or driving mode (such as fully manual (human-operated) vehicles, self-driving vehicles, or hybrid-mode vehicles that can switch between manual and self-driving modes).”):
a memory configured to store computer-readable instructions (col 3 line 7-12 : “In some cases, the vehicle control system 115 includes one or more processors, such as a central processing unit (CPU), graphical processing unit (GPU), or neural processing unit (NPU), that executes instructions stored in a non-transitory computer readable storage medium, such as a memory.”); and
a processor configured to execute the computer-readable instructions to cause the vehicle to (col 3 line 7-12 : “In some cases, the vehicle control system 115 includes one or more processors, such as a central processing unit (CPU), graphical processing unit (GPU), or neural processing unit (NPU), that executes instructions stored in a non-transitory computer readable storage medium, such as a memory.”):
establish an encrypted communication channel between the vehicle and a cloud associated with a computing device (col 4 line 55-67 – col 5 line 1-13 : “The network 140 enables communications between the vehicle 110, peripheral device 120, and driver attention server 130, and can include any of a variety of individual connections via the internet such as cellular or other wireless networks, such as 4G networks, 5G networks, or WiFi. In some embodiments, the network 140 may connect terminals, services, and mobile devices using direct connections such as radio-frequency identification (RFID), near-field communication (NFC), Bluetooth, low-energy Bluetooth (BLE), WiFi, ZigBee, ambient backscatter communications (ABC) protocols, USB, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate one or more of these types of connections be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore the network connections may be selected for convenience over security. The network 140 may comprise any type of computer networking arrangement used to exchange data. For example, the network 140 may be the Internet, a private data network, virtual private network using a public network, and/or other suitable connection(s) that enables components in system environment 100 to send and receive information between the components of system environment 100. The network 140 may also include a public switched telephone network (“PSTN”) and/or a wireless network.”);
store an encrypted machine learning trained model received from the cloud via the encrypted communication channel in the memory, the encrypted machine learning trained model being generated via the computing device using a user-personalized training dataset that includes user data identified with the vehicle (col 1 line 53-55 : “In some embodiments, the vehicle includes multiple sensors each configured to measure a different parameter of a driver of the vehicle.”, col 1 line 51-59 : “A vehicle monitors whether a driver is attentive to driving the vehicle and causes outputs to rectify driver inattentiveness. In some embodiments, the vehicle includes multiple sensors each configured to measure a different parameter of a driver of the vehicle. A processor is coupled to the sensors to receive data from the sensors. A non-transitory computer readable storage medium stores executable instructions that, when executed, cause the processor to apply a model to the data received from the sensors.”, col 5 line 14-30 : “FIG. 2 is a block diagram illustrating functional modules executed by the driver attention server 130, according to some embodiments. In some embodiments, the driver attention server 130 executes an attention determination module 215 and stores a model 205, a user account database 210, and a user attention history database 220. The modules can comprise software (e.g., computer-readable instructions executable by a processor of the driver attention server 130), hardware (e.g., one or more ASICs or other special-purpose circuitry), or a combination of software and hardware. Other embodiments of the driver attention server 130 can execute additional, fewer, or different modules, and the functionality can be distributed differently between the modules. 
For example, at least a portion of the functions described below as being performed by the driver attention server 130 can instead be performed by the vehicle control system 115 on the vehicle 110.”, col 6 line 29-42 : “Some embodiments of the model 205 can be retrained based on parameters of a particular driver, thus personalizing the model to the driver. In this case, the driver attention server 130 may continuously or periodically receive data indicating parameters of the driver and use the parameters to update the model for application to subsequent measurements of the driver's parameters. In some cases, the parameters of the driver used to retrain the model can be associated with an identifier of the determination made about the driver's state of attention at the time the parameters were collected, as well as information indicating whether the determination was correct to refine the ability of the model 205 to accurately characterize the driver's attentiveness.”, where in the case of a portion of the functions executed by the server including training the model and a portion of the functions executed by the vehicle including executing the model would require transmission of the model to the vehicle.),
wherein the user data comprises images of a driver of a vehicle that are acquired via the UE (col 2 line 30-48 : “The sensors 112 each measure a parameter associated with a driver of the vehicle. The measured parameter can be any parameter related to the driver's state of attentiveness, including parameters that describe a position of at least a portion of the driver's body in the vehicle, parameters that quantify or qualify an expression on the driver's face, or parameters that measure a direction or rate of change of direction of the driver's gaze. Accordingly, the sensors 112 can include cameras, force sensors in a steering wheel or seat, touch sensors in the steering wheel or other components, or any other type of sensor that can output information relevant for determining the driver parameters. In some cases, the sensors 112 can be coupled to processing modules to process raw sensor data into the parameters of the driver. For example, a camera can be coupled to an eye tracking module that processes image data captured by the camera to track a direction of the driver's gaze, as well as to a facial coding module that processes the image data to determine the driver's facial expression.”), and
wherein the encrypted machine learning trained model is generated using the user-personalized training dataset to initially train a machine learning trained model with the user data comprising the images of the driver (col 6 line 29-42 : “Some embodiments of the model 205 can be retrained based on parameters of a particular driver, thus personalizing the model to the driver. In this case, the driver attention server 130 may continuously or periodically receive data indicating parameters of the driver and use the parameters to update the model for application to subsequent measurements of the driver's parameters. In some cases, the parameters of the driver used to retrain the model can be associated with an identifier of the determination made about the driver's state of attention at the time the parameters were collected, as well as information indicating whether the determination was correct to refine the ability of the model 205 to accurately characterize the driver's attentiveness.”); and
execute a driver monitoring system (DMS) using the encrypted machine learning trained model (col 1 line 51-62 : “A vehicle monitors whether a driver is attentive to driving the vehicle and causes outputs to rectify driver inattentiveness. In some embodiments, the vehicle includes multiple sensors each configured to measure a different parameter of a driver of the vehicle. A processor is coupled to the sensors to receive data from the sensors. A non-transitory computer readable storage medium stores executable instructions that, when executed, cause the processor to apply a model to the data received from the sensors. When the model is applied by the processor, the processor outputs a determination, based on the parameters of the driver, of whether the driver is attentive to driving the vehicle.”).

However, Sobhany does not explicitly teach generate a vehicle enclave that is executed in a secure location of the memory protected by the processor; and establishing encrypted communication between the vehicle enclave and a cloud enclave and user equipment, and the UE being separate from the vehicle.
However, in a similar field of endeavor (secure systems for machine learning applications), Feng teaches a system to generate a vehicle enclave that is executed in a secure location of the memory protected by the processor (col 3 line 38-43 : “The server is configured with a machine learning controller (ML Controller). The server starts the machine learning controller first before performing machine learning training. And then, in response to a training data uploading request triggered by a user, the terminal uploads the training data to the machine learning controller.”, col 3 line 51-52 : “The machine learning training request is a trigger condition to create the trusted execution environment (Enclave).”, col 4 line 19-23 : “The training data and the machine learning training operation are encapsulated in the trusted execution environment, so that an attack on the training data and the training model launched by malicious software or an illegal program may be avoided.”, further the combination of Sobhany teaching sending training data over an encrypted communication channel between the vehicle and the cloud and Feng teaching of using enclaves for sensitive data and of storing training data in an enclave renders obvious transmission between a vehicle enclave and a cloud enclave as the same data is used on both the vehicle and the cloud which needs to be protected.).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to modify the system of Sobhany with the system of Feng “to ensure that the privacy data from one party will not be acquired by another one” (Feng, col 1 line 36-38). However, the combination does not explicitly teach the UE being separate from the vehicle.
However, in a similar field of endeavor (mobile device and vehicle connections), USDoT teaches the UE being separate from the vehicle (§ 2.3 Connected Travelers : “The use of mobile devices, primarily smartphones, has evolved over the years and become applicable in enhancing travel and transportation through ITS. A connected traveler is one that is using a mobile device that generates and transmits status data that could be collected, saved, and used by ITS devices and the corresponding traffic management system, other connected mobile devices, and connected vehicles. The majority of travelers are already connected to a suite of applications and services, e.g., WiFi, GPS data, through a personal device that accurately monitors locations up to a few meters precision. Virtual 3G/4G cellular networks and prevalent open WiFi networks enable travelers to experience uninterrupted connectivity. Approximately 68 percent of American adults currently own a smartphone, and the number is expected to exponentially increase as the years go by.”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to modify the combination of Sobhany and Feng with mobile device use of USDoT to enhance efficiency by reducing the processing and sensor usage of the vehicle.
Regarding Claim 11, the combination of Sobhany, Feng, and USDoT teaches The vehicle of claim 8; further, Sobhany teaches wherein the encrypted communication channel is established in response to a handshake request transmitted to the cloud enclave that is initiated by the vehicle (col 4 line 55-67 – col 5 line 1-13 : “The network 140 enables communications between the vehicle 110, peripheral device 120, and driver attention server 130, and can include any of a variety of individual connections via the internet such as cellular or other wireless networks, such as 4G networks, 5G networks, or WiFi. In some embodiments, the network 140 may connect terminals, services, and mobile devices using direct connections such as radio-frequency identification (RFID), near-field communication (NFC), Bluetooth®, low-energy Bluetooth® (BLE), WiFi®, ZigBee®, ambient backscatter communications (ABC) protocols, USB, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate one or more of these types of connections be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore the network connections may be selected for convenience over security. The network 140 may comprise any type of computer networking arrangement used to exchange data. For example, the network 140 may be the Internet, a private data network, virtual private network using a public network, and/or other suitable connection(s) that enables components in system environment 100 to send and receive information between the components of system environment 100. The network 140 may also include a public switched telephone network (“PSTN”) and/or a wireless network.”, in order for there to be an encrypted communication channel, it is required that a handshake request be transmitted.).
Regarding Claim 13, the combination of Sobhany, Feng, and USDoT teaches The vehicle of claim 8; further, Sobhany teaches further comprising:
a sensor configured to acquire further user data (col 2 line 30-41 : “The sensors 112 each measure a parameter associated with a driver of the vehicle. The measured parameter can be any parameter related to the driver's state of attentiveness, including parameters that describe a position of at least a portion of the driver's body in the vehicle, parameters that quantify or qualify an expression on the driver's face, or parameters that measure a direction or rate of change of direction of the driver's gaze. Accordingly, the sensors 112 can include cameras, force sensors in a steering wheel or seat, touch sensors in the steering wheel or other components, or any other type of sensor that can output information relevant for determining the driver parameters.”),
wherein the encrypted machine learning trained model is generated via the computing device using the user-personalized training dataset that includes the user data and the further user data (col 5 line 59-67 – col 6 line 1-2 : “In other cases, the model 205 includes a trained machine learning model, where the model 205 is represented, for example, as an artifact of weights and biases resulting from the training of the model. In some embodiments, the driver attention server 130 trains the model 205 using aggregated attentiveness data from many drivers. For example, the driver attention server 130 can use aggregated data sets that each include multiple parameters of drivers collected over various periods of time and labeled according to a level of attention of the driver at the time the parameters were measured.”).
Regarding Claim 14, it recites a computer-readable medium with limitations substantially the same as claim 1 above, therefore it is rejected for the same reason.
Regarding Claim 16, it recites a computer-readable medium with limitations substantially the same as claim 3 above, therefore it is rejected for the same reason.
Regarding Claim 17, it recites a computer-readable medium with limitations substantially the same as claim 4 above, therefore it is rejected for the same reason.
Regarding Claim 18, it recites a computer-readable medium with limitations substantially the same as claim 5 above, therefore it is rejected for the same reason.
Regarding claim 23, the combination of Sobhany, Feng, and USDoT teaches The computing device of claim 17; further, Feng renders obvious wherein the processor is configured to execute the computer-readable instructions to cause the computing device to establish a further encrypted communication channel between the computing device and a vehicle enclave, and to transmit the encrypted machine learning trained model to the vehicle enclave via the further encrypted communication channel (col 8 line 8-13 : “For example, the machine learning controller may encrypt, in the trusted execution environment, the machine learning model by calling the encryption key of at least one target terminal in the key manager, and then distributes the encrypted model to a corresponding target terminal.”, where transmission between environments is disclosed to be encrypted and transmission of the encrypted machine learning model is disclosed; thus, transmission of the encrypted machine learning model from one environment to another would be encrypted), and further Feng teaches
wherein the encrypted machine learning trained model is stored in an unsecured portion of the memory, the encrypted machine learning trained model, upon being decrypted, being executed within the vehicle enclave in the secure location of the memory (col 3 line 51-52 : “The machine learning training request is a trigger condition to create the trusted execution environment (Enclave).”, col 4 line 19-23 : “The training data and the machine learning training operation are encapsulated in the trusted execution environment, so that an attack on the training data and the training model launched by malicious software or an illegal program may be avoided.”, where Feng's teaching of an enclave that creates a trusted execution environment, along with the machine learning model of Sobhany, teaches execution of the machine learning trained model in a trusted environment as disclosed by Feng.).
Claim(s) 6-7 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sobhany, Feng, and USDoT and further in view of Knauth et al ("Integrating Remote Attestation with Transport Layer Security" published 2018) henceforth referred to as Knauth.
Regarding Claim 6, the combination of Sobhany, Feng, and USDoT teaches The computing device of claim 1; however, the combination does not explicitly teach wherein the processor is configured to execute the computer-readable instructions to cause the computing device to establish the encrypted communication channel via an attestation procedure performed with the UE.
However, in a similar field of endeavor (systems for secure communication), Knauth teaches wherein the processor is configured to execute the computer-readable instructions to cause the computing device to establish the encrypted communication channel via an attestation procedure performed with the UE (pg 1 Introduction : “An integral part of the Intel SGX architecture is the ability to perform attestation. The attester wants to convince the challenger that it is a genuine Intel SGX enclave running on an up-to-date platform. At the end of the attestation process the enclave has convinced the challenger that it is genuine. Based on the enclave’s attested attributes, the challenger decides whether to trust the enclave or not.”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to modify the combination of Sobhany, Feng, and USDoT with the attestation of Knauth “to protect against man-in-the-middle attacks” (Knauth pg 1 Introduction).
Regarding Claim 7, the combination of Sobhany, Feng, and USDoT teaches The computing device of claim 4; however, the combination does not explicitly teach wherein the processor is configured to execute the computer-readable instructions to cause the computing device to establish a further encrypted communication channel between the computing device and the vehicle using an attestation request that is initiated by the computing device, and to transmit the encrypted machine learning trained model to the vehicle via the further encrypted communication channel.
However, in a similar field of endeavor (systems for secure communication), Knauth teaches wherein the processor is configured to execute the computer-readable instructions to cause the computing device to establish a further encrypted communication channel between the computing device and the vehicle using an attestation request that is initiated by the computing device, and to transmit the encrypted machine learning trained model to the vehicle via the further encrypted communication channel (pg 1 Introduction : “An integral part of the Intel SGX architecture is the ability to perform attestation. The attester wants to convince the challenger that it is a genuine Intel SGX enclave running on an up-to-date platform. At the end of the attestation process the enclave has convinced the challenger that it is genuine. Based on the enclave’s attested attributes, the challenger decides whether to trust the enclave or not.”, as the combination of Sobhany and Feng teaches the transmission of the encrypted machine learning trained model to the vehicle via encrypted communication, and Knauth teaches attestation for secure encrypted communication, the combination teaches the encrypted communication and attestation protocol as a further encrypted communication channel; and, as the transmission of the machine learning trained model is communication, it would be transmitted through the further encrypted communication channel.).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to modify the combination of Sobhany, Feng, and USDoT with the attestation of Knauth “to protect against man-in-the-middle attacks” (Knauth pg 1 Introduction).
Regarding Claim 19, it recites a computer-readable medium with limitations substantially the same as claim 6 above, therefore it is rejected for the same reason.
Regarding Claim 20, it recites a computer-readable medium with limitations substantially the same as claim 7 above, therefore it is rejected for the same reason.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sobhany, Feng, and USDoT and further in view of Ayal Yogev ("Secure enclave protection for AI and ML", www.helpnetsecurity.com/2020/12/15/secure-enclave-protection-ai-ml, accessed via the archive.org Wayback Machine, log date 12/15/2020) henceforth referred to as Yogev.
Regarding Claim 10, the combination of Sobhany, Feng, and USDoT teaches The vehicle of claim 8; further, Feng teaches wherein the processor is configured to execute the computer-readable instructions to decrypt the encrypted machine learning trained model using a decryption key that is stored in the secure location of the memory (col 8 line 8-15 : “For example, the machine learning controller may encrypt, in the trusted execution environment, the machine learning model by calling the encryption key of at least one target terminal in the key manager, and then distributes the encrypted model to a corresponding target terminal. The target terminal may decrypt the encrypted model by using its own encryption key after receiving the encrypted model.”). However, the combination does not explicitly teach to store the decrypted machine learning trained model in the secure location of the memory.
However, in a similar field of endeavor (secure systems for machine learned models), Yogev teaches to store the decrypted machine learning trained model in the secure location of the memory (pg 2 : “But running and storing machine learning algorithms within the confines of a secure enclave assures that proprietary learning techniques are kept in the hands of their owners, even when those algorithms run in insecure environments.”).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to modify the combination of Sobhany, Feng, and USDoT with the teachings of Yogev to assure “that proprietary learning techniques are kept in the hands of their owners, even when those algorithms run in insecure environments” (Yogev pg 2).
Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sobhany, Feng, and USDoT and further in view of Harata et al (US 11709666 B2) henceforth referred to as Harata.
Regarding Claim 12, the combination of Sobhany, Feng, and USDoT teaches The vehicle of claim 8. However, the combination does not explicitly teach wherein the processor is configured to execute the computer-readable instructions to cause the vehicle to store the encrypted machine learning trained model in the memory conditioned upon approval of a consent request transmitted from the cloud enclave to a user equipment (UE).
However, in a similar field of endeavor (approval systems for electronic vehicle control updates), Harata teaches wherein the processor is configured to execute the computer-readable instructions to cause the vehicle to store the encrypted machine learning trained model in the memory conditioned upon approval of a consent request transmitted from the cloud enclave to the UE (col 3 line 27-46 : “In an aspect of the present disclosure, a center device manages a program update of a vehicle. A vehicular master device is communicable with the center device. Responsive to a user giving approval for program update by using a first device not being a possession owned by the user, an approval information receiving unit in the center device receives approval information of the user as first approval information. When the first approval information is received by the approval information receiving unit, an approval information management unit stores in an approval information storage unit and manages the received first approval information in association with vehicle information of the user. An approval information transmission unit transmits the first approval information to the user's vehicle side. In the vehicular master device, an approval information reception unit executes reception of the first approval information transmitted from the center device. When the first approval information is received by the approval information reception unit, a program rewrite unit performs rewriting of the program.”, where storing of a machine learned model is an update to the vehicle).
It would have been obvious to a person having ordinary skill in the art prior to the effective filing date to modify the combination of Sobhany, Feng, and USDoT with the vehicle updating of Harata to “make it possible for even a user who does not have his/her own mobile terminal or a user of a vehicle not equipped with an in-vehicle display to give approval for program update and that make it possible to appropriately perform rewriting of a program” (Harata col 3 line 21-26).
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sobhany, Feng, and USDoT and further in view of Palanisamy (US 9942043 B2) henceforth referred to as Palanisamy.
Regarding claim 21, the combination of Sobhany, Feng, and USDoT teaches The computing device of claim 1; however, the combination does not explicitly teach wherein the images of the user are acquired via the UE as part of an application that is triggered to execute in response to an authentication procedure being completed between the UE and the DMS.
However, in a similar field of endeavor (user authentication), Palanisamy teaches wherein the images of the user are acquired via the UE as part of an application that is triggered to execute in response to an authentication procedure being completed between the UE and the DMS (col 18 line 45-57 : “FIG. 7 illustrates a flow diagram of a process 700 that can be performed by a communication device to use a token or sensitive information, according to some embodiments. A user may interact with a user interface of the communication device to execute or access an application installed on the communication device. The application may request the user to enter user authentication data on a user interface. At block 702, process 700 may receive the user authentication data from the user on a user interface of the communication device. The application may authenticate the user based on the received user authentication data, and retrieve an encrypted token or sensitive information, and an encrypted session key from memory for use by the application.”, as Sobhany and Feng teach execution of an application between a user equipment and a DMS, including acquiring images, and Palanisamy teaches authentication for execution of an application, the combination teaches execution of the application in response to an authentication procedure being completed between the UE and the DMS.).
It would have been obvious to a person having ordinary