Prosecution Insights
Last updated: April 19, 2026
Application No. 17/930,553

SYSTEMS AND METHODS FOR IMPLEMENTING DATA SECURITY

Status: Final Rejection (§103)
Filed: Sep 08, 2022
Examiner: KHAN, SHER A
Art Unit: 2497
Tech Center: 2400 — Computer Networks
Assignee: Motional Ad LLC
OA Round: 4 (Final)

Predictions
Grant Probability: 85% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (above average; +27.3% vs TC avg; 284 granted / 333 resolved)
Interview Lift: +23.3% on resolved cases with interview
Avg Prosecution: 2y 7m (typical timeline)
Currently Pending: 12
Total Applications: 345 (across all art units)

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 51.1% (+11.1% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)

Based on career data from 333 resolved cases; TC averages are estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Examiner's Response to Amendment and Arguments

Applicant amended independent claims 1, 18, and 19 and requested withdrawal of the rejection issued in the previous Office action under 35 USC 103 (see Remarks filed on 12/22/2025, first page, 2nd paragraph). The Examiner considered the request but found it moot because new art is introduced in this action. Applicant argued on the 3rd page, 2nd paragraph of the Remarks that Quyang does not teach "the private key is stored in a secure environment of a cryptographic coprocessor associated with the respective sensor". The Examiner considered this argument but likewise found it moot because the ground of rejection has changed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1, 18-19 & 28-29 are rejected under 35 USC 103 as being unpatentable over Kim (US20170185781A1) in view of Bronk (WO 2016048177 A1; the original in English is attached) and Aneja (US20180095806).

Regarding claim 1, Kim teaches a method comprising: for each of one or more devices/processors [[0042] Following the switch from normal mode to secure mode, the secure module 120 may verify the integrity of the first firmware image FI1 stored in the first area 410 of the volatile memory 400 (S400). The block diagram of FIG. 7 shows one example of the first firmware image FI1 including first firmware data FD corresponding to the operating system code of the CP 200, and a digital signature DS derived from (or computationally related to) the first firmware data FD. [0043] The digital signature DS may be derived (or generated) by applying one or more conventionally understood encryption algorithms or data security approaches to the first firmware data FD. For example, a public key capable of decrypting the digital signature DS (that a public key is used to decrypt the digital signature means the signature was created using the corresponding private key) may be stored in the AP 100, where in certain embodiments of the inventive concept, the public key is stored in the AP 100 during the manufacturing of the SoC 10. In other embodiments of the inventive concept, a public key is downloaded from an external source to the AP 100. However the public key is provided, the secure module 120 may verify the integrity of the first firmware image FI1 stored in the first area 410 of the volatile memory 400 using the digital signature DS.];

validating, by at least one processor comprising a cryptographic processor, the digital signatures of respective devices/processors upon booting of each device/processor, the validating including using at least one public key stored in a cryptographic coprocessor associated with the respective sensor to verify the digital signature [[0044] In some embodiments, the secure module 120 may determine whether the first firmware data FD included in the first firmware image FI1 has been changed after the digital signature DS included in the first firmware image FI1 was generated using (e.g.,) the digital signature DS and the public key. In this regard, the secure module 120 may decrypt the digital signature DS using the public key to generate decryption data, and compare the decryption data with the first firmware data FD to determine whether the first firmware data FD has been changed after the digital signature DS was generated. When it is determined that the first firmware data FD has been changed after the digital signature DS was generated, the secure module 120 may determine that the first firmware image FI1 stored in the first area 410 of the volatile memory 400 is unreliable and the verification is deemed to be unsuccessful (S500=NO). Accordingly, a verification fail signal VFS may be communicated from the secure module 120 to the non-secure module 110.
When the non-secure module 110 receives the verification fail signal VFS, it will maintain the reset signal RST in a deactivated state (S600). And when the reset signal RST is maintained as deactivated, the CP 200 is maintained in an OFF state and does not perform a boot operation.] determining, based at least on the validating, that firmware of at least one device/processor of the one or more devices/processors .[[ 0044] In some embodiments, the secure module 120 may determine whether the first firmware data FD included in the first firmware image FI1 has been changed after the digital signature DS included in the first firmware image FI1 was generated using (e.g.,) the digital signature DS and the public key. In this regard, the secure module 120 may decrypt the digital signature DS using the public key to generate decryption data, and compare the decryption data with the first firmware data FD to determine whether the first firmware data FD has been changed after the digital signature DS was generated. When it is determined that the first firmware data FD has been changed after the digital signature DS was generated, the secure module 120 may determine that the first firmware image FI1 stored in the first area 410 of the volatile memory 400 is unreliable and the verification is deemed to be unsuccessful (S500=N0). Accordingly, a verification fail signal VFS may be communicated from the secure module 120 to the non-secure module 110. When the non-secure module 110 receives the verification fail signal VFS, it will maintain the reset signal RST in a deactivated state (S600). And when the reset signal RST is maintained as deactivated, the CP 200 is maintained in an OFF state and does not perform a boot operation.] 
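The verification flow quoted from Kim's [0042]-[0044] (decrypt the digital signature with the stored public key, compare the result against the firmware data, and gate the boot on the outcome) can be sketched as follows. This is an illustrative model only, not code from the record: an HMAC stands in for the public-key signature check a real device would perform with RSA/ECDSA, and every name here (DEVICE_KEY, sign_firmware, verify_and_boot) is hypothetical.

```python
import hashlib
import hmac

# Hypothetical stand-in for the provisioned key material; Kim describes a
# public key stored in the AP 100 at manufacture, used to check a signature
# created with the corresponding private key.
DEVICE_KEY = b"provisioned-at-manufacture"

def sign_firmware(firmware_data: bytes) -> bytes:
    """Model of generating the digital signature DS over the firmware data FD."""
    return hmac.new(DEVICE_KEY, firmware_data, hashlib.sha256).digest()

def verify_and_boot(firmware_data: bytes, signature: bytes) -> str:
    """Model of steps S400-S600: verify the firmware image, then gate the boot."""
    expected = hmac.new(DEVICE_KEY, firmware_data, hashlib.sha256).digest()
    if hmac.compare_digest(expected, signature):
        return "boot"          # verification succeeds: CP may perform its boot operation
    return "held-in-reset"     # verification fails: fail signal, reset stays deactivated

fw = b"first firmware image FI1"
sig = sign_firmware(fw)
assert verify_and_boot(fw, sig) == "boot"               # unmodified firmware boots
assert verify_and_boot(fw + b"!", sig) == "held-in-reset"  # tampered firmware does not
```

The point the sketch makes concrete is the one the examiner relies on: a change to the firmware data after signing makes the comparison fail, and the device is kept out of its boot operation.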
Although Kim teaches booting of devices, Kim does not explicitly teach, however Bronk teaches: at least one processor comprising a cryptographic coprocessor [[0024] The security co-processor 158 may be embodied as any hardware component(s) or circuitry capable of establishing a trusted execution environment. For example, the security co-processor 158 may be embodied as a Trusted Platform Module (TPM), a manageability engine (ME), or an out-of-band processor. In some embodiments, one or more trusted execution environment (TEE) cryptographic keys 168 are stored or provisioned into the security coprocessor 158. For example, a private Enhanced Privacy Identification (EPID) key and/or another private TEE key may be provisioned into the security co-processor 158 during the manufacturing process of the security co-processor 158 or of the in-vehicle computing system 102 or by virtue of a "join" protocol. In other embodiments, EPID or other TEE keys may be provisioned into one or more other components of the in-vehicle computing system 102. Additionally, in some embodiments, a TEE key certificate (e.g., an EPID certificate) is also stored or provisioned into the security co-processor 158. The TEE key certificate may include the public TEE key corresponding to the private TEE key provisioned into the security coprocessor 158 and may be signed by the manufacturer of the security co-processor 158 (e.g., by the manufacturer server 112). It should also be appreciated that, in some embodiments, the security co-processor 158 may directly communicate with a corresponding security co-processor of the coordination server 108. Additionally, in some embodiments, the security co-processor 158 may establish a secure out-of-band communication link with remote devices and/or components (e.g., the coordination server 108).]
and by a cryptographic coprocessor [[0024] The security co-processor 158 may be embodied as any hardware component(s) or circuitry capable of establishing a trusted execution environment. For example... In some embodiments, ...in-vehicle computing system 102. Additionally, in some embodiments, a TEE key certificate (e.g., an EPID certificate) is also stored or provisioned into the security co-processor 158. The TEE key certificate may include the public TEE key corresponding to the private TEE key provisioned into the security coprocessor 158 and may be signed by the manufacturer of the security co-processor 158 (e.g., by the manufacturer server 112). It should also be appreciated that, in some embodiments, the security co-processor 158 may directly communicate with a corresponding security co-processor of the coordination server 108. Additionally, in some embodiments, the security co-processor 158 may establish a secure out-of-band communication link with remote devices and/or components (e.g., the coordination server 108).] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim with the disclosure of Bronk. The motivation or suggestion would have been to implement a system that provides efficient and improved techniques for the secure exchange of sensor information between autonomous vehicles in a way that protects against any malicious actions (abstract, para 0009-0014, Bronk). Although Kim and Bronk teach sensors and booting, they do not explicitly teach, however Aneja teaches: one or more sensors of an autonomous vehicle. [0036] In block 318, the ADAS computing device 102 continues system initialization, booting an operating system and loading various applications.
For example, the boot loader 206 may pass control of the ADAS computing device 102 to an operating system, and the operating system may load an ADAS application to provide advanced driver aid services to the driver. For example, the ADAS application may receive sensor data from the sensors 110 of the vehicle and perform a driver assist function based on the sensor data (e.g., collision warning, lane departure warning, pedestrian detection, adaptive cruise control, autonomous driving, and/or other advanced driver assist function). The operating system may also load additional applications, such as a policy agent 202. Although evaluated by the operating system in some embodiments, it should be understood that in some embodiments the memory training policy 204 may be evaluated by firmware such as the boot loader 206. In block 324, the ADAS computing device 102 checks whether training is required. If not, the method 300 loops back to block 322 to continue evaluating the memory training policies 204. If memory training is required, the method 300 advances to block 326.] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim and Bronk with the disclosure of Aneja. The motivation or suggestion would have been to implement a system that provides efficient techniques for fast booting of the computing device when possible, while also safely performing memory re-training (abstract, para 0001-0002, 0010-0015, Aneja). Regarding claims 18 & 19, these claims are interpreted to be the same as claim 1 and are rejected for the same reasons as set forth for claim 1. Regarding claim 28, although Kim and Aneja teach sensors, they do not explicitly teach, however Bronk teaches: wherein the cryptographic coprocessor generates a session key for secure communications between processors and validated sensors.
[0050] In block 434, the in-vehicle computing system 102 generates an attestation quote of the trusted execution environment established by the in-vehicle computing system 102. For example, in some embodiments, the in-vehicle computing system 102 may generate a secure enclave quote. Further, in block 436, the in-vehicle computing system 102 transmits the sensor data with the generated attestation quote and the TEE key 168 signature to the coordination server 108. It should be appreciated that, in some embodiments, the in-vehicle computing system 102 establishes a SIGMA session with the coordination server 108 for secure communication (e.g., an EPID-based SIGMA session). Further, the in-vehicle computing system 102 may prepare the TEE key 168 signature (e.g., EPID key signature) in any suitable way. For example, in some embodiments, the in-vehicle computing system 102 cryptographically signs the attestation quote and/or the sensor data with the TEE key 168 for transmission to the coordination server 108. In other embodiments, the in-vehicle computing system 102 may cryptographically sign another communication to be transmitted with the attestation quote and the sensor data. Further, in some embodiments, the in-vehicle computing system 102 may also transmit a TEE key 168 certificate (e.g., EPID certificate) to the coordination server 108 (e.g., for verification of the TEE key 168 signature). As discussed herein, the coordination server 108 verifies the key signature and the attestation quote and, upon successful verification, processes the sensor data.] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim and Aneja with the disclosure of Bronk. 
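Bronk's [0050] flow (sign the sensor data and an attestation quote with the TEE key; the coordination server verifies both before processing the data) can be sketched as below. This is a hypothetical model, not code from Bronk: an HMAC with a shared key stands in for the asymmetric EPID/SIGMA machinery, and the names (TEE_KEY, package_sensor_data, server_verify) are invented for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for the provisioned private TEE key 168.
TEE_KEY = b"tee-key-168"

def package_sensor_data(sensor_data: dict) -> dict:
    """Model of the in-vehicle system: attach an attestation quote and a TEE-key signature."""
    payload = json.dumps(sensor_data, sort_keys=True).encode()
    quote = hashlib.sha256(b"attestation-quote" + payload).hexdigest()
    sig = hmac.new(TEE_KEY, payload + quote.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "quote": quote, "signature": sig}

def server_verify(msg: dict) -> bool:
    """Model of the coordination server: verify quote and signature before processing."""
    payload = msg["payload"].encode()
    quote = hashlib.sha256(b"attestation-quote" + payload).hexdigest()
    sig = hmac.new(TEE_KEY, payload + quote.encode(), hashlib.sha256).hexdigest()
    return quote == msg["quote"] and hmac.compare_digest(sig, msg["signature"])

msg = package_sensor_data({"lidar": [1, 2, 3]})
assert server_verify(msg)                       # authentic data is accepted
msg["payload"] = msg["payload"].replace("3", "4")
assert not server_verify(msg)                   # tampered data is rejected
```

The design point mirrors the citation: the server only processes sensor data whose signature and attestation quote verify against the key provisioned into the secure coprocessor.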
The motivation or suggestion would have been to implement a system that provides efficient and improved techniques for the secure exchange of sensor information between autonomous vehicles in a way that protects against any malicious actions (abstract, para 0009-0014, Bronk). Regarding claim 29, although Kim and Aneja teach sensors, they do not explicitly teach, however Bronk teaches: wherein the respective cryptographic coprocessor stores a non-migratable private key for decryption operations. [0024] The security co-processor 158 may be embodied as any hardware component(s) or circuitry capable of establishing a trusted execution environment. For example, the security co-processor 158 may be embodied as a Trusted Platform Module (TPM), a manageability engine (ME), or an out-of-band processor. In some embodiments, one or more trusted execution environment (TEE) cryptographic keys 168 are stored or provisioned into the security coprocessor 158. For example, a private Enhanced Privacy Identification (EPID) key and/or another private TEE key may be provisioned into the security co-processor 158 during the manufacturing process of the security co-processor 158 or of the in-vehicle computing system 102 or by virtue of a "join" protocol. In other embodiments, EPID or other TEE keys may be provisioned into one or more other components of the in-vehicle computing system 102. It is obvious to a person of ordinary skill that, as the private key is stored in a trusted execution environment (TEE), the private key is non-migratable. (Moreover, it is known to a person of ordinary skill that a private key is usually a secret key and usually non-migratable.)] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim and Aneja with the disclosure of Bronk.
The motivation or suggestion would have been to implement a system that provides efficient and improved techniques for the secure exchange of sensor information between autonomous vehicles in a way that protects against any malicious actions (abstract, para 0009-0014, Bronk). Claims 3 & 6-7 are rejected under 35 USC 103 as being unpatentable over Kim in view of Bronk, Aneja, Kravitz, and Angus (US 20160294829). Regarding claim 3, although Kim, Bronk, Aneja, and Kravitz teach an autonomous vehicle, they do not explicitly teach, however Angus teaches: wherein digitally signing the firmware uses [[the]] sensor data including identification data associated with the one or more sensors, the method further comprising anonymizing the identification data before sending the sensor data to the at least one processor. [0104] FIG. 9 is a schematic flow diagram illustrating yet another embodiment of a method 900 for secure provisioning of devices for manufacturing and maintenance. In one embodiment, the method 900 begins and an initialization module 302 provisions 902 a sensor 102 at a manufacturer 101. In one embodiment, the initialization module 302 stores 904 and digitally signs identification data (anonymizing/obfuscating) for the sensor 102, such as a MAC address, a public/private key pair, and/or a unique random number, in the sensor 102.] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim, Bronk, Aneja, and Kravitz with the disclosure of Angus. The motivation or suggestion would have been to implement a system that will provide efficient techniques to securely provision and maintain the devices in the wireless sensor network during the manufacturing process, throughout the supply chain, and during ongoing repairs and maintenance, to ensure the wireless sensor network remains secure.
(abstract, para 0001-0008, Angus) Regarding claim 6, although, Kim, Bronk and Aneja and Kravitz teach autonomous vehicle, they do not teach explicitly, however, Angus teaches wherein anonymizing the identification data comprises obfuscating the identification data. [0104] FIG. 9 is a schematic flow diagram illustrating yet another embodiment of a method 900 for secure provisioning of devices for manufacturing and maintenance. In one embodiment, the method 900 begins and an initialization module 302 provisions 902 a sensor 102 at a manufacturer 101. In one embodiment, the initialization module 302 stores 904 and digitally signs identification data (anonymizing/obfuscating) for the sensor 102, such as a MAC address, a public/private key pair, and/or a unique random number, in the sensor 102.] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim, Bronk, Aneja, and Kravitz with the disclosure of Angus. The motivation or suggestion would have been to implement a system that will provide efficient techniques to securely provision and maintain the devices in the wireless sensor network during the manufacturing process, throughout the supply chain, and during ongoing repairs and maintenance, to ensure the wireless sensor network remains secure. (abstract, para 0001-0008, Angus) Regarding claim 7, although, Kim, Bronk, Aneja and Kravitz teach autonomous vehicle, they do not teach explicitly, however, Angus teaches wherein obfuscating the identification data comprises at least one of: substituting [[the]] identification data values with secondary identification data values, encrypting the identification data, and shuffling the identification data. [0104] FIG. 9 is a schematic flow diagram illustrating yet another embodiment of a method 900 for secure provisioning of devices for manufacturing and maintenance. 
In one embodiment, the method 900 begins and an initialization module 302 provisions 902 a sensor 102 at a manufacturer 101. In one embodiment, the initialization module 302 stores 904 and digitally signs identification data (anonymizing/obfuscating) for the sensor 102, such as a MAC address, a public/private key pair, and/or a unique random number, in the sensor 102.] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim, Bronk, Aneja, and Kravitz with the disclosure of Angus. The motivation or suggestion would have been to implement a system that will provide efficient techniques to securely provision and maintain the devices in the wireless sensor network during the manufacturing process, throughout the supply chain, and during ongoing repairs and maintenance, to ensure the wireless sensor network remains secure. (abstract, para 0001-0008, Angus) Claim 4, is rejected under 35 USC 103 as being unpatentable over Kim in view of Bronk, Aneja, Kravitz and Angus and Stickle (US10397273) Regarding claim 4, although Kim, Bronk, Aneja and Kravitz teach sensor data, they do not teach explicitly, however, Stickle teaches wherein anonymizing the identification data comprises removing the identification data. [Col 24, lines 15-30: At 706, process 700 can aggregate sensor activity information based on attributes of the sensor and/or user associated with the sensor. For example, process 700 can aggregate sensor activity information based on the type of sensor (e.g., web server, database server, mail server, etc.), the industry that the user is in, the geographic region that the user operates in and/or that the sensor was deployed in, the country or countries in which the user operates, etc. In some embodiments, process 700 can remove any potentially identifiable information from the sensor information. 
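Claim 7 recites three alternatives for obfuscating identification data: substituting identifiers with secondary values, encrypting them, and shuffling them. The two simplest alternatives can be sketched as below; this is purely illustrative and not from Angus, all names are invented, and a real system would implement the encryption alternative with an authenticated cipher (e.g., AES-GCM) rather than anything shown here.

```python
import hashlib
import random

def substitute_id(identifier: str, salt: str = "per-deployment-salt") -> str:
    """Substitute an identifier (e.g., a MAC address) with a derived secondary value."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]

def shuffle_ids(identifiers: list, seed: int) -> list:
    """Shuffle identification data so ordering no longer reveals which sensor is which."""
    rng = random.Random(seed)   # seed kept by the anonymizing party
    out = list(identifiers)
    rng.shuffle(out)
    return out

mac = "aa:bb:cc:dd:ee:ff"
secondary = substitute_id(mac)
assert secondary != mac and len(secondary) == 12   # original MAC no longer appears
assert sorted(shuffle_ids(["s1", "s2", "s3"], seed=7)) == ["s1", "s2", "s3"]
```

The substitution is one-way (a hash of the identifier plus a salt), so the secondary value can still be used as a stable key without exposing the original identification data.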
For example, by removing the IP address of the sensor, the domain of the sensor, etc. In some embodiments, process 700 can correlate network activity recorded by the sensor with state changes recorded by the sensor prior to aggregating the information.] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim, Bronk, Aneja, Kravitz, and Angus with the disclosure of Stickle. The motivation or suggestion would have been to implement a system that provides efficient techniques for monitoring network traffic for potentially malicious communications (abstract, Col 01, lines 5-45, Stickle). Claims 5 & 24 are rejected under 35 USC 103 as being unpatentable over Kim in view of Bronk, Aneja, Kravitz, Angus, and Yamoka (WO20120044939). Regarding claims 5 & 24, although Kim, Bronk, Aneja, Kravitz, and Angus teach sensor data, they do not explicitly teach, however Yamoka teaches: wherein the identification data comprises at least one of location data, waveform data, or tag identification data. [page 02 of attached translated copy, Background section: As a communication device using conventional proximity wireless communication, a technique is disclosed in which a wireless communication IC tag is attached to a vehicle etc., and history information is transmitted, together with an ID stored in the IC tag, to a history information storage device connected via a network.] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim, Bronk, Aneja, Kravitz, and Angus with the disclosure of Yamoka. The motivation or suggestion would have been to implement a system that provides efficient techniques for storing operational history with the unique identity of the related portable device.
(abstract, page 02, Yamoka) Claim 8 is rejected under 35 USC 103 as being unpatentable over Kim in view of Bronk, Aneja, Kravitz, Angus and Mishra (WO 2019171227 A1) Regarding claim 8, although Kim, Bronk, Aneja, Kravitz and Angus teach obfuscating data, they do not teach explicitly, however, Mishra teaches wherein obfuscating the identification data comprises learning, by a machine learning coprocessor, secondary identification features associated with the identification data. [page 11, lines 5-20: Training the secondary data-based object identification model may include feeding a suitable machine learning algorithm with primary data (including processed primary data) and secondary data which relate to the same scene (e.g. having been obtained from an optical camera and radar respectively which have the same field of view) and which have been obtained at the same point in time. The primary and secondary data may therefore be two different representations of exactly the same physical scene. As the primary data has been processed and labelled, the machine learning algorithm can use this information to adapt the secondary data-based object identification model to identify corresponding objects in the secondary data. Training the secondary data-based object identification model may include inputting one or both of the segmented and labelled primary data and the segmented and labelled secondary data into the machine learning algorithm during a training process. Training the models may include improving performance of the models (e.g. making the models better at identifying objects present in the data).] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim, Bronk, Aneja, and Kravitz and Angus with the disclosure of Mishra. 
The motivation or suggestion would have been to implement a system that provides efficient, improved techniques for processing identification data. Claim 9 is rejected under 35 USC 103 as being unpatentable over Kim in view of Bronk, Aneja, Kravitz, Angus, and Nethi (US 20170048079). Regarding claim 9, although Kim, Bronk, Aneja, Kravitz, and Angus teach obfuscating data, they do not explicitly teach, however Nethi teaches: further comprising validating the sensor data by determining whether the sensor data is configured in either an expected format, an expected size, or both. [0098] In various embodiments, network/middleware services 402 may include a device protocol abstraction process/service 616 that is operable to convert data received via any or all of adaptors 602-614 into a canonical format using a common data model 642. For example, protocol abstraction process 616 may provide a set of APIs to transform sensor data received via different formats or protocols into a common format. By doing so, the local and remote applications that make use of the received sensor data and/or provide commands to a network asset do not need knowledge of the actual protocol used to communicate with that asset.] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim, Bronk, Aneja, Kravitz, and Angus with the disclosure of Nethi. The motivation or suggestion would have been to implement a system that provides efficient techniques for communications in a network with constraints in terms of bandwidth, latency, etc.
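Claim 9's limitation (validating sensor data against an expected format and an expected size before use) can be sketched as a simple schema check. This is a hypothetical illustration only; the field names, types, and size bound below are invented, not taken from Nethi or the application.

```python
# Hypothetical canonical schema for incoming sensor data: required fields,
# expected Python types, and a bound on payload size.
EXPECTED_FIELDS = {"sensor_id": str, "timestamp": float, "readings": list}
MAX_READINGS = 1024

def validate_sensor_data(data: dict) -> bool:
    """Return True only if the data matches the expected format and size."""
    # Expected format: every required field present with the expected type.
    for field, ftype in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            return False
    # Expected size: bounded number of readings.
    return len(data["readings"]) <= MAX_READINGS

ok = {"sensor_id": "lidar-1", "timestamp": 1.5, "readings": [0.1, 0.2]}
assert validate_sensor_data(ok)                       # well-formed data passes
assert not validate_sensor_data({"sensor_id": "x"})   # missing fields are rejected
```

Checking format and size at the boundary is what lets downstream consumers rely on a single canonical shape, which is the role Nethi's protocol abstraction service plays in the citation.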
(abstract, para 0001-0003, Nethi) Claim 10 is rejected under 35 USC 103 as being unpatentable over Kim in view of Bronk, Aneja, Kravitz and Angus and Vian (US 20080033684) Regarding claim 10, although, Kim, Bronk, Aneja, Kravitz and Angus teach sensor data, they do not teach explicitly, however, Vian teaches wherein the sensor data is simulated detection data. [0032] In this embodiment, the command-and-control architecture 200 further includes a simulated environment and dynamics module 220 configured to perform computations and data management associated with one or more simulated vehicle modules 222. The simulated environment and dynamics module 220 may also reside on the command-and-control computer 102 as part of the command and control software 112. Each simulated vehicle module 222 operatively communicates with the control data network 240 and the health monitoring network 242. The simulated environment and dynamics module 220 is further configured to provide simulated position, attitude, and movement data associated with the simulated vehicles 222, as well as health management data associated with the simulated vehicles 222, to the reformatting module 214 for broadcast onto the control data network 240. Thus, the command and control architecture 200 may advantageously be used for developing test vehicles 110 operating in an environment having both real and simulated vehicle and environmental conditions.[0036] At a block 308, the position reference system 120 monitors the positions and movements of the test vehicles 110, and if applicable, the positions and dynamics of the simulated vehicles are also calculated. The position and dynamics data measured by the position reference system 120 (and computed for the simulated vehicles) are communicated to the command-and-control computer 102 at a block 310. 
In preferred embodiments, the position reference system 120 is capable of measuring each of the six degrees of freedom that define the position and movement of each test vehicle 110, however, in alternate embodiments, the position reference system 120 may suitably measure fewer than six degrees of freedom. Similarly, at a block 312, health monitoring data collected by sensors located on board each of the test vehicles 110, and if applicable, the simulated vehicle health data, are communicated to the command and control computer 102.] Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim, Bronk, Aneja, and Kravitz and Angus with the disclosure of Vian. The motivation or suggestion would have been to implement a system that will provide efficient techniques for health monitoring component configured to monitor health conditions of the one or more vehicles, the control signals being determined at least in part on the health conditions. (abstract, para 0002-0004, Vian) Claim 17 is rejected under 35 USC 103 as being unpatentable over Kim in view of Bronk, Aneja, and Tomatsu (US10083547) Regarding claim 17, although, Kim, Bronk and Aneja, teach sensor data, they do not teach explicitly, however, Tomatsu teaches wherein the portion of data comprises computer executable code. 
[ Col 05, lines 25-45: One general aspect includes a computer program product including a non-transitory memory of an onboard vehicle computer system of a HAV storing computer-executable code that, when executed by the onboard vehicle computer system, causes the onboard vehicle computer system to: analyze external sensor data recorded by one or more external sensors of the HAV to identify a traffic situation, where the external sensor data describes one or more measurements of a physical environment external to the HAV; generate graphical data describing visual feedback that visually depicts information describing the traffic situation; and provide, by a processor of the HAV, the graphical data to an interface device to cause the interface device to display the visual feedback, where the interface device is communicatively coupled to the HAV and operable to receive the graphical data from the processor of the HAV. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. Before the effective filing date of the claimed invention, it would have been obvious to one with ordinary skill in the art to combine the teachings of Kim, Bronk and Aneja with the disclosure of Tomatsu. The motivation or suggestion would have been to implement a system that will provide efficient techniques for providing visual feedback of the traffic situations to the driver of the autonomous vehicle.(abstract, Tomatsu) Claim 20 is rejected under 35 USC 103 as being unpatentable over Kim in view of Bronk, Aneja, and Peterka (US20100217964) Regarding claim 20, although, Kim, Bronk and Aneja teach sensors, they do not teach explicitly, however, Peterka teaches wherein digitally signing comprises storing at least one public key in the one or more computer processors, the at least one public key being associated with one of the one or more sensors. 
[0031] As discussed hereinabove, many conventional device processor platforms determine whether or not the JTAG interface is enabled based on a flag or other information in the boot code image. The boot code can be signed by a digital signature, which uses an authentication method based on encryption to "sign" electronic payloads. Also, optionally, the boot code may be encrypted so as to hide any secret values contained in the boot code and to make reverse engineering more difficult.

[0034] Referring now to FIG. 2, shown is a block diagram of a system 30 for controlling JTAG interface enablement in an end user communication device. The system 30 includes an end user communication device, such as the communication device 10 shown in FIG. 1. The communication device 10 includes a chip serial number 32, which is a unique serial number from one or more particular chips, e.g., processor chips, within the communication device 10. Also, before shipment, the communication device 10 is assumed to be preloaded with a private key 34 unique to the communication device 10. A public key or a digital certificate corresponding to this private key may be stored in the same device or it may be available from a separate repository of public keys and/or digital certificates. Also, before shipment of the communication device 10, the boot code (e.g., security processor boot code) of the communication device 10 is assumed to be configured in a manner that disables the JTAG interface (JTAG=OFF) and the boot code is encrypted using a global encryption key (GEK). The encrypted boot code is shown generally as encrypted boot code 36.

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Kim, Bronk, and Aneja with the disclosure of Peterka.
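The Peterka passage quoted above relies on the standard code-signing pattern: a digest of the boot image is signed with a private key held by the signer, and the device verifies the image at boot using only the corresponding public key before honoring flags such as JTAG enablement. Neither the claims nor the cited references specify an algorithm; the sketch below is illustrative only, using textbook-sized toy RSA parameters (far too small for real use, where 2048-bit RSA or ECDSA would be typical) to show the asymmetry the scheme depends on:

```python
import hashlib

# Toy RSA parameters for illustration only (p=61, q=53 -> n=3233, phi=3120).
# Real boot-code signing uses 2048-bit+ RSA or ECDSA.
N, E, D = 3233, 17, 2753  # public modulus, public exponent, private exponent

def sign_boot_code(boot_code: bytes) -> int:
    """Signer computes a digest of the boot image and signs it with the private key."""
    digest = int.from_bytes(hashlib.sha256(boot_code).digest(), "big") % N
    return pow(digest, D, N)

def verify_boot_code(boot_code: bytes, signature: int) -> bool:
    """Device checks the image at boot using only the public values (N, E)."""
    digest = int.from_bytes(hashlib.sha256(boot_code).digest(), "big") % N
    return pow(signature, E, N) == digest

image = b"JTAG=OFF; bootloader v1.2"
sig = sign_boot_code(image)
assert verify_boot_code(image, sig)                 # genuine signature accepted
assert not verify_boot_code(image, (sig + 1) % N)   # forged signature rejected
```

The property the passage depends on is that verification requires no secret: the device holds only (N, E), so the private exponent never leaves the signer, yet any forged or altered signature fails the check.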
The motivation or suggestion would have been to implement a system that provides efficient techniques to prevent tampering and disclosure during the device boot phase. (abstract, para. 0004-0008, Peterka)

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Bronk, Aneja, and Kravitz (US 2018/0006829).

Regarding claim 21, although Kim, Bronk, and Aneja teach an autonomous vehicle, they do not explicitly teach this limitation. Kravitz, however, teaches further comprising digitally signing the firmware by storing at least one public key in the firmware, the at least one public key being associated with one of the one or more sensors.

[0083] In some embodiments, the disclosed invention may provide software/firmware updates uniquely for devices. For example, an image of the firmware (or a certificate associated with the firmware) is digitally signed by an established firmware signing authority recognized by a trusted public key being held, for example, by the LKSM of a secure component of the subject device to be updated. Using industry standard code signing technology, the firmware may be digitally signed using the private key of the code signing authority. The signed code is transmitted to the subject device. Upon receipt, the subject device first uses the trusted public key in its possession to verify that the code signing authority did, in fact, sign the code. Upon such verification, the subject device may complete a firmware update. (See also para. 0048, 0053-0054.)

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Kim, Bronk, and Aneja with the disclosure of Kravitz. The motivation or suggestion would have been to implement a system that provides efficient techniques for vehicle IoT security and management. (abstract, para. 0006-0010, Kravitz)

Claims 22 & 26 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Bronk, Aneja, and Angus.
Regarding claims 22 & 26, although Kim, Bronk, and Aneja teach an autonomous vehicle, they do not explicitly teach this limitation. Angus, however, teaches wherein digitally signing the firmware uses sensor data including identification data associated with the one or more sensors of the autonomous vehicle, and wherein the instructions cause the one or more computer processors to anonymize the identification data before sending the sensor data to the one or more computer processors.

[0104] FIG. 9 is a schematic flow diagram illustrating yet another embodiment of a method 900 for secure provisioning of devices for manufacturing and maintenance. In one embodiment, the method 900 begins and an initialization module 302 provisions 902 a sensor 102 at a manufacturer 101. In one embodiment, the initialization module 302 stores 904 and digitally signs identification data (anonymizing/obfuscating) for the sensor 102, such as a MAC address, a public/private key pair, and/or a unique random number, in the sensor 102.

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Kim, Bronk, and Aneja with the disclosure of Angus. The motivation or suggestion would have been to implement a system that provides efficient techniques to securely provision and maintain the devices in the wireless sensor network during the manufacturing process, throughout the supply chain, and during ongoing repairs and maintenance, so that the wireless sensor network remains secure. (abstract, para. 0001-0008, Angus)

Claims 23 & 27 are rejected under 35 U.S.C. 103 as being unpatentable over Kim in view of Bronk, Aneja, Angus, and Stickle (US 10,397,273).

Regarding claims 23 & 27, although Kim, Bronk, Aneja, and Angus teach sensor data, they do not explicitly teach this limitation. Stickle, however, teaches wherein anonymizing the identification data comprises removing the identification data.
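The Angus passage above anonymizes/obfuscates sensor identification data (such as a MAC address) at provisioning time, before the data leaves the sensor. Neither the claim language nor the quoted paragraph fixes a particular mechanism; one common approach is a salted one-way hash, which hides the identifier while keeping records from the same sensor linkable. The sketch below is an illustrative assumption, not drawn from the cited references (field names and the salt value are hypothetical):

```python
import hashlib

def anonymize_sensor_record(record: dict, salt: bytes) -> dict:
    """Replace direct identifiers with salted one-way hashes so records from
    the same sensor remain linkable without exposing the raw identity."""
    out = dict(record)
    for field in ("mac_address", "serial_number"):  # hypothetical field names
        if field in out:
            out[field] = hashlib.sha256(salt + out[field].encode()).hexdigest()[:16]
    return out

salt = b"per-deployment-salt"  # in practice, a secret random value per fleet
raw = {"mac_address": "AA:BB:CC:DD:EE:FF", "serial_number": "SN-0042",
       "reading": 17.3}
anon = anonymize_sensor_record(raw, salt)

assert anon["reading"] == raw["reading"]            # measurements untouched
assert anon["mac_address"] != raw["mac_address"]    # identifier obscured
assert anonymize_sensor_record(raw, salt) == anon   # same sensor -> same pseudonym
```

Because the salt is fixed per deployment, the same sensor always maps to the same pseudonym, preserving linkability for aggregation; deleting the identifying fields outright (the Stickle approach relied on for claims 23 & 27) is the stronger alternative when linkability is not needed.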
[Col 24, lines 15-30: At 706, process 700 can aggregate sensor activity information based on attributes of the sensor and/or user associated with the sensor. For example, process 700 can aggregate sensor activity information based on the type of sensor (e.g., web server, database server, mail server, etc.), the industry that the user is in, the geographic region that the user operates in and/or that the sensor was deployed in, the country or countries in which the user operates, etc. In some embodiments, process 700 can remove any potentially identifiable information from the sensor information, for example, by removing the IP address of the sensor, the domain of the sensor, etc. In some embodiments, process 700 can correlate network activity recorded by the sensor with state changes recorded by the sensor prior to aggregating the information.]

Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to combine the teachings of Kim, Bronk, Aneja, and Angus with the disclosure of Stickle. The motivation or suggestion would have been to implement a system that provides efficient techniques for monitoring network traffic for potentially malicious communications. (abstract, Col 01, lines 5-45, Stickle)

Conclusion

The prior art made of record and listed on the PTO-892 and not relied upon is considered pertinent to applicant's disclosure.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHER A KHAN, whose telephone number is (571) 272-8574. The examiner can normally be reached M-F, 8:00 am-5:00 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Eleni A Shiferaw, can be reached at 571-272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

SHER A. KHAN
Examiner, Art Unit 2497

/SHER A KHAN/
Primary Examiner, Art Unit 2497

Prosecution Timeline

Sep 08, 2022 — Application Filed
Sep 29, 2024 — Non-Final Rejection — §103
Dec 20, 2024 — Response Filed
Feb 14, 2025 — Final Rejection — §103
May 20, 2025 — Response after Non-Final Action
May 20, 2025 — Notice of Allowance
Jun 27, 2025 — Response after Non-Final Action
Aug 20, 2025 — Non-Final Rejection — §103
Dec 22, 2025 — Response Filed
Mar 03, 2026 — Final Rejection — §103
Apr 10, 2026 — Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598069 — MONITORING IN DISTRIBUTED COMPUTING SYSTEM — Granted Apr 07, 2026 (2y 5m to grant)
Patent 12562909 — LINKING DIGITAL AND PHYSICAL NON-FUNGIBLE ITEMS — Granted Feb 24, 2026 (2y 5m to grant)
Patent 12537670 — KEY SHARD VERIFICATION FOR KEY STORAGE DEVICES — Granted Jan 27, 2026 (2y 5m to grant)
Patent 12530491 — SELECTIVE DELETION OF SENSITIVE DATA — Granted Jan 20, 2026 (2y 5m to grant)
Patent 12526157 — IDENTITY AUTHENTICATION METHOD AND APPARATUS, AND DEVICE, CHIP, STORAGE MEDIUM AND PROGRAM — Granted Jan 13, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+23.3%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 333 resolved cases by this examiner. Grant probability derived from career allow rate.
