Prosecution Insights
Last updated: April 19, 2026
Application No. 18/164,234

SYSTEMS AND METHODS FOR CYBER SECURITY AND QUANTUM ENCAPSULATION FOR SMART CITIES AND THE INTERNET OF THINGS

Status: Final Rejection (§103, §112, Double Patenting)
Filed: Feb 03, 2023
Examiner: SHAW, YIN CHEN
Art Unit: 2498
Tech Center: 2400 — Computer Networks
Assignee: Lourde Wright Holdings LLC
OA Round: 2 (Final)

Grant Probability: 49% (Moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 5y 6m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 49% (grants 49% of resolved cases; 108 granted / 221 resolved; -9.1% vs TC avg)
Interview Lift: +64.8% (strong; allowance among resolved cases with interview vs. without)
Avg Prosecution: 5y 6m (typical timeline)
Currently Pending: 2
Total Applications: 223 (career history, across all art units)
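The headline figures above are simple ratios, and the arithmetic can be reproduced directly. A minimal sketch follows; the lift definition and the with/without interview split are assumptions inferred from the dashboard, not the analytics vendor's actual methodology:

```python
# Reproducing the examiner-intelligence arithmetic shown above.
# The lift definition and the with/without split are ASSUMPTIONS,
# not the analytics vendor's actual formulas.

granted, resolved = 108, 221
career_allow_rate = granted / resolved  # 0.4886... -> displayed as "49%"

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Relative gain in allowance rate for cases with an examiner interview."""
    return (rate_with - rate_without) / rate_without

# Hypothetical split consistent with the +64.8% figure (illustrative only):
lift = interview_lift(rate_with=0.742, rate_without=0.450)

print(f"{career_allow_rate:.1%}")  # 48.9%
print(f"{lift:+.1%}")              # +64.9%
```

The allow rate rounds to the displayed 49%; any with/without pair in roughly the same ratio would reproduce the quoted lift.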

Statute-Specific Performance

§101: 17.7% (-22.3% vs TC avg)
§103: 57.7% (+17.7% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 221 resolved cases.
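As a cross-check of the panel above: if each "vs TC avg" delta is the examiner's rate minus the Tech Center average (an assumed relationship), every row implies the same TC-average estimate:

```python
# Cross-checking the statute-specific rates against their deltas.
# Assumed relationship: delta = examiner_rate - tc_average.

rates = {  # statute: (examiner rate %, delta vs TC avg %)
    "§101": (17.7, -22.3),
    "§103": (57.7, +17.7),
    "§102": (9.8, -30.2),
    "§112": (11.0, -29.0),
}

tc_avg = {statute: rate - delta for statute, (rate, delta) in rates.items()}
print({s: round(v, 1) for s, v in tc_avg.items()})
# {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```

All four rows point at a 40.0% Tech Center average estimate, which is consistent with a single baseline being used across statutes.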

Office Action

§103, §112, Double Patenting
Detailed Action

This Office action is responsive to the communications dated 08/22/2025 and 03/17/2025. In the communication dated 08/22/2025, claims 1 and 4-8 are amended, claims 11-15 are newly added, and all other claims are previously presented. Claims 1-15 have been examined. Claims 1-15 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority

The instant application, filed on 02/03/2023, is a continuation-in-part (CIP) of 18/163,790, filed on 02/02/2023, and claims priority benefit from provisional application numbers 63/306,893 and 63/306,889, both filed on 02/04/2022. The prior-filed applications provide adequate support in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. Therefore, the effective filing date for the pending claims is 02/04/2022.

Specification

The replacement specification filed on 03/17/2025, with corrected paragraph numbering, is entered, as it is acceptable for examination purposes.

Response to Arguments

Applicant’s argument, filed on 03/17/2025, with respect to the rejection of claims 1-10 under 35 USC 112(b) has been fully considered. The argument is found persuasive in view of the amendments to pending claims 1, 4 and 6-7. Therefore, the previously issued rejection of pending claims 1-10 under 35 USC 112(b) is withdrawn.
Applicant’s argument, filed on 03/17/2025, with respect to the double patenting rejection of pending claims 1-10 has been fully considered. The previously issued double patenting rejection is withdrawn in view of the amendment.

Applicant’s argument, filed on 03/17/2025, with respect to the rejection of pending claims 1-10 under 35 USC 103 has been fully considered, and the previously issued rejection of pending claims 1-10 under 35 USC 103 is withdrawn. However, upon further consideration, new grounds of rejection are made, at least, in view of the previously applied references by Bakthavatchalam and Wright in addition to a newly applied reference by Ding et al. (WO 2019/071026 A1), hereinafter Ding. Specifically, Ding cures the deficiency of the combination of the previously applied Bakthavatchalam and Wright references with respect to the newly amended claim features, such as the post-quantum encryption techniques, lattice-based cryptography, and/or Lamport signatures, recited in amended claims 1, 14, and 15. Please refer to the details of the prior-art rejection of the newly amended/added claims below.

In regard to Applicant’s argument for claim 2, namely that Bakthavatchalam's rule-based comparisons are different from the claimed pattern or anomaly comparison that includes models with unsupervised learning, the Office points out that the language of claim 2 never explicitly recites that the pattern or anomaly comparison includes models using unsupervised learning. Therefore, the rejection of claim 2 is maintained, at least, based on the teachings of Bakthavatchalam and Wright.

In regard to Applicant’s argument for claims 3-10, it is found persuasive that the previously applied references by Bakthavatchalam, Wright and Parker would not be sufficient to render these claims obvious in view of the amendment to pending claim 1.
However, as stated above, new grounds of rejection based on the previously applied references by Bakthavatchalam, Wright (and Parker), in addition to the newly applied reference by Ding, are sufficient to render each of claims 3-10 obvious.

Claim Objections

Claims 1 and 13 are objected to because of the following informalities: Claim 1 is objected to for failing to explicitly include any specific component(s) of the claimed security system; a proper device/system claim should, at least, recite an element that further describes the structure of the recited device/system. Claim 13 is objected to for the recitation “wherein the system encrypts…”; the claim should be corrected to recite “wherein the security system encrypts...”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Specifically, independent claim 1 is amended to recite “record at least a transaction using post quantum cryptography protected blockchain”. Yet the support in the specification of the instant application, at best, only describes “systems implement post quantum cryptography to protect the blockchain information being targeted and/or changed by hackers” in Paragraph [0082] and “Blockchain can be used to record transactions” in Paragraph [0094], without providing any details on how the particularly claimed post quantum cryptography protected blockchain is to be utilized for recording at least a transaction in the conventional blockchain environment described in the specification. Claims 2-15 depend from claim 1 and are thus rejected for the same deficiency as claim 1. Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6, 8-10 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Bakthavatchalam et al. (US 20180114023 A1), hereinafter Bakthavatchalam, and Wright (US 20210021592 A1), hereinafter Wright, in view of Ding et al. (WO 2019/071026 A1), hereinafter Ding.

Regarding claim 1: Bakthavatchalam teaches a security system for implementing a threat characteristic recognition process in a computing environment (Bakthavatchalam [0022] FIG. 1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”—e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”)), the security system configured to: monitor data traffic [at one or more access points of the computing environment] (Bakthavatchalam [0021] In various embodiments disclosed herein, network traffic is compressed and then malware-searched within a hardware-accelerated rule search engine [0022] FIG.
1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”—e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”) [0025] … malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations — that are to be detected within inbound traffic); provide the data traffic to the security system as an input for analysis (Bakthavatchalam [0026] FIG. 2 illustrates an embodiment of a malware detection module 150 (e.g., that may be deployed within the ingress security engine 103 of FIG. 1) having a rule buffer 151 and a hardware-accelerated rule search engine 155. As shown, rule buffer 151 receives rules from a source within control plane 122 (e.g., policy engine 133 of FIG. 1) and forwards or otherwise makes those rules available to rule search engine 155. Rule search engine 155 additionally receives inbound traffic from the data plane 120 and outputs a rule-search result (“RS Result”) to notify downstream functional blocks (e.g., flow management 131 unit of FIG. 
1) of a malware detection event upon confirming a match between a rule (malware signature) and contents of the inbound traffic); identify one or more characteristics of the data traffic (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131); compare the one or more characteristics of the data traffic to characteristics stored on one or more databases corresponding to suspicious or malicious behavior (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131 [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events); prevent access to the computing environment or transmission of the data traffic if the one or more characteristics match with the characteristics stored on the one or more databases (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131. 
As discussed below, flow management unit 131 may take various actions with respect to reported malware detections, including blocking malware-infested traffic flows and/or seizing information with respect to such flows to enable forensic or other advanced security measures [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events); Bakthavatchalam does not expressly teach the security system configured to: monitor data traffic at one or more access points of the computing environment; and determine if one or more features are unauthorized actions or from an unauthorized actor based on the characteristics. However, Wright teaches a security system for implementing a threat characteristic recognition process in a computing environment (Wright [0003] Disclosed examples relates to a system for securing devices and data in a computing environment [0004] In some examples, the disclosed security system is configured to provide protection for computing and networked devices from threats, as well as to protect data [0038] As the transmission characteristics evolve (e.g., from one generation of cellular transmission to the next), the range of frequencies and/or potential threats associated with those characteristics will be updated and provided to a user and/or administrator); the security system configured to: monitor data traffic at one or more access points of the computing environment (Wright [0026] In some examples, security systems and methods are employed to identify threats and/or act to mitigate threats on one or more IoT connected devices. 
In an example, an agent (e.g., software and/or hardware driven, such as an Ethical Agent powered by AI) can be employed into an IoT environment to scan devices and/or data traffic. The agents can scan for threats, such as connection or attempted connection to the network and/or devices from an unauthorized source [0110] In some examples, the algorithm scanning engine 122 (e.g., software and/or hardware, such as a secure FPGA configured to implement the algorithm scan) can be integrated into a system that collects, transmits, stores, and/or otherwise processes the inputs for the algorithm. This may include a server, a processor, a transmission component (e.g., a router, a cellular tower, a satellite, etc.), such that the algorithm scanning engine 122 may identify implementation of such an algorithm and provide the information to an administrator, the authorities, and/or automatically modify the algorithm's behavior); and determine if one or more features are unauthorized actions or from an unauthorized actor based on the characteristics (Wright [0011] … The security systems and methods actively look for signatures of such threats. [0054] In some examples, malware or other malicious content may exist on the client device and attempt to exploit data and/or functionality of the client device. In examples, the malicious payload(s) are prevented from being downloaded, either by having been identified as malware in advance (e.g., known malware, malware as identified by an agent, etc.), and/or by recognizing unusual behavior from the sender (as disclosed herein), such that the download is blocked and/or routed to a diversion environment for additional processing [0055] In the event that malicious payloads are downloaded and executed on the client device, the security system functions to detect the malicious data post exploitation. 
This can be due to unusual activity (e.g., transmitting data in the absence of a request and/or authorization from the user), and/or identification of the result of the malware as being on a list of malicious data (e.g., identified by an agent and communicated to the client device and/or user). Once identified, the security system is designed to block further execution of the malware (e.g., end processing of the affected component, end transmission of data, disconnect from the network, etc.), and/or route the malware and/or traffic to a diversion environment for additional processing).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam and Wright before them, to modify the security system taught by Bakthavatchalam to include monitoring data traffic at one or more access points of the computing environment and determining whether one or more features are unauthorized actions or from an unauthorized actor based on the characteristics. One would have been motivated to have a secure system in which the system continuously monitors for known threats, as well as proactively pursues information on emerging or unknown threats, as taught by Wright (see Wright [0011]).

The combination of Bakthavatchalam and Wright does not expressly teach to record at least a transaction using post quantum cryptography protected blockchain. However, Ding teaches recording at least a transaction using a post quantum cryptography protected blockchain (Ding – Paragraph [0027], “…. Blockchains are a public ledger of all transactions that have ever been executed on the participating network of nodes. Maintaining a public record of the transactions enables one to validate and protect against attacks like double spending without the use of a trusted third party (e.g., a bank)”.
Paragraph [0037]: “… In order to create a new block in the chain, the nodes work towards finding a nonce value such that the hash of the previous block …”. Paragraph [0055]: “Described herein is a dual signature system for new quantum-proof blockchains, both post quantum and secure in the long term”).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify the combined teachings of Bakthavatchalam and Wright with Ding’s teaching of blockchain technology. One would have been motivated to have a secure system in which a determination that a transaction comprises malicious activity or a DDoS attack can be made with the use of a blockchain protected by post-quantum cryptography (see Ding [0040] and [0046]).

Regarding claim 2: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above. Bakthavatchalam further teaches wherein the one or more characteristics include a pattern or an anomaly [in comparison to authentic behavior] (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131 [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events). Bakthavatchalam does not expressly teach wherein the one or more characteristics include a pattern or an anomaly in comparison to authentic behavior.
However, Wright further teaches wherein the one or more characteristics include a pattern or an anomaly in comparison to authentic behavior (Wright – [0014] … analyzing user, device, and/or data behavior to ensure compliance in various business systems. [0057] In some examples, the security system recognizes trends in user behavior, such that anomalous actions and/or traffic can be identified and investigated (e.g., by routing to a diversion environment). This can be implemented by historical tracking and/or application of AI tools to make connections (e.g., between trusted devices), recognize patterns (e.g., in user behavior), identify associated individuals and locations (e.g., within an organization, family, etc.). Thus, when an anomalous event occurs, the security system may evaluate the risk and determine suitable actions suitable to mitigate the risk. [0126] In some examples, end user behavior can be determined by analysis of traffic and access logs using quantum mechanics. [0035] and identifying abnormal activities on the client device in comparison to a baseline data (such as via AI monitoring)). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Bakthavatchalam, Wright, and Ding before them, to have the system feature wherein the one or more characteristics include a pattern or an anomaly in comparison to authentic behavior. One would have been motivated using the same reasoning as in claim 1. Regarding claim 3: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above. 
Bakthavatchalam further teaches wherein the one or more characteristics [include a number of login attempts beyond a threshold number, a number of unsuccessful login attempts beyond a threshold number, a request for unauthorized data from an authorized user, or a request for an amount of data beyond a threshold amount] (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131 [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events). Wright further teaches wherein the one or more characteristics include a number of login attempts beyond a threshold number, a number of unsuccessful login attempts beyond a threshold number, a request for unauthorized data from an authorized user, or a request for an amount of data beyond a threshold amount (Wright [Claim 1] … authenticate, using the biometric entry, the user of the client device; and permit the user access to data on the client device responsive to authentication of the user via the biometric entry [0012] Additionally or alternatively, the security systems and methods are configured to operate in the absence of a networked connection and/or a primary power source. For example, software and/or hardware can be installed on a client device, which is designed to scan the software and/or hardware of the client device to detect and/or address threats. 
There are particular advantages for devices that are configured for extended periods of sleep and/or passive and/or on-demand operation, such as smart speakers, device connected to the Internet of things (IoT), logistical waypoints, communications equipment, as a non-limiting list of examples [0027] In examples, the IoT connected devices are authorized to capture a particular type of information (e.g., a near field communication (NFC) enabled smart device to access a building, transfer information, payment, etc.; a biometric scanner; electric car charging station sensors; ultrasound sensors; etc.). The disclosed security systems and methods can scan associated sensors and identify whether the IoT connected device is employing expected (e.g., limited, authorized, etc.) techniques and connections to access data. If such a device attempts to expand data access beyond an authorized and/or recognized use, the security system will prevent such attempts, and/or route the commands and/or associated data to a diversion environment for additional processing).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright and Ding before them, to modify the system to include the feature wherein the one or more characteristics include a number of login attempts beyond a threshold number, a number of unsuccessful login attempts beyond a threshold number, a request for unauthorized data from an authorized user, or a request for an amount of data beyond a threshold amount. One would have been motivated for the same reasons as in claim 1.

Regarding claim 4: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein one or more results of comparing the one or more databases are cross-referenced to determine if the one or more characteristics is a match with any of the one or more databases (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131. As discussed below, flow management unit 131 may take various actions with respect to reported malware detections, including blocking malware-infested traffic flows and/or seizing information with respect to such flows to enable forensic or other advanced security measures [0026] As shown, rule buffer 151 receives rules from a source within control plane 122 (e.g., policy engine 133 of FIG. 1) and forwards or otherwise makes those rules available to rule search engine 155. Rule search engine 155 additionally receives inbound traffic from the data plane 120 and outputs a rule-search result (“RS Result”) to notify downstream functional blocks (e.g., flow management 131 unit of FIG. 1) of a malware detection event upon confirming a match between a rule (malware signature) and contents of the inbound traffic [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events). Regarding claim 6: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above. 
Bakthavatchalam does not expressly teach wherein the threat characteristic recognition process is configured to run on a client device or via one or more networked computing assets. However, Wright further teaches wherein the threat characteristic recognition process is configured to run on a client device or via one or more networked computing assets (Wright [0012] Additionally or alternatively, the security systems and methods are configured to operate in the absence of a networked connection and/or a primary power source. For example, software and/or hardware can be installed on a client device, which is designed to scan the software and/or hardware of the client device to detect and/or address threats).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Bakthavatchalam, Wright and Ding before them, to modify the system such that the threat characteristic recognition process runs on a client device or via one or more networked computing assets. One would have been motivated for the same reasons as in claim 1.

Regarding claim 8: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above. Bakthavatchalam further teaches wherein the security system is connected to one or more [internet of things (IoT) enabled] devices [including a camera or a client device] (Bakthavatchalam [0022] FIG. 1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”—e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”). While appliance 100 (which may constitute or be part of a firewall and/or carry out various other network functions such as traffic switching/routing, access control, deduplication, accounting, etc.)
is depicted as having an Ethernet-based exterior interface (implementing at least physical (PHY) and media-access control (MAC) layers of the Ethernet stack as shown at 101) and a more generalized interior interface, various alternative or more specific network interfaces may be used on either or both sides of the appliance, including proprietary interfaces where necessary. Also, while separate (split) inbound and outbound traffic paths are shown, a single bidirectional path may be implemented with respect to either or both of the exterior and interior interfaces). Bakthavatchalam does not expressly teach wherein the security system is connected to one or more internet of things (IoT) enabled devices including a camera or a client device. However, Wright further teaches wherein the security system is connected to one or more internet of things (IoT) enabled devices including a camera or a client device (Wright [0012] Additionally or alternatively, the security systems and methods are configured to operate in the absence of a networked connection and/or a primary power source. For example, software and/or hardware can be installed on a client device, which is designed to scan the software and/or hardware of the client device to detect and/or address threats. There are particular advantages for devices that are configured for extended periods of sleep and/or passive and/or on-demand operation, such as smart speakers, device connected to the Internet of things (IoT), logistical waypoints, communications equipment, as a non-limiting list of examples). 
Additionally, Ding teaches wherein the one or more IoT enabled devices implement blockchain technology to store and manage cryptographic credentials for IoT devices including storing public keys on a ledger, and/or store all key or certificate operations on the chain (Ding – [0015]: “Systems and method are also described herein for maintaining a public key infrastructure comprising a plurality of certificates, wherein each certificate of the plurality of certificates is fixed and associated with a single user, receiving a plurality of blockchain transactions, and for each blockchain transaction, determining a certificate used to generate the blockchain transaction, and associating the blockchain transaction with the certificate. The systems and methods may further comprise maintaining a trust score associated with each certificate. The systems and methods may also determine that a blockchain transaction comprises malicious activity, determine a certificate used to generate the blockchain transaction, and change a trust score associated with the certificate based on the malicious activity.” Paragraph [0027]: “Apart from cryptocurrencies, blockchains seem to have the potential for a wide range of applications from blockchain internet of things (IoT) to programmable self-executing contracts”). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Bakthavatchalam, Wright and Ding before them, to modify the system to include a feature connecting the security system to one or more internet of things (IoT) enabled devices, including a camera or a client device, while implementing blockchain technology. One would have been motivated using the same reasoning as in claim 1. Regarding claim 9: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein the security system [is operating on a quantum-enabled device or system] (Bakthavatchalam [0022] FIG. 1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”—e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”)). Bakthavatchalam does not expressly teach wherein the security system is operating on a quantum-enabled device or system. However, Wright further teaches wherein the security system is operating on a quantum-enabled device or system (Wright [0113] FIGS. 3A and 3B provide a flowchart representative of example machine-readable instructions 300, which may be executed by the example security system 102 of FIG. 1, to implement data protection and authentication. The example instructions 300 may be stored in the memory 112 and/or one or more of the data sources 106, and executed by the processor(s) 110 of the security system 102. The example instructions 300 are described below with reference to the systems of FIG. 1. In some examples, the instructions 300 are executed in a quantum computing environment, and/or are configured to provide protection from threats generated from, associated with, transmitted by, and/or stored on a quantum computing platform). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Bakthavatchalam, Wright and Ding before them, to modify the system to include a feature enabling the security system to operate on a quantum-enabled device or system. One would have been motivated using the same reasoning as in claim 1. Regarding claim 10: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above.
Bakthavatchalam further teaches wherein the security system [builds a machine learning algorithm to] identify the one or more characteristics (Bakthavatchalam [0022] FIG. 1 illustrates an embodiment of a network security appliance or device 100 that executes line-rate malware detection with respect to packetized network traffic flowing between an interface to a distrusted exterior network (“exterior interface”—e.g., Internet interface) and an interface to a nominally trusted interior network (“interior interface”) [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131). Bakthavatchalam does not expressly teach wherein the security system builds a machine learning algorithm to identify the one or more characteristics. However, Wright further teaches wherein the security system builds a machine learning algorithm to identify the one or more characteristics (Wright [0035] … identifying abnormal activities on the client device in comparison to a baseline data (such as via AI monitoring) [0057] In some examples, the security system recognizes trends in user behavior, such that anomalous actions and/or traffic can be identified and investigated (e.g., by routing to a diversion environment). This can be implemented by historical tracking and/or application of AI tools to make connections (e.g., between trusted devices), recognize patterns (e.g., in user behavior), identify associated individuals and locations (e.g., within an organization, family, etc.). 
Thus, when an anomalous event occurs, the security system may evaluate the risk and determine suitable actions suitable to mitigate the risk [0066] In some example, an AI module will be programmed to identify and enforce any regulations, laws, compliances for the relevant industry [0068] … applying an AI module to identify patterns or keywords [0070] Advantageously, the disclosed systems and methods enable the end user to operate the device and/or access their data without impact. In other words, by use of an diversion environment, as well as continuous detection and update efforts of the AI Agents, the systems and methods protect both devices and data from potential threats, be they known, unknown, or emerging). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Bakthavatchalam, Wright and Ding before them, to modify the system so that the security system includes a machine learning algorithm to identify the one or more characteristics. One would have been motivated using the same reasoning as in claim 1. Regarding claim 14: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above. Ding further teaches wherein post quantum cryptography comprises lattice-based cryptography and/or Lambert signatures (Ding – Paragraph [0051]: “There are potential quantum resistant signature algorithms that are efficient, from Multivariate and Lattice based cryptography”. Paragraph [0053]: “Lattice signatures: Some of the efficient lattice based signatures include ring-Tesla, BLISS, and GLP. The ring-TESLA signature has its security based on the hardness of the Ring Learning with Errors (RLWE) problem and has public key size around 1-2 kB and larger signature size 12544 bits”).
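For scale, the ring-TESLA figures quoted from Ding's paragraph [0053] above can be sanity-checked with a quick unit conversion (illustrative only; the numeric values are Ding's, the conversion is not part of the record):

```python
# Unit conversion for the ring-TESLA sizes quoted from Ding [0053].
sig_bits = 12544             # quoted signature size in bits
sig_bytes = sig_bits // 8    # 8 bits per byte
sig_kb = sig_bytes / 1024    # binary kilobytes

print(sig_bytes)             # 1568
print(round(sig_kb, 2))      # 1.53
```

At roughly 1.5 kB, the signature is therefore of the same order as the quoted 1-2 kB public key, consistent with Ding's characterization of the signature as the "larger" of the two.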
Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the combined teaching of Bakthavatchalam and Wright before them, to further modify it with Ding’s teaching of blockchain technology. One would have been motivated to have a secure system in which a determination that any transaction comprises malicious activity or a DDoS attack can be made with the use of blockchain together with post-quantum cryptography (see Ding [0040] and [0046]). Regarding claim 15: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above. Ding further teaches wherein IoT communications use blockchain to enforce security controls (Ding - Paragraph [0004]: “Instead of a centralized authority that is responsible for validating and maintaining a ledger of all transactions, a blockchain relies on a network of nodes to perform these operations”. Paragraph [0027]: “Apart from cryptocurrencies, blockchains seem to have the potential for a wide range of applications from blockchain internet of things (IoT) to programmable self-executing contracts”. Paragraph [0092]: “Methods and systems are described for improved blockchain security”). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the combined teaching of Bakthavatchalam and Wright before them, to further modify it with Ding’s teaching of blockchain technology. One would have been motivated to have a secure system in which a determination that any transaction comprises malicious activity or a DDoS attack can be made with the use of blockchain together with post-quantum cryptography (see Ding [0040] and [0046]). Claims 5 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Bakthavatchalam et al. (US 20180114023 A1), hereafter Bakthavatchalam, Wright (US 20210021592 A1), hereafter Wright, Ding et al.
(WO 2019/071026 A1), hereinafter Ding, and further in view of Parker (US 20130312092 A1), hereafter Parker. Regarding claim 5: Bakthavatchalam, Wright, and Ding teach the security system of claim 1 as outlined above. The combination of Bakthavatchalam, Wright and Ding does not expressly teach wherein a match generates a positive identification report that includes details from each of the one or more databases that contributed to positive identification. However, Parker teaches wherein a match generates a positive identification report that includes details from each of the one or more databases that contributed to positive identification (Parker [0074] The intelligence engine 130 may be optionally adapted to provide an alert when a positive correlation between the EQD derived from that attack data and known adversaries. This alert is preferably triggered when the correlation achieves a predetermined probability threshold (e.g., the intelligence engine 130 calculates that there is a 95% probability that a known adversary is responsible for the attack data generated by the particular sensor node 150). The intelligence engine 130 can use any data correlation techniques known in the art for comparing the EQD to the AAD in the database 110 and determining a match probability. This alert can be either automatically sent to the owner of the sensor node 150 that provided the attack data that yielded the positive correlation, or it can be sent to an analyst who can then alert the owner of the sensor node 150 [0075] If a match is determined, based on a predetermined probability threshold, the intelligence engine preferably updates the profile (AAD) of the known adversary in the database 110 with the EQD derived from the attack data provided by the sensor node 150. 
If the intelligence engine 110 does not find a match based on the AAD in the database 110, then the intelligence engine 130 preferably established a new profile for an unknown adversary in the database 110 using the EQD from the unknown adversary). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Bakthavatchalam, Wright, Ding and Parker before them, to modify the security system taught by Bakthavatchalam, Wright and Ding to include a feature whereby a match generates a positive identification report that includes details from each of the databases that contributed to the positive identification. One would have been motivated to quickly identify attacks against information technology assets as taught by Parker (see Parker [0028]). Regarding claim 7: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above. Bakthavatchalam further teaches wherein [the method further comprises updating a database of the] one or more databases when a comparison of the data results in a match (Bakthavatchalam [0025] In the case of malware detection module 129, for example, policy engine 133 may supply (with or without processing) malware signatures or “rules”—continuous or disjointed strings of symbols that correspond to known malware implementations—that are to be detected within inbound traffic and reported to flow management unit 131. As discussed below, flow management unit 131 may take various actions with respect to reported malware detections, including blocking malware-infested traffic flows and/or seizing information with respect to such flows to enable forensic or other advanced security measures [0026] As shown, rule buffer 151 receives rules from a source within control plane 122 (e.g., policy engine 133 of FIG. 1) and forwards or otherwise makes those rules available to rule search engine 155.
Rule search engine 155 additionally receives inbound traffic from the data plane 120 and outputs a rule-search result (“RS Result”) to notify downstream functional blocks (e.g., flow management 131 unit of FIG. 1) of a malware detection event upon confirming a match between a rule (malware signature) and contents of the inbound traffic [0031] Thereafter, incoming traffic is routed through the traffic compression engine to deliver the compressed stream to the rule search memory (187), and at 189, the rule search memory searches the compressed traffic stream for malware signatures (i.e., through comparison with the stored, compressed rule data base) asserting a rule-search result signifying match events). The combination of Bakthavatchalam, Wright and Ding does not expressly teach wherein the threat characteristic recognition process further comprises updating a database of the one or more databases when a comparison of the data traffic results in a match. However, Parker teaches wherein the method further comprises updating a database of the one or more databases when a comparison of the data results in a match (Parker [0074] The intelligence engine 130 may be optionally adapted to provide an alert when a positive correlation between the EQD derived from that attack data and known adversaries. This alert is preferably triggered when the correlation achieves a predetermined probability threshold (e.g., the intelligence engine 130 calculates that there is a 95% probability that a known adversary is responsible for the attack data generated by the particular sensor node 150). The intelligence engine 130 can use any data correlation techniques known in the art for comparing the EQD to the AAD in the database 110 and determining a match probability. 
This alert can be either automatically sent to the owner of the sensor node 150 that provided the attack data that yielded the positive correlation, or it can be sent to an analyst who can then alert the owner of the sensor node 150 [0075] If a match is determined, based on a predetermined probability threshold, the intelligence engine preferably updates the profile (AAD) of the known adversary in the database 110 with the EQD derived from the attack data provided by the sensor node 150. If the intelligence engine 110 does not find a match based on the AAD in the database 110, then the intelligence engine 130 preferably established a new profile for an unknown adversary in the database 110 using the EQD from the unknown adversary). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Bakthavatchalam, Wright, Ding and Parker before them, to modify the security system taught by Bakthavatchalam, Wright and Ding to include the feature of updating a database of the one or more databases when a comparison of the data results in a match. One would have been motivated to quickly identify attacks against information technology assets as taught by Parker (see Parker [0028]). Claims 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Bakthavatchalam et al. (US 20180114023 A1), hereafter Bakthavatchalam, Wright (US 20210021592 A1), hereafter Wright, Ding et al. (WO 2019/071026 A1), hereinafter Ding, and further in view of Pogorelik et al. (WO 2020142110), hereafter Pogorelik. Regarding claim 11: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above. The combination of Bakthavatchalam, Wright, and Ding does not expressly teach wherein the security system is configured to detect adversarial data and/or mutated inputs, adversarial reprogramming, and/or data poisoning attacks.
However, Pogorelik further teaches wherein the security system is configured to detect adversarial data and/or mutated inputs, adversarial reprogramming, and/or data poisoning attacks (Pogorelik – pg. 20, lines 29-30: “system 1101 can determine whether the input data 1124 is adversarial (e.g., whether the inputs are inputs 1063-m, or the like)”). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teaching of Bakthavatchalam, Wright, and Ding with Pogorelik’s teaching of techniques/systems for hardening the security system with Artificial Intelligence (AI) features. One would have been motivated to further improve the security of the system by hardening the AI system features in order to have the system hardware, software, or a combination of hardware or software designed to mitigate against attack(s) (see pg. 4, lines 14-15 of Pogorelik). Regarding claim 13: Bakthavatchalam, Wright, and Ding teach the security system of claim 1, as outlined above. The combination of Bakthavatchalam, Wright, and Ding does not expressly teach wherein the system encrypts data via homomorphic encryption. However, Pogorelik further teaches wherein the system encrypts data via homomorphic encryption (Pogorelik – pg. 74, lines 32-33: “The data agnostic system may be used in conjunction with homomorphic encryption”). Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teaching of Bakthavatchalam, Wright, and Ding with Pogorelik’s teaching of techniques/systems for hardening the security system with Artificial Intelligence (AI) features. One would have been motivated to further improve the security of the system by hardening the AI system features in order to have the system hardware, software, or a combination of hardware or software designed to mitigate against attack(s) (see pg. 4, lines 14-15 of Pogorelik).
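The homomorphic-encryption concept cited from Pogorelik (operating on data while it remains encrypted) can be illustrated, purely as a sketch and not as anything taught by the cited references, with the textbook multiplicative homomorphism of unpadded RSA; the parameters below are the classic toy values (p=61, q=53, e=17), and unpadded RSA is not secure in practice:

```python
# Toy illustration of a homomorphic property: with unpadded ("textbook") RSA,
# the product of two ciphertexts decrypts to the product of the plaintexts,
# so a party holding only ciphertexts can multiply values it cannot read.
p, q = 61, 53
n = p * q                              # modulus 3233
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 5, 7
c_product = (enc(m1) * enc(m2)) % n    # multiply ciphertexts only
assert dec(c_product) == (m1 * m2) % n # decrypts to 35
```

Fully homomorphic schemes of the kind generally meant in security applications support richer operations (addition and multiplication together), but the ciphertext-side computation shown here is the essential property.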
Prosecution Timeline

Feb 03, 2023: Application Filed
Dec 09, 2024: Non-Final Rejection — §103, §112, §DP
Mar 17, 2025: Response after Non-Final Action
Mar 17, 2025: Response Filed
Aug 22, 2025: Response Filed
Sep 30, 2025: Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 8984268: ENCRYPTED RECORD TRANSMISSION (granted Mar 17, 2015; 2y 5m to grant)
Patent 8976966: INFORMATION PROCESSOR, INFORMATION PROCESSING METHOD AND SYSTEM (granted Mar 10, 2015; 2y 5m to grant)
Patent 8959362: SYSTEMS AND METHODS FOR CONTROLLING FILE EXECUTION FOR INDUSTRIAL CONTROL SYSTEMS (granted Feb 17, 2015; 2y 5m to grant)
Patent 8949608: Field programmable smart card terminal and token device (granted Feb 03, 2015; 2y 5m to grant)
Patent 8943304: SYSTEMS AND METHODS FOR USING AN HTTP-AWARE CLIENT AGENT (granted Jan 27, 2015; 2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 49%
With Interview (+64.8%): 99%
Median Time to Grant: 5y 6m
PTA Risk: Moderate
Based on 221 resolved cases by this examiner. Grant probability derived from career allow rate.
