Detailed Action
Claims 1-20 are pending in this application.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 10/11/2024 and 1/27/2025 have been considered.
Drawings
The drawings filed on 9/5/2024 are acceptable.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 2, 4-15, and 18-26 of U.S. Patent No. 12,118,078 (Application No. 17/864,306).
Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the '078 patent teach the instant claims except for obtaining the data using out-of-band memory acquisitions isolated from the one or more computer programs. However, using isolated out-of-band memory acquisitions is taught by claim 6 of the '078 patent. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of claim 6, using isolated out-of-band memory acquisitions, in order to provide the predictable result of obtaining data using isolated out-of-band memory acquisitions. One of ordinary skill in the art would have been motivated to combine the teachings in order to provide security for data collection.
Instant Claims
Claims of U.S. Patent No. 12,118,078
1. An integrated circuit comprising:
a host interface operatively coupled to physical memory associated with a host device; a central processing unit (CPU) operatively coupled to the host interface; and an acceleration hardware engine operatively coupled to the host interface and the CPU, wherein the CPU and the acceleration hardware engine are to host a hardware-accelerated security service to protect the host device, wherein the hardware-accelerated security service is to:
extract a plurality of features from data stored in the physical memory, the data being associated with one or more computer programs executed by the host device, wherein the data is obtained by the hardware-accelerated security service using out-of-band memory acquisitions isolated from the one or more computer programs;
determine, using a machine learning (ML) detection system, whether the one or more computer programs are subject to malicious activity based on the plurality of features extracted from the data stored in the physical memory; and
output an indication of the malicious activity responsive to a determination that the one or more computer programs are subject to the malicious activity.
2. The integrated circuit of claim 1, wherein the integrated circuit is a data processing unit (DPU), wherein the DPU is a programmable data center infrastructure on a chip.
3. The integrated circuit of claim 1, wherein the one or more computer programs comprises at least one a host operating system (OS), an application, a guest operating system, or a guest application, wherein the one or more computer programs reside in a first computing domain, wherein the hardware-accelerated security service and the ML detection system reside in a second computing domain different than the first computing domain.
4. The integrated circuit of claim 1, wherein the malicious activity is caused by malware, wherein the hardware-accelerated security service is out-of-band security software in a trusted domain that is different and isolated from the malware.
5. The integrated circuit of claim 1, further comprising a direct memory access (DMA) controller coupled to the host interface, wherein the DMA controller is to read the data from the physical memory via the host interface, wherein the host interface is a Peripheral Component Interconnect Express (PCIe) interface.
6. The integrated circuit of claim 1, wherein: the malicious activity is caused by ransomware; the hardware-accelerated security service is to obtain a series of snapshots of the data stored in the physical memory, each snapshot representing the data at a point in time; the ML detection system comprises: feature extraction logic to extract a set of features from different memory plugins from each snapshot of the series of snapshots; and a random-forest classification model, wherein the random-forest classification model is a time-series-based model trained to classify a process as ransomware or non-ransomware using cascading of different numbers of snapshots in the series of snapshots.
7. The integrated circuit of claim 6, wherein the cascading of different numbers of snapshots in the series of snapshots comprises: a first number of snapshots obtained over a first amount of time; a second number of snapshots obtained over a second amount of time greater than the first amount of time, the second number of snapshots comprising the first number of snapshots; and a third number of snapshots obtained over a third amount of time greater than the second amount of time, the third number of snapshots comprising the second number of snapshots.
8. The integrated circuit of claim 1, wherein: the malicious activity is caused by a malicious uniform resource locator (URL); the hardware-accelerated security service is to obtain a snapshot of the data stored in the physical memory, the snapshot representing the data at a point in time; the ML detection system comprises: feature extraction logic to extract a set of features from the snapshot, the set of features comprising words in a candidate URL and numeric features of a URL structure of the candidate URL; and a binary classification model trained to classify the candidate URL as malicious or benign using the set of features.
9. The integrated circuit of claim 8, wherein the feature extraction logic is to tokenize the words into tokens, and wherein the binary classification model comprises: an embedding layer to receive the tokens as an input sequence of tokens representing the words in the candidate URL and generate an input vector based on the input sequence of tokens; a Long Short-Term Memory (LSTM) layer trained to generate an output vector based on the input vector; and a fully connected neural network layer trained to classify the candidate URL as malicious or benign using the output vector from the LSTM layer and the numeric features of the URL structure.
10. The integrated circuit of claim 1, wherein: the malicious activity is caused by a domain generation algorithm (DGA); the hardware-accelerated security service is to: obtain a snapshot of the data stored in the physical memory, the snapshot representing the data at a point in time; and extract one or more candidate URLs from the snapshot; the ML detection system comprises: feature extraction logic to extract a set of features from the one or more candidate URLs, the set of features comprising domain characters of the one or more candidate URLs; and a two-stage classification model comprising: a binary classification model trained to classify the one or more candidate URLs as having a DGA domain or a non-DGA domain in a first stage using the set of features; and a multi-class classification model trained to classify a DGA family of the DGA domain between a set of DGA families in a second stage using the set of features.
11. The integrated circuit of claim 10, wherein: the feature extraction logic is to tokenize the domain characters into tokens; the binary classification model is a convolutional neural network (CNN) with an embedding layer to receive the tokens as an input sequence of tokens representing the domain characters in the one or more candidate URLs and generate an input vector based on the input sequence of tokens, and the CNN is trained to classify the one or more candidate URLs as having the DGA domain or the non-DGA domain in the first stage using the input vector from the embedding layer; and the multi-class classification model comprises a Siamese network of the CNN with the embedding layer, the Siamese network trained to classify the DGA family in the second stage using the input vector from the embedding layer.
12. A computing system comprising: a data processing unit (DPU) comprising a host interface, a central processing unit (CPU), and an acceleration hardware engine, the DPU to host a hardware-accelerated security service to protect a host device, wherein the hardware-accelerated security service is to extract a plurality of features from data stored in physical memory associated with the host device, the data being associated with one or more computer programs executed by the host device, wherein the data is obtained by hardware-accelerated security service using out-of-band memory acquisitions isolated from the one or more computer programs; and accelerated pipeline hardware coupled to the DPU, wherein the accelerated pipeline hardware is to: determine, using a machine learning (ML) detection system, whether the one or more computer programs are subject to malicious activity based on the plurality of features extracted from the data stored in the physical memory; and output an indication of the malicious activity responsive to a determination that the one or more computer programs are subject to the malicious activity.
13. The computing system of claim 12, wherein the one or more computer programs comprises at least one a host operating system (OS), an application, a guest operating system, or a guest application, wherein the one or more computer programs reside in a first computing domain, wherein the hardware-accelerated security service resides in a second computing domain different than the first computing domain, and wherein the ML detection system resides in the second computing domain or a third computing domain different than the first computing domain and the second computing domain.
14. The computing system of claim 12, wherein the malicious activity is caused by malware, wherein the hardware-accelerated security service is out-of-band security software in a trusted domain that is different and isolated from the malware.
15. The computing system of claim 12, further comprising a direct memory access (DMA) controller coupled to the host interface, wherein the DMA controller is to read the data from the physical memory via the host interface, wherein the host interface is a Peripheral Component Interconnect Express (PCIe) interface.
16. A method comprising: extracting, by a data processing unit (DPU) coupled to a host device, a plurality of features from data stored in physical memory of the host device, the data being associated with one or more computer programs executed by the host device, wherein the data is obtained by the DPU using out-of-band memory acquisitions isolated from the one or more computer programs; determining, using a machine learning (ML) detection system, whether the one or more computer programs are subject to malicious activity based on the plurality of features extracted from the data stored in the physical memory; and outputting an indication of the malicious activity responsive to a determination that the one or more computer programs are subject to the malicious activity.
17. The method of claim 16, wherein the one or more computer programs comprises at least one a host operating system (OS), an application, a guest operating system, or a guest application, wherein the one or more computer programs reside in a first computing domain, wherein the ML detection system reside in a second computing domain different than the first computing domain.
18. The method of claim 16, wherein the malicious activity is caused by malware, and wherein the method further comprises: obtaining a series of snapshots of the data stored in the physical memory, each snapshot representing the data at a point in time; and extracting a set of features from different memory plugins from each snapshot of the series of snapshots, wherein the ML detection system is trained to classify a process of the one or more computer programs as malware or non-malware using the set of features.
19. The method of claim 16, wherein the ML detection system is hosted by accelerated pipeline hardware coupled to the DPU.
20. The method of claim 16, wherein the malicious activity is caused by a malicious uniform resource locator (URL), and wherein the method further comprises: obtaining a snapshot of the data stored in the physical memory, the snapshot representing the data at a point in time; and extracting a set of features from the snapshot, the set of features comprising words in a candidate URL and numeric features of a URL structure of the candidate URL, wherein the ML detection system is trained to classify the candidate URL as malicious or benign using the set of features.
1. An integrated circuit comprising: a host interface operatively coupled to physical memory associated with a host device; a central processing unit (CPU) operatively coupled to the host interface; and an acceleration hardware engine operatively coupled to the host interface and the CPU, wherein the CPU and the acceleration hardware engine are to host a hardware-accelerated security service to protect one or more computer programs executed by the host device, wherein the hardware-accelerated security service is to:
extract a plurality of features from data stored in the physical memory, the data being associated with the one or more computer programs, wherein the data is obtained by the hardware-accelerated security service without detection by the one or more computer programs;
determine, using a machine learning (ML) detection system, whether the one or more computer programs are subject to malicious activity based on the plurality of features extracted from the data stored in the physical memory; and
output an indication of the malicious activity responsive to a determination that the one or more computer programs are subject to the malicious activity.
2. The integrated circuit of claim 1, wherein the integrated circuit is a data processing unit (DPU), wherein the DPU is a programmable data center infrastructure on a chip.
4. The integrated circuit of claim 1, wherein the one or more computer programs comprises at least one a host operating system (OS), an application, a guest operating system, or a guest application.
5. The integrated circuit of claim 1, wherein the one or more computer programs reside in a first computing domain, wherein the hardware-accelerated security service and the ML detection system reside in a second computing domain different than the first computing domain.
6. The integrated circuit of claim 1, wherein the malicious activity is caused by malware, wherein the hardware-accelerated security service is out-of-band security software in a trusted domain that is different and isolated from the malware.
7. The integrated circuit of claim 1, further comprising a direct memory access (DMA) controller coupled to the host interface, wherein the DMA controller is to read the data from the physical memory via the host interface.
8. The integrated circuit of claim 7, wherein the host interface is a Peripheral Component Interconnect Express (PCIe) interface.
9. The integrated circuit of claim 1, wherein: the malicious activity is caused by ransomware; the hardware-accelerated security service is to obtain a series of snapshots of the data stored in the physical memory, each snapshot representing the data at a point in time; the ML detection system comprises: feature extraction logic to extract a set of features from different memory plugins from each snapshot of the series of snapshots; and a random-forest classification model, wherein the random-forest classification model is a time-series-based model trained to classify a process as ransomware or non-ransomware using cascading of different numbers of snapshots in the series of snapshots.
10. The integrated circuit of claim 9, wherein the cascading of different numbers of snapshots in the series of snapshots comprises: a first number of snapshots obtained over a first amount of time; a second number of snapshots obtained over a second amount of time greater than the first amount of time, the second number of snapshots comprising the first number of snapshots; and a third number of snapshots obtained over a third amount of time greater than the second amount of time, the third number of snapshots comprising the second number of snapshots.
11. The integrated circuit of claim 1, wherein: the malicious activity is caused by a malicious uniform resource locator (URL); the hardware-accelerated security service is to obtain a snapshot of the data stored in the physical memory, the snapshot representing the data at a point in time; the ML detection system comprises: feature extraction logic to extract a set of features from the snapshot, the set of features comprising words in a candidate URL and numeric features of a URL structure of the candidate URL; and a binary classification model trained to classify the candidate URL as malicious or benign using the set of features.
12. The integrated circuit of claim 11, wherein the feature extraction logic is to tokenize the words into tokens, and wherein the binary classification model comprises: an embedding layer to receive the tokens as an input sequence of tokens representing the words in the candidate URL and generate an input vector based on the input sequence of tokens; a Long Short-Term Memory (LSTM) layer trained to generate an output vector based on the input vector; and a fully connected neural network layer trained to classify the candidate URL as malicious or benign using the output vector from the LSTM layer and the numeric features of the URL structure.
13. The integrated circuit of claim 1, wherein: the malicious activity is caused by a domain generation algorithm (DGA); the hardware-accelerated security service is to: obtain a snapshot of the data stored in the physical memory, the snapshot representing the data at a point in time; and extract one or more candidate URLs from the snapshot; the ML detection system comprises: feature extraction logic to extract a set of features from the one or more candidate URLs, the set of features comprising domain characters of the one or more candidate URLs; and a two-stage classification model comprising: a binary classification model trained to classify the one or more candidate URLs as having a DGA domain or a non-DGA domain in a first stage using the set of features; and a multi-class classification model trained to classify a DGA family of the DGA domain between a set of DGA families in a second stage using the set of features.
14. The integrated circuit of claim 13, wherein: the feature extraction logic is to tokenize the domain characters into tokens; the binary classification model is a convolutional neural network (CNN) with an embedding layer to receive the tokens as an input sequence of tokens representing the domain characters in the one or more candidate URLs and generate an input vector based on the input sequence of tokens, and the CNN is trained to classify the one or more candidate URLs as having the DGA domain or the non-DGA domain in the first stage using the input vector from the embedding layer; and the multi-class classification model comprises a Siamese network of the CNN with the embedding layer, the Siamese network trained to classify the DGA family in the second stage using the input vector from the embedding layer.
15. A computing system comprising: a data processing unit (DPU) comprising a host interface, a central processing unit (CPU), and an acceleration hardware engine, the DPU to host a hardware-accelerated security service to protect one or more computer programs executed by a host device, wherein the hardware-accelerated security service is to extract a plurality of features from data stored in physical memory associated with the host device, the data being associated with the one or more computer programs, wherein the data is obtained by hardware-accelerated security service without detection by the one or more computer programs; and accelerated pipeline hardware coupled to the DPU, wherein the accelerated pipeline hardware is to: determine, using a machine learning (ML) detection system, whether the one or more computer programs are subject to malicious activity based on the plurality of features extracted from the data stored in the physical memory; and output an indication of the malicious activity responsive to a determination that the one or more computer programs are subject to the malicious activity.
18. The computing system of claim 15, wherein the one or more computer programs comprises at least one a host operating system (OS), an application, a guest operating system, or a guest application.
19. The computing system of claim 15, wherein the one or more computer programs reside in a first computing domain, wherein the hardware-accelerated security service resides in a second computing domain different than the first computing domain, and wherein the ML detection system resides in the second computing domain or a third computing domain different than the first computing domain and the second computing domain.
20. The computing system of claim 15, wherein the malicious activity is caused by malware, wherein the hardware-accelerated security service is out-of-band security software in a trusted domain that is different and isolated from the malware.
21. The computing system of claim 15, further comprising a direct memory access (DMA) controller coupled to the host interface, wherein the DMA controller is to read the data from the physical memory via the host interface.
22. The computing system of claim 21, wherein the host interface is a Peripheral Component Interconnect Express (PCIe) interface.
23. A method comprising: extracting, by a data processing unit (DPU) coupled to a host device, a plurality of features from data stored in physical memory of the host device, the data being associated with one or more computer programs executed by the host device, wherein the data is obtained by the DPU without detection by the one or more computer programs; determining, using a machine learning (ML) detection system, whether the one or more computer programs are subject to malicious activity based on the plurality of features extracted from the data stored in the physical memory; and outputting an indication of the malicious activity responsive to a determination that the one or more computer programs are subject to the malicious activity.
24. The method of claim 23, wherein the one or more computer programs comprises at least one a host operating system (OS), an application, a guest operating system, or a guest application.
25. The method of claim 23, wherein the malicious activity is caused by malware, and wherein the method further comprises: obtaining a series of snapshots of the data stored in the physical memory, each snapshot representing the data at a point in time; and extracting a set of features from different memory plugins from each snapshot of the series of snapshots, wherein the ML detection system is trained to classify a process of the one or more computer programs as malware or non-malware using the set of features.
26. The method of claim 23, wherein the ML detection system is hosted by accelerated pipeline hardware coupled to the DPU.
Claims 1-5, 8, 9, and 12-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6-8, 10-12, and 14-26 of U.S. Patent No. 12,261,881 (Application No. 17/864,310).
Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the '881 patent teach the instant claims and differ only in verbiage and/or statutory class.
Instant Claims
Claims of U.S. Patent No. 12,261,881
1. An integrated circuit comprising: a host interface operatively coupled to physical memory associated with a host device; a central processing unit (CPU) operatively coupled to the host interface; and an acceleration hardware engine operatively coupled to the host interface and the CPU, wherein the CPU and the acceleration hardware engine are to host a hardware-accelerated security service to protect the host device, wherein the hardware-accelerated security service is to:
extract a plurality of features from data stored in the physical memory, the data being associated with one or more computer programs executed by the host device, wherein the data is obtained by the hardware-accelerated security service using out-of-band memory acquisitions isolated from the one or more computer programs;
determine, using a machine learning (ML) detection system, whether the one or more computer programs are subject to malicious activity based on the plurality of features extracted from the data stored in the physical memory; and
output an indication of the malicious activity responsive to a determination that the one or more computer programs are subject to the malicious activity.
2. The integrated circuit of claim 1, wherein the integrated circuit is a data processing unit (DPU), wherein the DPU is a programmable data center infrastructure on a chip.
3. The integrated circuit of claim 1, wherein the one or more computer programs comprises at least one a host operating system (OS), an application, a guest operating system, or a guest application, wherein the one or more computer programs reside in a first computing domain, wherein the hardware-accelerated security service and the ML detection system reside in a second computing domain different than the first computing domain.
4. The integrated circuit of claim 1, wherein the malicious activity is caused by malware, wherein the hardware-accelerated security service is out-of-band security software in a trusted domain that is different and isolated from the malware.
5. The integrated circuit of claim 1, further comprising a direct memory access (DMA) controller coupled to the host interface, wherein the DMA controller is to read the data from the physical memory via the host interface, wherein the host interface is a Peripheral Component Interconnect Express (PCIe) interface.
8. The integrated circuit of claim 1, wherein: the malicious activity is caused by a malicious uniform resource locator (URL); the hardware-accelerated security service is to obtain a snapshot of the data stored in the physical memory, the snapshot representing the data at a point in time; the ML detection system comprises: feature extraction logic to extract a set of features from the snapshot, the set of features comprising words in a candidate URL and numeric features of a URL structure of the candidate URL; and a binary classification model trained to classify the candidate URL as malicious or benign using the set of features.
9. The integrated circuit of claim 8, wherein the feature extraction logic is to tokenize the words into tokens, and wherein the binary classification model comprises: an embedding layer to receive the tokens as an input sequence of tokens representing the words in the candidate URL and generate an input vector based on the input sequence of tokens; a Long Short-Term Memory (LSTM) layer trained to generate an output vector based on the input vector; and a fully connected neural network layer trained to classify the candidate URL as malicious or benign using the output vector from the LSTM layer and the numeric features of the URL structure.
7. An integrated circuit comprising: a host interface operatively coupled to physical memory associated with a host device; a central processing unit (CPU) operatively coupled to the host interface; and an acceleration hardware engine operatively coupled to the host interface and the CPU, wherein the CPU and the acceleration hardware engine are to host a hardware-accelerated security service to protect the host device, wherein the hardware-accelerated security service is to:
obtain a snapshot of data stored in the physical memory, the data being associated with one or more computer programs executed by the host device, wherein the snapshot of data is obtained by the hardware-accelerated security service using out-of-band memory acquisitions isolated from the one or more computer programs;
extract, using a machine learning (ML) detection system, a set of features from the snapshot, wherein the set of features comprising words in a candidate uniform resource locator (URL) and numeric features of a URL structure of the candidate URL; classify, using the set of features and the ML detection system, the candidate URL as malicious or benign; and
output an indication of a malicious URL responsive to the candidate URL being classified as malicious.
8. The integrated circuit of claim 7, wherein the integrated circuit is a data processing unit (DPU), wherein the DPU is a programmable data center infrastructure on a chip.
10. The integrated circuit of claim 7, wherein the one or more computer programs comprises at least one of a host operating system (OS), an application, a guest operating system, or a guest application.
14. The integrated circuit of claim 7, wherein the hardware-accelerated security service is out-of-band security software in a trusted domain that is different and isolated from the malicious URL.
15. The integrated circuit of claim 7, further comprising a direct memory access (DMA) controller coupled to the host interface, wherein the DMA controller is to read the data from the physical memory via the host interface.
16. The integrated circuit of claim 15, wherein the host interface is a Peripheral Component Interconnect Express (PCIe) interface.
11. The integrated circuit of claim 7, wherein: the hardware-accelerated security service is to obtain a snapshot of the data stored in the physical memory, the snapshot representing the data at a point in time; the ML detection system comprises: feature extraction logic to extract a set of features from the snapshot, the set of features comprising words in the candidate URL and the numeric features of the URL structure of the candidate URL; and a binary classification model trained to classify the candidate URL as malicious or benign using the set of features.
12. The integrated circuit of claim 11, wherein the feature extraction logic is to tokenize the words into tokens, and wherein the binary classification model comprises: an embedding layer to receive the tokens as an input sequence of tokens representing the words in the candidate URL and generate an input vector based on the input sequence of tokens; a Long Short-Term Memory (LSTM) layer trained to generate an output vector based on the input vector; and a fully connected neural network layer trained to classify the candidate URL as malicious or benign using the output vector from the LSTM layer and the numeric features of the URL structure.
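For illustration only, the two feature groups recited in claims 11 and 12 above (words in the candidate URL, and numeric features of the URL structure) could be sketched as follows. This is a minimal sketch; the function name, token rule, and the particular numeric features are hypothetical and are not drawn from the claims or the cited references.

```python
import re
from urllib.parse import urlparse

def extract_url_features(url):
    """Split a candidate URL into word tokens and numeric structure features.

    Mirrors the claimed two-part feature set: (1) words appearing in the
    URL, and (2) numeric features describing the URL's structure. The
    specific counts chosen here are illustrative placeholders.
    """
    parsed = urlparse(url)
    # Word tokens: alphabetic runs taken from the host and path.
    words = re.findall(r"[a-zA-Z]+", parsed.netloc + parsed.path)
    # Numeric structure features: simple counts over the raw URL string.
    numeric = {
        "url_length": len(url),
        "num_digits": sum(c.isdigit() for c in url),
        "num_subdomains": max(parsed.netloc.count(".") - 1, 0),
        "num_special": sum(not c.isalnum() for c in url),
        "path_depth": parsed.path.count("/"),
    }
    return words, numeric
```

The word tokens would feed an embedding/LSTM path and the numeric features a fully connected path, per the claimed architecture.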
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 12-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0141897 issued to Seifert in view of US 2022/0092010 issued to Thyamagondlu et al. (Thyamagondulu) in view of US 2012/0291126 issued to Lagar-Cavilla et al. (Lagar) in view of US 2019/0310945 issued to Atamih et al. (Atamih).
As per claims 1, 12, 16, and 19, Seifert teaches an acceleration engine to host a security service to protect one or more computer programs executed by the host device, wherein the hardware-accelerated security service (Fig. 3, 7B, para. 63; element 350, an evaluation component used to determine whether content is malicious, which protects the computer programs/host) is to:
extract a plurality of features from data stored in the physical memory, the data being associated with the one or more computer programs(Fig.7B, para.102; extract features from the computer object);
determine, using a machine learning (ML) detection system, whether the one or more computer programs are subject to malicious activity based on the plurality of features extracted from the data stored in the physical memory(Fig.7B, para.103; Per block 709, based on the features (extracted at block 705), a similarity score is generated (e.g., by the unknown evaluator 211), via a deep learning model, between the computer object and each computer object of a plurality of computer objects known to contain malicious content. In some embodiments, the deep learning model is associated with a plurality of indications representing known malicious computer objects. The plurality of indications may be compared with an indication representing the computer object. In some embodiments, based at least in part on processing or running the indication of the computer object through the deep learning model, a similarity score is generated between the computer object and each of the plurality of known malicious computer objects. …… The distance may be specifically based on the exact feature values that the computer object has compared to the known malicious computer objects. For example, if the computer object has the exact feature values that have been weighted toward prominence or importance during training (e.g., as described with respect to block 708 of FIG. 7A) as some known malware computer object, then the distance between these two computer objects would be close within a threshold in feature space, such that the similarity score is high.); and
output an indication of the malicious activity responsive to a determination that the one or more computer programs are subject to the malicious activity(Fig.7B, para.108; [0108] Per block 713, one or more identifiers (e.g., names of particular malware families or files) representing at least one of the plurality of known malicious computer objects is provided (e.g., by the rendering component 217) or generated on a computing device. The one or more identifiers may indicate that the computer objects is likely malicious and/or the computer object likely belongs to a particular malicious family.).
Seifert, Fig. 8, para. 16, 113, teaches a computing device with a bus, memory, processor, I/O ports, etc.; however, Seifert does not explicitly teach wherein the data is obtained by the hardware-accelerated security service using out-of-band memory acquisitions isolated from the one or more computer programs and, as per claim 1, a host interface operatively coupled to physical memory associated with a host device; a central processing unit (CPU) operatively coupled to the host interface; an acceleration hardware engine operatively coupled to the host interface and the CPU, wherein the CPU and the acceleration hardware engine; as per claims 12 and 19, a computing system comprising: a data processing unit (DPU) comprising a host interface, a central processing unit (CPU), and an acceleration hardware engine, the DPU to host, and accelerated pipeline hardware coupled to the DPU; as per claim 16, a method comprising: by a data processing unit (DPU) coupled to a host device.
Thyamagondulu explicitly teaches a host interface operatively coupled to physical memory associated with a host device (Fig. 1, para. 19-34); a central processing unit (CPU) operatively coupled to the host interface (Fig. 1, para. 19-34); and an acceleration hardware engine operatively coupled to the host interface and the CPU (Fig. 1, element 104, hardware acceleration card); as per claims 12 and 19, a computing system comprising: a data processing unit (DPU) comprising a host interface, a central processing unit (CPU), and an acceleration hardware engine, the DPU to host, and accelerated pipeline hardware coupled to the DPU (Fig. 1, para. 19-34); and as per claim 16, a method comprising: by a data processing unit (DPU) coupled to a host device (Fig. 1, para. 19-34).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Seifert of determining whether content or data is malicious to use the hardware configuration of Thyamagondulu of a hardware acceleration card in order to provide the predictable result of determining whether content or data is malicious by a hardware acceleration card.
One of ordinary skill in the art would have been motivated to combine the teachings in order to use an IC and a direct memory access system within the IC that supports multiple different host processors (Thyamagondulu, para. 1).
Seifert in view of Thyamagondulu does not explicitly teach wherein the data is obtained by the hardware-accelerated security service using out-of-band memory acquisitions isolated from the one or more computer programs.
Lagar explicitly teaches wherein the data is obtained by a hardware-accelerated security service using acquisitions isolated from the one or more computer programs (Fig. 3-4, [0045], [0049]: Meanwhile, FIG. 4 shows a system for checking data integrity on a mobile device, according to an exemplary embodiment of the subject disclosure. Similar to FIG. 3, a hypervisor 440 resides on a memory of mobile device 420. Hardware 420 includes the components of a mobile device such as the one described in FIGS. 2A-2B. Hypervisor 440 mediates between hardware 420 and domains 444 and 454, and guarantees isolation 450 between the guest domain (the monitored system) 444 and the trusted domain (the monitoring tool) 454. In this embodiment, guest domain 444 includes an operating system having kernel data pages 446, among other pages that are not shown, such as executable code pages. Trusted domain 454 includes a malware scanner 456, such as the Patagonix or Gibraltar scanners described above, a database of invariants 459, and other minimal logic that is not shown. Hypervisor 440 and the trusted domain 454 together form the Trusted Computing Base (TCB) of the system. When trusted domain 454 detects a compromised page, it is capable of taking over the user interface on the mobile device 420, alerting the user, and providing containment options.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Seifert in view of Thyamagondulu of determining whether content or data is malicious to apply the teachings of Lagar of a malware scanner in a trusted domain scanning for malware in a guest domain in order to provide the predictable result of scanning for viruses in different domains from a trusted domain.
One of ordinary skill in the art would have been motivated to combine the teachings in order to provide the security and trust of a malware scanner.
Seifert in view of Thyamagondulu in view of Lagar does not explicitly teach out-of-band memory acquisition.
Atamih explicitly teaches the known method of out-of-band memory acquisition (Title, para. 31-38; out-of-band memory acquisition).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Seifert in view of Thyamagondulu in view of Lagar of scanning for viruses in different domains from a trusted domain to apply the known method of Atamih of out-of-band memory acquisition in order to provide the predictable result of scanning for viruses in different domains from a trusted domain using out-of-band memory acquisition.
One of ordinary skill in the art would have been motivated to combine the teachings in order to reduce security risk (Atamih, para. 31).
As per claim 2, Seifert in view of Thyamagondulu in view of Lagar in view of Atamih teaches the integrated circuit of claim 1, wherein the integrated circuit is a data processing unit (DPU), wherein the DPU is a programmable data center infrastructure on a chip (Thyamagondulu, para. 111, 137; Architecture 600 may be used to implement IC 132, for example. In one aspect, architecture 600 may be implemented within a programmable IC. For example, architecture 600 may be used to implement a field programmable gate array (FPGA). Architecture 600 may also be representative of a system-on-chip (SoC) type of IC. An SoC is an IC that includes a processor that executes program code and one or more other circuits. The other circuits may be implemented as hardwired circuitry, programmable circuitry, and/or a combination thereof. The circuits may operate cooperatively with one another and/or with the processor). Motivation to combine set forth in claim 1 above.
As per claims 3, 13, and 17, Seifert in view of Thyamagondulu in view of Lagar in view of Atamih teaches the computing system of claims 1, 12, and 16, wherein the one or more computer programs comprises at least one of a host operating system (OS), an application (Seifert, para. 22, by a stand-alone application, a service or hosted service (stand-alone or in combination with another hosted service), or a plug-in to another product, to name a few), a guest operating system (Lagar, Fig. 3-4), or a guest application, wherein the one or more computer programs reside in a first computing domain, wherein the hardware-accelerated security service resides in a second computing domain different than the first computing domain, and wherein the ML detection system resides in the second computing domain or a third computing domain different than the first computing domain and the second computing domain (Seifert, Fig. 7B, para. 103; Per block 709, based on the features (extracted at block 705), a similarity score is generated (e.g., by the unknown evaluator 211), via a deep learning model, between the computer object and each computer object of a plurality of computer objects known to contain malicious content. In some embodiments, the deep learning model is associated with a plurality of indications representing known malicious computer objects; Lagar, Fig. 3-4). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Seifert's machine learning with Lagar's trusted domain so as to place the machine learning in the trusted domain. One of ordinary skill in the art would have been motivated to combine the teachings in order to provide security for the machine learning, since it is on a trusted domain.
As per claims 4 and 14, Seifert in view of Thyamagondulu in view of Lagar in view of Atamih teaches the computing system of claims 1 and 12, wherein the malicious activity is caused by malware (Seifert, para. 4, 26; malware; Lagar, para. 45, 49; malware scanner), wherein the hardware-accelerated security service is out-of-band security software in a trusted domain that is different from and isolated from the malware (Lagar, Fig. 3-4, para. 45, 49; malware scanner in trusted domain to scan malware in guest domain; Atamih, Title, para. 31-38; out-of-band memory acquisition). Motivation to combine set forth in claims 1 and 12 above.
As per claims 5 and 15, Seifert in view of Thyamagondulu in view of Lagar in view of Atamih teaches the computing system of claims 1 and 12, further comprising a direct memory access (DMA) controller coupled to the host interface, wherein the DMA controller is to read the data from the physical memory via the host interface (Thyamagondulu, para. 16, 46; When implementing a DMA data operation, DMA system 204 is capable of accessing queues and/or buffers 226 stored in volatile memory 134, queues and/or buffers 224, and/or queues and/or buffers 228. DMA system 204 is capable of fetching descriptors from any of the address spaces shown to perform DMA operations to support multiple host processors. For example, DMA system 204 may fetch descriptor(s), which specify DMA operations, from queues and/or buffers 224, 226, and/or 228. DMA system 204 is capable of performing DMA operations in the Host-to-Card (H2C) direction or the Card-to-Host (C2H) direction (where "host" refers to host processor 106 and/or host processor 150)), wherein the host interface is a Peripheral Component Interconnect Express (PCIe) interface (Thyamagondulu, para. 20, 45, 60, 72; PCIe). Motivation to combine set forth in claims 1 and 12 above.
Claims 6-8, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2021/0141897 issued to Seifert in view of US 2022/0092010 issued to Thyamagondlu et al. (Thyamagondulu) in view of US 2012/0291126 issued to Lagar-Cavilla et al. (Lagar) in view of US 2019/0310945 issued to Atamih et al. (Atamih) in view of US 2020/0104498 issued to Smith et al. (Smith) in view of US 2022/0046057 issued to Kutt et al. (Kutt).
As per claims 6 and 18, Seifert in view of Thyamagondulu in view of Lagar in view of Atamih teaches the integrated circuit of claims 1 and 16, wherein: the malicious activity is caused by ransomware (Seifert, para. 2; ransomware is known in the art as a malicious activity); however, the combination does not explicitly teach the hardware-accelerated security service is to obtain a series of snapshots of the data stored in the physical memory, each snapshot representing the data at a point in time; the ML detection system comprises: feature extraction logic to extract a set of features from different memory plugins from each snapshot of the series of snapshots; and a random-forest classification model, wherein the random-forest classification model is a time-series-based model trained to classify a process as ransomware or non-ransomware using cascading of different numbers of snapshots in the series of snapshots.
Smith teaches the hardware-accelerated security service is to obtain a series of snapshots of the data stored in the physical memory, each snapshot representing the data at a point in time(Abstract, claim 14; para.17,22; Due to the processing efficiencies gained by the processes described above, the average time to complete the processing of the snapshots was less than thirty five seconds per snapshot; for a feature vector extraction, the average time for generating all images or extracting the byte sequences was less than ten seconds when executed on a 2.5 GHz machine as graphically shown in FIG. 2. Faster systems and the efficiencies described above render real-time detections); the ML detection system comprises: feature extraction logic to extract a set of features from different memory plugins from each snapshot of the series of snapshots(Abstract, para.17, claim 14; detect malware by training a rule-based model, a functional based model, and a deep learning-based model from a memory snapshot of a malware free operating state of a monitored device. The system extracts a feature set from a second memory snapshot captured from an operating state of the monitored device and processes the feature set by the rule-based model, the functional-based model, and the deep learning-based model.); and classification model is a time-series-based model trained to classify a process as malware or non-malware using cascading of different numbers of snapshots in the series of snapshots(Abstract, para.17, claim 14; detect malware by training a rule-based model, a functional based model, and a deep learning-based model from a memory snapshot of a malware free operating state of a monitored device. The system extracts a feature set from a second memory snapshot captured from an operating state of the monitored device and processes the feature set by the rule-based model, the functional-based model, and the deep learning-based model.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Seifert in view of Thyamagondulu in view of Lagar in view of Atamih teaching of determining whether content or data is malicious to apply the known method of Smith of taking a snapshot of memory and using a deep-learning-based model to determine malware, and to substitute Smith's teaching of ransomware for Seifert's teaching of malware, in order to provide the predictable result of determining whether content is ransomware through the use of a snapshot of memory.
One of ordinary skill in the art would have been motivated to combine the teachings in order to detect malware by comparing different snapshots (Smith, para. 18).
Seifert in view of Thyamagondulu in view of Lagar in view of Atamih in view of Smith, however, does not explicitly teach a random-forest classification model.
Kutt explicitly teaches a random-forest classification model (para. 146; random forest classifier).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Seifert in view of Thyamagondulu in view of Lagar in view of Atamih in view of Smith teaching of determining whether content is ransomware through the use of a snapshot of memory to apply the known method of Kutt of a random forest classifier in order to provide the predictable result of determining whether content is malicious using a random forest classifier on a snapshot of memory.
One of ordinary skill in the art would have been motivated to combine the teachings in order to use a model/classifier that is highly accurate, flexible, robust, and scalable.
As per claim 7, Seifert in view of Thyamagondulu in view of Lagar in view of Atamih in view of Smith in view of Kutt teaches the integrated circuit of claim 6, wherein the cascading of different numbers of snapshots in the series of snapshots comprises: a first number of snapshots obtained over a first amount of time; a second number of snapshots obtained over a second amount of time greater than the first amount of time, the second number of snapshots comprising the first number of snapshots; and a third number of snapshots obtained over a third amount of time greater than the second amount of time, the third number of snapshots comprising the second number of snapshots (Smith, Abstract, claim 14; para. 17, 22; Due to the processing efficiencies gained by the processes described above, the average time to complete the processing of the snapshots was less than thirty five seconds per snapshot; it would be obvious to one of ordinary skill in the art, and follows by definition, that the snapshots are taken at different times, i.e., a snapshot at 1 min, 2 min, etc., each of which would include the previous snapshots). Motivation to combine set forth in claim 6.
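The cascaded, nested snapshot windows recited in claim 7 (each later, longer window containing the earlier one) can be sketched as follows. This is illustrative only; the function name and the window sizes are hypothetical and are not taken from the claims or the cited references.

```python
def cascading_snapshot_windows(snapshots, sizes=(3, 6, 12)):
    """Build nested snapshot windows from a time-ordered snapshot series.

    Each window is a prefix of the series, so every larger window
    contains all snapshots of the smaller ones, matching the claimed
    cascade (first/second/third numbers of snapshots over growing
    amounts of time). The sizes are illustrative placeholders.
    """
    return [snapshots[:n] for n in sorted(sizes)]
```

A time-series classifier would then score each window separately, trading detection latency against confidence.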
As per claims 8 and 20, Seifert in view of Thyamagondulu in view of Lagar in view of Atamih teaches the integrated circuit of claims 1 and 16, wherein: the malicious activity is caused by a malicious uniform resource locator (URL) (Seifert, para. 4, 25; malicious URL) and classify the candidate URL as malicious or benign using the set of features (Seifert, para. 4, 25, 28; determining whether a URL is malware or not); however, the combination does not explicitly teach the hardware-accelerated security service is to obtain a snapshot of the data stored in the physical memory, the snapshot representing the data at a point in time; the ML detection system comprises: feature extraction logic to extract a set of features from the snapshot, the set of features comprising words in a candidate URL and numeric features of a URL structure of the candidate URL; and a binary classification model trained to classify.
Smith explicitly teaches the hardware-accelerated security service is to obtain a snapshot of the data stored in the physical memory, the snapshot representing the data at a point in time (Abstract, claim 14; para. 17, 22; Due to the processing efficiencies gained by the processes described above, the average time to complete the processing of the snapshots was less than thirty five seconds per snapshot; for a feature vector extraction, the average time for generating all images or extracting the byte sequences was less than ten seconds when executed on a 2.5 GHz machine as graphically shown in FIG. 2. Faster systems and the efficiencies described above render real-time detections); the ML detection system comprises: feature extraction logic to extract a set of features from the snapshot (Abstract, para. 17, claim 14; detect malware by training a rule-based model, a functional based model, and a deep learning-based model from a memory snapshot of a malware free operating state of a monitored device. The system extracts a feature set from a second memory snapshot captured from an operating state of the monitored device and processes the feature set by the rule-based model, the functional-based model, and the deep learning-based model).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Seifert in view of Thyamagondulu in view of Lagar in view of Atamih teaching of determining whether content or data is malicious by a hardware acceleration card to apply the known method of Smith of taking a snapshot of memory to determine malware in order to provide the predictable result of determining whether content is malicious through the use of a snapshot of memory.
One of ordinary skill in the art would have been motivated to combine the teachings in order to detect malware by comparing different snapshots (Smith, para. 18).
Seifert in view of Thyamagondulu in view of Lagar in view of Atamih in view of Smith, however, does not explicitly teach the set of features comprising words in a candidate URL and numeric features of a URL structure of the candidate URL; and a binary classification model trained to classify.
Kutt explicitly teaches the set of features comprising words in a candidate URL and numeric features of a URL structure of the candidate URL (Fig. 19; para. 208; extract a set of tokens from the set of input files to generate a character encoding and a token encoding; para. 216; the URL is split between characters (words) and tokens (numbers)); and a binary classification model trained to classify (para. 129; binary malware classification; para. 193; binary model).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the Seifert in view of Thyamagondulu in view of Lagar in view of Atamih in view of Smith teaching of determining whether content is malicious through the use of a snapshot of memory to apply the known method of Kutt of tokenizing a URL into characters/words and tokens/numbers and the use of binary classification to determine malware in order to provide the predictable result of determining whether content is malicious through the use of a snapshot of memory and tokenizing of the URL for binary classification.
One of ordinary skill in the art would have been motivated to combine the teachings in order to easily determine whether content is malicious, because binary classification has a binary result of positive or negative.
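The binary classification step discussed above (a fully connected layer over the combined URL features producing a malicious/benign decision, as recited in claims 9 and 12) can be sketched as follows. This is a minimal illustration; the weights and bias are arbitrary placeholders, not trained values, and the function name is hypothetical.

```python
import math

def binary_classify(lstm_output, numeric_features, weights, bias):
    """Single fully connected layer plus sigmoid over the concatenation
    of an LSTM output vector and numeric URL-structure features.

    Returns a malicious/benign label and the sigmoid probability. The
    weights/bias here are illustrative, not a trained model.
    """
    # Concatenate the two claimed inputs to the fully connected layer.
    x = list(lstm_output) + list(numeric_features)
    # Weighted sum (dense layer with a single output unit).
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    # Sigmoid squashes the score into a probability of "malicious".
    p = 1.0 / (1.0 + math.exp(-z))
    return ("malicious" if p >= 0.5 else "benign"), p
```

In practice the weights would come from training the full embedding/LSTM/dense pipeline end to end on labeled URLs.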
Allowable Subject Matter
Claims 9-11 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and overcoming the double patenting rejection.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892.
US 2017/0134397 issued to Dennison et al., teaches a computer system identifies malicious Uniform Resource Locator (URL) data items from a plurality of unscreened data items that have not been previously identified as associated with malicious URLs. The system can execute a number of pre-filters to identify a subset of URLs in the plurality of data items that are likely to be malicious. A scoring processor can score the subset of URLs based on a plurality of input vectors using a suitable machine learning model. Optionally, the system can execute one or more post-filters on the score data to identify data items of interest. Such data items can be fed back into the system to improve machine learning or can be used to provide a notification that a particular resource within a local network is infected with malicious software.
US 2021/0377301 issued to Desai et al., teaches obtaining a Uniform Resource Locator (URL) for a site on the Internet; analyzing the URL with a Machine Learning (ML) model to determine whether or not the site is suspicious for phishing; responsive to the URL being suspicious for phishing, loading the site to determine whether or not an associated brand of the site is legitimate or not; and, responsive to the site being not legitimate for the brand, categorizing the URL for phishing and performing a first action based thereon. The systems and methods can further include, responsive to the URL being not suspicious for phishing or the site being legitimate for the brand, categorizing the URL as legitimate and performing a second action based thereon.
US 2015/0281259 issued to Ranum et al., teaches leverage active network scanning and passive network monitoring to provide strategic anti-malware monitoring in a network. In particular, the system and method described herein may remotely connect to managed hosts in a network to compute hashes or other signatures associated with processes running thereon and suspicious files hosted thereon, wherein the hashes may communicated to a cloud database that aggregates all known virus or malware signatures that various anti-virus vendors have catalogued to detect malware infections without requiring the hosts to have a local or resident anti-virus agent.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BACKHEAN TIV whose telephone number is (571)272-5654. The examiner can normally be reached on Mon.-Thurs. 5:30-3:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, TONIA DOLLINGER can be reached on (571) 272-4170. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BACKHEAN TIV/
Primary Examiner
Art Unit 2459