DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 were cancelled, and claims 21-40 were added as new, by preliminary amendment.
Double Patenting
The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a non-statutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claim comparison — Instant application 18/817,448 vs. US 12079346 B2 (the instant claims are reproduced first, followed by the corresponding patent claims):
21. An apparatus, comprising:
one or more processors; and
one or more computer-readable non-transitory storage media coupled to the one or more processors and comprising instructions that, when executed by the one or more processors, cause the apparatus to perform operations comprising:
causing generation of a prediction model based on training data;
causing application of the prediction model to input data corresponding to first software vulnerabilities,
wherein each of the first software vulnerabilities is associated with a first prediction that a first exploit will be developed for that particular first software vulnerability; and
receiving, based on the application of the prediction model to the input data, output data that indicates, for each of the first software vulnerabilities, a second prediction that the first exploit will also be used in an attack.
1. A device, comprising:
one or more processors; and
one or more computer-readable non-transitory storage media coupled to the one or more processors and comprising instructions that, when executed by the one or more processors, cause the device to perform operations comprising:
receiving input data comprising one or more features for each software vulnerability of a plurality of software vulnerabilities;
causing application of a prediction model to the input data; and
generating, based on the application of the prediction model to the input data, output data, wherein:
the output data indicates a prediction of whether an exploit will be developed for each software vulnerability of the plurality of software vulnerabilities; and
the one or more features indicate a number of copies of software affected by each software vulnerability of the plurality of software vulnerabilities.
The device of claim 1, wherein the prediction of whether the exploit will be developed for each software vulnerability of the plurality of software vulnerabilities comprises a prediction of whether the exploit will be developed for each software vulnerability of the plurality of software vulnerabilities within a particular number of days.
6. The device of claim 1, wherein the output data indicates, for each software vulnerability of the plurality of software vulnerabilities, a probability that the exploit will be developed for each software vulnerability of the plurality of software vulnerabilities and used in an attack.
28. A method, comprising:
causing generation of a prediction model based on training data;
causing application of the prediction model to input data corresponding to first software vulnerabilities,
wherein each of the first software vulnerabilities is associated with a first prediction that a first exploit will be developed for that particular first software vulnerability; and
receiving, based on the application of the prediction model to the input data, output data that indicates, for each of the first software vulnerabilities, a second prediction that the first exploit will also be used in an attack.
7. A method, comprising:
receiving input data comprising one or more features for each software vulnerability of a plurality of software vulnerabilities;
causing application of a prediction model to the input data; and
generating, based on the application of the prediction model to the input data, output data, wherein:
the output data indicates a prediction of whether an exploit will be developed for each software vulnerability of the plurality of software vulnerabilities; and
the one or more features indicate a number of copies of software affected by each software vulnerability of the plurality of software vulnerabilities.
10. The method of claim 7, wherein the prediction of whether the exploit will be developed for each software vulnerability of the plurality of software vulnerabilities comprises a prediction of whether the exploit will be developed for each software vulnerability of the plurality of software vulnerabilities within a particular number of days.
12. The method of claim 7, wherein the output data indicates, for each software vulnerability of the plurality of software vulnerabilities, a probability that the exploit will be developed for each software vulnerability of the plurality of software vulnerabilities and used in an attack.
35. One or more computer-readable non-transitory storage media embodying instructions that, when executed by a processor, cause the processor to perform operations comprising:
causing generation of a prediction model based on training data;
causing application of the prediction model to input data corresponding to first software vulnerabilities,
wherein each of the first software vulnerabilities is associated with a first prediction that a first exploit will be developed for that particular first software vulnerability; and
receiving, based on the application of the prediction model to the input data, output data that indicates, for each of the first software vulnerabilities, a second prediction that the first exploit will also be used in an attack.
13. One or more computer-readable non-transitory storage media embodying instructions that, when executed by a processor, cause the processor to perform operations comprising:
receiving input data comprising one or more features for each software vulnerability of a plurality of software vulnerabilities;
causing application of a prediction model to the input data; and
generating, based on the application of the prediction model to the input data, output data, wherein:
the output data indicates a prediction of whether an exploit will be developed for each software vulnerability of the plurality of software vulnerabilities; and
the one or more features indicate a number of copies of software affected by each software vulnerability of the plurality of software vulnerabilities.
16. The one or more computer-readable non-transitory storage media of claim 13, wherein the prediction of whether the exploit will be developed for each software vulnerability of the plurality of software vulnerabilities comprises a prediction of whether the exploit will be developed for each software vulnerability of the plurality of software vulnerabilities within a particular number of days.
18. The one or more computer-readable non-transitory storage media of claim 13, wherein the output data indicates, for each software vulnerability of the plurality of software vulnerabilities, a probability that the exploit will be developed for each software vulnerability of the plurality of software vulnerabilities and used in an attack.
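The instant independent claims (21, 28, 35) recite a two-step prediction pipeline: a first prediction that an exploit will be developed for a vulnerability, and a second prediction that the exploit will also be used in an attack. A minimal illustrative sketch of that data flow follows; every name, feature, and weight here is hypothetical and not part of the claims or the record (the reference patent claims do recite a number of affected software copies as one input feature, which is reused below).

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    # Hypothetical input features; the reference patent claims recite, e.g.,
    # the number of copies of software affected as one feature.
    affected_copies: int
    severity: float  # assumed 0.0-1.0 severity score (illustrative only)

def predict_exploit_developed(v: Vulnerability) -> float:
    """First prediction: probability that an exploit will be developed."""
    # Toy stand-in for a trained prediction model: wider deployment and
    # higher severity raise the predicted probability.
    exposure = min(v.affected_copies / 1_000_000, 1.0)
    return 0.5 * exposure + 0.5 * v.severity

def predict_used_in_attack(v: Vulnerability, p_developed: float) -> float:
    """Second prediction: probability the exploit is also used in an attack."""
    # The second-stage output is conditioned on the first-stage prediction.
    return p_developed * v.severity

for v in [Vulnerability(2_000_000, 0.9), Vulnerability(10_000, 0.2)]:
    p1 = predict_exploit_developed(v)
    p2 = predict_used_in_attack(v, p1)
    print(round(p1, 3), round(p2, 3))
```

The point of the sketch is only the claimed structure: the second prediction consumes the first prediction's output rather than re-scoring the raw features.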
Claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. US 12079346 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because they recite similar limitations with only minor, obvious variations.
Similarly, claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. US 11275844 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because they recite similar limitations with only minor, obvious variations.
Similarly, claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. US 10762212 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because they recite similar limitations with only minor, obvious variations.
Similarly, claims 21-40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. US 10114954 B1. Although the claims at issue are not identical, they are not patentably distinct from each other because they recite similar limitations with only minor, obvious variations.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21-22, 24, 26-29, 31, 33-36, 38, and 40 are rejected under 35 U.S.C. 103 as being unpatentable over Copty et al. (US 20180232523 A1; IDS supplied).
With regard to claims 21, 28, and 35, Copty discloses an apparatus (FIG. 3 and associated text; the apparatus also performs the method of FIGS. 2a-2b; and one or more computer-readable non-transitory storage media embodying instructions that, when executed by a processor, cause the processor to perform operations, per [0006]), comprising:
one or more processors ([0091] In some exemplary embodiments, an Apparatus 300 may comprise one or more Processor(s) 302. Processor 302 may be a Central Processing Unit (CPU), a microprocessor, an electronic circuit, an Integrated Circuit (IC) or the like. Processor 302 may be utilized to perform computations required by Apparatus 300 or any of it subcomponents.); and
one or more computer-readable non-transitory storage media coupled to the one or more processors and comprising instructions that, when executed by the one or more processors, cause the apparatus to perform operations ([0006] Yet another exemplary embodiment of the disclosed subject matter is a computer program product comprising a non-transitory computer readable storage medium retaining program instructions, which program instructions when read by a processor, cause the processor to perform a method) comprising:
causing generation of a prediction model based on training data ([0061] On Step 220, a predictive model may be trained based on the set of variants and labels thereof. In some exemplary embodiments, the predictive model may be trained to determine which data is useful and which data is not needed, to give accurate predictions. The set of variant inputs may be used as a training data set to build the predictive model. A test set comprising inputs from the set of variant inputs may be used to validate the predictive model, after training thereof without the test set.[0067]);
causing application of the prediction model to input data corresponding to first software vulnerabilities ([0063] On Step 225, the predictive model may be provided to an input analysis platform configured to analyze inputs provided to the program, prior to executing the program with the inputs. The predictive model may enable the input analysis platform to predict whether an input would cause the program to reach the vulnerability prior to executing the program with the input.), wherein each of the first software vulnerabilities is associated with a first prediction that a first exploit will be developed for that particular first software vulnerability ([0062] In some exemplary embodiments, an initial training set may be utilized prior to Step 220 to train an initial predictive model. The initial predictive model may be configured to predict whether an input would cause the program to reach one or more vulnerabilities…. In some exemplary embodiments, the refinement of the initial predictive model may be performed without re-using the initial training set. In some exemplary embodiments, the predictive model may be configured to predict whether an input would cause the program to reach the one or more vulnerabilities or the vulnerability.).
Copty does not explicitly teach, in the same embodiment: receiving, based on the application of the prediction model to the input data, output data that indicates, for each of the first software vulnerabilities, a second prediction that the first exploit will also be used in an attack.
However, in some other embodiments, Copty further teaches receiving, based on the application of the prediction model to the input data, output data that indicates, for each of the first software vulnerabilities, a second prediction that the first exploit will also be used in an attack ([0066] However, after one hour of generation, the generated inputs may be used to train the predictive model and create a first predictive model. After additional inputs are generated (e.g., in a second hour), a second predictive model may be created based on all the inputs generated thus far. It will be appreciated that any timeframe for an iteration may be defined by a user based on her preferences. [0046-0048] Predictive Model 150 may determine whether Input 142 would cause Program 102 to reach the vulnerability. Additionally or alternatively, Predictive Model 150 may output a confidence score for the classification of Input 142. The confidence score may be a probability of the classification being correct (e.g. the estimated probability that a safe-labeled input would cause Program 102 to not reach the vulnerability and the estimated probability that an unsafe-labeled input would cause Program 102 to reach the vulnerability).).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Copty's teachings from one embodiment with those of another in order to provide exploit vulnerability detection and prevention in general, and exploit vulnerability prevention using supervised learning from fuzz testing and security analysis in particular (Copty [0001]).
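Copty's cited passages ([0046]-[0048]) describe a predictive model that classifies an input as likely to reach the vulnerability or not, and outputs a confidence score, i.e., the estimated probability that the assigned label is correct. A minimal sketch of that cited behavior follows; the thresholding and scoring here are hypothetical simplifications, not Copty's actual implementation.

```python
def classify_input(model_score: float, threshold: float = 0.5):
    """Label an input 'unsafe' (predicted to reach the vulnerability) or
    'safe', and report a confidence score for that label, in the manner
    Copty [0046] describes.  The scoring function itself is hypothetical."""
    label = "unsafe" if model_score >= threshold else "safe"
    # Confidence is the estimated probability that the label is correct.
    confidence = model_score if label == "unsafe" else 1.0 - model_score
    return label, confidence

print(classify_input(0.8))  # an input the model expects to reach the vulnerability
print(classify_input(0.3))  # an input the model expects to be safe
```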
With regard to claims 22, 29, and 36, Copty further discloses wherein: the training data corresponds to second software vulnerabilities ([0018] In some exemplary embodiments, the input analysis platform may utilize an initial predictive model that is configured to predict whether an input would cause the program to reach one or more vulnerabilities. The one or more vulnerabilities may or may not comprise vulnerability that the predictive model is trained based on. When the predictive model is provided to the input analysis platform, it may replace the initial predictive model. In response to the replacement, the input analysis platform may utilize the predictive model instead of the initial predictive model, to perform predictions of whether an input would cause the program to the vulnerability. In some exemplary embodiments, the predictive model may be a refinement of the initial predictive model. The predictive model may be configured to predict whether an input would cause the program to reach the one or more vulnerabilities or the vulnerability. In some exemplary embodiments, the predictive model may be trained based on the initial predictive model in addition to the set of variant inputs and labels thereof. See also [0006].); and each of the second software vulnerabilities comprises a second exploit developed for that particular second software vulnerability ([0066] However, after one hour of generation, the generated inputs may be used to train the predictive model and create a first predictive model. After additional inputs are generated (e.g., in a second hour), a second predictive model may be created based on all the inputs generated thus far. It will be appreciated that any timeframe for an iteration may be defined by a user based on her preferences. Note: the second predictive model uses the output of the first predictive model. [0046-0048] Predictive Model 150 may determine whether Input 142 would cause Program 102 to reach the vulnerability.
Additionally or alternatively, Predictive Model 150 may output a confidence score for the classification of Input 142. The confidence score may be a probability of the classification being correct (e.g. the estimated probability that a safe-labeled input would cause Program 102 to not reach the vulnerability and the estimated probability that an unsafe-labeled input would cause Program 102 to reach the vulnerability)).
With regard to claims 24, 31, and 38, Copty further discloses wherein each of the second software vulnerabilities indicates whether the second exploit was used in a successful attack ([0046] Additionally or alternatively, Predictive Model 150 may output a confidence score for the classification of Input 142. The confidence score may be a probability of the classification being correct (e.g. the estimated probability that a safe-labeled input would cause Program 102 to not reach the vulnerability and the estimated probability that an unsafe-labeled input would cause Program 102 to reach the vulnerability)).
With regard to claims 27 and 34, Copty further discloses the operations further comprising generating the prediction model using machine learning ([0026] Another technical solution is to perform feature learning. In some exemplary embodiments, feature learning may be performed over the set of variant inputs, to automatically identify a vector of features representing inputs to be used by the machine learning modules. The feature learning may be performed based on the sample input and information provided about the sample input during generation of the variant inputs process (e.g., the mutation performed in genetic fuzz testing that created a variant input reaching the vulnerability), information provided by the auto-encoder neural network or the like. In some exemplary embodiments, pairs of inputs may be obtained, either by comparing the pairs after each input is created or by method of production thereof.).
With regard to claims 26, 33, and 40, Copty further discloses wherein the prediction is used to adjust a risk score of one or more of the first software vulnerabilities ([0046-0048] Predictive Model 150 may determine whether Input 142 would cause Program 102 to reach the vulnerability. Additionally or alternatively, Predictive Model 150 may output a confidence score for the classification of Input 142. The confidence score may be a probability of the classification being correct (e.g. the estimated probability that a safe-labeled input would cause Program 102 to not reach the vulnerability and the estimated probability that an unsafe-labeled input would cause Program 102 to reach the vulnerability). [0037] In some exemplary embodiments, Fuzzing Platform 110 may be configured to generate inputs based on a coverage metric, such as code coverage metric, path coverage metric, branch coverage metric, or the like. Fuzzing Platform 110 may employ a heuristic approach to input generation to improve the score of the coverage metric.).
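Claims 26, 33, and 40 recite using the prediction to adjust a vulnerability's risk score. A minimal sketch of one way such an adjustment could work follows; the 0-10 (CVSS-like) scale, the linear weighting, and the cap are all assumptions for illustration, not limitations from the claims or from Copty.

```python
def adjust_risk_score(base_score: float, p_attack: float,
                      weight: float = 2.0) -> float:
    """Raise a vulnerability's base risk score (assumed CVSS-like, 0-10)
    in proportion to the predicted probability that its exploit will be
    used in an attack.  The weighting scheme is hypothetical."""
    return min(base_score + weight * p_attack, 10.0)

print(adjust_risk_score(7.5, 0.9))   # high attack likelihood raises the score
print(adjust_risk_score(7.5, 0.05))  # low likelihood barely moves it
```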
Claims 23, 25, 30, 32, 37, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over Copty et al. (US 20180232523 A1; IDS supplied) in view of Sagoo et al. (US 20070192866 A1).
With regard to claims 23, 30, and 37, Copty does not teach, but Sagoo teaches, wherein each of the software vulnerabilities indicates whether the exploit was developed within a particular time period ([0121] E-mail worms (a threat to which systems might be exposed) might be described (in two dimensions) according to the probability that an e-mail sent across Internet links (an element of the Attack risk entity) will be carrying a worm of a given age (in hours) which exploits a software vulnerability of a given age (in days).). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Copty's device/product/method with the teaching of Sagoo in order to prevent malicious application behaviors, and more particularly, to use information on malicious application behaviors among devices (Sagoo [0003]).
With regard to claims 25, 32, and 39, Copty in view of Sagoo discloses wherein each of the second software vulnerabilities indicates whether the second exploit was used in the successful attack (Copty [0046] Additionally or alternatively, Predictive Model 150 may output a confidence score for the classification of Input 142. The confidence score may be a probability of the classification being correct (e.g. the estimated probability that a safe-labeled input would cause Program 102 to not reach the vulnerability and the estimated probability that an unsafe-labeled input would cause Program 102 to reach the vulnerability)), and whether the exploit was used in the attack within a particular time period (Sagoo [0121] E-mail worms (a threat to which systems might be exposed) might be described (in two dimensions) according to the probability that an e-mail sent across Internet links (an element of the Attack risk entity) will be carrying a worm of a given age (in hours) which exploits a software vulnerability of a given age (in days).). The motivation is the same as stated for claim 23.
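Claims 23, 25, and their counterparts recite whether an exploit was developed, or used in an attack, within a particular time period, and Sagoo [0121] describes exploits and vulnerabilities in terms of their age in hours or days. A minimal sketch of a time-windowed label of that kind follows; the function name, field names, dates, and 30-day default window are all hypothetical.

```python
from datetime import date, timedelta
from typing import Optional

def exploited_within_window(disclosed: date, exploit_seen: Optional[date],
                            window_days: int = 30) -> bool:
    """Label for the 'within a particular number of days' limitation:
    True if an exploit appeared within the window after the vulnerability
    was disclosed.  Dates and window length are illustrative only."""
    if exploit_seen is None:
        return False  # no exploit was ever observed
    return exploit_seen - disclosed <= timedelta(days=window_days)

print(exploited_within_window(date(2024, 1, 1), date(2024, 1, 15)))  # True
print(exploited_within_window(date(2024, 1, 1), date(2024, 3, 1)))   # False
print(exploited_within_window(date(2024, 1, 1), None))               # False
```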
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED WALIULLAH whose telephone number is (571) 270-7987. The examiner can normally be reached from 8:30 AM to 4:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yin-Chen Shaw, can be reached at 571-272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED WALIULLAH/Primary Examiner, Art Unit 2498