Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
Claims 1 and 18-20 have been amended.
Claims 24 and 25 have been newly added.
Claims 1-25 are pending.
Response to Arguments
Applicant’s arguments filed on 11/24/2025 have been fully considered.
With respect to the arguments regarding the 35 U.S.C. 103 rejection of claim 1, applicant’s representative has stated that Muthaiah-Niemela-MA fail to teach the newly amended limitation of “pre-filtering the network traffic is determined based at least in part on converting at least a subset of the second set of features into corresponding looser, more sensitive versions of the subset of the second set of features”. Examiner respectfully disagrees. Muthaiah teaches ([Muthaiah, Col. 5 lines 8-22] “The filter 42 uses characteristic data either provided directly in the data packet or is derived from the data of the data packet. The characteristic data is compared to predetermined parameters as determined by the host vehicle. … Examples of data collected by the vehicle interface device that is used for determining the predetermined parameters may include, but is not limited to, GPS data, speed, velocity, acceleration, and steering angle data, which assist in determining a position or trajectory of the host vehicle relative to the remote vehicles. It should be understood that the filter 42 functions as a pre-security processing routine to filter and discard unwanted V2V communication messages.”) ([Muthaiah, col. 5 lines 33-56] “In step 51, contents of the data packet are examined prior to checking the digital signature of the data packet. In step 52, a filtering decision is made by comparing characteristic data of the received data packet to the predetermined parameter set by the host vehicle. … Characteristic data may be obtained directly from the data packet without additional processing or may be derived using the data and other contents within the data packet. The predetermined parameter may include, but is not limited to, a comparable parameter of the host vehicle for determining positional data or attitude data of the host vehicle or may include a parameter relating to spurious data of the transmitted message such as malicious node tampering. 
… In step 53, if the information within the data packets is in compliance with the predetermined parameter, then the data packet is transferred to the security layer for authentication of the data packet. In step 54, if the information within the data packet does is not in compliance with the predetermined parameter, then the data packet is discarded.”) ([Muthaiah, col. 6 lines 3-9] “Characteristic data contained in the data packet that may be used in determining whether the data packet is beneficial or not beneficial to the host vehicle includes, but is not limited to, global positioning data … signatures, signal quality”) ([Muthaiah, col. 9 lines 56-59] “In step 68, the data packet is transferred to the security layer to authenticate the digital signature of the respective data packet.”) As can be seen from these citations, Muthaiah teaches pre-filtering a data packet before sending it to a security layer for further analysis. The pre-filtering is performed based on characteristic data that is compared to predetermined parameters determined by the host vehicle. Muthaiah teaches that the characteristic data can take various forms pertaining to the data packet, including global positioning data, signatures, signal quality, and other forms. All of this characteristic data is compared to predetermined parameters set by the host vehicle to discard a packet before it is transmitted to the security layer. The security layer, as described in Muthaiah, analyzes only the digital signature. The characteristic data that the pre-filter of Muthaiah uses is therefore “looser” and more sensitive than the single signature authentication of the security layer. Muthaiah recognizes the need to pre-filter more loosely ([Muthaiah,] “nodes generate data packets with incorrect data, possibly forged digital signatures. 
The V2V communication system is flooded with these packets leading to high packet collision lost at the physical layer, higher buffer loss at the security layer, and the distribution of incorrect data”) ([Muthaiah, ] “A filter is shown generally at 42, and is implemented to efficiently reduce the number of data packets provided to the security layer 38. The filter 42 exists as a set of algorithms which examines each data packet prior to the security layer 38 and makes a decision to send the data packet to the security layer 38 or discard the data packet.”). This is analogous to the pre-filter model and the detection model of the current application. The specification of the current application states ([Specification, para. 0236] “the set of features used in connection with the pre-filter model is similar to the set of features used in connection with the detection model. For example, the feature(s) for pre-filtering can comprise part of the filter of the features for the detection model. Accordingly, the features for the pre-filter model may be broader (e.g., detect a broader set of samples as being malicious or suspicious) than the features for the detection model.”). The pre-filter of Muthaiah relies on broader characteristic data than the signature authentication of the security layer. Therefore, Muthaiah teaches this newly amended limitation.
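As a hedged illustration of the relationship just described, where the pre-filter features are broader (more sensitive) versions of the detection features, a stricter detection threshold can be relaxed into a looser pre-filter threshold. All identifiers below are hypothetical and are not drawn from the references or the specification.

```python
# Hypothetical sketch: a detection-model feature converted into a
# "looser, more sensitive" pre-filter version of the same feature.

def detection_flag(score: float) -> bool:
    # Stricter detection-model rule: flag only high-confidence samples.
    return score >= 0.9

def prefilter_flag(score: float) -> bool:
    # Looser pre-filter version: lower threshold, so it flags a
    # broader set of samples as malicious or suspicious.
    return score >= 0.5

# The pre-filter passes a superset of what the detection rule flags.
scores = [0.2, 0.6, 0.95]
suspicious = [s for s in scores if prefilter_flag(s)]     # [0.6, 0.95]
malicious = [s for s in suspicious if detection_flag(s)]  # [0.95]
```

On this hypothetical reading, every sample the detection rule flags is also flagged by the pre-filter, but not vice versa, matching the "broader set of samples" language of paragraph 0236.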
Furthermore, examiner is relying on Niemela to teach a detection model that performs malware detection, as can be seen in the anti-virus server of Niemela.
A similar rejection applies to independent claims 18-20.
Additional arguments are moot in view of new grounds of rejection necessitated by the claim amendments.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is:
“… the detection model is configured to” in claim 15
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
See specification paragraphs 0105, 0118, and 0161 for functional support.
See specification paragraphs 0141 and 0104 for hardware support.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 1, 18, 19, and 20 recite the limitation “corresponding looser, more sensitive versions”. It is unclear what is meant by the term “looser”. For the purpose of examination, examiner is interpreting this limitation as “second set of features into corresponding, more sensitive versions,” omitting the term “looser”. Appropriate correction is required.
Claims 2-17 and 21-25 depend directly or indirectly on claim 1 and therefore inherit the rejection.
Claim 20 recites the limitation “pre-filtering the network traffic”. There is insufficient antecedent basis for this limitation in the claim. Examiner is interpreting this limitation as “pre-filtering network traffic”. Appropriate correction is required.
Claim 24 recites the limitation “the pre-filter machine-learning model”. However, independent claim 1 recites “a pre-filter model”. Examiner is interpreting this limitation as “the pre-filter model”. Appropriate correction is required.
Claim 24 recites the limitation “the trained detection machine-learning model”. There is insufficient antecedent basis for this limitation in the claim. Examiner is interpreting this limitation as “the detection model”. Furthermore, there is no antecedent basis for “re-training” the detection model, because independent claim 1 does not recite training the detection model. For the purpose of examination, examiner is interpreting “re-training” as “training”. Appropriate correction is required.
Claim 25 recites the limitations “the detection machine-learning model” and “the pre-filter machine-learning model”. There is insufficient antecedent basis for these limitations in the claim. Examiner is interpreting these limitations as “the detection model” and “the pre-filter model”. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 6-8, 11-19, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Muthaiah et al. (US 8314718 B2, hereinafter Muthaiah), in view of Niemela (US 9965630 B2, hereinafter Niemela), and further in view of MA (US-20200320192-A1, hereinafter MA); the combination is hereinafter referred to as Muthaiah-Niemela-MA.
As for claim 1, Muthaiah discloses, “A system, comprising: one or more processors (col 10 lines 5-8, “The embodiments of the invention as described herein can be implemented as software, thereby not requiring the additional cost that would be otherwise required for stand alone processors.”) configured to: obtain network traffic; (Col 3 lines 60-63, “For a receiver of a respective host vehicle, the receiver has to authenticate a large number of data packets,” A receiver obtains network traffic, which consists of a large number of packets)...”
Muthaiah further discloses, “pre-filter the network traffic based (Fig. 4 elements 51-54, describes pre-filtering, vide infra Drawing 1: Fig. 4 from Muthaiah) at least in part on a first set of features (Col 5 lines 35-37, “comparing characteristic data of the received data packet to the predetermined parameter set”) for traffic reduction (Col 5 lines 55-56, “the data packet is discarded,” reduces traffic); and…”
Muthaiah further discloses, “… wherein the first set of features used in connection with pre-filtering the network traffic is determined based at least in part on converting at least a subset of the second set of features into corresponding looser, more sensitive versions of the subset of the second set of features; ([Muthaiah, Col. 5 lines 8-22] “The filter 42 uses characteristic data either provided directly in the data packet or is derived from the data of the data packet. The characteristic data is compared to predetermined parameters as determined by the host vehicle. … Examples of data collected by the vehicle interface device that is used for determining the predetermined parameters may include, but is not limited to, GPS data, speed, velocity, acceleration, and steering angle data, which assist in determining a position or trajectory of the host vehicle relative to the remote vehicles. It should be understood that the filter 42 functions as a pre-security processing routine to filter and discard unwanted V2V communication messages. ”) ([Muthaiah, col. 5 lines 33-56] “In step 51, contents of the data packet are examined prior to checking the digital signature of the data packet. In step 52, a filtering decision is made by comparing characteristic data of the received data packet to the predetermined parameter set by the host vehicle. … Characteristic data may be obtained directly from the data packet without additional processing or may be derived using the data and other contents within the data packet. The predetermined parameter may include, but is not limited to, a comparable parameter of the host vehicle for determining positional data or attitude data of the host vehicle or may include a parameter relating to spurious data of the transmitted message such as malicious node tampering. 
… In step 53, if the information within the data packets is in compliance with the predetermined parameter, then the data packet is transferred to the security layer for authentication of the data packet. In step 54, if the information within the data packet does is not in compliance with the predetermined parameter, then the data packet is discarded.”) ([Muthaiah, col. 6 lines 3-9] “Characteristic data contained in the data packet that may be used in determining whether the data packet is beneficial or not beneficial to the host vehicle includes, but is not limited to, global positioning data … signatures, signal quality”) ([Muthaiah, col. 9 lines 56-59] “In step 68, the data packet is transferred to the security layer to authenticate the digital signature of the respective data packet.”)
Drawing 1: Fig. 4 from Muthaiah
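The filtering decision of Fig. 4 quoted above (steps 51-54: examine the packet contents, compare characteristic data to the predetermined parameter, then forward to the security layer or discard) can be sketched as follows. This is an illustrative sketch only; all identifiers, and the (center, tolerance) parameter form, are hypothetical and are not code or data from Muthaiah.

```python
# Hypothetical sketch of Muthaiah's Fig. 4 filtering decision
# (steps 51-54): characteristic data of a received packet is compared
# to predetermined parameters set by the host vehicle, before any
# digital-signature check at the security layer.

def filter_packet(characteristic: dict, predetermined: dict) -> str:
    # Step 52: compare each characteristic datum to the host vehicle's
    # predetermined parameter, modeled here as a (center, tolerance) pair.
    compliant = all(
        abs(characteristic[name] - center) <= tolerance
        for name, (center, tolerance) in predetermined.items()
    )
    # Step 53: compliant packets go to the security layer for signature
    # authentication; step 54: non-compliant packets are discarded.
    return "security_layer" if compliant else "discard"

params = {"relative_position_m": (0.0, 300.0)}  # hypothetical parameter
nearby = filter_packet({"relative_position_m": 120.0}, params)
faraway = filter_packet({"relative_position_m": 900.0}, params)
```

Under these hypothetical parameters, the nearby packet would reach the security layer and the faraway packet would be discarded before any signature check.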
Muthaiah does not explicitly disclose, but Niemela teaches “use a detection model (Col 4 lines 43-46, detection model = “a computer program, comprising computer readable code which, when run on a server, causes the server to behave as an anti-virus server as described above in the third aspect of the invention,” a virus is malware and packets, which contain a virus, constitute malicious traffic) in connection with determining whether the filtered network traffic comprises malicious traffic, the detection model being based at least in part on a second set of features for malware detection (col 1 line 36-43, signatures = features); … and a memory coupled to the one or more processors and configured to provide the one or more processors with instructions (col 6 lines 59-62, “The client device 1 may also be provided with a computer readable medium in the form of a memory 7 on which a computer program 8 in the form of computer readable code is stored.”).”
Muthaiah and Niemela represent analogous art because each is directed either to anti-virus scanning of files stored in a file system or to scanning of network traffic for malicious packets. One of ordinary skill in the art would have had a reasonable expectation of success in modifying the teachings of Muthaiah with the specified features of Niemela because they are from the same field of endeavor. {Examiner’s note: file access within a network file system generates network traffic.}
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the disclosure of Muthaiah (US 8314718 B2) with the teaching of Niemela (US 9965630 B2), with the motivation to detect malware that is contained within malicious traffic or within a file system, because the file-system malware might have been received within network traffic. This motivation applies to the rest of the claims in the list of claims in ¶ 9.
However, Muthaiah-Niemela fail to teach “querying a pre-filter model based at least in part on a first set of features for traffic reduction, wherein the pre-filter model is a machine learning model;”.
In analogous teaching MA teaches “querying a pre-filter model based at least in part on a first set of features for traffic reduction, wherein the pre-filter model is a machine learning model;” ([MA, para. 0045] “Machine learning can determine a verdict in advance before a file is sent to the sandbox. If a file is predicted as benign, it does not need to be sent to the sandbox. Otherwise, it is sent to the sandbox for further analysis/processing. Advantageously, utilizing machine learning to pre-filter a file significantly improves user experience by reducing the overall quarantine time as well as reducing workload in the sandbox”) ([MA, para. 0050] “FIG. 3 is a diagram of a trained machine learning model 300. The machine learning model 300 includes one or more features 310 and multiple trees 320a, 320n. A feature is an individual measurable property or characteristic of a phenomenon being observed.”) ([MA, para. 0073] “The machine learning 702 front ends all the decisions, namely, quarantine if the machine learning 702 determines the file 602 is malicious (step 730-1), allow and scan if the machine learning 702 determines the file 602 is not malicious (step 730-2), and allow without a scan if the policy 704 dictates for the file 602 and if the machine learning 702 determines the file is benign (step 730-3).”)
Thus, given the teaching of MA, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of pre-filtering using machine learning by MA into the teaching of a system determining a first set of features for training a pre-filtering model to detect a malicious or suspicious sample by Muthaiah-Niemela. One of ordinary skill in the art would have been motivated to do so because MA recognizes the benefits of using machine learning to pre-filter ([MA, para. 0045] “Advantageously, utilizing machine learning to pre-filter a file significantly improves user experience by reducing the overall quarantine time as well as reducing workload in the sandbox”).
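The decision flow MA describes in paragraph 0073, with the machine-learning verdict front-ending the quarantine and allow decisions, can be sketched as follows. This is one illustrative reading of the cited passage; the function and parameter names are hypothetical, not code from MA.

```python
# Hypothetical sketch of MA's para. 0073 flow: the machine-learning
# verdict front ends all routing decisions for a file.

def route_file(ml_verdict: str, policy_allows_skip: bool) -> str:
    if ml_verdict == "malicious":
        return "quarantine"          # step 730-1: hold for the sandbox
    if ml_verdict == "benign" and policy_allows_skip:
        return "allow without scan"  # step 730-3: policy permits skipping
    return "allow and scan"          # step 730-2: allow, scan in sandbox
```

For example, on this reading `route_file("malicious", False)` would return `"quarantine"`, while a benign verdict is allowed through, with or without a sandbox scan depending on policy.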
As for claim 6, most limitations of this claim have been noted in the rejection of claim 1.
Muthaiah further discloses “wherein pre-filtering the network traffic comprises using a pre-filter model that is based at least in part on the first set of features (Fig. 5 elements 62-65, a direction [element 63] or an altitude = a level [element 65] on a multilevel bridge or interchange is a feature that might lead a packet to be filtered out [element 64], vide infra Drawing 2: Fig. 5 from Muthaiah).”
Examiner applied the same motivation to combine as ¶¶ 14-15 describe.
As for claim 7, most limitations of this claim have been noted in the rejection of claim 6.
Neither Muthaiah nor Niemela discloses, but MA teaches “wherein the pre-filter model is a machine learning model ([MA, para. 0045] “Machine learning can determine a verdict in advance before a file is sent to the sandbox. If a file is predicted as benign, it does not need to be sent to the sandbox. Otherwise, it is sent to the sandbox for further analysis/processing. Advantageously, utilizing machine learning to pre-filter a file significantly improves user experience by reducing the overall quarantine time as well as reducing workload in the sandbox”).”
The same motivation to modify MUTHAIA H-NIEMELA with MA as in the rejection of claim 1 applies.
Drawing 2: Fig. 5 from Muthaiah
As for claim 8, most limitations of this claim have been noted in the rejection of claim 1.
Muthaiah does not disclose, but Niemela teaches “wherein the one or more processors are further configured to query the detection model (col 7 lines 8-10, “the client device 9 [which does pre-filtering] will only detect this change when they next query the remote anti-virus server [which hosts the malware detection model]”).”
Examiner applied the same motivation to combine as ¶¶ 14-15 describe.
As for claim 11, most limitations of this claim have been noted in the rejection of claim 1.
Muthaiah discloses “wherein determining whether the filtered network traffic comprises malicious traffic is performed at a security entity (Fig. 4 element 53, Layer is a synonym for entity, vide supra Drawing 1: Fig. 4 from Muthaiah).”
Examiner applied the same motivation to combine as ¶¶ 14-15 describe.
As for claim 12, most limitations of this claim have been noted in the rejection of claim 1.
Muthaiah does not disclose, but Niemela teaches “wherein determining whether the filtered network traffic comprises malicious traffic is performed at a cloud-based security service (col 8 line 8, “an online anti-virus scan”).”
Examiner applied the same motivation to combine as ¶¶ 14-15 describe.
As for claim 13, most limitations of this claim have been noted in the rejection of claim 1.
Muthaiah discloses “wherein pre-filtering the network traffic based at least in part on the first set of features (col 5 line 10-14, “characteristic data…predetermined parameters”) is performed at a security entity (Fig. 4 element 53, Layer is a synonym for entity, Muthaiah filters benign traffic from possibly malicious traffic, vide supra Drawing 1: Fig. 4 from Muthaiah).”
Examiner applied the same motivation to combine as ¶¶ 14-15 describe.
As for claim 14, most limitations of this claim have been noted in the rejection of claim 13.
Muthaiah does not disclose, but Niemela teaches “wherein determining whether the filtered network traffic comprises malicious traffic is performed at a cloud-based security service (col 8 line 8, “an online anti-virus scan”).”
Examiner applied the same motivation to combine as ¶¶ 14-15 describe.
As for claim 15, most limitations of this claim have been noted in the rejection of claim 1.
Muthaiah does not disclose, but Niemela teaches “pre-filtering the network traffic comprises detecting (col 8 lines 23-24, “the intermediate scanning results database 19 is accessible by the server 14,” Results were detected and put into a database) one or more malicious or suspicious samples (col 8 lines 14-15, “A further problem is that of obtaining samples of suspicious files for further analysis from clients…”); and the one or more malicious or suspicious samples are forwarded to the detection model (col 8 lines 45-49, “The Server 14 uses data obtained from the intermediate scanning database 19 to determine which files should be scanned using a particular detection signature, and on which client device. Only those signatures that may be relevant to the client device 9 are sent to that client…,” The Server runs the detection model).”
Examiner applied the same motivation to combine as ¶¶ 14-15 describe.
As for claim 16, most limitations of this claim have been noted in the rejection of claim 15.
Muthaiah does not disclose, but Niemela teaches “wherein the detection model is configured to determine whether a suspicious sample is malicious (col 8 lines 48-51, “Only those signatures that may be relevant to the client device 9 are sent to that client so that each client will be sent only signatures that are, or are likely to be, relevant to that client,” On the basis of client’s initial scan, the server performs its further detection by sending signatures to the client for further scanning. Server detection of malware can rely on remote [client] execution).”
Examiner applied the same motivation to combine as ¶¶ 14-15 describe.
As for claim 17, most limitations of this claim have been noted in the rejection of claim 1.
Muthaiah does not disclose, but Niemela teaches “wherein the first set of features is distinct from the second set of features (Col 8 lines 54-57, “When new malware is identified, signature data for that malware can be either actively pushed to the client device 9 when it is available, or sent to the client device 9 when the client device 9 next connects to the server 14.”).” {Examiner’s note: the intermediate scans enter data into a database that is accessible to the Server. In response, the Server sends new signature data to determine whether a suspicious file is malware. Initial and subsequent signature data can be or are distinct.}
Examiner applied the same motivation to combine as ¶¶ 14-15 describe.
As for claim 18, claim 18 is the method claim corresponding to claim 1 and is rejected for the same reasons and with the same motivation as claim 1.
As for claim 19, claim 19 is the non-transitory computer readable medium claim corresponding to claim 1 and is rejected for the same reasons and with the same motivation as claim 1.
As for claim 21, most limitations of this claim have been noted in the rejection of claim 1. Muthaiah further discloses “wherein determining the first set of features comprises converting the second set of features to obtain the first set of features.” ([Muthaiah, col. 5 lines 8-21] “The filter 42 uses characteristic data either provided directly in the data packet or is derived from the data of the data packet. The characteristic data is compared to predetermined parameters as determined by the host vehicle … Examples of data collected by the vehicle interface device that is used for determining the predetermined parameters may include, but is not limited to, GPS data, speed, velocity, acceleration, and steering angle data, which assist in determining a position or trajectory of the host vehicle relative to the remote vehicles. It should be understood that the filter 42 functions as a pre-security processing routine to filter and discard unwanted V2V communication messages.”) [Examiner’s note: the predetermined parameters are the second set of features, which are known and desired. The unwanted V2V messages fall outside of the predetermined parameters and are therefore determined from the second set of features.]
As for claim 22, most limitations of this claim have been noted in the rejection of claim 1. MA further teaches “wherein every network traffic sample classified by the detection model is necessarily classified as malicious or suspicious by the pre-filter model.” ([MA, para. 0073] “The machine learning 702 front ends all the decisions, namely, quarantine if the machine learning 702 determines the file 602 is malicious (step 730-1), allow and scan if the machine learning 702 determines the file 602 is not malicious (step 730-2), and allow without a scan if the policy 704 dictates for the file 602 and if the machine learning 702 determines the file is benign (step 730-3).”) ([MA, para. 0076] “The smart quarantine processes 700A reduces risk relative to the conventional quarantine process 600 by utilizing the machine learning 702 to augment and improve the allow and scan step. Allow and scan is required for some files as the users 102 simply do not want every file 602 held for the sandbox. Thus, allow and scan poses some risk. The machine learning 702 can reduce this risk such that some of the files 602 that would be allowed and scanned are now held based on the determination of the machine learning 702.”).
The same motivation to modify Muthaiah-Niemela with MA as in the rejection of claim 1 applies.
As for claim 23, most limitations of this claim have been noted in the rejection of claim 1. Niemela further teaches “wherein the pre-filter model executes on a security entity disposed on a network edge and the detection model executes in a cloud-based analysis service remote from the security entity.” ([Niemela, col. 5 lines 1-4] “The client device 1 has access to a file system 2 at which a plurality of files to be scanned is stored. An anti-virus function 3 is provided, which can interact with the file system 2”) ([Niemela, col. 5 lines 6-9] “the anti-virus function 3 scans a file in the file system 2, it performs several intermediate scans, each of which looks at one aspect of a file. Signature information for each virus contains “pre-filter” information that can be used to quickly rule out a file”) ([Niemela, col. 5 lines 12-19] “An intermediate scan is used to calculate or otherwise obtain intermediate data from the file for comparison with pre-filter information stored at the anti-virus database 4. A complete scan is available once sufficient intermediate scans have been performed. An intermediate scan may be used to obtain information that may include the results of an online check”) ([Niemela, col. 7 lines 41-44] “Online anti-virus scanning is based on running an anti-virus scan at least in part remotely. The client device contacts an anti-virus server to perform the on-line anti-virus scan if files stored in the file system of the client device.”).
The same motivation to combine Muthaiah with Niemela as in the rejection of claim 1 applies.
Claims 2-4 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Muthaiah-Niemela-MA in view of Dennison et al. 9043894 B1, 2014 (henceforth Dennison).
As for claim 2, most limitations of this claim have been noted in the rejection of claim 1.
Neither Muthaiah nor Niemela nor MA discloses, but Dennison teaches “wherein the detection model is a machine learning model (FIG. 10B element 1024, the server executes the detection model by means of machine learning, vide supra).”
Muthaiah, Niemela, MA, and Dennison represent analogous art because they all are directed either to anti-virus scanning of files stored in a file system or to scanning of network traffic for malicious packets. One of ordinary skill in the art would have had a reasonable expectation of success to modify the teachings of Muthaiah with the specified features of Niemela and then with the specified features of Dennison because they are all from the same field of endeavor. {Examiner’s note: file access within a network file system generates network traffic.}
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references Muthaiah, Niemela, MA, and Dennison, to modify the disclosure of 8314718 B2 of Muthaiah in the aforementioned combination with the teaching of 9965630 B2 of Niemela, with the teaching of MA, and with the teaching of 9043894 B1 of Dennison out of the motivation to use machine learning adaptively (1) to detect malware that is contained within malicious traffic and (2) to monitor the results of machine-learning detection. This motivation applies to the rest of the claims in the list of claims in ¶ 56.
As for claim 3, most limitations of this claim have been noted in the rejection of claim 2.
Neither Muthaiah nor Niemela nor MA discloses, but Dennison teaches “wherein the machine learning model is trained using a set of one or more feature vectors (FIG. 10B element 1022, a vector contains a set of features, vide supra).”
Examiner applied the same motivation to combine as ¶¶ 59-60 describe.
As for claim 4, most limitations of this claim have been noted in the rejection of claim 3.
Neither Muthaiah nor Niemela nor MA discloses, but Dennison teaches “wherein the machine learning model is a tree-based model (col 5 lines 16-20, “The score can be based on a Support Vector Machine model, a Neural Network model, a Decision Tree model, a Naive Bayes model, or a Logistic Regression model.”).”
Examiner applied the same motivation to combine as ¶¶ 59-60 describe.
As for claim 10, most limitations of this claim have been noted in the rejection of claim 1.
Neither Muthaiah nor Niemela nor MA discloses, but Dennison teaches “wherein the one or more processors are further configured to: in response to determining that the filtered network traffic comprises malicious traffic (col 7 line 58, “malicious”), provide an indication (col 7 line 54, “indicates”) that the filtered network traffic comprises malicious traffic.” {Examiner’s note: Dennison provides a user interface so that a user can supervise the results of analysis by the pre-filtering and detection processes.}
Examiner applied the same motivation to combine as ¶¶ 59-60 describe.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Muthaiah-Niemela-MA in view of Dennison, and further in view of Chen et al., XGBoost: A Scalable Tree Boosting System, ACM, August 13-17, 2016, pp. 785-794 (henceforth Chen).
As for claim 5, most limitations of this claim have been noted in the rejection of claim 4.
Neither Muthaiah nor Niemela nor MA nor Dennison discloses, but Chen teaches “wherein the tree-based model is trained using an XGBoost machine learning process (Introduction p 785 ¶ 3, “In this paper, we describe XGBoost, a scalable machine learning system for tree boosting.”).”
Muthaiah, Niemela, MA, and Dennison represent analogous art because they all are directed either to anti-virus scanning of files stored in a file system or to scanning of network traffic for malicious packets, while Chen discloses a tool that is frequently used in machine learning systems, including machine learning systems used for malware detection. One of ordinary skill in the art would have had a reasonable expectation of success to modify the teachings of Muthaiah with the specified features of Niemela and then with the specified features of Dennison because they are all from the same field of endeavor. {Examiner’s note: file access within a network file system generates network traffic.} A further combination with Chen is also likely to succeed because Chen’s teachings are useful and successful in practically any machine learning system.
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Muthaiah, Niemela, MA, Dennison, and Chen, to modify the disclosure of 8314718 B2 of Muthaiah in the aforementioned combination with the teachings of 9965630 B2 of Niemela, of MA, and of 9043894 B1 of Dennison with the teaching of XGBoost: A Scalable Tree Boosting System of Chen, out of the motivation of adaptive detection of malware and because XGBoost is a high-quality, freely available machine learning system, making the combination obvious to try.
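{Examiner’s note: as general technical background only, the tree-boosting process Chen describes can be illustrated by the following simplified sketch, which fits one-split regression "stumps" to the residual errors of the ensemble so far; real XGBoost additionally uses regularization, second-order gradients, and the systems optimizations described in the paper. All data values here are hypothetical.}

```python
# Toy illustration of tree boosting: each round fits a one-split "stump"
# to the residuals of the current ensemble, then adds it with a learning rate.

def fit_stump(xs, residuals):
    # Try each midpoint split; keep the one minimizing squared error.
    best = None
    for i in range(len(xs) - 1):
        t = (xs[i] + xs[i + 1]) / 2
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if x <= t else rm)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=5, lr=0.5):
    stumps, preds = [], [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Fit a step function on sorted inputs: labels jump from 0 to 1 at x > 1.
model = boost([0, 1, 2, 3], [0.0, 0.0, 1.0, 1.0])
print(round(model(3), 2))
```

Each added stump shrinks the remaining residual, so the ensemble prediction approaches the target labels as rounds accumulate.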
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Muthaiah-Niemela-MA in view of Xaypanya et al. 9332028 B2, 2013 (henceforth Xaypanya).
As for claim 9, most limitations of this claim have been noted in the rejection of claim 1.
Neither Muthaiah nor Niemela nor MA discloses, but Xaypanya teaches “wherein the one or more processors are further configured to: in response to determining that the filtered network traffic comprises malicious traffic, update a blacklist of files (col 5 line 39, “updating blacklists”) that are deemed to be malicious (FIG. 3 element 308, “malware,” malware is malicious; vide infra Drawing 3: FIG. 3 from Xaypanya), the blacklist of files being updated to include one or more identifiers (FIG. 3 element 308, “signatures,” vide infra Drawing 3: FIG. 3 from Xaypanya) corresponding to network traffic determined to be malicious.”
Muthaiah, Niemela, MA, and Xaypanya represent analogous art because they all are directed either to anti-virus scanning of files stored in a file system or to scanning of network traffic for malicious packets. {Examiner’s note: file access within a network file system generates network traffic.} One of ordinary skill in the art would have had a reasonable expectation of success to modify the teachings of Muthaiah with the specified features of Niemela and then with the specified features of Xaypanya because they are all from the same field of endeavor.
Therefore, given the teachings as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the references of Muthaiah, Niemela, and Xaypanya, to modify the disclosure of 8314718 B2 of Muthaiah in the aforementioned combination with the teaching of 9965630 B2 of Niemela with the teaching of 9332028 B2 of Xaypanya because of the motivation to snapshot and track malicious traffic and files in the network.
Drawing 3: FIG. 3 from Xaypanya
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Dennison et al. 9043894 B1, 2014 (henceforth Dennison) in view of Muthaiah et al. 8314718 B2, 2009, and further in view of MA (US-20200320192-A1).
As for claim 20, Dennison discloses: A system, comprising:
one or more processors configured to: determine a first set of features (FIG. 10A element 1002, “assemble training corpus comprising vectors,” vectors list features, vide infra1) for training a pre-filtering model to detect (col 22 lines 20-21, “yield a method for classifying URLs as benign or malicious,” a URL is an element of network traffic) a malicious or suspicious sample (col 21 line 53-col 22 line 27, “batch”);
train the pre-filtering model (FIG. 10A element 1004, “apply a machine learning method,” vide infra) based at least in part on a first training set and the first set of features;
determine a second set of features for training a detection model to detect malicious samples (FIG. 10B element 1022, “server receives vectors,” vide infra); and
train the detection model (FIG. 10B element 1024, vide infra) based at least in part on a second training set and the second set of features (col 22 lines 51-57, “training”); … cause the pre-filtering model to pre-filter network traffic …; and cause the detection model to detect malicious traffic based on a filtered network traffic output by the pre-filtering model ([Dennison, col. 7 lines 38-44] “each pre-filter can filter the data items of the outbound data connection log 102 and pass a subset of data items to the scoring processor. Nevertheless, it should be understood that pre-filters can also be executed in series. For example, a first pre-filter can filter the data items of the outbound data connection log 102,”) ([Dennison, col. 7 lines 64-67] “cause the computing system of FIG. 12 to run one or more post-filters 108A, 108B on one or more of the scored data items returned from the scoring processor 106. The post-filters can identify a subset of data items from the scored data items as likely malicious URLs.”);
and a memory coupled to the one or more processors and configured to provide the one or more processors with instructions (col 26 lines 19-21, “processors programmed to perform the techniques pursuant to program instructions in firmware, memory…”).
However, Dennison does not teach “wherein the first set of features used in connection with pre-filtering the network traffic is determined based at least in part on converting at least a subset of the second set of features into corresponding looser, more sensitive versions of the subset of the second set of features;”
In analogous teaching, Muthaiah teaches “wherein the first set of features used in connection with pre-filtering the network traffic is determined based at least in part on converting at least a subset of the second set of features into corresponding looser, more sensitive versions of the subset of the second set of features;” ([Muthaiah, Col. 5 lines 8-22] “The filter 42 uses characteristic data either provided directly in the data packet or is derived from the data of the data packet. The characteristic data is compared to predetermined parameters as determined by the host vehicle. … Examples of data collected by the vehicle interface device that is used for determining the predetermined parameters may include, but is not limited to, GPS data, speed, velocity, acceleration, and steering angle data, which assist in determining a position or trajectory of the host vehicle relative to the remote vehicles. It should be understood that the filter 42 functions as a pre-security processing routine to filter and discard unwanted V2V communication messages.”) ([Muthaiah, col. 5 lines 33-56] “In step 51, contents of the data packet are examined prior to checking the digital signature of the data packet. In step 52, a filtering decision is made by comparing characteristic data of the received data packet to the predetermined parameter set by the host vehicle. … Characteristic data may be obtained directly from the data packet without additional processing or may be derived using the data and other contents within the data packet. The predetermined parameter may include, but is not limited to, a comparable parameter of the host vehicle for determining positional data or attitude data of the host vehicle or may include a parameter relating to spurious data of the transmitted message such as malicious node tampering. 
… In step 53, if the information within the data packets is in compliance with the predetermined parameter, then the data packet is transferred to the security layer for authentication of the data packet. In step 54, if the information within the data packet does is not in compliance with the predetermined parameter, then the data packet is discarded.”) ([Muthaiah, col. 6 lines 3-9] “Characteristic data contained in the data packet that may be used in determining whether the data packet is beneficial or not beneficial to the host vehicle includes, but is not limited to, global positioning data … signatures, signal quality”) ([Muthaiah, col. 9 lines 56-59] “In step 68, the data packet is transferred to the security layer to authenticate the digital signature of the respective data packet.”)
Thus, given the teaching of Muthaiah, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Muthaiah's teaching of deriving one set of filtering features from another into Dennison's teaching of a system that determines a first set of features for training a pre-filtering model to detect a malicious or suspicious sample. One of ordinary skill in the art would have been motivated to do so because Muthaiah recognizes the need to efficiently improve security while reducing the computational power required. ([Muthaiah, col. 1 lines 20-23] “A substantial cost is involved in incorporating security protection regards to V2V applications. The cost incurred is that of the computational power required to process security”) ([Muthaiah, col. 1 lines 60-66] “An advantage of an embodiment of the invention provides for a reduced number of data packets that are provided to a security layer in response to filtering data packets to remove any that are from a vehicle determined not to be within the same road of travel as the host vehicle or from vehicles where malicious node tampering is has been detected.”)
However, Dennison-Muthaiah does not teach “wherein the pre-filter model is a machine learning model;”.
In analogous teaching, MA teaches “wherein the pre-filter model is a machine learning model;” ([MA, para. 0045] “Machine learning can determine a verdict in advance before a file is sent to the sandbox. If a file is predicted as benign, it does not need to be sent to the sandbox. Otherwise, it is sent to the sandbox for further analysis/processing. Advantageously, utilizing machine learning to pre-filter a file significantly improves user experience by reducing the overall quarantine time as well as reducing workload in the sandbox”) ([MA, para. 0050] “FIG. 3 is a diagram of a trained machine learning model 300. The machine learning model 300 includes one or more features 310 and multiple trees 320 a, 320 n. A feature is an individual measurable property or characteristic of a phenomenon being observed.”) ([MA, para. 0073] “The machine learning 702 front ends all the decisions, namely, quarantine if the machine learning 702 determines the file 602 is malicious (step 730-1), allow and scan if the machine learning 702 determines the file 602 is not malicious (step 730-2), and allow without a scan if the policy 704 dictates for the file 602 and if the machine learning 702 determines the file is benign (step 730-3).”)
Thus, given the teaching of MA, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine MA's teaching of pre-filtering using machine learning into the teaching of Dennison-Muthaiah of a system that determines a first set of features for training a pre-filtering model to detect a malicious or suspicious sample. One of ordinary skill in the art would have been motivated to do so because MA recognizes the benefits of using machine learning to pre-filter. ([MA, para. 0045] “Advantageously, utilizing machine learning to pre-filter a file significantly improves user experience by reducing the overall quarantine time as well as reducing workload in the sandbox”)
Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Muthaiah-Niemela-MA in view of Singh (US-20190294792-A1).
Regarding claim 24, Muthaiah-Niemela-MA teach all limitations of claim 1. However, Muthaiah-Niemela-MA does not teach “wherein the pre-filter machine-learning model is obtained by re-training the trained detection machine-learning model using the first set of features”.
In analogous teaching Singh teaches “wherein the pre-filter machine-learning model is obtained by re-training the trained detection machine-learning model using the first set of features”. ([Singh, para. 0024] “the disclosed technology involves training deep model 118 as well as shallow model 116 or in conjunction with shallow model 116. In some aspects, rather than training the shallow model 116 with only ground truth training data, training is performed on outputs of the deep model 118, which helps the shallow model 116 learn appropriate data representations. Deep model 118, for example, can be trained with a high number of parameters that can extract the best possible relevant information from data (e.g., malware, network traffic, etc.) with high accuracy. Once deep model 118 is trained and tested for correctness, the shallow model 116 is trained and deployed with one or more shallow model parameters set by the trained deep model 118. To keep the size of the shallow model 116 minimal, a fixed parameter budget of the shallow model 116 can be set.”) ([Singh, para. 0025] “To mitigate the potential for accuracy loss, the training paradigm of shallow model 116 can act as a good filter, rather than as an explicit classifier. For example, the training paradigm can enable shallow model 116 to make a simplified, binary determination of whether an instruction set is potentially malicious or not (such as whether the instruction set would be relevant to the slower, but more accurate, deep model 118)”) ([Singh, para. 0023] “Shallow model 116 can be further refined at endpoint 114 by modifying, based on threshold values of one or more deep models parameters (optimized for malicious instruction detection), one or more corresponding parameters in shallow model 116. For example, deep model 118 can send modified parameters to shallow model 116 for adoption on the next instruction set(s).”)
Thus, given the teaching of Singh, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Singh's teaching of obtaining a pre-filtering model into the teaching of Muthaiah-Niemela-MA of a system that determines a first set of features for training a pre-filtering model to detect a malicious or suspicious sample. One of ordinary skill in the art would have been motivated to do so because Singh recognizes the need to efficiently detect malware. ([Singh, para. 0012] “Aspects of the disclosed technology address the need for providing fast, lightweight malware detection models that can be deployed on an endpoint while not sacrificing the accuracy of the malware detection.”)
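{Examiner’s note: as general technical background only, Singh's paradigm of training a small "shallow" pre-filter model on the outputs of a larger trained "deep" model, rather than on ground truth alone, can be sketched as follows; the models, weights, and samples are invented for illustration and are not Singh's implementation.}

```python
# Toy sketch of distillation-style training: a shallow one-threshold model
# is fitted to mimic the labels produced by a larger (stand-in) deep model.

def deep_model(features):
    # Stand-in for an accurate but expensive model (hypothetical weights).
    return 1 if 0.6 * features[0] + 0.4 * features[1] > 0.5 else 0

def train_shallow(samples):
    # Fit a single threshold on feature 0 to mimic the deep model's labels.
    labeled = [(f, deep_model(f)) for f in samples]
    best_t, best_acc = 0.0, -1
    for t in [i / 10 for i in range(11)]:
        acc = sum((1 if f[0] > t else 0) == y for f, y in labeled)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

samples = [(0.1, 0.1), (0.9, 0.9), (0.8, 0.2), (0.2, 0.8)]
threshold = train_shallow(samples)
print(threshold)  # 0.2: the shallow model reproduces the deep labels here
```

The shallow model is cheap to evaluate and acts as a filter; samples it flags can then be passed to the expensive deep model, mirroring the pre-filter/detector split at issue.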
Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Muthaiah-Niemela-MA in view of Schmid (US-20180012139-A1).
Regarding claim 25, Muthaiah-Niemela-MA teach all limitations of claim 1. Furthermore, Muthaiah-Niemela-MA teach a “detection machine-learning model” and a “pre-filter machine-learning model,” as can be seen in the rejection of claim 1. However, Muthaiah-Niemela-MA does not teach “wherein the converting comprises automatically converting one or more regular expressions used by the … machine-learning model into corresponding broader regular expressions for use by the … machine-learning model.”
In analogous teaching Schmid teaches “wherein the converting comprises automatically converting one or more regular expressions used by the … machine-learning model into corresponding broader regular expressions for use by the … machine-learning model.” ([Schmid, para. 0034] “The pattern search module 104 can allow regular expressions to be adjusted appropriately in order to obtain desired results. Often there can be a tradeoff between accuracy and coverage for a regular expression. If a regular expression associated with an intent is broader, it can cover or have many matching messages, but may also identify messages that are not highly related to the intent. On the other hand, if a regular expression is narrower, it may identify messages that are highly related to the intent, but may not include all messages that may match the intent. Regular expressions can be edited or modified to achieve the desired balance between accuracy and coverage. For example, if a particular intent classification associated with a message does not seem to accurately reflect the user intent, regular expressions may be changed to achieve higher precision and reduce noise. In some embodiments, adjustments to regular expressions performed by the pattern search module 104 can be based on manual or machine learning techniques.”).
Thus, given the teaching of Schmid, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Schmid's teaching of adjusting regular expressions into the teaching of Muthaiah-Niemela-MA of a system that determines a first set of features for training a pre-filtering model to detect a malicious or suspicious sample. One of ordinary skill in the art would have been motivated to do so because Schmid recognizes the need to better analyze messages. ([Schmid, para. 0025] “An improved approach rooted in computer technology can overcome the foregoing and other disadvantages associated with conventional approaches specifically arising in the realm of computer technology. Based on computer technology, the disclosed technology can classify messages based on potential user intent associated with the messages”) ([Schmid, para. 0034] “regular expressions may be changed to achieve higher precision and reduce noise”)
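{Examiner’s note: as general technical background only, the kind of automatic conversion recited in claim 25, loosening a strict regular expression used by the detection model into a broader, more sensitive version for a pre-filter, can be sketched as follows; the rewrite rules and patterns are invented for illustration and are not Schmid's.}

```python
import re

# Hypothetical mechanical loosening of a strict detection regex:
# exact digit runs become "any digits", and an exact host anchor is
# relaxed to allow arbitrary subdomains. Both rules are illustrative only.

def broaden(pattern):
    loose = re.sub(r"\d+", r"\\d+", pattern)              # any digits, not exact ones
    loose = loose.replace("^evil\\.", "^[\\w.]*evil\\.")  # allow subdomains
    return loose

strict = r"^evil\.example\.com/payload/407$"
loose = broaden(strict)

# The strict pattern misses a variant URL that the loose pattern catches.
print(bool(re.search(strict, "cdn.evil.example.com/payload/999")))  # False
print(bool(re.search(loose, "cdn.evil.example.com/payload/999")))   # True
```

The broader pattern trades precision for coverage, exactly the accuracy/coverage tradeoff Schmid describes, which is why it suits a sensitive pre-filter rather than the final detector.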
Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
MENDELOWITZ (US-20220400125-A1): This prior art teaches systems and methods for detecting potential malicious attacks in a vehicle operational environment using staged Machine Learning (ML), comprising creating a plurality of feature vectors each comprising a plurality of features extracted from vehicle operational data generated by a plurality of devices deployed in one or more vehicles which is indicative of operation of the one or more vehicles, detecting, in real-time, a plurality of anomaly feature vectors using one or more unsupervised ML models applied to the plurality of feature vectors, identifying, in real-time, one or more potential cyberattack events using one or more supervised ML models applied to the plurality of anomaly feature vectors, and generating an alert indicative of the one or more potential cyberattack events.
LI (US-20210182661-A1): This prior art teaches training and enhancement of neural network models, such as from private data. A slave device receives a version of a neural network model from a master. The slave accesses a local and/or private data source and uses the data to perform optimization of the neural network model. This can be done such as by computing gradients or performing knowledge distillation to locally train an enhanced second version of the model. The slave sends the gradients or enhanced neural network model to a master. The master may use the gradient or second version of the model to improve a master model.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFAQ ALI whose telephone number is (571)272-1571. The examiner can normally be reached Mon - Fri 7:30am - 5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALI SHAYANFAR can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.A./
02/20/2026
/AFAQ ALI/Examiner, Art Unit 2434 /NOURA ZOUBAIR/Primary Examiner, Art Unit 2434
1 Not every referenced figure is included in this Office Action. When a figure is included in this Office Action, the reference is preceded by vide infra or vide supra. The included figure is captioned "Drawing" because a caption of a figure in one reference may collide with a caption of a figure in another reference. The caption of the figure in the reference is included in the caption of this Office Action.