Prosecution Insights
Last updated: April 19, 2026
Application No. 18/338,259

MACHINE LEARNING TECHNIQUES FOR IDENTIFYING ANOMALOUS VULNERABILITY DATA

Status: Non-Final OA (§103)
Filed: Jun 20, 2023
Examiner: HABTEGEORGIS, MATTHIAS
Art Unit: 2491
Tech Center: 2400 — Computer Networks
Assignee: Rapid7 Inc.
OA Round: 3 (Non-Final)

Grant Probability: 75% (Favorable)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 75% (above average; 73 granted / 97 resolved; +17.3% vs TC avg)
Interview Lift: +21.3% (resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline)
Currently Pending: 36
Total Applications: 133 (across all art units)

Statute-Specific Performance

§101: 5.6% (-34.4% vs TC avg)
§103: 60.8% (+20.8% vs TC avg)
§102: 10.5% (-29.5% vs TC avg)
§112: 20.8% (-19.2% vs TC avg)

Tech Center average figures are estimates • Based on career data from 97 resolved cases

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/02/2026 has been entered.

Response to Arguments

Applicant's arguments, see Remarks, filed 01/02/2026, with respect to the rejection(s) of independent claims 1, 19 and 20 under 35 USC § 103 have been fully considered but are moot since the new ground of rejection is based on newly found prior art, Young, US 2022/0398311.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 13 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US-PGPUB No. 2018/0124094 A1 to Hamdi, US-PGPUB No. 2018/0191748 A1 to McGrew et al. (hereinafter "McGrew"), US-PGPUB No. 2020/0073740 A1 to Ohana et al. (hereinafter "Ohana"), and further in view of US-PGPUB No. 2022/0398311 A1 to Young et al.
(hereinafter "Young").

Regarding claim 1:

Hamdi discloses: A method (¶158: "… method 1100 can include the data collection engine 304 receiving vulnerability scanning data from a vulnerability scanner …") […] to identify anomalous vulnerability data among vulnerability data acquired (¶158-160: "The data collection engine 304 can send a request to the vulnerability scanner(s) 230 to scan the computing and network environment 210 or respective assets for vulnerabilities. … the data collection engine 304 can be configured to receive vulnerability scanning data from a plurality of vulnerability scanners, … can also receive, from one or more other data sources (e.g., databases 240, 250 or 260), data associated with published vulnerabilities, …") for configuring vulnerability detection of a computer network security system (see Fig. 2, CEMM System 220, ¶158: "The vulnerability data can include … configuration parameters, …") configured to monitor a computing environment (see Fig. 2, Computing and Network Environment 210), the method comprising: using at least one computer hardware processor to perform: obtaining vulnerability data (¶04: "The data collection engine can receive vulnerability data from a vulnerability scanner configured to scan the computer network for vulnerabilities.") comprising a plurality of values of a vulnerability parameter (¶84: "for some asset parameters (e.g., CPU usage, throughput, or bit rate) the back-end system can compute average values (or other statistical parameters) over a time period …", see also ¶89: "parameter values received from the asset … parameters received from … the vulnerability scanner(s) 230 …"), wherein the vulnerability parameter can be used to configure detection of at least one vulnerability in the computing environment by the computer network security system (¶128: "The CEMM system 220 can be configured to continuously, or regularly, monitor and assess the state(s) of operation of the computing and network environment 210 or respective assets. The monitoring and assessment of the assets can allow the CEMM system 220 to be aware of the states and situations of the computing and network environment 210 as they change over time. Such awareness, can allow the CEMM system 220 to detect abnormal or unusual operational behavior (e.g., access of the computing and network environment 210 by blocked IP addresses, continuously high resources' usage in one or more assets, misconfiguration, or the like) as its occurs, and preemptively identify its root cause and address it.");

However, Hamdi does not explicitly disclose the following limitation taught by McGrew: [...] [of] machine learning [to identify anomalous vulnerability data …] (McGrew, ¶20: "malicious behavior detection process 247 may employ machine learning and/or detection rules, to detect the presence of malicious behavior in the network (e.g., the presence of malware, the exploitation of a software vulnerability, etc.).")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of Hamdi to incorporate the functionality of the malicious behavior detection process to employ machine learning to detect the presence of malicious behavior in a network, as disclosed by McGrew; such modification would allow the system to detect and address vulnerabilities faster, reduce false positives, and adapt to evolving threats more effectively, ultimately strengthening the security posture of the system.

The combination of Hamdi and McGrew does not explicitly disclose the following limitations taught by Ohana: generating a plurality of datapoints representing the plurality of values of the vulnerability parameter (Ohana, ¶90: "The data-points are generated dynamically, in real-time or near real-time by the processing nodes and/or computing devices that monitor the processing nodes. The data-points are dynamic over time, representing a real time or near-real time state of operation of the respective processing nodes. Exemplary metrics include: CPU usage, memory usage, network utilization, and request count."); clustering the plurality of datapoints to obtain a plurality of vulnerability parameter clusters (Ohana, ¶04: "… clustering the plurality of data-points into a plurality of clusters, wherein each cluster comprises a respective sub-set of the plurality of data-points having a same metric of the plurality of metrics and a timestamp within a same metric anomaly time interval of a plurality of metric anomaly time intervals,"); identifying at least one outlier datapoint using the plurality of vulnerability parameter clusters, the at least one outlier datapoint indicating at least one anomalous value of the vulnerability parameter (Ohana, ¶46: "… a start time indicative of likelihood of onset of the system level anomalous event is identified, for example, based on a timestamp associated with a nearby cluster of metric anomaly scores and/or data-points. The metric anomaly scores (and/or the data-points) which are determined as outliers in comparison to the start time are removed."); identifying anomalous vulnerability data among the obtained vulnerability data using the at least one outlier datapoint indicating the at least one anomalous value of the vulnerability parameter (Ohana, ¶46: "The removal of outliers relative to the estimated start time further improves accuracy of detection of the anomalous event by removal of data-points less likely to be associated with the anomalous event."); outputting an indication of the anomalous vulnerability data (Ohana, ¶111-114: "At 112, an alert indicative of the system level anomalous event is generated for a certain system level anomalous time interval, … At 114, the alert is provided, for example, presented on a display of a client terminal, …"); filtering out the at least one anomalous value of the vulnerability parameter from the plurality of values of the vulnerability parameter to obtain a filtered set of values of the vulnerability parameter (Ohana, ¶33: "creating a filtered plurality of maximum metric anomaly scores by removing maximum metric anomaly scores below a threshold indicative of likelihood of non-anomalous event from the plurality of maximum metric anomaly scores,");

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi and McGrew to incorporate the functionality of the method to generate data-points of metric values of CPU usage, memory usage, network utilization, and request count, and to remove outliers from data-points less likely to be associated with an anomalous event, as disclosed by Ohana; such modification would enable the system to determine statistical measures and set thresholds to flag data points that significantly deviate from expected behavior.
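The Ohana-mapped limitations trace a concrete pipeline: generate datapoints from the parameter values, cluster them, treat datapoints that fall outside the dense clusters as outliers, and filter the corresponding anomalous values out. The stdlib-only sketch below illustrates that flow; the gap-based 1-D clustering, the `eps`/`min_size` thresholds, and the sample timeout readings are illustrative assumptions rather than anything taken from the cited references (claims 7-8 later specialize the clustering step to DBSCAN).

```python
# Illustrative sketch of the claimed flow: cluster parameter values,
# treat members of undersized clusters as outlier datapoints, and
# filter the anomalous values out of the parameter set.
# All names and thresholds here are hypothetical.

def cluster_1d(values, eps=1.0):
    """Group numeric values into clusters separated by gaps larger than eps."""
    clusters, current = [], []
    for v in sorted(values):
        if current and v - current[-1] > eps:
            clusters.append(current)
            current = []
        current.append(v)
    if current:
        clusters.append(current)
    return clusters

def filter_anomalous(values, eps=1.0, min_size=3):
    """Return (filtered_values, outliers): members of clusters smaller
    than min_size are flagged as anomalous and removed."""
    clusters = cluster_1d(values, eps)
    outliers = {v for c in clusters for v in c if len(c) < min_size}
    filtered = [v for v in values if v not in outliers]
    return filtered, outliers

# Hypothetical scan data: TLS timeout values reported by scanners,
# with one wildly anomalous reading.
timeouts = [30, 31, 30, 29, 32, 30, 900]
filtered, outliers = filter_anomalous(timeouts, eps=5, min_size=2)
```

Running `filter_anomalous` on the sample readings keeps the tight cluster of timeouts and flags the lone 900-second reading as the anomalous value.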
However, the combination of Hamdi, McGrew and Ohana does not explicitly disclose the following limitation taught by Young: and configuring the computer network security system to monitor at least one software application for the at least one vulnerability using the filtered set of values of the vulnerability parameter (Young, ¶25-26: "… process 200 monitors an application running on network security device 110. … process 200 determines whether the application matches a malware fingerprint 124. Process 200 may determine that the application matches a malware fingerprint 124 based on determining that the malware application comprises one or more features associated with the malware fingerprint 124. Examples of features associated with the malware fingerprint 124 may include a malware signature, suspicious system behavior, suspicious user behavior … or suspicious characteristics."), wherein the configuring comprises configuring the computer network security system to determine whether the at least one software application is configured in accordance with at least one of the filtered set of values (Young, ¶19: "Testing engine 118 determines that the malware application is configured to send a probe that seeks to obtain information associated with an environment in which the malware application runs.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew and Ohana to incorporate the functionality of the network security device to implement a testing engine (see Fig. 1 of Young, "Testing Engine 118") to determine if a malware application is configured to send a probe, as disclosed by Young; such modification would enable the system to monitor for unusual network traffic.
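Young is mapped to the final limitation: once anomalous values are filtered out, the filtered set configures what the security system checks a monitored application against. A hedged sketch of that configuration check (the `tls_timeout` parameter name and values are hypothetical, not from the cited references):

```python
# Hypothetical sketch: validate a monitored application's configuration
# against the filtered (non-anomalous) set of vulnerability parameter
# values. A value outside the filtered set is flagged for review.

def conforms(app_config: dict, parameter: str, filtered_values) -> bool:
    """True if the application's value for `parameter` is among the
    filtered, non-anomalous values."""
    return app_config.get(parameter) in set(filtered_values)

filtered_values = [30, 31, 29, 32]   # anomalous 900 already removed upstream
ok = conforms({"tls_timeout": 30}, "tls_timeout", filtered_values)
flagged = not conforms({"tls_timeout": 900}, "tls_timeout", filtered_values)
```

The same check could gate a remedial action, as in the later claim 12 mapping to Roy.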
Regarding claim 13:

The combination of Hamdi, McGrew, Ohana and Young discloses: The method of claim 1, further comprising, after clustering the plurality of datapoints to obtain the plurality of vulnerability parameter clusters: obtaining additional vulnerability data comprising additional values of the vulnerability parameter (Ohana, see Fig. 3, "Iterate 324" and "Receive data-points 302"); generating an updated plurality of datapoints representing the updated plurality of values of the vulnerability parameter (Ohana, ¶128: "At 304, the unstructured log messages are converted into numeric data-points."); applying the clustering algorithm to the updated plurality of datapoints to obtain an updated plurality of vulnerability clusters (Ohana, ¶169: "At 306, the data-points are clustered,"); and using the updated plurality of vulnerability clusters to identify datasets including anomalous data (Ohana, ¶171: "At 310, the metric anomaly scores of each (e.g., the current) system level anomalous time interval are analyzed,"). The same motivation which is applied to claim 1 with respect to Ohana applies to claim 13.

Regarding claim 19:

Hamdi discloses: A vulnerability data processing system (see Fig. 2, CEMM System 220) comprising: at least one computer hardware processor (see Fig. 1D, Main Processor 121); and at least one non-transitory computer-readable storage medium storing instructions (see Fig. 1, Main Memory 122) … In addition to the above limitations, claim 19 recites substantially the same limitations as claim 1 in the form of a vulnerability data processing system to realize the corresponding functionality. Therefore, it is rejected by the same rationale.

Regarding claim 20:

Claim 20 substantially recites the same limitations as claim 1 in the form of a non-transitory computer-readable storage medium for storing instructions. Therefore, it is rejected by the same rationale.

Claim 2 is rejected under 35 U.S.C.
103 as being unpatentable over Hamdi, McGrew, Ohana, Young, and further in view of US-PGPUB No. 2023/0267105 A1 Hanson et al. (hereinafter "Hanson").

Regarding claim 2:

The combination of Hamdi, McGrew, Ohana and Young discloses the method of claim 1, but does not explicitly disclose the following limitation taught by Hanson: wherein generating the plurality of datapoints representing the plurality of values of the vulnerability parameter comprises: deduplicating the plurality of values of the vulnerability parameter to obtain a set of deduplicated vulnerability parameter values (Hanson, ¶14: "clustering together the identical ones of the plurality of distributions of entropy change values, and then removing duplicates of the identical ones of the plurality of distributions of entropy change values"); and generating the plurality of datapoints using the set of deduplicated vulnerability parameter values (Hanson, ¶14: "… the entropy change values correspond to one or more anomalies; … the segmented data is arranged in a hierarchical manner …").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana and Young to incorporate the functionality of the method to remove duplicates of the identical ones of the plurality of distributions of entropy change values in a cluster, as disclosed by Hanson; such modification would enable the system to remove redundant values and focus on unique and actionable vulnerabilities, thus improving the vulnerability detection and mitigation process.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Hamdi, McGrew, Ohana, Young, Hanson, and further in view of US-PGPUB No. 2020/0327444 A1 Negi et al.
(hereinafter "Negi").

Regarding claim 3:

The combination of Hamdi, McGrew, Ohana, Young and Hanson discloses the method of claim 2, but does not explicitly disclose the following limitation taught by Negi: wherein generating the plurality of datapoints using the set of deduplicated vulnerability parameter values comprises: applying a mask to the set of deduplicated vulnerability parameter values to obtain a plurality of masked vulnerability parameter values (Negi, ¶04: "… masking/encrypting values in all columns containing personally identifiable information,"); deduplicating the plurality of masked vulnerability parameter values to obtain a set of deduplicated masked vulnerability parameter values (Negi, ¶04: "identifying and removing columns which duplicate class attributes,"); and generating the plurality of datapoints using the set of deduplicated masked vulnerability parameter values (Negi, ¶04: "obtaining the ingestible datasets, which are capable of application to an algorithm for obtaining vector representations …").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana, Young and Hanson to incorporate the functionality of the method to mask columns containing personally identifiable information, and remove columns with duplicate class attributes, as disclosed by Negi; such modification would enable the system to protect the original, sensitive data while still allowing for the use of the dataset.

Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Hamdi, McGrew, Ohana, Young, Hanson, Negi, and further in view of US-PGPUB No. 2020/0349430 A1 Schmidtler et al.
(hereinafter "Schmidtler").

Regarding claim 4:

The combination of Hamdi, McGrew, Ohana, Young, Hanson and Negi discloses the method of claim 3, but does not explicitly disclose the following limitation taught by Schmidtler: wherein generating the plurality of datapoints using the set of deduplicated masked vulnerability parameter values comprises: encoding each of the set of deduplicated masked vulnerability parameter values as a respective fixed-length vector of numeric values to obtain a plurality of fixed-length vectors as the plurality of datapoints (Schmidtler, ¶45: "… the sequence of characters … may be encoded as a fixed-length vector of real-numbered values through use of an encoding model.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana, Young, Hanson and Negi to incorporate the functionality of the method to encode a sequence of characters as a fixed-length vector of real-numbered values, as disclosed by Schmidtler; such modification would enable the system to prepare data for deep learning models, since deep learning models rely on numerical vectors to identify and learn patterns indicative of vulnerabilities.

Regarding claim 5:

The combination of Hamdi, McGrew, Ohana, Young, Hanson, Negi and Schmidtler discloses: The method of claim 4, wherein encoding each of the set of deduplicated masked vulnerability parameter values as a respective fixed-length vector of numeric values comprises providing each of the set of deduplicated masked vulnerability parameter values as input to a trained encoder model to obtain the respective fixed-length vector of numeric values (Schmidtler, ¶73: "… encoding models may be used to generate a plurality of fixed-length vectors …"). The same motivation which is applied to claim 4 with respect to Schmidtler applies to claim 5.

Claim 6 is rejected under 35 U.S.C.
103 as being unpatentable over Hamdi, McGrew, Ohana, Young, and further in view of US-PGPUB No. 2019/0121965 A1 Chai et al. (hereinafter "Chai").

Regarding claim 6:

The combination of Hamdi, McGrew, Ohana and Young discloses the method of claim 1, but does not explicitly disclose the following limitation taught by Chai: wherein obtaining the plurality of values of the vulnerability parameter comprises executing a vulnerability data acquisition agent that extracts the plurality of values of the vulnerability parameter (Chai, ¶201-202: "The guard agent of the local application instance extracts [an] characteristic value of another application instance from the characteristic value database for comparison, … it may be considered that the cloud application … has security vulnerability.") from a vulnerability data source (Chai, ¶201: "characteristic value database", see Fig. 3).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana and Young to incorporate the functionality of the guard agent to extract characteristic values of an application from a characteristic value database, as disclosed by Chai; such modification would enable the system to compare the characteristic values against those of the current application, determine if there is a security vulnerability, and process the application in a processing manner of initiating an alarm, migration, or restoration, so as to prevent the security issue if there exists a threat.

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Hamdi, McGrew, Ohana, Young, and further in view of US-PGPUB No. 2021/0026646 A1 Jha et al.
(hereinafter "Jha").

Regarding claim 7:

The combination of Hamdi, McGrew, Ohana and Young discloses the method of claim 1, but does not explicitly disclose the following limitation taught by Jha: wherein clustering the plurality of datapoints to obtain the vulnerability parameter clusters comprises clustering the plurality of datapoints using a density-based clustering algorithm (Jha, ¶35: "… the trace clusterer 208 uses a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, which aims at grouping datapoints based upon density parameters (Euclidean distance between datapoints and the number of points in a neighborhood).").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana and Young to incorporate the functionality of the method to cluster data points using a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, as disclosed by Jha; such modification would enable the system to detect potential attacks or anomalies by identifying patterns of normal behavior and flagging outliers.

Regarding claim 8:

The combination of Hamdi, McGrew, Ohana, Young and Jha discloses: The method of claim 7, wherein the density-based clustering algorithm is a density-based spatial clustering of applications with noise (DBSCAN) algorithm (Jha, ¶35: "… the trace clusterer 208 uses a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm,"). The same motivation which is applied to claim 7 with respect to Jha applies to claim 8.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Hamdi, McGrew, Ohana, Young, US-PGPUB No. 2024/0273382 A1 Tsepenekas et al. (hereinafter "Tsepenekas"), and further in view of US-PGPUB No. 2018/0075038 A1 Azvine et al.
(hereinafter "Azvine").

Regarding claim 9:

The combination of Hamdi, McGrew, Ohana and Young discloses the method of claim 1, but does not explicitly disclose the following limitation taught by Tsepenekas: further comprising: obtaining an additional value of the vulnerability parameter (Tsepenekas, ¶09: "receiving a first input raw dataset and a second input raw dataset that are usable for computing common features data"); generating an additional datapoint representing the additional value of the vulnerability parameter (Tsepenekas, ¶09: "generating a second data point from the second input raw dataset:"); determining a measure of similarity between the additional datapoint and the plurality of datapoints (Tsepenekas, ¶09: "computing similarity of the first data point and the second data point");

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana and Young to incorporate the functionality of the method for computing data contraction and estimating similarity of data points from heterogeneous data descriptors, as disclosed by Tsepenekas; such modification would enable the system to effectively manage vulnerabilities, provide threat intelligence, and apply defense strategies.

The combination of Hamdi, McGrew, Ohana, Young and Tsepenekas does not explicitly disclose the following limitation taught by Azvine: and determining cluster membership of the additional datapoint based on the measure of similarity between the additional datapoint and the plurality of datapoints (Azvine, ¶168: "The xmu Jaccard similarities can be used as part of an identification of clusters of entities, where membership to a cluster arises due to a sufficient similarity (based on the graded boundary) with an existing member of the cluster.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana, Young and Tsepenekas to incorporate the technique of the xmu Jaccard similarities to determine membership of an entity to a cluster, as disclosed by Azvine; such modification would enable the system to identify patterns in data without prior knowledge of the correct groupings.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Hamdi, McGrew, Ohana, Young, Tsepenekas, Azvine, and further in view of US-PGPUB No. 2024/0195816 A1 Ferreira et al. (hereinafter "Ferreira").

Regarding claim 10:

The combination of Hamdi, McGrew, Ohana, Young, Tsepenekas and Azvine discloses the method of claim 9, but does not explicitly disclose the following limitation taught by Ferreira: wherein the method further comprises: determining, based on the cluster membership of the additional datapoint, that the additional datapoint is an outlier that is outside of the plurality of vulnerability parameter clusters (Ferreira, ¶36: "a robust aggregation method may also return a score for each gradient on how much of an "outlier" the gradient is."); and outputting an indication that the additional value of the vulnerability parameter is an anomalous value (Ferreira, ¶36: "a robust aggregation method may operate to output an "outlier" score for each gradient, where that score indicates the relative closeness of that gradient to "normal,"").
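Claims 9 and 10 turn on assigning a new datapoint to an existing cluster by similarity, and flagging it as an outlier when no cluster is similar enough. The stdlib sketch below uses plain Jaccard similarity over token sets as a stand-in for the "xmu Jaccard" measure cited from Azvine; the cluster contents and the 0.5 threshold are invented for illustration.

```python
# Hedged sketch: measure similarity between an additional datapoint and
# the members of each existing cluster; assign membership when the best
# similarity clears a threshold, otherwise flag the point as an outlier.

def jaccard(a: set, b: set) -> float:
    """Plain Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def assign_cluster(point, clusters, threshold=0.5):
    """Return index of the most similar cluster, or None (outlier)."""
    best_idx, best_sim = None, 0.0
    for idx, members in enumerate(clusters):
        sim = max(jaccard(point, m) for m in members)
        if sim > best_sim:
            best_idx, best_sim = idx, sim
    return best_idx if best_sim >= threshold else None

# Hypothetical clusters of tokenized vulnerability parameter values.
clusters = [
    [{"ssh", "v2", "rsa"}, {"ssh", "v2", "ecdsa"}],
    [{"tls", "1.2", "aes"}],
]
member = assign_cluster({"ssh", "v2", "dsa"}, clusters)   # joins cluster 0
outlier = assign_cluster({"smb", "v1"}, clusters)         # matches nothing
```

A `None` result corresponds to the claim 10 step of outputting an indication that the additional value is anomalous.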
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana, Young, Tsepenekas and Azvine to incorporate the technique of the robust aggregation method to output an "outlier" score for each gradient from a given cluster, as disclosed by Ferreira; such modification would enable the system to identify data points that deviate significantly from the norm within a group.

Claims 12 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Hamdi, McGrew, Ohana, Young, and further in view of US-PGPUB No. 2021/0365345 A1 Roy et al. (hereinafter "Roy").

Regarding claim 12:

The combination of Hamdi, McGrew, Ohana and Young discloses the method of claim 1, but does not explicitly disclose the following limitation taught by Roy: wherein configuring the computer network security system to monitor the at least one software application for the at least one vulnerability using the filtered set of values of the vulnerability parameter comprises configuring the computer network security system to: determine whether the at least one software application is configured in accordance with at least one of the filtered set of values (Roy, ¶34: "analyzing the filtered data generated at block 420 to determine an average performance rating of an application."); and when it is determined that the at least one software application is configured in accordance with the at least one filtered value, perform a remedial action to compensate for the at least one vulnerability (Roy, ¶38: "implementing a corrective action to increase overall performance of the device 50 …").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana and Young to incorporate the functionality of the method to identify if an application causes a decrease in the overall performance of a device, as disclosed by Roy; such modification would enable the system to ensure the application is not consuming excessive resources, which could indicate a vulnerability or misconfiguration.

Regarding claim 21:

The combination of Hamdi, McGrew, Ohana, Young and Roy discloses: (New) The method of claim 12, wherein performing the remedial action comprises updating the at least one software application and/or applying a control to the at least one software application (Roy, ¶38: "if an application is identified to be cause a decrease in the overall performance of a device 50, … a replacement application or an upgrade to the application … may be implemented to alleviate the decrease in the overall performance of the device 50."). The same motivation which is applied to claim 12 with respect to Roy applies to claim 21.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Hamdi, McGrew, Ohana, Young, and further in view of US-PGPUB No. 2018/0114016 A1 Lee et al. (hereinafter "Lee").

Regarding claim 14:

The combination of Hamdi, McGrew, Ohana and Young discloses the method of claim 1, but does not explicitly disclose the following limitation taught by Lee: further comprising: executing a plurality of vulnerability data acquisition agents to obtain vulnerability parameter values (Lee, ¶52: "… data is collected at each of a plurality of users' computers 101a, 101b and 101c. An agent program installed on each user's computer may be used to collect data. The agent program collects data on behaviors of using documents on the users' PCs.", see Fig. 2); for each agent of the plurality of vulnerability data acquisition agents: generating a set of datapoints representing vulnerability parameter values obtained from execution of the agent (Lee, ¶55: "… the agent installed in each user's computer may count and store the user's behaviors of using documents, and may transmit the numerical values counted on the day and the numerical values of a certain period of time in the past to the apparatus 10."); clustering the set of datapoints to obtain a respective plurality of vulnerability parameter clusters (Lee, ¶82: "… the type and number of the collected behavior of using documents may vary depending on the agent.", ¶87: "… the collected data is preprocessed and clusters are created using the preprocessed data"); and using the respective plurality of vulnerability parameter clusters to identify datasets obtained from subsequent execution of the agent that include anomalous data (Lee, ¶58: "An anomaly detecting unit 190 may compare the pattern of behaviors in the past with the pattern of behaviors on the day to see if it is suspected that internal information is leaked.").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana and Young to incorporate the functionality of the method to use an agent to store and transmit user behavior to an anomaly detecting unit of an apparatus to determine anomalous behavior through clustering, as disclosed by Lee; such modification would enable the system to strengthen its cybersecurity posture, and to detect and respond to threats proactively, minimizing the impact of security breaches.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Hamdi, McGrew, Ohana, Young, Lee, and further in view of US-PGPUB No. 2019/0155905 A1 Bachrach et al.
(hereinafter "Bachrach").

Regarding claim 15:

The combination of Hamdi, McGrew, Ohana, Young and Lee discloses: The method of claim 14, wherein generating the set of datapoints representing the vulnerability parameter values obtained from execution of the agent comprises: generating a set of masked vulnerability parameter values using the vulnerability parameter values obtained from execution of the agent (Lee, ¶81: "… the data can be modified by the normalization so that it has values between 0 and 1."); The same motivation which is applied to claim 14 with respect to Lee applies to claim 15.

The combination of Hamdi, McGrew, Ohana, Young and Lee does not explicitly disclose the following limitation taught by Bachrach: providing the set of masked vulnerability parameter values as input to the trained encoder model associated with the agent to obtain the set of datapoints (Bachrach, ¶64: "… a response content encoder 555 to encode an agent response message 560 following the dialogue prefix 520 in the text dialogue as a numeric array to generate a response content encoding 570."); and using the trained encoder model associated with the agent to generate the datapoints (Bachrach, ¶64: "… a response content encoder 555 to encode an agent response message 560 … to generate a response content encoding 570.", see Fig. 5B).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention, to modify the teachings of the combination of Hamdi, McGrew, Ohana, Young and Lee to incorporate the functionality of the method to implement a response content encoder to encode an agent response message and generate a response content encoding, as disclosed by Bachrach; such modification would enable the system to represent and detect patterns related to vulnerabilities and anomalies by encoding complex data into a more manageable form.

Claim 16 is rejected under 35 U.S.C.
103 as being unpatentable over Hamdi, McGrew, Ohana, Young, USPAT No. 6473898 B1 to Waugh et al. (hereinafter “Waugh”), and further in view of US-PGPUB No. 2019/0287012 A1 to Celikyilmaz et al. (hereinafter “Celikyilmaz”). Regarding claim 16: The combination of Hamdi, McGrew, Ohana and Young discloses the method of claim 1, but does not explicitly disclose the following limitations taught by Waugh: further comprising: executing a first vulnerability data acquisition agent to obtain first vulnerability data including a first vulnerability parameter value (Waugh, col 3, lines 37-41: “by virtue of a first acquisition agent, … acquiring a first set of values for the number of subattributes determined by the first acquisition agent;”); executing a second vulnerability data acquisition agent to obtain second vulnerability data including a second vulnerability parameter value (Waugh, col 3, lines 41-45: “by virtue of a second acquisition agent, … acquiring a second set of values for the number of subattributes determined by the second acquisition agent,”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Hamdi, McGrew, Ohana and Young to incorporate the functionality of the method to implement two acquisition agents that each acquire a set of values, as disclosed by Waugh; such modification would allow the system to collect data from various sources, enabling comprehensive security analysis and threat detection, and would help monitor network activity, identify anomalies, and detect potential malicious behavior.

The combination of Hamdi, McGrew, Ohana, Young and Waugh does not explicitly disclose the following limitation taught by Celikyilmaz: generating a first datapoint (Celikyilmaz, see Fig. 1, Encoder Output 1) representing the first vulnerability parameter value (Celikyilmaz, see Fig.
1, Input Sequence 1) using a first trained encoder model associated with the first vulnerability data acquisition agent (Celikyilmaz, see Fig. 1, Multi-Layer Encoder Agent 1; see also ¶26: “While three encoder agents 104, 105, 106 are depicted, it is to be understood that, in general, any number of two or more agents may be used in the encoder layer 102.”); and generating a second datapoint (Celikyilmaz, see Fig. 1, Encoder Output 2) representing the second vulnerability parameter value (Celikyilmaz, see Fig. 1, Input Sequence 2) using a second trained encoder model associated with the second vulnerability data acquisition agent (Celikyilmaz, see Fig. 1, Multi-Layer Encoder Agent 2).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Hamdi, McGrew, Ohana, Young and Waugh to incorporate the technique of implementing multiple multi-layer interconnected encoder agents to generate a sequence of output probability distributions over a vocabulary, as disclosed by Celikyilmaz; such modification would improve the ability of the system's models to differentiate between normal and malicious data, thus enhancing the effectiveness of cyberattack detection.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Hamdi, McGrew, Ohana, Young, and further in view of US-PGPUB No. 2024/0314154 A1 to Shua et al.
(hereinafter “Shua”). Regarding claim 17: The combination of Hamdi, McGrew, Ohana and Young discloses the method of claim 1, but does not explicitly disclose the following limitation taught by Shua: the vulnerability parameter is a version number of a software application program (Shua, ¶52: “… performing a query of the installed software 220 for unique version number … comparing to, … a set of likely or potential vulnerabilities to that software version for potential deficiencies or cybersecurity threats known or suspected to similar software types and versions.”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Hamdi, McGrew, Ohana and Young to incorporate the functionality of the method to query installed software for a unique version number and compare it against similar software types and versions to identify potential vulnerabilities, as disclosed by Shua; such modification would leverage resources to pinpoint known vulnerabilities and their potential impact, and would provide the system with information crucial for prioritizing patching, updating, or implementing compensating controls.

Claim 18 is rejected under 35 U.S.C.
103 as being unpatentable over Hamdi, McGrew, Ohana, Young, and further in view of Schmidtler. Regarding claim 18: The combination of Hamdi, McGrew, Ohana and Young discloses the method of claim 1, but does not explicitly disclose the following limitation taught by Schmidtler: wherein: the plurality of values of the vulnerability parameter is a plurality of strings (Schmidtler, ¶87: “domain attribute feature extractor 400 performs a method of encoding the information present in domain attribute sources (which, in some examples, are text strings) into a concatenated feature vector 450 for a time period …”) and the plurality of datapoints is a plurality of fixed-length numeric vectors representing respective ones of the plurality of strings (Schmidtler, ¶73: “… encoding models may be used to generate a plurality of fixed-length vectors that are subsequently concatenated by attribute feature extractor 210”); and generating the plurality of datapoints representing the plurality of values of the vulnerability parameter comprises generating the plurality of fixed-length numeric vectors (Schmidtler, ¶88: “It can be noted that some domain attributes are represented numerically. An attribute that is already represented numerically (e.g., the 32-bit IPv4 address) may be encoded … to produce a compressed fixed-length output vector …”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of the combination of Hamdi, McGrew, Ohana and Young to incorporate the functionality of the method to produce a compressed fixed-length output vector of a numerically represented attribute, as disclosed by Schmidtler; such modification would enable the system to produce data that is crucial to machine learning models and would allow for efficient comparison and analysis of data.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHIAS HABTEGEORGIS whose telephone number is (571) 272-1916. The examiner can normally be reached M-F, 8am-5pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William R. Korzuch, can be reached at (571) 272-7589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MATTHIAS HABTEGEORGIS/
Examiner, Art Unit 2491

Prosecution Timeline

Jun 20, 2023
Application Filed
Jun 13, 2025
Non-Final Rejection — §103
Jul 30, 2025
Response Filed
Oct 28, 2025
Final Rejection — §103
Dec 01, 2025
Applicant Interview (Telephonic)
Dec 13, 2025
Examiner Interview Summary
Jan 02, 2026
Request for Continued Examination
Jan 17, 2026
Response after Non-Final Action
Jan 19, 2026
Non-Final Rejection — §103 (current)
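
The filing date above and the 3y 2m median prosecution time shown elsewhere on this page imply a rough projected grant window. A minimal sketch of that date arithmetic, assuming naive calendar-month addition (an assumption for illustration, not this tool's documented projection model):

```python
from datetime import date

filed = date(2023, 6, 20)    # Application Filed (from the timeline)
median_months = 3 * 12 + 2   # 3y 2m median prosecution time, per the page

# Naive calendar-month addition: carry whole years, then set the month.
years, months = divmod(filed.month - 1 + median_months, 12)
projected = filed.replace(year=filed.year + years, month=months + 1)
print(projected)  # 2026-08-20
```

Day-of-month edge cases (e.g., landing on Feb 30) are ignored here; a real docketing tool would handle them and would also condition on the remaining OA rounds.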

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591641
PROCESSING AN INPUT STREAM OF A USER DEVICE TO FACILITATE SECURITY ASSOCIATED WITH AN ACCOUNT OF A USER OF THE USER DEVICE
2y 5m to grant · Granted Mar 31, 2026
Patent 12574353
A Method And Unit For Adaptive Creation Of Network Traffic Filtering Rules On A Network Device That Autonomously Detects Anomalies And Automatically Mitigates Volumetric (DDOS) Attacks
2y 5m to grant · Granted Mar 10, 2026
Patent 12541609
METHOD AND SYSTEM FOR IDENTIFYING HEALTH OF A MICROSERVICE BASED ON RESOURCE UTILIZATION OF THE MICROSERVICE
2y 5m to grant · Granted Feb 03, 2026
Patent 12513188
METHOD AND SYSTEM FOR PROTECTING A CHECKOUT TRANSACTION FROM MALICIOUS CODE INJECTION
2y 5m to grant · Granted Dec 30, 2025
Patent 12513112
NETWORK APPARATUS AND NETWORK ATTACK BLOCKING METHOD THEREOF
2y 5m to grant · Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
97%
With Interview (+21.3%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 97 resolved cases by this examiner. Grant probability derived from career allow rate.
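
The headline percentages in this block follow from the examiner's resolved-case counts cited on this page. A hedged sketch of the arithmetic — the counts come from the page itself, but the lift-addition formula is an assumption about how the tool combines them, not its documented method:

```python
granted = 73    # cases allowed by this examiner (from the page)
resolved = 97   # total resolved cases (grants + abandonments)

# Career allow rate, used here as the baseline grant probability.
career_allow_rate = granted / resolved          # ~0.753 -> displayed as 75%

# Interview lift stated on the page, added as percentage points (assumed).
interview_lift = 0.213
with_interview = career_allow_rate + interview_lift  # ~0.966 -> displayed as 97%

print(f"{career_allow_rate:.0%} {with_interview:.0%}")  # 75% 97%
```

A more careful model would treat the interview lift as conditional on case mix rather than a flat additive bonus, but the simple sum reproduces the displayed figures.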
