Prosecution Insights
Last updated: April 19, 2026
Application No. 18/326,843

ADAPTIVE SYSTEM FOR NETWORK AND SECURITY MANAGEMENT

Final Rejection (§102, §103)
Filed: May 31, 2023
Examiner: ALMAGHAYREH, KHALID M
Art Unit: 2492
Tech Center: 2400 — Computer Networks
Assignee: Netenrich Inc.
OA Round: 2 (Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (208 granted / 248 resolved; +25.9% vs TC avg; above average)
Interview Lift: +25.2% allowance rate among resolved cases with an interview versus without (a strong lift)
Avg Prosecution: 2y 8m typical timeline; 13 applications currently pending
Total Applications: 261 across all art units
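The headline figures are simple ratios of the disposition counts above. A minimal sketch to reproduce them; note the Tech Center baseline and the without-interview rate are inferences from the reported deltas, since those underlying counts are not shown on this page:

```python
granted, resolved = 208, 248              # career disposition counts (from this page)
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")                # 83.9%, displayed as 84%

# The +25.9-point delta implies a Tech Center baseline near 58.0% (inferred).
tc_avg = allow_rate - 0.259

# 99% with interview minus the +25.2-point lift implies roughly 73.8%
# allowance for resolved cases without an interview (inferred).
without_interview = 0.99 - 0.252
print(f"{tc_avg:.1%}, {without_interview:.1%}")   # 58.0%, 73.8%
```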

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 248 resolved cases.
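One consistency check worth running on these numbers: every statute's "vs TC avg" delta backs out to the same baseline, which suggests the chart draws a single Tech Center reference value rather than a per-statute average. A two-line verification:

```python
# (rate, delta vs TC avg) for each statute, as reported above
rates = {"§101": (0.062, -0.338), "§103": (0.475, +0.075),
         "§102": (0.188, -0.212), "§112": (0.221, -0.179)}
for statute, (rate, delta) in rates.items():
    print(statute, f"implied TC average = {rate - delta:.1%}")  # 40.0% in every row
```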

Office Action

Grounds: §102, §103
DETAILED ACTION

This communication is responsive to Application No. 18/326,843, filed on May 31, 2023. Claims 1-25 are pending and are directed towards ADAPTIVE SYSTEM FOR NETWORK AND SECURITY MANAGEMENT.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 05/31/2024 is acknowledged. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The disclosure is objected to because it contains an embedded hyperlink and/or other form of browser-executable code (see Para [0074]). Applicant is required to delete the embedded hyperlink and/or other form of browser-executable code; references to websites should be limited to the top-level domain name without any prefix such as http:// or other browser-executable code. See MPEP § 608.01.

Claim Objections

Applicant is advised that should claim 18 be found allowable, claim 19 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates, or else are so close in content that they both cover the same thing despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5, 7-13, and 15-25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shenoy et al., US 2019/0098037 A1 (hereinafter "Shenoy").

As per claims 1, 9, and 22, Shenoy teaches a system for identifying computer risk, the system configured to:

receive a set of inputs from a plurality of third-party sources, wherein the set of inputs comprise at least one of streaming data or historical data (a security system receives activity data associated with a first source. The activity data may include data associated with a first user account during the use of a first cloud-based service… retrieve additional activity data from a second source. The additional activity data may include data associated with a second user account during the use of a second cloud-based service. Shenoy, para [0031]);

perform, by an event signal processing component of the system, normalization on the set of inputs (the data loader application 206 performs operations for normalizing the data and reformatting the data into a common format for storage in, and retrieval from, the analytics and threat intelligence repository 211. Shenoy, para [0103]);

perform, by the event signal processing component of the system, classification of the normalized set of inputs (Reformatting the data may include categorizing and structuring the data into the common format. Shenoy, para [0103]) (Clustering and regression algorithms can be used to categorize data and find common patterns. For example, a clustering algorithm can put data into clusters by aggregating all entries of users logging in from a mobile device. Shenoy, para [0163]);

generate, by a machine learning model of the event signal processing component of the system, vectorized data based on the classification of the normalized set of inputs, wherein the vectorized data indicates at least one of a (i) signal type, (ii) network environment, or (iii) objective (Decision tree, time series, naive Bayes analysis, and techniques used to build user behavior profiles are examples of machine learning techniques that can be used to generate predictions based on patterns of suspicious activity and/or external data feeds. Techniques such as clustering can be used to detect outliers and anomalous activity. Shenoy, para [0160]) (Activity data collected from various parameters over a period of time can be used with machine learning algorithms to generate patterns referred to as user behavior profiles. The activity data can include contextual information such as IP address and geographic location. Shenoy, para [0163]) (Statistics such as those illustrated above can be combined into a feature vector. Feature vectors can include, for example, a count of a number of logins, a count of a number of distinct IP addresses used for logging in, a maximum distance between any two IP addresses used to log in within a 24-hour time period, a count of a number of distinct browsers used in connections to the cloud application within a 24-hour time period, and/or other measures. Feature vectors may be aggregated per cloud application and/or per user per cloud application. Shenoy, para [0130]); and

generate, by the event signal processing component of the system, clusters of alerts based on the vectorized data by identifying a similarity of existing alert clusters (generate reports that may be presented visually to a system administrator via a user interface and to generate analytics for determining threat levels, detecting specific threats, and predicting potential threats, among other things. Shenoy, para [0106]) (techniques such as outlier detection can establish a baseline that is useful for detecting anomalous activities. Such anomalous activities along with contextual threat intelligence can provide more accurate prediction of threats with low prediction errors. Shenoy, para [0164]) (alerts 322 can be provided in visualizations 328 that can be viewed using a user interface that is accessible to an organization. Shenoy, para [0167]-[0168]).

As per claims 2 and 10, Shenoy teaches the system as recited in Claim 1, wherein the set of inputs are at least one of (i) log files, (ii) performance metric information, (iii) alert data, (iv) configuration data, or (v) trace data (the activity data may be received in different formats that are used by different service providers or services. For example, the data may be formatted in JSON or other data interchange formats, or may be available as log files or database entries. Shenoy, para [0103]).
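To make the mapped "normalization into a common format" step concrete, here is a minimal editorial sketch of reformatting heterogeneous activity records into one schema. The Event schema and the provider key names are hypothetical illustrations; they appear in neither the application nor Shenoy:

```python
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Event:
    """Common format for downstream classification and vectorization (hypothetical schema)."""
    timestamp: datetime
    source: str
    user: str
    ip: str

def normalize(raw: str, source: str) -> Event:
    """Reformat one provider-specific JSON activity record into the common shape."""
    record = json.loads(raw)
    # Providers name the same fields differently; map each variant to one schema.
    user = record.get("user") or record.get("userName") or record.get("actor", "")
    ip = record.get("ip") or record.get("sourceIPAddress", "")
    ts = record.get("time") or record.get("eventTime")
    return Event(
        timestamp=datetime.fromisoformat(ts).astimezone(timezone.utc),
        source=source,
        user=user,
        ip=ip,
    )

# Two differently shaped records normalize to the same Event schema.
a = normalize('{"user": "alice", "ip": "10.0.0.1", "time": "2023-05-31T12:00:00+00:00"}', "svc-a")
b = normalize('{"actor": "alice", "sourceIPAddress": "10.0.0.2", "eventTime": "2023-05-31T12:05:00+00:00"}', "svc-b")
```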
As per claims 3 and 11, Shenoy teaches the system as recited in Claim 2, wherein the classification comprises categorizing each set of input, of the set of inputs, wherein each of the normalized set of inputs is categorized as having at least one of a (i) designated department of a user, or (ii) location information of an internet protocol (IP) address (the contextual data can include, for example, identification of a type of the client device, IP addresses used by the client device, geolocation data computed by a Global Positioning System (GPS) receiver of the client device, and other information about the client device or that can be obtained from the client device. Shenoy, para [0126]) (Feature vectors can include, for example, a count of a number of logins, a count of a number of distinct IP addresses used for logging in, a maximum distance between any two IP addresses used to log in within a 24-hour time period, a count of a number of distinct browsers used in connections to the cloud application within a 24-hour time period, and/or other measures. Shenoy, para [0130]) (Algorithm 3 provides an example of an algorithm that can be used for analytics of multiple application behavior. In algorithm 3, user IP addresses associated with various cloud service activities (such as logging in) are resolved to geolocation coordinates IP1 (Latitude 1, Longitude 1), IP2 (Latitude 2, Longitude 2), IP3 (Latitude 3, Longitude 3)… Shenoy, para [0143]).

As per claims 4 and 12, Shenoy teaches the system as recited in Claim 3, wherein the classification is based on a policy or entity graph (the control manager 172 can also maintain security policies for the organization 130. A security policy can define an action or set of actions that, when detected, constitute a security violation or an event that otherwise requires attention. In some examples, actions that are defined by a policy as a security violation can occur through use of one service, meaning that all the actions were performed while using the same service. In some examples, the actions can have occurred during use of more than one service, where the services are provided by one service provider or multiple service providers. In some examples, a security policy can also define one or more remediation actions to perform when a violation of the policy is detected. Shenoy, para [0074], [0178]).

As per claims 5 and 13, Shenoy teaches the system as recited in Claim 1, wherein normalization of the set of inputs comprises transforming the set of inputs into a uniform format for processing (The data entered into a landing repository 210 may be in different formats and/or have different ranges of values, due, for example, from having been collected from different service providers. In some examples, the data from the data loader application 206 can be reformatted and/or structured before being moved to the analytics and threat intelligence repository 211 so that, for example, the data has a uniform format. Shenoy, para [0098]).

As per claims 7 and 15, Shenoy teaches the system as recited in Claim 1, wherein the vectorized data is stored in a run-time vector database (Feature vectors can include, for example, a count of a number of logins, a count of a number of distinct IP addresses used for logging in, a maximum distance between any two IP addresses used to log in within a 24-hour time period, a count of a number of distinct browsers used in connections to the cloud application within a 24-hour time period, and/or other measures. Feature vectors may be aggregated per cloud application and/or per user per cloud application. Shenoy, para [0130]) (Table 5 below lists example values for several possible daily aggregation matrix vectors. The example vectors illustrated here include a count of logins per day for one day ("logcntday_1dy"), a count of failed logins per day for one day ("logfailcntday_1dy"), a count per day of IP addresses from which failed logins occurred over one day ("logfailipdisday_1dy"), and a count per day of IP addresses used to log in over one day ("logipdisday_1dy"). Shenoy, para [0132]).
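The feature-vector passages quoted above (Shenoy, para [0130], [0132]) translate almost directly into code. A minimal editorial sketch of a per-user, per-day aggregation; apart from Shenoy's own "logcntday_1dy" name, the function, field names, and record shape are hypothetical:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def daily_feature_vector(logins):
    """Aggregate one user's login events for one 24-hour window into a feature vector.

    Each event is a dict with 'ip', 'geo' (lat, lon), and 'browser' keys
    (hypothetical record shape).
    """
    geos = [e["geo"] for e in logins]
    return {
        "logcntday_1dy": len(logins),                          # logins per day (Shenoy's name)
        "distinct_ips": len({e["ip"] for e in logins}),        # distinct IPs used to log in
        "distinct_browsers": len({e["browser"] for e in logins}),
        # max distance between any two login IPs within the window
        "max_ip_distance_km": max(
            (haversine_km(p, q) for p in geos for q in geos), default=0.0
        ),
    }
```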
As per claims 8, 18, 19, and 23, Shenoy teaches the system as recited in Claim 1, wherein identifying the similarity of existing alert clusters is based at least on one or more correlation rules (the recommendation engine 308 can use association rule learning to generate recommendations. In some examples, the recommendation engine 308 can use profile linking algorithms to link activities across multiple cloud applications by finding cross-service correlation. A single user can be identified across multiple cloud services using one or more attributes or identification factors, such as a primary user identifier that is commonly used across the clouds or a single sign-on (SSO) authentication mechanism (e.g., Active Directory, Okta, etc.). Examples of correlation of activities across applications include finding a user logged into two cloud services simultaneously from different IP addresses, finding a user who performs several failed login attempts and subsequently changes the user's password, and finding users who frequently have numerous failed logins for two or more cloud services, among other examples… Alerts can be constructed based on pre-defined rules that can include specific events and thresholds. Shenoy, para [0161]) (substantially the same passage appears at Shenoy, para [0172]).

As per claim 16, Shenoy teaches the computer-implemented method of Claim 9, wherein generating, by a machine learning model, vectorized data based on the classification of the normalized set of inputs includes generating a run-time vector database, wherein the run-time vector database includes the vectorized data and additional information (Shenoy, para [0130] and [0132], quoted above for claims 7 and 15).

As per claim 17, Shenoy teaches the computer-implemented method of Claim 16, wherein the additional information includes processing results associated with at least the normalization on the set of inputs or the classification of the normalized set of inputs (the activity data may be received in different formats that are used by different service providers or services. For example, the data may be formatted in JSON or other data interchange formats, or may be available as log files or database entries. In some examples, the data loader application 206 performs operations for normalizing the data and reformatting the data into a common format for storage in, and retrieval from, the analytics and threat intelligence repository 211. Reformatting the data may include categorizing and structuring the data into the common format. In some examples, the database is adaptive to structural changes and new values, and can run automated processes to check for changed data. In some examples, the cloud crawler application 202 recognizes differences in the structure or values of the data retrieved, and can apply the changes to the application catalog database 208 and/or the analytics and threat intelligence repository 211. Shenoy, para [0103]) (the data in the activity logs may be normalized by the analytics engine 300 or prior to being provided to the analytics engine 300. Normalizing the activity data 310 includes reformatting the activity data 310 such that data from different services and/or service providers is comparable, has the same meaning, and/or bears the same significance and relevance. After normalization, the behavioral analytics engine 304 can aggregate and compare data from different cloud services in meaningful ways. For example, a series of failed login attempts by one user with one cloud service may be deemed not to be a threat. However, a series of failed logins by the same user but at multiple different cloud services indicates a concerted effort to crack the user's password and should thus set off an alarm. Shenoy, para [0123]).

As per claims 20 and 24, Shenoy teaches the computer-implemented method of Claim 9, wherein identifying the similarity of existing alert clusters is based on a machine learning model (the security monitoring and control system 102 can include a learning system 178. The learning system 178 can apply various machine learning algorithms to data collected by the security monitoring and control system 102. The information learned about the data can then be used, for example, by the data analysis system 136 to make determinations about user activities in using services provided by the service provider 110. For example, the learning system 178 can learn patterns of normal or common behaviors of users of an organization. In these and other examples, the learning system 178 can generate models that capture patterns that the learning system 178 has learned, which can be stored in the storage 122 along with other data for an organization. Shenoy, para [0077]) (the threat detection and prediction analytics application 212 can generate analytics using machine learning and other algorithms. The analytics performed by the prediction analytics application 212 can include identifying and predicting security threats from patterns of activity and behavioral models. Analytics performed by the descriptive analytics application 207 and the prediction analytics application 212 can be performed using data stored in the analytics and threat intelligence repository 211. Shenoy, para [0106]-[0109]) (the analytics engine 300 can perform various other analytics 306 on the activity data 310 obtained from service providers. In some examples, various types of algorithms can be particularly useful for analyzing the data. Decision tree, time series, naive Bayes analysis, and techniques used to build user behavior profiles are examples of machine learning techniques that can be used to generate predictions based on patterns of suspicious activity and/or external data feeds. Techniques such as clustering can be used to detect outliers and anomalous activity. For example, a threat can be identified based on an account accessing one or more files or failing a series of login attempts from an IP address that is flagged (by a third party feed or otherwise) as malicious. In a similar way, a threat can also be based on different patterns of activity with one cloud application or across multiple cloud applications, possibly over time. Shenoy, para [0160]).

As per claim 25, Shenoy teaches the one or more non-transitory computer-readable media of Claim 22, wherein identifying the similarity of existing alert clusters is based on a combination of a machine learning model and one or more correlation rules (Shenoy, para [0160], quoted above for claims 20 and 24) (Shenoy, para [0161] and [0172], quoted above for claims 8, 18, 19, and 23).

As per claim 21, Shenoy teaches the computer-implemented method of Claim 20, wherein one or more correlation rules are configured to prioritize processing over the machine learning model (Information in an alert about each risk event can include, for example, an identifier for the affected cloud service or instance, a category, a priority, a date and time, a description, a recommended remediation type, and/or a status, among other information. A risk event may also have a user-selectable action, such as editing, deleting, marking status complete, and/or performing a remediation action. Selection of a remediation action may invoke an application such as the incident remediation application and/or cloud seeder application to perform the selected remediation. An alert and/or other information concerning an identified threat can be sent to an entity external to security monitoring and control system. Shenoy, para [0168]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Shenoy et al., US 2019/0098037 A1 (hereinafter "Shenoy") in view of Zhang et al., US 2023/0315991 A1 (hereinafter "Zhang").

As per claims 6 and 14, Shenoy teaches the system as recited in Claim 1. Shenoy does not explicitly teach wherein the signal processing component, prior to clustering the vectorized data, is further configured to: embed a graph to represent a relationship of the vectorized data; or embed a large language model (LLM) embedding to represent an attribute of the vectorized data.

However, Zhang teaches embedding a large language model (LLM) embedding to represent an attribute of the vectorized data (Text-based model generator 268 may receive, retrieve, or otherwise obtain raw device information in text format (e.g., entity log information, Nmap scan data, etc.). The text-based model generator 268 may process the raw device information for each device represented by the information into a set of character strings (also referred to as tokens) that can be processed by a natural language processing model. For example, the raw entity information for each entity may be processed to combine or append information for each property of the device together into a single token and collect the tokens into a paragraph (e.g., each token separated by a space or other delimiting character). The text-based model generator 268 may then apply a natural language processing model on the paragraphs for each device (e.g., as a sentence would be processed for a human readable language). The result of applying the natural language processing model to the feature/property paragraphs may be a numerical vector in a multi-dimensional or high dimensional space. Thus, each entity may be embedded in the high dimensional space and represented by a single numerical vector. Accordingly, the entities may be grouped or clustered in the high dimensional space. The groupings may represent device types with common or similar functionality. In some embodiments, the text-based model generator 268 may select entity features that most correlate with the entity groupings in the high dimensional space. The text-based model generator 268 may then train a machine learning model using as input the selected features from a set of previously classified devices. Zhang, para [0048]).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Shenoy in view of Zhang. One would be motivated to do so to extract more attributes using text-based language models and to enhance the accuracy of system classification and clustering.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
A. Apostolopoulos, US 12,206,693 B1, directed to graph-based detection of network security issues.
B. Haterat et al., US 2022/0318384 A1, directed to malicious pattern identification in clusters of data items.
C. Nguyen et al., US 11,258,805 B2, directed to computer security event clustering and violation detection.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHALID M ALMAGHAYREH, whose telephone number is (571) 272-0179. The examiner can normally be reached Monday - Thursday, 8AM-5PM EST, and Friday variable.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RUPAL DHARIA, can be reached at (571) 272-3880. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Respectfully submitted,
/KHALID M ALMAGHAYREH/
Primary Examiner, Art Unit 2492
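For readers tracking the §103 combination, the Zhang passage (para [0048]) describes a token-paragraph to embedding-vector to clustering pipeline. Zhang does not name a specific natural language model, so the sketch below stands in a TF-IDF embedding and k-means from scikit-learn as one plausible instantiation; the device paragraphs are invented examples:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# One "paragraph" of appended property tokens per device (invented examples).
device_paragraphs = [
    "os:linux ports:22,80 vendor:acme service:ssh service:http",
    "os:linux ports:22,443 vendor:acme service:ssh service:https",
    "os:rtos ports:1883 vendor:iotco service:mqtt",
]

# Embed each paragraph as a numerical vector in a high-dimensional space.
vectors = TfidfVectorizer(token_pattern=r"\S+").fit_transform(device_paragraphs)

# Group the embedded devices; groupings approximate device types.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
print(labels)   # e.g. [0, 0, 1]: the two servers cluster together
```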

Prosecution Timeline

May 31, 2023
Application Filed
May 02, 2025
Non-Final Rejection — §102, §103
Nov 07, 2025
Response Filed
Dec 18, 2025
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596848
METHOD OF VERIFYING INTEGRITY OF DATA FROM A DEVICE UNDER TEST
2y 5m to grant Granted Apr 07, 2026
Patent 12587840
AUTHENTICATION MANAGEMENT IN A WIRELESS NETWORK ENVIRONMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12587386
CHECKOUT WITH MAC
2y 5m to grant Granted Mar 24, 2026
Patent 12579328
SYSTEM ON A CHIP AND METHOD GUARANTEEING THE FRESHNESS OF THE DATA STORED IN AN EXTERNAL MEMORY
2y 5m to grant Granted Mar 17, 2026
Patent 12572699
Using Memory Protection Data
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 99% (+25.2%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 248 resolved cases by this examiner. Grant probability derived from career allow rate.
