DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 8-9 and 14 are cancelled.
Claims 2-7 are withdrawn.
Claims 1, 10 and 15-16 are amended.
Claims 1, 10-13 and 15-16 are pending.
Applicant’s arguments, see page 7, filed 10/28/2025, with respect to Figures 1, 2 and 5 have been fully considered and are persuasive. The drawing objections of 07/31/2025 have been withdrawn.
Applicant’s arguments, see page 8, filed 10/28/2025, with respect to claims 1 and 8-16 have been fully considered and are persuasive. The claim rejections under 35 U.S.C. § 101 of 07/31/2025 have been withdrawn.
Applicant’s arguments, see pages 8-9, filed 10/28/2025, with respect to the rejection(s) of claim(s) 1, 15 and 16 under 35 U.S.C. § 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of newly found prior art.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Shivamoggi et al. (U.S. PGPub. No. 2021/0385253 A1) (hereinafter “Shivamoggi”) in view of X. Zhu, B. Fei, D. Liu and W. Bao, “Adaptive Clustering Ensemble Method Based on Uncertain Entropy Decision-Making” (hereinafter “Adaptive Clustering”), and further in view of Koral et al. (U.S. PGPub. No. 2020/0112571 A1) (hereinafter “Koral”).
Regarding Claim 1, Shivamoggi teaches:
at least one processor (Shivamoggi: [0061] Processor 855 generally represents any type or form of processing unit capable of processing data or interpreting and executing instructions. In certain embodiments, processor 855 may receive instructions from a software application or module. These instructions may cause processor 855 to perform the functions of one or more of the embodiments described and/or illustrated herein. For example, processor 855 may perform and/or be a means for performing all or some of the operations described herein. Processor 855 may also perform and/or be a means for performing any other operations, methods, or processes described and/or illustrated herein), at least one memory including computer program code, the at least one memory and the computer program code being configured to, with the at least one processor, cause the apparatus at least to perform (Shivamoggi: [0061], Memory 860 generally represents any type or form of volatile or non-volatile storage devices or mediums capable of storing data and/or other computer-readable instructions. Examples include, without limitation, random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory device. In certain embodiments computing system 800 may include both a volatile memory unit and a non-volatile storage device. In one example, program instructions implementing cluster detector 110 may be loaded into memory 860.):
receiving input data comprising data points (Shivamoggi: [0051] FIG. 4 is a flowchart 400 that illustrates a process to eliminate spurious clusters in security environments, according to one embodiment. The process begins at 405 by accessing a dataset with data points at a security server (e.g., security server 105 as shown in FIG. 1));
Shivamoggi does not explicitly teach:
applying N initial clustering algorithms at least to a subset of said data points to generate N initial clustering matrices;
generating a co-association matrix from the N initial clustering matrices;
generating a distance matrix from the co-association matrix;
applying a density based clustering algorithm to the distance matrix to generate data clusters;
determining a subset of the generated data clusters as anomalous clusters, wherein at least some of the data points in each anomalous cluster are anomalous data points, wherein each data point corresponds to properties of a network packet in received network traffic, and each anomalous cluster comprises unknown network traffic;
further determining for each anomalous cluster of unknown network traffic, whether said anomalous cluster comprises data points associated with a network attack or not;
performing at least one action based on the anomalous clusters, wherein said performing the at least one action based on the anomalous clusters comprises dropping packets coming from a same source address as packets comprising data points of the anomalous clusters determined as network attack clusters.
However, in an analogous art, “Adaptive Clustering” teaches:
applying N initial clustering algorithms at least to a subset of said data points to generate N initial clustering matrices (“Adaptive Clustering”: [Abstract], K-means is used as the base clustering algorithm of clustering ensemble, and several base clustering members are randomly generated according to different the number of clusters, and the members with high stability and quality are selected as clustering ensemble inputs. [Page 62, Col 2, Section II, para 1, lines 1-8], clustering ensemble method based on uncertain entropy decision-making. The overall structure of our method is shown in Fig.1. In order to cope with the issues that a single clustering method is sensitive to the initial clustering center and cannot determine the number of clusters adaptively, our method selects several stable and high quality members from candidate base clustering members as the input of clustering ensemble.);
generating a co-association matrix from the N initial clustering matrices (“Adaptive Clustering”: [Abstract]: Furthermore, the uncertainty of clusters in the base clusterings are calculated based on the information entropy criterion, and then the co-association matrix is established. [Page 62, Col 2, Section II, para 1, lines 8-11 and Page 63, Col 1, para 1, lines 1-2]: Based on the information entropy criterion, the uncertainty of clusters…);
generating a distance matrix from the co-association matrix (“Adaptive Clustering”: [Abstract], The obtained co-association matrix is transformed into a distance matrix by Bhattacharyya distance among data samples);
applying a density based clustering algorithm to the distance matrix to generate data clusters (“Adaptive Clustering”: [Abstract], we use the distance matrix as the input of the density peaks (DP) algorithm, and further calculate the final clustering result (=data clusters). The experimental results on real-world datasets illustrate that the proposed method has better performance than other clustering methods).
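For context, the ensemble pipeline mapped above (N base clusterings, a co-association matrix, a distance matrix, then density-based clustering) may be sketched as follows. This is an illustrative sketch only; all function names, sample data, and parameter choices are assumed for illustration and are not drawn from the cited references:

```python
# Illustrative sketch only: a minimal ensemble clustering pipeline in the
# shape recited by claim 1. All names, data, and parameters are assumed.

def base_clustering(points, threshold):
    """One toy base clustering: split 1-D points at a threshold
    (stands in for one of the N initial clustering algorithms)."""
    return [0 if p < threshold else 1 for p in points]

def co_association(labelings, n):
    """C[i][j] = fraction of base clusterings placing points i and j
    in the same cluster."""
    m = len(labelings)
    return [[sum(lab[i] == lab[j] for lab in labelings) / m
             for j in range(n)] for i in range(n)]

def to_distance(co):
    """Transform co-association similarities into a distance matrix."""
    return [[1.0 - s for s in row] for row in co]

def density_cluster(dist, eps=0.4, min_pts=2):
    """A minimal density-based grouping over the distance matrix
    (a stand-in for DBSCAN or density peaks)."""
    labels = [-1] * len(dist)
    cluster_id = 0
    for i in range(len(dist)):
        if labels[i] != -1:
            continue
        neighbors = [j for j in range(len(dist)) if dist[i][j] <= eps]
        if len(neighbors) < min_pts:
            continue  # not dense enough; leave as noise (-1)
        for j in neighbors:
            labels[j] = cluster_id
        cluster_id += 1
    return labels

points = [1.0, 1.1, 1.2, 9.0, 9.1, 9.2]  # two obvious groups
labelings = [base_clustering(points, t) for t in (2.0, 3.0, 5.0, 7.0, 8.0)]
dist = to_distance(co_association(labelings, len(points)))
print(density_cluster(dist))  # → [0, 0, 0, 1, 1, 1]
```

Because every base clustering agrees on the two groups, the co-association entries are 1 within a group and 0 across groups, so the density step recovers the two clusters.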
Shivamoggi in view of “Adaptive Clustering” does not explicitly teach the remaining limitations. However, in an analogous art, Koral teaches:
determining a subset of the generated data clusters as anomalous clusters, wherein at least some of the data points in each are anomalous data points (Koral: [0044], the processing system 104 may further perform clustering operations to identify clusters of anomalous network traffic data (e.g., DNS traffic records), and to associate the clusters with particular types of malicious activity or other types of anomalies);
wherein each data point corresponds to properties of a network packet in received network traffic, and each anomalous cluster comprises unknown network traffic (Koral: [0054], To illustrate, a network intelligence database may be maintained wherein certain sources (e.g., IP addresses) have been identified as being associated with particular types of anomalous traffic. The unknown clusters may then be labeled in accordance with the known identities and activities of these sources as derived from the network intelligence database);
further determining for each anomalous cluster of unknown network traffic, whether said anomalous cluster comprises data points associated with a network attack or not (Koral: [0044], the processing system 104 may further perform clustering operations to identify clusters of anomalous network traffic data (e.g., DNS traffic records), and to associate the clusters with particular types of malicious activity or other types of anomalies);
and performing at least one action based on the anomalous clusters wherein said performing the at least one action based on the anomalous clusters comprises dropping packets coming from a same source address as packets (Koral: [0077] At optional step 580, the processing system may apply at least one additional remedial action, wherein the at least one additional remedial action is assigned to the first DNS traffic anomaly type. The at least one additional remedial action may involve blocking DNS traffic from one or more clients associated with the additional DNS traffic records from which the additional input aggregate vector is derived, directing queries from DNS resolver(s) associated with the additional DNS traffic records from which the additional input aggregate vector is derived to a different DNS authoritative server, blocking, dropping, or redirecting additional types of traffic from the client(s) and/or DNS resolver(s) associated with the additional DNS traffic records from which the additional input aggregate vector is derived, and so forth. [0016], sources (e.g., IP addresses) associated with the anomalous network traffic data may be identified and flagged for remedial action. [0017], In addition, sources (e.g., IP addresses) that may be involved in or otherwise associated with the identified anomalous network traffic data may be identified and flagged for remedial action. In one example, the remedial action may further be tailored to the particular type of anomaly) comprising data points of the anomalous clusters determined as network attack clusters. (Koral: [0054], The other clusters 322-324 may then be identified as representing anomalous network traffic data. In one example, the other clusters 322-324 may also be labeled as particular types of anomalies. 
For instance, compressed vector representations that are the samples for clustering may be known to represent input vectors relating the network traffic data from particular sources to particular destinations, etc. To illustrate, a network intelligence database may be maintained wherein certain sources (e.g., IP addresses) have been identified as being associated with particular types of anomalous traffic. The unknown clusters may then be labeled in accordance with the known identities and activities of these sources as derived from the network intelligence database).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to modify Shivamoggi’s method of accessing datasets with data points by applying Koral’s method of determining malicious clusters and performing a mitigating action by dropping malicious traffic, in order to prevent bad actors from blending undetected within the overall Internet traffic and performing malicious acts (Koral: [0002]).
Regarding Claim 15, this claim contains limitations identical to those of claim 1 above, albeit directed to a different statutory category (method). For this reason, the same grounds of rejection are applied to claim 15.
Regarding Claim 16, Shivamoggi teaches:
A non-transitory computer readable medium having stored thereon a set of computer readable instructions that, when executed by at least one processor, cause an apparatus to at least perform (Shivamoggi: Processor 855 may also perform and/or be a means for performing any other operations, methods, or processes described and/or illustrated herein. Memory 860 generally represents any type or form of volatile or non-volatile storage devices or mediums capable of storing data and/or other computer-readable instructions. Examples include, without limitation, random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory device. In certain embodiments computing system 800 may include both a volatile memory unit and a non-volatile storage device):
This claim contains limitations identical to those of claim 1 above, albeit directed to a different statutory category (non-transitory medium). For this reason, the same grounds of rejection are applied to claim 16.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Shivamoggi et al. (U.S. PGPub. No. 2021/0385253 A1) (hereinafter “Shivamoggi”) in view of X. Zhu, B. Fei, D. Liu and W. Bao, “Adaptive Clustering Ensemble Method Based on Uncertain Entropy Decision-Making” (hereinafter “Adaptive Clustering”) and Koral et al. (U.S. PGPub. No. 2020/0112571 A1) (hereinafter “Koral”), and in further view of Muthurajan et al. (U.S. PGPub. No. 2018/0091527 A1) (hereinafter “Muthurajan”) and Strub et al. (U.S. PGPub. No. 2007/0153689 A1) (hereinafter “Strub”).
Regarding Claim 10, Shivamoggi in view of “Adaptive Clustering” and Koral teaches:
The apparatus according to claim 1 (see rejection of claim 1 above),
wherein said determining comprises performing for each anomalous cluster of unknown network traffic (Koral: [0024] In one example, the largest cluster may be automatically labeled as being associated with “normal” network traffic data. One or more other clusters may then be identified as anomalous network traffic data. In one example, the other clusters may also be labeled, e.g., by a network technician, by a subject matter expert, etc),
Shivamoggi in view of “Adaptive Clustering” and Koral does not explicitly teach:
determining an attack type for each data point in an anomalous cluster, wherein the attack type is either a type of malicious network traffic or none for benign network traffic;
determining a number of data points corresponding to each attack type;
However, in an analogous art, Muthurajan teaches:
determining an attack type for each data point in an anomalous cluster (Muthurajan: [0032], The malware feature, and other information included in the malware data in situations where such additional malware information is specified, may be used to determine what type of attack (=attack type) will be emulated and what manner of malicious network traffic will be generated), wherein the attack type is either a type of malicious network traffic or none for benign network traffic (Muthurajan: [0013] In some implementations, the malware data includes a variety of information that enables the computing device 110 to emulate a certain type of attack or malware. For example, the malware data may include a number of network computing devices to be emulated, a number of infected devices, an amount of benign traffic to be emulated, an amount of malicious traffic to be emulated, a transmission pattern for benign traffic, a transmission pattern for malicious traffic, a type of benign traffic, and/or a type of malicious traffic)
determining a number of data points corresponding to each attack type (Muthurajan: [0029], Malware features may indicate or specify a particular type of attack, such as a data exfiltration attack, a crypto-locker attack, a data manipulation attack, a DoS/DDoS attack, and/or a data defacement attack. The malware data may include a variety of other information, such as a number of computing devices to emulate, a number of devices (=number of data points) to be infected, and data relating to the manner in which emulated network traffic is to be transmitted, such as parameters defining a network traffic volume and/or throughput for benign and/or malicious network traffic);
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Shivamoggi in view of “Adaptive Clustering” and Koral by applying the well-known technique, as disclosed by Muthurajan, of identifying particular attack types, the number of devices infected, etc. The motivation is determining the effectiveness of various forms of network security (Muthurajan: [0007]).
The above citation of Shivamoggi in view of “Adaptive Clustering”, Koral and Muthurajan does not explicitly teach:
determining an attack type with a highest number of data points as a majority attack type;
and determining that the anomalous cluster is a network attack cluster in response to the majority attack type being of some other type than none.
However, in an analogous art, Strub teaches:
determining an attack type with a highest number of data points as a majority attack type (Strub: [0048] If the threshold detector 115 detects a condition where a particular threshold is exceeded (=highest number) (or reached), then this indicates that a certain type of attack is potentially underway. The controller 119 reacts to this condition by changing its state to reflect the particular attack that has been detected);
and determining that the anomalous cluster is a network attack cluster in response to the majority attack type being of some other type than none (Strub: [0048] If the threshold detector 115 detects a condition where a particular threshold is exceeded (or reached)(=highest number), then this indicates that a certain type of attack is potentially underway. The controller 119 reacts to this condition by changing its state to reflect the particular attack that has been detected. For example, this might be the detection of a Denial of Service (DoS) attack directed at a certain subnet. The change of state of the controller causes the meter (or monitor) parameters to be altered in such a way as to help isolate the attack characteristics).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Shivamoggi in view of “Adaptive Clustering”, Koral and Muthurajan by applying the well-known technique, as disclosed by Strub, of detecting a certain type of attack when a particular threshold is exceeded (or reached) (=highest number). The motivation is to provide the ability to effectively detect and isolate malicious traffic in the network before its effect is felt by the intended recipients (Strub: [0002]).
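For context, the majority-vote determination recited in claim 10 (count data points per attack type, take the type with the highest count, and flag the cluster unless that type is none) may be sketched as follows. This is an illustrative sketch only; the function name and sample data are assumed for illustration and are not drawn from the cited references:

```python
# Illustrative sketch only: majority attack type per anomalous cluster.
from collections import Counter

def classify_cluster(attack_types):
    """Given the per-data-point attack types of one anomalous cluster
    ('none' marking benign traffic), return the majority attack type and
    whether the cluster is a network attack cluster."""
    counts = Counter(attack_types)          # number of data points per attack type
    majority, _ = counts.most_common(1)[0]  # attack type with the highest count
    return majority, majority != "none"

print(classify_cluster(["dos", "dos", "none", "probe", "dos"]))  # → ('dos', True)
```

A cluster whose majority type is "none" would not be treated as a network attack cluster under this sketch.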
Claims 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Shivamoggi et al. (U.S. PGPub. No. 2021/0385253 A1) (hereinafter “Shivamoggi”) in view of X. Zhu, B. Fei, D. Liu and W. Bao, “Adaptive Clustering Ensemble Method Based on Uncertain Entropy Decision-Making” (hereinafter “Adaptive Clustering”), Koral et al. (U.S. PGPub. No. 2020/0112571 A1) (hereinafter “Koral”), Muthurajan et al. (U.S. PGPub. No. 2018/0091527 A1) (hereinafter “Muthurajan”) and Strub et al. (U.S. PGPub. No. 2007/0153689 A1) (hereinafter “Strub”), and in further view of Lee et al. (U.S. PGPub. No. 2022/0201011 A1) (hereinafter “Lee”).
Regarding Claim 11, Shivamoggi in view of “Adaptive Clustering”, Koral, Muthurajan and Strub teaches:
The apparatus according to claim 10, (see rejection of claim 10 above),
and wherein the definition of an attack type comprises values or value ranges for at least one of the following parameters (Koral: [0032], present disclosure for detecting anomalous domain name system traffic records via an encoder-decoder neural network and/or for identifying anomalous network traffic data via normalized distance-based clustering, as described herein. For instance, although examples of the present disclosure are described primarily in connection with DNS traffic records, in other, further, and different examples, network traffic records may relate to other types of network traffic, such as: server connection request messages at one or more servers of one or more domains, e.g., transmission control protocol (TCP) SYN/ACK messaging, Uniform Datagram Protocol (UDP) messaging, IP packets for streaming video, streaming audio, or general Internet traffic, and so forth. Accordingly, in one example, network traffic data may be gathered and/or provided by server(s) 116 and/or server(s) 118):
o source Internet Protocol, IP, address (Koral: [0016], sources (e.g., IP addresses) associated with the anomalous network traffic data may be identified and flagged for remedial action);
o destination IP address (Koral: [0024] In one example, the largest cluster may be automatically labeled as being associated with “normal” network traffic data. One or more other clusters may then be identified as anomalous network traffic data. In one example, the other clusters may also be labeled, e.g., by a network technician, by a subject matter expert, etc. In one example, the other clusters may be labeled automatically. For instance, compressed vector representations that are the samples for clustering may be known to represent input vectors relating the network traffic data from particular sources to particular destinations, etc);
o IP packet size; o destination Transmission Control Protocol, TCP, port number (Koral: [0031], network traffic records may relate to other types of network traffic, such as: server connection request messages at one or more servers of one or more domains, e.g., transmission control protocol (TCP) SYN/ACK messaging…);
o destination User Datagram Protocol, UDP, port number (Koral: [0031], network traffic records may relate to other types of network traffic, such as: server connection request messages at one or more servers of one or more domains,….Uniform Datagram Protocol (UDP) messaging, IP packets for streaming video, streaming audio, or general Internet traffic, and so forth);
or o inter-packet interval of IP packets received from the same source IP address (Koral: [0048], The input vector 201 may have a plurality of features, e.g., nine (9) features, 50 features, 60 features, 100 features, etc., which may be aggregated from DNS traffic records in a network. Example types of features are indicated in the feature key 205 and may include: a DNS resolver IP address for which the DNS traffic records are aggregated, a time block from which the DNS traffic records are aggregated, a number of queries (e.g., in thousands) processed by the DNS resolver in the time period, a number of DNS authoritative servers contacted by the DNS resolver in the time period, a number of unique clients submitted queries to the DNS resolver in the time period, a number of distinct top level domains queried in the time period, a number of distinct second level domains queried in the time period, a number of DNS resolvers serviced by the top DNS authoritative server contacted by the DNS resolver in the time period, and a number of queries submitted to the DNS authoritative server by the DNS resolver in the time period)
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Shivamoggi in view of “Adaptive Clustering” by applying the well-known technique, as disclosed by Koral, of identifying one or more clusters as anomalous and associated with a probe attack. The motivation is to prevent bad actors from accessing the system (Koral: [0001]).
The above citation of Shivamoggi in view of “Adaptive Clustering”, Koral, Muthurajan and Strub does not explicitly teach:
wherein at least one definition of an attack type is pre-defined and stored to the apparatus; wherein determining an attack type for each data point in an anomalous cluster comprises comparing a data point to the at least one stored definition of an attack type; wherein an attack type other than none is determined in response to finding a matching comparison between the data point and a definition of an attack type; and wherein an attack type of none is determined in response to not finding a matching comparison between the data point and any of the stored definitions of an attack type.
However, in an analogous art, Lee teaches:
wherein at least one definition of an attack type is pre-defined and stored to the apparatus (Lee: [0004] Most attack detection systems (or intrusion detection systems) proposed so far detect exploit attacks based on predefined detection rules (=predefined attack types));
wherein determining an attack type for each data point in an anomalous cluster comprises comparing a data point to the at least one stored definition of an attack type (Lee: [0093], one or more vulnerability information corresponding to one or more vulnerabilities of the device may be determined, and the first attack type may be determined based on vulnerability information matching the extracted keyword among the one or more vulnerability information. That is, as described above, by linking and collecting various types of information, it may be possible to easily search for vulnerability information matching a keyword);
wherein an attack type other than none is determined in response to finding a matching comparison between the data point and a definition of an attack type (Lee: [0094] In some other embodiments related to step S200, even in a situation in which the above-described various information may not be linked and collected, vulnerability information matching a keyword among a plurality of vulnerability information collected from the vulnerability collection channel may be determined, and the first attack type may be determined based on the determined vulnerability information);
wherein an attack type of none is determined in response to not finding a matching comparison between the data point and any of the stored definitions of an attack type (Lee: [0074] Next, the detection unit 240 may detect an exploit attack based on the ruleset 42 set by the management unit 220. The detection unit 240 may be a type of detection engine, and may monitor the operation of devices belonging to the domain or network traffic, and take appropriate actions (e.g., allow, block, etc.) on the operation of the device or network traffic based on the ruleset 42).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Shivamoggi in view of “Adaptive Clustering”, Koral, Muthurajan and Strub by applying the well-known technique, as disclosed by Lee, of determining the first attack type based on predefined detection rules. The motivation is to improve detection accuracy for an exploit attack (Lee: [0002]).
Regarding Claim 12, Shivamoggi in view of “Adaptive Clustering”, Koral, Muthurajan, Strub and Lee teaches:
The apparatus according to claim 11, (see rejection of claim 11 above),
wherein the parameters in the definition of an attack type are provided in an executable script (Lee: [0004] Most attack detection systems (or intrusion detection systems) proposed so far detect exploit attacks based on predefined detection rules (=executable script)), and wherein comparing a data point to the definition of an attack type is performed by executing the script (Lee: [0093] one or more vulnerability information corresponding to one or more vulnerabilities of the device may be determined, and the first attack type may be determined based on vulnerability information matching the extracted keyword among the one or more vulnerability information. That is, as described above, by linking and collecting various types of information, it may be possible to easily search for vulnerability information matching a keyword. [0094] In some other embodiments related to step S200, even in a situation in which the above-described various information may not be linked and collected, vulnerability information matching a keyword among a plurality of vulnerability information collected from the vulnerability collection channel may be determined, and the first attack type may be determined based on the determined vulnerability information).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Shivamoggi in view of “Adaptive Clustering”, Koral, Muthurajan and Strub by applying the well-known technique, as disclosed by Lee, of determining the first attack type based on predefined detection rules. The motivation is to improve detection accuracy for an exploit attack (Lee: [0002]).
Regarding Claim 13, Shivamoggi in view of “Adaptive Clustering”, Koral, Muthurajan, Strub and Lee teaches:
The apparatus according to claim 11, (see rejection of claim 11 above),
wherein the definitions of attack types stored to the apparatus are periodically updated by adding new attack types, removing attack types and/or changing the parameters of attack types (Lee: [0069] The storage unit 180 may manage (e.g., storage, inquiry, modification, deletion, etc.) various types of information used in the exploit attack type classification apparatus 10. Various types of information may include, for example, device information, vulnerability information, exploit information, ruleset information, log data, and attack type information (e.g., attack type classification information of an exploit code), but may not be limited thereto. Various types of information may be stored in the storages 181 to 186 managed by the storage unit 180. For efficient information management, the storages 181 to 186 may be implemented by a storage medium converted into DB).
A person having ordinary skill in the art, before the effective filing date of the invention, would have found it obvious to modify Shivamoggi in view of “Adaptive Clustering”, Koral, Muthurajan and Strub by applying the well-known technique, as disclosed by Lee, of modifying and deleting the various types of stored information. The motivation is to improve detection accuracy for an exploit attack (Lee: [0002]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Refer to PTO-892, Notice of References Cited for a listing of analogous art.
Yang et al. (U. S. PGPub. No. 2020/0142763 A1): A computer-implemented method is presented for detecting anomalies in dynamic datasets generated in a cloud computing environment. The method includes monitoring a plurality of cloud servers receiving a plurality of data points, employing a two-level clustering training module to generate micro-clusters from the plurality of data points, each of the micro-clusters representing a set of original data from the plurality of data points, employing a detecting module to detect normal data points, abnormal data points, and unknown data points from the plurality of data points via a detection model, employing an evolving module using a different evolving mechanism for each of the normal, abnormal, and unknown data points to evolve the detection model, and generating a system report displayed on a user interface, the system report summarizing the micro-cluster information.
Bjarnason et al. (U S PGPub. No.2023/0164176 A1): A method for detecting patterns using statistical analysis is provided. The method includes receiving a subset of structured data having a plurality of fields. A plurality of value combinations is generated for the plurality of fields using a statistical combination function. Each combination of the generated plurality of value combinations is stored as a separate entry in a results table. The entry in the results table includes a counter associated with the stored combination. A value of the counter is incremented for every occurrence of the stored combination in the generated plurality of value combinations. The results table is sorted based on the counters' values and based on a number of fields in each combination. One or more entries having highest counter values are identified in the results table.
Saha et al. (U. S. PGPub. No. 2019/0303710 A1): Examples provide a system for detecting anomalies in a dataset. The system includes one or more processors and a memory storing the dataset. The one or more processors are programmed to identify a first set of data points in a cluster, identify a second set of data points outside of the cluster as noisy data points, and determine whether each of the noisy data points is an anomaly by: determining a distance between the noisy data point and other data points in the dataset, ranking the distances between the noisy data point and the other data points, and applying a weight to each of the ranked distances to determine an outlier value for the noisy data point. When the outlier value for the noisy data point exceeds a threshold, the noisy data point is identified as an anomaly, and result is displayed in a user interface.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RUPALI DHAKAD whose telephone number is (571)270-3743. The examiner can normally be reached M-F 8:30-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor can be reached at 5712705143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/R.D./Examiner, Art Unit 2437
/ALEXANDER LAGOR/Supervisory Patent Examiner, Art Unit 2437