Prosecution Insights
Last updated: April 19, 2026
Application No. 18/444,454

TUNING-FREE UNSUPERVISED ANOMALY DETECTION BASED ON DISTANCE TO NEAREST NORMAL POINT

Rejections: §101, §103 (Non-Final OA)

Filed: Feb 16, 2024
Examiner: BLACK, LINH
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: Oracle International Corporation
OA Round: 3 (Non-Final)

Grant Probability: 51% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 5y 1m
Grant Probability with Interview: 62%

Examiner Intelligence

Career Allow Rate: 51% of resolved cases (222 granted / 437 resolved; -4.2% vs TC avg)
Interview Lift: +11.5% for resolved cases with interview (moderate, ~+12% lift)
Avg Prosecution: 5y 1m (typical timeline)
Total Applications: 477 across all art units (40 currently pending)

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 64.0% (+24.0% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 437 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/30/2025 has been entered. Claims 1-24 are pending in the application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 10/30/2025 have been fully considered. Regarding the arguments on pages 9-16 relating to the 35 U.S.C. § 101 rejection that "the claims are in fact not directed towards an abstract idea" and that "the current claims do not fall within this category since the claims include limitations pertaining to an ensemble detection mechanism that comprises multiple anomaly detection mechanisms which is used to detect anomalies", the examiner respectfully disagrees.

In data-driven systems, different types of data anomalies (deviations from standard, expected, or normal patterns) can exist. The deviations from norms are typically predicted based on observations and/or calculations. Hence, data anomaly detection has long been practiced in the technological arts to prevent machine failures, malware infections, fraudulent activities, etc., and combining the results of multiple anomaly detection methods/mechanisms/algorithms is a mentally performable process with pen and paper. Humans can perform anomaly detection manually, especially on complex, small, or context-heavy datasets where domain expertise is important.
Using visual inspection such as graphs or charts, obvious outliers can be identified. With high-volume data, automated machine learning has been applied to detect anomalies effectively in real time. Even if performing this mentally/manually is time consuming, "relying on a computer to perform routine tasks more quickly or more accurately is insufficient to render a claim patent eligible". (Citing Alice, 573 U.S. at 224 ("use of a computer to create electronic records, track multiple transactions, and issue simultaneous instructions" is not an inventive concept)).

The recited generic computer components, including "a system, comprising: a processor; a memory for holding programmable code; and wherein the programmable code includes instructions executable by the processor" and "a computer program product embodied on a computer readable medium, the computer readable medium having stored thereon a sequence of instructions which, when executed by a processor", perform generic computing functions and represent mere instructions to apply the abstract idea on a computer as in MPEP 2106.05(f). Nothing in the claim elements precludes the steps from practically being performed in the mind. If a claim limitation, under its broadest reasonable interpretation, covers the performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

As explained in MPEP 2106.04(I), "even newly discovered or novel judicial exceptions are still exceptions. …" Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 714-15, 112 USPQ2d 1750, 1753-54 (Fed. Cir. 2014). Cf. Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151, 120 USPQ2d 1473, 1483 (Fed. Cir. 2016) ("a new abstract idea is still an abstract idea") (emphasis in original).
Similarly, MPEP 2106.05(a), discussing the improvements consideration, states: "It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements." See also In re TLI Commc'ns LLC Pat. Litig., 823 F.3d 607, 614 (Fed. Cir. 2016) (holding generic computer components insufficient to add an inventive concept to an otherwise abstract idea). Applicant's improvement argument does not establish an improvement to computer-related technology because any improvement lies purely in the abstract idea. The claims are considered to recite entirely mental processes, and the additional elements are generic computing components. Here, the only improvement applicant argues for is found fully in the abstract idea (the judicial exception) alone; that is not an improvement to the functioning of the computer or to computer technology. Steps in claims 1-24 can be performed using human mental evaluation or judgement, and fall within the "Mental Processes" and "Mathematical Concepts" groupings of abstract ideas. Accordingly, the claims recite abstract ideas and are not patent eligible. See MPEP §2106.04(a)(2)(III).

Regarding the applicant's arguments on pages 17-19 relating to the amended limitations "used by a global anomaly detection mechanism of the multiple anomaly detection mechanisms and detection of a cluster anomaly at a second value for the detection parameter used by a cluster anomaly detection mechanism of the multiple anomaly detection mechanisms, and wherein the global anomaly detection mechanism detects isolated data outliers and the cluster anomaly detection mechanism detects a group of anomalous data points that are clustered with each other", please see the new combination of references, with columns and lines cited, below.
Specification, para. 4 states: "there are many other algorithms that detect anomalies in a more complex manner. Some examples are nearest neighbor, clustering, and subspace. These algorithms are all different, but what they have in common is that they all, internally, use a decision function that produces scores, called anomaly scores. These anomaly scores are generated by the algorithm per instance (e.g., row in a database table) of the training or test dataset". Para. 6: "The global anomaly pertains to the type anomaly that corresponds to isolated data outliers. The clustered anomaly pertains to a set of data points that may group with each other, but overall still correspond to anomalies when compared to normal data. The local anomaly pertains to isolated data items that may look similar to and has a close distance to good data, but which is actually anomalous data". Para. 8: "provide an improved approach to implement anomaly detection, where an ensemble detection mechanism is provided. An embodiment is directed to an improvement to the KNN algorithm where scaling is applied to permit efficient detection of multiple categories of anomalies".

Dodson teaches anomaly detection in fig. 1: the exemplary architecture 100 comprises an exemplary anomaly detection and causation detection system 105, and the input source comprises a database or data store that stores pre-obtained data from any of the aforementioned sources. Para. 69: data instances are comprised of a value and a parameter; for example, a data instance related to Memory usage could include {2.0, Gigs}, where 2.0 is the value and Gigs is the parameter. See also para. 76: create feature vectors from the data instances and attributes identified therein; again, a feature is a parameter and/or value pair. Para. 73: it is advantageous to perform clustering in combination with anomaly detection to ensure that actions taken in response to the anomaly detection occur with respect to the correct clusters of entities and/or objects. Para. 79: outlier thresholds are established in order to effectuate filtering and exclusion of outlier data instances; feature vectors are displayed to a user through an interactive graphical user interface, and when outlier thresholds are adjusted, changes to the graphs/plots are visible in real time. Paras. 90-91: the outlier detection methods/mechanisms herein can be applied to many use cases outside of digital security and machine data. Para. 95: using an ensemble of normalized measures of outlier factors that include, but are not limited to, local outlier factor, distance to kth nearest neighbor/kNN, distance to sum distance to k nearest neighbors, and so forth. Paras. 129-130, 140: the clustering and outlier methodologies described herein can be improved through use of scaling; if any given component of f is scaled, say by changing units, the standard deviation is similarly scaled, and therefore the clustering remains unchanged; in practice, a MAD estimate of the standard deviation is used to avoid outliers unduly impacting normalization.

Dodson does teach that the clustering and outlier methodologies described therein can be improved through use of scaling, as shown above; thus, outliers and anomalies are detected and can be adjusted (paras. 79, 129-130). Dodson therefore teaches the argued limitation "performing scaling to adjust a detection parameter, where the scaling is adjusted to perform detection of a global anomaly at a first value for the detection parameter and detection of a cluster anomaly at a second value for the detection parameter". Please see the combination of references cited below relating to the newly amended limitations.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-8 fall within the statutory category of a process. Claims 9-16 fall within the statutory category of an apparatus or system. Claims 17-24 fall within the statutory category of an article of manufacture.

Step 2A, Prong One: the claims recite a judicial exception. Claim 1 recites the steps of "identifying data to be analyzed for anomaly detection; analyzing the data using an ensemble detection mechanism that comprises multiple anomaly detection mechanisms", which are mental processes (an evaluation or judgement) and mathematical concepts, both groupings of abstract ideas. Data anomaly detection has long been practiced in the technological arts to prevent machine failures, malware infections, fraudulent activities, etc., and combining the results of multiple anomaly detection methods/mechanisms/algorithms is a mentally performable process with pen and paper.
The further steps of "performing scaling to adjust a detection parameter, where the scaling is adjusted to perform detection of a global anomaly at a first value for the detection parameter used by a global anomaly detection mechanism of the multiple anomaly detection mechanisms and detection of a cluster anomaly at a second value for the detection parameter used by a cluster anomaly detection mechanism of the multiple anomaly detection mechanisms, and wherein the global anomaly detection mechanism detects isolated data outliers and the cluster anomaly detection mechanism detects a group of anomalous data points that are clustered with each other; and outputting an indication of whether a given data point corresponds to an anomaly", under the broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components, including the recited "a system, comprising: a processor; a memory for holding programmable code; and wherein the programmable code includes instructions executable by the processor" and "a computer program product embodied on a computer readable medium, the computer readable medium having stored thereon a sequence of instructions which, when executed by a processor", which perform generic computing functions and represent mere instructions to apply the abstract idea on a computer as in MPEP 2106.05(f). Nothing in the claim elements precludes the steps from practically being performed in the mind. If a claim limitation, under its broadest reasonable interpretation, covers the performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.
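For orientation, the claimed two-value scheme (one detection-parameter value for isolated global outliers, another for clustered anomalies, with the dependent claims tying the parameter to a generalized kNN k) can be sketched as below. This is an editor's illustrative reading, not the applicant's actual implementation; the score definition, the k values, and the ratio-to-median flagging rule are all assumptions.

```python
# Illustrative sketch: one kNN-based scorer whose detection parameter k is
# adjusted, so a low k targets isolated (global) outliers and a higher k
# targets small anomalous clusters. Not the claimed implementation.
import numpy as np

def knn_scores(data: np.ndarray, k: int) -> np.ndarray:
    """Anomaly score = mean distance to each point's k nearest neighbors."""
    diffs = data[:, None, :] - data[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))   # brute-force pairwise distances
    # Column 0 of each sorted row is the zero distance to itself, so take 1..k.
    nearest = np.sort(dists, axis=1)[:, 1:k + 1]
    return nearest.mean(axis=1)

def detect(data: np.ndarray, k_global: int = 1, k_cluster: int = 8,
           ratio: float = 4.0) -> np.ndarray:
    """Flag a point if either parameterization scores it far above the median."""
    flagged = np.zeros(len(data), dtype=bool)
    for k in (k_global, k_cluster):              # same mechanism, two k values
        s = knn_scores(data, k)
        flagged |= s > ratio * np.median(s)      # assumed flagging rule
    return flagged
```

At k=1 an isolated point far from everything scores high, while members of a tight anomalous clump score low (their nearest neighbor is another clump member); raising k past the clump size forces the clump's distances out to the normal data, so the same mechanism then catches the cluster anomaly.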
The steps of performing scaling and of using multiple anomaly detection mechanisms, under the broadest reasonable interpretation, likewise recite mental processes, since said steps can be done manually in a non-automated fashion and constitute mental processes (an evaluation or judgement) and mathematical concepts, both groupings of abstract ideas. Scaling data has long been an important step in detecting anomalies, as it normalizes all features to a similar range, and combining the results of multiple anomaly detection methods/mechanisms/algorithms to capture different anomalies is a mentally performable process with pen and paper for outputting end results. Accordingly, the claimed limitations recited above are abstract ideas under the mental process grouping. Each claimed step can be performed in the human mind, with the use of a physical aid such as pen and paper, and thus the steps fall within the grouping of abstract ideas, and claim 1 recites an abstract idea. See MPEP §2106.04(a)(2)(III). Even if performing this mentally/manually is time consuming, "relying on a computer to perform routine tasks more quickly or more accurately is insufficient to render a claim patent eligible". (Citing Alice, 573 U.S. at 224 ("use of a computer to create electronic records, track multiple transactions, and issue simultaneous instructions" is not an inventive concept)).

Independent claims 9 and 17 recite limitations of commensurate scope. For the reasons stated above for claim 1, it is determined that claims 9 and 17 likewise recite mental processes and mathematical concepts, both groupings of abstract ideas.

Step 2A, Prong Two: the exception is not integrated into a practical application. The judicial exception is not integrated into a practical application because the additional elements, alone and in combination, do not impose meaningful limits on the judicial exception.
In particular, claims 9 and 17 recite the additional elements "a system, comprising: a processor; a memory for holding programmable code; and wherein the programmable code includes instructions executable by the processor" and "a computer program product embodied on a computer readable medium, the computer readable medium having stored thereon a sequence of instructions which, when executed by a processor". These additional elements are a high-level recitation of generic computer components and represent mere instructions to apply the exception on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application. Viewing the additional limitations together and the claims as a whole, nothing provides integration into a practical application. Thus, claims 1, 9 and 17 are directed to abstract ideas.

Step 2B: "Inventive Concept" or "Significantly More"

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. More particularly, the claims recite generic computer components (e.g., "a system, comprising: a processor; a memory for holding programmable code; and wherein the programmable code includes instructions executable by the processor" and "a computer program product embodied on a computer readable medium, the computer readable medium having stored thereon a sequence of instructions which, when executed by a processor" in claims 9 and 17) performing generic computing functions that are well understood, routine, and conventional (e.g., plotting data, reorganizing data, forecasting data). See Alice, 573 U.S. at 226 ("Nearly every computer will include a 'communications controller' and [a] 'data storage unit' capable of performing the basic calculation, storage, and transmission functions required by the method claims."); In re TLI Commc'ns LLC Pat. Litig., 823 F.3d 607, 614 (Fed. Cir. 2016) (holding generic computer components insufficient to add an inventive concept to an otherwise abstract idea); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014) ("That a computer receives and sends the information over a network--with no further specification--is not even arguably inventive."). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. The claims do not amount to significantly more than the underlying abstract idea.

Claims 2-8, 10-16, and 18-24 add further limitations which are also directed to an abstract idea.
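One of the dependent-claim mechanisms quoted below (a two-dimensional array A of distances to the k nearest neighbors, an array d of mean distances, row scaling of A, and an anomaly score taken from the scaled nearest-neighbor distance) can be sketched as follows. The claim language does not fix the scaling rule, so the reading here (d as the per-point mean of the k nearest distances, rows normalized by the median of d) is an editor's assumption for illustration only.

```python
# Hedged sketch of the scaled-kNN scoring recited in the dependent claims.
# Variable names mirror the claim language; the scaling rule is assumed.
import numpy as np

def scaled_knn_scores(points: np.ndarray, k: int) -> np.ndarray:
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude self-distances
    # A: n x k array; row i holds distances from point i to its k nearest neighbors.
    A = np.sort(dists, axis=1)[:, :k]
    # d: array of mean distances of points to their k nearest neighbors
    # (one plausible reading of the claim's ambiguous phrasing).
    d = A.mean(axis=1)
    # Scale rows of A by a typical mean distance to produce a scaled matrix.
    A_scaled = A / np.median(d)
    # Anomaly score: scaled distance from the nearest neighbor (column 0).
    return A_scaled[:, 0]
```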
These claims recite steps of:

- "wherein the multiple anomaly detection mechanisms comprise a generalized k nearest neighbor mechanism, where the detection parameter that is scaled comprises a parameter k, and a first k value used for the detection of the global anomaly is relatively lower than a second k value used for detection of the clustered anomaly";
- "wherein the multiple anomaly detection mechanisms comprise a generalized k nearest neighbor mechanism, where the detection parameter that is scaled comprises a k parameter, and the k parameter is dynamically selected";
- "the ensemble detection mechanism implements an ensemble process flow that decreases the resources needed to detect multiple different types of anomalies as compared to separate processing flows to detect multiple different types of anomalies, wherein the resources comprise processor or memory resources, and the multiple different types of anomalies comprise two or more of global, local, or cluster anomalies";
- "wherein the multiple anomaly detection mechanisms comprise a mechanism that performs: calculating a distance to a nearest set of k neighbors of n points in a dataset to produce a two-dimensional array A; using A to calculate d corresponding to an array containing mean distances of points to their kth nearest neighbors; performing the scaling to scale rows of A to produce a scaled distance matrix; and generating an anomaly score corresponding to a scaled distance from a nearest neighbor row";
- "determining an index of points in a neighboring cluster; calculating an inverse density of the neighboring cluster; determining a median density of the neighboring cluster; and calculating a maximum scaled distance based upon density, and using the maximum scaled distance to generate the anomaly score for a local anomaly";
- "wherein the multiple anomaly detection mechanisms comprise a generalized k nearest neighbor mechanism which scales up anomaly scores of points near dense clusters"; and
- "wherein the data is stored in rows in a relational database table of a database system and interactions with the database system are implemented at least by executing database commands that cause the database system to perform operations on corresponding data stored in a corresponding relational database table of the database system … score generation for the rows in the relational database table based on at least the combination of the first attribute in the first column and the second attribute in the second column of the relational database table."

The limitations in the dependent claims above, under the broadest reasonable interpretation, recite mental processes and mathematical algorithms/calculations, since said steps can be done manually in a non-automated fashion. Even if performing this mentally/manually is time consuming, "relying on a computer to perform routine tasks more quickly or more accurately is insufficient to render a claim patent eligible". (Citing Alice, 573 U.S. at 224 ("use of a computer to create electronic records, track multiple transactions, and issue simultaneous instructions" is not an inventive concept)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Said claims can be performed using human mental analysis and evaluation, and fall into the abstract ideas of mental processes and mathematical calculations, similar to the independent claims. Because the additional elements do not impose meaningful limitations on the judicial exception, the claims are directed to an abstract idea and are not patent eligible.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 17-24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because they recite the limitation "a computer-readable medium", which appears to encompass intangible media including carrier waves, signals, or transmission media, etc., and thus is directed to non-statutory subject matter. See the open-ended statements in the specification, paras. 67-68: "The term 'computer readable medium' or 'computer usable medium' as used herein refers to any medium that participates in providing instructions to processor 1407 for execution" and "Common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer can read." Applicant is advised to amend the preamble language to recite "a non-transitory computer-readable storage medium" at this time to overcome the 101 issue.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 7-12, 15-20 and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Dodson (US 20180316707) in view of Stapleton et al. (US 10045218).

Specification paras. 4, 6, and 8, quoted in full in the Response to Arguments above, describe the anomaly-score-producing algorithms, the global/clustered/local anomaly types, and the ensemble detection mechanism in which scaling is applied to an improved KNN algorithm.

As per claims 1, 9, 17, Dodson et al.
teaches a method comprising: identifying data to be analyzed for anomaly detection; analyzing the data using an ensemble detection mechanism that comprises multiple anomaly detection mechanisms (fig. 1: the exemplary architecture comprises an exemplary anomaly detection and causation detection system, and the input source comprises a database or data store that stores pre-obtained data from any of the aforementioned sources; paras. 21-22: the data instances are obtained over a period of time and each of the data instances is time stamped; various analyses described herein examine the data instances over all or a portion of the period of time for which the input stream is collected, and anomalies can be detected within this input stream; paras. 90-91: the outlier detection methods/mechanisms herein can be applied to many use cases outside of digital security and machine data; para. 95: using an ensemble of normalized measures of outlier factors that include, but are not limited to, local outlier factor, distance to kth nearest neighbor/kNN, distance to sum distance to k nearest neighbors, and so forth; para. 140);

performing scaling to adjust a detection parameter, where the scaling is adjusted to perform detection of a global anomaly at a first value for the detection parameter used by a global anomaly detection mechanism of the multiple anomaly detection mechanisms (para. 22: the process of counterfactual processing uses reductionism and elimination to isolate principle values and/or corresponding categorical attributes that contribute to the anomaly; para. 69: data instances are comprised of a value and a parameter; for example, a data instance related to Memory usage could include {2.0, Gigs}, where 2.0 is the value and Gigs is the parameter; para. 73: it is advantageous to perform clustering in combination with anomaly detection to ensure that actions taken in response to the anomaly detection occur with respect to the correct clusters of entities and/or objects; para. 76: create feature vectors from the data instances and attributes identified therein; again, a feature is a parameter and/or value pair; para. 79: outlier thresholds are established in order to effectuate filtering and exclusion of outlier data instances; feature vectors are displayed to a user through an interactive graphical user interface, and when outlier thresholds are adjusted, changes to the graphs/plots are visible in real time; paras. 102-105: the anomaly detection and threshold comparison may apply to multi-dimensional attribute sets that are included in time-series data instances; the data instances relate to attributes or parameters of a computing environment, thus the anomalous activity is occurring with respect to a computing environment; paras. 129-130: the clustering and outlier methodologies described herein can be improved through use of scaling; if any given component of f is scaled, say by changing units, the standard deviation is similarly scaled, and therefore the clustering remains unchanged; in practice, a MAD estimate of the standard deviation is used to avoid outliers unduly impacting normalization);

and detection of a cluster anomaly at a second value for the detection parameter used by a cluster anomaly detection mechanism of the multiple anomaly detection mechanisms (para. 2: provide clustering and outlier detection and extraction; paras. 11, 71: the clustering type defines the desired features that are considered when evaluating data instances in a high-order analysis; para. 73: it is advantageous to perform clustering in combination with anomaly detection to ensure that actions taken in response to the anomaly detection occur with respect to the correct clusters of entities and/or objects; paras. 129-130; figs. 5-7: data instance clustering and outlier detection to support anomaly detection in a computing environment; performing outlier detection on each of the feature vectors using an ensemble of normalized measures of outlier factors);

and outputting an indication of whether a given data point corresponds to an anomaly (paras. 62, 127: use a cluster to describe a very small (possibly one) data point which is an extreme outlier; paras. 142-143: the system extracts the dimensions that are most important for labeling a point as an outlier; the system can accomplish this in a greedy fashion by zeroing the effect of each feature in turn on all of the outlier functions and seeing which has the largest resulting decrease in the overall measure of outlier-ness for the point; the output of this would be presented to the user so that they could use it online to detect outliers in real time as new data are observed).

Dodson does not explicitly teach wherein the global anomaly detection mechanism detects isolated data outliers and the cluster anomaly detection mechanism detects a group of anomalous data points that are clustered with each other. Stapleton et al. teaches said limitations at col. 5:23-49: distance-based methods identify outliers as points that lie unusually far away from their neighbors according to a chosen distance metric, e.g., Euclidean distance. These algorithms use k-nearest neighbors (kNN) as a framework algorithm, for example, classifying points as outliers if the fraction of the nearest k neighbors falling within a distance d from the point falls below a specified threshold. All such methods judge a point's "outlierness" based on a global comparison to the training data and thus can miss points that are locally anomalous. Clustering-based methods can somewhat mitigate the limitations of global comparison by identifying outliers as points without strong membership in any one cluster group.
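The distance-based test Stapleton describes, classifying a point as an outlier when the fraction of its k nearest neighbors lying within distance d falls below a threshold, can be sketched as below. The parameter values are illustrative assumptions, not taken from the reference.

```python
# Sketch of a kNN-fraction outlier test in the style Stapleton describes:
# a point is an outlier if too few of its k nearest neighbors fall within
# distance d of it. Parameters are illustrative only.
import numpy as np

def knn_fraction_outliers(points: np.ndarray, k: int, d: float,
                          min_fraction: float) -> np.ndarray:
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                    # exclude self-distances
    nearest_k = np.sort(dists, axis=1)[:, :k]          # k nearest neighbor distances
    frac_within = (nearest_k <= d).mean(axis=1)        # fraction within distance d
    return frac_within < min_fraction                  # True = outlier
```

As the quoted passage notes, this judges "outlierness" by a global comparison, so a point nestled in a small anomalous clump can pass the test; that gap is what the clustering-based methods are said to mitigate.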
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Dodson and the global anomaly detection mechanism of Stapleton in order to effectively utilize a combination of multiple anomaly detection mechanisms for detecting and/or removing extreme points, preventing skewed results, errors, and fraudulent activity, thereby improving data quality and leading to more accurate predictions and business decisions.

As per claims 2, 10, 18, Dodson teaches wherein the multiple anomaly detection mechanisms comprise a generalized k nearest neighbor mechanism, where the detection parameter that is scaled comprises a k parameter (para. 95: using an ensemble of normalized measures of outlier factors that include, but are not limited to, local outlier factor, distance to kth nearest neighbor, sum of distances to k nearest neighbors, and so forth; para. 129-130: the clustering and outlier methodologies described herein can be improved through use of scaling. If any given component of f is scaled, say by changing units, the standard deviation is similarly scaled, and therefore the clustering remains unchanged; para. 140: ensemble of measures of the degree to which each vector is an outlier; these include "distance to kth nearest neighbor", "total distance to k nearest neighbors", and "local outlier factor". These will collectively be denoted as the outlier functions); and a first k value used for the detection of the global anomaly is relatively lower than a second k value used for detection of the clustered anomaly (para. 46: ask whether the anomaly scores are lower when a portion of the data instances with specific categorical attributes are removed and the remaining data instances are rescored. If yes, the specific categorical attributes whose data instances were removed likely contributed to the discrepant anomaly score; para. 59, 61: efficiently subtracting components from a behavioral profile (e.g., analysis of the input stream) until the component contributing to the unusual behavior is isolated and located). Stapleton also teaches at col. 5:23-41: distance-based methods identify outliers as points that lie unusually far away from their neighbors according to a chosen distance metric, e.g., Euclidean distance. These algorithms use k-nearest neighbors (kNN) as a framework algorithm, for example, classifying points as outliers if the fraction of the nearest k neighbors falling within a distance d from the point falls below a specified threshold.

As per claims 3, 11, 19, Dodson teaches wherein the multiple anomaly detection mechanisms comprise a generalized k nearest neighbor mechanism, where the detection parameter that is scaled comprises a k parameter (para. 18, 95-96: a collection of multiple affine random projections of the feature vectors as well as a step 608 of performing outlier detection on each of the projected feature vectors. The step includes using an ensemble of normalized measures of outlier factors that include, but are not limited to, local outlier factor, distance to kth nearest neighbor, sum of distances to k nearest neighbors, and so forth; para. 129: the clustering and outlier methodologies described herein can be improved through use of scaling), the k parameter is dynamically selected (para. 94-95: automatically normalizing multi-dimensional feature vectors corresponding to the data instances using linear functions of the values within each dimension; the step includes using an ensemble of normalized measures of outlier factors that include, but are not limited to, local outlier factor, distance to kth nearest neighbor, sum of distances to k nearest neighbors, and so forth; para. 56, 140).
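The claimed relationship between k values (a relatively lower k detects the global/isolated anomaly, a higher k the clustered anomaly) can be illustrated with the "distance to kth nearest neighbor" factor cited from Dodson. A minimal sketch with hypothetical data; nothing here is taken from the references' actual code:

```python
import math

def kth_neighbor_distance(points, k):
    """Score each point by the distance to its k-th nearest neighbor."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])  # k-th nearest, 1-indexed
    return scores

# Normal data along a line, plus a pair of clustered anomalies far away.
normal = [[i * 0.1, 0.0] for i in range(10)]
clustered_pair = [[5.0, 5.0], [5.1, 5.0]]
data = normal + clustered_pair

# With k=1 the two clustered anomalies shield each other: each one's
# nearest neighbor is its partner, so both scores stay small (about 0.1).
print(kth_neighbor_distance(data, 1)[-2:])
# With k=2 each anomaly must reach past its partner to the normal data,
# so both receive large scores (about 6.5).
print(kth_neighbor_distance(data, 2)[-2:])
```

An isolated outlier is caught already at k=1, but a clustered group of m anomalies stays hidden until k exceeds m-1, which is why a lower k suffices for the global anomaly.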
As per claims 4, 12, 20, Dodson teaches wherein the ensemble detection mechanism implements an ensemble process flow that decreases the resources needed to detect multiple different types of anomalies as compared to separate processing flows to detect multiple different types of anomalies (fig. 6, item 608: performing outlier detection on each of the feature vectors using an ensemble of normalized measures of outlier factors; para. 21, 31-32: the principle values (including network traffic volume, memory access and/or usage, processor usage rates, file transfer, file access, device access, and so forth) selected for the data instances can be user-selected or user-defined, or can be based on prior knowledge, such as prior instances of anomalous network activity. For example, if prior anomalies in increased CPU usage in a cloud were linked to malicious behavior, the principle values could include CPU usage aspects; para. 142: the system extracts the dimensions that are most important for labeling a point as an outlier. The system can accomplish this in a greedy fashion by zeroing the effect of each feature in turn on all of the outlier functions and seeing which has the largest resulting decrease in the overall measure of outlier-ness for the point), wherein the resources comprise processor or memory resources, and the multiple different types of anomalies comprise two or more of global, local, or cluster anomalies (para. 22: when an anomaly is detected, a cause or causes of the anomaly are located through a process of counterfactual processing. An exemplary process of counterfactual processing uses reductionism and elimination to isolate principle values and/or corresponding categorical attributes that contribute to the anomaly; para. 51: a global anomaly or isolated data outliers; para. 56: an anomaly is something which should occur rarely based on historical data, so has a score greater than a fixed threshold. The score may therefore amount to a dynamic threshold since it is based on the data characteristics. The system 105 separately and precisely controls the rate at which the system 105 generates alerts at a specific severity based on the anomaly score, i.e., the system 105 does not allow this to exceed (over a very long time frame, although it can exceed it for shorter time frames) more than a certain value. Thus, the global/isolated anomaly which had been reduced or eliminated would score lower than, e.g., a detected anomaly such as a spike in traffic between the computing system and a foreign server).

As per claims 7, 15, 23, Dodson teaches wherein the multiple anomaly detection mechanisms comprise a generalized k nearest neighbor mechanism which scales up anomaly scores of points near dense clusters (para. 56, 73: it is advantageous to perform clustering in combination with anomaly detection to ensure that actions taken in response to the anomaly detection occur with respect to the correct clusters of entities and/or objects; para. 121, 129-130: the clustering and outlier methodologies described herein can be improved through use of scaling. If any given component of f is scaled, say by changing units, the standard deviation is similarly scaled, and therefore the clustering remains unchanged. In practice, a MAD estimate of the standard deviation is used to avoid outliers unduly impacting normalization; para. 102-105: the anomaly detection and threshold comparison may apply to multi-dimensional attribute sets that are included in time-series data instances. The data instances relate to attributes or parameters of a computing environment. Thus, the anomalous activity is occurring with respect to a computing environment; para. 51: a global anomaly or isolated data outliers).

Claim(s) 5, 13, 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dodson (US 20180316707) in view of Stapleton et al.
(US 10045218) and further in view of El-Moussa (US 20200053104).

As per claims 5, 13, 21, Dodson teaches wherein the multiple anomaly detection mechanisms comprise a mechanism that performs: calculating a distance to a nearest set of k neighbors of n points in a dataset and generating an anomaly score corresponding to a scaled distance (para. 21-22: the data instances are obtained over a period of time and each of the data instances is time stamped. Various analyses described herein examine the data instances over all or a portion of the period of time for which the input stream is collected. Anomalies can be detected within this input stream; para. 81, 95: using an ensemble of normalized measures of outlier factors that include, but are not limited to, local outlier factor, distance to kth nearest neighbor, sum of distances to k nearest neighbors, and so forth; para. 129-130: the clustering and outlier methodologies described herein can be improved through use of scaling. If any given component of f is scaled, say by changing units, the standard deviation is similarly scaled, and therefore the clustering remains unchanged. In practice, a MAD estimate of the standard deviation is used to avoid outliers unduly impacting normalization; fig. 2, item 210: generating an anomaly score for each of the data instances over continuous time intervals). Dodson and Stapleton do not explicitly teach to produce a two-dimensional array A; using A to calculate d corresponding to an array containing mean distances of points to their kth nearest neighbors; performing the scaling to scale rows of A to produce a scaled distance matrix; from a nearest neighbor row. El-Moussa (US 20200053104) teaches to produce a two-dimensional array A; using A to calculate d corresponding to an array containing mean distances of points to their kth nearest neighbors; performing the scaling to scale rows of A to produce a scaled distance matrix; from a nearest neighbor row (para. 82: the entropy data structure 656 is replaced by or supplemented with a set of Fourier transform coefficients or coefficient indicators such as: an array of coefficients; an array of coefficient ranges; an array of coefficient high, low, average, median, mean or mode values; an array of coefficients, each coefficient being associated with one or more deviation values or proportions; and the like; para. 117).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dodson and Stapleton to include the density-based clustering of El-Moussa to effectively determine distances of points in order to isolate/reduce the anomaly of Dodson and Stapleton.

Claim(s) 6, 14, 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dodson (US 20180316707) in view of Stapleton et al. (US 10045218) and further in view of El-Moussa (US 20200053104) and Azar (US 20140314301).

As per claims 6, 14, 22, Dodson teaches determining a median density of the neighboring cluster (para. 56: the system 105 characterizes historical deviations using a density function, i.e., a chance f(x) of seeing a deviation x in the value of the set function); calculating a maximum scaled distance based upon density, and using the maximum scaled distance to generate the anomaly score for a local anomaly (para. 91: process of point outlier detection with respect to multi-dimensional attribute sets. In general, the process includes extracting outliers in data instances using distance-based methods. The data labeling, i.e., as outlier (or inlier, being not an outlier) thus generated, is then automatically mapped onto a set of rules which can be used to identify outliers (or inliers); para. 129: the clustering and outlier methodologies described herein can be improved through use of scaling); determining an index of points in a neighboring cluster (para.
137: the resulting vectors indexed into storage). Dodson and Stapleton do not explicitly teach an index of points in a neighboring cluster; calculating an inverse density of the neighboring cluster. El-Moussa (US 20200053104) teaches calculating an inverse density of the neighboring cluster; determining a median density of the neighboring cluster; calculating a maximum scaled distance based upon density, and using the maximum scaled distance to generate the anomaly score for a local anomaly (para. 100: inverse distance metrics, such as similarity measures based on Euclidean distance, employ clustering algorithms such as, inter alia, k-means algorithms, distribution-based clustering algorithms and/or density-based clustering algorithms to identify clusters of entropy measures among all entropy measures for a window. Such clustering algorithms can be adapted to identify windows having the most tightly clustered entropy measures as windows having the most consistently similar entropy measures; a maximum, average or most frequent deviation from a central, average, mean or median entropy measure can be used as a measure of the degree or consistency of similarity of all entropy measures in a cluster for a window). An inverse density is taught at para. 14: the final density map or density maps are, when the mixing matrix is well-conditioned, derived by applying a pseudo-inverse of the mixing matrix to the optical density data, and, when the mixing matrix is medially-conditioned, by applying a piece-wise pseudo-inverse of the mixing matrix to the optical density data.

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dodson and Stapleton to include the density-based clustering of El-Moussa to effectively determine entropy measures in order to isolate/reduce the anomaly of Dodson.
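Read together, the claim 5 and claim 6 limitations resemble a density-normalized nearest-neighbor score: a two-dimensional distance array A, an array d of mean distances to the k nearest neighbors, and row scaling against the density of the neighboring cluster. The sketch below is a hypothetical reconstruction of that flow for the reader; the claims and cited references do not recite this exact code:

```python
import math
from statistics import median

def local_scaled_scores(points, k=2):
    n = len(points)
    # Two-dimensional array A of pairwise Euclidean distances.
    A = [[math.dist(p, q) for q in points] for p in points]
    # Indices of each point's k nearest neighbors (index 0 is the point itself).
    nbrs = [sorted(range(n), key=lambda j: A[i][j])[1:k + 1] for i in range(n)]
    # d: mean distance from each point to its k nearest neighbors.
    d = [sum(A[i][j] for j in nbrs[i]) / k for i in range(n)]
    # Scale each point's d by the median d of its own neighborhood, so the
    # score reflects the local cluster's density rather than a global cutoff.
    return [d[i] / median(d[j] for j in nbrs[i]) for i in range(n)]

# A dense cluster, a sparse cluster, and one point 0.47 away from the dense
# cluster. Globally that gap is smaller than the sparse cluster's 1.0
# spacing, but relative to its local neighborhood it is anomalous.
dense = [[0.00, 0], [0.01, 0], [0.02, 0], [0.03, 0]]
sparse = [[10.0, 0], [11.0, 0], [12.0, 0], [13.0, 0]]
points = dense + sparse + [[0.5, 0]]
scores = local_scaled_scores(points)
print(scores.index(max(scores)))  # 8: the off-cluster point scores highest
```

Dividing by the neighborhood's median distance is equivalent to multiplying by an inverse density, loosely mirroring the claimed "inverse density of the neighboring cluster."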
Even if Dodson, Stapleton, and El-Moussa do not explicitly teach a median of the density, Azar (US 20140314301) teaches at para. 125-126: the optimal stain/tissue combination is that which maximizes the Fisher criterion or the Mahalanobis distance and therefore corresponds to clusters that are as compact and distant apart as possible, or maximizes the Rand index or the F-measure, or other cluster validation measures, or minimizes the statistical classification error; absorption is measured by a representative value in each density map, where stain is present. One such representative value of the absorption for a stain/tissue combination is the median of the part of the density map where stain is present. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dodson, Stapleton, and El-Moussa to include the median of density of Azar to effectively identify, isolate and/or reduce the anomaly of Dodson.

Claim(s) 8, 16, 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dodson (US 20180316707) in view of Stapleton et al. (US 10045218) and further in view of Verma et al. (US 20190132224).

As per claims 8, 16, 24, Dodson teaches at para. 24-25: the input source 110 comprises a database or data store that stores pre-obtained data from any of the aforementioned sources; para. 43: if removal does reduce or remove the anomaly, it can be determined that the object of the computing environment responsible for producing the removed categorical attributes is likely a source (could be one of many) for the anomaly; para. 137: the resulting vectors indexed into storage. Dodson and Stapleton do not explicitly teach a relational database. Verma et al.
teaches wherein the data is stored in rows in a relational database table of a database system and interactions with the database system are implemented at least by executing database commands that cause the database system to perform operations on corresponding data stored in a corresponding relational database table of the database system (para. 65-68: the one or more data sets may include relational data, and the detection device 110 may construct a relational table. A pruning rule may be configured to, for each node, keep only the N nearest neighbors with respect to a similarity measure. Each vertex's neighbors may have a particular rank, and the first pruning rule may retain the top-N ranked neighbors and filter (or remove) all neighbors whose rank is lower than N. In aspects, the particular ranks may be determined using a row_number() function, such as the row_number() function of the Spark SQL stack; para. 166), the ensemble detection mechanism interacts with the database system to perform: analysis of the data stored in the rows in the relational database table of the database system based on at least a first attribute stored in a first column and a second attribute stored in a second column of the relational database table (para. 39: by giving a situation-specific context to what an outlier is, and recording their behaviour, the disclosed outlier network activity analysis techniques may be applied to a wide range of situations. In these situations, behaviour modeling may be performed using an ensemble of graph analytics techniques, described in more detail below, which may facilitate improved outlier detection and identification across a broad range of situations and use cases; para. 145: in the table view 1620, different columns may be associated with different data points, such as network metrics, user activity information, or other information that may be associated with driving the outlier behavior. In aspects, the underlying values may be presented as normalized values so that the different data points are presented using a similar scale), score generation for the rows in the relational database table of the database system based on at least a first attribute stored in a first column and a second attribute stored in a second column of the relational database table (para. 61: the detection device utilizes a graph-centric approach to model and explore interactions among a network or group of customers by evaluating their holistic behavior. The one or more data sets may include raw relational data, and the detection device may be configured to convert the raw relational data into a graph structure based on the set of features. For example, the set of features may include attributes that have been selected based on an analysis of the particular use case for which the outlier network activity analysis is being performed; para. 73: the attribute prediction module may be configured to determine a set of node rankings based on the analyzing and then assign each node of the plurality of nodes to one of a plurality of classes based on the set of node rankings. For example, given a semi-labeled network model with few labeled legitimate and abnormal nodes and many unknown nodes, the attribute prediction module 130 may use a collective inference procedure to infer a set of class labels and scores for the unknown nodes by taking into account the fact that inferences about nodes can mutually affect one another; para. 90).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Dodson and Stapleton to include the storage of and interaction with data in a relational database with attributes/columns and tables, as taught by Verma, to effectively store, interact with, and/or manipulate datasets in order to identify and isolate/reduce the anomaly(ies) of Dodson.
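The claim 8 pattern (rows in a relational table, with analysis and score generation driven by database commands over a first and second attribute column) can be illustrated with an in-memory SQLite table. The schema, column names, and distance-from-mean scoring rule here are illustrative assumptions, not the applicant's or Verma's actual implementation:

```python
import sqlite3

# Hypothetical table with two attribute columns and a score column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (id INTEGER PRIMARY KEY, cpu REAL, mem REAL, score REAL)")
conn.executemany(
    "INSERT INTO metrics (id, cpu, mem) VALUES (?, ?, ?)",
    [(1, 0.20, 0.30), (2, 0.25, 0.35), (3, 0.90, 0.95)],
)

# Analysis: a database command aggregates the first attribute (column cpu)
# and the second attribute (column mem).
cpu_avg, mem_avg = conn.execute("SELECT AVG(cpu), AVG(mem) FROM metrics").fetchone()

# Score generation: compute a per-row score from both attributes and write
# it back with an UPDATE command.
for row_id, cpu, mem in conn.execute("SELECT id, cpu, mem FROM metrics").fetchall():
    score = ((cpu - cpu_avg) ** 2 + (mem - mem_avg) ** 2) ** 0.5
    conn.execute("UPDATE metrics SET score = ? WHERE id = ?", (score, row_id))

top_id = conn.execute("SELECT id FROM metrics ORDER BY score DESC LIMIT 1").fetchone()[0]
print(top_id)  # 3: the row farthest from the per-column means
```

The point of the sketch is only the interaction shape the claim recites: SELECT commands pull the two attribute columns, and UPDATE commands persist the generated scores in the same relational table.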
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Abousleman et al. (US 20170277966) teaches at para. 33: local anomalies, global anomaly; para. 60-62: MediaInArray(clusters). Cella et al. (US 20230083724) teaches at para. 70: graph clustering analysis for anomaly or fraud detection; para. 518: anomaly detection. Kallo (US 11201876) teaches at col. 23, line 65 to col. 24, line 54: employ clustering algorithms such as, inter alia, k-means algorithms, distribution-based clustering algorithms and/or density-based clustering algorithms to identify clusters of CFD measures among all CFD measures for a window. A maximum, average or most frequent deviation from a central, average, mean or median CFD measure can be used as a measure of the degree or consistency of similarity of all CFD measures in a cluster for a window.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINH BLACK, whose telephone number is (571) 272-4106. The examiner can normally be reached 9AM-5PM EST, M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LINH BLACK/
Examiner, Art Unit 2163
1/9/2026

/TONY MAHMOUDI/
Supervisory Patent Examiner, Art Unit 2163

Prosecution Timeline

Feb 16, 2024: Application Filed
Jan 11, 2025: Non-Final Rejection — §101, §103
Apr 16, 2025: Response Filed
Jul 26, 2025: Final Rejection — §101, §103
Oct 21, 2025: Applicant Interview (Telephonic)
Oct 21, 2025: Examiner Interview Summary
Oct 30, 2025: Request for Continued Examination
Nov 05, 2025: Response after Non-Final Action
Feb 21, 2026: Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602376: SYSTEMS AND METHODS FOR DATA CURATION IN A DOCUMENT PROCESSING SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12530339: DISTRIBUTED PLATFORM FOR COMPUTATION AND TRUSTED VALIDATION (granted Jan 20, 2026; 2y 5m to grant)
Patent 12468835: SYSTEM AND METHOD FOR SESSION-AWARE DATASTORE FOR THE EDGE (granted Nov 11, 2025; 2y 5m to grant)
Patent 12461923: SUITABILITY METRICS BASED ON ENVIRONMENTAL SENSOR DATA (granted Nov 04, 2025; 2y 5m to grant)
Patent 12450239: METHODS AND APPARATUS FOR IMPROVING SEARCH RETRIEVAL (granted Oct 21, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 51%
With Interview: 62% (+11.5%)
Median Time to Grant: 5y 1m
PTA Risk: High
Based on 437 resolved cases by this examiner. Grant probability derived from career allow rate.
