Prosecution Insights
Last updated: April 19, 2026
Application No. 17/825,788

Unsupervised Anomaly Detection With Self-Trained Classification

Final Rejection: §101, §103
Filed: May 26, 2022
Examiner: KAPOOR, DEVAN
Art Unit: 2126
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 11% (At Risk)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 3m
Grant Probability With Interview: 28%

Examiner Intelligence

Career Allow Rate: 11% (1 granted / 9 resolved; -43.9% vs TC avg)
Interview Lift: strong, +16.7% for resolved cases with an interview
Typical Timeline: 3y 3m average prosecution; 33 applications currently pending
Career History: 42 total applications across all art units

Statute-Specific Performance

§101: 38.1% (-1.9% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 9 resolved cases.

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

This action is responsive to the application filed on 12/08/2025. Claims 1, 3-11, and 13-22 are pending and have been examined. This action is Final.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.

Response to Arguments

Argument 1: The applicant argues that the 101 rejection should be withdrawn because the amended independent claims now require receiving, from one or more devices over a network, unlabeled training data comprising a plurality of training examples, which Applicant contends cannot be performed in the human mind using evaluation and observation as previously characterized. The applicant maintains that this network-based data reception and machine learning training framework removes the claims from being directed to a mental process, and that the claims are therefore not directed to an abstract idea or, alternatively, integrate any abstract idea into a practical application. The applicant also requests consideration of newly added claims 21 and 22 under 101, asserting that the recited specific data types, such as image, video, audio, and structured data, as well as specific anomaly detection use cases, such as manufacturing defects, fraudulent activity, security breaches, patient data patterns, and improper cloud usage, further demonstrate technological application rather than abstract subject matter.

Examiner Response to Argument 1: The examiner has considered the argument set forth above but is not persuaded, because the amended claims remain directed to an abstract idea, namely mathematical concepts and data classification based on model outputs: applying one or more machine learning models to generate anomaly scores or classification outcomes, selecting a refined dataset based on those outcomes, and training a model using the refined dataset are mathematical relationships and calculations used to analyze and categorize data. The additional limitation of receiving unlabeled training data from one or more devices over a network does not change the character of the claimed invention; receiving data and providing an anomaly indication are insignificant extra-solution activities, and transmitting or receiving data over a network is a well-understood, routine, and conventional computer function that does not integrate the judicial exception into a practical application or provide significantly more. Further, the claims recite generic processors performing the above mathematical concepts at a high level of generality, without any specific improvement to computer technology or another technical field, and therefore amount to no more than instructions to apply the abstract idea on a computer. With respect to newly added claims 21 and 22, reciting example data types such as image, video, audio, and structured data, and reciting example use cases such as manufacturing defects, fraudulent activity, security breaches, anomalous patient data patterns, and improper cloud usage, merely limits the abstract idea to particular fields of use or environments and recites intended applications and results, which does not integrate the judicial exception into a practical application or add significantly more. Accordingly, the rejection of the amended claims under 35 U.S.C. 101 is maintained.
Argument 2: The applicant argues that the 103 rejection over Pang in view of Xiao in view of Glassman should be withdrawn because the amended independent claims now require that the unlabeled training data comprises anomalous and non-anomalous training examples and that there are fewer anomalous training examples than non-anomalous training examples. The applicant contends that even if Pang is interpreted as disclosing unlabeled data that may include normal and anomalous samples, Pang does not teach or suggest the specific quantitative relationship requiring fewer anomalous examples than non-anomalous examples. The applicant further argues that none of the additional references relied upon in the rejection remedy this alleged deficiency. Based on this limitation, the applicant asserts that the independent claims are not rendered obvious by the cited combination and that the 103 rejection should therefore be withdrawn.

Examiner Response to Argument 2: The examiner has considered the argument set forth above but is not persuaded, because the applied combination of Pang in view of Xiao and further in view of Glassman continues to teach or render obvious the amended limitation that the unlabeled training data comprises anomalous and non-anomalous training examples and that there are fewer anomalous training examples than non-anomalous training examples. In particular, Pang expressly teaches an unlabeled set of training examples with no class label information while nonetheless distinguishing anomalous versus normal frames for purposes of anomaly scoring, for example assigning higher anomaly scores when one frame is anomalous and another is normal, which supports that the unlabeled data set includes both anomalous and non-anomalous examples. While Pang may not expressly recite a numeric inequality for the relative quantities of anomalous versus non-anomalous examples, Xiao teaches that anomalies are rare in collected data, leading to class imbalance, which the examiner interprets to be the same as there being fewer anomalous training examples than non-anomalous training examples, because both are directed to anomalous examples comprising a minority class relative to non-anomalous examples in anomaly detection datasets. Accordingly, Xiao remedies the alleged deficiency identified by applicant, and the combination would have been obvious to a person of ordinary skill in the art: Xiao provides an ensemble-based anomaly detection framework and expressly characterizes the rarity and imbalance of anomalies in collected data, such that incorporating this known dataset characteristic and ensemble categorization into Pang’s unlabeled iterative self-training anomaly detection framework predictably improves robustness and generalization, while Glassman provides an explicit processor-based system implementation for anomaly analysis. Therefore, the rejection under 35 U.S.C. 103 is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-11, and 13-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1.
Step 2A Prong 1: “categorize, using a plurality of first machine learning models, each of the training examples as an anomalous training example or non-anomalous training example;” -- The limitation is directed to categorizing training examples as anomalous or non-anomalous, a process that can be performed in the human mind using evaluation and observation; the limitation is therefore directed to a mental process. The use of the plurality of first machine learning models is discussed at Prong 2.

“generate a refined set of training data including the training examples categorized as non-anomalous training examples;” -- The limitation is directed to generating a set of training data that includes the examples categorized as non-anomalous, a process that can be performed in the human mind using evaluation, observation, and judgment, with the aid of pen and paper; the limitation is therefore directed to a mental process.

Step 2A Prong 2 and Step 2B: “A system for anomaly detection, comprising one or more processors, wherein the one or more processors are configured to:…train a second machine learning model, using the refined set of training data” -- The limitation recites a system comprising one or more processors configured to train another model using a set of data. The limitation amounts to no more than mere instructions to apply the abstract idea on a computer; it does not integrate the judicial exception into a practical application, nor provide significantly more. Furthermore, the models from Prong 1 are recited at a high level of generality and amount to mere instructions to perform the abstract idea (the categorization) on a computer, and thus likewise cannot integrate the exception into a practical application or provide significantly more (see MPEP 2106.05(f)).

“receive, from one or more devices over a network, unlabeled training data comprising a plurality of training examples,…to receive input data and to generate output data indicating whether the input data is anomalous or non-anomalous.” -- The limitation recites receiving gathered, unlabeled data comprising training examples, and receiving input data and generating output data indicating whether the data is anomalous or non-anomalous. The limitation is directed to insignificant extra-solution activity that cannot integrate the exception into a practical application (see MPEP 2106.05(g)). Furthermore, sending and receiving data over a network is a well-understood, routine, and conventional (WURC) activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)).

“wherein the unlabeled training data comprises one or more anomalous training examples and one or more non-anomalous training examples and there are fewer anomalous training examples than non-anomalous training examples;” -- The limitation recites that the unlabeled training data comprises anomalous and non-anomalous training examples, with fewer anomalous than non-anomalous examples. The limitation amounts to no more than mere instructions to apply the abstract idea on a computer; it does not integrate the judicial exception into a practical application, nor provide significantly more (see MPEP 2106.05(f)).

Thus, claim 1 is not patent eligible.
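As an editorial aside for readers tracing the quoted limitations, the following is a minimal sketch of the pipeline the claim recites, as the examiner parses it: a plurality of first models votes on each unlabeled example, the non-anomalous examples form the refined set, and a second model trained on that set flags new inputs. The numpy stand-ins and all names are hypothetical illustrations, not the applicant's disclosed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for unlabeled training data "received over a network": a
# non-anomalous majority near the origin plus a small anomalous minority.
normal = rng.normal(0.0, 1.0, size=(950, 4))
anomalous = rng.normal(6.0, 1.0, size=(50, 4))
X = np.vstack([normal, anomalous])  # no labels accompany X

def make_first_model(seed, data):
    """One 'first machine learning model': a weak detector that scores
    distance from the centroid of a random subsample of the data."""
    idx = np.random.default_rng(seed).choice(len(data), 200, replace=False)
    center = data[idx].mean(axis=0)
    return lambda x: np.linalg.norm(x - center, axis=1)

first_models = [make_first_model(s, X) for s in range(5)]

# Categorize each example as anomalous vs. non-anomalous by majority
# vote across the plurality of first models.
flags = [m(X) > np.percentile(m(X), 90) for m in first_models]
votes = np.mean(flags, axis=0)

# Refined set: only the examples categorized as non-anomalous.
refined = X[votes < 0.5]

# "Second machine learning model" stand-in: a per-feature Gaussian fit
# to the refined set; inputs far from it are reported as anomalous.
mu, sigma = refined.mean(axis=0), refined.std(axis=0) + 1e-9

def second_model(x):
    return np.abs((x - mu) / sigma).max(axis=1) > 4.0

print(second_model(np.array([[0.0, 0.1, -0.2, 0.3],
                             [6.1, 5.9, 6.2, 6.0]])))  # -> [False  True]
```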
Claims 11 and 20 are analogous to claim 1, with the exception of the claim type (a method for claim 11 and a non-transitory CRM for claim 20), and hence the rejection stated above applies as well.

Regarding claim 3, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: “The system of claim 1, wherein the one or more processors are further configured to train the plurality of first machine learning models using the refined set of training data.” -- The limitation is directed to the processors being further configured to train the machine learning models using the refined set of training data. The limitation amounts to no more than merely further limiting the abstract idea to a field of use or environment, and cannot be integrated into a practical application or provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 3 is not patent eligible. Claim 13 is analogous to claim 3 (aside from being a method rather than a system claim) and faces the same rejection set forth above.

Regarding claim 4, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: “The system of claim 1, wherein the one or more processors are configured to perform additional iterations of: categorizing each of the training examples using the plurality of first machine learning models; and updating, based on the additional iterations, the refined set of training data.” -- The limitation recites that one or more processors are configured to perform additional iterations of categorizing the training examples and updating the refined set of training data. The limitation is directed to insignificant extra-solution activity that cannot integrate the exception into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, performing iterative calculations and updating the results (electronic recordkeeping) is a well-understood, routine, and conventional (WURC) activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 4 is not patent eligible. Claim 14 is analogous to claim 4 (aside from being a method rather than a system claim) and faces the same rejection set forth above.

Regarding claim 5, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1. Step 2A Prong 1: “categorize the unlabeled training data” -- The limitation is directed to categorizing unlabeled training data, a process that can be performed in the human mind using evaluation, observation, and judgment, with the aid of pen and paper; the limitation is therefore directed to a mental process. Step 2A Prong 2 and Step 2B: “The system of claim 4, wherein the one or more processors are further configured to train a third machine learning model using the refined set of training data,” -- The limitation recites that one or more processors are further configured to train a model using a set of training data. Recited at this level of generality, the limitation amounts to no more than mere instructions to apply the training (the third machine learning model) on a computer, and cannot be integrated into a practical application or provide significantly more than the judicial exception (see MPEP 2106.05(f)).
“using the plurality of first machine learning models, the one or more processors are configured to process, using the plurality of first machine learning models, respective one or more feature values for each training example of the unlabeled training data, wherein the respective one or more feature values are generated using the third machine learning model.” -- The limitation recites that, using a plurality of machine learning models, the processors are configured to process feature values for each training example of the unlabeled training data, where the respective feature values are generated by the third model. The limitation amounts to no more than mere instructions to apply the abstract idea on a computer, and does not integrate it into a practical application or provide significantly more than the judicial exception (see MPEP 2106.05(f)).

“wherein the third machine learning model is trained to receive training examples and to generate one or more respective feature values for each of the received training examples;” -- The limitation recites that the model is trained to receive training examples and to generate one or more respective feature values for each received example. The limitation is directed to insignificant extra-solution activity that cannot integrate the exception into a practical application (see MPEP 2106.05(g)). Furthermore, sending and receiving data is a well-understood, routine, and conventional (WURC) activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 5 is not patent eligible. Claim 15 is analogous to claim 5 (aside from being a method rather than a system claim) and faces the same rejection set forth above.

Regarding claim 6, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: “The system of claim 5, wherein the one or more processors are configured to perform additional iterations of training the third machine learning model using the refined set of training data.” -- The limitation recites that the processors are configured to perform “additional iterations” of training the learning model using a set of training data. The limitation is directed to insignificant extra-solution activity that cannot integrate the exception into a practical application (see MPEP 2106.05(g)). Furthermore, under Step 2B, performing additional iterations is a well-understood, routine, and conventional (WURC) activity that cannot provide significantly more than the judicial exception (see MPEP 2106.05(d)(II)). Thus, claim 6 is not patent eligible.

Regarding claim 7, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1. Step 2A Prong 1: “determine that at least one first score does not meet one or more thresholds;… in response to the determination that the at least one first score does not meet one or more thresholds, exclude the first training example from the unlabeled training data.” -- The limitation is directed to determining that at least one score does not meet one or more thresholds and, in response, excluding a training example from the unlabeled data, a process that can be performed in the human mind using evaluation, observation, and judgment; the limitation is therefore directed to a mental process.
“process a first training example of the unlabeled training data through each of the plurality of first machine learning models to generate a plurality of first scores corresponding to respective probabilities that the first training example is non-anomalous or anomalous;” -- The limitation is directed to processing training examples of the unlabeled training data through the models to generate scores corresponding to the probability that a training example is anomalous or not. The limitation is directed to the use of a mathematical concept and calculation, and is therefore directed to a mathematical concept. Step 2A Prong 2 and Step 2B: “The system of claim 1, wherein the one or more processors are further configured to: train each of the first machine learning models using a respective subset of the unlabeled training data;” -- The limitation recites that one or more processors are further configured to train the models using a respective subset of the training data. The limitation amounts to no more than mere instructions to apply the abstract idea on a computer; it does not integrate the exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 7 is not patent eligible. Claim 16 is analogous to claim 7, and thus it faces the same rejection set forth above.

Regarding claim 8, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1. Step 2A Prong 1: “The system of claim 7, wherein the one or more thresholds are based on a predetermined percentile value of a distribution of scores corresponding to respective probabilities that training examples in the unlabeled training data are non-anomalous or anomalous.” -- The limitation is directed to thresholds based on predetermined percentile values of scores that correspond to the probability that a training example is anomalous or not. The limitation is directed to the use of a mathematical concept and calculation, and is therefore directed to a mathematical concept. There are no elements to be evaluated under Step 2A Prong 2 and Step 2B. Thus, claim 8 is not patent eligible. Claim 17 is analogous to claim 8, and thus it faces the same rejection set forth above.

Regarding claim 9, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: “The system of claim 7, wherein the one or more thresholds are based on a predetermined percentile value of a distribution of scores corresponding to respective probabilities that training examples in the unlabeled training data are non-anomalous or anomalous.” -- The limitation recites that the thresholds are based on a predetermined percentile value corresponding to the respective probabilities of the training examples. The limitation amounts to no more than merely further limiting the abstract idea to a field of use or environment, and cannot be integrated into a practical application or provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 9 is not patent eligible. Claim 18 is analogous to claim 9, and thus it faces the same rejection set forth above.

Regarding claim 10, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1.
Step 2A Prong 1: “each threshold based on the predetermined percentile value of a respective distribution of scores” -- The limitation is directed to each threshold being based on a predetermined percentile value of a respective distribution of scores, a process that can be performed in the human mind with the aid of pen and paper; the limitation is therefore directed to a mental process. Step 2A Prong 2 and Step 2B: “The system of claim 8, wherein the one or more thresholds comprise a plurality of thresholds… generated from training examples processed by a respective first machine learning model of the plurality of first machine learning models.” -- The limitation recites that the thresholds comprise a plurality of thresholds and that the training examples are processed by a respective model from the group of first ML models. The limitation amounts to no more than mere instructions to apply the abstract idea on a computer, and thus does not integrate the exception into a practical application or provide significantly more than the judicial exception (see MPEP 2106.05(f)). Thus, claim 10 is not patent eligible. Claim 19 is analogous to claim 10, and thus it faces the same rejection set forth above.

Regarding claim 21, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: “The system of claim 1, wherein the input data comprises one or more of images, video, audio, or data and the output data comprises one or more anomalous images, video, audio or data.” -- The limitation recites the type of input information and the type of output information for the anomaly detection result. The limitation amounts to no more than merely limiting the abstract idea to a field of use or environment by specifying example data modalities and describing the output as anomalous; it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 21 is not patent eligible.

Regarding claim 22, Step 1: The claim is directed to a system, which falls under the statutory category of a machine. The claim satisfies Step 1. There are no elements to be evaluated under Step 2A Prong 1. Step 2A Prong 2 and Step 2B: “The system of claim 1, wherein the output data comprises data that identifies one or more of anomalous parts as part of a first manufacturing process, an anomalous process as part of a second manufacturing process, fraudulent activity as part of a credit card transaction, a security breach on a monitored network, anomalous patterns associated with patient data, and improper usage of a cloud computing platform.” -- The limitation recites example application contexts and example types of anomalies being identified in the output. The limitation amounts to no more than merely limiting the abstract idea to a field of use or environment and reciting intended results in various domains; it does not integrate the judicial exception into a practical application, nor provide significantly more than the judicial exception (see MPEP 2106.05(h)). Thus, claim 22 is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-6, 11, 13-15, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over NPL reference “Self-trained deep ordinal regression for end-to-end video anomaly detection,” by Pang et al. (referred to herein as Pang), in view of NPL reference “Unsupervised anomaly detection with distillated teacher-student network ensemble,” by Xiao et al. (referred to herein as Xiao), further in view of US11507785B2, by Glassman et al. (referred to herein as Glassman).

Regarding claim 1, Pang teaches: receiving, from one or more devices over a network, unlabeled training data comprising a plurality of training examples; ([Pang, page 12173] “unsupervised video anomaly detection…which requires identifying abnormal frames from a large volume of video frames with no manually labeled normal/abnormal training data.”, wherein the examiner interprets “no manually labeled normal/abnormal training data” to be the same as “unlabeled training data” because both describe input examples without human-provided labels. Pang further teaches “Formally, given a set of K video frames X = {x1, x2, · · · , xK} with no class label information, our goal is to learn an anomaly scoring function φ : X → R that directly assigns anomaly scores to the video frames such that φ(xi) > φ(xj) if xi is an anomalous frame and xj is a normal frame.”, wherein the examiner interprets “a set of K video frames X” to be the same as “a plurality of training examples” because both are directed to multiple data instances used for training, and interprets “with no class label information” to be the same as “unlabeled training data” because both describe data lacking manually assigned labels. Pang further describes large-scale video surveillance and Internet video filtering applications, wherein the examiner interprets such distributed video acquisition settings to be consistent with “from one or more devices over a network” because they are directed to receiving device-generated video data communicated for analysis in a networked environment.)

generating a refined set of training data including the training examples categorized as non-anomalous training examples; ([Pang, page 12174] “Although existing methods cannot produce well optimized anomaly scores, they generally achieve good accuracy in correctly identifying a subset of normal and anomalous events.
These identified normal and anomalous events can be leveraged by the end-to-end anomaly score learner to iteratively improve and optimize the anomaly scores”, wherein the examiner interprets “correctly identifying a subset of normal events and leveraging them” to be the same as “generating a refined set of training data including non-anomalous training examples” because both describe filtering the reliable normal subset for further training.)

and train a second machine learning model, using the refined set of training data, ([Pang, page 12175] “we first initialize A and N using anomaly scores generated by some existing unsupervised anomaly detection methods (see Sec. 4.1), and we iteratively update A and N and retrain φ until the best φ is achieved.”, wherein the examiner interprets “iteratively update A and N and retrain φ” to be the same as “training a second machine learning model using the refined set of training data” because both describe retraining a model on progressively refined subsets of anomalous and non-anomalous samples.)

to receive input data and to generate output data indicating whether the input data is anomalous or non-anomalous; ([Pang, page 12175] “The proposed approach uses each estimate of A and N to further optimize the anomaly scores. These scores then, in turn, help generate new, more accurate, sets A and N. This iterative self-training achieves much better detection performance and coverage”, wherein the examiner interprets “optimize the anomaly scores achieving much better detection performance” to be the same as “generate output data indicating whether the input is anomalous or non-anomalous” because both are directed to producing anomaly scores that classify input data into anomalous versus non-anomalous categories.)

Pang does not teach wherein the unlabeled training data comprises one or more anomalous training examples and one or more non-anomalous training examples and there are fewer anomalous training examples than non-anomalous training examples; categorize, using a plurality of first machine learning models, each of the training examples as an anomalous training example or non-anomalous training example.

Xiao teaches: wherein the unlabeled training data comprises one or more anomalous training examples and one or more non-anomalous training examples and there are fewer anomalous training examples than non-anomalous training examples; ([Xiao, page 2] “Last but not least, anomalies are rare in terms of collected data, leading to the problem of class imbalance.”, wherein the examiner interprets “anomalies are rare in terms of collected data” and “class imbalance” to be the same as “there are fewer anomalous training examples than non-anomalous training examples” because both are directed to anomalous examples forming a minority relative to non-anomalous examples within the dataset used for anomaly detection.)

categorize, using a plurality of first machine learning models, each of the training examples as an anomalous training example or non-anomalous training example.
([Xiao, page 1] “an ensemble of student networks provides a better capability to avoid generalizing the auxiliary task performance on anomalous samples” and [Xiao, page 3] “combine outcomes from weak anomaly detectors to produce an ensembled anomaly score”, wherein the examiner interprets “ensemble of student networks” and “combine outcomes from weak anomaly detectors” to be the same as “a plurality of first machine learning models categorizing training examples” because both are directed to multiple models being applied to classify examples as anomalous or non-anomalous.)

Pang, Xiao, and the instant application are analogous art because they are all directed to iterative machine learning training with refined anomaly detection data. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the unlabeled training data and refined training set disclosed by Pang to include the ensemble-based categorization of anomalous and non-anomalous samples disclosed by Xiao. One would be motivated to do so to efficiently enhance robustness and generalization in anomaly classification by leveraging multiple models in an ensemble, as suggested by Xiao ([Xiao, page 3] “combine outcomes from weak anomaly detectors to produce an ensembled anomaly score”).

Furthermore, though Pang and Xiao implicitly teach the use of a system for anomaly detection comprising processors, Glassman teaches this explicitly at: ([Glassman, col 2, lines 9-14] “The anomaly analysis apparatus in one exemplary embodiment comprises a storage unit and a processor electrically connected to the storage unit. The storage unit stores the coefficients resulting from training the support vector machines (SVM), which define the SVM for use with new data.”, wherein the examiner interprets “anomaly analysis apparatus with a processor” to be the same as “a system for anomaly detection comprising one or more processors” because both are processor-based systems configured for anomaly detection.)

Pang, Xiao, Glassman, and the instant application are analogous art because they are all directed to systems for iterative machine learning training with refined anomaly detection data. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the iterative anomaly detection framework disclosed by Pang to include the ensemble-based categorization of anomalous versus non-anomalous training samples disclosed by Xiao, and to implement the system using the processor-based anomaly detection apparatus disclosed by Glassman. One would be motivated to do so to effectively improve anomaly detection accuracy and robustness in real-world applications by combining the strengths of multiple weak models into an ensemble, as suggested by Xiao ([Xiao, page 3] “combine outcomes from weak anomaly detectors to produce an ensembled anomaly score”), while also ensuring practical implementation in a processor-driven anomaly detection system as disclosed by Glassman ([Glassman, col. 2, lines 9-14] “The anomaly analysis apparatus in one exemplary embodiment comprises a storage unit and a processor electrically connected to the storage unit. The storage unit stores the coefficients resulting from training the support vector machines (SVM), which define the SVM for use with new data.”).
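The mapping above turns on Pang's iterative self-training: initialize the pseudo-anomalous set A and pseudo-normal set N from scores produced by an existing detector, retrain the scoring function φ, and let the new scores update A and N. A toy sketch of that loop follows, assuming a simple distance-based scorer in place of Pang's ordinal-regression network; it illustrates only the control flow the rejection quotes, not Pang's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
# Unlabeled 1-D observations: a normal majority and a rare anomalous minority.
X = np.concatenate([rng.normal(0.0, 1.0, 480), rng.normal(8.0, 1.0, 20)])

def initial_scores(x):
    # Stand-in for "anomaly scores generated by some existing unsupervised
    # anomaly detection methods", used only to initialize A and N.
    return np.abs(x - np.median(x))

def retrain_phi(pseudo_normal):
    # Stand-in for retraining the scoring function phi on the pseudo-labeled
    # sets: here, distance from the mean of the pseudo-normal set N.
    center = pseudo_normal.mean()
    return lambda x: np.abs(x - center)

scores = initial_scores(X)
for _ in range(5):  # Pang reports ~5 self-training iterations often suffice
    A = X[scores >= np.percentile(scores, 90)]  # 10% most anomalous
    N = X[scores <= np.percentile(scores, 20)]  # 20% most normal
    phi = retrain_phi(N)  # a real ordinal-regression loss would also use A
    scores = phi(X)       # new scores update the membership of A and N

print("examples flagged:", int((scores >= np.percentile(scores, 96)).sum()))
```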
Claims 11 and 20 are analogous to claim 1, with the exception of the claim type (a method for claim 11 and a non-transitory CRM for claim 20), and hence the rejection stated above applies as well.

Regarding claim 3, Pang, Xiao, and Glassman teach The system of claim 1 (see rejection of claim 1). Pang further teaches wherein the one or more processors are further configured to train the plurality of first machine learning models using the refined set of training data. ([Pang, page 12175] “…we first initialize A and N using anomaly scores generated by some existing unsupervised anomaly detection methods (see Sec. 4.1), and we iteratively update A and N and retrain φ until the best φ is achieved.”, wherein the examiner interprets “iteratively update A and N” to be the same as “refined set of training data” because both are directed to progressively filtering anomalous (A) and non-anomalous (N) subsets, and interprets “retrain φ” to be the same as “train the plurality of first machine learning models” because both describe training machine learning models using the refined subsets of data to improve anomaly detection accuracy.) Claim 13 is analogous to claim 3 (aside from being a method rather than a system claim) and faces the same rejection set forth above.

Regarding claim 4, Pang, Xiao, and Glassman teach The system of claim 1 (see rejection of claim 1). Pang further teaches wherein the one or more processors are configured to perform additional iterations of: categorizing each of the training examples using the plurality of first machine learning models; ([Pang, page 12175] “…we first initialize A and N using anomaly scores generated by some existing unsupervised anomaly detection methods…and we iteratively update A and N and retrain φ until the best φ is achieved.”, wherein the examiner interprets “initialize A and N using anomaly scores” to be the same as “categorizing each of the training examples” because both describe dividing examples into anomalous (A) and non-anomalous (N) categories using model outputs, and interprets “iteratively update A and N and retrain φ” to be the same as “perform additional iterations” because both describe repeating the categorization process multiple times.) and updating, based on the additional iterations, the refined set of training data. ([Pang, page 12175] “A corresponding new set of anomaly scores are then generated, which are used to update the membership of A and N.”, wherein the examiner interprets “generate new anomaly scores…update the membership of A and N” to be the same as “updating the refined set of training data based on additional iterations” because both describe refining the anomalous and non-anomalous subsets after each iteration.) Claim 14 is analogous to claim 4 (aside from being a method rather than a system claim) and faces the same rejection set forth above.

Regarding claim 5, Pang, Xiao, and Glassman teach The system of claim 4 (see rejection of claim 4). Pang further teaches wherein the one or more processors are further configured to train a third machine learning model using the refined set of training data, ([Pang, page 12175] “…we first initialize A and N using anomaly scores generated by some existing unsupervised anomaly detection methods (see Sec. 4.1), and we iteratively update A and N and retrain φ until the best φ is achieved.”,
wherein the examiner interprets “iteratively update A and N and retrain φ” to be the same as “train a third machine learning model using the refined set of training data” because both describe retraining a new model instance on progressively refined anomalous and non-anomalous subsets.)

Glassman further teaches wherein the third machine learning model is trained to receive training examples and to generate one or more respective feature values for each of the received training examples;… and wherein the respective one or more feature values are generated using the third machine learning model. ([Glassman, col 7, lines 12-15] “extracting at least two sets of refined data, wherein the two sets of refined data are sets of numbers representing features values data received from the sensors or manufactured part; classifying the at least two sets of refined data as a representation of a property or output of the part”, wherein the examiner interprets “extracting refined data sets representing feature values” to be the same as “the third machine learning model…generate one or more respective feature values for each training example” because both describe converting received inputs into numerical feature values for downstream processing. The examiner further interprets “sets of numbers representing feature values” to be the same as “feature values generated using the third machine learning model” because both describe feature values as the inputs that are produced for each example and then processed by downstream classifiers.)

Pang, Glassman, and the instant application are analogous art because they are all directed to machine learning systems that generate and process feature values during training. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 4 disclosed by Pang, Xiao, and Glassman to include the iterative retraining approach disclosed by Pang and the extraction of feature values from refined training data as disclosed by Glassman. One would be motivated to do so to effectively enable downstream classification based on consistent numerical feature representations, as suggested by Glassman ([Glassman, col. 7, lines 12-15] “classifying…sets of refined data as a representation of a property or output of the part”).

Pang and Glassman do not teach wherein to categorize the unlabeled training data using the plurality of first machine learning models, the one or more processors are configured to process, using the plurality of first machine learning models, respective one or more feature values for each training example of the unlabeled training data.

Xiao further teaches wherein to categorize the unlabeled training data using the plurality of first machine learning models, the one or more processors are configured to process, using the plurality of first machine learning models, respective one or more feature values for each training example of the unlabeled training data, ([Xiao, Abstract] “an ensemble of student networks provides a better capability to avoid generalizing the auxiliary task performance on anomalous samples…Second, the ensemble of student networks provides a capability to avoid potential generalizing of the auxiliary task of anomalous samples.
Third, multiple anomaly scores are provided to detect anomalies from various aspects”, wherein the examiner interprets “ensemble of student networks providing multiple anomaly scores” to be the same as “plurality of first machine learning models categorizing training examples” because both are directed to multiple models applied to feature values to categorize examples as anomalous or non-anomalous.)

Pang, Xiao, Glassman, and the instant application are analogous art because they are all directed to iterative machine learning training with refined anomaly detection data. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 4 disclosed by Pang, Xiao, and Glassman to include the use of multiple models (an ensemble) for categorization as disclosed by Xiao. One would be motivated to do so to efficiently improve the robustness and accuracy of categorization by leveraging ensemble consensus, as suggested by Xiao ([Xiao, Abstract] “an ensemble of student networks provides a better capability to avoid generalizing the auxiliary task performance on anomalous samples…Second, the ensemble of student networks provides a capability to avoid potential generalizing of the auxiliary task of anomalous samples.”)

Claim 15 is analogous to claim 5 (aside from being a method rather than a system claim) and faces the same rejection set forth above.

Regarding claim 6, Pang, Xiao, and Glassman teach The system of claim 5 (see rejection of claim 5). Pang further teaches wherein the one or more processors are configured to perform additional iterations of training the third machine learning model using the refined set of training data. ([Pang, page 12175] “we first initialize A and N using anomaly scores generated by some existing unsupervised anomaly detection methods (see Sec. 4.1), and we iteratively update A and N and retrain φ until the best φ is achieved (see Section 4.3).” and [Pang, page 12180] “Figure 6 shows the AUC results of our method at each iteration during self-training. Our performance gets larger improvement with increasing iterations in the first few iterations on most datasets and then becomes stable at the 4th or 5th iteration…We found empirically that five iterations are often sufficient to reach the possibly best performance on different datasets.”, wherein the examiner interprets “iteratively update A and N and retrain φ” to be the same as “perform additional iterations of training the third machine learning model using the refined set of training data” because both are directed to repeatedly retraining a model on progressively refined anomalous/non-anomalous subsets. The examiner further interprets “performance gets larger improvement with increasing iterations…stable at the 4th or 5th iteration” to be the same as “performing additional iterations of training” because both are directed to repeating training cycles on refined data to achieve improved anomaly detection performance.)

Claims 7 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Pang in view of Xiao in view of Glassman, further in view of US10785241B2, by Li et al. (referred to herein as Li).

Regarding claim 7, Pang, Xiao, and Glassman teach The system of claim 1 (see rejection of claim 1).
Pang further teaches determine that at least one first score does not meet one or more thresholds; and in response to the determination that the at least one first score does not meet one or more thresholds, exclude the first training example from the unlabeled training data. ([Pang, page 12177] “Particularly, we include the 10% most anomalous frames into [A] according to their anomaly scores, because anomaly scores often follow a Gaussian distribution and this decision threshold can provide an approximate 90% confidence level of making false positive errors in such cases…To generate the pseudo normal frame set N, we select the 20% most normal frames based on the anomaly scores…These two cutoff thresholds are used by default as they consistently obtain substantially improved performance on datasets with diverse anomaly rates.”, wherein the examiner interprets “including only the top anomalous and normal samples using cutoff thresholds” to be the same as “excluding training examples that do not meet one or more thresholds” because both describe removing examples from consideration when their anomaly scores fail to meet preset criteria.)

Xiao further teaches process a first training example of the unlabeled training data through each of the plurality of first machine learning models to generate a plurality of first scores corresponding to respective probabilities that the first training example is non-anomalous or anomalous; ([Xiao, Abstract] “…an ensemble of student networks provides a better capability to avoid generalizing the auxiliary task performance on anomalous samples…Second, the ensemble of student networks provides a capability to avoid potential generalizing of the auxiliary task of anomalous samples. Third, multiple anomaly scores are provided to detect anomalies from various aspects”, wherein the examiner interprets “multiple anomaly scores” from the student networks to be the same as “a plurality of first scores corresponding to respective probabilities” because both describe generating multiple outputs for the same input example, representing its likelihood of being anomalous or non-anomalous.)

Pang, Xiao, Glassman, and the instant application are analogous art because they are all directed to anomaly detection systems using machine learning models to classify, filter, and refine training data through thresholds, ensembles, and iterative processes. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 1 disclosed by Pang, Xiao, and Glassman to include the “cutoff thresholds used to exclude samples that do not meet anomaly score criteria” disclosed by Pang and the “multiple anomaly scores…provided to detect anomalies from various aspects” disclosed by Xiao. One would be motivated to do so to efficiently improve robustness and generalization across diverse anomaly conditions, as suggested by Xiao ([Xiao, Abstract] “multiple anomaly scores are provided to detect anomalies from various aspects”).

Pang, Xiao, and Glassman do not teach wherein the one or more processors are further configured to: train each of the first machine learning models using a respective subset of the unlabeled training data.
Li teaches wherein the one or more processors are further configured to: train each of the first machine learning models using a respective subset of the unlabeled training data; ([Li, col 8, lines 55-58] “After the uniform training sample sampling, M training sample subsets can be constructed based on sampled training samples, and then training samples in each training sample subset are classified, to construct the M random binary trees”, wherein the examiner interprets “M training sample subsets constructed from sampled training samples” to be the same as “a respective subset of the unlabeled training data” because both describe dividing the training data into subsets for separate model training.)

Pang, Xiao, Glassman, Li, and the instant application are analogous art because they are all directed to anomaly detection using ensembles of machine learning models for refining training data. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 1 disclosed by Pang, Xiao, and Glassman to include the use of “M training sample subsets constructed from sampled training samples” disclosed by Li. One would be motivated to do so to effectively improve the diversity and robustness of model training by preventing overfitting and improving generalization across datasets, as suggested by Li ([Li, col. 8, lines 55-58] “M training sample subsets can be constructed…to construct the M random binary trees”).

Claim 16 is analogous to claim 7, and thus it faces the same rejection set forth above.
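As a brief illustration of the subset-per-model training Li is cited for, the sketch below uses a toy statistical detector standing in for Li's random binary trees; all names and constants are hypothetical, chosen only to show one model trained per respective subset.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, size=(1000, 3))  # unlabeled training data

def train_on_subset(subset):
    """One 'first machine learning model' trained on its own respective
    subset (cf. Li's M training sample subsets and M random binary trees)."""
    center, spread = subset.mean(axis=0), subset.std(axis=0) + 1e-9
    return lambda x: np.abs((x - center) / spread).max(axis=1)

M = 8
subsets = [X[rng.choice(len(X), 250, replace=False)] for _ in range(M)]
models = [train_on_subset(s) for s in subsets]

# Each model scores every example; the ensemble combines the M views.
ensemble_scores = np.mean([m(X) for m in models], axis=0)
print("highest ensemble score:", round(float(ensemble_scores.max()), 2))
```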
Claims 8-10 and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Pang in view of Xiao in view of Glassman in view of Li, further in view of NPL reference “Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows,” by Rudolph et al. (referred to herein as Rudolph).

Regarding claim 8, Pang, Xiao, Glassman, and Li teach The system of claim 7 (see rejection of claim 7). Pang, Xiao, Glassman, and Li do not teach wherein the one or more thresholds are based on a predetermined percentile value of a distribution of scores corresponding to respective probabilities that training examples in the unlabeled training data are non-anomalous or anomalous.

Rudolph teaches wherein the one or more thresholds are based on a predetermined percentile value of a distribution of scores corresponding to respective probabilities that training examples in the unlabeled training data are non-anomalous or anomalous. ([Rudolph, pages 1907-1908] “The feature distribution of normal samples is captured by utilizing the latent space of a normalizing flow…each vector is assigned to a likelihood. This enables DifferNet to calculate a likelihood for each image. From this likelihood we derive a scoring function to decide if an image contains an anomaly.” and [Rudolph, page 1908] “The most common samples are assigned to a high likelihood whereas uncommon images are assigned to a lower likelihood.”, wherein the examiner interprets “deriving a scoring function from likelihoods to decide if an image contains an anomaly” to be the same as “one or more thresholds based on a predetermined percentile value of a distribution of scores” because both describe setting a decision threshold on the statistical distribution of anomaly scores. The examiner further interprets “most common samples assigned to a high likelihood and uncommon images to a low likelihood” to be the same as “scores corresponding to respective probabilities that training examples are non-anomalous or anomalous” because both describe probabilistic values used to assign training examples to the normal (high likelihood) or anomalous (low likelihood) category.)

Pang, Xiao, Glassman, Li, Rudolph, and the instant application are analogous art because they are all directed to anomaly detection with the aid of statistical thresholds. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 7 disclosed by Pang, Xiao, Glassman, and Li to include the likelihood-based thresholding disclosed by Rudolph. One would be motivated to do so to effectively distinguish anomalous from non-anomalous training examples by leveraging probability-based decision boundaries, as suggested by Rudolph ([Rudolph, page 1908] “From this likelihood we derive a scoring function to decide if an image contains an anomaly. The most common samples are assigned to a high likelihood whereas uncommon images are assigned to a lower likelihood.”).

Claim 17 is analogous to claim 8, and thus it faces the same rejection set forth above.

Regarding claim 9, Pang, Xiao, Glassman, Li, and Rudolph teach The system of claim 8 (see rejection of claim 8). Pang further teaches wherein the one or more thresholds comprise a plurality of thresholds, each threshold based on the predetermined percentile value of a respective distribution of scores generated from training examples processed by a respective first machine learning model of the plurality of first machine learning models. ([Pang, Sec. 5.1] “…we include the 10% most anomalous frames into [A] according to their anomaly scores, because anomaly scores often follow a Gaussian distribution [19] and this decision threshold can provide an approximate 90% confidence level of making false positive errors in such cases… To generate the pseudo normal frame set N, we select the 20% most normal frames based on the anomaly scores. These two cutoff thresholds are used by default as they consistently obtain substantially improved performance on datasets with diverse anomaly rates.”, wherein the examiner interprets “10% most anomalous frames” and “20% most normal frames” to be the same as “a plurality of thresholds, each threshold based on the predetermined percentile value” because both are directed to selecting cutoff values based on percentile ranks of the score distributions; interprets “anomaly scores often follow a Gaussian distribution” to be the same as “respective distribution of scores generated from training examples” because both describe the statistical distribution of scores output by trained models; and interprets “cutoff thresholds…consistently obtain improved performance” to be the same as “thresholds…of a respective first machine learning model” because both apply percentile-based thresholds to the distributions of anomaly scores produced by the models.)

Claim 18 is analogous to claim 9, and thus it faces the same rejection set forth above.
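To make the percentile-threshold reading concrete, here is a short sketch under the assumption that per-example anomaly scores have already been computed; the percentile constant and per-model noise are illustrative and are not taken from any cited reference.

```python
import numpy as np

rng = np.random.default_rng(3)
# Per-example anomaly scores: a dense normal mode plus a rare high tail.
scores = np.concatenate([rng.normal(0.0, 1.0, 1960), rng.normal(6.0, 1.0, 40)])

# Claim 8 reading: the threshold is a predetermined PERCENTILE of the
# score distribution itself, not a hand-picked absolute value.
PERCENTILE = 98.0
threshold = np.percentile(scores, PERCENTILE)
print(f"threshold={threshold:.2f}, flagged={(scores > threshold).sum()}")

# Plural-threshold flavor: one threshold per first model, each taken at
# the same percentile of that model's own score distribution.
per_model_scores = [scores + rng.normal(0.0, 0.1, scores.size) for _ in range(3)]
per_model_thresholds = [np.percentile(s, PERCENTILE) for s in per_model_scores]
print([f"{t:.2f}" for t in per_model_thresholds])
```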
Regarding claim 10, Pang, Xiao, Glassman, Li, and Rudolph teach The system of claim 9 (see rejection of claim 9). Pang further teaches wherein the one or more processors are further configured to: generate the one or more thresholds based on minimizing, over one or more iterations of an optimization process, respective intra-class variances among anomalous and non-anomalous training examples in the training data. ([Pang, page 12175] “we first initialize A and N using anomaly scores generated by some existing unsupervised anomaly detection methods (see Sec. 4.1), and we iteratively update A and N and retrain φ until the best φ is achieved (see Section 4.3).” and [Pang, page 12175] “optimizing the objective in Eqn. (1) will identify Θ∗ corresponding to a version of φ(x; Θ∗) that assigns scores to X such that suspicious abnormal and normal samples have anomaly scores as close to respective c1 and c2 as possible, yielding an optimal anomaly ranking.”, wherein the examiner interprets “iteratively update A and N and retrain φ until the best φ is achieved” to be the same as “minimizing, over one or more iterations of an optimization process” because both describe repeatedly refining and optimizing parameters to improve the separation between anomalous and non-anomalous classes. The examiner further interprets “assign scores…as close to respective c1 and c2 as possible” to be the same as “minimizing intra-class variances among anomalous and non-anomalous training examples” because both are directed to reducing the spread of scores within each class, thereby generating thresholds that separate anomalous and non-anomalous examples.)

Claim 19 is analogous to claim 10, and thus it faces the same rejection set forth above.
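The intra-class variance minimization the examiner maps onto Pang's optimization can be illustrated with an Otsu-style threshold search over a score distribution. This is an editorial sketch of that general technique, not Pang's Eqn. (1) objective; the search grid and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
scores = np.concatenate([rng.normal(0.0, 1.0, 950), rng.normal(6.0, 1.0, 50)])

def min_intra_class_variance_threshold(s, n_candidates=256):
    """Scan candidate thresholds and keep the one minimizing the weighted
    intra-class variance of the two groups it induces (Otsu-style)."""
    best_t, best_var = None, np.inf
    for t in np.linspace(s.min(), s.max(), n_candidates):  # the iterations
        lo, hi = s[s <= t], s[s > t]
        if lo.size == 0 or hi.size == 0:
            continue
        within = (lo.size * lo.var() + hi.size * hi.var()) / s.size
        if within < best_var:
            best_t, best_var = t, within
    return best_t

# The learned cutoff lands between the two score modes.
print(f"learned threshold: {min_intra_class_variance_threshold(scores):.2f}")
```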
Regarding claim 21, Pang, Xiao, and Glassman teach the system of claim 1 (see rejection of claim 1). Pang further teaches wherein the input data comprises one or more of images, video, audio, or structured data and the output data comprises one or more anomalous images, video, audio, or structured data; ([Pang, page 12173] “unsupervised video anomaly detection...which requires identifying abnormal frames from a large volume of video frames with no manually labeled normal/abnormal training data.”, wherein the examiner interprets “video frames” and “abnormal frames” to be the same as “video” and “anomalous video” because both are directed to video data in which certain frames are identified as anomalous, and because the claim uses “one or more of”, Pang meeting the “video” modality is sufficient even if Pang does not discuss audio.)

Regarding claim 22, Pang, Xiao, and Glassman teach the system of claim 1 (see rejection of claim 1). Xiao further teaches: anomalous patterns associated with patient data; ([Xiao, page 8, sec 5.1] “As shown in Table 1, the experiments are conducted on 10 publicly available datasets from various domains, including network intrusion detection, fraud detection, medical disease detection, etc. Two datasets contain real anomalies, including Lung and U2R, while other datasets are transformed from extremely imbalanced datasets. Following [22,25,38,39], the rare class in the imbalanced dataset is treated as semantically anomalies.”, wherein the examiner interprets “medical disease detection… Two datasets contain real anomalies…treated as semantically anomalies.” to be the same as “anomalous patterns associated with patient data” because both are directed to identifying abnormal patterns in medical or patient-related data used for anomaly detection.) fraudulent activity as part of a credit card transaction, a security breach on a monitored network; ([Xiao, page 1, sec 1] “anomalies in credit card transactions could imply online a fraud [3], while an unusual computer network traffic recording could signify unauthorized access [4]. Due to the great empirical value, efficient and accurate anomaly detection algorithms are desired.”, wherein the examiner interprets “fraud” in credit card transactions to be the same as “fraudulent activity as part of a credit card transaction” because both are directed to detecting anomalous activity in credit card transaction data, and interprets “unauthorized access” signaled by unusual network traffic to be the same as “a security breach on a monitored network” because both are directed to identifying anomalous network behavior associated with unauthorized access. Xiao addresses this as the main “problem to solve” throughout the reference.) Glassman further teaches: wherein the output data comprises data that identifies one or more of anomalous parts as part of a first manufacturing process, an anomalous process as part of a second manufacturing process; ([Glassman, page 20, col 27, lines 16-17, 22] “classifying the at least two sets of refined data as a representation of a property or output of the part…anomaly detection of the manufactured part”, wherein the examiner interprets “anomaly detection of the manufactured part” to be the same as “output data...identifies...anomalous parts” because both are directed to identifying parts determined to have an anomaly in a manufacturing context.) improper usage of a cloud computing platform; ([Glassman, page 20, col 27, lines 44-45] “In some instances, method 500 is cloud based.”, wherein the examiner interprets “cloud based” anomaly detection to be the same as “improper usage of a cloud computing platform” because both are directed to deploying anomaly detection in a cloud environment to identify anomalous or improper behavior within that computing context.)

Pang, Xiao, Glassman, and the instant application are analogous art because they are all directed to anomaly detection systems that analyze input data and generate output data identifying anomalous conditions in real-world operational environments, including networked monitoring and industrial or enterprise deployments. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the system of claim 1 disclosed by Pang, Xiao, and Glassman to include the application-specific anomaly identification outputs disclosed by Xiao and the identification of anomalous manufactured parts disclosed by Glassman. One would be motivated to do so to broaden the applicability and robustness of the anomaly detection output across common anomaly scenarios encountered in practice, including identifying fraud in transaction streams and unauthorized access in monitored network traffic as suggested by Xiao ([Xiao, page 1, sec 1] “anomalies in credit card transactions could imply online a fraud [3], while an unusual computer network traffic recording could signify unauthorized access [4].
Due to the great empirical value, efficient and accurate anomaly detection algorithms are desired.”), and identifying anomalous manufactured parts and supporting cloud-based anomaly detection deployments as suggested by Glassman ([Glassman, page 20, col 27, lines 16-17, 22] “classifying the at least two sets of refined data as a representation of a property or output of the part…anomaly detection of the manufactured part”).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVAN KAPOOR, whose telephone number is (703) 756-1434. The examiner can normally be reached Monday - Friday, 9:00 AM - 5:00 PM EST (times may vary).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEVAN KAPOOR/
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126

Prosecution Timeline

May 26, 2022
Application Filed
Sep 04, 2025
Non-Final Rejection — §101, §103
Dec 08, 2025
Response Filed
Feb 24, 2026
Final Rejection — §101, §103 (current)

Prosecution Projections

3-4
Expected OA Rounds
11%
Grant Probability
28%
With Interview (+16.7%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
