Prosecution Insights
Last updated: April 19, 2026
Application No. 18/123,673

ANOMALOUS DATA IDENTIFICATION FOR TABULAR DATA

Non-Final OA: §101, §103, §112
Filed: Mar 20, 2023
Examiner: PHAM, JESSICA THUY
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adobe Inc.
OA Round: 1 (Non-Final)

Grant Probability: 33% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
Grant Probability With Interview: 0%
Examiner Intelligence

Career Allow Rate: 33% (grants only 33% of cases; 1 granted / 3 resolved; -21.7% vs TC avg)
Interview Lift: -33.3% (minimal lift across resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Currently Pending: 38
Total Applications: 41 (career history, across all art units)

Statute-Specific Performance

§101: 26.8% (-13.2% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 11.0% (-29.0% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 3 resolved cases

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are pending and examined herein. Claims 3-6 are rejected under 35 U.S.C. 112(b). Claims 1-20 are rejected under 35 U.S.C. 101. Claims 1-20 are rejected under 35 U.S.C. 103.

Examiner's Note

Independent claim 1 recites "One or more computer storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform operations, the operations comprising:" and dependent claims 2-9 recite the computer storage media of claim 1. According to the applicant's original specification, the recited computer-readable media exclude signals; [0073] states "Computer storage media does not comprise signals per se."

Claim Objections

Applicant is advised that should claim 9 be found allowable, claim 20 will be objected to under 37 CFR 1.75 as being a substantial duplicate thereof. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3-6 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 3 recites the limitation "the labels" in the second paragraph of the claim. There is insufficient antecedent basis for this limitation in the claim. Claim 4 recites the limitation "the labels" in the first paragraph of the claim. There is insufficient antecedent basis for this limitation in the claim. Dependent claims 5-6 fail to resolve the issue and are rejected with the same rationale. Claim 20 recites the limitation "the records" in the first paragraph of the claim. There is insufficient antecedent basis for this limitation in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP § 2106(III) sets out steps for evaluating whether a claim is drawn to patent-eligible subject matter. The analysis of claims 1-20, in accordance with these steps, follows.

Step 1 Analysis: Step 1 is to determine whether the claim is directed to a statutory category (process, machine, manufacture, or composition of matter). Claims 1-9 are directed to an article of manufacture, claims 10-15 are directed to a process, and claims 16-20 are directed to a machine. All claims are directed to statutory categories and the analysis proceeds.

Combined Step 2A Prong One, Step 2A Prong Two, and Step 2B Analysis: Step 2A Prong One asks if the claim recites a judicial exception (abstract idea, law of nature, or natural phenomenon).
If the claim recites a judicial exception, analysis proceeds to Step 2A Prong Two, which asks if the claim recites additional elements that integrate the abstract idea into a practical application. If the claim does not integrate the judicial exception, analysis proceeds to Step 2B, which asks if the claim amounts to significantly more than the judicial exception. If the claim does not amount to significantly more than the judicial exception, the claim is not eligible subject matter under 35 U.S.C. 101. None of the claims represent an improvement to technology.

Regarding claim 1, the following are abstract ideas:

generating an anomaly score for each data element of each tabular data record; (Generating anomaly scores can be practically performed in the human mind. This is a mental process.)

generating an anomaly score for each attribute and each tabular data record using the evidence sets; and (Generating anomaly scores can be practically performed in the human mind. This is a mental process.)

defining an evidence set for each attribute and each tabular data record based on the anomaly scores for the data elements; (Defining/grouping data can be practically performed in the human mind. This is a mental process.)

The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

One or more computer storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform operations, the operations comprising: (This limitation recites generic computer parts and processes; this amounts to mere instructions to apply an exception.)

receiving a set of tabular data records, each tabular data record comprising data elements for a plurality of attributes, each data element providing a value for a corresponding attribute; (Receiving data is a known process in computing. This amounts to mere instructions to apply an exception.)

providing an output identifying one or more anomalous data subsets determined based on the anomaly scores for the attributes and tabular data records, each anomalous data subset identifying a subset of attributes and a subset of tabular data records. (Outputting data is an insignificant extra-solution activity. See MPEP § 2106.05(d)(II), list 3, ex. iv.)

Regarding claim 2, the rejection of claim 1 is incorporated herein. The following are abstract ideas:

wherein generating the anomaly score for a first data element of a first tabular data record comprises: (Generating anomaly scores can be practically performed in the human mind. This is a mental process.)

generating, …, a predicted value for an attribute corresponding to the first data element given one or more other data elements for the first tabular data record; and (Generating a value can be practically performed in the human mind. This is a mental process.)

determining a reconstruction loss based on the predicted value. (Determining a loss function can be practically performed in the human mind. This is a mental process.)

The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

using a machine learning model (This recites generic machine learning components and processes; this amounts to mere instructions to apply an exception.)

Regarding claim 3, the rejection of claim 1 is incorporated herein.
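The scoring recited in claim 2 above (predict each data element from the record's other elements, then take the reconstruction loss as the element's anomaly score) can be sketched as follows. This is a minimal illustration, not the applicant's claimed implementation: the `predict_fn` interface and the mean-of-the-other-elements toy predictor are assumptions standing in for the claimed machine learning model.

```python
import numpy as np

def anomaly_scores(records, predict_fn):
    """Score each data element of each tabular data record.

    records: (n_records, n_attributes) array of values.
    predict_fn(record, j): predicted value for attribute j given the
        record's other data elements (any trained model would do here).
    Returns an (n_records, n_attributes) array of squared
    reconstruction losses, one score per data element.
    """
    n, m = records.shape
    scores = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            predicted = predict_fn(records[i], j)
            # reconstruction loss: squared error between actual and predicted
            scores[i, j] = (records[i, j] - predicted) ** 2
    return scores

# Toy predictor (assumption): predict attribute j as the mean of the others.
toy = lambda rec, j: np.mean(np.delete(rec, j))

X = np.array([[1.0, 1.0, 1.0],
              [1.0, 9.0, 1.0]])  # the 9.0 is a planted anomaly
S = anomaly_scores(X, toy)
```

Under this toy predictor the planted anomaly receives the largest score, which is the behavior the claimed per-element scoring step relies on.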
Further, the following are abstract ideas:

wherein defining the evidence set for each attribute and each tabular data record based on the anomaly scores for the data elements comprises: (Defining/grouping data can be practically performed in the human mind. This is a mental process.)

assigning labels to the data elements based on the anomaly scores; and (Assigning labels based on scores can be practically performed in the human mind. This is a mental process.)

defining the evidence sets using the labels. (Defining/grouping data can be practically performed in the human mind. This is a mental process.)

Regarding claim 4, the rejection of claim 1 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the labels comprise a first label indicating a corresponding data element as a possibly anomaly and a second label indicating a corresponding data element as not a possible anomaly. (This is the insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated. See MPEP § 2106.05(g), 'Selecting a particular data source or type of data to be manipulated', ex. i-iv.)

Regarding claim 5, the rejection of claim 4 is incorporated herein. Further, the following is a continuation of an abstract idea:

wherein the evidence set for a first attribute comprises an indication of tabular data records in which the data element for the first attribute is labeled with the first label. (As the evidence set being defined is an abstract idea (see claim 1), specifying what the evidence set comprises is a continuation of the abstract idea, i.e., one could practically perform defining an evidence set, wherein the evidence set comprises an indication, in the human mind. This is a continuation of a mental process.)

Regarding claim 6, the rejection of claim 4 is incorporated herein. Further, the following is a continuation of an abstract idea:

wherein the evidence set for a first tabular data record comprises an indication of attributes in which the data element for the first tabular data record is labeled with the first label. (As the evidence set being defined is an abstract idea (see claim 1), specifying what the evidence set comprises is a continuation of the abstract idea, i.e., one could practically perform defining an evidence set, wherein the evidence set comprises an indication, in the human mind. This is a continuation of a mental process.)

Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, the following is a continuation of an abstract idea:

wherein the anomaly score for each attribute and each tabular data record comprises a Shapley value. (As the anomaly score being generated is an abstract idea (see claim 1), specifying what the anomaly score comprises is a continuation of the abstract idea, i.e., one could practically perform generation of anomaly scores, wherein the anomaly scores comprise Shapley values, in the human mind. This is a continuation of a mental process.)

Regarding claim 8, the rejection of claim 7 is incorporated herein. Further, the following is an abstract idea:

wherein the Shapley value for each attribute and each tabular data record is determined by defining a cooperative game using the evidence sets for attributes and records as players. (One could practically perform the defining of a cooperative game in the human mind (see [0052] of the specification, where defining a cooperative game entails defining a set of attributes as the set of players and performing calculations). This is a mental process.)

Regarding claim 9, the rejection of claim 1 is incorporated herein.
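Claim 8's cooperative-game formulation can be illustrated with a brute-force Shapley computation over a small set of players. This is a sketch under stated assumptions: the player names and the characteristic function below are invented stand-ins, since the Office Action does not reproduce the specification's actual value function.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings of the players.

    players: list of hashable player ids (e.g., evidence sets).
    value: characteristic function mapping a frozenset of players
        to the worth of that coalition.
    """
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            # marginal contribution of p to the coalition built so far
            phi[p] += value(with_p) - value(coalition)
            coalition = with_p
    n_fact = factorial(len(players))  # number of orderings averaged over
    return {p: phi[p] / n_fact for p in phi}

# Hypothetical game: a coalition's worth is how many "anomalous"
# players (evidence sets flagged as anomalous) it contains.
anomalous = {"attr_A", "rec_7"}
v = lambda coalition: len(coalition & anomalous)
phi = shapley_values(["attr_A", "attr_B", "rec_7"], v)
```

Because this toy game is additive, each anomalous player's Shapley value is exactly its own contribution, and the values sum to the worth of the grand coalition (the efficiency property). Brute force is $O(n!)$, which is why practical tools approximate Shapley values (e.g., the Kernel SHAP method cited later in this action).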
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the output comprises restructured tabular data in which the tabular data records and attributes are ordered based on the anomaly scores for the tabular data records and the attributes, and (This is the extra-solution activity of sorting information. See MPEP § 2106.05(II), list 2, ex. vi.)

wherein the restructured tabular data includes a visual indicator identifying a first anomalous data subset. (This is the extra-solution activity of displaying information. See MPEP § 2106.05(II), list 2, ex. iv.)

Regarding claim 10, the following are abstract ideas:

assigning, by the data element analysis component, a label to each data element indicative of whether each data element is anomalous; (Assigning a label to data can be practically performed in the human mind. This is a mental process.)

determining, by an evidence set component, an evidence set for each attribute and each record using the labels; (Determining a set of data can be practically performed in the human mind. This is a mental process.)

generating, by an anomaly scoring component, an anomaly score for each attribute and each record based on the evidence sets; and (Generating a score based on evidence sets can be practically performed in the human mind. This is a mental process.)

The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

A computer-implemented method comprising: (This recites generic computer components and processes. This amounts to mere instructions to apply an exception.)

receiving, by a data element analysis component, tabular data comprising a set of records, each record including data elements for a set of attributes; (Receiving data is a known process in computing. This amounts to mere instructions to apply an exception.)

outputting, by a user interface component, an indication of one or more anomalous data subsets based on the anomaly scores for the attributes and records, each anomalous data subset comprising a subset of attributes and a subset of records. (Outputting data is an insignificant extra-solution activity. See MPEP § 2106.05(d)(II), list 3, ex. iv.)

Regarding claim 11, the rejection of claim 10 is incorporated herein. Further, the following is an abstract idea:

generating an anomaly score for each data element, wherein the data elements are assigned labels based on the anomaly scores. (Generating a score can be practically performed in the human mind. This is a mental process.)

Regarding claim 12, the rejection of claim 11 is incorporated herein. Further, the following are abstract ideas:

wherein generating the anomaly score for a first data element for a first record comprises: (Generating a score can be practically performed in the human mind. This is a mental process.)

generating, …, a predicted value for an attribute corresponding to the first data element given one or more other data elements for the first record; and (Generating a value can be practically performed in the human mind. This is a mental process.)

determining a reconstruction loss based on the predicted value. (Determining a loss function can be practically performed in the human mind. This is a mental process.)
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

using a machine learning model (This recites generic machine learning components and processes; this amounts to mere instructions to apply an exception.)

Regarding claim 13, the rejection of claim 11 is incorporated herein. Further:

wherein the labels comprise a first label indicating a corresponding data element as a possibly anomaly and a second label indicating a corresponding data element as not a possible anomaly. (This is the insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated. See MPEP § 2106.05(g), 'Selecting a particular data source or type of data to be manipulated', ex. i-iv.)

Regarding claim 14, the rejection of claim 11 is incorporated herein. Further, the following is a continuation of an abstract idea:

wherein the anomaly score for each attribute and each tabular data record comprises a Shapley value. (As the anomaly score being generated is an abstract idea (see claim 1), specifying what the anomaly score comprises is a continuation of the abstract idea, i.e., one could practically perform generation of anomaly scores, wherein the anomaly scores comprise Shapley values, in the human mind. This is a continuation of a mental process.)

Regarding claim 15, the rejection of claim 11 is incorporated herein. The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the anomalous data subsets are ordered based on the anomaly scores for the subsets of attributes and the subsets of records corresponding to the anomalous data subsets. (This is the extra-solution activity of sorting information. See MPEP § 2106.05(II), list 2, ex. vi.)

Regarding claim 16, the following are abstract ideas:

generating, by a data element analysis component, an anomaly score for each data element in tabular data, the tabular data comprising a set of records, each record including data elements for a set of attributes; (Generating a score can be practically performed in the human mind. This is a mental process.)

determining, by an evidence set component, an evidence set for each attribute and each record using the labels; (Determining a set of data can be practically performed in the human mind. This is a mental process.)

generating, by an anomaly scoring component, an anomaly score for each attribute and each record based on the evidence sets; (Generating a score based on evidence sets can be practically performed in the human mind. This is a mental process.)

generating, by an anomalous data subset component, one or more anomalous data subsets based on the anomaly scores for the attributes and records, each anomalous data subset comprising a subset of attributes and a subset of records; and (Determining a set of data can be practically performed in the human mind. This is a mental process.)

assigning, by the data element analysis component, labels to the data elements based on the anomaly scores for the data elements; (Assigning a label to data can be practically performed in the human mind. This is a mental process.)

The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

A computer system comprising: (This recites generic computer components. This amounts to mere instructions to apply an exception.)

one or more processors; and (This recites generic computer components. This amounts to mere instructions to apply an exception.)

one or more computer storage media storing computer-useable instructions that, when used by the one or more processors, causes the one or more processors to perform operations comprising: (This recites generic computer components and processes. This amounts to mere instructions to apply an exception.)

outputting, by a user interface component, an indication of the one or more anomalous data subsets. (Outputting data is an insignificant extra-solution activity. See MPEP § 2106.05(d)(II), list 3, ex. iv.)

Regarding claim 17, the rejection of claim 16 is incorporated herein. Further, the following are abstract ideas:

wherein generating the anomaly score for a first data element of a first tabular data record comprises: (Generating anomaly scores can be practically performed in the human mind. This is a mental process.)

generating, …, a predicted value for an attribute corresponding to the first data element given one or more other data elements for the first tabular data record; and (Generating a value can be practically performed in the human mind. This is a mental process.)

determining a reconstruction loss based on the predicted value. (Determining a loss function can be practically performed in the human mind. This is a mental process.)

The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

using a machine learning model (This recites generic machine learning components and processes; this amounts to mere instructions to apply an exception.)

Regarding claim 18, the rejection of claim 16 is incorporated herein.
The following claim elements are additional elements which, taken alone or in combination with the other additional elements, do not integrate the judicial exception into a practical application nor amount to significantly more than the judicial exception:

wherein the labels comprise a first label indicating a corresponding data element as a possibly anomaly and a second label indicating a corresponding data element as not a possible anomaly. (This is the insignificant extra-solution activity of selecting a particular data source or type of data to be manipulated. See MPEP § 2106.05(g), 'Selecting a particular data source or type of data to be manipulated', ex. i-iv.)

Regarding claim 19, the rejection of claim 16 is incorporated herein. Further, the following is a continuation of an abstract idea:

wherein the anomaly score for each attribute and each tabular data record comprises a Shapley value. (As the anomaly score being generated is an abstract idea (see claim 16), specifying what the anomaly score comprises is a continuation of the abstract idea, i.e., one could practically perform generation of anomaly scores, wherein the anomaly scores comprise Shapley values, in the human mind. This is a continuation of a mental process.)

Regarding claim 20, the rejection of claim 1 is incorporated herein. Further, the following are abstract ideas:

wherein the output comprises restructured tabular data in which the tabular data records and attributes are ordered based on the anomaly scores for the tabular data records and the attributes, and (This is the extra-solution activity of sorting information. See MPEP § 2106.05(II), list 2, ex. vi.)

wherein the restructured tabular data includes a visual indicator identifying a first anomalous data subset. (This is the extra-solution activity of displaying information. See MPEP § 2106.05(II), list 2, ex. iv.)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Antwarg ("Explaining Anomalies Detected by Autoencoders Using SHAP", July 2, 2020).

Regarding claim 1, Antwarg teaches

One or more computer storage media storing computer-useable instructions that, when used by a computing device, cause the computing device to perform operations, the operations comprising: (Page 12 states "The flow chart describing the process of providing an explanation for an anomaly revealed by an autoencoder can be seen in Figure 1. The code for the explanation method can be found in github." As there is code for the method, one of ordinary skill in the art would understand that, when the method is performed, computer storage media is necessary for the code/computer-useable instructions to be stored in. Additionally, one of ordinary skill in the art would understand that the code/computer-useable instructions, when used by a processor/computing device, would cause the computing device to perform the operations on the computer storage media.)

receiving a set of tabular data records, each tabular data record comprising data elements for a plurality of attributes, each data element providing a value for a corresponding attribute; (Page 17 states "We evaluated our suggested method for explaining anomalies using four different approaches: (1) we performed a user study conducted on real data with domain experts, (2) we used simulated data in which we know which features should explain the anomalies, (3) we assessed the robustness of the explanations on real-world data, and (4) we examined the effect of changing the value of the features that explain the anomaly on the anomaly score." Page 18 shows the tabular dataset, where the attributes are in the top row, with corresponding data elements with values in the boxes underneath.)
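The per-feature reconstruction-error bookkeeping that the passages quoted from Antwarg describe (rank features by reconstruction error, then keep the smallest prefix whose errors cover an adjustable percent of the total loss $L(X, X')$) can be sketched as follows. The function name, the `coverage` parameter, and the hand-made reconstruction vector are illustrative assumptions; in the paper the reconstruction would come from a trained autoencoder.

```python
import numpy as np

def top_m_features(x, x_rec, coverage=0.8):
    """Rank features by squared reconstruction error and keep the
    smallest prefix whose errors cover `coverage` of the total loss.

    x, x_rec: original and reconstructed feature vectors.
    Returns (indices of the kept topMfeatures, per-feature errors).
    """
    errors = (x - x_rec) ** 2           # per-feature reconstruction error
    order = np.argsort(errors)[::-1]    # largest error first
    total = errors.sum()
    kept, running = [], 0.0
    for j in order:
        kept.append(j)
        running += errors[j]
        if running >= coverage * total:  # "adjustable percent" of L(X, X')
            break
    return kept, errors

x     = np.array([2.0, 5.0, 1.0, 0.0])
x_rec = np.array([2.1, 1.0, 1.0, 0.2])  # feature 1 reconstructed poorly
top, err = top_m_features(x, x_rec)
```

Here a single badly reconstructed feature dominates the loss, so the top-M list reduces to that one feature; with a lower `coverage` threshold, fewer features would be kept, and with a higher one, more.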
generating an anomaly score for each data element of each tabular data record; (Page 9 states "Given input instance $X$ with a set of features $x_1, x_2, \ldots, x_n$ and its corresponding output $X'$ and reconstructed values $x_1', x_2', \ldots, x_n'$, using an autoencoder model $f$, the reconstruction error of the instance is the sum of errors of each feature $L(X, X') = \sum_{i=1}^{n}(x_i - x_i')^2$. Let $x_{(1)}, x_{(2)}, \ldots, x_{(n)}$ be a reordering of the features in errorList, such that $|x_{(1)} - x_{(1)}'| \ge \ldots \ge |x_{(m)} - x_{(m)}'|$; $topMfeatures = \{x_{(1)}, \ldots, x_{(m)}\}$ contains a set of features for which the total corresponding errors $topMerrors: |x_{(1)} - x_{(1)}'| \ge \ldots \ge |x_{(m)} - x_{(m)}'|$ represent an adjustable percent of $L(X, X')$." Page 12, Figure 1, shows that the instances are all input into the autoencoder model. Therefore, each feature of each instance (which is each data element) will be put through the autoencoder model to obtain reconstruction errors of each feature, which will be recorded in the errorList.)

defining an evidence set for each attribute and each tabular data record based on the anomaly scores for the data elements; (As each attribute is given a score, one of ordinary skill in the art would realize that the scores form an evidence set for each attribute. The evidence set for each tabular data record is the topMfeatures, explained on page 9, as it includes the features with the highest errors. Therefore, it would be obvious to one of ordinary skill in the art that an evidence set is defined based on the scores for attributes and tabular data records.)

generating an anomaly score for each attribute and each tabular data record using the evidence sets; and (Page 9 states "First, we extract the features with the highest reconstruction error from the ErrorList and save them in the topMfeatures list. Next, for each feature $x'$ in topMfeatures, we use Kernel SHAP to obtain the SHAP values, i.e., the importance of each feature $x_1, x_2, \ldots, x_n$ (except for $x_i$) in predicting the examined feature $i$. Kernel SHAP receives $f$ and a background set with $j$ instances for building the local explanation model and calculating the SHAP values. Then, $f$ takes $X$ and $i$ as input and predicts $X'$; the value in the $i$'th feature (a feature in the topMfeatures) is returned by Algorithm 2. The result of this step is a two-dimensional list shaptopMfeatures, in which each row represents the SHAP values for one feature from the topMfeatures." The SHAP values are interpreted as the anomaly scores.)

providing an output identifying one or more anomalous data subsets determined based on the anomaly scores for the attributes and tabular data records, each anomalous data subset identifying a subset of attributes and a subset of tabular data records. (Page 16, Figure 4(b) shows an output that identifies subsets of anomalous data, with the anomalous record (TopErrorsList) and attributes (Contributing to anomaly).)

Regarding claim 2, the rejection of claim 1 is incorporated herein. Further, Antwarg teaches

wherein generating the anomaly score for a first data element of a first tabular data record comprises: generating, using a machine learning model, a predicted value for an attribute corresponding to the first data element given one or more other data elements for the first tabular data record; and (Page 9 states "Given input instance $X$ with a set of features $x_1, x_2, \ldots, x_n$ and its corresponding output $X'$ and reconstructed values $x_1', x_2', \ldots, x_n'$, using an autoencoder model $f$, the reconstruction error of the instance is the sum of errors of each feature $L(X, X') = \sum_{i=1}^{n}(x_i - x_i')^2$. Let $x_{(1)}, x_{(2)}, \ldots, x_{(n)}$ be a reordering of the features in errorList, such that $|x_{(1)} - x_{(1)}'| \ge \ldots \ge |x_{(m)} - x_{(m)}'|$; $topMfeatures = \{x_{(1)}, \ldots, x_{(m)}\}$ contains a set of features for which the total corresponding errors $topMerrors: |x_{(1)} - x_{(1)}'| \ge \ldots \ge |x_{(m)} - x_{(m)}'|$ represent an adjustable percent of $L(X, X')$." The first feature is interpreted as the first data element and the other features are interpreted as the one or more other data elements. The reconstructed value $x_1'$ is interpreted as the predicted value.)

determining a reconstruction loss based on the predicted value. (The reconstruction error $L(X, X') = \sum_{i=1}^{n}(x_i - x_i')^2$ is interpreted as the reconstruction loss, which is determined based on the predicted value as the predicted value is part of the loss function.)

Regarding claim 3, the rejection of claim 1 is incorporated herein. Further, Antwarg teaches

wherein defining the evidence set for each attribute and each tabular data record based on the anomaly scores for the data elements comprises: assigning labels to the data elements based on the anomaly scores; and (Page 9 states "Given input instance $X$ with a set of features $x_1, x_2, \ldots, x_n$ and its corresponding output $X'$ and reconstructed values $x_1', x_2', \ldots, x_n'$, using an autoencoder model $f$, the reconstruction error of the instance is the sum of errors of each feature $L(X, X') = \sum_{i=1}^{n}(x_i - x_i')^2$. Let $x_{(1)}, x_{(2)}, \ldots, x_{(n)}$ be a reordering of the features in errorList, such that $|x_{(1)} - x_{(1)}'| \ge \ldots \ge |x_{(m)} - x_{(m)}'|$; $topMfeatures = \{x_{(1)}, \ldots, x_{(m)}\}$ contains a set of features for which the total corresponding errors $topMerrors: |x_{(1)} - x_{(1)}'| \ge \ldots \ge |x_{(m)} - x_{(m)}'|$ represent an adjustable percent of $L(X, X')$." As grouping the data elements is assigning a category to the data elements, adding the features to topMfeatures is interpreted as labeling the data elements.)
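The label-then-group flow recited in claims 3-6 (assign each element a "possible anomaly" or "not a possible anomaly" label from its score, then collect the flagged elements into per-attribute and per-record evidence sets) can be sketched as follows. The threshold rule is an illustrative assumption; the claims do not fix how labels are derived from the anomaly scores.

```python
import numpy as np

def evidence_sets(scores, threshold):
    """Label each data element, then group the labels into evidence sets.

    scores: (n_records, n_attributes) anomaly scores per data element.
    The first label ("possible anomaly") is assigned where the score
    exceeds the threshold; the second label otherwise.

    Returns:
      per_attribute[j]: records whose element for attribute j got the
          first label (the claim 5 evidence set for an attribute).
      per_record[i]: attributes whose element in record i got the
          first label (the claim 6 evidence set for a record).
    """
    labels = scores > threshold  # True = first label (possible anomaly)
    n, m = labels.shape
    per_attribute = {j: {i for i in range(n) if labels[i, j]} for j in range(m)}
    per_record = {i: {j for j in range(m) if labels[i, j]} for i in range(n)}
    return per_attribute, per_record

S = np.array([[0.1, 0.0, 7.2],
              [0.2, 6.5, 8.0]])
per_attr, per_rec = evidence_sets(S, threshold=1.0)
```

These two dictionaries are exactly the dual views the claims describe: one indexes flagged records by attribute, the other indexes flagged attributes by record, and both are derived purely from the element-level labels.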
defining the evidence sets using the labels. (The evidence set for each tabular data record and attribute is defined by the topMfeatures, explained on page 9, as it includes the features with the highest errors. Therefore, it would be obvious to one of ordinary skill in the art that an evidence set is defined based on the labels.)

Regarding claim 4, the rejection of claim 1 is incorporated herein. Further, Antwarg teaches wherein the labels comprise a first label indicating a corresponding data element as a possible anomaly and a second label indicating a corresponding data element as not a possible anomaly. (Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2. Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." As grouping the data elements is assigning a category to the data elements, adding the features to topMfeatures is interpreted as labeling the data elements with a first label, which one of ordinary skill in the art would understand indicates a possible anomaly, based on the high error/anomaly score. Not including the features in topMfeatures is interpreted as the second label, which one of ordinary skill in the art would understand indicates not a possible anomaly, as the error/anomaly score is lower.)

Regarding claim 5, the rejection of claim 4 is incorporated herein.
Further, Antwarg teaches wherein the evidence set for a first attribute comprises an indication of tabular data records in which the data element for the first attribute is labeled with the first label. (As each attribute is given a score and is or is not included in the topMfeatures, one of ordinary skill in the art would realize that the scores form an evidence set for each attribute, which has an indication of tabular data records in which the attribute is labeled with the first label.)

Regarding claim 6, the rejection of claim 4 is incorporated herein. Further, Antwarg teaches wherein the evidence set for a first tabular data record comprises an indication of attributes in which the data element for the first tabular data record is labeled with the first label. (As each attribute is given a score and is or is not included in the topMfeatures, one of ordinary skill in the art would realize that the features that are included in the topMfeatures form an evidence set for each record.)

Regarding claim 7, the rejection of claim 1 is incorporated herein. Further, Antwarg teaches wherein the anomaly score for each attribute and each tabular data record comprises a Shapley value. (Page 9 states "First, we extract the features with the highest reconstruction error from the ErrorList and save them in the topMfeatures list. Next, for each feature x' in topMfeatures, we use Kernel SHAP to obtain the SHAP values, i.e., the importance of each feature x_1, x_2, …, x_n (except for x_i) in predicting the examined feature i. Kernel SHAP receives f and a background set with j instances for building the local explanation model and calculating the SHAP values. Then, f takes X and i as input and predicts X'; the value in the i'th feature (a feature in the topMfeatures) is returned by Algorithm 2. The result of this step is a two-dimensional list shaptopMfeatures, in which each row represents the SHAP values for one feature from the topMfeatures."
The SHAP values are interpreted as the anomaly scores.)

Regarding claim 8, the rejection of claim 7 is incorporated herein. Further, Antwarg teaches wherein the Shapley value for each attribute and each tabular data record is determined by defining a cooperative game using the evidence sets for attributes and records as players. (Page 5 states "SHAP has a sound theoretic basis, which is a benefit in regulated scenarios. It uses Shapley values from game theory to explain a specific prediction by assigning an importance value (SHAP value) to each feature that has the following properties: (1) local accuracy - the explanation model has to at least match the output of original model; (2) missingness - features missing in the original input must have no impact; (3) consistency - if we revise a model such that it depends more on a certain feature, then the importance of that feature should not decrease, regardless of other features." Therefore, finding a Shapley value is defining a cooperative game. Page 9 states "First, we extract the features with the highest reconstruction error from the ErrorList and save them in the topMfeatures list. Next, for each feature x' in topMfeatures, we use Kernel SHAP to obtain the SHAP values, i.e., the importance of each feature x_1, x_2, …, x_n (except for x_i) in predicting the examined feature i. Kernel SHAP receives f and a background set with j instances for building the local explanation model and calculating the SHAP values. Then, f takes X and i as input and predicts X'; the value in the i'th feature (a feature in the topMfeatures) is returned by Algorithm 2. The result of this step is a two-dimensional list shaptopMfeatures, in which each row represents the SHAP values for one feature from the topMfeatures." As the topMfeatures are used, the evidence sets for attributes and records are used as players.)

Regarding claim 9, the rejection of claim 1 is incorporated herein. Further, Antwarg teaches wherein the output comprises restructured tabular data in which the tabular data records and attributes are ordered based on the anomaly scores for the tabular data records and the attributes, and wherein the restructured tabular data includes a visual indicator identifying a first anomalous data subset. (Page 16, Figure 1 shows the restructured tabular data, where the TopErrors list comprising instances of data (data records) is ordered based on the anomaly score. The "Contributing to anomaly" column comprises the attributes ordered by anomaly score. The coloring is interpreted as the visual indicator identifying the anomalous data subset.)

Regarding claim 10, Antwarg teaches A computer-implemented method comprising: (Page 12 states "The flow chart describing the process of providing an explanation for an anomaly revealed by an autoencoder can be seen in Figure 1. The code for the explanation method can be found in github." As there is code for the method, one of ordinary skill in the art would understand that a computer is used to implement the method.)
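The cooperative-game footing relied on in the claim 8 analysis can be illustrated with a toy sketch. This is a hypothetical, brute-force computation of exact Shapley values by coalition enumeration; the function name shapley_values, the player names, and the additive payoff are assumptions for illustration. Kernel SHAP, as used in the cited Antwarg method, approximates these same values via a weighted local regression rather than enumerating every coalition.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions.
    `value` maps a frozenset of players to the coalition's payoff."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):                       # coalition sizes 0..n-1
            for coal in combinations(others, k):
                s = frozenset(coal)
                # standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(s | {i}) - value(s))
        phi[i] = total
    return phi

# Toy additive game: payoff is the sum of members' standalone
# contributions, so each player's Shapley value equals its contribution.
contrib = {"a": 3.0, "b": 1.0, "c": 0.0}
phi = shapley_values(list(contrib), lambda s: sum(contrib[p] for p in s))
```

The additive game is chosen because its Shapley values are known in closed form, which makes the enumeration easy to check by hand.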
receiving, by a data element analysis component, tabular data comprising a set of records, each record including data elements for a set of attributes; (Page 17 states "We evaluated our suggested method for explaining anomalies using four different approaches: (1) we performed a user study conducted on real data with domain experts, (2) we used simulated data in which we know which features should explain the anomalies, (3) we assessed the robustness of the explanations on real-world data, and (4) We examined the affect of changing the value of the features that explain the anomaly on the anomaly score." Page 18 shows the tabular dataset, where the attributes are in the top row, with corresponding data elements with values in the boxes underneath. The code used to implement this step and the next step is interpreted as the data element analysis component.) assigning, by the data element analysis component, a label to each data element indicative of whether each data element is anomalous; (Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2. Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." As grouping the data elements is assigning a category to the data elements, adding the features to topMfeatures is interpreted as labeling the data elements with a first label, which one of ordinary skill in the art would understand indicates a possible anomaly, based on the high error/anomaly score.
Not including the features in topMfeatures is interpreted as the second label, which one of ordinary skill in the art would understand indicates not a possible anomaly, as the error/anomaly score is lower.) determining, by an evidence set component, an evidence set for each attribute and each record using the labels; (The evidence set for each tabular data record and attribute is defined by the topMfeatures, explained on page 9, as it includes the features with the highest errors. Therefore, it would be obvious to one of ordinary skill in the art that an evidence set is defined based on the labels.) generating, by an anomaly scoring component, an anomaly score for each attribute and each record based on the evidence sets; and (Page 9 states "First, we extract the features with the highest reconstruction error from the ErrorList and save them in the topMfeatures list. Next, for each feature x' in topMfeatures, we use Kernel SHAP to obtain the SHAP values, i.e., the importance of each feature x_1, x_2, …, x_n (except for x_i) in predicting the examined feature i. Kernel SHAP receives f and a background set with j instances for building the local explanation model and calculating the SHAP values. Then, f takes X and i as input and predicts X'; the value in the i'th feature (a feature in the topMfeatures) is returned by Algorithm 2. The result of this step is a two-dimensional list shaptopMfeatures, in which each row represents the SHAP values for one feature from the topMfeatures." The SHAP values are interpreted as the anomaly scores. The code used to implement this step is interpreted as the anomaly scoring component.) outputting, by a user interface component, an indication of one or more anomalous data subsets based on the anomaly scores for the attributes and records, each anomalous data subset comprising a subset of attributes and a subset of records.
(Page 16, Figure 4(b) shows an output that identifies subsets of anomalous data, with the anomalous records (TopErrorsList) and attributes (Contributing to anomaly). The code used to implement this step is interpreted as the user interface component.)

Regarding claim 11, the rejection of claim 10 is incorporated herein. Further, Antwarg teaches generating an anomaly score for each data element, wherein the data elements are assigned labels based on the anomaly scores. (Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2. Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." Page 12, Figure 1, shows that the instances are all input into the autoencoder model. Therefore, each feature of each instance (which is each data element) will be put through the autoencoder model to obtain reconstruction errors of each feature, which will be recorded in the errorList.)

Regarding claim 12, the rejection of claim 11 is incorporated herein.
Further, Antwarg teaches wherein generating the anomaly score for a first data element for a first record comprises: generating, using a machine learning model, a predicted value for an attribute corresponding to the first data element given one or more other data elements for the first tabular data record; and (Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2. Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." The first feature is interpreted as the first data element and the other features are interpreted as the one or more other data elements. The reconstructed value x'_1 is interpreted as the predicted value.) determining a reconstruction loss based on the predicted value. (The reconstruction error L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2 is interpreted as the reconstruction loss, which is determined based on the predicted value, as the predicted value is part of the loss function.)

Regarding claim 13, the rejection of claim 11 is incorporated herein. Further, Antwarg teaches wherein the labels comprise a first label indicating a corresponding data element as a possible anomaly and a second label indicating a corresponding data element as not a possible anomaly.
(Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2. Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." As grouping the data elements is assigning a category to the data elements, adding the features to topMfeatures is interpreted as labeling the data elements with a first label, which one of ordinary skill in the art would understand indicates a possible anomaly, based on the high error/anomaly score. Not including the features in topMfeatures is interpreted as the second label, which one of ordinary skill in the art would understand indicates not a possible anomaly, as the error/anomaly score is lower.)

Regarding claim 14, the rejection of claim 11 is incorporated herein. Further, Antwarg teaches wherein the anomaly score for each attribute and each tabular data record comprises a Shapley value. (Page 9 states "First, we extract the features with the highest reconstruction error from the ErrorList and save them in the topMfeatures list. Next, for each feature x' in topMfeatures, we use Kernel SHAP to obtain the SHAP values, i.e., the importance of each feature x_1, x_2, …, x_n (except for x_i) in predicting the examined feature i. Kernel SHAP receives f and a background set with j instances for building the local explanation model and calculating the SHAP values.
Then, f takes X and i as input and predicts X'; the value in the i'th feature (a feature in the topMfeatures) is returned by Algorithm 2. The result of this step is a two-dimensional list shaptopMfeatures, in which each row represents the SHAP values for one feature from the topMfeatures." The SHAP values are interpreted as the anomaly scores.)

Regarding claim 15, the rejection of claim 11 is incorporated herein. Further, Antwarg teaches wherein the Shapley value for each attribute and each tabular data record is determined by defining a cooperative game using the evidence sets for attributes and records as players. (Page 5 states "SHAP has a sound theoretic basis, which is a benefit in regulated scenarios. It uses Shapley values from game theory to explain a specific prediction by assigning an importance value (SHAP value) to each feature that has the following properties: (1) local accuracy - the explanation model has to at least match the output of original model; (2) missingness - features missing in the original input must have no impact; (3) consistency - if we revise a model such that it depends more on a certain feature, then the importance of that feature should not decrease, regardless of other features." Therefore, finding a Shapley value is defining a cooperative game. Page 9 states "First, we extract the features with the highest reconstruction error from the ErrorList and save them in the topMfeatures list. Next, for each feature x' in topMfeatures, we use Kernel SHAP to obtain the SHAP values, i.e., the importance of each feature x_1, x_2, …, x_n (except for x_i) in predicting the examined feature i. Kernel SHAP receives f and a background set with j instances for building the local explanation model and calculating the SHAP values. Then, f takes X and i as input and predicts X'; the value in the i'th feature (a feature in the topMfeatures) is returned by Algorithm 2.
The result of this step is a two-dimensional list shaptopMfeatures, in which each row represents the SHAP values for one feature from the topMfeatures." The SHAP values are interpreted as the anomaly scores. As the topMfeatures are used, the evidence sets for attributes and records are used as players.)

Regarding claim 15, the rejection of claim 11 is incorporated herein. Further, Antwarg teaches wherein the anomalous data subsets are ordered based on the anomaly scores for the subsets of attributes and the subsets of records corresponding to the anomalous data subsets. (Page 16, Figure 1 shows the restructured tabular data, where the TopErrors list comprising instances of data (data records) is ordered based on the anomaly score. The "Contributing to anomaly" column comprises the attributes ordered by anomaly score. The coloring is interpreted as the visual indicator identifying the anomalous data subset.)

Regarding claim 16, Antwarg teaches A computer system comprising: (Page 12 states "The flow chart describing the process of providing an explanation for an anomaly revealed by an autoencoder can be seen in Figure 1. The code for the explanation method can be found in github."
As there is code for the method, one of ordinary skill in the art would understand that a computer system is necessary for the execution of the method.) one or more processors; and (As there is code for the method, one of ordinary skill in the art would understand that a processor is necessary for the method to be performed.) one or more computer storage media storing computer-useable instructions that, when used by the one or more processors, causes the one or more processors to perform operations comprising: (Additionally, one of ordinary skill in the art would understand that the code/computer-useable instructions, when used by a processor/computing device, would cause the computing device to perform the operations on the computer storage media.) generating, by a data element analysis component, an anomaly score for each data element in tabular data, the tabular data comprising a set of records, each record including data elements for a set of attributes; (Page 17 states "We evaluated our suggested method for explaining anomalies using four different approaches: (1) we performed a user study conducted on real data with domain experts, (2) we used simulated data in which we know which features should explain the anomalies, (3) we assessed the robustness of the explanations on real-world data, and (4) We examined the affect of changing the value of the features that explain the anomaly on the anomaly score." Page 18 shows the tabular dataset, where the attributes are in the top row, with corresponding data elements with values in the boxes underneath. Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2.
Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." Page 12, Figure 1, shows that the instances are all input into the autoencoder model. Therefore, each feature of each instance (which is each data element) will be put through the autoencoder model to obtain reconstruction errors of each feature, which will be recorded in the errorList. The code for this step and the next step is interpreted as the data element analysis component.) assigning, by the data element analysis component, labels to the data elements based on the anomaly scores for the data elements; (Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2. Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." As grouping the data elements is assigning a category to the data elements, adding the features to topMfeatures is interpreted as labeling the data elements with a first label, which one of ordinary skill in the art would understand indicates a possible anomaly, based on the high error/anomaly score.
Not including the features in topMfeatures is interpreted as the second label, which one of ordinary skill in the art would understand indicates not a possible anomaly, as the error/anomaly score is lower.) determining, by an evidence set component, an evidence set for each attribute and each record using the labels; (The evidence set for each tabular data record and attribute is defined by the topMfeatures, explained on page 9, as it includes the features with the highest errors. Therefore, it would be obvious to one of ordinary skill in the art that an evidence set is defined based on the labels. The code for this step is interpreted as the evidence set component.) generating, by an anomaly scoring component, an anomaly score for each attribute and each record based on the evidence sets; (Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2. Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." Page 12, Figure 1, shows that the instances are all input into the autoencoder model. Therefore, each feature of each instance (which is each data element) will be put through the autoencoder model to obtain reconstruction errors of each feature, which will be recorded in the errorList. The code for this step is interpreted as the anomaly scoring component.)
generating, by an anomalous data subset component, one or more anomalous data subsets based on the anomaly scores for the attributes and records, each anomalous data subset comprising a subset of attributes and a subset of records; and (Page 16, Figure 4(b) shows an output that identifies subsets of anomalous data, with the anomalous records (TopErrorsList) and attributes (Contributing to anomaly). The code used to implement this step is interpreted as the anomalous data subset component.) outputting, by a user interface component, an indication of the one or more anomalous data subsets. (Page 16, Figure 4(b) shows an output that identifies subsets of anomalous data, with the anomalous records (TopErrorsList) and attributes (Contributing to anomaly). The code used to implement this step is interpreted as the user interface component.)

Regarding claim 17, the rejection of claim 16 is incorporated herein. Further, Antwarg teaches generating, using a machine learning model, a predicted value for an attribute corresponding to the first data element given one or more other data elements for the first tabular data record; and (Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2. Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." The first feature is interpreted as the first data element and the other features are interpreted as the one or more other data elements.
The reconstructed value x'_1 is interpreted as the predicted value.) determining a reconstruction loss based on the predicted value. (The reconstruction error L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2 is interpreted as the reconstruction loss, which is determined based on the predicted value, as the predicted value is part of the loss function.)

Regarding claim 18, the rejection of claim 16 is incorporated herein. Further, Antwarg teaches wherein the labels comprise a first label indicating a corresponding data element as a possible anomaly and a second label indicating a corresponding data element as not a possible anomaly. (Page 9 states "Given input instance X with a set of features x_1, x_2, …, x_n and its corresponding output X' and reconstructed values x'_1, x'_2, …, x'_n, using an autoencoder model f, the reconstruction error of the instance is the sum of errors of each feature L(X, X') = Σ_{i=1}^{n} (x_i - x'_i)^2. Let x_(1), x_(2), …, x_(n) be a reordering of the features in errorList, such that |x_(1) - x'_(1)| ≥ … ≥ |x_(m) - x'_(m)|; topMfeatures = {x_(1), …, x_(m)} contains a set of features for which the total corresponding errors topMerrors: |x_(1) - x'_(1)|, …, |x_(m) - x'_(m)| represent an adjustable percent of L(X, X')." As grouping the data elements is assigning a category to the data elements, adding the features to topMfeatures is interpreted as labeling the data elements with a first label, which one of ordinary skill in the art would understand indicates a possible anomaly, based on the high error/anomaly score. Not including the features in topMfeatures is interpreted as the second label, which one of ordinary skill in the art would understand indicates not a possible anomaly, as the error/anomaly score is lower.)

Regarding claim 19, the rejection of claim 16 is incorporated herein.
Further, Antwarg teaches wherein the anomaly score for each attribute and each tabular data record comprises a Shapley value. (Page 9 states "First, we extract the features with the highest reconstruction error from the ErrorList and save them in the topMfeatures list. Next, for each feature x' in topMfeatures, we use Kernel SHAP to obtain the SHAP values, i.e., the importance of each feature x_1, x_2, …, x_n (except for x_i) in predicting the examined feature i. Kernel SHAP receives f and a background set with j instances for building the local explanation model and calculating the SHAP values. Then, f takes X and i as input and predicts X'; the value in the i'th feature (a feature in the topMfeatures) is returned by Algorithm 2. The result of this step is a two-dimensional list shaptopMfeatures, in which each row represents the SHAP values for one feature from the topMfeatures." The SHAP values are interpreted as the anomaly scores.)

Regarding claim 20, the rejection of claim 1 is incorporated herein. Further, Antwarg teaches wherein the output comprises restructured tabular data in which the records and attributes are ordered based on the anomaly scores for the records and the attributes, and wherein the restructured tabular data includes a visual indicator identifying a first anomalous data subset. (Page 16, Figure 1 shows the restructured tabular data, where the TopErrors list comprising instances of data (data records) is ordered based on the anomaly score. The "Contributing to anomaly" column comprises the attributes ordered by anomaly score. The coloring is interpreted as the visual indicator identifying the anomalous data subset.)

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSICA THUY PHAM, whose telephone number is (571) 272-2605. The examiner can normally be reached Monday - Friday, 9:00 A.M. - 5:00 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.T.P./
Examiner, Art Unit 2121

/MARSHALL L WERNER/
Primary Examiner, Art Unit 2125

Prosecution Timeline

Mar 20, 2023: Application Filed
Dec 23, 2025: Non-Final Rejection (§101, §103, §112)
Apr 02, 2026: Examiner Interview Summary
Apr 02, 2026: Applicant Interview (Telephonic)


