Prosecution Insights
Last updated: April 19, 2026
Application No. 18/045,664

DETERMINATION OF EVENT TYPES FROM AUTOENCODER-BASED UNSUPERVISED EVENT DETECTION

Non-Final OA: §101, §103
Filed
Oct 11, 2022
Examiner
MRABI, HASSAN
Art Unit
2147
Tech Center
2100 — Computer Architecture & Software
Assignee
DELL PRODUCTS, L.P.
OA Round
1 (Non-Final)
78%
Grant Probability
Favorable
1-2
OA Rounds
2y 6m
To Grant
99%
With Interview

Examiner Intelligence

Grants 78% — above average
78%
Career Allow Rate
285 granted / 363 resolved
+23.5% vs TC avg
Strong +32% interview lift
+32.4%
Interview Lift
with vs. without interview, resolved cases
Typical timeline
2y 6m
Avg Prosecution
19 currently pending
Career history
382
Total Applications
across all art units

Statute-Specific Performance

§101
16.7%
-23.3% vs TC avg
§103
54.4%
+14.4% vs TC avg
§102
9.9%
-30.1% vs TC avg
§112
6.2%
-33.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 363 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

This Office Action is sent in response to Applicant's communication received on 10/11/2022 for application number 18/045,664. The Office hereby acknowledges receipt of the following, placed of record in the file: Specification, Drawings, Abstract, Oath/Declaration, and Claims. Claims 1-9, 10-17, and 18-20 are presented for examination.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 10/02/2025, 09/03/2025, 07/10/2025, 03/21/2025, 01/14/2025, 10/03/2024, 08/06/2024, 02/14/2024, 08/15/2023, 04/12/2023, and 1/21/2022 were filed prior to the current Office Action. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9, 10-17, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Claims 1-9, 10-17, and 18-20 are drawn to a method, each of which is within the four statutory categories (e.g., a process, a machine).

Step 2A, Prong One: In Prong One of Step 2A, the claims are analyzed to evaluate whether they recite a judicial exception. Claim 1 recites:
obtaining a database of reconstruction error vector samples; clustering the reconstruction error vector samples sampled from said database into clusters of reconstruction error vector samples; selecting candidate samples from a first cluster included in the clusters; assigning a label to each of the candidate samples; and applying the label to all reconstruction error vector samples in the first cluster when the label assigned to a sufficient subset of the candidate samples is the same.

The limitation "clustering the reconstruction error vector samples sampled from said database into clusters of reconstruction error vector samples" recites a mathematical concept. For example, the claimed "clustering," under its broadest reasonable interpretation when read in light of the specification, encompasses using mathematical formulas, as described in paragraphs [0012-0014], to cluster the samples and determine the correct cluster and label. The limitation "assigning a label to each of the candidate samples" likewise recites a mathematical concept. For example, the claimed "assigning," under its broadest reasonable interpretation when read in light of the specification, encompasses using mathematical formulas, as described in paragraphs [0012-0014], to generate the correct label, assign a label to each of the candidate samples, and apply the label to all reconstruction error vector samples in the first cluster. The limitations "obtaining a database of reconstruction error vector samples" and "selecting candidate samples from a first cluster included in the clusters" are concepts that can practically be performed in the human mind, or by a human using pen and paper, because they include mental processes such as observations, evaluations, judgments, and opinions.
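For readers tracking the technical substance of claim 1, the clustering-and-labeling steps can be sketched as follows. This is an illustrative sketch only: the function name, the label strings, and the agreement ratio are hypothetical, and it is not the applicant's actual implementation.

```python
# Sketch of claim 1's labeling steps: given clusters of reconstruction
# error vector samples and labels assigned to a few candidate samples per
# cluster, apply a label to the whole cluster only when a sufficient
# subset of the candidates received the same label.
from collections import Counter

def propagate_labels(clusters, candidate_labels, agreement=0.8):
    """clusters: {cluster_id: [sample_id, ...]}
    candidate_labels: {cluster_id: [label, ...]} for the selected candidates.
    Returns {sample_id: label} covering only clusters whose candidates agree.
    The 0.8 agreement ratio is an assumption, not from the record."""
    labels = {}
    for cid, members in clusters.items():
        cands = candidate_labels.get(cid, [])
        if not cands:
            continue
        label, count = Counter(cands).most_common(1)[0]
        # "Sufficient subset" condition: enough candidates share the label.
        if count / len(cands) >= agreement:
            for sid in members:
                labels[sid] = label
    return labels

clusters = {0: ["a", "b", "c"], 1: ["d", "e"]}
cands = {0: ["fan_fault", "fan_fault", "fan_fault"], 1: ["ok", "fan_fault"]}
out = propagate_labels(clusters, cands, agreement=0.8)
```

A cluster whose candidates disagree (cluster 1 above) is simply left unlabeled, which is one reading of the claim's "sufficient subset" condition.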
Step 2A, Prong Two: Claim 1 recites additional elements such as "applying the label to all reconstruction error vector samples in the first cluster when the label assigned to a sufficient subset of the candidate samples is the same," which are recited at a high level of generality. These elements merely invoke a generic computer (or an equivalent) along with the judicial exception, include instructions to implement an abstract idea on a computer, or use a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f). The "applying" step is an additional element that amounts to no more than the words "apply it": mere instructions to implement an abstract idea or other exception on a computer. The limitation does not integrate the judicial exception into a practical application; therefore, the additional element does not amount to an inventive concept. Dependent claims 2-9, 11-17, and 19-20 fail to include any additional elements; each of the limitations recited in those claims is further part of the abstract idea identified by the Examiner for the respective claim. The Examiner has therefore determined that the elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claims are directed to an abstract idea.

Step 2B: The claim does not provide an inventive concept (significantly more than the abstract idea), and the claim is therefore ineligible. The "applying" step again amounts to the mere words "apply it." The "obtaining" and "assigning" steps are mere data gathering and output, recited at a high level of generality, amounting to processing input data using a generic processor. Even when considered in combination, the additional elements represent mere instructions to apply an exception and insignificant extra-solution activity, which cannot provide an inventive concept. As in Step 2A, dependent claims 2-9, 11-17, and 19-20 fail to include any additional elements and remain part of the abstract idea, so they do not integrate the abstract idea into a practical application.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office Action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bersia, US Patent Application Publication US 20220114637 A1 (hereinafter Bersia), in view of Salle et al.,
US Patent Application Publication US 20220129712 A1 (hereinafter Salle).

Regarding claim 1, Bersia teaches a method comprising: obtaining a database of reconstruction error vector samples (FIG. 2, [0086], [0199-0200], wherein Bersia describes a mapping of encoded features obtained from data samples that is stored in a database, the mapping being performed by a decoder to reconstruct the samples from an input). Bersia does not teach clustering the reconstruction error vector samples sampled from said database into clusters of reconstruction error vector samples; selecting candidate samples from a first cluster included in the clusters; assigning a label to each of the candidate samples; and applying the label to all reconstruction error vector samples in the first cluster when the label assigned to a sufficient subset of the candidate samples is the same. However, in the analogous art of using an autoencoder for determining event types, Salle teaches clustering the reconstruction error vector samples sampled from said database into clusters of reconstruction error vector samples (FIG. 1, Abstract, [0013], [0019-0024], wherein Salle describes, as illustrated in FIG. 1, processing data samples as input through an autoencoder that includes an encoder and a decoder and clusters the data into reconstructed content that includes a reconstruction loss or error); selecting candidate samples from a first cluster included in the clusters and assigning a label to each of the candidate samples (FIG. 1, [0017-0018], [0022], [0024], [0029], [0047], [0052], wherein Salle describes label assignment that is based on the clusterer as illustrated in FIG. 1); and applying the label to all reconstruction error vector samples in the first cluster when the label assigned to a sufficient subset of the candidate samples is the same (FIG. 1, [0017-0024], [0029], [0047], [0052], wherein Salle provides the latent feature vector to a clustering classification layer, the clusterer in FIG. 1; the clusterer determines the cluster to which the content belongs by computing a distance between the latent feature vector of the content and the latent feature vectors of one or more points of the clusters, e.g., a central value such as the latent feature vector of a centroid, and the distance can be converted to a predicted probability indicating how likely it is that the content belongs to the cluster).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine Salle with Bersia by incorporating Salle's steps of clustering the reconstruction error vector samples, selecting candidate samples from a first cluster, assigning a label to each candidate sample, and applying the label to all reconstruction error vector samples in the first cluster when a sufficient subset of the candidate labels agree into Bersia's method of obtaining a database of reconstruction error vector samples, for the purpose of training the clustering autoencoder to cluster an input in a latent feature space (Salle: [0013]).

Regarding claim 2, Bersia as modified by Salle teaches collecting data samples from multiple nodes operating in an environment and storing the data samples in a sample database, wherein the data samples are associated with the reconstruction error vector samples (FIG. 2, [0086], [0199-0200], wherein Bersia describes a mapping of encoded features obtained from data samples that is stored in a database, the mapping being performed by a decoder to reconstruct the samples from an input).
Regarding claim 3, Bersia as modified by Salle teaches generating the reconstruction error vector samples from the data samples using an unsupervised autoencoder (FIG. 1, [0022], wherein Salle produces reconstructed content that includes a reconstruction loss or error; [0209-0210], wherein Bersia generates a reconstruction error).

Regarding claim 4, Bersia as modified by Salle teaches that the reconstruction error vector samples are generated as an absolute element-wise difference between data samples input into an unsupervised autoencoder and reconstruction samples output from the unsupervised autoencoder (FIG. 1, [0022], wherein Salle describes how the decoder can construct content based on the latent feature vector; a difference between the content input into the encoder and the reconstructed content produced by the decoder can be determined based on a cost function known as the loss. Additionally, a difference between the label and the predicted probability from the clusterer can be determined based on the loss. The cost determined by the loss can be fed back to the clustering autoencoder and backpropagated through it, such as for training the clustering autoencoder).

Regarding claim 5, Bersia as modified by Salle teaches retrieving context samples from the data samples for each of the candidate samples, wherein the context samples occur immediately before and/or after the corresponding candidate sample ([0011], [0015], [0030], [0043], [0047-0049], wherein Salle incorporates samples based on cluster assignment and assigns each sample a value indicating low confidence or high confidence).

Regarding claim 6, Bersia as modified by Salle teaches considering the context samples associated with the candidate samples when assigning labels to the candidate samples (FIG. 1, [0017-0018], [0022]; as shown in FIG. 1, Salle assigns labels to samples).
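The claim 4 language, an absolute element-wise difference between the autoencoder's input and its reconstructed output, reduces to a one-line computation. The sketch below uses made-up sample values and a hypothetical function name; a real system would obtain `x_hat` from a trained autoencoder rather than hard-code it.

```python
# Claim 4: reconstruction error vector = |input - reconstruction|,
# computed element-wise. Large entries flag features the autoencoder
# reconstructed poorly.
import numpy as np

def reconstruction_error_vector(x, x_hat):
    """Absolute element-wise difference between the data sample fed into
    the autoencoder (x) and its reconstructed output (x_hat)."""
    return np.abs(np.asarray(x, dtype=float) - np.asarray(x_hat, dtype=float))

x = [0.9, 0.1, 0.5]      # data sample input (made-up values)
x_hat = [0.8, 0.3, 0.5]  # reconstruction output (made-up values)
err = reconstruction_error_vector(x, x_hat)
```

These error vectors, not the raw samples, are what the claimed method clusters and labels.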
Regarding claim 7, Bersia as modified by Salle teaches deploying an autoencoder to nodes in an environment (FIG. 1, [0002], [0013], wherein Salle describes a system that deploys an autoencoder to nodes).

Regarding claim 8, Bersia as modified by Salle teaches generating, by the autoencoder operating on a node, a first reconstructed sample output from a first data sample input to the autoencoder; generating a first reconstruction error vector sample from the data sample input and the reconstructed sample output; determining whether the first reconstruction error vector sample belongs in the first cluster; and applying the label associated with the reconstruction error samples in the first cluster to the first reconstruction error sample (FIG. 1, [0022], wherein Salle describes how the decoder can construct content based on the latent feature vector, with the difference between the input content and the reconstructed content determined by a cost function known as the loss, which can be fed back and backpropagated through the clustering autoencoder for training; [0209-0210], wherein Bersia generates a reconstruction error; FIG. 1, [0017-0024], [0029], [0047], [0052], wherein Salle provides the latent feature vector to the clusterer, which determines the cluster to which the content belongs from the distance between the content's latent feature vector and the latent feature vectors of one or more points of the clusters, e.g., a central value such as the latent feature vector of a centroid, a distance that can be converted to a predicted probability that the content belongs to the cluster).

Regarding claim 9, Bersia as modified by Salle teaches automatically determining a threshold value for an autoencoder based on a distance of reconstruction error vector samples from a centroid of a cluster near an origin of a cluster space (FIG. 1, claims 8 and 17, [0013], [0039], [0059], [0082], [0091], wherein Salle describes a probability of membership in a class that can be determined based on a distance from a central value representing the class, such as a latent feature representation of a centroid or other central point of the cluster).

Claims 10 and 18 are similar in scope to claim 1, claims 11-15 are similar in scope to claims 2-6, claim 16 is similar in scope to claim 8, and claim 17 is similar in scope to claim 9; each is rejected under a similar rationale.
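The claim 9 idea, deriving a threshold from the distances of reconstruction error vectors to the centroid of the cluster near the origin (the presumed "normal" cluster), can be sketched briefly. The percentile cutoff and function name below are assumptions for illustration; the record does not specify how the threshold is computed from the distances.

```python
# Claim 9 sketch: take the reconstruction error vectors belonging to the
# cluster nearest the origin, find their centroid, and set the anomaly
# threshold from the spread of distances around that centroid.
import numpy as np

def threshold_from_cluster(samples, percentile=95.0):
    """samples: (n, d) array of reconstruction error vectors in one cluster.
    Returns (centroid, threshold). The 95th-percentile cutoff is an
    assumed heuristic, not taken from the application or the references."""
    samples = np.asarray(samples, dtype=float)
    centroid = samples.mean(axis=0)
    dists = np.linalg.norm(samples - centroid, axis=1)
    return centroid, float(np.percentile(dists, percentile))

centroid, thr = threshold_from_cluster([[0, 0], [0, 2], [2, 0], [2, 2]])
```

A new error vector farther than `thr` from the centroid would then fall outside the "normal" cluster and be flagged for labeling as a distinct event type.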
Regarding claim 19, Bersia as modified by Salle teaches clustering reconstruction error samples associated with data samples that have been processed by an autoencoder into clusters, and labelling each of the reconstruction error samples in each of the clusters based on labels assigned to candidate samples from each of the clusters (FIG. 1, [0017-0018], [0022]; as shown in FIG. 1, Salle assigns labels to samples; [0011], [0015], [0030], [0043], [0047-0049], wherein Salle incorporates samples based on cluster assignment and assigns each sample a value indicating low confidence or high confidence).

Claim 20 is similar in scope to claim 9 and is rejected under a similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASSAN MRABI, whose telephone number is (571) 272-8875. The examiner can normally be reached Monday-Friday, 7:30am-5pm EST, alternate Fridays. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at 571-270-5871. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/HASSAN MRABI/
Examiner, Art Unit 2144

Prosecution Timeline

Oct 11, 2022
Application Filed
Nov 29, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579411
RESONATOR NETWORK BASED NEURAL NETWORK
2y 5m to grant Granted Mar 17, 2026
Patent 12579710
Transforming Content Across Visual Mediums Using Artificial Intelligence and User Generated Media
2y 5m to grant Granted Mar 17, 2026
Patent 12554924
Computer-Implemented Methods and Systems for Generative Text Painting
2y 5m to grant Granted Feb 17, 2026
Patent 12547905
PROBABILISTIC ENTITY-CENTRIC KNOWLEDGE GRAPH COMPLETION
2y 5m to grant Granted Feb 10, 2026
Patent 12536782
METHOD AND APPARATUS FOR TRAINING CLASSIFICATION TASK MODEL, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+32.4%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 363 resolved cases by this examiner. Grant probability derived from career allow rate.
