Prosecution Insights
Last updated: April 19, 2026
Application No. 17/561,951

METHODS AND SYSTEMS FOR AUTOMATICALLY IDENTIFY IN A DATASET INSUFFICIENT DATA FOR LEARNING, OR RECORDS WITH ANOMALOUS COMBINATIONS OF FEATURE VALUES

Non-Final OA — §101, §112
Filed
Dec 26, 2021
Examiner
STEVENS, ROBERT
Art Unit
2164
Tech Center
2100 — Computer Architecture & Software
Assignee
International Business Machines Corporation
OA Round
1 (Non-Final)
81%
Grant Probability
Favorable
1-2
OA Rounds
2y 9m
To Grant
92%
With Interview

Examiner Intelligence

Grants 81% — above average
81%
Career Allow Rate
420 granted / 517 resolved
+26.2% vs TC avg
Moderate +11% lift
+11.1%
Interview Lift
resolved cases with interview
Typical timeline
2y 9m
Avg Prosecution
15 currently pending
Career history
532
Total Applications
across all art units

Statute-Specific Performance

§101
22.1%
-17.9% vs TC avg
§103
44.0%
+4.0% vs TC avg
§102
8.5%
-31.5% vs TC avg
§112
17.6%
-22.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 517 resolved cases

Office Action

§101 §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Allowable Subject Matter

Claims 1-20 are allowable over the prior art. However, the claims remain rejected under 35 U.S.C. §§ 101 and 112.

Reasons for Allowance

The cited references do not disclose partitioning the dataset along the numeric and/or categorical features according to the density measurement of each observation by a perpendicular cut along the feature spaces; receiving a map of a plurality of hyper-rectangular shapes representing various levels of density including empty spaces; displaying the received map of the plurality of hyper-rectangular shapes, being human-interpretable regions, on a graphic user interface (GUI), wherein the plurality of hyper-rectangular shapes are selectable and present information about the selected hyper-rectangular shape's level of density when selected by a user.

Information Disclosure Statement (IDS)

The information disclosure statement filed 12/26/2021 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication, or that portion which caused it to be listed; and all other information, or that portion which caused it to be listed. The IDS has been placed in the application file, but the information referred to therein (by the missing copy) has not been considered. A copy of Non-Patent Literature Cite No. 2 was not included with the IDS.

Specification

The Abstract is objected to for the following reason: the abstract should be in narrative form and generally limited to a single paragraph, preferably within the range of 50 to 150 words in length. See MPEP § 608.01(b)(I)(C). Applicant is reminded of the proper content of an abstract of the disclosure.
A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art. If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives. Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps. Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length. See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.

The disclosure is also objected to because of the following exemplary informalities: page 8, line 23 of the Specification contains a grammatical error, as the sentence does not start with a capital letter ("in the following description ..."). Applicant is respectfully reminded to review the specification, abstract, claims, and drawings for all informalities and to correct them. Appropriate correction is required.

Claim Rejections – 35 U.S.C. § 101

35 U.S.C. § 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter: an abstract idea without significantly more. The claims recite, at a very high level, the manipulation and subsequent viewing of the manipulated data. Thus, the claims encompass performance of the limitations in the mind, or alternatively the solving of a math problem (i.e., a series of mathematical steps) that is not tied to a practical application.

Regarding the independent claims:

Step 1: Yes. Claim 1 recites a method comprising a series of executed steps (therefore a process), claim 15 is directed to a system (therefore a product/machine), and claim 18 is directed to a computer program product (therefore a product). Thus, each of these claims is directed to a statutory category.

Step 2A, Prong 1 (Judicial Exception Recited?): Yes. Claims 1, 15, and 18 recite limitations directed to an abstract idea: "calculating observation density for each observation according to a distance or anomaly based metric; partitioning the dataset along the numeric and/or categorical features according to the density measurement of each observation by a perpendicular cut along the feature spaces". As drafted, each of these limitations recites a mentally performable process, or alternatively a mathematical process, as one can perform a calculation and divide/categorize/partition data via a mental (or mathematical) process or using paper and pencil.

Step 2A, Prong 2 (Integrated into a Practical Application?): No.
Claim 1 recites the following additional element: "computerized"; claim 15 recites "processor"; and claim 18 recites "one or more computer storage media". Each of these is merely a high-level recitation of generic computer components and represents mere instructions to apply the exception on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application. Additionally, claims 1, 15, and 18 each recite "receiving a dataset", "receiving a density measurement", "receiving a map", "displaying the received map", and "present[ing] information", which is insignificant extra-solution activity, such as retrieval/receiving of data (i.e., mere data gathering, such as "obtaining information") as identified in MPEP 2106.05(g), and does not provide integration into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose meaningful limits on practicing the abstract idea. Viewing the additional limitations together and the claims as a whole, nothing provides integration into a practical application. Therefore, each claim is directed to an abstract idea.

Step 2B (Inventive Concept Provided?): No. The discussion above of the additional elements as mere implementation using generic computing elements carries over; they do not provide significantly more. Mere instructions to apply an exception using generic computer components (e.g., storage) cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. With respect to the receiving and displaying/presenting steps discussed above, when re-evaluated these elements are well-understood, routine, and conventional, as evidenced by the court cases in MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);" and thus they remain insignificant extra-solution activity that does not provide significantly more. Therefore, each of the claims, taken as a whole, is ineligible.

Claims 2-14 depend upon claim 1 and do not correct the issues set forth above. These claims essentially add generic computing elements, modify data, perform further calculations/data processing, and perform searching, and thus the same rationale as above applies. Therefore, these claims are likewise rejected.

Claims 16-17 and 19-20 depend upon claims 15 and 18, respectively, and do not correct the issues set forth above. These claims essentially add generic computing elements, modify data, and perform further calculations/data processing, and thus the same rationale as above applies. Therefore, these claims are likewise rejected.

35 U.S.C. § 101 – media issue

Additionally, regarding independent claim 18: the claim is directed to a "computer program product comprising one or more computer readable storage media". The Specification at page 7, line 25 through page 8, line 2 discusses both a "computer program product" (e.g., "may include") and a "computer readable storage medium" (e.g., "may be") in an open-ended manner. Therefore, the claim has been interpreted as encompassing signal subject matter. During examination, the PTO is obliged to give claims their broadest reasonable interpretation consistent with the specification. See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989) (during patent examination the pending claims must be interpreted as broadly as their terms reasonably allow). When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and "Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101," Aug. 24, 2009, p. 2.

The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called a machine readable medium, among other variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP § 2111.01. The same is true even when the computer readable medium is limited to a "storage" medium. See Ex parte Mewherter, No. 2012-007692, pp. 6-14 (PTAB May 8, 2013) (precedential) (providing a "growing body of evidence … demonstrating that the ordinary and customary meaning of 'computer readable storage medium' to a person of ordinary skill in the art was broad enough to encompass both non-transitory and transitory media"). The Office suggests adding the limitation "non-transitory" to the claim. See Subject Matter Eligibility of Computer Readable Media, 1351 OG 212 (February 23, 2010).

Claims 19-20 depend upon claim 18 and do not correct the issues set forth above. Therefore, these claims are likewise rejected.

Claim Rejections – 35 U.S.C. § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. § 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.

Regarding independent claims 1, 15, and 18: the claims are unclear, as there appear to be missing essential elements. The first portion of each claim deals with data processing involving feature processing, clustering, and dataset partitioning. The second portion deals with the reception of a data structure, called a map of shapes, that represents a density having no explicit claimed connection to the processed/clustered/partitioned data. It is unclear if or how the earlier processed data is being displayed (rather than just a generic/default/uninstantiated map data structure) via this mapping mechanism.

Claims 2-14, 16-17, and 19-20 depend upon claims 1, 15, and 18, respectively, and do not correct the issues set forth above. Therefore, these claims are likewise rejected.

Further regarding dependent claims 11 and 12: it is unclear whether the terms presented in the claims reference the same or different variables (for example, compare yi vs. yi).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Relevance is provided in at least the Abstract of each cited document.

Non-Patent Literature

Bhattacharya, Indranil, et al., "A Semi-Supervised Machine Learning Approach to Detect Anomalies in Big Accounting Data", ECIS 2020, Marrakech, Morocco, June 2020, pp. 1-14.

Anomaly detection in large scale accounting data is one of the long-standing challenges in the financial audit practice.
Accounting professionals have therefore resorted to advanced machine learning techniques to address this. Although being quite successful, existing supervised and unsupervised anomaly detection algorithms come with certain drawbacks. In order to overcome them, an innovative semi-supervised machine learning approach is proposed in this paper which combines both unsupervised and supervised algorithms for anomaly detection in big data. The unsupervised algorithm, i.e. DBSCAN, is first applied on a representative subset of the data to generate a training set based on pseudo labels of anomalies. (page 1, Abstract).

Pseudo labels of anomalies on X1 are generated by applying the DBSCAN (Ester et al., 1996) algorithm. DBSCAN is a density-based spatial clustering technique with the application of noise. It forms clusters based on spatial density. Observations that are not part of any cluster are defined as noise, and we treat them as anomalies. DBSCAN has three parameters: epsilon (ε), MinPts (n), and a distance metric. Two observations are called neighbors if they are within ε distance of each other. A cluster is a collection of a minimum of n observations where every observation has at least one neighbor. DBSCAN constructs clusters with observations delineating these two properties. Figure 1 demonstrates cases where the DBSCAN algorithm forms spatial clusters in a 2-dimensional space and helps to detect anomalies. (page 5, 2nd paragraph of section "3.1 Pseudo Labelling Using DBSCAN").

Taha, Ayman, et al., "Anomaly Detection Methods for Categorical Data: A Review", ACM Computing Surveys, Vol. 52, No. 2, Article 38, May 2019, pp. 1-35.

Anomaly detection has numerous applications in diverse fields. For example, it has been widely used for discovering network intrusions and malicious events. It has also been used in numerous other applications such as identifying medical malpractice or credit fraud. Detection of anomalies in quantitative data has received considerable attention in the literature and has a venerable history. By contrast, and despite the widespread availability and use of categorical data in practice, anomaly detection in categorical data has received relatively little attention as compared to quantitative data. This is because detection of anomalies in categorical data is a challenging problem. Some anomaly detection techniques depend on identifying a representative pattern, then measuring distances between objects and this pattern. Objects that are far from this pattern are declared as anomalies. However, identifying patterns and measuring distances are not easy in categorical data compared with quantitative data. Fortunately, several papers focussing on the detection of anomalies in categorical data have been published in the recent literature. In this article, we provide a comprehensive review of the research on the anomaly detection problem in categorical data. Previous review articles focus on either the statistics literature or the machine learning and computer science literature. This review article combines both literatures. We review 36 methods for the detection of anomalies in categorical data in both literatures and classify them into 12 different categories based on the conceptual definition of anomalies they use. For each approach, we survey anomaly detection methods, and then show the similarities and differences among them. We emphasize two important issues: the number of parameters each method requires and its time complexity. (page 1, Abstract).

The density-based anomalies, or local anomalies, approach aims at identifying observations that have outlying behavior in local areas, in which observations usually share similar characteristics [35]. Local anomalies are different from global anomalies, which are inconsistent with the pattern suggested by the majority of all other observations, not only observations within their local areas [35, 44, 98, 182]. Local anomaly detection methods for categorical data include (a) the Hyperedge-based Outlier Test (HOT) [176], (b) the k-Local Anomalies Factor (k-LOF) [182], and (c) the WATCH method [112]. (page 15, 1st paragraph of section "5 Density-Based Methods").

The Ranking-based Outliers Analysis and Detection (ROAD) method defines two types of outliers: frequency-based and clustering-based outliers. Frequency-based outliers are those observations that have infrequent categories (small average marginal frequencies). However, clustering-based outliers are those observations that have infrequent combinations of frequent categories. ROAD develops two different ranking schemes, one for each outlier type. First, it computes a density score for each observation (the average marginal frequencies) as den(x_i) = (1/q) Σ_{j=1..q} f(x_ij), which is the same as AVF(x_i) in Equation (1). ROAD sorts observations based on their density scores. Then it gives the higher probability of being frequency-based outliers to those observations having small density scores. (pages 18-19, 2nd-3rd paragraphs of section "6 Clustering-Based Methods").

"Hyperrectangle", Wikipedia, downloaded from: https://en.wikipedia.org/wiki/Hyperrectangle, February 16, 2026, pp. 1-4.

In geometry, a hyperrectangle (also called a box, hyperbox, k-cell, or orthotope) is the generalization of a rectangle (a plane figure) and the rectangular cuboid (a solid figure) to higher dimensions. A necessary and sufficient condition is that it is congruent to the Cartesian product of finite intervals. This means that a k-dimensional rectangular solid has each of its edges equal to one of the closed intervals used in the definition. (page 1, 1st paragraph).

Campello, Ricardo J. G. B., et al., "Hierarchical Density Estimates for Data Clustering, Visualization, and Outlier Detection", ACM Transactions on Knowledge Discovery from Data (TKDD), Volume 10, Issue 1, Article No. 5, July 2015, pp. 1-51.

An integrated framework for density-based cluster analysis, outlier detection, and data visualization is introduced in this article. The main module consists of an algorithm to compute hierarchical estimates of the level sets of a density, following Hartigan's classic model of density-contour clusters and trees. Such an algorithm generalizes and improves existing density-based clustering techniques with respect to different aspects. It provides as a result a complete clustering hierarchy composed of all possible density-based clusters following the nonparametric model adopted, for an infinite range of density thresholds. The resulting hierarchy can be easily processed so as to provide multiple ways for data visualization and exploration. It can also be further postprocessed so that: (i) a normalized score of "outlierness" can be assigned to each data object, which unifies both the global and local perspectives of outliers into a single definition; and (ii) a "flat" (i.e., nonhierarchical) clustering solution composed of clusters extracted from local cuts through the cluster tree (possibly corresponding to different density thresholds) can be obtained, either in an unsupervised or in a semisupervised way. In the unsupervised scenario, the algorithm corresponding to this postprocessing module provides a global, optimal solution to the formal problem of maximizing the overall stability of the extracted clusters. If partially labeled objects or instance-level constraints are provided by the user, the algorithm can solve the problem by considering both constraint violations/satisfactions and cluster stability criteria. An asymptotic complexity analysis, both in terms of running time and memory space, is described.
Experiments are reported that involve a variety of synthetic and real datasets, including comparisons with state-of-the-art, density-based clustering and (global and local) outlier detection methods. (page 1, Abstract).

Ankerst, Mihael, et al., "OPTICS: Ordering Points to Identify the Clustering Structure", SIGMOD '99, Philadelphia, PA, June 1-3, 1999, pp. 367-375.

Cluster analysis is a primary method for database mining. It is either used as a stand-alone tool to get insight into the distribution of a data set, e.g. to focus further analysis and data processing, or as a preprocessing step for other algorithms operating on the detected clusters. Almost all of the well-known clustering algorithms require input parameters which are hard to determine but have a significant influence on the clustering result. Furthermore, for many real-data sets there does not even exist a global parameter setting for which the result of the clustering algorithm describes the intrinsic clustering structure accurately. We introduce a new algorithm for the purpose of cluster analysis which does not produce a clustering of a data set explicitly, but instead creates an augmented ordering of the database representing its density-based clustering structure. This cluster-ordering contains information which is equivalent to the density-based clusterings corresponding to a broad range of parameter settings. It is a versatile basis for both automatic and interactive cluster analysis. We show how to automatically and efficiently extract not only 'traditional' clustering information (e.g. representative points, arbitrary shaped clusters), but also the intrinsic clustering structure. For medium sized data sets, the cluster-ordering can be represented graphically, and for very large data sets, we introduce an appropriate visualization technique. Both are suitable for interactive exploration of the intrinsic clustering structure, offering additional insights into the distribution and correlation of the data. (page 49, Abstract).

An important property of many real-data sets is that their intrinsic cluster structure cannot be characterized by global density parameters. Very different local densities may be needed to reveal clusters in different regions of the data space. For example, in the data set depicted in Figure 1, it is not possible to detect the clusters A, B, C1, C2, and C3 simultaneously using one global density parameter. (page 51, section "3.1 Motivation").

The key idea of density-based clustering is that for each object of a cluster the neighborhood of a given radius (ε) has to contain at least a minimum number of objects (MinPts), i.e. the cardinality of the neighborhood has to exceed a threshold. (page 51, 1st paragraph of section "3.2 Density-Based Clustering").

US Patent Application Publications

Kim, 2020/0257982. Techniques are described herein for encoding categorical features of property graphs by vertex proximity. In an embodiment, an input graph is received. The input graph comprises a plurality of vertices, each vertex of said plurality of vertices associated with vertex properties of said vertex. The vertex properties include at least one categorical feature value of one or more potential categorical feature values. For each of the one or more potential categorical feature values of each vertex, a numerical feature value is generated. The numerical feature value represents a proximity of the respective vertex to other vertices of the plurality of vertices that have a categorical feature value corresponding to the respective potential categorical feature value. Using the numerical feature values for each vertex, proximity encoding data is generated representing said input graph. The proximity encoding data is used to efficiently train machine learning models that produce results with enhanced accuracy. (Abstract).

Balabine, 2023/0133127. An apparatus, for efficiently classifying a data object, including representing the data object as a data object vector in a vector space, each dimension of the data object vector corresponding to a different feature of the data object, determining a distance between the data object vector and centroids of data domain clusters in the vector space, each data domain cluster comprising data domain vectors representing data domains, sorting the data domain clusters according to their respective distances to the data object vector, and iteratively applying data domain classifiers corresponding to data domains represented in a closest data domain cluster in the sorted data domain clusters to the data object. (Abstract).

When applying a data object model, such as the generic model 300, to a data object of unknown type, any features having continuous values (i.e., having a range of values) can be collapsed to a single value reflecting the corresponding feature value of the data object. For example, the minimal to maximal data object length feature values could have a range for a specific domain, such as 9-11 for social security numbers (e.g., "313125231" or "313-12-5231"). When applying this feature to the unknown data object "223-13-8310," these values would be collapsed to "11," since the unknown data object has 11 characters. The result is that both the minimal data object length and maximal data object length have a value of "11." The continuous features can also be converted into categorical features by separating certain ranges into categories (e.g., low, medium, high), and the value of the data object can be converted into the appropriate category. The process of converting the continuous dimensions of the model space into categorical ones can be performed by indicating whether the length of the evaluated data object and the observed number of tokens fit into the respective intervals in the original model. This approach can alleviate problems with data objects whose length and composition may vary significantly. (paras 0052-0053).

In an exemplary embodiment, k-means clustering is used for clustering and Gower distance is used for distance determination. When using k-means clustering, the quality of the constructed clusters can be determined and used to construct better clusters. In particular, since the k-means algorithm takes the number of produced clusters, k, as a parameter, the silhouette coefficient (a measure of how similar an object is to its own cluster compared to other clusters) is used to determine the quality of the constructed clusters with various values of k and, opportunistically, over multiple iterations using a fixed value of k. Once the computation is completed, a clustering arrangement with a maximal observed value of the silhouette coefficient is chosen and the centroid vectors of each cluster are computed. (para 0074).

Typically, the feature space for data domain models and data object models will include categorical variables. Since the feature space includes categorical variables, a specialized metric, such as Gower distance, can be used. For example, the k-means clustering algorithm can be used for clustering and the Gower metric can be used as a distance measure. (para 0084).
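The ROAD-style density score quoted above (the average of each observation's marginal category frequencies) is straightforward to compute. Below is a minimal Python sketch of that idea; the toy dataset and the ascending ranking are illustrative assumptions, not taken from any cited reference:

```python
from collections import Counter

def density_scores(records):
    """den(x_i) = (1/q) * sum_j f(x_ij): the average relative
    frequency of each observation's category values, per column."""
    n, q = len(records), len(records[0])
    # Marginal (per-column) counts of each category value.
    freqs = [Counter(row[j] for row in records) for j in range(q)]
    return [sum(freqs[j][row[j]] / n for j in range(q)) / q
            for row in records]

# Toy categorical dataset (hypothetical): the last record combines
# two rare category values and should score lowest.
data = [("red", "A"), ("red", "A"), ("red", "B"),
        ("red", "A"), ("blue", "Z")]
scores = density_scores(data)
# Rank ascending: low density = candidate frequency-based outlier,
# as in the ROAD excerpt above.
ranking = sorted(range(len(data)), key=scores.__getitem__)
print(ranking[0], round(scores[ranking[0]], 2))  # 4 0.2
```

Here f is estimated from the dataset itself; ROAD as reviewed by Taha et al. also adds a second, clustering-based ranking for infrequent combinations of frequent categories, which this sketch does not attempt.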
US Patents

Averbuch, 10,713,236. A computer implemented method for detecting at least one anomaly in a dataset, comprising: managing a dataset including a plurality of data entities each including at least one value; receiving a semantic model that defines associations between two or more data entities; forming a plurality of multi dimensional data instances, each multi dimensional data instance formed from at least one of a permutation and a combination of a set of data entities from the plurality of data entities according to the semantic model; analyzing the multi dimensional data instances to detect at least one anomalous value, the anomalous value representing a statistically significant deviation, according to a deviation requirement, of one or more values from a set of values of the multi dimensional data instances; and providing the detected at least one anomalous value. (Abstract).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to examiner ROBERT STEVENS, whose telephone number is (571) 272-4102. The examiner can normally be reached Mon-Fri, 6:00-2:30.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amy Ng, can be reached at (571) 270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT STEVENS/
Primary Examiner, Art Unit 2164
February 17, 2026

Prosecution Timeline

Dec 26, 2021
Application Filed
Feb 17, 2026
Non-Final Rejection — §101, §112
Mar 13, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585618
SYSTEMS AND METHODS FOR SEQUENCE-BASED DATA CHUNKING FOR DEDUPLICATION
2y 5m to grant • Granted Mar 24, 2026
Patent 12579100
COMPUTER SYSTEMS THAT PUT PARENTS IN CONTROL OF THEIR KID'S ONLINE SAFETY: THE STATE OF A KID (E.G., EMOTIONAL STATE), INDUCED BY CONTENT FROM A SOCIAL MEDIA PLATFORM, TRIGGERS PARENT-PRESCRIBED ACTIONS BY THE KID'S COMPUTER SYSTEM COMPRISING AT LEAST ONE OF BLOCKING THE CONTENT AND INFORMING AT LEAST ONE OF THE PARENT, THE KID, AND THE SOCIAL MEDIA PLATFORM OF THE INDUCED STATE
2y 5m to grant • Granted Mar 17, 2026
Patent 12572579
LARGE LANGUAGE MODEL BASED SYSTEM UPGRADE CLASSIFIER
2y 5m to grant • Granted Mar 10, 2026
Patent 12572542
SYSTEMS AND METHODS FOR GENERATING AND DISPLAYING A DATA PIPELINE USING A NATURAL LANGUAGE QUERY, AND DESCRIBING A DATA PIPELINE USING NATURAL LANGUAGE
2y 5m to grant • Granted Mar 10, 2026
Patent 12561519
SCALABLE FORM MATCHING
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
92%
With Interview (+11.1%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 517 resolved cases by this examiner. Grant probability derived from career allow rate.
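The projection figures above follow directly from the career counts shown on this page. A quick sanity check in Python (the with-interview subset counts are not shown here, so the stated 92% figure is taken as given):

```python
granted, resolved = 420, 517          # career counts shown above

allow_rate = granted / resolved
print(f"career allow rate: {allow_rate:.1%}")   # prints 81.2%

# Tech Center average implied by the stated +26.2% delta.
print(f"implied TC average: {allow_rate - 0.262:.1%}")

# Interview lift implied by the stated 92% with-interview rate
# (the page's +11.1% suggests an unrounded figure near 92.3%).
print(f"interview lift: {0.92 - allow_rate:+.1%}")
```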
