Prosecution Insights
Last updated: April 19, 2026
Application No. 18/154,716

AUTONOMOUS LEARNING PLATFORM FOR NOVEL FEATURE DISCOVERY

Non-Final OA: §102, §103, double patenting

Filed: Jan 13, 2023
Examiner: SMITH, BRIAN M
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: VISA INTERNATIONAL SERVICE ASSOCIATION
OA Round: 1 (Non-Final)

Grant Probability: 52% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 4y 3m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 52% of resolved cases (129 granted / 246 resolved; -2.6% vs TC avg)
Interview Lift: +37.0% allow rate in resolved cases with an interview (strong)
Typical Timeline: 4y 3m average prosecution; 34 applications currently pending
Career History: 280 total applications across all art units

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      24.4%    -15.6%
§103      37.1%    -2.9%
§102      12.9%    -27.1%
§112      19.7%    -20.3%

Deltas are relative to the Tech Center average estimate; based on career data from 246 resolved cases.
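The per-statute deltas above can be checked mechanically: each gap is the examiner's rate minus the Tech Center average, so the baseline backs out of every row. A quick sketch, assuming that is how the dashboard computes its deltas (the dictionary layout is invented for illustration):

```python
# Examiner's per-statute rates and reported gaps vs. the Tech Center
# average, as percentages (figures from the panel above).
stats = {
    "101": (24.4, -15.6),
    "103": (37.1, -2.9),
    "102": (12.9, -27.1),
    "112": (19.7, -20.3),
}

# delta = examiner rate - TC average, so the implied TC baseline is
# rate - delta; every row here backs out the same 40.0%.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
```

That every statute implies the same 40.0% baseline suggests the panel uses a single Tech Center-wide estimate rather than per-statute averages.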

Office Action

Rejections: §102, §103, nonstatutory double patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application is a continuation of parent application 15/590,988, now US Patent 11,586,960.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 3, 5-7, 9, and 10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Aghdam et al., “Text feature selection using ant colony optimization,” as cited by the applicant in the IDS dated 1/13/2023 and by the examiner in the parent application.

Regarding Claim 1, Aghdam teaches a method of performing autonomous learning for updating input features used for an artificial intelligence model (Aghdam, title, “Text feature selection”), the method comprising:

(a) receiving updated data of an information space … including historical data of requests to the artificial intelligence model and output results associated with the requests (Aghdam, pg. 6848, 2nd column, 2nd-3rd paragraphs, “There are some publicly available standard datasets that can be used as test collections for text categorizations. The most widely used is the Reuters collection, consisting of stories … classified under categories … We used Reuters-21578, the newer version of the corpus … we adopt the top ten classes,” where the stories/documents to be classified are requests to the model and the classes/categories are output results) that includes a graph of nodes having a defined topology (Aghdam, pg. 6845, 2nd column, Fig. 1, where the text features, i.e., terms/words (see pg. 6844, 1st column, 3rd paragraph) from the corpus/information space correspond to nodes in the graph representation; also pg. 6845, 1st column, last paragraph, “Nodes are fully connected”), wherein different categories of input data correspond to different input nodes of the graph (the features are associated with/correspond to/are used to predict the categories/classes with a text classification model);

(b) updating, using multiple iterations of a path optimization model until convergence, edge connections between the nodes of the graph by performing a plurality of path optimizations that each use a set of agents to explore the information space over cycles … thereby determining optimized paths (Aghdam, pg. 6846, Fig. 2, including pg. 6846, 1st column, Eq. (3), denoting updating edge pheromones/connections, and pg. 6845, 1st column, last paragraph, “edges between them [nodes] denoting the choice of the next feature,” where one iteration of the larger loop in Fig. 2 denotes one of a plurality of path optimizations performed by the ant colony optimization path optimization model until convergence/“Evaluate Stopping Criterion”/“if an optimal subset has been found,” and where the plurality of path optimizations includes a first number, but not all, of the total number of iterations before the stopping criterion is met and the “Stop” path is taken) to reduce a cost function (Aghdam, pg. 6846, 2nd column, last paragraph, “to decrease the MSE (mean square error) of the classifier”), each connection including a strength value (Aghdam, pg. 6845, 2nd column, 3rd paragraph, “The heuristic desirability of traversal and node pheromone levels are combined to form the so-called probabilistic transition rule,” where “pheromone level” is a strength value), wherein during each path optimization of the path optimizations performed in step (b), path information is shared between a rest of the agents at each cycle for determining a next position value for each of the set of agents in the graph (Aghdam, pg. 6845, 2nd column, 3rd paragraph, “The heuristic desirability of traversal and node pheromone levels are combined to form the so-called probabilistic transition rule, denoting the probability that ant k will include the feature i in its solution at time step t,” with pg. 6846, 1st column, 1st paragraph, “Pheromone update” denoting sharing path information);

(c) after using the path optimization model to determine the optimized paths in step (b), using input nodes of the optimized paths to determine candidate input features (Aghdam, pg. 6846, Fig. 2, wherein after the stopping criterion is met, the best subset of nodes of the paths is selected, “Return Best Subset”);

(d) training the artificial intelligence model using the historical data for the candidate input features, the training providing a ranking of candidate input features relative to a measure of effect on an output value of the artificial intelligence model (Aghdam, pg. 6847, Fig. 3, “Classifier” built with the selected “Feature Subset” providing “Feature Subset Score”); and

(e) selecting a plurality of top candidate input features to be used in the artificial intelligence model (Aghdam, pg. 6846, Fig. 2, “Return Best Subset,” and pg. 6847, Fig. 3, “Best Feature Subset”).

Regarding Claim 3, Aghdam teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated).
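The ant-colony mechanics the rejection maps onto step (b) — agents sharing path information through pheromone levels, a probabilistic transition rule combining pheromone with heuristic desirability, and a stopping criterion tied to classifier error — can be sketched in a few lines. This is an illustrative toy only: the cost function, parameter values, and the `aco_feature_selection` helper are all invented for the example, and the update rules are simplified relative to Aghdam's Eqs. (2)-(3).

```python
import math
import random

def aco_feature_selection(n_features, subset_size, evaluate,
                          n_ants=12, n_cycles=40,
                          alpha=1.0, beta=1.0, rho=0.2, seed=0):
    """Toy ant-colony feature selection.

    Each ant builds a feature subset via a probabilistic transition rule
    (selection weight ~ pheromone^alpha * heuristic^beta). After every
    cycle the shared pheromone trail evaporates and is reinforced in
    proportion to each ant's solution quality; the loop stops early once
    the cost goal (zero) is met.
    """
    rng = random.Random(seed)
    tau = [1.0] * n_features   # pheromone per feature node (shared path info)
    eta = [1.0] * n_features   # heuristic desirability (uniform in this toy)
    best_subset, best_cost = None, math.inf

    for _ in range(n_cycles):
        solutions = []
        for _ in range(n_ants):
            chosen, available = [], list(range(n_features))
            while len(chosen) < subset_size:
                # probabilistic transition rule: P(i) ~ tau_i^alpha * eta_i^beta
                weights = [tau[i] ** alpha * eta[i] ** beta for i in available]
                pick = rng.choices(available, weights=weights, k=1)[0]
                chosen.append(pick)
                available.remove(pick)
            cost = evaluate(frozenset(chosen))
            solutions.append((cost, chosen))
            if cost < best_cost:
                best_cost, best_subset = cost, sorted(chosen)
        tau = [(1.0 - rho) * t for t in tau]   # evaporation
        for cost, chosen in solutions:         # deposit: lower cost, more pheromone
            for i in chosen:
                tau[i] += 1.0 / (1.0 + cost)
        if best_cost == 0:                     # stopping criterion met
            break
    return best_subset, best_cost

# Hypothetical cost: distance from an "informative" feature set, a
# stand-in for classifier error on the selected features.
informative = frozenset({1, 4, 7})
best, cost = aco_feature_selection(8, 3, lambda s: len(s ^ informative))
```

The deposit step is what the rejection reads as "sharing path information between the rest of the agents": every ant's result biases the sampling weights all ants use in the next cycle.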
Aghdam further teaches wherein sharing path information between the rest of the agents during each cycle comprises: adjusting parameters affecting a probability of each path being searched in a subsequent cycle, wherein determining a next position value for the agents comprises selecting paths during the subsequent cycle based on the updated position (Aghdam, pg. 6846, 1st column, 1st paragraph, “Pheromone update rule,” with pg. 6845, 2nd column, 3rd paragraph, “The heuristic desirability of traversal and node pheromone levels are combined to form the so-called probabilistic transition rule, denoting the probability that ant k will include the feature i in its solution at time step t”; also see pg. 6846, Fig. 2).

Regarding Claim 5, Aghdam teaches the method of Claim 3 (and thus the rejection of Claim 3 is incorporated). Aghdam further teaches wherein the path information includes an error relative to a target goal (Aghdam, pg. 6846, 2nd column, last paragraph, “each ant builds solutions completely … the evaluation criterion is mean square error (MSE) of the classifier”).

Regarding Claim 6, Aghdam teaches the method of Claim 5 (and thus the rejection of Claim 5 is incorporated). Aghdam further teaches performing step b) until the target goal has been met (Aghdam, pg. 6845, 1st column, last paragraph – 2nd column, 1st paragraph, “to satisfy the traversal stopping criterion (e.g. suitably high classification accuracy has been achieved with this subset),” and pg. 6846, Fig. 2, “Evaluate Stopping Criterion”).

Regarding Claim 7, Aghdam teaches the method of Claim 5 (and thus the rejection of Claim 5 is incorporated). Aghdam further teaches initializing the agents at a start of each cycle according to a weighted distribution that is based on a gradient of the error relative to the target goal (Aghdam, pg. 6846, 1st column, 1st paragraph: pheromone amounts, which are a distribution known to the agents/ants, are initialized based on a gradient based on/weighted using the error; see “The pheromone is updated according to … the measure of the classifier performance”).

Regarding Claim 9, Aghdam teaches the method of Claim 7 (and thus the rejection of Claim 7 is incorporated). Aghdam further teaches wherein the path information includes movement vectors that point in a direction of new information (Aghdam, pg. 6845, 2nd column, Eq. (2), where P_i^k(t) are vectors that promote movement in the ants to move them in the direction of additional features/new information to add to the candidate feature set).

Regarding Claim 10, Aghdam teaches the method of Claim 7 (and thus the rejection of Claim 7 is incorporated). Aghdam further teaches wherein the set of agents are initialized at the start of each cycle based on a specified search criteria (Aghdam, pg. 6846, 2nd column, last paragraph, “each ant builds solutions completely … the evaluation criterion is mean square error (MSE) of the classifier”), and wherein the specified search criteria includes a definition of the target goal, a definition of the cost function (Aghdam, pg. 6846, 2nd column, last paragraph, “each ant builds solutions completely … the evaluation criterion is mean square error (MSE) of the classifier,” where the goal is correct classification of the documents and the cost function is the error of a misclassified document), a predetermined cost requirement (Aghdam, pg. 6845, 1st column, last paragraph – 2nd column, 1st paragraph, “the current subset is determined to satisfy the traversal stopping criterion (e.g. suitably high classification accuracy has been achieved with this subset)”), and an initial distribution of agents (Aghdam, pg. 6846, 1st column, 4th paragraph, “generating a number of ants which are then placed randomly on the graph”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Aghdam et al., “Text feature selection using ant colony optimization,” in view of Wagner, “Specifications of Building Polish Lexica for Application in ASR and TTS systems.”

Regarding Claim 2, Aghdam teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated). Aghdam further teaches when the two or more candidate input features share a predetermined number of input nodes (all entries in Aghdam share a predetermined number of input nodes, as the text feature graph is fully connected; see Aghdam, pg. 6845, 1st column, last paragraph, “Nodes are fully connected”). Aghdam does not teach, but Wagner teaches, combining two or more candidate input features into a single input feature (Wagner, pg. 6, 2nd column, 1st paragraph, “Compound entries are multiple-token entries. These are allowed only in the proper names, special applications words, and foreign words sections of lexica. Some of these are classified as phrases (e.g. New_York, happy_hour) and are treated as single entries … if their individual components never occur, are meaningless, or take a very different meaning when considered stand-alone”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine some special text features of Aghdam (which have been shown to share a predetermined number of input nodes) into a single feature, as does Wagner, in the invention of Aghdam. The motivation to do so is that “their individual components never occur, are meaningless, or take a very different meaning when considered stand-alone” (Wagner, pg. 6, 2nd column, 1st paragraph), i.e., the individual components would not make good features.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Aghdam, in view of Breckenridge, US PG Pub 2012/0191631. Regarding Claim 4, Aghdam teaches the method of Claim 3 (and thus the rejection of Claim 3 is incorporated).
Aghdam does not teach repeating steps a) through e) at a later point in time, wherein the updated data at the later point in time includes historical data of additional requests to the artificial intelligence model and output results associated with the additional requests. However, Breckenridge teaches repeating the training of an artificial intelligence model at a later point in time after additional training data (i.e., historical data of requests and output results) has been acquired (Breckenridge, [0010], “the updateable trained predictive models can be dynamically updated as new training data becomes available. Static trained predictive models … can be regenerated using an updated set of training data”) and that training models can include feature selection (Breckenridge, [0040], “The selection of features, i.e. feature induction, can occur during multiple iterations of computing the training function over the training data”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to generate updated classification models, including updated feature sets, such as those of Aghdam, as new training data is acquired, in the manner of Breckenridge. The motivation to do so is that “As a particular client’s training data changes over time, the client entity can be provided access to a trained predictive model that has been trained with training data reflective of the changes” (Breckenridge, [0021]), i.e., the models can be improved by including more and newer training examples.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Aghdam, in view of Shen et al., “An Improved Parallel Bayesian Text Classification Algorithm.” Regarding Claim 8, Aghdam teaches the method of Claim 7 (and thus the rejection of Claim 7 is incorporated). Aghdam further teaches receiving a request for a prediction comprising one or more input features (Aghdam, pg. 6852, “we use a simple classifier (nearest neighbor classifier) in that which can affect the categorization performance,” where using the classifier on a received input document is receiving a request for a prediction). Aghdam does not teach, but Shen (also performing text classification) teaches, placing the input features into an index table of key-value pairs linking the input features to their predicted outcomes (Shen, pg. 6, 1st column, 2nd-to-last paragraph, “using the key-value method to analyze the records of processing data set,” with pg. 8, 2nd column, 6th paragraph, “(key,value) corresponds to (text type, text content),” and pg. 7, Fig. 2, “Returns the … categories of key value pairs,” where “category” denotes predicted outcomes and “text content” denotes input features, as the features of Aghdam are terms/words/tokens); querying the index table of key-value pairs for a predicted outcome linked to the one or more input features (Shen, pg. 7, Fig. 2, “Returns the … categories of key value pairs”); and generating a prediction based on the predicted outcome (Shen, pg. 7, Fig. 2, “Combined data generation classifier”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to parallelize the classification algorithm of Aghdam, including implementing the key-value table, as does Shen, in order to do text prediction, as does Shen. The motivation to do so is to provide “better speedup” (Shen, pg. 1, Abstract).

Claims 11 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Aghdam, in view of Dawson et al., “Improving Ant Colony Optimization performance on the GPU using CUDA.” Regarding Claim 11, Aghdam teaches the method of Claim 1 (and thus the rejection of Claim 1 is incorporated). Aghdam does not teach, but Dawson (also performing ant colony optimization) teaches, wherein the graph is sharded into overlapping subgraphs that are searched for a solution by the agents in parallel (Dawson, pg. 1905, 1st column, last paragraph, “the first stage of DS-Roulette tiles 4 thread warps (128 threads) so as to provide complete coverage of all potential cities [nodes/features] … We will henceforth refer to a tiled warp consisting of 32 threads as a sub-block that represents a block of potential cities,” with pg. 1903, 1st column, 1st paragraph, “A thread is responsible for an individual city,” so a sub-block is a shard of the graph, and with the Abstract, “a parallel implementation of the Ant System algorithm”). It would have been obvious to parallelize the ant colony optimization algorithm of Aghdam by using the parallelization strategy of Dawson on threads and tiles on a GPU, thus sharding the graph into subgraphs. The motivation to do so is that “our new parallel algorithm executes up to 82x faster” (Dawson, Abstract), i.e., to improve the speed of the optimization algorithm.

Regarding Claim 12, the Aghdam/Dawson combination of Claim 11 teaches the method of Claim 11 (and thus the rejection of Claim 11 is incorporated). The combination, by implementing the strategy of Dawson, has already been shown to teach distributing the subgraphs amongst cores in a multi-core graphics processing unit (Dawson, pg. 1906, 2nd column, 4th paragraph, “The GPU contains 580 CUDA cores … up to 1024 threads per thread block,” where each “sub-block” of 32 threads was previously identified with a shard/subgraph).

Claims 13 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Aghdam, in view of Breckenridge, US PG Pub 2012/0191631. Claims 13 and 15-19 recite a server computer comprising: a processor; a network interface; and a non-transitory computer readable medium comprising code for instructing the processor to implement the precise method of Claims 1 and 3-7.
Aghdam (in combination with Breckenridge) has already been shown to teach the limitations of the methods, but Aghdam does not explicitly teach a server computer comprising a processor, a network interface, and a non-transitory computer-readable medium. Breckenridge, however, teaches these limitations (Breckenridge, Fig. 1) in a system to train an artificial intelligence model (including feature selection). It would have been obvious to implement the system of Aghdam (and of the Aghdam/Breckenridge combination of Claim 4) on a server like that of Breckenridge. The motivation to do so is to provide “a service … in the cloud, where a client can provide input data and a prediction request and receive in response a predictive output without expending client-side computing resources or requiring client-side expertise for predictive analytical modeling” (Breckenridge, [0010]).

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Aghdam, in view of Breckenridge, and further in view of Wagner. Claim 14 recites a server computer comprising: a processor; a network interface; and a non-transitory computer readable medium comprising code for instructing the processor to implement the precise method of Claim 2. Aghdam (in combination with Wagner) has already been shown to teach the limitations of the methods, but Aghdam does not explicitly teach a server computer comprising a processor, a network interface, and a non-transitory computer-readable medium. Breckenridge, however, teaches these limitations (Breckenridge, Fig. 1) in a system to train an artificial intelligence model (including feature selection). It would have been obvious to implement the system of Aghdam/Wagner on a server like that of Breckenridge. The motivation to do so is to provide “a service … in the cloud, where a client can provide input data and a prediction request and receive in response a predictive output without expending client-side computing resources or requiring client-side expertise for predictive analytical modeling” (Breckenridge, [0010]).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Aghdam, in view of Breckenridge, and further in view of Shen et al., “An Improved Parallel Bayesian Text Classification Algorithm.” Claim 20 recites a server computer comprising: a processor; a network interface; and a non-transitory computer readable medium comprising code for instructing the processor to implement the precise method of Claim 8. Aghdam (in combination with Shen) has already been shown to teach the limitations of the methods, but Aghdam does not explicitly teach a server computer comprising a processor, a network interface, and a non-transitory computer-readable medium. Breckenridge, however, teaches these limitations (Breckenridge, Fig. 1) in a system to train an artificial intelligence model (including feature selection). It would have been obvious to implement the system of Aghdam/Shen on a server like that of Breckenridge. The motivation to do so is to provide “a service … in the cloud, where a client can provide input data and a prediction request and receive in response a predictive output without expending client-side computing resources or requiring client-side expertise for predictive analytical modeling” (Breckenridge, [0010]).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over Claims 1-20, respectively, of U.S. Patent No. 11,586,960. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the reference patent anticipate the respective claims of the instant application; the claims of the instant application are strictly broader than their respective patented claims.

Conclusion

All references relied upon in the prior art rejections are as cited by the applicant in the Information Disclosure Statement dated January 13, 2023 as relevant from the parent application.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRIAN M SMITH, whose telephone number is (469) 295-9104. The examiner can normally be reached Monday - Friday, 8:00am - 4:00pm Pacific. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRIAN M SMITH/
Primary Examiner, Art Unit 2122
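The “index table of key-value pairs” cited from Shen in the Claim 8 rejection amounts to caching a predicted outcome keyed by the input features, so repeated requests can be answered by lookup rather than re-running the classifier. A minimal sketch under that reading; all names here (`predict`, `toy_classify`, the rule inside it) are invented for illustration and are not from Shen:

```python
def predict(index, features, classify):
    """Return the outcome linked to `features`, computing and storing it on a miss."""
    key = frozenset(features)            # the input features act as the lookup key
    if key not in index:
        index[key] = classify(features)  # the predicted outcome becomes the value
    return index[key]

# Toy stand-in classifier: assigns a category by a keyword rule.
toy_classify = lambda feats: "finance" if "payment" in feats else "other"

index = {}
first = predict(index, ["payment", "card"], toy_classify)
again = predict(index, ["card", "payment"], toy_classify)  # served from the index
```

Using a `frozenset` key makes the lookup order-insensitive, so the second call hits the cached entry rather than recomputing.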

Prosecution Timeline

Jan 13, 2023
Application Filed
Jan 26, 2026
Non-Final Rejection — §102, §103, §DP
Apr 16, 2026
Examiner Interview Summary
Apr 16, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596936: PREDICTIVE DATA ANALYSIS TECHNIQUES USING GRAPH-BASED CODE RECOMMENDATION MACHINE LEARNING MODELS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12585985: RECOGNITION SYSTEM, MODEL PROCESSING APPARATUS, MODEL PROCESSING METHOD, AND RECORDING MEDIUM FOR INTEGRATING MODELS IN RECOGNITION PROCESSING
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12555025: METHOD AND SYSTEM FOR INTEGRATING FIELD PROGRAMMABLE ANALOG ARRAY WITH ARTIFICIAL INTELLIGENCE
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12518198: System and Method for Ascertaining Data Labeling Accuracy in Supervised Learning Systems
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12488068: PERFORMANCE-ADAPTIVE SAMPLING STRATEGY TOWARDS FAST AND ACCURATE GRAPH NEURAL NETWORKS
Granted Dec 02, 2025 (2y 5m to grant)

Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 52%
With Interview: 89% (+37.0%)
Median Time to Grant: 4y 3m
PTA Risk: Low

Based on 246 resolved cases by this examiner. Grant probability is derived from the career allow rate.
