Prosecution Insights
Last updated: April 19, 2026
Application No. 18/581,556

SYSTEMS AND METHODS FOR CLASSIFYING TOKEN SEQUENCE EMBEDDINGS

Final Rejection — §101
Filed: Feb 20, 2024
Examiner: VOGT, JACOB BUI
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: Capital One Services LLC
OA Round: 2 (Final)
Grant Probability: 57% (Moderate)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 57% — grants 57% of resolved cases (4 granted / 7 resolved; -4.9% vs TC avg)
Interview Lift: +100.0% for resolved cases with interview (strong)
Typical Timeline: 2y 10m avg prosecution; 33 currently pending
Career History: 40 total applications across all art units

Statute-Specific Performance

§101: 35.1% (-4.9% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 7 resolved cases

Office Action

§101
DETAILED ACTION

This communication is in response to the Amendments and Arguments filed on 01/16/2026. Claims 1, 2, 4-9, 11-13 and 21-27 are pending and have been examined. Hence, this action has been made FINAL.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The reply filed on 01/16/2026 has been entered. Applicant’s arguments with respect to claims 1, 2, 4-9, and 11-13 have been considered but are not persuasive. With respect to the applicant’s arguments to claim rejections under 35 U.S.C. § 101, Applicant has amended each of the independent claims and asserts that “Removing the universal vectors from consideration when calculating archetype vectors would both expedite the calculation of the archetype vectors. Furthermore, such operations may increase the accuracy the "classification" described because the classification is "associated with a minimum distance of the distance metrics" as indicated by claim 1.” The examiner respectfully disagrees with these assertions. While removing universal vectors from consideration is noted, the applicant fails to explain how classification accuracy is increased by removing universal vectors from consideration. While the applicant cites “classification [being] ‘associated with a minimum distance of the distance metrics’”, it is unclear what specific benefit this limitation actually provides to classification or the field of information retrieval as a whole. With respect to the applicant’s arguments to claim rejections under 35 U.S.C. § 103, the Applicant asserts that the newly added limitations are not found in the currently applied prior art with respect to Chiang et al. in view of Warren et al. The Applicant’s amendment overcomes the current prior art of record.
Claim Objections

Claim 7 is objected to because of the following informalities: Claim 7, line 6, should be “the respective embedding and the initial cluster centroid; and”. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 2, 4-9, 11-13 and 21-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. All of the claims are method claims (2, 4-9, 11-13) or apparatus/machine claims (21-27) under Step 1, but under Step 2A all of these claims recite abstract ideas and specifically mental processes. These mental processes are more particularly recited in claims 2 and 21 as: providing a first machine learning model with one or more first inputs to obtain a first plurality of embeddings… generating a plurality of clusters based on the first plurality of embeddings… generating one or more universal vectors by ranking real-value segments of the plurality of clusters according to occurrence frequencies… generating one or more common vectors in the plurality of clusters by selecting one or more vectors associated with one or more intra-cluster occurrence frequencies that is greater than an intra-cluster admittance threshold… generating a one or more archetype vectors… generating distance metrics based on the one or more archetype vectors and a second plurality of embeddings derived from one or more second inputs… classifying the second plurality of embeddings with a classification associated with a minimum distance of the distance metrics… generating a target response based on the classification… Under Step 2A Prong One, claims 2 and 21 are directed to an abstract idea and specifically a mental process.
As detailed above, the steps of providing, generating, classifying, etc. may be practically performed in the human mind with the use of a physical aid such as a pen and paper. For example, a human employee could receive a query and a document corpus from their boss with instructions to find the query within the document corpus. The employee could convert both the query and the document corpus into a plurality of word vector embeddings by hand, writing down each vector embedding on its own respective slip of paper. The human could then group the pieces of paper (herein referred to as “word vectors”) into clusters based on a computed similarity to one another, and then create a set of universal vectors from the set of all word vectors. This set of universal vectors could include every vector whose occurrence frequency in the document corpus is above a minimum inter-cluster frequency threshold value. The human could further create a set of common vectors for each cluster, wherein each set of common vectors could include every vector within a cluster that is above some pre-defined minimum frequency threshold. The human could then remove the set of universal vectors from the set of common vectors for each cluster, leaving only a subset of unique vectors for each cluster. The human could then calculate, for each cluster, an average vector from the subset of unique vectors for that cluster, label that average vector as the “archetype vector,” and then compute similarities between each cluster’s archetype vector and the query embedding computed from earlier. Finally, the human may determine an archetype vector with the greatest similarity to the query embedding, classify the query embedding into that cluster group, and present a visualization of the clustering to their boss. 
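The pen-and-paper walkthrough above traces the full claimed pipeline. A minimal sketch of that pipeline in Python, assuming Euclidean distance, hashable tuple vectors, and illustrative thresholds (none of which are fixed by the claims):

```python
from collections import Counter
from math import dist

def archetype_classify(clusters, query, inter_thresh=3, intra_thresh=1):
    """Sketch of the claimed pipeline as paraphrased above.

    clusters: dict mapping a label to a list of vectors (tuples).
    The thresholds and the Euclidean metric are illustrative assumptions.
    """
    # Universal vectors: corpus-wide occurrence frequency above the
    # inter-cluster threshold.
    corpus = Counter(v for vs in clusters.values() for v in vs)
    universal = {v for v, n in corpus.items() if n > inter_thresh}

    archetypes = {}
    for label, vs in clusters.items():
        # Common vectors: intra-cluster frequency above the admittance threshold.
        freq = Counter(vs)
        common = {v for v, n in freq.items() if n > intra_thresh}
        # Unique vectors: the common vectors with the universal vectors removed.
        unique = common - universal
        if unique:
            # Archetype: componentwise mean over the unique vectors only.
            archetypes[label] = tuple(sum(c) / len(unique) for c in zip(*unique))

    # Classification: the label whose archetype is at minimum distance.
    return min(archetypes, key=lambda k: dist(archetypes[k], query))
```

The query is classified into the cluster whose archetype vector is nearest; computing the archetypes only from the unique vectors (common minus universal) is the step at issue in the arguments above.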
Under Step 2A Prong Two, this judicial exception is not integrated into a practical application because claims 2, 4-13 and 21-27 do not recite additional elements that integrate the exception into a practical application. In particular, claims 2 and 21 recite the additional elements of a processor (¶ [0029]), non-transitory computer-readable media (¶ [0031]), a machine learning model (¶ [0017]), and a user interface (¶ [0014]). These additional elements are recited at a high level of generality and merely equate to “apply it” or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application as per MPEP 2106.05(f). Further, claims 2 and 21 recite the additional elements of “receiving…” and “obtaining…”, which amount to insignificant extra-solution activities that are not indicative of integration into a practical application as per MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Under Step 2B, the claims do not recite additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional element of using a computer is noted as a generic computer {processor (¶ [0029]); non-transitory computer-readable media (¶ [0031]); machine learning model (¶ [0017]); clustering model (¶ [0019]); user interface (¶ [0014])}. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Further, the additional limitations in the claims noted above are directed towards insignificant extra-solution activities. The claims are not patent eligible.
With respect to claims 4 and 22, the claim relates to limiting the generation of the set of common components to a fixed number for each cluster. This relates to the human employee only selecting 10 vectors from each cluster of vectors as the set of common vectors. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claims 5 and 23, the claim relates to generating the set of universal components by first ranking all components based on frequency of occurrence and then only selecting an upper portion of the vectors as the set of universal vectors. This relates to a human employee, after sorting words into clusters, ranking every word across all clusters according to frequency of occurrence and then selecting an upper portion of the ranked words to be the set of universal components. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claims 6 and 24, the claim relates to training a clustering model to sort values, then using the clustering model to generate a plurality of embeddings. This relates to a human employee learning how to properly cluster words over time through repeated experiences, and once trained, generating a plurality of clusters from the data their boss gave them. The additional element of a “clustering model” is recited at a high level of generality (¶ [0019]) and merely equates to “apply it” or otherwise merely uses a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application as per MPEP 2106.05(f). No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
With respect to claims 7 and 25, the claim relates to performing a specific method of clustering. This relates to a human employee initially selecting random words to be cluster medoids, generating a set of initial clusters by assigning the remaining words to the nearest cluster medoid, and then recalculating the medoids by selecting the embedding with the minimum average distance to the other embeddings in its cluster. The additional element of a “clustering model” is recited at a high level of generality (¶ [0019]) and merely equates to “apply it” or otherwise merely uses a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application as per MPEP 2106.05(f). No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claims 8 and 26, the claim relates to calculating a similarity score for each embedding in a cluster to the closest embedding in the set of unique components for that cluster, and then removing all embeddings from a cluster that are higher than a threshold similarity. This relates to a human employee calculating by hand the distance between each embedding in a cluster and a selected closest embedding from the set of unique components for that cluster, and then removing every embedding whose similarity score is above a certain threshold. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 9, the claim relates to determining an archetype vector by determining a least average distance of a vector subset of the one or more unique vectors.
This relates to a human employee determining an archetype vector by first creating a subset of the unique vectors for each cluster, and then calculating an average vector for all the vectors within the subset vectors. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 11, the claim relates to maintaining a vector collection for each cluster in a plurality of clusters and removing at least one universal vector from the vector collection to generate a set of unique components for each cluster. This relates to a human employee maintaining a notated set of vectors that comprise embeddings within a cluster and then removing embeddings from the cluster that also exist within the set of universal vectors. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 12, the claim relates to extracting embedding maps from a machine learning model that are then subsequently used to generate real values. This relates to a human employee computing an embedding map from a document vector by hand before splitting the map in order to generate a plurality of word vector embeddings. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 13, the claim relates to deriving a text sequence from at least one unique vector. This relates to a human employee converting the set of unique vectors back into human-readable text. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
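The medoid recalculation the Office Action paraphrases for claims 7 and 25 (choose the member with the minimum average distance to the other members of its cluster) can be sketched as below; Euclidean distance is an assumption, since the claims as characterized here do not fix a metric:

```python
from math import dist

def update_medoid(cluster):
    """PAM-style medoid update: return the member of `cluster` whose
    average distance to the other members is smallest."""
    if len(cluster) == 1:
        return cluster[0]
    # dist(v, v) == 0, so summing over all members and dividing by
    # len(cluster) - 1 gives the average distance to the *other* members.
    return min(cluster,
               key=lambda v: sum(dist(v, w) for w in cluster) / (len(cluster) - 1))
```

For example, `update_medoid([(0.0, 0.0), (2.0, 0.0), (1.0, 0.0), (1.0, 0.2)])` returns `(1.0, 0.0)`, the most central member.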
With respect to claim 27, the claim relates to universal vectors occurring in a majority of a plurality of clusters. This relates to a human employee generating universal vectors in such a way that they verify that each universal vector is present in at least 51% of clusters. No additional limitations are present. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

For all of the above reasons, taken alone or in combination, claims 2, 4-9, 11-13 and 21-27 recite a non-statutory mental process.

Allowable Subject Matter

Claim 1 is allowed. Claims 2 and 21 would be allowable if rewritten or amended to overcome the rejection(s) under 35 U.S.C. 101, set forth in this Office action. The following is a statement of reasons for the indication of allowable subject matter: The prior art of record does not teach certain distinguishing features as described below in reference to claims 1, 2, and 21. Regarding claim 1, US 20180121444 A1 (Bao et al.) disclose a system for improving model performance by reducing vector search spaces, comprising one or more processors (Bao et al. ¶ [0022], "The computer 100 contains one or more general-purpose programmable central processing units (CPUs) 103A, 103B, 103C, and 103D, herein generically referred to as processor 101.") and one or more non-transitory, computer-readable media comprising instructions (Bao et al. ¶ [0022], "Each processor 101 executes instructions stored in the system memory 102 and may comprise one or more levels of on-board cache.") that, when executed, cause operations comprising: obtaining, during a communication session, first data comprising one or more first inputs (Bao et al.
¶ [0050], "natural language processor 324 may be configured to analyze an information corpus of text stored within one or more data sources locally accessible to natural language processor 324 in order to perform an unsupervised learning of the information corpus to generate a vector representation of every word or phrase of the text of the information corpus.") and second data comprising a query (Bao et al. ¶ [0052], "the expander 322 may be configured to receive the expansion query from query expansion submission module 210 of computer 100. "); providing a first machine learning model with the one or more first inputs to obtain an earlier plurality of embeddings (Bao et al. ¶ [0050], "natural language processor 324 may be configured to analyze an information corpus of text stored within one or more data sources locally accessible to natural language processor 324 in order to ... generate a vector representation of every word or phrase of the text of the information corpus. The vectors of the words or phrases of the text of the information corpus may be further referred herein as corpus vectors" Corpus vectors are considered analogous to an earlier plurality of embeddings ) and with the query to obtain a later plurality of embeddings (Chiang et al. ¶ [0052], "The expander 322 may further retrieve the word(s) or phrases(s) within the expansion query and submits those word(s) or phrases(s) to natural language processor 324 in order for the natural language processor 324 to generate the query vector(s)." Query vectors are considered analogous to a later plurality of embeddings); clustering the earlier plurality of embeddings using a clustering model, to generate clusters (Bao et al. ¶ [0073], "the clusterizer 320 may be a computer module configured to group similar corpus vectors into clusters or groups."); generating, for each cluster in the clusters, a set of common vectors in the cluster (Bao et al. 
¶ [0096]-[0098], "Method 400 may continue with the NLPS determining the relative similarity of corpus vectors within the most similar cluster relative to the query vector (block 408). ... Method 400 may continue with the NLPS forming a ranked list of expanded words or phrases that are associated with the corpus vectors within the most similar cluster (block 410). For example, the NLPS may form an ordered list 450 of words or phrases. The word or phrase associated with corpus vector 522 may be ranked first within the list 450 because corpus vector 522 was determined to be the most similar vector to query vector 440." Ranked corpus vectors within a cluster are considered analogous to a set of common vectors) by selecting one or more vectors of the cluster having an intra-cluster [occurrence frequency] metric that is greater than an intra-cluster admittance threshold (Bao et al. ¶ [0109]-[0111], "Method 600 may continue with the NLPS determine an alike score “s” for each of the next corpus vectors (block 614). ... Method 600 may continue with the NLPS ranking the corpus vectors within the list of next vectors by alike value “s” (block 616). ... Method 600 may continue with the NLPS removing any corpus vectors within the ranked list of next corpus vectors if the alike value “s” is less than a predetermined threshold (block 618)." Removing vectors with a metric below a threshold is considered analogous to selecting vectors having an intra-cluster metric that is greater than an intra-cluster admittance threshold); generating a set of archetype vectors by (i) [determining a set of unique vectors by removing the universal vectors from the set of common vectors and (ii)] determining the set of archetype vectors based on the set of unique vectors (Bao et al. ¶ [0088], "Method 500 may continue with the NLPS assigning or designate a particular corpus vector within the cluster as a representative corpus vector of the cluster (block 506)." 
Representative vectors are considered analogous to a set of archetype vectors) [without using vectors of the universal vectors]; generating distance metrics between the later plurality of embeddings and the set of archetype vectors (Bao et al. ¶ [0051], "The expander 322 may further determine the most similar representative corpus vector(s) amongst the various representative corpus vectors in order to determine one or more most similar vector cluster(s). As such, the expander 322 may only consider the representative corpus vectors to determine which cluster or clusters are most similar to the query vector(s)." Similarity is considered analogous to a distance metric); classifying the later plurality of embeddings with a classification associated with a minimum distance of the distance metrics (Bao et al. ¶ [0080], "The expander 322 may compare the query vector(s) to each cluster's representative corpus vector. The cluster(s) associated with the one or more of the representative corpus vectors that are most similar to the query vector(s) may be designated by the expander 322 as the most similar cluster(s) to the expansion query." Similarity is considered analogous to a distance metric); and presenting in a user interface, a target response during the communication session by generating the target response based on the classification (Bao et al. ¶ [0125], "Referring to FIG. 10 which depicts an exemplary graphical user interface of client computer 100 that sends an expansion query of “arthritis” and receives and displays a ranked list 700 of words or phrases that may accurately expand upon the expansion query, according to one or more embodiments of the present invention."). US 20020174095 A1 (Lulich et al.) disclose generating universal [vectors] features by ranking [real-value] segments of the clusters according to occurrence frequencies (Lulich et al. 
¶ [0040], "salient features are determined and selected from the extracted features, which have been ranked based upon each feature's number of occurrences (i.e. frequency) in the data object."), wherein each [vector] feature of the universal [vectors] features comprises a [real-value] segment associated with an occurrence frequency greater than an inter-cluster threshold (Lulich et al. ¶ [0040], "If the frequency of the selected feature is equal to the associated rank of the selected feature, the selected feature is designated as a "corner" feature, block 606. Once the corner feature is established, a first set of features having a higher frequency of occurrence than that of the corner feature are identified and a second set of features having a lower frequency of occurrence than that of the corner feature are identified. ... the features included within the first and second sets of features are determined to be salient features." See Fig. 7. Features plotted to the left of the 80-20 range of the salient features (i.e. features with greater frequency than the selected salient features) are considered analogous to universal vectors); generating, for each cluster in the clusters, a set of common [vectors] features in the cluster (Lulich et al. ¶ [0034], "Once a node is selected, previously categorized content corresponding to the selected node and any sub-nodes (i.e. child nodes) is aggregated to form what is referred to as a content class of data, block 304. Similarly, previously categorized content corresponding to non-content nodes are aggregated to form an anti-content class of data, block 306. ... Once the content and anti-content classes of data have been formed, feature sets are created from each respective class of data (e.g., content and anti-content).... In one embodiment, the feature sets are N-gram based feature sets." 
N-gram based features sets are considered analogous to a set of common vectors) by selecting one or more vectors of the cluster having an intra-cluster occurrence frequency (Lulich et al. ¶ [0038], "Sliding a 3-character wide window one character at a time across character string 512, results in the construction of a list of thirty-four unique 3-chracter strings each having a frequency of occurrence ranging from one to four." The intra-cluster admittance threshold frequency can be any value, including 0 or 1. Thus, using a sliding window to select N-grams, where each N-gram must occur at least once, is considered analogous to selecting one or more vectors of the cluster having an intra-cluster occurrence frequency that is greater than an intra-cluster admittance threshold.) [that is greater than an intra-cluster admittance threshold]; generating, for each cluster in the clusters, a set of unique [vectors] features by removing the universal [vectors] features from the set of common [vectors] features (Lulich et al. ¶ [0044], "In FIG. 7, a plot of feature frequency values versus feature rank values is shown. In the plot, the selected "corner" node is indicated, as is the first set of features located to the left of the corner node, representing twenty percent of the cumulative frequencies to the right of the corner node, and the second set of features located to the right of the "corner" node, representing eighty percent of the cumulative frequencies to the left of the corner node as described above." See Figure 7. Salient features (i.e. features within the 80-20 percentage range of the plot) are considered analogous to a set of unique vectors. Features with greater frequency than the selected salient features are considered analogous to universal vectors. For further support, see ¶ [0045], "salient features may also be determined and selected from the content and anti-content classes of data by eliminating a certain number of the most frequent features." 
A set of most frequent features may also be considered analogous to a set of universal vectors, since both are generated based on ranking features' frequencies of occurrence.); and generating a set of archetype [vectors] features by (i) determining a set of unique [vectors] features by removing the universal [vectors] features from the set of common [vectors] features (Lulich et al. ¶ [0044], "In FIG. 7, a plot of feature frequency values versus feature rank values is shown. In the plot, the selected "corner" node is indicated, as is the first set of features located to the left of the corner node, representing twenty percent of the cumulative frequencies to the right of the corner node, and the second set of features located to the right of the "corner" node, representing eighty percent of the cumulative frequencies to the left of the corner node as described above." ¶ [0045], "salient features may also be determined and selected from the content and anti-content classes of data by eliminating a certain number of the most frequent features." See Figure 7. A set of most frequent features may also be considered analogous to a set of universal vectors) and (ii) determining the set of archetype [vectors] features based on the set of unique [vectors] features without using [vectors] features of the universal [vectors] features (Lulich et al. ¶ [0040], "If the frequency of the selected feature is equal to the associated rank of the selected feature, the selected feature is designated as a "corner" feature, block 606. Once the corner feature is established, a first set of features having a higher frequency of occurrence than that of the corner feature are identified and a second set of features having a lower frequency of occurrence than that of the corner feature are identified." A corner feature is considered analogous to an archetype vector). "VLCP: A High-Performance FPGA-based CNN Accelerator with Vector-level Cluster Pruning" (Ran et al.) 
disclose clustering the earlier plurality of embeddings using a clustering model, to generate clusters (Ran et al. pg. 3, Section 2.2, Paragraph 1, "For a tensor with shape C × R × S , the C kernels are first grouped into clusters, each containing 𝑀 adjacent kernels. Each cluster then has 𝑀×𝑅 vectors."); and generating universal vectors by ranking real-value segments of the clusters according to [occurrence frequencies] a metric (Ran et al. pg. 3, Section 2.2, Paragraph 1, "Next, the vectors with the smallest L1 norms are pruned" Vectors having the smallest L1 norms imply ranking real-value segments of the clusters according to a metric.), wherein each vector of the universal vectors comprises a real-value segment associated with [an occurrence frequency] a metric greater than an inter-cluster threshold (Ran et al. pg. 3, Section 2.2, Paragraph 1, "Each cluster then has M × R vectors. Next, the vectors with the smallest L1 norms are pruned by a predefined value of 𝑃, which can be calculated as the product of the target sparsity 𝑠 and the number of vectors in a cluster, M × R " Target sparsity s and number of vectors M × R are both cluster-independent variables. Thus, predefined value P is considered analogous to an inter-cluster threshold).

There is prior art that shows creating subsets of vectors; this is well-known in the art. However, no prior art, either alone or in combination, teaches or makes obvious the combination of limitations as recited in the independent claims. The limitation of “generating a one or more archetype vectors by (i) determining a one or more unique vectors by removing the one or more universal vectors from the one or more common vectors and (ii) determining the one or more archetype vectors based on the one or more unique vectors” is not taught by the prior art of record. Further, none of the references teaches or fairly suggests the combination of claimed elements.
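The Lulich "corner feature" selection quoted above (the feature whose occurrence frequency equals its rank, splitting the ranked list into higher- and lower-frequency sets) can be sketched as below; the exact-equality test is an assumption, since real frequency data may need the nearest crossing instead:

```python
from collections import Counter

def corner_split(tokens):
    """Rank features by occurrence frequency, find the "corner" feature whose
    frequency equals its rank, and split the remaining features into a
    higher-frequency set and a lower-frequency set."""
    counts = Counter(tokens)
    ranked = [f for f, _ in counts.most_common()]  # descending frequency
    for rank, feat in enumerate(ranked, start=1):
        if counts[feat] == rank:
            # Features more frequent than the corner (the set the Office
            # Action maps to universal vectors) vs. less frequent features.
            return feat, ranked[:rank - 1], ranked[rank:]
    return None, ranked, []  # no exact corner in this data
```

With token frequencies 5, 3, 3, 1 the corner lands on the third-ranked feature (frequency 3 at rank 3), and the two more-frequent features form the higher-frequency set.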
The Examiner finds no reason or motivation to combine the above references in an obviousness rejection, thus placing the application in condition for allowance. Claims 2 and 21 indicate allowable subject matter by analogy to claim 1. Claims 4-9, 11-13, and 22-27 indicate allowable subject matter by dependence upon claims 1, 2, and 21.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT whose telephone number is (571)272-7028. The examiner can normally be reached Monday - Friday 9:30am - 7pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Paras D Shah, can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACOB B VOGT/ Examiner, Art Unit 2653
/Paras D Shah/ Supervisory Patent Examiner, Art Unit 2653
03/17/2026

Prosecution Timeline

Feb 20, 2024: Application Filed
Oct 18, 2025: Non-Final Rejection — §101
Dec 11, 2025: Interview Requested
Dec 19, 2025: Applicant Interview (Telephonic)
Dec 22, 2025: Examiner Interview Summary
Jan 16, 2026: Response Filed
Mar 17, 2026: Final Rejection — §101
Apr 15, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12505279
METHOD AND SYSTEM FOR DOMAIN ADAPTATION OF SOCIAL MEDIA TEXT USING LEXICAL DATA TRANSFORMATIONS
Granted Dec 23, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 57%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 10m
PTA Risk: Moderate
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
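The headline figures above follow from simple ratios; a sketch of the arithmetic, where the with/without-interview split is a hypothetical (the page does not disclose it):

```python
def allow_rate(granted, resolved):
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

def interview_lift(rate_with, rate_without):
    """Relative lift of the with-interview allow rate over the baseline."""
    return (rate_with - rate_without) / rate_without

# 4 granted of 7 resolved gives the 57% career allow rate shown above.
rate = allow_rate(4, 7)

# A +100% lift means the with-interview rate is double the baseline,
# e.g. a hypothetical 0.8 with interviews vs 0.4 without.
lift = interview_lift(0.8, 0.4)
```

How the dashboard caps the resulting with-interview probability at 99% is its own modeling choice, not shown here.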
