Prosecution Insights
Last updated: April 19, 2026
Application No. 19/052,200

Systems and Methods for Updating a Validation Sample

Non-Final OA: §101, §103
Filed: Feb 12, 2025
Examiner: ALLEN, BRITTANY N
Art Unit: 2169
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Relativity Oda LLC
OA Round: 1 (Non-Final)
Grant Probability: 42% (Moderate)
OA Rounds: 1-2
To Grant: 4y 8m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 42% (163 granted / 391 resolved; -13.3% vs TC avg)
Interview Lift: +37.7% (strong), comparing allowance among resolved cases with vs. without an interview
Avg Prosecution: 4y 8m typical timeline; 31 applications currently pending
Total Applications: 422 across all art units (career history)
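The headline figures above can be reproduced with simple percentage arithmetic. A minimal sketch, assuming the counts shown on this page; the function names (`allow_rate`, `interview_lift`) and the with/without-interview split are illustrative, since the page reports only the aggregate lift:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gap in allowance rate, interviewed vs. not."""
    return rate_with - rate_without

career = allow_rate(163, 391)        # 163/391, about 41.7%, shown rounded as 42%
lift = interview_lift(79.0, 41.7)    # hypothetical split, about +37.3 points
```

Note that 163/391 is about 41.7%, which the card rounds to 42%.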

Statute-Specific Performance

§101: 17.5% (-22.5% vs TC avg)
§103: 52.8% (+12.8% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)

Tech Center average values are estimates • Based on career data from 391 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Remarks

This action is in response to the application received on 2/12/25. Claims 1-20 are pending in the application. Claims 1-20 are rejected under 35 U.S.C. 101. Claims 1-6, 8-11, 13-17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Peraud et al. (US 2022/0230089), and further in view of Lewis et al. (US 9,367,814). Claims 7, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Peraud in view of Lewis, and further in view of Simard et al. (US 2023/0315773).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 2A, Prong One asks: Is the claim directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea? See MPEP 2106.04 Part I. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. See MPEP 2106.04(a).
With respect to claims 1, 19, and 20, the limitation of “generating, via the one or more processors, a request for documents from the corpus of documents based on the first issue or the new issue” and “generating, via the one or more processors, a prompt based on the prompt criteria”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting “via the one or more processors,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “via the one or more processors” language, “generating” in the context of this claim encompasses the user mentally determining data that is needed.

Similarly, the limitation of “classifying, via the one or more processors, documents within the initial set of documents by inputting the prompt and the documents within the initial set of documents into the generative AI model”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. For example, but for the “via the one or more processors” language, “classifying” in the context of this claim encompasses the user mentally determining a group of similar documents.

The limitation of “evaluating, via the one or more processors, classification performance of the prompt”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components. For example, but for the “via the one or more processors” language, “evaluating” in the context of this claim encompasses the user mentally determining if the initial data for forming groups of documents is sufficient.
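For orientation, the claim-1 workflow the rejection walks through (generate a prompt from prompt criteria, classify documents by submitting the prompt and documents to a generative AI model, then evaluate classification performance) can be sketched as below. This is a hypothetical illustration, not the applicant's implementation; `classify_with_model` is a stand-in stub (a trivial keyword match) for whatever generative model the application actually uses:

```python
from collections import Counter

def classify_with_model(prompt: str, document: str) -> str:
    """Hypothetical stand-in for a generative AI model call: here, a
    trivial keyword match against the issues listed in the prompt."""
    for issue in prompt.split(", "):
        if issue in document:
            return issue
    return "unclassified"

def run_validation_round(documents, prompt_criteria):
    """Generate a prompt from the criteria, classify each document, and
    return per-issue counts so coverage gaps can be spotted."""
    prompt = ", ".join(prompt_criteria)              # generate the prompt
    labels = [classify_with_model(prompt, d) for d in documents]
    return Counter(labels)                           # evaluate performance

counts = run_validation_round(
    ["email about billing", "memo about billing", "note about outage"],
    ["billing", "outage"],
)
```

An issue with few or no counted documents would signal, per the claim, that the initial set does not include enough documents for that issue.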
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

At Step 2A, Prong Two, this judicial exception is not integrated into a practical application. The claims recite a processor to execute the operations and a generative artificial intelligence model; however, these are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using generic computer components. Additionally, the claim recites “obtaining, via one or more processors, an initial set of documents,” “obtaining, via the one or more processors, prompt criteria,” “obtaining, via the one or more processors, documents responsive to the request for documents,” and “adding, via the one or more processors, the obtained documents to the initial set of documents.” These elements do not integrate the abstract idea into a practical application because they do not impose a meaningful limit on the judicial exception and provide only insignificant extra-solution activity that is mere data gathering in conjunction with the abstract idea.

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
With respect to “obtaining, via one or more processors, an initial set of documents,” “obtaining, via the one or more processors, prompt criteria,” and “obtaining, via the one or more processors, documents responsive to the request for documents,” the courts have found limitations directed towards data gathering to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II): receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). With respect to “adding, via the one or more processors, the obtained documents to the initial set of documents”, the courts have found limitations directed towards storing to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II): electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"), and storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).

Considering the additional elements individually and in combination and the claim as a whole, the additional elements do not provide significantly more than the abstract idea. The claim is not patent eligible.

With respect to claims 2, 4, 10, and 16, the limitations further define elements of the mental process and do not impose a meaningful limit on the judicial exception.

With respect to claim 3, the limitations directed towards “identifying” and “generating” are further mental process steps that encompass the user analyzing data and determining documents to gather. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claims 5, 7, 8, 11, 12, 15, and 17, the limitations are directed towards further mental steps by “evaluating” data, which encompasses the user analyzing data. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claims 6, 9, and 18, the limitations directed towards “identifying” are further mental process steps that encompass the user analyzing data and determining documents to gather. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

With respect to claim 13, the limitations directed towards “obtaining” and “applying” provide additional elements; however, these elements do not integrate the abstract idea into a practical application because they do not impose a meaningful limit on the judicial exception. They provide only insignificant extra-solution activity that is mere data gathering in conjunction with the abstract idea. With respect to “obtaining,” the courts have found limitations directed towards data gathering to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II): receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information). With respect to “applying”, the courts have found limitations directed towards storing to be well-understood, routine, and conventional. See MPEP 2106.05(d)(II): electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"), and storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
With respect to claim 14, the limitations directed towards “classifying” are further mental process steps that encompass the user mentally determining a group of similar documents. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 8-11, 13-17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Peraud et al. (US 2022/0230089), and further in view of Lewis et al. (US 9,367,814).

With respect to claim 1, Peraud teaches a computer-implemented method for using a generative artificial intelligence (AI) model to classify documents, the method comprising: obtaining, via one or more processors, an initial set of documents from a corpus of documents (Peraud, pa 0151, sources 704 may provide feedback 706 from users or customers & pa 0168, In act 902, datasets may be received. For example, feedback may be received from users through sources. User feedback may include free-form natural language in one or more formats, such as text, audio, video, etc.); obtaining, via the one or more processors, prompt criteria … (Peraud, pa 0152, The feedback 706 received from the source 704 may be converted into new verbatims 708 that the trained model 702 has yet to analyze.
As with the verbatims 106 used for training, the new verbatims 708 may be translated … The trained model 702, consistent with the present concepts, can assist in understanding and gaining insight into the new verbatims 708, for example, the issues customers are experiencing, the complaints that customers have, etc.); generating, via the one or more processors, a prompt based on the prompt criteria (Peraud, pa 0152, Next, the new verbatims 708 may be converted into vectors 712 & pa 0169, In act 904, vectors corresponding to the datasets may be generated.); classifying, via the one or more processors, documents within the initial set of documents by inputting the prompt and the documents within the initial set of documents into the generative AI model (Peraud, pa 0154, The classification model 714 may take a new vector 712 that represents a new verbatim 708 and attempt the classification into one or more known topics. The classification model 714 may compare the new verbatim 708 against multiple examples of verbatims that make up the known topics. & pa 0158, if the classification model 714 did not assign the new verbatim 708 into any of the known topics, then the new verbatim 708 may be assigned to an unknown class.); evaluating, via the one or more processors, classification performance of the prompt to identify (i) that the initial set of documents does not include enough documents associated with a first issue of the one or more issues, or (ii) that the corpus of documents is associated with a new issue (Peraud, pa 0158, at inference time, new verbatims that are not assigned to the known topics can be set aside for new topic detection. That is, if the distances for the new verbatim 708 are higher than the threshold, then the new verbatim 708 may be passed into the clustering model 716. Then, the clustering model 716 may attempt to identify where the new verbatim 708 belongs. & pa 0160, the clustering model 716 may use deep clustering techniques to cluster the new verbatims 708 of unknown topic into groups based on similarity of topics, and synthetic labels may be assigned to the new verbatims 708. These machine-labeled new verbatims 708 may then be reviewed or audited by a human (e.g., an SME 718 or other human annotators) who can make a decision on whether certain new verbatims 708 represent a new topic or new expressions of a known topic.).

Peraud doesn't expressly discuss prompt criteria defining an inquiry associated with the corpus of documents, wherein the prompt criteria define one or more issues associated with the corpus of documents. Lewis teaches prompt criteria defining an inquiry associated with the corpus of documents, wherein the prompt criteria define one or more issues associated with the corpus of documents (Lewis, Col. 6 Li. 42-48, The clustering module 125 can define clusters corresponding to such unplanned information (e.g., words), and associate documents with corresponding clusters. For example, the clustering module 125 may identify one or more words or phrases, such as "inbox" and "capacity" that are common to at least some of the documents.); generating, via the one or more processors, a request for documents from the corpus of documents based on the first issue or the new issue (Lewis, Col. 11 Li. 27-31, classification manager 205 applies document classifiers A, B, and C to portions of data associated with corpus 110, including, e.g., unclassified documents.); obtaining, via the one or more processors, documents responsive to the request for documents (Lewis, Col. 11 Li. 31-38, Based on an application of document classifiers 215 (e.g., document classifiers A, B, and C), classification manager 205 generates classified documents 230 by associating an unclassified Document 11 with label A, associating an unclassified Document 12 with label B, and associating an unclassified Document 13 with label B.); and adding, via the one or more processors, the obtained documents to the initial set of documents (Lewis, Col. 12 Li. 39-43, confidence score generator 220 may be configured to assign classifications to documents when a confidence score associated with the classification (for a particular document) is greater than a pre-defined confidence threshold, including, e.g., a 90% confidence threshold. Examiner note: by assigning classifications to documents, they become a part of the previously created cluster for labels A, B, or C). It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Peraud with the teachings of Lewis because it ensures that documents of the corpus are considered as the corpus is updated with new information (Lewis, Col. 4 Li. 23-42).

With respect to claim 2, Peraud in view of Lewis teaches the computer-implemented method of claim 1, wherein the corpus of documents is associated with a vector space (Peraud, pa 0144, distances between the raw vectors and the seed vectors may be calculated. These may be Euclidean distances in a vector space.).

With respect to claim 3, Peraud in view of Lewis teaches the computer-implemented method of claim 2, wherein generating the request for documents comprises: identifying, via the one or more processors, one or more key terms associated with (1) the first issue of the one or more issues or (2) the new issue associated with the corpus of documents; and generating, via the one or more processors, the request for documents based on the one or more key terms (Lewis, Col. 10 Li. 3-12, Training module 210 associates document classifier A with label A and the attributes of label A. In particular, document classifier A is trained to recognize appropriate documents to associate with label A, namely documents including data that matches the attributes associated with label A. Training module 210 similarly associates document classifiers B and C with labels B and C, and the attributes of labels B and C, respectively. & Col. 11 Li. 27-38, classification manager 205 uses labels A, B, and C to generate document classifiers A, B, and C. In this example, classification manager 205 applies document classifiers A, B, and C to portions of data associated with corpus 110, including, e.g., unclassified documents. Based on an application of document classifiers 215 (e.g., document classifiers A, B, and C), classification manager 205 generates classified documents 230 by associating an unclassified Document 11 with label A, associating an unclassified Document 12 with label B, and associating an unclassified Document 13 with label B.).

With respect to claim 4, Peraud in view of Lewis teaches the computer-implemented method of claim 3, wherein the one or more key terms are associated with one or more entities (Lewis, Col. 5 Li. 48-51, users of a product (illustrated as users 102, 103 and 104) provide information, such as complaints, comments relating to the product, etc. that forms at least a portion of the document corpus 110.).

With respect to claim 5, Peraud in view of Lewis teaches the computer-implemented method of claim 2, wherein evaluating classification performance of the prompt comprises: evaluating, via the one or more processors, the vector space to identify one or more clusters of documents (Peraud, pa 0144, distances between the raw vectors and the seed vectors may be calculated. These may be Euclidean distances in a vector space.).

With respect to claim 6, Peraud in view of Lewis teaches the computer-implemented method of claim 5, wherein evaluating classification performance of the prompt further comprises: identifying, via the one or more processors, one or more respective clusters of documents in the vector space associated with one or more misclassifications of documents of the initial set of documents (Peraud, pa 0144, distances between the raw vectors and the seed vectors may be calculated. These may be Euclidean distances in a vector space. & pa 0159, collect all new verbatims 708 that were classified as an unknown topic, and these new verbatims 708 may be sent to the clustering model 716.).

With respect to claim 8, Peraud in view of Lewis teaches the computer-implemented method of claim 6, wherein identifying that the corpus of documents is associated with the new issue comprises: evaluating, via the one or more processors, the one or more respective clusters of documents associated with the one or more misclassifications of documents and the prompt criteria to identify that at least one cluster of documents is not associated with at least one issue of the one or more issues (Lewis, Col. 11 Li. 63-66, Classification manager 205 analyzes the confidence scores associated with the classified documents to determine which classification to associate with the unclassified cluster.).

With respect to claim 9, Peraud in view of Lewis teaches the computer-implemented method of claim 5, wherein evaluating classification performance of the prompt further comprises: identifying, via the one or more processors, one or more respective clusters of documents in the vector space associated with one or more low-confidence classifications of documents of the initial set of documents (Peraud, pa 0159, The new verbatims 708 that were not classified into a known topic (i.e., were classified into the unknown topic category) may be processed by the clustering model 716 & Lewis, Col. 13 Li.
55-62, Classification manager 205 (e.g., using training module 210) selects 325 one or more classified documents 230 that are associated with a classification confidence level below a predetermined threshold value (e.g., 50%, 60%, or 75%) to create a set of low-confidence documents. Classification manager 205 declassifies 330 the low-confidence documents by disassociating the low-confidence documents from each of the associated classifications.).

With respect to claim 10, Peraud in view of Lewis teaches the computer-implemented method of claim 9, wherein the one or more low-confidence classifications of documents are one or more of: weak classifications of documents, or documents with no classifications (Peraud, pa 0159, The new verbatims 708 that were not classified into a known topic (i.e., were classified into the unknown topic category) may be processed by the clustering model 716 & Lewis, Col. 13 Li. 55-62, Classification manager 205 (e.g., using training module 210) selects 325 one or more classified documents 230 that are associated with a classification confidence level below a predetermined threshold value (e.g., 50%, 60%, or 75%) to create a set of low-confidence documents.).

With respect to claim 11, Peraud in view of Lewis teaches the computer-implemented method of claim 1, wherein evaluating classification performance of the prompt includes: generating, via the one or more processors, one or more respective statistical metrics for each of the one or more issues, wherein the one or more respective statistical metrics each include one or more of: an accuracy metric (Peraud, pa 0148, In act 618, a classification accuracy may be calculated and checked against the classification accuracy threshold,), a precision metric, a recall metric, or an elusion metric.
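The metrics recited in claim 11 are standard classification measures; elusion, common in e-discovery validation, is the rate of relevant documents among those the classifier did not flag. A minimal sketch of the four metrics from a binary confusion matrix; the counts are illustrative only:

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, precision, recall, and elusion from a binary confusion
    matrix. Elusion = fraction of negative-predicted (discarded)
    documents that are actually relevant."""
    return {
        "accuracy":  (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "recall":    tp / (tp + fn),
        "elusion":   fn / (fn + tn),
    }

# illustrative counts: 80 true positives, 20 false positives,
# 890 true negatives, 10 relevant documents missed
m = classification_metrics(tp=80, fp=20, tn=890, fn=10)
```

A low elusion value is evidence that few relevant documents escaped the classifier, which is why it appears alongside precision and recall in validation workflows.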
With respect to claim 13, Peraud in view of Lewis teaches the computer-implemented method of claim 1, wherein evaluating classification performance of the prompt comprises: obtaining, via the one or more processors, review data associated with the initial set of documents including ground truth data associated the one or more issues (Peraud, pa 0142, a human may define a taxonomy of known topics that she wishes to infer, track, and monitor. The set of classes may include sets of seed datasets belonging to the set of classes. The seed datasets may have ground-truth labels (or definite labels) corresponding to the set of classes, assigned by the SME.); and applying, via the one or more processors, the review data to determine classification performance of the prompt with respect to the one or more issues (Peraud, pa 0145, In act 610, pseudo labels corresponding to the classes may be assigned to the raw datasets based on the distances calculated in act 608.).

With respect to claim 14, Peraud in view of Lewis teaches the computer-implemented method of claim 1, further comprising: classifying, via the one or more processors, the obtained documents responsive to the request for documents by inputting the prompt and the obtained documents into the generative AI model (Peraud, pa 0172, the classifier model can be trained to identify datasets related to the new class in the future.).

With respect to claim 15, Peraud in view of Lewis teaches the computer-implemented method of claim 14, further comprising: evaluating, via the one or more processors, classification performance of the prompt based on ground truth data associated with the obtained documents (Peraud, pa 0142, a human may define a taxonomy of known topics that she wishes to infer, track, and monitor. The set of classes may include sets of seed datasets belonging to the set of classes. The seed datasets may have ground-truth labels (or definite labels) corresponding to the set of classes, assigned by the SME. & pa 0145, In act 610, pseudo labels corresponding to the classes may be assigned to the raw datasets based on the distances calculated in act 608.).

With respect to claim 16, Peraud in view of Lewis teaches the computer-implemented method of claim 1, wherein the one or more issues are associated a knowledge graph of facts (Peraud, pa 0142, a human may define a taxonomy of known topics that she wishes to infer, track, and monitor. The set of classes may include sets of seed datasets belonging to the set of classes. Examiner note: a taxonomy serves the same purpose here as a knowledge graph and are both well known elements for storing data).

With respect to claim 17, Peraud in view of Lewis teaches the computer-implemented method of claim 16, further comprising: evaluating, via the one or more processors, the knowledge graph of facts and the prompt criteria to identify the new issue (Peraud, pa 0142, a human may define a taxonomy of known topics that she wishes to infer, track, and monitor. The set of classes may include sets of seed datasets belonging to the set of classes. The seed datasets may have ground-truth labels (or definite labels) corresponding to the set of classes, assigned by the SME. & pa 0145, In act 610, pseudo labels corresponding to the classes may be assigned to the raw datasets based on the distances calculated in act 608. & pa 0160, may use deep clustering techniques to cluster the new verbatims 708 of unknown topic into groups based on similarity of topics, and synthetic labels may be assigned to the new verbatims 708. These machine-labeled new verbatims 708 may then be reviewed or audited by a human (e.g., an SME 718 or other human annotators) who can make a decision on whether certain new verbatims 708 represent a new topic or new expressions of a known topic.
A new topic identified by the SME 718 may become a new known topic that may be added to the classification model 714, and the new verbatims 708 for the new topic may serve as seed verbatims for the new topic.). With respect to claims 19 and 20, the limitations are essentially the same as claim 1, and are rejected for the same reasons. Claims 7, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Peraud in view of Lewis, and further in view of Simard et al. (US 2023/0315773). With respect to claim 7, Peraud in view of Lewis teaches the computer-implemented method of claim 6, as discussed above. Peraud in view of Lewis doesn't expressly discuss wherein identifying that the initial set of documents does not include enough documents associated with the first issue of the one or more issues comprises: evaluating, via the one or more processors, the one or more respective clusters of documents associated with the one or more misclassifications of documents and the corpus of documents to identify that the initial set of documents does not include enough documents from the one or more respective clusters of documents. Simard teaches wherein identifying that the initial set of documents does not include enough documents associated with the first issue of the one or more issues (Simard, pa 0086-0087, This guide 630 presents recommendations 632, 634 or recommended actions, suggestions, tips, or other information that guides the user in the process of refining the classification model to obtain better results. As shown in this example, a first recommendation 632 is presented along with a second recommendation 636. [0087] As illustrated by way of example in FIG. 
6, recommendation #1 suggests “YOU DO NOT HAVE ENOUGH DOCUMENTS FOR A STATISTICALLY VALID SAMPLE TO OBTAIN YOUR DESIRED CONFIDENCE LEVEL.” A user interface element 634 is displayed to enable the user to remedy the deficiency by clicking on the recommended action (e.g., “CLICK HERE TO ADD MORE DOCUMENTS”).) comprises: evaluating, via the one or more processors, the one or more respective clusters of documents associated with the one or more misclassifications of documents and the corpus of documents to identify that the initial set of documents does not include enough documents from the one or more respective clusters of documents (Simard, pa 0079, A test run may be performed on a small training corpus of documents that the user selects for this purpose. Based on a review collection, the system displays classification metrics on a metric panel from this test run to provide the user with feedback on the accuracy of the model. The metrics displayed in the metrics panel enable the user to optimize a model's accuracy.). It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Peraud with the teachings of Simard because it provides effective feedback for accurately classifying data (Simard, pa 0081-0082). With respect to claim 12, Peraud in view of Lewis teaches the computer-implemented method of claim 11, as discussed above. 
Peraud in view of Lewis does not expressly discuss wherein identifying that the initial set of documents does not include enough documents associated with the first issue of the one or more issues comprises: evaluating, via the one or more processors, one or more statistical metrics of the one or more respective statistical metrics associated with the first issue to identify that at least one statistical metric is statistically insignificant based on the amount of documents of the initial set of documents associated with the first issue.

Simard teaches wherein identifying that the initial set of documents does not include enough documents associated with the first issue of the one or more issues (Simard, pa 0086-0087, This guide 630 presents recommendations 632, 634 or recommended actions, suggestions, tips, or other information that guides the user in the process of refining the classification model to obtain better results. As shown in this example, a first recommendation 632 is presented along with a second recommendation 636. [0087] As illustrated by way of example in FIG. 6, recommendation #1 suggests “YOU DO NOT HAVE ENOUGH DOCUMENTS FOR A STATISTICALLY VALID SAMPLE TO OBTAIN YOUR DESIRED CONFIDENCE LEVEL.” A user interface element 634 is displayed to enable the user to remedy the deficiency by clicking on the recommended action (e.g., “CLICK HERE TO ADD MORE DOCUMENTS”).) comprises: evaluating, via the one or more processors, one or more statistical metrics of the one or more respective statistical metrics associated with the first issue to identify that at least one statistical metric is statistically insignificant based on the amount of documents of the initial set of documents associated with the first issue (Simard, pa 0081, the dynamic user feedback guide may be activated and displayed only when the accuracy falls below a predetermined threshold. For example, if the recall and/or precision values are below a predetermined threshold, the guide may be activated and displayed onscreen.).

It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Peraud with the teachings of Simard because it provides effective feedback for accurately classifying data (Simard, pa 0081-0082).

With respect to claim 18, Peraud in view of Lewis teaches the computer-implemented method of claim 16, as discussed above. Peraud in view of Lewis does not expressly discuss wherein identifying that the initial set of documents does not include enough documents associated with a first issue of the one or more issues comprises: identifying, via the one or more processors, one or more regions of the knowledge graph of facts associated with one or more misclassifications of documents of the initial set of documents and the first issue of the one or more issues.

Simard teaches wherein identifying that the initial set of documents does not include enough documents associated with a first issue of the one or more issues (Simard, pa 0086-0087, This guide 630 presents recommendations 632, 634 or recommended actions, suggestions, tips, or other information that guides the user in the process of refining the classification model to obtain better results. As shown in this example, a first recommendation 632 is presented along with a second recommendation 636. [0087] As illustrated by way of example in FIG. 6, recommendation #1 suggests “YOU DO NOT HAVE ENOUGH DOCUMENTS FOR A STATISTICALLY VALID SAMPLE TO OBTAIN YOUR DESIRED CONFIDENCE LEVEL.” A user interface element 634 is displayed to enable the user to remedy the deficiency by clicking on the recommended action (e.g., “CLICK HERE TO ADD MORE DOCUMENTS”).)
comprises: identifying, via the one or more processors, one or more regions of the knowledge graph of facts associated with one or more misclassifications of documents of the initial set of documents and the first issue of the one or more issues (Peraud, pa 0142, a human may define a taxonomy of known topics that she wishes to infer, track, and monitor. The set of classes may include sets of seed datasets belonging to the set of classes. The seed datasets may have ground-truth labels (or definite labels) corresponding to the set of classes, assigned by the SME. & pa 0145, In act 610, pseudo labels corresponding to the classes may be assigned to the raw datasets based on the distances calculated in act 608.).

It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Peraud with the teachings of Simard because it provides effective feedback for accurately classifying data (Simard, pa 0081-0082).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRITTANY N ALLEN, whose telephone number is (571) 270-3566. The examiner can normally be reached M-F, 9:00 am - 5:00 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/BRITTANY N ALLEN/
Primary Examiner, Art Unit 2169
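The Simard passages the examiner relies on describe two simple checks: flagging when the validation sample is too small to support the desired confidence level, and activating the feedback guide when precision and/or recall fall below a predetermined threshold. As a minimal sketch of that logic (not code from any of the cited references; the function names, the z-score table, and the default thresholds are illustrative assumptions), the standard normal-approximation sample-size formula with an optional finite-population correction could be used:

```python
import math

# Two-sided z-scores for common confidence levels (illustrative table).
Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def required_sample_size(confidence, margin_of_error, population=None, p=0.5):
    """Minimum validation-sample size for the desired confidence level.

    Uses n0 = z^2 * p * (1 - p) / E^2, with an optional finite-population
    correction when the corpus holds only `population` documents.
    p = 0.5 is the worst case (largest required sample).
    """
    z = Z[confidence]
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

def sample_is_statistically_valid(sample_size, confidence, margin_of_error,
                                  population=None):
    """True when the initial set already holds enough documents."""
    return sample_size >= required_sample_size(confidence, margin_of_error,
                                               population)

def should_show_guide(precision, recall, threshold=0.8):
    """Threshold gate as in Simard pa 0081: activate the feedback guide
    when precision and/or recall fall below a predetermined threshold."""
    return precision < threshold or recall < threshold
```

For example, a 95% confidence level with a 5% margin of error requires 385 documents (278 if the whole corpus holds only 1,000), so an initial set of 300 would trigger the "ADD MORE DOCUMENTS" recommendation in the cited example.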

Prosecution Timeline

Feb 12, 2025
Application Filed
Feb 18, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585707
SYSTEMS AND METHODS FOR DOCUMENT ANALYSIS TO PRODUCE, CONSUME AND ANALYZE CONTENT-BY-EXAMPLE LOGS FOR DOCUMENTS
2y 5m to grant • Granted Mar 24, 2026
Patent 12561342
MULTI-REGION DATABASE SYSTEMS AND METHODS
2y 5m to grant • Granted Feb 24, 2026
Patent 12530391
Digital Duplicate
2y 5m to grant • Granted Jan 20, 2026
Patent 12524389
ENTERPRISE ENGINEERING AND CONFIGURATION FRAMEWORK FOR ADVANCED PROCESS CONTROL AND MONITORING SYSTEMS
2y 5m to grant • Granted Jan 13, 2026
Patent 12524475
CONCEPTUAL CALCULATOR SYSTEM AND METHOD
2y 5m to grant • Granted Jan 13, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
42%
Grant Probability
79%
With Interview (+37.7%)
4y 8m
Median Time to Grant
Low
PTA Risk
Based on 391 resolved cases by this examiner. Grant probability derived from career allow rate.
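The projections above are simple ratios over the examiner's career data: 163 grants out of 391 resolved cases rounds to the 42% shown, and the interview lift is the percentage-point gap between the with-interview and without-interview grant rates. A minimal sketch, assuming the page computes them this way; the with/without-interview rates used below are illustrative stand-ins, not figures from the underlying dataset:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that granted."""
    return granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Interview lift in percentage points."""
    return (rate_with - rate_without) * 100

# Figures from the examiner panel: 163 granted of 391 resolved.
career = allow_rate(163, 391)        # ~0.417, displayed as 42%
lift = interview_lift(0.794, 0.417)  # illustrative split, ~37.7 points
```

Note that the displayed 79% "With Interview" figure is consistent with adding the 37.7-point lift to the ~41.7% career rate.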
