Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (such as a natural phenomenon, abstract idea, or law of nature) without significantly more and without integration into a practical application, specifically for one or more of the following reasons:
1) Not integrating a judicial exception into a practical application (see explanation below), and
2) Not reciting elements that would amount to significantly more than the judicial exception (see explanation below).
Accordingly, claims 1-19 are directed to patent-ineligible subject matter under 35 U.S.C. 101.
The independent claims:
Taking the current claim limitations of the present invention as a whole, they are directed to parsing or analyzing text in multiple documents for review and then classifying the documents based on a description and historical data.
Regarding the claim limitations of the independent claim(s) as recited:
parsing text of a document review protocol in order to extract at least one description of at least one concept to be tagged; tagging at least a portion of a plurality of documents in order to create a plurality of tagged documents by applying a language model to the plurality of documents, wherein tagging the at least a portion of the plurality of documents further comprises querying the language model using at least one query generated based on the extracted at least one description; constructing at least one classifier machine learning model based on the extracted at least one description; and training the at least one classifier machine learning model using a training set, wherein the training set includes the plurality of tagged documents.
Step 1: IS THE CLAIM DIRECTED TO A PROCESS, MACHINE, MANUFACTURE OR COMPOSITION OF MATTER?
Yes
Step 2A.1: IS THE CLAIM DIRECTED TO A LAW OF NATURE, A NATURAL PHENOMENON (PRODUCT OF NATURE) OR AN ABSTRACT IDEA?
Yes
Step 2A.2: DOES THE CLAIM RECITE ADDITIONAL ELEMENTS THAT INTEGRATE THE JUDICIAL EXCEPTION INTO A PRACTICAL APPLICATION?
Regarding the independent claims: No. Analogous to Solutran, Inc. v. Elavon, Inc., 931 F.3d 1161, 2019 USPQ2d 281076 (Fed. Cir. 2019), the claims are directed to general document analysis and lack a clear improvement to computer function or technology within the scope of parsing or analyzing text in multiple documents for review and then classifying the documents based on a description and historical data; they therefore lack a practical application.
Further as demonstrated in Solutran, Inc. v. Elavon, Inc., 931 F.3d 1161, 2019 USPQ2d 281076 (Fed. Cir. 2019), the claims were to methods for electronically processing paper checks, all of which contained limitations setting forth receiving merchant transaction data from a merchant, crediting a merchant’s account, and receiving and scanning paper checks after the merchant’s account is credited. In part one of the Alice/Mayo test, the Federal Circuit determined that the claims were directed to the abstract idea of crediting the merchant’s account before the paper check is scanned. The court first determined that the recited limitations of “crediting a merchant’s account as early as possible while electronically processing a check” is a “long-standing commercial practice” like in Alice and Bilski. 931 F.3d at 1167, 2019 USPQ2d 281076, at *5 (Fed. Cir. 2019). The Federal Circuit then continued with its analysis under part one of the Alice/Mayo test finding that the claims are not directed to an improvement in the functioning of a computer or an improvement to another technology. In particular, the court determined that the claims “did not improve the technical capture of information from a check to create a digital file or the technical step of electronically crediting a bank account” nor did the claims “improve how a check is scanned.” Id.
Regarding the December 5, 2025 Memo, in light of the September 26, 2025 Appeals Review Panel decision in Ex parte Desjardins, Appeal 2024-000567 for Application 16/319,040, which addresses whether a recited abstract idea directs the entire claim to an abstract idea when the claim is considered as a whole:
The claim which demonstrated improvements to technology and/or function recites: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task."
The decision recites that “We are persuaded that constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation.”
When considering the limitation decided upon, there are clear improvements to machine learning that are not rudimentary or a long-standing practice. For instance, adjusting for optimization and protection of performance, as claimed, constitutes an improvement to a machine learning model's operations, not simply a general mathematical or generic recitation, but rather an improvement to function.
Specifically, Ex Parte Desjardins explained the following:
Enfish ranks among the Federal Circuit's leading cases on the eligibility of technological improvements. In particular, Enfish recognized that “[m]uch of the advancement made in computer technology consists of improvements to software that, by their very nature, may not be defined by particular physical features but rather by logical structures and processes.” 822 F.3d at 1339. Moreover, because “[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can,” the Federal Circuit held that the eligibility determinations should turn on whether “the claims are directed to an improvement to computer functionality versus being directed to an abstract idea.” Id. at 1336. (Desjardins, page 8).
Further, specifically:
“Paragraph 21 of the Specification, which the Appellant cites, identifies improvements in training the machine learning model itself. Of course, such an assertion in the Specification alone is insufficient to support a patent eligibility determination, absent a subsequent determination that the claim itself reflects the disclosed improvement. See MPEP § 2106.05(a) (citing Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1316 (Fed. Cir. 2016)). Here, however, we are persuaded that the claims reflect such an improvement. For example, one improvement identified in the Specification is to "effectively learn new tasks in succession whilst protecting knowledge about previous tasks." Spec. ¶ 21. The Specification also recites that the claimed improvement allows artificial intelligence (AI) systems to "us[e] less of their storage capacity" and enables "reduced system complexity." Id. When evaluating the claim as a whole, we discern at least the following limitation of independent claim 1 that reflects the improvement: "adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task." We are persuaded that constitutes an improvement to how the machine learning model itself operates, and not, for example, the identified mathematical calculation. Under a charitable view, the overbroad reasoning of the original panel below is perhaps understandable given the confusing nature of existing § 101 jurisprudence, but troubling, because this case highlights what is at stake. Categorically excluding AI innovations from patent protection in the United States jeopardizes America's leadership in this critical emerging technology.
Yet, under the panel's reasoning, many AI innovations are potentially unpatentable-even if they are adequately described and nonobvious-because the panel essentially equated any machine learning with an unpatentable "algorithm" and the remaining additional elements as "generic computer components," without adequate explanation. Dec. 24. Examiners and panels should not evaluate claims at such a high level of generality.”
Further, in Ex parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), the claimed invention was a method of training a machine learning model on a series of tasks. The Appeals Review Panel (ARP) credited benefits including reduced storage, reduced system complexity and streamlining, and preservation of performance attributes associated with earlier tasks during subsequent computational tasks as technological improvements disclosed in the patent application specification. Specifically, the ARP upheld the Step 2A Prong One finding that the claims recited an abstract idea (i.e., a mathematical concept). At Step 2A Prong Two, the ARP then determined that the specification identified improvements to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of “catastrophic forgetting” encountered in continual learning systems. Importantly, the ARP evaluated the claims as a whole in discerning that at least the limitation “adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task” reflected the improvement disclosed in the specification. Accordingly, the claims as a whole integrated what would otherwise be a judicial exception into a practical application at Step 2A Prong Two, and therefore the claims were held to be patent eligible.
The claim itself does not need to explicitly recite the improvement described in the specification (e.g., “thereby increasing the bandwidth of the channel”). See, e.g., Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential), in which the specification identified the improvement to machine learning technology by explaining how the machine learning model is trained to learn new tasks while protecting knowledge about previous tasks to overcome the problem of “catastrophic forgetting,” and that the claims reflected the improvement identified in the specification. Indeed, enumerated improvements identified in the Desjardins specification included disclosures of the effective learning of new tasks in succession in connection with specifically protecting knowledge concerning previously accomplished tasks; allowing the system to reduce use of storage capacity; and the enablement of reduced complexity in the system. Such improvements were tantamount to how the machine learning model itself would function in operation and therefore not subsumed in the identified mathematical calculation.
The second paragraph of MPEP § 2106.05(a), subsection I, is revised to add new examples xiii and xiv to the list of examples that may show an improvement in computer functionality:
xiii. An improved way of training a machine learning model that protected the model’s knowledge about previous tasks while allowing it to effectively learn new tasks; Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential); and
xiv. Improvements to computer component or system performance based upon adjustments to parameters of a machine learning model associated with tasks or workstreams; Ex Parte Desjardins, Appeal No. 2024-000567 (PTAB September 26, 2025, Appeals Review Panel Decision) (precedential).
Step 2B: DOES THE CLAIM RECITE ADDITIONAL ELEMENTS THAT AMOUNT TO SIGNIFICANTLY MORE THAN THE JUDICIAL EXCEPTION?
No, the claims amount to a person analyzing text in multiple documents for review and then classifying the documents based on a description and historical data of previous documents in a set, analogous to concepts the courts have identified as abstract, such as:
• Collecting and comparing known information (Classen)
• Collecting information, analyzing it, and displaying certain results of the collection and analysis (Electric Power Group; West View†)
• Comparing new and stored information and using rules to identify options (Smartgene)†
• Data recognition and storage (Content Extraction)
Assistance for Applicant in amending to overcome 101:
Limitations that the courts have found to qualify as “significantly more” when recited in a claim with a judicial exception include:
i. Improvements to the functioning of a computer, e.g., a modification of conventional Internet hyperlink protocol to dynamically produce a dual-source hybrid webpage, as discussed in DDR Holdings, LLC v. Hotels.com, L.P., 773 F.3d 1245, 1258-59, 113 USPQ2d 1097, 1106-07 (Fed. Cir. 2014) (see MPEP § 2106.05(a));
ii. Improvements to any other technology or technical field, e.g., a modification of conventional rubber-molding processes to utilize a thermocouple inside the mold to constantly monitor the temperature and thus reduce under- and over-curing problems common in the art, as discussed in Diamond v. Diehr, 450 U.S. 175, 191-92, 209 USPQ 1, 10 (1981) (see MPEP § 2106.05(a));
iii. Applying the judicial exception with, or by use of, a particular machine, e.g., a Fourdrinier machine (which is understood in the art to have a specific structure comprising a headbox, a paper-making wire, and a series of rolls) that is arranged in a particular way to optimize the speed of the machine while maintaining quality of the formed paper web, as discussed in Eibel Process Co. v. Minn. & Ont. Paper Co., 261 U.S. 45, 64-65 (1923) (see MPEP § 2106.05(b));
iv. Effecting a transformation or reduction of a particular article to a different state or thing, e.g., a process that transforms raw, uncured synthetic rubber into precision-molded synthetic rubber products, as discussed in Diehr, 450 U.S. at 184, 209 USPQ at 21 (see MPEP § 2106.05(c));
v. Adding a specific limitation other than what is well-understood, routine, conventional activity in the field, or adding unconventional steps that confine the claim to a particular useful application, e.g., a non-conventional and non-generic arrangement of various computer components for filtering Internet content, as discussed in BASCOM Global Internet v. AT&T Mobility LLC, 827 F.3d 1341, 1350-51, 119 USPQ2d 1236, 1243 (Fed. Cir. 2016) (see MPEP § 2106.05(d)); or
vi. Other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment, e.g., an immunization step that integrates an abstract idea of data comparison into a specific process of immunizing that lowers the risk that immunized patients will later develop chronic immune-mediated diseases, as discussed in Classen Immunotherapies Inc. v. Biogen IDEC, 659 F.3d 1057, 1066-68, 100 USPQ2d 1492, 1499-1502 (Fed. Cir. 2011) (see MPEP § 2106.05(e)).
To help in amending the claims and for analysis purposes, example claims 3 and 4 from the courts are listed below; however, potential amendments are not limited to the provided examples, and alternative amendments are possible using items i-vi above. The examples below show the differences between an eligible claim (court claim 4) and an ineligible claim (court claim 3), illustrating significantly more that is tied to hardware not generically recited in the art: in this case, a general changing of font size in claim 3 versus a significant step of conditionally changing font size tied to hardware in claim 4.
See below examples based on MPEP and not on the current claim set, to help amend to overcome 101 rejections:
Regarding independent claim examples, see for instance example claims 3 and 4 below:
Ineligible
3. A computer‐implemented method of resizing textual information within a window displayed in a graphical user interface, the method comprising:
(not significant) generating first data for describing the area of a first graphical element;
(not significant) generating second data for describing the area of a second graphical element containing textual information;
(not significant) calculating, by the computer, a scaling factor for the textual information which is proportional to the difference between the first data and second data.
The claim recites that the step of calculating a scaling factor is performed by “the computer” (referencing the computer recited in the preamble). Such a limitation gives “life, meaning and vitality” to the preamble and, therefore, the preamble is construed to further limit the claim. (See MPEP 2111.02.)
However, the mere recitation of “computer‐implemented” is akin to adding the words “apply it” in conjunction with the abstract idea. Such a limitation is not enough to qualify as significantly more. With regards to the graphical user interface limitation, the courts have found that simply limiting the use of the abstract idea to a particular technological environment is not significantly more. (See, e.g., Flook.)
Whereas in similar claim 4:
Eligible
4. A computer‐implemented method for dynamically relocating textual information within an underlying window displayed in a graphical user interface, the method comprising:
displaying a first window containing textual information in a first format within a graphical user interface on a computer screen;
displaying a second window within the graphical user interface;
constantly monitoring the boundaries of the first window and the second window to detect an overlap condition where the second window overlaps the first window such that the textual information in the first window is obscured from a user’s view;
determining the textual information would not be completely viewable if relocated to an unobstructed portion of the first window;
calculating a first measure of the area of the first window and a second measure of the area of the unobstructed portion of the first window;
calculating a scaling factor which is proportional to the difference between the first measure and the second measure;
scaling the textual information based upon the scaling factor;
(significant step) automatically relocating the scaled textual information, by a processor, to the unobscured portion of the first window in a second format during an overlap condition so that the entire scaled textual information is viewable on the computer screen by the user;
(significant step) automatically returning the relocated scaled textual information, by the processor, to the first format within the first window when the overlap condition no longer exists.
These limitations are not merely attempting to limit the mathematical algorithm to a particular technological environment. Instead, these claim limitations recite a specific application of the mathematical algorithm that improves the functioning of the basic display function of the computer itself. As discussed above, the scaling and relocating the textual information in overlapping windows improves the ability of the computer to display information and interact with the user.
The dependent claims are rejected for the same reasoning, as being directed to patent-ineligible subject matter under 35 U.S.C. 101 and as not adding eligible subject matter to their respective parent claims.
Claims 2-9 and 12-19 do not contribute any further technological or functional improvements as practical applications, nor do they demonstrate significantly more. These claims are directed to concept identification, semantically similar user review processes, scoring performed for the purposes of classification or ranking (such as that which a human can perform in lieu of a model), correcting concept identification (tagging), and creating a set of documents based on overall analysis. These claims are grouped together because they are very similar in scope and are rejected for the same reasons as the independent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-19 are rejected under 35 U.S.C. 103 as being unpatentable over US 20220327173 A1 to RAJPARA, Vishalkumar (hereinafter RAJPARA) in view of US 20240135740 A1 to Nakshathri, Srirama R (hereinafter Nakshathri).
Re claim 1, RAJPARA teaches
1. A method for building predictive machine learning models, comprising: (model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
parsing text of a document review protocol in order to extract at least one description of at least one concept to be tagged; (parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038… model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
constructing at least one classifier machine learning model based on the extracted at least one description; and (classification with a model is a classifier per se 0081, training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b, utilizing parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038… model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
training the at least one classifier machine learning model using a training set, wherein the training set includes the plurality of tagged documents. (classification with a model is a classifier per se 0081, training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b, utilizing parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038… model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
However, while RAJPARA teaches learning models and classification, tagging, reviewing documents, and grouping documents thereof, it fails to teach:
tagging at least a portion of a plurality of documents in order to create a plurality of tagged documents by applying a language model to the plurality of documents, wherein tagging the at least a portion of the plurality of documents further comprises querying the language model using at least one query generated based on the extracted at least one description; (Nakshathri: a learning model queries a language model with document tagging and review 0081, with a language model 0090, vectorized 0099)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of RAJPARA to incorporate the above claim limitations as taught by Nakshathri, thereby combining prior art elements according to known methods to yield predictable results: the known use of a language model with a learning model in Nakshathri, combined with the existing learning model of RAJPARA, would cause interaction, retrieval, and querying between the learning model, the user, and the language model as a medium for linguistic purposes, analogous to the meaning extraction in RAJPARA, to at least produce improved context recognition and better grouping of words for descriptions of documents or their respective text.
Re claim 10, this claim is rejected for the same reasoning as claim 1, as it recites a broader or narrower version of claim 1 differing only in the general inclusion or omission of hardware (e.g., processor, memory, instructions), otherwise amounting to a virtually identical scope.
Re claim 11, this claim is rejected for the same reasoning as claim 1, as it recites a broader or narrower version of claim 1 differing only in the general inclusion or omission of hardware (e.g., processor, memory, instructions), otherwise amounting to a virtually identical scope.
Re claims 2 and 12, RAJPARA teaches
2. The method of claim 1, wherein parsing the document review protocol further comprises:
determining a structure of the document review protocol; (assigning rules such as in fig. 6a and 6b)
identifying the at least one concept to be tagged based on the structure of the document review protocol; and (use the structure rules in fig. 6a and 6b, training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b, utilizing parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038… model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
parsing out text indicating at least one rule for identifying the at least one concept to be tagged within the plurality of documents based on the structure of the document review protocol and the identified at least one concept to be tagged; and (utilizing parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038…user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b, model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
extracting the parsed out text. (following completion of search, results shown as in fig. 7, utilizing parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038…user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b, model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
Re claims 3 and 13, RAJPARA teaches
3. The method of claim 1, wherein the at least one concept to be tagged includes at least one case-specific concept indicated in the document review protocol. (identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038… model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077 supplemented with fig. 6a-6b)
Re claims 4 and 14, RAJPARA teaches
4. The method of claim 1, further comprising: determining a score for each document of the plurality of documents based on the extracted at least one description of the at least one concept to be tagged, wherein the determined score for each document represents a likelihood that the document includes text indicating at least a portion of the at least one concept to be tagged. (scoring as in probabilistic weight/scores 0005… classification with a model is a classifier per se 0081, training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b, utilizing parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038… model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
Re claims 5 and 15, RAJPARA teaches
5. The method of claim 1, further comprising: updating the at least one classifier machine learning model based on feedback data until each of at least one performance metric for the at least one classifier machine learning model meets a respective performance threshold. (using a threshold for a review total for instance as a performance metric as user provides feedback for tagging and review 0078… classification with a model is a classifier per se 0081, training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b, utilizing parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038… model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
Re claims 6 and 16, RAJPARA teaches
6. The method of claim 5, wherein the feedback data includes at least one feedback tag for the plurality of documents. (user provides feedback for tagging and review 0078… training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b, utilizing parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038… model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
Re claims 7 and 17, RAJPARA teaches
7. The method of claim 5, wherein the feedback data includes at least one feedback modification to the document review protocol. (modifying parameters as in fig. 5 with 0085 and 0074, as user changes tags, new sets are presented, e.g. fig. 14 new batches come based on tag changes such as subsets of documents e.g. 0009 and 0030… and user provides feedback for tagging and review 0078… training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b, utilizing parsing text to identify a concept/term for tagging that is case or genre specific such as name of person or company fig. 5 with 0038… model is generated/updated based on inputs using document tagging and review of documents thereof 0074-0077)
Re claims 8 and 18, RAJPARA teaches
8. The method of claim 7, wherein each document of the at least a portion of the plurality of documents is tagged with a respective first tag, further comprising: (Nth number of different parameters selected as in fig. 5 with 0085 and 0074, as user changes tags, new sets are presented, e.g. fig. 14 new batches come based on tag changes such as subsets of documents e.g. 0009 and 0030… and user provides feedback for tagging and review 0078… training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b)
determining a second tag for each document of the at least a portion of the plurality of documents based on the at least one feedback modification to the document review protocol; (Nth number of different parameters selected as in fig. 5 with 0085 and 0074, as user changes tags, new sets are presented, e.g. fig. 14 new batches come based on tag changes such as subsets of documents e.g. 0009 and 0030… and user provides feedback for tagging and review 0078… training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b)
identifying at least one first document to be reviewed from among the plurality of documents, wherein the second tag for each first document is different from the first tag for the first document; (Nth number of different parameters selected as in fig. 5 with 0085 and 0074, as user changes tags, new sets are presented, e.g. fig. 14 new batches come based on tag changes such as subsets of documents e.g. 0009 and 0030… and user provides feedback for tagging and review 0078… training using tags, user interaction queries model rules and outputs tags and settings thereof 0058 with fig. 4, 6a, and 6b)
presenting the at least one first document to a user for review; and (fig. 7, presenting results based on the modifications of figs. 5-6b)
re-tagging the plurality of documents based on the review. (modifying parameters as in fig. 5 with 0085 and 0074; as the user changes tags, new sets are presented, e.g., fig. 14, where new batches come based on tag changes such as subsets of documents, e.g., 0009 and 0030… and the user provides feedback for tagging and review, 0078… training using tags, where user interaction queries model rules and outputs tags and settings thereof, 0058 with figs. 4, 6a, and 6b, utilizing parsing of text to identify a concept/term for tagging that is case- or genre-specific, such as a name of a person or company, fig. 5 with 0038… the model is generated/updated based on inputs using document tagging and review of documents thereof, 0074-0077)
Re claims 9 and 19, RAJPARA teaches
9. The method of claim 1, further comprising:
iteratively determining a subset of the plurality of documents to be labeled based on user inputs and querying a user based on the subset of documents determined at each iteration, wherein the user provides the user inputs indicating labels based on the subset of documents queried at each iteration; and (modifying parameters as in fig. 5 with 0085 and 0074; as the user changes tags, new sets or subsets are presented, e.g., a narrower set, and, e.g., fig. 14, new batches come based on tag changes such as subsets of documents, e.g., 0009 and 0030… and the user provides feedback for tagging and review, 0078… training using tags, where user interaction queries model rules and outputs tags and settings thereof, 0058 with figs. 4, 6a, and 6b, utilizing parsing of text to identify a concept/term for tagging that is case- or genre-specific, such as a name of a person or company, fig. 5 with 0038… the model is generated/updated based on inputs using document tagging and review of documents thereof, 0074-0077)
tagging the determined subset of documents based on the labels indicated in the user inputs. (labeling, e.g., 0029… modifying parameters as in fig. 5 with 0085 and 0074; as the user changes tags, new sets or subsets are presented, e.g., a narrower set, and, e.g., fig. 14, new batches come based on tag changes such as subsets of documents, e.g., 0009 and 0030… and the user provides feedback for tagging and review, 0078… training using tags, where user interaction queries model rules and outputs tags and settings thereof, 0058 with figs. 4, 6a, and 6b, utilizing parsing of text to identify a concept/term for tagging that is case- or genre-specific, such as a name of a person or company, fig. 5 with 0038… the model is generated/updated based on inputs using document tagging and review of documents thereof, 0074-0077)
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 8560378 B1 to Kibbe, Laura M., directed to document review.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C COLUCCI, whose telephone number is (571) 270-1847. The examiner can normally be reached M-F, 9 AM - 5 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Flanders, can be reached at (571) 272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL COLUCCI/Primary Examiner, Art Unit 2655 (571)-270-1847
Examiner FAX: (571)-270-2847
Michael.Colucci@uspto.gov