Prosecution Insights
Last updated: April 19, 2026
Application No. 17/566,898

ENHANCED LEXICON-BASED CLASSIFIER MODELS WITH TUNABLE ERROR-RATE TRADEOFFS

Non-Final OA (§101, §102, §103)

Filed: Dec 31, 2021
Examiner: STORK, KYLE R
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
OA Rounds: 1-2
To Grant: 4y 0m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 64% (554 granted / 865 resolved; +9.0% vs TC avg)
Interview Lift: +28.3% on resolved cases with an interview
Typical Timeline: 4y 0m avg prosecution; 51 currently pending
Career History: 916 total applications across all art units

Statute-Specific Performance

§101: 14.9% (-25.1% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 865 resolved cases

Office Action

§101 §102 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This non-final office action is in response to the election filed 13 October 2025. Claims 8-17 and 19-27 are pending. Claims 8, 17, and 21 are independent claims. Claims 1-7 are cancelled. Claims 21-27 are newly added.

Election/Restrictions

Applicant's election with traverse of invention II in the reply filed on 13 October 2025 is acknowledged. The traversal is on the ground(s) that no serious burden would be imposed on the examiner if the restriction were not required (page 1). This is not found persuasive because, as noted by the examiner in the restriction requirement mailed 12 August 2025, the two groups have attained recognition in the art as separate fields of search based upon separate classification. Further, a different field of search would be required, including searching different classes/subclasses and electronic resources, employing different search queries, and covering different fields of search. This results in a serious burden to the examiner if the restriction were not required. The requirement is still deemed proper and is therefore made FINAL.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 28 April 2022, 9 June 2023 (2), 30 October 2023 (2), and 30 October 2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

The examiner accepts the drawings filed 31 December 2021.

Claim Objections

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed.
A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The numbering of claims is not in accordance with 37 CFR 1.126, which requires the original numbering of the claims to be preserved throughout the prosecution. When claims are canceled, the remaining claims must not be renumbered. When new claims are presented, they must be numbered consecutively beginning with the number next following the highest numbered claim previously presented (whether entered or not). The claim sets filed 31 December 2021 and 13 October 2025 fail to include claim 18. Misnumbered claims 19-27 have been renumbered as claims 18-26. For the purpose of examination, the examiner treats claim 18 as though it is cancelled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 8-17 and 19-27 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. When considering subject matter eligibility under 35 USC 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1; MPEP 2106.03).
If the claim falls within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed toward a judicial exception (Step 2A; MPEP 2106.04). This step is broken into two prongs. The first prong (Step 2A, Prong 1) determines whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined at Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2; MPEP 2106.04). The second prong (Step 2A, Prong 2) determines whether the claims integrate the judicial exception into a practical application. If the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determine whether the claim amounts to significantly more than the judicial exception (Step 2B; MPEP 2106.05). If an abstract idea is present in the claim, in order to recite statutory subject matter, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application or amounts to significantly more than the abstract idea itself (see: 2019 PEG).

Step 1:

According to Step 1 of the two-step analysis, claims 8-16 are directed toward a system (machine). Claims 17 and 19-20 are directed toward computer storage media (manufacture). Claims 21-27 are directed toward a method (process). Therefore, each of these claims falls within one of the four statutory categories.

Claim 8:

Step 2A, Prong 1:

Following the determination that the claims fall within one of the statutory categories (Step 1), it must be determined if the claims recite a judicial exception (Step 2A, Prong 1). In this instance, the claims are determined to recite a judicial exception (abstract idea; mental process).
Claim 8 recites the elements:

generating a first sub-model of the classifier model based on a first lexicon that includes a first plurality of strings that are included in a first plurality of training records that are labeled as belonging to the positive class (mental process; As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses performing a judgment or evaluation to sort/filter strings that belong to the positive class)

generating a second sub-model of the classifier model based on a second lexicon that includes a second plurality of strings that are included in a second plurality of training records that are labeled as belonging to the negative class (mental process; As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses performing a judgment or evaluation to sort/filter strings that belong to the negative class)

generating a third sub-model of the classifier model based on a third lexicon that includes a third plurality of strings that are included in both the first plurality of training records and the second plurality of training records (mental process; As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components.
For example, this limitation encompasses performing a judgment or evaluation to sort/filter strings that belong to both the positive class and the negative class)

integrating the first sub-model, the second sub-model, and the third sub-model to generate the classifier model (mental process; As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses performing a judgment, based upon the previous evaluations of the different models, to sort/filter items into positive/negative/both positive and negative classes)

Step 2A, Prong 2:

Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)).

The claims disclose the following additional elements:

one or more hardware processors

one or more computer-readable media having executable instructions embodied thereon, which, when executed by the one or more processors, cause the one or more hardware processors to execute actions

These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (See MPEP 2106.05(f)).
Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B:

Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined if any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B). The claims disclose the following additional elements:

one or more hardware processors

one or more computer-readable media having executable instructions embodied thereon, which, when executed by the one or more processors, cause the one or more hardware processors to execute actions

In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 9:

With respect to dependent claim 9, the claim depends upon independent claim 8. The analysis of claim 8 is incorporated herein by reference.

Step 2A, Prong 2:

Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)).
The claims disclose the following additional elements:

generating a fourth lexicon based on the first plurality of training records

generating a fifth lexicon based on the second plurality of training records

generating the first, second, and third lexicons based on the fourth lexicon and the fifth lexicon

These elements are recited at a high level of generality, with no detail of the training process, and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)).

Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B:

Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined if any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B). The claims disclose the following additional elements:

generating a fourth lexicon based on the first plurality of training records

generating a fifth lexicon based on the second plurality of training records

generating the first, second, and third lexicons based on the fourth lexicon and the fifth lexicon

These elements are recited at a high level of generality, with no detail of the training process, and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)). In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 10:

With respect to dependent claim 10, the claim depends upon dependent claim 9. The analysis of claim 9 is incorporated herein by reference.

Step 2A, Prong 1:

The claim recites the elements:

determining an intersection of the fourth and fifth lexicons (mental process; As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses performing an evaluation to determine the intersection between the fourth and fifth lexicons)

determining a set difference of the fourth and fifth lexicons (mental process; As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses performing an evaluation to determine the set difference between the fourth and fifth lexicons)

determining a set difference of the fifth and fourth lexicons (mental process; As drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, opinion) or with the aid of pencil and paper but for the recitation of generic computer components.
For example, this limitation encompasses performing an evaluation to determine the set difference between the fifth and fourth lexicons)

Step 2A, Prong 2:

Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)).

The claims further disclose the additional elements:

generating the first lexicon to include the determined set difference of the fourth and fifth lexicons

generating the second lexicon to include the determined set difference of the fifth and fourth lexicons

generating the third lexicon to include the determined intersection of the fourth and fifth lexicons

These elements are recited at a high level of generality, with no detail of the training process, and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)).

Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
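The set operations the examiner maps to mental processes in claims 9-10 (an intersection and two set differences over the fourth and fifth lexicons) can be sketched in a few lines. This is a hypothetical illustration only; the sample records, whitespace tokenization, and function names are assumptions, not taken from the application.

```python
# Hypothetical sketch of the lexicon derivation recited in claims 9-10.
# Tokenization and names are illustrative assumptions, not from the record.

def build_lexicon(records):
    """Collect the set of strings (here, whitespace tokens) appearing in records."""
    return {token for record in records for token in record.split()}

def derive_lexicons(positive_records, negative_records):
    fourth = build_lexicon(positive_records)  # from positive-class training records
    fifth = build_lexicon(negative_records)   # from negative-class training records
    first = fourth - fifth   # strings seen only in the positive class
    second = fifth - fourth  # strings seen only in the negative class
    third = fourth & fifth   # strings seen in both classes
    return first, second, third

pos = ["urgent wire transfer now", "wire funds urgent"]
neg = ["meeting notes attached", "transfer the meeting"]
first, second, third = derive_lexicons(pos, neg)
# "transfer" appears in both classes, so it lands in the third lexicon
```

Each operation is a single pass over finite string sets, which is why the examiner characterizes the limitations as performable mentally or with pencil and paper.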
Step 2B:

Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined if any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B). The claims further disclose the additional elements:

generating the first lexicon to include the determined set difference of the fourth and fifth lexicons

generating the second lexicon to include the determined set difference of the fifth and fourth lexicons

generating the third lexicon to include the determined intersection of the fourth and fifth lexicons

These elements are recited at a high level of generality, with no detail of the training process, and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)). In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 11:

With respect to dependent claim 11, the claim depends upon independent claim 8. The analysis of claim 8 is incorporated herein by reference.

Step 2A, Prong 2:

Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two.
A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)).

The claims further disclose the additional element:

accessing labeled archive data

The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), "receiving or transmitting data over a network," "electronic record keeping," and "storing and retrieving information in memory").

The claims further disclose the additional elements:

segmenting the labeled archived data into a set of testing data and a set of training data

segmenting the training data, based on the labels included in the set of training data, into the first plurality of training records and the second plurality of training records

employing the labeled testing data to validate the classifier model

These elements are recited at a high level of generality, with no detail of the training process, and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)).

Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.
Step 2B:

Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined if any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B).

The claims further disclose the additional element:

accessing labeled archive data

The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), "receiving or transmitting data over a network," "electronic record keeping," and "storing and retrieving information in memory").

The claims further disclose the additional elements:

segmenting the labeled archived data into a set of testing data and a set of training data

segmenting the training data, based on the labels included in the set of training data, into the first plurality of training records and the second plurality of training records

employing the labeled testing data to validate the classifier model

These elements are recited at a high level of generality, with no detail of the training process, and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)). In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 12:

With respect to dependent claim 12, the claim depends upon dependent claim 11.
The analysis of claim 11 is incorporated herein by reference.

Step 2A, Prong 2:

Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)).

The claims further disclose the additional element:

employing the set of training data to train the classifier model

This element is recited at a high level of generality, with no detail of the training process, and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)).

Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B:

Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined if any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B).
The claims further disclose the additional element:

employing the set of training data to train the classifier model

This element is recited at a high level of generality, with no detail of the training process, and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)). In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 13:

With respect to dependent claim 13, the claim depends upon dependent claim 11. The analysis of claim 11 is incorporated herein by reference.

Step 2A, Prong 2:

Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)).
The claims further disclose the additional elements:

segmenting the set of training data into a set of lexicon training data and a set of scoring threshold data

segmenting the set of lexicon training data into the first plurality of training records and the second plurality of training records

updating the classifier model such that the updated classifier model, when benchmarked against the set of scoring threshold data, exhibits a predetermined tradeoff between a false positive error rate (FPR) and a false negative error rate (FNR) of the classifier model

These elements are recited at a high level of generality, with no detail of the training process, and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)).

Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B:

Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined if any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B).
The claims further disclose the additional elements:

segmenting the set of training data into a set of lexicon training data and a set of scoring threshold data

segmenting the set of lexicon training data into the first plurality of training records and the second plurality of training records

updating the classifier model such that the updated classifier model, when benchmarked against the set of scoring threshold data, exhibits a predetermined tradeoff between a false positive error rate (FPR) and a false negative error rate (FNR) of the classifier model

These elements are recited at a high level of generality, with no detail of the training process, and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)). In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 14:

With respect to dependent claim 14, the claim depends upon independent claim 8. The analysis of claim 8 is incorporated herein by reference.

Step 2A, Prong 2:

Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two.
A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)).

The claims further disclose the additional elements:

receiving a balance parameter that indicates a target tradeoff between a false positive error rate (FPR) of the classifier model and a false negative error rate (FNR) of the classifier model

employing the balance parameter to update the classifier model such that the updated classifier model, when benchmarked against a third plurality of training records, exhibits the target tradeoff between the FPR of the classifier model and the FNR of the classifier

employing the tuned classifier model to classify the text-based content as belonging to a positive class of the classifier model

These elements are recited at a high level of generality, with no detail of the training process, and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (See MPEP 2106.05(f)).

Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application.

Step 2B:

Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined if any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B).
The claims further disclose the additional elements: receiving a balance parameter that indicates a target tradeoff between a false positive error rate (FPR) of the classifier model and a false negative error rate (FNR) of the classifier model; employing the balance parameter to update the classifier model such that the updated classifier model, when benchmarked against a third plurality of training records, exhibits the target tradeoff between the FPR of the classifier model and the FNR of the classifier model; and employing the tuned classifier model to classify the text-based content as belonging to a positive class of the classifier model. These elements are recited at a high level of generality with no detail of the training process and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. Claim 15: With respect to dependent claim 15, the claim depends upon dependent claim 14. The analysis of claim 14 is incorporated herein by reference. Step 2A, Prong 2: After determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two.
A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)). The claims further disclose the additional elements: iteratively employing one or more threshold parameters of the classifier model to determine a classification of each record of the third plurality of training records; iteratively employing the label and the classification of each record of the third plurality of records to determine each of the FPR and the FNR of the classifier model; and iteratively adjusting the one or more threshold parameters of the classifier model such that the classifier model, when benchmarked against the third plurality of training records, exhibits the indicative tradeoff between the FPR of the classifier model and the FNR of the classifier model. These elements are recited at a high level of generality with no detail of the training process and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application. Step 2B: Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined whether any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B).
The claims further disclose the additional elements: iteratively employing one or more threshold parameters of the classifier model to determine a classification of each record of the third plurality of training records; iteratively employing the label and the classification of each record of the third plurality of records to determine each of the FPR and the FNR of the classifier model; and iteratively adjusting the one or more threshold parameters of the classifier model such that the classifier model, when benchmarked against the third plurality of training records, exhibits the indicative tradeoff between the FPR of the classifier model and the FNR of the classifier model. These elements are recited at a high level of generality with no detail of the training process and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. Claim 16: With respect to dependent claim 16, the claim depends upon independent claim 8. The analysis of claim 8 is incorporated herein by reference. Step 2A, Prong 2: After determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two.
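The iterative threshold-adjustment loop recited in claim 15 can be illustrated with a short sketch. Everything here is a hypothetical assumption (a bisection search over a single score threshold against a target FPR:FNR ratio); the claim does not specify this particular algorithm.

```python
def tune_threshold(model, records, labels, target_ratio, steps=50):
    """Bisect a score threshold until the classifier's FPR approximates
    target_ratio times its FNR on the benchmark records (hypothetical sketch).
    `model` maps a record to a score; higher scores mean more positive."""
    lo, hi = 0.0, 1.0
    threshold = 0.5
    for _ in range(steps):
        preds = [model(r) >= threshold for r in records]
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum((not p) and y for p, y in zip(preds, labels))
        neg = sum(1 for y in labels if not y) or 1
        pos = sum(1 for y in labels if y) or 1
        fpr, fnr = fp / neg, fn / pos
        if fpr > target_ratio * fnr:
            lo = threshold  # too many false positives: raise the threshold
        else:
            hi = threshold  # too many false negatives (or balanced): lower it
        threshold = (lo + hi) / 2
    return threshold
```

Raising the threshold trades false positives for false negatives, which is why a single tunable parameter suffices in this toy setting.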
A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)). The claims further disclose the additional element: deploying the classifier model in a compliance enforcement pipeline. This element is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application. Step 2B: Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined whether any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B). The claims further disclose the additional element: deploying the classifier model in a compliance enforcement pipeline. This element is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea.
Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. Claim 17: With respect to independent claim 17, the claim recites elements substantially similar to those in claim 8. The analysis of claim 8 is incorporated herein by reference. Claim 19: With respect to dependent claim 19, the claim depends upon independent claim 17. The analysis of claim 17 is incorporated herein by reference. Step 2A, Prong 2: After determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)). The claims further disclose the additional element: receiving text-based content. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), "receiving or transmitting data over a network", "electronic record keeping", and "storing and retrieving information in memory").
The claims further recite the elements: employing the classifier model to classify the text-based content as belonging to the positive class; and, in response to classifying the text-based content as belonging to the positive class of the classifier model, performing one or more mitigation actions that alter subsequent transmissions of the text-based content. These elements are recited at a high level of generality with no detail of the training process and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application. Step 2B: Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined whether any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B). The claims further disclose the additional element: receiving text-based content. The courts have found limitations directed to obtaining information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), "receiving or transmitting data over a network", "electronic record keeping", and "storing and retrieving information in memory").
The claims further recite the elements: employing the classifier model to classify the text-based content as belonging to the positive class; and, in response to classifying the text-based content as belonging to the positive class of the classifier model, performing one or more mitigation actions that alter subsequent transmissions of the text-based content. These elements are recited at a high level of generality with no detail of the training process and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. Claim 20: With respect to dependent claim 20, the claim recites limitations substantially similar to those in claim 14. The analysis of claim 14 is incorporated herein by reference. Claim 21: With respect to independent claim 21, the claim recites elements substantially similar to those in claim 8. The analysis of claim 8 is incorporated herein by reference. Claim 22: With respect to dependent claim 22, the claim recites limitations substantially similar to those in claim 19. The analysis of claim 19 is incorporated herein by reference. Claim 23: With respect to dependent claim 23, the claim recites limitations substantially similar to those in claim 19. The analysis of claim 19 is incorporated herein by reference. Claim 24: With respect to dependent claim 24, the claim depends upon dependent claim 23.
The analysis of claim 23 is incorporated herein by reference. Step 2A, Prong 2: After determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)). The claims further disclose the additional element: wherein the mitigation action includes at least one of providing an alert indicating the text-based content, deleting the text-based content, replacing the text-based content, quarantining the text-based content, or any combination thereof. This element is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application. Step 2B: Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined whether any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B).
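For illustration only, the mitigation actions recited in claim 24 (alert, delete, replace, quarantine) could be dispatched as in the following sketch. The function name, return convention, and behavior are hypothetical assumptions; the claims do not tie these actions to any particular implementation.

```python
def mitigate(content, action="quarantine", quarantine=None, replacement="[REDACTED]"):
    """Hypothetical dispatch over the claimed mitigation actions.
    Returns (possibly altered content, status message)."""
    if action == "alert":
        # Leave content intact but raise an alert indicating it.
        return content, f"ALERT: flagged content: {content[:40]}"
    if action == "delete":
        return None, "deleted"
    if action == "replace":
        return replacement, "replaced"
    if action == "quarantine":
        # Move the content into a quarantine store, blocking transmission.
        (quarantine if quarantine is not None else []).append(content)
        return None, "quarantined"
    raise ValueError(f"unknown mitigation action: {action}")
```

Each branch alters (or blocks) subsequent transmission of the content, which is the effect the claim language requires.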
The claims further disclose the additional element: wherein the mitigation action includes at least one of providing an alert indicating the text-based content, deleting the text-based content, replacing the text-based content, quarantining the text-based content, or any combination thereof. This element is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Claim 25: With respect to dependent claim 25, the claim recites limitations substantially similar to those in claim 14. The analysis of claim 14 is incorporated herein by reference. Claim 26: With respect to dependent claim 26, the claim depends upon dependent claim 25. The analysis of claim 25 is incorporated herein by reference. Step 2A, Prong 1: The claim recites the elements: determine a first score for the second text-based content, wherein the first score indicates a likelihood that the second text-based content is associated with the positive class (mental process; as drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components.
For example, this limitation encompasses performing an evaluation to determine a first score indicating a likelihood that the text is associated with the positive class); determine a second score for the second text-based content, wherein the second score indicates a likelihood that the second text-based content is associated with the negative class (mental process; as drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses performing an evaluation to determine a second score indicating a likelihood that the text is associated with the negative class); determine a third score for the second text-based content, wherein the third score indicates a likelihood that the text-based content is associated with both the positive class of the classifier model and the negative class (mental process; as drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components. For example, this limitation encompasses performing an evaluation to determine a third score indicating a likelihood that the text is associated with both the positive class and the negative class); and generate an overall score for the second text-based content that is based on a combination of the first score, the second score, and the third score (mental process; as drafted and under its broadest reasonable interpretation, this limitation covers performance of the limitation in the mind (including an observation, evaluation, judgment, or opinion) or with the aid of pencil and paper but for the recitation of generic computer components.
For example, this limitation encompasses performing an evaluation to determine an overall score based on the first, second, and third scores). Step 2A, Prong 2: After determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)). The claims further disclose the additional elements: causing the first sub-model to determine a likelihood that the second text-based content is associated with the positive class of the classifier model; causing the second sub-model to determine a likelihood that the second text-based content is associated with the negative class of the classifier model; causing the third sub-model to determine a likelihood that the text-based content is associated with both the positive class of the classifier model and the negative class of the classifier model; and updating the classifier model based on a combination of the first score, the second score, and the third score. These elements are recited at a high level of generality with no detail of the training process and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea.
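The three-score combination recited in claim 26 can be sketched as follows. The weights, function names, and linear combination are hypothetical assumptions chosen only to show how positive, negative, and ambiguous sub-model likelihoods might be merged into one overall score.

```python
def overall_score(text, pos_model, neg_model, both_model, w=(1.0, -1.0, -0.5)):
    """Hypothetical sketch: combine three sub-model likelihoods into one
    overall score. Positive evidence raises the score; negative and
    ambiguous (both-class) evidence lower it."""
    s1 = pos_model(text)   # likelihood text belongs to the positive class
    s2 = neg_model(text)   # likelihood text belongs to the negative class
    s3 = both_model(text)  # likelihood text matches both lexicons (ambiguous)
    return w[0] * s1 + w[1] * s2 + w[2] * s3
```

The resulting scalar could then be compared against the tunable threshold discussed for claims 15 and 25.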
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application. Step 2B: Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined whether any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B). The claims further disclose the additional elements: causing the first sub-model to determine a likelihood that the second text-based content is associated with the positive class of the classifier model; causing the second sub-model to determine a likelihood that the second text-based content is associated with the negative class of the classifier model; causing the third sub-model to determine a likelihood that the text-based content is associated with both the positive class of the classifier model and the negative class of the classifier model; and updating the classifier model based on a combination of the first score, the second score, and the third score. These elements are recited at a high level of generality with no detail of the training process and amount to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Claim 27: With respect to dependent claim 27, the claim depends upon dependent claim 25.
The analysis of claim 25 is incorporated herein by reference. Step 2A, Prong 2: After determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two. A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)). The claims further disclose the additional element: causing the tuned classifier model to be deployed in a compliance enforcement pipeline. This element is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Accordingly, at Step 2A, Prong Two, the additional elements individually or in combination do not integrate the judicial exception into a practical application. Step 2B: Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined whether any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B).
The claims further disclose the additional element: causing the tuned classifier model to be deployed in a compliance enforcement pipeline. This element is recited at a high level of generality with no detail of the training process and amounts to no more than adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea (see MPEP 2106.05(f)). Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 21-24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhou et al. (Sentiment Analysis with Automatically Constructed Lexicon and Three-Way Decision, 2014, hereafter Zhou).
As per independent claim 21, Zhou discloses a method comprising: generating training records that include a first lexicon, a second lexicon, and a third lexicon, the first lexicon including a first plurality of strings that are included in the training records that are labeled as belonging to a positive class, the second lexicon including a second plurality of strings that are included in the training records that are labeled as belonging to the negative class, and the third lexicon including a plurality of strings that are included in both the first plurality of strings and the second plurality of strings (Figure 2; Section 3.3: Here, a feature lexicon is used to analyze and categorize comments (strings) based on sentiment analysis. This is performed via pattern extraction and sentiment polarity assignment. In this instance, each noun-adjective pattern is identified and a sentiment is applied to the extracted pattern. Based upon the sentiments, the noun-adjective pairs are stored in a data structure. Finally, each sentiment score is transformed to either "+1" (positive sentiment) or "-1" (negative sentiment). Further, there exists a set of nouns such that the nouns are the same but the adjective portions are antonyms (classified as both the first plurality of training records (positive) and the second plurality of training records (negative)). In this instance, the pattern with the larger sentiment score is given the sentiment polarity of "+1" (positive) and the other the sentiment polarity of "-1" (negative)); generating a first sub-model based on a first lexicon, a second sub-model based on a second lexicon, and a third sub-model based on a third lexicon (Figure 3; Section 3.4: Here, a plurality of sub-models ("labeled data" sets) are generated from application of the general lexicon and the feature lexicon. These sub-models are labeled data sets that are used to train the supervised learning model.
The first sub-model comprises the set of items that are identified by the general lexicon and the feature lexicon as having a positive polarity; the second sub-model comprises the set of items that are identified by the general lexicon and the feature lexicon as having a negative polarity; the third sub-model comprises the set of items that are identified as having different polarities as labeled by the general lexicon and the feature lexicon); generating a classifier model by at least integrating the first sub-model, the second sub-model, and the third sub-model (Figure 3; Section 3.4: Here, the classified data define lexicons. These lexicons are used to generate a plurality of "labeled data" sets. These labeled data sets are combined to train a supervised learning model that is used to classify the dataset of labeled data that has been labeled with different polarities by the general lexicon and the feature lexicon). As per dependent claim 22, Zhou discloses wherein the method further comprises causing the classifier model to classify a text-based content as belonging to a positive class (Figure 2; Section 3.3: Here, there exists a set of nouns such that the nouns are the same but the adjective portions are antonyms (classified as both the first plurality of training records (positive) and the second plurality of training records (negative)). In this instance, the pattern with the larger sentiment score is given the sentiment polarity of "+1" (positive) and the other the sentiment polarity of "-1" (negative)). As per dependent claim 23, Zhou discloses wherein the method further comprises, in response to classifying the text-based content as belonging to the positive class, performing a mitigation action that alters subsequent transmission of the text-based content (Figure 3; Section 3.4: Here, a three-way decision is performed. Specifically, the general lexicon and the feature lexicon are compared.
If both lexicons agree on the sentiment polarity label, the sentiment polarity label is treated as correct and it is added to the labeled data set. If the polarity is different, the result is placed into a rejection set for additional processing. The relocation of the text-based content to the rejection set is a mitigation action that alters the subsequent transmission and processing of the text-based content). As per dependent claim 24, Zhou discloses wherein the mitigation action includes at least one of: providing an alert indicating the text-based content, deleting the text-based content, replacing the text-based content, quarantining the text-based content, or any combination thereof (Figure 3; Section 3.4: Here, a three-way decision is performed. Specifically, the general lexicon and the feature lexicon are compared. If both lexicons agree on the sentiment polarity label, the sentiment polarity label is treated as correct and it is added to the labeled data set. If the polarity is different, the result is placed into a rejection set for additional processing. The relocation of the text-based content to the rejection set is a mitigation action that alters the subsequent transmission and processing of the text-based content). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 8-10, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou in view of Alspector et al. (US 8,713,014, patented 29 April 2014, hereafter Alspector). As per independent claim 8, Zhou discloses a system for generating an integrated classifier model that has a positive class and a negative class, the system comprising: generating a first sub-model of the classifier model based on a first lexicon that includes a first plurality of strings that are included in a first plurality of training records that are labeled as belonging to the positive class (Figure 2; Section 3.3: Here, a feature lexicon is used to analyze and categorize comments (strings) based on sentiment analysis.
This is performed via pattern extraction and sentiment polarity assignment. In this instance, each noun-adjective pattern is identified and a sentiment is applied to the extracted pattern. Based upon the sentiments, the noun-adjective pairs are stored in a data structure. Finally, each sentiment score is transformed to either “+1” (positive sentiment) or “-1” (negative sentiment). In this instance the first sub-model is the “labeled data” set having strings labeled with a positive sentiment by both the general lexicon and the feature lexicon)

generating a second sub-model of the classifier model based on a second lexicon that includes a second plurality of strings that are included in a second plurality of training records that are labeled as belonging to the negative class (Figure 2; Section 3.3: Here, a feature lexicon is used to analyze and categorize comments (strings) based on sentiment analysis. This is performed via pattern extraction and sentiment polarity assignment. In this instance, each noun-adjective pattern is identified and a sentiment is applied to the extracted pattern. Based upon the sentiments, the noun-adjective pairs are stored in a data structure. Finally, each sentiment score is transformed to either “+1” (positive sentiment) or “-1” (negative sentiment). In this instance the second sub-model is the “labeled data” set having strings labeled with a negative sentiment by both the general lexicon and the feature lexicon)

generating a third sub-model of the classifier model based on a third lexicon that includes a third plurality of strings that are included in both the first plurality of training records and the second plurality of training records (Figure 2; Section 3.3: Here, there exists a set of nouns such that the nouns are the same but the adjective portions are antonyms (classified as both the first plurality of training records (positive) and second plurality of training records (negative)). In this instance, the pattern with the larger sentiment score is given the sentiment polarity of “+1” (positive) and the other the sentiment polarity of “-1” (negative). In this instance the third sub-model is the “labeled data” set having strings labeled with a positive sentiment and a negative sentiment by the general lexicon and the feature lexicon)

integrating the first sub-model, the second sub-model, and the third sub-model to generate the classifier model (Figure 3; Section 3.4: Here, the classified data define lexicons. These lexicons are used to generate a plurality of “labeled data” sets. These labeled data sets are combined to train a supervised learning model that is used to classify the dataset of labeled data that has been labeled with different polarities by the general lexicon and the feature lexicon)

Zhou fails to specifically disclose:

one or more hardware processors

one or more computer-readable media having executable instructions embodied thereon, which, when executed by the one or more processors, cause the one or more hardware processors to execute actions

However, Alspector, which is analogous to the claimed invention because it is directed toward use of lexicons in a classifier system, discloses one or more hardware processors and one or more computer readable media having executable instructions embodied thereon, which, when executed by the one or more processors, cause the one or more hardware processors to execute actions (column 17, lines 27-32: Here, a computer-usable storage medium or device stores programmatic instructions which are executed by a processor to perform the operations).
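The claim 8 architecture that the examiner maps onto Zhou (three lexicon-driven sub-models merged into one classifier) can be pictured with a short sketch. This is a minimal illustration only: the function names, the token-overlap scoring, and the neutral treatment of shared-lexicon hits are assumptions, not taken from Zhou, Alspector, or the claims.

```python
# Minimal sketch of claim 8's structure as the examiner characterizes it:
# three lexicon-based sub-models integrated into one classifier.
# All names and the token-overlap scoring are illustrative assumptions.

def make_submodel(lexicon):
    # A sub-model scores text by counting tokens found in its lexicon.
    return lambda text: sum(1 for tok in text.split() if tok in lexicon)

def integrate(pos_model, neg_model, both_model):
    # Integrated classifier: shared-lexicon hits are treated as neutral
    # in this sketch, so only the positive/negative margin decides the class.
    def classify(text):
        margin = pos_model(text) - neg_model(text)
        return "positive" if margin > 0 else "negative"
    return classify

classifier = integrate(make_submodel({"great", "excellent"}),
                       make_submodel({"bad", "awful"}),
                       make_submodel({"battery"}))
```

A real integration would also weight the shared-lexicon score rather than ignore it; this sketch keeps only the structural point that three sub-models feed one decision.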
It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Alspector with Zhou, with a reasonable expectation of success, as it would have allowed for storing programmatic instructions executable by a processor to perform classification using lexicons (Alspector: column 17, lines 27-32).

As per dependent claim 9, Zhou and Alspector disclose the limitations similar to those in claim 8, and the same rejection is incorporated herein.

Zhou fails to specifically disclose:

generating a fourth lexicon based on the first plurality of training records

generating a fifth lexicon based on the second plurality of training records

generating the first, second, and third lexicons based on the fourth and fifth lexicons

However, Alspector discloses:

generating a fourth lexicon based on the first plurality of training records (column 2, lines 34-43: Here, a lexicon of attributes may include a primary lexicon and a secondary lexicon. In this instance, both the primary and secondary lexicons are generated from the same set of training records)

generating a fifth lexicon based on the second plurality of training records (column 2, lines 34-43: Here, a lexicon of attributes may include a primary lexicon and a secondary lexicon. In this instance, both the primary and secondary lexicons are generated from the same set of training records. Further, it is noted that Zhou discloses a first and second lexicons. Applying Alspector’s teaching of generating primary and secondary lexicons for each of the first and second lexicons would result in the generation of corresponding fourth and fifth lexicons)

generating the first, second, and third lexicons based on the fourth and fifth lexicons (column 2, lines 34-43: Here, a lexicon of attributes may include a primary lexicon and a secondary lexicon. In this instance, both the primary and secondary lexicons are generated from the same set of training records)

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Alspector with Zhou, with a reasonable expectation of success, as it would have allowed for generation of primary and secondary lexicons for each generated lexicon (Alspector: column 2, lines 34-43). This would have provided the advantage of modifying the secondary lexicon to create an augmented intersection in order to use for calculating a similarity in addition to calculating the similarity based upon the original primary lexicon (Alspector: column 2, lines 34-43).

As per dependent claim 10, Zhou and Alspector disclose the limitations similar to those in claim 9, and the same rejection is incorporated herein.

Zhou discloses:

determining an intersection of two lexicons (Section 3.4: Here, the results of applying the general lexicon and the feature lexicon are compared. If both results have the same polarity, the lexicons intersect and the label is applied)

determining a set of differences between the lexicons (Section 3.4: Here, the results of applying the general lexicon and the feature lexicon are compared. If both results have different polarities, the difference is noted and the associated data is placed in the rejection set)

generating the third lexicon to include the determined intersection of the two lexicons (Section 3.4: Here, the results of applying the general lexicon and the feature lexicon are compared. If both results have the same polarity, the lexicons intersect and the label is applied)

Zhou fails to specifically disclose:

generating the first lexicon to include the determined set of differences of the fourth and fifth lexicons

generating the second lexicon to include the determined set of differences of the fifth and fourth lexicons

However, Alspector, which is analogous to the claimed invention because it is directed toward classifying data based upon lexical analysis, discloses:

generating the first lexicon to include the determined set of differences of the fourth and fifth lexicons (column 2, lines 34-43: Here, a lexicon of attributes may include a primary lexicon and a secondary lexicon. In this instance, both the primary and secondary lexicons are generated from the same set of training records)

generating the second lexicon to include the determined set of differences of the fifth and fourth lexicons (column 2, lines 34-43: Here, a lexicon of attributes may include a primary lexicon and a secondary lexicon. In this instance, both the primary and secondary lexicons are generated from the same set of training records)

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Alspector with Zhou, with a reasonable expectation of success, as it would have allowed for generation of primary and secondary lexicons for each generated lexicon (Alspector: column 2, lines 34-43). This would have provided the advantage of modifying the secondary lexicon to create an augmented intersection in order to use for calculating a similarity in addition to calculating the similarity based upon the original primary lexicon.

With respect to independent claim 17, the claim recites the limitations substantially similar to those in claim 8. Claim 17 is rejected under similar rationale.
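The lexicon derivation in claim 10 reduces to set algebra over the fourth and fifth lexicons: the third lexicon is their intersection, and the first and second lexicons are the two directed set differences. A minimal sketch, assuming the lexicons are plain sets of strings (contents invented for illustration):

```python
# Claim 10's set operations, assuming plain string sets (contents invented).
fourth = {"great", "fast", "battery"}  # strings from positive training records
fifth = {"bad", "slow", "battery"}     # strings from negative training records

third = fourth & fifth    # intersection: strings appearing under both classes
first = fourth - fifth    # differences of fourth and fifth: positive-only strings
second = fifth - fourth   # differences of fifth and fourth: negative-only strings
```

Note that the order of the operands matters for the differences, which is why the claim recites both "fourth and fifth" and "fifth and fourth".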
With respect to dependent claim 19, Zhou discloses wherein the actions further comprise:

receiving text-based content (Figure 2; Section 3.3: Here, a feature lexicon is used to analyze and categorize comments (text) based on sentiment analysis)

employing the classifier model to classify the text-based content as belonging to a positive class (Figure 2; Section 3.3: Here, there exists a set of nouns such that the nouns are the same but the adjective portions are antonyms (classified as both the first plurality of training records (positive) and second plurality of training records (negative)). In this instance, the pattern with the larger sentiment score is given the sentiment polarity of “+1” (positive) and the other the sentiment polarity of “-1” (negative). Based upon the determination that the general lexicon and feature lexicon both classify the content as having a positive polarity, the content is put into the “labeled data” having a positive polarity for use in training the supervised learning model)

in response to classifying the text-based content as belonging to the positive class of the classifier model, performing one or more mitigation actions that alters subsequent transmissions of the text-based content (Figure 3; Section 3.4: Here, a three-way decision is performed. Specifically, a general lexicon and the feature lexicon are compared. If both lexicons agree on the sentiment polarity label, the sentiment polarity label is treated as correct and it is added to the labeled data set. If the polarity is different, the result is placed into a rejection set for additional processing. The removal of text-based content to the rejection set is a mitigation action that alters the subsequent transmission and processing of the text-based content)

Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou and Alspector and further in view of Pushkin et al. (US 11734937, filed 2 January 2020, hereafter Pushkin).
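Zhou's three-way decision, as the examiner reads it for claim 19, routes agreements between the two lexicons to the labeled set and disagreements to a rejection set, with the rejection-set routing standing in for the claimed mitigation action. A hedged sketch, with all names and label values invented:

```python
# Sketch of the three-way decision attributed to Zhou: two lexicons vote;
# agreement yields a trusted label, disagreement triggers the mitigation
# step of diverting the text to a rejection set. Names are illustrative.

def three_way_decision(text, general_label, feature_label,
                       labeled_data, rejection_set):
    if general_label == feature_label:
        labeled_data.append((text, general_label))  # label treated as correct
    else:
        rejection_set.append(text)  # withheld for additional processing

labeled, rejected = [], []
three_way_decision("love it", "+1", "+1", labeled, rejected)
three_way_decision("it is fine", "+1", "-1", labeled, rejected)
```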
As per dependent claim 11, Zhou and Alspector disclose the limitations similar to those in claim 8, and the same rejection is incorporated herein.

Zhou further discloses wherein the actions further comprise:

accessing labeled archive data (Figure 3; Section 3.4: Here, a set of data is labeled based upon the general lexicon and the feature lexicon)

segmenting the labeled archive data into a training set and a rejection set (Section 3.4: Here, the results of applying the general lexicon and the feature lexicon are compared. If both results have the same polarity, the lexicons intersect and the label is applied. Otherwise the data is added to a rejection set)

segmenting the training data, based on the labels included in the set of training data, into the first plurality of training records and the second plurality of training records (Figure 3; Section 3.4: Here, the labeled data includes two different result sets based upon the polarity of the sentiment analysis)

Zhou fails to specifically disclose: a set of testing data and employing the labeled testing data to validate the classifier model

However, Pushkin, which is analogous to the claimed invention because it is directed toward testing text classification, discloses a set of testing data and employing the labeled testing data to validate the classifier model (Figure 2, item 208; column 1, line 55 - column 2, line 22; column 10, lines 1-44: Here, a set of training data and testing data is obtained from the dataset. The testing dataset is used to test the classifier model and fine tune hyperparameters to improve classification (column 7, lines 46-67)).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Pushkin with Zhou, with a reasonable expectation of success, as it would have allowed for testing and fine tuning hyperparameters to improve model classification (Pushkin: column 7, lines 46-67).
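The split-and-validate step the examiner attributes to Pushkin (holding out labeled testing data to validate the classifier) might look like the following sketch; the deterministic split, the accuracy metric, and the record format are all assumptions for illustration:

```python
# Illustrative split-and-validate step (deterministic split, accuracy
# metric, and (text, label) record format are assumptions).

def split(records, test_fraction=0.25):
    cut = int(len(records) * (1 - test_fraction))
    return records[:cut], records[cut:]  # (training set, testing set)

def validate(model, test_records):
    # Fraction of held-out records the model labels correctly.
    hits = sum(1 for text, label in test_records if model(text) == label)
    return hits / len(test_records)

records = [("good", "+1"), ("bad", "-1"), ("fine", "+1"), ("poor", "-1")]
train, test = split(records)
accuracy = validate(lambda t: "+1" if t in {"good", "fine"} else "-1", test)
```

In practice the held-out accuracy (or error rates) would then drive hyperparameter tuning, as the cited Pushkin passages describe.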
As per dependent claim 12, Zhou, Alspector, and Pushkin disclose the limitations similar to those in claim 11, and the same rejection is incorporated herein. Zhou discloses wherein the actions further comprise employing the set of training data to train the classifier model (Figure 3; Section 3.4).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou, Alspector, and Pushkin and further in view of Ram et al. (US 2022/0076144, filed 9 September 2020, hereafter Ram).

As per dependent claim 13, Zhou, Alspector, and Pushkin disclose the limitations similar to those in claim 11, and the same rejection is incorporated herein.

Zhou discloses wherein the actions further comprise:

segmenting the set of training data into a set of lexicon training data and a set of scoring threshold data (Figure 3; Section 3.4: Here, the training data is segmented based upon the polarity of the classification data by the general lexicon and feature lexicon)

segmenting the set of lexicon training data into the first plurality of training records and the second plurality of training records (Figure 3; Section 3.4: Here, based upon the textual data being classified as either positive or negative (polarity), and having the same polarity between the general and feature lexicons, the record is added to the appropriate lexicon)

updating the classifier model (Figure 3; Section 3.4: Here, the lexicon data is used to train the supervised learning classification model)

Zhou fails to specifically disclose: updating the classifier model such that the updated classifier model, when benchmarked against the set of scoring threshold data, exhibits a predetermined tradeoff between a false positive error rate (FPR) and a false negative error rate (FNR) of the classifier model

However, Ram, which is analogous to the claimed invention because it is directed toward classification models having constraints, discloses: updating the classifier model such that the updated classifier model, when benchmarked against the set of scoring threshold data, exhibits a predetermined tradeoff between a false positive error rate (FPR) and a false negative error rate (FNR) of the classifier model (paragraph 0038: Here, a set of constraints are provided to the machine learning classification model. These constraints include false positive and false negative error rates. The model is then trained to adhere to these constraints)

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Ram with Zhou-Pushkin, with a reasonable expectation of success, as it would have allowed for providing constraints, such as false positive and false negative error rates, that are acceptable in training the model (Ram: paragraph 0038).

Claims 14, 20, and 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Zhou and Alspector and further in view of Templeton (US 2019/0259095, published 22 August 2019).

As per dependent claim 14, Zhou discloses the limitations similar to those in claim 8, and the same rejection is incorporated herein.
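The FPR/FNR constraints at the heart of claim 13 (and of Ram's paragraph 0038 as the examiner reads it) come down to simple counting over scored, labeled records benchmarked at a threshold. A sketch with invented scores and labels:

```python
# FPR/FNR bookkeeping for a score threshold; data values are invented.

def error_rates(scored, threshold):
    # scored: list of (score, label) with label 1 = positive, 0 = negative
    fp = sum(1 for s, y in scored if s >= threshold and y == 0)
    fn = sum(1 for s, y in scored if s < threshold and y == 1)
    negatives = sum(1 for _, y in scored if y == 0)
    positives = sum(1 for _, y in scored if y == 1)
    return fp / negatives, fn / positives  # (FPR, FNR)

data = [(0.9, 1), (0.8, 0), (0.4, 1), (0.2, 0)]
fpr, fnr = error_rates(data, threshold=0.5)
```

A "predetermined tradeoff" then amounts to training or thresholding until this (FPR, FNR) pair satisfies the chosen constraint.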
Zhou discloses employing the tuned classifier model to classify the text-based content as belonging to a positive class of the classifier model (Section 3.4), but Zhou fails to specifically disclose wherein the actions further comprise:

receiving a balance parameter that indicates a target tradeoff between a false positive error rate (FPR) of the classifier model and a false negative error rate (FNR) of the classifier model

employing the balance parameter to update the classifier model such that the updated classifier model, when benchmarked against a third plurality of training records, exhibits the target tradeoff between the FPR of the classifier model and the FNR of the classifier

However, Templeton discloses: receiving a balance parameter that indicates a target tradeoff between a false positive error rate (FPR) of the classifier model and a false negative error rate (FNR) of the classifier model and meeting a target tradeoff between the FPR of the classifier model and the FNR of the classifier (paragraph 0092: Here, a virtual balance model (Geometric Brownian Motion Monte Carlo) is used to balance error rates including false positive rate and false negative rates)

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Templeton with Zhou, with a reasonable expectation of success, as it would have allowed for balancing a plurality of error rates to improve the model (Templeton: paragraph 0092).

Further, Alspector discloses: such that the updated classifier model, when benchmarked against a third plurality of training records, exhibits the target tradeoff (column 2, lines 34-43: Here, lexicon used for classification is modified by adding additional content to the intersection to create an augmented intersection. This causes a larger threshold that may result in false positive/negatives based upon the expanded dataset. However, this is a tradeoff made by extending to a larger set that does not meet the original threshold)

employing the tuned classifier model to classify the text-based content as belonging to a positive class of the classifier model (column 2, lines 34-43: Here, the augmented lexicon classifies items that originally do not meet the threshold as being positively associated with the lexicon)

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Alspector with Zhou-Templeton, with a reasonable expectation of success, as it would have allowed for generation of primary and secondary lexicons for each generated lexicon (Alspector: column 2, lines 34-43). This would have provided the advantage of modifying the secondary lexicon to create an augmented intersection in order to use for calculating a similarity in addition to calculating the similarity based upon the original primary lexicon.

With respect to claim 20, the claim recites the limitations substantially similar to those in claim 14. Claim 20 is rejected under similar rationale.

With respect to claim 25, the claim recites the limitations substantially similar to those in claim 14. Claim 25 is rejected under similar rationale.

As per dependent claim 26, Zhou, Templeton, and Alspector disclose the limitations similar to those in claim 25, and the same rejection is incorporated herein.

Zhou further discloses wherein causing the tuned classifier model to classify a second text-based content further comprises:

causing the first sub-model to determine a first score for the second text-based content, wherein the first score indicates a likelihood that the second text-based content is associated with the positive class of the classifier model (Figure 2; Section 3.3: Here, a feature lexicon is used to analyze and categorize comments (strings) based on sentiment analysis. This is performed via pattern extraction and sentiment polarity assignment.
In this instance, each noun-adjective pattern is identified and a sentiment is applied to the extracted pattern. Based upon the sentiments, the noun-adjective pairs are stored in a data structure. Finally, each sentiment score is transformed to either “+1” (positive sentiment) or “-1” (negative sentiment))

causing the second sub-model to determine a second score for the second text-based content, wherein the second score indicates a likelihood that the second text-based content is associated with the negative class of the classifier model (Figure 2; Section 3.3: Here, a feature lexicon is used to analyze and categorize comments (strings) based on sentiment analysis. This is performed via pattern extraction and sentiment polarity assignment. In this instance, each noun-adjective pattern is identified and a sentiment is applied to the extracted pattern. Based upon the sentiments, the noun-adjective pairs are stored in a data structure. Finally, each sentiment score is transformed to either “+1” (positive sentiment) or “-1” (negative sentiment))

causing a third sub-model to determine a third score for the second text-based content, wherein the third score indicates a likelihood that the text-based content is associated with both the positive class of the classifier model and the negative class of the classifier model (Figure 2; Section 3.3: Here, there exists a set of nouns such that the nouns are the same but the adjective portions are antonyms (classified as both the first plurality of training records (positive) and second plurality of training records (negative)). In this instance, the pattern with the larger sentiment score is given the sentiment polarity of “+1” (positive) and the other the sentiment polarity of “-1” (negative))

causing the updated classifier model to generate an overall score for the second text-based content that is based on a combination of the first score, the second score, and the third score (Figure 3; Section 3.4: Here, the classified data define lexicons. These lexicons are used to train a supervised learning model to perform sentiment analysis)

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou, Alspector, and Templeton and further in view of Hayman et al. (US 12229777, filed 1 March 2019, hereafter Hayman).

As per dependent claim 15, Zhou, Alspector, and Templeton disclose the limitations similar to those in claim 14, and the same rejection is incorporated herein.

Zhou discloses iteratively employing one or more threshold parameters of the classifier model to determine a classification of each record of the third plurality of training records (Figure 3; Section 3.4: Here, a three-way decision is performed. Specifically, a general lexicon and the feature lexicon are compared. If both lexicons agree on the sentiment polarity label, the sentiment polarity label is treated as correct and it is added to the labeled data set. If the polarity is different, the result is placed into a rejection set for additional processing. The removal of text-based content to the rejection set is a mitigation action that alters the subsequent transmission and processing of the text-based content. This constitutes classification of each record of the plurality of training records).
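Claim 26's overall score, discussed above, combines the three sub-model scores into one value. The combination could take many forms; the shrinkage of the class margin by shared-lexicon evidence below is purely an assumption chosen for illustration, not taken from the claims or references.

```python
# One hypothetical way to combine the three sub-model scores of claim 26.
# The weighting scheme is an assumption for this sketch.

def overall_score(pos_score, neg_score, both_score, w_both=0.5):
    # Ambiguous (shared-lexicon) evidence shrinks the class margin.
    margin = pos_score - neg_score
    return margin / (1.0 + w_both * both_score)

score_mixed = overall_score(3, 1, 2)  # shared evidence present
score_clean = overall_score(3, 1, 0)  # no shared evidence
```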
Zhou fails to specifically disclose:

iteratively employing the label and the classification of each record of the third plurality of records to determine each of the FPR and the FNR of the classifier model

iteratively adjusting the one or more threshold parameters of the classifier model such that the classifier model, when benchmarked against the third plurality of training records, exhibits the indicated tradeoff between the FPR of the classifier model and the FNR of the classifier model

However, Templeton discloses: receiving a balance parameter that indicates a target tradeoff between a false positive error rate (FPR) of the classifier model and a false negative error rate (FNR) of the classifier model and meeting a target tradeoff between the FPR of the classifier model and the FNR of the classifier (paragraph 0092: Here, a virtual balance model (Geometric Brownian Motion Monte Carlo) is used to balance error rates including false positive rate and false negative rates)

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Templeton with Zhou, with a reasonable expectation of success, as it would have allowed for balancing a plurality of error rates to improve the model (Templeton: paragraph 0092).

Further, Alspector discloses: such that the updated classifier model, when benchmarked against a third plurality of training records, exhibits the target tradeoff (column 2, lines 34-43: Here, lexicon used for classification is modified by adding additional content to the intersection to create an augmented intersection. This causes a larger threshold that may result in false positive/negatives based upon the expanded dataset. However, this is a tradeoff made by extending to a larger set that does not meet the original threshold)

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Alspector with Zhou-Templeton, with a reasonable expectation of success, as it would have allowed for generation of primary and secondary lexicons for each generated lexicon (Alspector: column 2, lines 34-43). This would have provided the advantage of modifying the secondary lexicon to create an augmented intersection in order to use for calculating a similarity in addition to calculating the similarity based upon the original primary lexicon.

Finally, Hayman, which is analogous to the claimed invention because it is directed toward training a classifier, discloses iteratively adjusting hyperparameters of the classifier to achieve satisfactory predicted performance (column 11, lines 6-33: Here, classification data and hyperparameter data are iteratively tuned to improve the accuracy of the machine learning model).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Hayman’s iterative training with Zhou’s classification of each record, with a reasonable expectation of success, as it would have provided the advantage of improving classification of each record (Hayman: column 11, lines 6-33). Additionally, it would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Hayman’s iterative training with Templeton’s balancing of FPR and FNR, with a reasonable expectation of success, as it would have allowed for improving balancing through optimization achieved by multiple iterations (Hayman: column 11, lines 6-33).

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou and Alspector, and further in view of Ram.
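The iterative threshold adjustment recited in claim 15 can be pictured as a sweep over candidate thresholds, keeping the one whose FPR/FNR mix best matches a balance parameter. The cost function and balance semantics below are assumptions for this sketch, with all data values invented:

```python
# Hypothetical threshold sweep: a balance parameter near 1.0 penalizes
# false positives more heavily than false negatives. All values invented.

def tune_threshold(scored, balance=0.5):
    # scored: list of (score, label) with label 1 = positive, 0 = negative
    candidates = sorted({s for s, _ in scored})
    best_t, best_cost = None, float("inf")
    for t in candidates:
        fp = sum(1 for s, y in scored if s >= t and y == 0)
        fn = sum(1 for s, y in scored if s < t and y == 1)
        cost = balance * fp + (1 - balance) * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

data = [(0.9, 1), (0.8, 0), (0.6, 1), (0.2, 0)]
strict = tune_threshold(data, balance=0.9)   # tolerates false negatives
lenient = tune_threshold(data, balance=0.1)  # tolerates false positives
```

Shifting the balance parameter moves the selected threshold, which is the tunable FPR/FNR tradeoff the claims describe.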
As per dependent claim 16, Zhou discloses the limitations similar to those in claim 8, and the same rejection is incorporated herein. Zhou fails to specifically disclose deploying the classifier model in a compliance enforcement pipeline.

However, Ram, which is analogous to the claimed invention because it is directed toward a classification model, discloses a classifier model in a compliance enforcement pipeline (paragraph 0030: Here an ML model pipeline is used to model the correlation between data and classifications of the data. Various pipelines may classify the data based upon various parameters including false positive rates and false negative rates and use these hyperparameters to constrain the models (paragraph 0038)).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Ram with Zhou, with a reasonable expectation of success, as it would have allowed for generating classification model pipelines that conform to specified hyperparameters (Ram: paragraph 0038).

Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over Zhou, Alspector, and Templeton, and further in view of Ram.

As per dependent claim 27, Zhou, Alspector, and Templeton disclose the limitations similar to those in claim 25, and the same rejection is incorporated herein. Zhou fails to specifically disclose deploying the classifier model in a compliance enforcement pipeline.

However, Ram, which is analogous to the claimed invention because it is directed toward a classification model, discloses a classifier model in a compliance enforcement pipeline (paragraph 0030: Here an ML model pipeline is used to model the correlation between data and classifications of the data. Various pipelines may classify the data based upon various parameters including false positive rates and false negative rates and use these hyperparameters to constrain the models (paragraph 0038)).

It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Ram with Zhou-Templeton-Alspector, with a reasonable expectation of success, as it would have allowed for generating classification model pipelines that conform to specified hyperparameters (Ram: paragraph 0038).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Lindnor (US 10878334): Discloses training and testing a trained model to improve accuracy of the model (Figures 2 and 4)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK whose telephone number is (571)272-4130. The examiner can normally be reached 8am - 2pm; 4pm - 6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at 571-272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R STORK/
Primary Examiner, Art Unit 2128

Prosecution Timeline

Dec 31, 2021: Application Filed
Jan 22, 2026: Non-Final Rejection (§101, §102, §103)
Feb 13, 2026: Interview Requested
Mar 03, 2026: Applicant Interview (Telephonic)
Mar 06, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585935: EXECUTION BEHAVIOR ANALYSIS TEXT-BASED ENSEMBLE MALWARE DETECTOR (2y 5m to grant; granted Mar 24, 2026)
Patent 12585937: SYSTEMS AND METHODS FOR DEEP LEARNING ENHANCED GARBAGE COLLECTION (2y 5m to grant; granted Mar 24, 2026)
Patent 12585869: RECOMMENDATION PLATFORM FOR SKILL DEVELOPMENT (2y 5m to grant; granted Mar 24, 2026)
Patent 12579454: PROVIDING EXPLAINABLE MACHINE LEARNING MODEL RESULTS USING DISTRIBUTED LEDGERS (2y 5m to grant; granted Mar 17, 2026)
Patent 12579412: SPIKE NEURAL NETWORK CIRCUIT INCLUDING SELF-CORRECTING CONTROL CIRCUIT AND METHOD OF OPERATION THEREOF (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64% (92% with interview, +28.3%)
Median Time to Grant: 4y 0m
PTA Risk: Low
Based on 865 resolved cases by this examiner. Grant probability derived from career allow rate.
