Prosecution Insights
Last updated: April 19, 2026
Application No. 16/194,621

SEMI-AUTOMATED CORRECTION OF POLICY RULES

Final Rejection: §101, §103, §112

Filed: Nov 19, 2018
Examiner: RIFKIN, BEN M
Art Unit: 2123
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 6 (Final)

Grant Probability: 44% (Moderate)
OA Rounds: 7-8
To Grant: 4y 12m
With Interview: 59%
Examiner Intelligence

Career Allow Rate: 44% (139 granted / 317 resolved; -11.2% vs TC avg)
Interview Lift: +15.6% (strong; allow rate without vs with interview among resolved cases with an interview)
Avg Prosecution: 4y 12m (typical timeline)
Currently Pending: 38
Total Applications: 355 (career history, across all art units)
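The headline figures in this panel follow from simple arithmetic on the career counts; a minimal sketch that reproduces them (the counts and the lift are taken from the panel above, and values are rounded to the displayed precision):

```python
# Reproduce the examiner-panel figures from the underlying career counts.
granted = 139           # granted cases (from the panel above)
resolved = 317          # resolved cases
interview_lift = 0.156  # allowance-rate lift observed with an interview

allow_rate = granted / resolved               # 0.4385... -> displayed as 44%
with_interview = allow_rate + interview_lift  # 0.5945... -> displayed as 59%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Estimated allow rate with interview: {with_interview:.1%}")
```

The 59% "With Interview" figure in the header is consistent with adding the +15.6% lift to the 44% base rate.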

Statute-Specific Performance

§101: 21.8% (-18.2% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 7.8% (-32.2% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 317 resolved cases.
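Each statute line pairs the examiner's rate with a delta against the Tech Center average, so the implied TC baseline can be recovered by subtraction; a quick check (values transcribed from the chart above, whatever the underlying per-statute metric is):

```python
# Recover the implied Tech Center baseline from each rate/delta pair above.
stats = {            # statute: (examiner rate, delta vs TC average)
    "§101": (0.218, -0.182),
    "§103": (0.428, +0.028),
    "§102": (0.078, -0.322),
    "§112": (0.181, -0.219),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: implied TC average = {tc_avg:.1%}")
# All four pairs imply the same ~40.0% Tech Center baseline.
```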

Office Action

Rejections: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The instant application, Application No. 16/194,621, has a total of 28 claims pending, of which claims 4-5, 7, 11-12, 14, 18 and 20 have been cancelled.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 6-11, and 13-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claim 1 is a process type claim. Claim 8 is a machine type claim. Claim 15 is a manufacture type claim. Therefore, claims 1-4, 6-11, and 13-20 are directed to either a process, machine, manufacture, or composition of matter.

As per claim 1, 2A Prong 1:

"extracting the one or more rules from one or more segments of the text data by identifying the one or more segments of text data contain content having at least one of direct and inferential semantics indicating one or more obligations relating to the one or more insurance policies" The user mentally or with pencil and paper identifies rules related to the insurance policies' obligations.

"identifying… incorrect data relating to one or more of the one or more rules…" The user mentally or with pencil and paper identifies incorrect data within the rules.
"revising the incorrect data of the one or more rules in the set of the one or more rules to maintain accuracy and correctness of the policy data source, wherein the one or more rules are updated in the database according to the revision to the incorrect data" The user mentally or with pencil and paper corrects the data and makes sure the correction is made as needed in any other locations.

"…wherein the correction is incorporated into the one or more rules to form updated one or more rules, wherein the correction comprises addition of a missing endpoint of a range" The user mentally or with pencil and paper corrects the rule and adds it to the set of updated rules to be used later.

"adding the updated one or more rules to the ground truth set to form an updated ground truth set" The user mentally or with pencil and paper adds the rules to the set.

2A Prong 2: This judicial exception is not integrated into a practical application.

Additional elements: Computing environment, a processor (mere instructions to apply the exception using a generic computer component); "training one or more machine learning models on a ground truth set comprising positive and negative example rules", "via a first machine learning sentence classifier of the one or more machine learning models", "by a second machine learning classifier of the one or more machine learning models", "re-training one or more of the one or more machine learning models on the updated ground truth set" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: the machine learning in the claims is no more than a generic machine learning algorithm, with no detail or limitations that would make it more than an off-the-shelf machine learning algorithm).
"receiving, into a database, text data defining one or more insurance policies received from a policy data source, wherein the text data includes one or more rules specific to respective policies of the one or more insurance policies", "presenting the incorrect data and a natural text snippet associated with the one or more rules from the text data, wherein the presenting occurs in response to a size of a set of the incorrect data exceeding a predetermined threshold", "receiving user feedback in response to the presenting, the user feedback comprising a correction…" (adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements: Computing environment, a processor (mere instructions to apply the exception using a generic computer component); "training one or more machine learning models on a ground truth set comprising positive and negative example rules", "via a first machine learning sentence classifier of the one or more machine learning models", "by a second machine learning classifier of the one or more machine learning models", "re-training one or more of the one or more machine learning models on the updated ground truth set" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: the machine learning in the claims is no more than a generic machine learning algorithm, with no detail or limitations that would make it more than an off-the-shelf machine learning algorithm).
"receiving, into a database, text data defining one or more insurance policies received from a policy data source, wherein the text data includes one or more rules specific to respective policies of the one or more insurance policies", "presenting the incorrect data and a natural text snippet associated with the one or more rules from the text data, wherein the presenting occurs in response to a size of a set of the incorrect data exceeding a predetermined threshold", "receiving user feedback in response to the presenting, the user feedback comprising a correction…" (MPEP 2106.05(d)(II) indicates that merely receiving or transmitting data is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. Thereby, a conclusion that the claimed receiving and displaying steps are well-understood, routine, conventional activity is supported under Berkheimer).

As per claim 2, this claim contains additional receiving of data and mental steps of interpreting text data, and is rejected for similar reasons to claim 1.

As per claims 3, 6 and 21-23, these claims contain additional mental steps similar to claim 1, and are rejected for similar reasons to claim 1.

As per claim 8, 2A Prong 1:

"extract the one or more rules from one or more segments of the text data by identifying the one or more segments of text data contain content having at least one of direct and inferential semantics indicating one or more obligations relating to the one or more insurance policies" The user mentally or with pencil and paper identifies rules related to the insurance policies' obligations.

"identify… incorrect data relating to one or more of the one or more rules…" The user mentally or with pencil and paper identifies incorrect data within the rules.
"revise the incorrect data of the one or more rules in the set of the one or more rules to maintain accuracy and correctness of the policy data source, wherein the one or more rules are updated in the database according to the revision to the incorrect data" The user mentally or with pencil and paper corrects the data and makes sure the correction is made as needed in any other locations.

"…wherein the correction is incorporated into the one or more rules to form updated one or more rules, wherein the correction comprises addition of a missing endpoint of a range" The user mentally or with pencil and paper corrects the rule and adds it to the set of updated rules to be used later.

"add the updated one or more rules to the ground truth set to form an updated ground truth set" The user mentally or with pencil and paper adds the rules to the set.

2A Prong 2: This judicial exception is not integrated into a practical application.

Additional elements: Computing environment, one or more processors (mere instructions to apply the exception using a generic computer component); "train one or more machine learning models on a ground truth set comprising positive and negative example rules", "via a first machine learning sentence classifier of the one or more machine learning models", "by a second machine learning classifier of the one or more machine learning models", "re-train one or more of the one or more machine learning models on the updated ground truth set" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: the machine learning in the claims is no more than a generic machine learning algorithm, with no detail or limitations that would make it more than an off-the-shelf machine learning algorithm).
"receive, into a database, text data defining one or more insurance policies received from a policy data source, wherein the text data includes one or more rules specific to respective policies of the one or more insurance policies", "presenting the incorrect data and a natural text snippet associated with the one or more rules from the text data, wherein the presenting occurs in response to a size of a set of the incorrect data exceeding a predetermined threshold", "receiving user feedback in response to the presenting, the user feedback comprising a correction…" (adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements: Computing environment, one or more processors (mere instructions to apply the exception using a generic computer component); "train one or more machine learning models on a ground truth set comprising positive and negative example rules", "via a first machine learning sentence classifier of the one or more machine learning models", "by a second machine learning classifier of the one or more machine learning models", "re-train one or more of the one or more machine learning models on the updated ground truth set" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: the machine learning in the claims is no more than a generic machine learning algorithm, with no detail or limitations that would make it more than an off-the-shelf machine learning algorithm).
"receive, into a database, text data defining one or more insurance policies received from a policy data source, wherein the text data includes one or more rules specific to respective policies of the one or more insurance policies", "presenting the incorrect data and a natural text snippet associated with the one or more rules from the text data, wherein the presenting occurs in response to a size of a set of the incorrect data exceeding a predetermined threshold", "receiving user feedback in response to the presenting, the user feedback comprising a correction…" (MPEP 2106.05(d)(II) indicates that merely receiving or transmitting data is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. Thereby, a conclusion that the claimed receiving and displaying steps are well-understood, routine, conventional activity is supported under Berkheimer).

As per claim 9, this claim contains additional receiving of data and mental steps similar to claim 8, and is rejected for similar reasons to claim 8.

As per claims 10, 13 and 24-26, these claims contain additional mental steps similar to claim 8, and are rejected for similar reasons to claim 8.

As per claim 15, 2A Prong 1:

"extracting the one or more rules from one or more segments of the text data by identifying the one or more segments of text data contain content having at least one of direct and inferential semantics indicating one or more obligations relating to the one or more insurance policies" The user mentally or with pencil and paper identifies rules related to the insurance policies' obligations.

"identifying… incorrect data relating to one or more of the one or more rules…" The user mentally or with pencil and paper identifies incorrect data within the rules.
"revising the incorrect data of the one or more rules in the set of the one or more rules to maintain accuracy and correctness of the policy data source, wherein the one or more rules are updated in the database according to the revision to the incorrect data" The user mentally or with pencil and paper corrects the data and makes sure the correction is made as needed in any other locations.

"…wherein the correction is incorporated into the one or more rules to form updated one or more rules, wherein the correction comprises addition of a missing endpoint of a range" The user mentally or with pencil and paper corrects the rule and adds it to the set of updated rules to be used later.

"adding the updated one or more rules to the ground truth set to form an updated ground truth set" The user mentally or with pencil and paper adds the rules to the set.

2A Prong 2: This judicial exception is not integrated into a practical application.

Additional elements: A computer program product, Computing environment, one or more processors, non-transitory computer-readable storage medium (mere instructions to apply the exception using a generic computer component); "training one or more machine learning models on a ground truth set comprising positive and negative example rules", "via a first machine learning sentence classifier of the one or more machine learning models", "by a second machine learning classifier of the one or more machine learning models", "re-training one or more of the one or more machine learning models on the updated ground truth set" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: the machine learning in the claims is no more than a generic machine learning algorithm, with no detail or limitations that would make it more than an off-the-shelf machine learning algorithm).

"receiving, into a database, text data defining one or more insurance policies received from a policy data source, wherein the text data includes one or more rules specific to respective policies of the one or more insurance policies", "presenting the incorrect data and a natural text snippet associated with the one or more rules from the text data, wherein the presenting occurs in response to a size of a set of the incorrect data exceeding a predetermined threshold", "receiving user feedback in response to the presenting, the user feedback comprising a correction…" (adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.

Additional elements: A computer program product, Computing environment, one or more processors, non-transitory computer-readable storage medium (mere instructions to apply the exception using a generic computer component); "training one or more machine learning models on a ground truth set comprising positive and negative example rules", "via a first machine learning sentence classifier of the one or more machine learning models", "by a second machine learning classifier of the one or more machine learning models", "re-training one or more of the one or more machine learning models on the updated ground truth set" (adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Examiner's note: the machine learning in the claims is no more than a generic machine learning algorithm, with no detail or limitations that would make it more than an off-the-shelf machine learning algorithm).
"receiving, into a database, text data defining one or more insurance policies received from a policy data source, wherein the text data includes one or more rules specific to respective policies of the one or more insurance policies", "presenting the incorrect data and a natural text snippet associated with the one or more rules from the text data, wherein the presenting occurs in response to a size of a set of the incorrect data exceeding a predetermined threshold", "receiving user feedback in response to the presenting, the user feedback comprising a correction…" (MPEP 2106.05(d)(II) indicates that merely receiving or transmitting data is a well-understood, routine, conventional function when it is claimed in a merely generic manner, as it is in the present claim. Thereby, a conclusion that the claimed receiving and displaying steps are well-understood, routine, conventional activity is supported under Berkheimer).

As per claim 16, this claim contains additional receiving of data and mental steps similar to claim 15, and is rejected for similar reasons to claim 15.

As per claims 17, 19 and 27-28, these claims contain additional mental steps similar to claim 15, and are rejected for similar reasons to claim 15.

Claim Rejections – 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-3, 6, 8-10, 13, 15-17, 19, and 21-28 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

As per claims 1, 8, and 15, these claims call for "training one or more machine learning models on a ground truth set comprising positive and negative example rules" along with "extracting via a first machine learning sentence classifier of the one or more machine learning models" and "identifying, by a second machine learning classifier of the one or more machine learning models." These combined limitations are not supported by the specification. The claims require that at least two models be trained using positive and negative examples: a first machine learning sentence classifier and a second machine learning classifier.
The discussion of positive and negative examples is found in paragraph 0072 of the instant specification, which discloses that "the 'learned model' is a machine learning model that is learned from a set of positive and negative example rules (i.e., correct and incorrect rules)." The only other section discussing positive/negative examples is the table found on page 7 between paragraphs 0075 and 0076, which denotes an "ML classifier (SVM) trained on positive/negative examples of rules (ground truth set)." These passages only denote a single machine learning model being trained on positive/negative examples.

The claims require that "a first machine learning sentence classifier" be part of the "one or more models" trained with positive/negative examples. However, the specification only discusses the "machine learning sentence classifier" in paragraphs 0068-0069, which make no mention of this machine learning model being trained on positive or negative examples. This causes the limitations above to be new matter, which is therefore rejected under 35 U.S.C. 112(a).

As per claims 2-3, 6, 9-10, 13, 16-17, 19, and 21-28, these claims are rejected as being dependent on a claim rejected under 35 U.S.C. 112(a) for new matter.

As per claims 1, 8 and 15, these claims call for "re-training one or more of the one or more machine learning models on the updated ground truth set" along with "extracting via a first machine learning sentence classifier of the one or more machine learning models" and "identifying, by a second machine learning classifier of the one or more machine learning models." This causes the claims to require that multiple models of the claim can be retrained on the updated ground truth set. However, this is not supported by the specification.
As discussed above, the claims require that "a first machine learning sentence classifier" be part of the "one or more models." However, the specification only discusses the "machine learning sentence classifier" in paragraphs 0068-0069, which make no mention of this machine learning model being updated in any way, let alone based upon the updated ground truth set. This causes the limitations above to be new matter, which is therefore rejected under 35 U.S.C. 112(a).

As per claims 2-3, 6, 9-10, 13, 16-17, 19, and 21-28, these claims are rejected as being dependent on a claim rejected under 35 U.S.C. 112(a) for new matter.

As per claims 21, 24 and 27, these claims call for "further comprising processing a new insurance claim through an automated processing system that incorporates the updated database." However, the specification does not support this limitation. Claims are only discussed in paragraphs 0013 and 0054. Paragraph 0013 discloses claiming physical therapy units and the use of a modern automated claim processing system that uses formal encoding of policy rules. However, it does not disclose using an updated database to process a specific new insurance claim, or any insurance claims after updating. Paragraph 0054 discloses ranking rules based upon the recovery costs of applying the rule to claims, but once again makes no mention of updated databases or processing of new claims. This causes these claims to be new matter, rejected under 35 U.S.C. 112(a).

Claim Rejections – 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 22 and 26 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

The phrases "similar rules" and "similar incorrect data" in claims 22 and 26 are relative phrases which render the claims indefinite. The term "similar" is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. This causes the claims to be confusing, as there is no way to determine just what a "similar rule" or "similar incorrect data" would be, and the claims are therefore rejected under 35 U.S.C. 112(b) for failing to particularly point out and claim the intended invention.

Claim Rejections – 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, 8-10, 13, 15-17, 19, 22, 25, and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Purandare et al (US 20150187011 A1) in view of Erpenbach et al (US 20180203922 A1), Thompson et al ("Active Learning for Natural Language Parsing and Information Extraction"), Huang (US 7984112 B2) and Yachiku et al (US 20130211599 A1).

As per claims 1, 8 and 15, Purandare discloses:

"A method for correcting policy data" (Pg.7, particularly paragraph 0119; EN: this denotes correcting aspects of insurance products (i.e. policies)).

"in a computing environment by a processor comprising" (Pg.7, particularly paragraph 0122; EN: this denotes the use of processors and other computer equipment to implement the system).

"Receiving, into a database" (Pg.7, particularly paragraph 0125; EN: this denotes the computer readable medium being a database).

"text data defining one or more insurance policies received from a policy data source" (Pg.8, particularly paragraph 0129; EN: this denotes working with and examining the incoming policies).

"Wherein the text data includes one or more rules defining conditions specific to respective policies of the one or more insurance policies" (Pg.3, particularly paragraph 0036; Tables 00006-00008; EN: this denotes various rules associated with the policies).
"Extracting the one or more rules from one or more segments of the text data by identifying the one or more segments of text data contain content having at least one of direct and inferential semantics indicating one or more obligations relating to the one or more insurance policies" (Pg.8, particularly paragraph 0141; EN: this denotes looking at the particulars of an individual policy, including rules and obligations of that policy).

"identifying, by a rule selector, incorrect data relating to one or more of the one or more rules…" (Pg.8, particularly paragraph 0129; EN: this denotes reviewing rules and the like for errors).

"revising the incorrect data of the one or more rules in the set of the one or more rules to maintain accuracy and correctness of the policy data source, wherein the one or more rules are updated in the database according to the revision to the incorrect data, wherein the revising comprises" (Pg.8, particularly paragraph 0129; EN: this denotes looking at results of the system to validate/look for errors, identify them, and correct them).

"presenting the incorrect data" (Pg.8, particularly paragraphs 0129-0130; EN: this denotes using the Evaluator/evaluation tool to look for errors. Figure 6; EN: this denotes the display of the evaluation tool, which would display any errors).

"and a natural text snippet associated with the one or more rules from the text data" (Pg.8, particularly paragraphs 0130-0139; Figure 6; EN: this denotes text snippets associated with the rules and being displayed by the evaluation tool).

"receiving user feedback in response to the presenting, the user feedback comprising a correction, wherein the correction is incorporated into the one or more rules to form updated one or more rules" (Pg.8, particularly paragraph 0129; EN: this denotes looking at results of the system to validate/look for errors, identify them, and correct them).
However, Purandare fails to explicitly disclose: "training one or more machine learning models on a ground truth set comprising positive and negative example rules", "extracting, via a first machine learning sentence classifier of the one or more machine learning models", "by a second machine learning classifier of the one or more machine learning models", "wherein the presenting occurs in response to a size of a set of the incorrect data exceeding a predetermined threshold value", "wherein the correction comprises addition of a missing endpoint of a range", "adding the updated one or more rules to the ground truth set to form an updated ground truth set", and "retraining one or more of the one or more machine learning models on the ground truth set."

Erpenbach discloses:

"Training one or more machine learning models…" (Pg.2, particularly paragraph 0017; EN: this denotes using natural language processing including machine learning, which will inherently include some form of training).

"…by a … machine learning classifier of the one or more machine learning models" (Pg.2, particularly paragraph 0019; EN: this denotes using the NLP algorithm to work with rules and other text-based systems).

Thompson discloses:

"training one or more machine learning models on a ground truth set" (abstract; EN: this denotes the training being supervised, which includes inputs and expected outputs (i.e. ground truth)).

"comprising positive and negative example rules" (Pg.2, particularly section 3.1; EN: this denotes using negative and positive examples to train the natural language system).

"extracting via a first machine learning sentence classifier of the one or more machine learning models" (Pg.2-3, particularly sections 3.1 and 3.2; EN: this denotes using multiple different machine learning models for parsing and information gathering from the parsed data).
"by a second machine learning classifier… of the one or more machine learning models" (Pg.2-3, particularly sections 3.1 and 3.2; EN: this denotes using multiple different machine learning models for parsing and information gathering from the parsed data).

"adding the updated one or more rules to the ground truth set to form an updated ground truth set" and "retraining one or more of the one or more machine learning models on the ground truth set" (Pg.2, particularly section 2; EN: this denotes looking at actions that were classified/performed incorrectly, correcting them, and then using them for retraining).

Huang discloses:

"wherein the presenting occurs in response to a size of a set of the incorrect data exceeding a predetermined threshold value" (C5, particularly L60-68; C6, particularly L1-14; EN: this denotes setting up batches of data before transmitting them. When combined with the Purandare reference, this denotes setting a batch size for transmitting rules to the user to review).

Purandare and Erpenbach are analogous art because both involve rule evaluation. Before the effective filing date it would have been obvious to one skilled in the art of rule evaluation to combine the work of Purandare and Erpenbach in order to use machine learning to evaluate and validate data relating to rules. The motivation for doing so would be to "determine the importance/relevance of the attributes involved in generating the conclusion" (Erpenbach, paragraph 0072) or, in the case of Purandare, to allow the system to determine what aspects of a rule are important in order to determine what to review for accuracy. Therefore before the effective filing date it would have been obvious to one skilled in the art of rule evaluation to combine the work of Purandare and Erpenbach in order to use machine learning to evaluate and validate data relating to rules.

Thompson and Purandare modified by Erpenbach are analogous art because both involve machine learning.
Before the effective filing date, it would have been obvious to one skilled in the art of machine learning to combine the work of Thompson and Purandare modified by Erpenbach in order to train using positive/negative examples and retrain when new data is available. The motivation for doing so would be that “the k examples with the lowest certainties are then presented to the user for annotation and retraining” (Thompson, Pg.2, section 2, second paragraph), or in the case of Purandare modified by Erpenbach, to allow the system to identify incorrectly performed actions and then use the corrected versions to further train the model. Therefore, before the effective filing date, it would have been obvious to one skilled in the art of machine learning to combine the work of Thompson and Purandare modified by Erpenbach in order to train using positive/negative examples and retrain when new data is available.

Purandare and Huang are analogous art because both involve transmitting data. Before the effective filing date, it would have been obvious to one skilled in the art of data transmission to combine the work of Purandare and Huang in order to set a minimum threshold before transmitting data. The motivation for doing so would be bandwidth: “If the bandwidth is high, network acceleration devices 150 can potentially afford to use a larger batch size without significant effect on response time. On the other hand, if the bandwidth is low, a small batch size may be better to avoid unnecessarily delaying subsequent requests” (Huang, C6, L1-5), or in the case of Purandare, allowing the system to store up a reasonable amount of data before displaying it to the user, to prevent delays and save response time. Therefore, before the effective filing date, it would have been obvious to one skilled in the art of data transmission to combine the work of Purandare and Huang in order to set a minimum threshold before transmitting data.
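Taken together, the limitations mapped above describe an active-learning loop: suspect extracted rules are queued, presented for review once the batch exceeds a threshold, corrected, folded back into the ground truth, and the models are retrained. The following is a minimal illustrative sketch of that claimed flow only; all names and structures are hypothetical and do not reflect any party's actual implementation:

```python
# Hypothetical sketch of the claimed semi-automated correction loop:
# queue suspect rules, present a batch once its size exceeds a
# threshold, then fold reviewer-corrected rules into the ground truth
# (a real system would retrain its models on the updated set).
from dataclasses import dataclass, field


@dataclass
class CorrectionLoop:
    threshold: int                                   # batch size that triggers review
    ground_truth: list = field(default_factory=list)
    pending: list = field(default_factory=list)

    def flag(self, rule: str) -> list:
        """Queue a suspect rule; return the batch once the threshold is exceeded."""
        self.pending.append(rule)
        if len(self.pending) > self.threshold:
            batch, self.pending = self.pending, []
            return batch
        return []

    def apply_corrections(self, corrected: list) -> None:
        """Add reviewer-corrected rules to the ground truth for retraining."""
        self.ground_truth.extend(corrected)


loop = CorrectionLoop(threshold=2)
assert loop.flag("rule A") == []    # below threshold: nothing presented yet
assert loop.flag("rule B") == []
batch = loop.flag("rule C")         # size 3 > 2: batch presented for review
loop.apply_corrections(batch)       # corrected rules become new ground truth
```

The threshold-gated batching mirrors the Huang mapping (accumulate before presenting), while `apply_corrections` mirrors the Thompson mapping (corrected examples feed retraining).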
Purandare and Yachiku are analogous art because both involve data correction. Before the effective filing date, it would have been obvious to one skilled in the art of data correction to combine the work of Purandare and Yachiku in order to change the bounds of ranges to correct values. The motivation for doing so would be that “When the measured value of the status data is greater than the upper limit or less than the lower limit of the prediction range, the controller 100 corrects the upper limit upward, or corrects the lower limit downward” (Yachiku, Pg.4, particularly paragraph 0077), or in the case of Purandare, to allow the system to correct errors in range endpoints as needed to make them correct for the system. Therefore, before the effective filing date, it would have been obvious to one skilled in the art of data correction to combine the work of Purandare and Yachiku in order to change the bounds of ranges to correct values.

As per claims 2, 9, and 16, Purandare discloses, “ingesting the text data from the policy data source upon processing the text data using at least one member selected from a group consisting of: a lexical analysis, parsing, extraction of concepts, semantic analysis, and a machine learning operation” (Abstract; EN: this denotes using parsing to pull in data).

As per claims 3, 10, and 17, Purandare discloses, “wherein the identifying the incorrect data is further performed according to a knowledge domain” and “wherein the revising the incorrect data of the one or more rules is further done according to the knowledge domain” (abstract; EN: this denotes the system being related to the domain of insurance policies).
As per claims 6, 13, and 19, Erpenbach discloses, “further including ranking each of the one or more rules according to a score” (Pg.2, particularly paragraph 0020; EN: this denotes ranking information based upon how important it is to the system and its current actions).

As per claims 22, 25, and 28, Erpenbach discloses, “further comprising using natural language processing (NLP) to determine the one or more rules from one or more segments of text data” (Pg.1, particularly paragraph 0001; EN: this denotes using natural language processing to deal with the text of the system).

Claim Rejections - 35 USC § 103

Claims 21, 23-24, and 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Purandare et al. (US 20150187011 A1) in view of Erpenbach et al. (US 20180203922 A1), Thompson et al. (“Active Learning for Natural Language Parsing and Information Extraction”), Huang (US 7984112 B2), and Yachiku et al. (US 20130211599 A1), and further in view of LaTurner et al. (US 20040205075 A1).

As per claims 21, 24, and 27, Purandare discloses, “further comprising processing a new insurance … through an automated processing system that incorporates the updated database” (Pg.8, particularly paragraph 0129; EN: this denotes looking at results of the system to validate/look for errors, identify, and correct them). However, Purandare fails to explicitly disclose, “insurance claim.” LaTurner discloses, “insurance claim” (pg.2, particularly paragraph 0015; EN: this denotes monitoring insurance claims to make sure the data is correct. When combined with the Purandare reference, this denotes using the Purandare system to monitor the rules associated with these insurance claims to make sure the data is correct and processed correctly). Purandare and LaTurner are analogous art because both involve insurance.
Before the effective filing date, it would have been obvious to one skilled in the art of insurance to combine the work of Purandare and LaTurner in order to make use of the insurance error correction features of Purandare to process insurance claims. The motivation for doing so would be to “ensure that automobile insurance claim information is consistently entered according to procedure and ensure that homeowners insurance claim information is consistently entered according to another procedure” (LaTurner, Pg.2, paragraph 0015), or in the case of Purandare, to allow the system to use the error checking of the Purandare reference to make sure insurance claims are processed correctly. Therefore, before the effective filing date, it would have been obvious to one skilled in the art of insurance to combine the work of Purandare and LaTurner in order to make use of the insurance error correction features of Purandare to process insurance claims.

As per claims 23 and 26, Purandare fails to explicitly disclose, “further comprising using the updated one or more rules to revise similar rules having similar incorrect data.” LaTurner discloses, “further comprising using the updated one or more rules to revise similar rules having similar incorrect data” (Pg.1, particularly paragraph 0009; EN: this denotes treating similar data the same in order to be efficient and make sure data is treated properly). Purandare and LaTurner are analogous art because both involve insurance. Before the effective filing date, it would have been obvious to one skilled in the art of insurance to combine the work of Purandare and LaTurner in order to treat similar data similarly to already-processed data.
The motivation for doing so would be to allow the system to “ensure that differing clerks or other entities or program each treat the content in the similar or the same manner, thus providing for consistent content entry and/or manipulation procedure” (LaTurner, Pg.2, paragraph 0014), or in the case of Purandare, to allow the system to use its knowledge of fixing errors to properly treat and fix similar errors found in the system. Therefore, before the effective filing date, it would have been obvious to one skilled in the art of insurance to combine the work of Purandare and LaTurner in order to treat similar data similarly to already-processed data.

Response to Arguments

In pg.11-12, the Applicant argues, in regard to the rejection under 35 U.S.C. 101, that the Applicant's specification identifies a technical problem. For example, paragraphs [0012] and [0013] describe that modern automated claim processing systems may rely on some formal encoding of the policy rules, but that existing tools for automatically extracting rules from text are quite noisy and very prone to errors, often resulting in extracting incorrect rules. Paragraph [0014] of the Applicant's specification describes that, by using a semi-automated system for updating policy rules and by using active learning and user feedback, the automated tasks are improved so that the benefits of artificial intelligence are more successfully achieved in the space of automated software for handling insurance claims. Automated software for handling insurance claims is indeed a technical field. MPEP 2106.04(d) says that integration into a practical application may be shown by “claims that improve the functioning of a computer or other technology or technological field” (emphasis added), not only by claims that improve the functioning of a computer. Automated software for handling insurance claims is important in modern society to handle claims regarding automotive damage, property damage, etc.
in an automated manner to provide resolution more quickly. The novel steps of using the semi-automated system make the amended claim 1 more than an implementation of abstract ideas in a generic computing environment.

In response, the Examiner maintains the rejection as shown above. The Applicant's argument appears to be that the improvement is to the technology of “automated claim processing systems.” However, this is a “technology” that is met by taking an abstract idea – processing insurance claims – and using generic computer equipment and/or extra-solution activity to perform the function of a human being. Insurance has been used in society for thousands of years; merely taking the steps of a human being and automating them through a generic computer system or generic machine learning algorithms does not cause the claims to be significantly more than the abstract idea, and therefore the rejection is maintained as shown above.

In pg.12, the Applicant further argues, in regard to the rejection under 35 U.S.C. 101 of the independent claims, that the presently claimed invention cannot be practically performed entirely within the human mind. In particular, the step of re-training a machine learning model with an updated ground truth set cannot be performed by a human using a pen and paper or a generic computer.

In response, the Examiner maintains the rejection as shown above. The Examiner at no time claimed that a machine learning model is processed in the human mind. The use of generic machine learning models is handled similarly to generic computer equipment such as processors or memory. Merely taking an abstract idea such as correcting errors in rules or other systems of insurance claim processing, which can certainly be handled via pencil and paper, and using a generic machine learning model to perform these actions does not cause the claims to be significantly more than the abstract idea.
Here the computer equipment and machine learning models are nothing more than generic machine learning models or generic computer equipment, and therefore the rejection is maintained as shown above.

Applicant's arguments with respect to claims 1-3, 6, 8-10, 13, 15-17, 19, and 21-28 are either conclusory, repetitions of the above arguments, or have been considered but are moot in view of the new ground(s) of rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BEN M RIFKIN, whose telephone number is (571) 272-9768. The examiner can normally be reached Monday-Friday, 9 am - 5 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexey Shmatov, can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BEN M RIFKIN/
Primary Examiner, Art Unit 2123

Prosecution Timeline

Nov 19, 2018 — Application Filed
Mar 28, 2022 — Non-Final Rejection (§101, §103, §112)
Jul 06, 2022 — Response Filed
Sep 30, 2022 — Final Rejection (§101, §103, §112)
Jan 06, 2023 — Request for Continued Examination
Jan 17, 2023 — Response after Non-Final Action
May 26, 2023 — Non-Final Rejection (§101, §103, §112)
Sep 06, 2023 — Response Filed
Feb 06, 2025 — Final Rejection (§101, §103, §112)
Mar 04, 2025 — Examiner Interview Summary
Mar 04, 2025 — Applicant Interview (Telephonic)
Mar 17, 2025 — Response after Non-Final Action
Apr 21, 2025 — Request for Continued Examination
May 02, 2025 — Response after Non-Final Action
Sep 22, 2025 — Non-Final Rejection (§101, §103, §112)
Dec 03, 2025 — Interview Requested
Dec 18, 2025 — Applicant Interview (Telephonic)
Dec 18, 2025 — Examiner Interview Summary
Dec 19, 2025 — Response Filed
Feb 18, 2026 — Final Rejection (§101, §103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12541685 — SEMI-SUPERVISED LEARNING OF TRAINING GRADIENTS VIA TASK GENERATION — Granted Feb 03, 2026 (2y 5m to grant)
Patent 12455778 — SYSTEMS AND METHODS FOR DATA STREAM SIMULATION — Granted Oct 28, 2025 (2y 5m to grant)
Patent 12236335 — SYSTEM AND METHOD FOR TIME-DEPENDENT MACHINE LEARNING ARCHITECTURE — Granted Feb 25, 2025 (2y 5m to grant)
Patent 12223418 — COMMUNICATING A NEURAL NETWORK FEATURE VECTOR (NNFV) TO A HOST AND RECEIVING BACK A SET OF WEIGHT VALUES FOR A NEURAL NETWORK — Granted Feb 11, 2025 (2y 5m to grant)
Patent 12106207 — NEURAL NETWORK COMPRISING SPINTRONIC RESONATORS — Granted Oct 01, 2024 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 44%
With Interview (+15.6%): 59%
Median Time to Grant: 4y 12m
PTA Risk: High
Based on 317 resolved cases by this examiner. Grant probability derived from career allow rate.
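The headline figures fit together with simple arithmetic. A quick check, assuming (as the footnote suggests) that grant probability is simply the career allow rate and that the interview figure adds the stated lift on top:

```python
# Reproduce the page's headline figures from the underlying counts.
# Assumed methodology: allow rate = granted / resolved, and the
# with-interview figure = allow rate + interview lift.
granted, resolved = 139, 317
allow_rate = granted / resolved                 # ~0.438
interview_lift = 0.156
with_interview = allow_rate + interview_lift    # ~0.594

assert round(allow_rate * 100) == 44            # matches "44% Grant Probability"
assert round(with_interview * 100) == 59        # matches "59% With Interview"
```

Note that an additive lift is only one plausible reading; the page does not state how the interview-adjusted probability is actually computed.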
