Prosecution Insights
Last updated: April 19, 2026
Application No. 19/075,574

DATA QUALITY EVALUATION SYSTEM

Non-Final OA: §101, §103, §DP
Filed
Mar 10, 2025
Examiner
UDDIN, MOHAMMED R
Art Unit
2161
Tech Center
2100 — Computer Architecture & Software
Assignee
Collectivehealth Inc.
OA Round
1 (Non-Final)
78%
Grant Probability
Favorable
1-2
OA Rounds
3y 3m
To Grant
99%
With Interview

Examiner Intelligence

Grants 78% — above average
78%
Career Allow Rate
564 granted / 726 resolved
+22.7% vs TC avg
Strong +31% interview lift
+30.8%
Interview Lift
resolved cases with interview vs. without
Typical timeline
3y 3m
Avg Prosecution
23 currently pending
Career history
749
Total Applications
across all art units

Statute-Specific Performance

§101: 22.4% (-17.6% vs TC avg)
§103: 51.9% (+11.9% vs TC avg)
§102: 5.4% (-34.6% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)
Deltas shown vs. Tech Center average estimate • Based on career data from 726 resolved cases

Office Action

§101 §103 §DP
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . This action is in response to the communication filed on March 10, 2025. Claims 1-20 are examined and are pending. Information Disclosure Statement The information disclosure statement (IDS) submitted on March 10, 2025. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. Double Patenting The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA . A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. 
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claim 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No.12,248,447 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because they are substantially similar in scope and they use the same limitations. Especially, the U.S. Patent No. 12,248,447 discloses more details in data quality evaluation system. Therefore, it would have been obvious to one of ordinary skill in the art to realize that claims 1-20 of the instant application is fully disclosed by the U.S. Patent No.12,248,447. The following table shows the claims in Instant Application that are rejected by corresponding claim(s) in U.S. Patent No.12248447. Instant Application: 19075574 Patent: 12248447 1. A computer-implemented method, comprising: detecting, by a computing system comprising a processor, a data quality issue associated with a data processing system, wherein: the data processing system comprises a pipeline of processing stages, the data quality issue impacts quality of final output generated by the data processing system, and the data quality issue is detected by determining that output of a first processing stage of the pipeline does not correspond with expected output used as input to a second processing stage of the pipeline; generating, by the computing system, data quality results associated with the data quality issue; and generating, by the computing system, and based at least in part on the data quality results, at least one of a data quality scorecard or an anomaly notification. 2. The computer-implemented method of claim 1, wherein the expected output is based on one or more types of data within at least one of: the output of the first processing stage, or input data provided to the first processing stage. 3. The computer-implemented method of claim 1, wherein the expected output is based on amounts of data within at least one of: the output of the first processing stage, or input data provided to the first processing stage. 4. The computer-implemented method of claim 3, wherein the data quality issue is detected by determining that a first number of data records, in the input data, does not correspond to a second number of data records in the output. 5. The computer-implemented method of claim 4, wherein a determination that the first number of data records does not correspond to the second number of data records is based on at least one of a historical pattern or a prediction of a machine learning model indicating an expected number of data records in the output based on the first number of data records in the input data. 6. The computer-implemented method of claim 1, wherein the expected output is based on a processing time taken by the first processing stage to generate the output based on input data provided to the first processing stage. 7. The computer-implemented method of claim 1, further comprising: predicting, by the computing system, using a machine learning model, one or more attributes of the expected output, wherein determining that the output of the first processing stage does not correspond with the expected output comprises determining that the output does not have the one or more attributes of the expected output. 8. 
The computer-implemented method of claim 7, wherein the one or more attributes of the expected output, predicted using the machine learning model, indicates at least one of a type of data or an amount of data likely to be included in the output of the first processing stage. 9. The computer-implemented method of claim 7, wherein the machine learning model is trained based on historical data indicating historical attributes of: historical input to the first processing stage; and historical output, of the first processing stage, that corresponds to the historical input. 10. The computer-implemented method of claim 1, wherein the data quality issue is further detected by determining that input data provided to the data processing system by one or more external sources does not correspond with at least one of historical patterns, a prediction by a machine learning model trained based on historical data, or validation data provided by a validation source different from the one or more external sources. 11. The computer-implemented method of claim 1, wherein: the data processing system is associated with a benefit plan administrator that manages a benefit plan on behalf of a sponsor, the data processing system processes data associated with the benefit plan, and the final output comprises a report associated with the benefit plan that is generated for the sponsor. 12. The computer-implemented method of claim 11, wherein the pipeline of processing stages comprises two or more of: a data import stage configured to obtain data associated with claims, corresponding to the benefit plan, from one or more sources, a claim adjudication stage configured to adjudicate the claims, a billing stage configured to issue bills based on adjudication of the claims, a payment stage configured to make payments based on adjudication of the claims, or a report generation stage that generates, as the final output, reports associated with the claims. 13. A computing system, comprising: one or more processors; and memory storing computer-executable instructions that, when executed by the one or more processors, cause the computing system to perform operations comprising: detecting a data quality issue associated with a data processing system comprising a pipeline of processing stages, wherein: the data quality issue impacts quality of final output generated by the data processing system, and the data quality issue is detected by determining that output of a first processing stage of the pipeline does not correspond with expected output used as input to a second processing stage of the pipeline; generating data quality results associated with the data quality issue; and generating, based at least in part on the data quality results, at least one of a data quality scorecard or an anomaly notification. 14. The computing system of claim 13, wherein the expected output is based on one or more of types of data or amounts of data within at least one of: the output of the first processing stage, or input data provided to the first processing stage. 15. The computing system of claim 13, wherein the expected output is based on a processing time taken by the first processing stage to generate the output based on input data provided to the first processing stage. 16. 
The computing system of claim 13, wherein: the operations further comprise predicting, using a machine learning model, one or more attributes of the expected output, and determining that the output of the first processing stage does not correspond with the expected output comprises determining that the output does not have the one or more attributes of the expected output. 17. One or more non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: detecting a data quality issue associated with a data processing system comprising a pipeline of processing stages, wherein: the data quality issue impacts quality of final output generated by the data processing system, and the data quality issue is detected by determining that output of a first processing stage of the pipeline does not correspond with expected output used as input to a second processing stage of the pipeline; generating data quality results associated with the data quality issue; and generating, based at least in part on the data quality results, at least one of a data quality scorecard or an anomaly notification. 18. The one or more non-transitory computer-readable media of claim 17, wherein the expected output is based on one or more of types of data or amounts of data within at least one of: the output of the first processing stage, or input data provided to the first processing stage. 19. The one or more non-transitory computer-readable media of claim 17, wherein the expected output is based on a processing time taken by the first processing stage to generate the output based on input data provided to the first processing stage. 20. The one or more non-transitory computer-readable media of claim 17, wherein: the operations further comprise predicting, using a machine learning model, one or more attributes of the expected output, and determining that the output of the first processing stage does not correspond with the expected output comprises determining that the output does not have the one or more attributes of the expected output. 1. 
A computer-implemented method, comprising: determining, by one or more processors, and using a data quality evaluation system, an expected amount of data received or processed by a data processing system, wherein: the data processing system generates output based at least in part on instances of the data received from one or more sources, and the data quality evaluation system determines the expected amount of the data based at least in part on validation data received from one or more validation sources different from the one or more sources; determining, by the one or more processors, and using the data quality evaluation system, that an actual amount of the data received or processed by the data processing system differs from the expected amount by more than a threshold degree; detecting, by the one or more processors, using the data quality evaluation system, and based on determining that the actual amount differs from the expected amount by more than the threshold degree, a data quality issue associated with the data received or processed by the data processing system, wherein the data quality issue impacts quality of the output generated by the data processing system; generating, by the one or more processors, and using the data quality evaluation system, data quality results associated with the data quality issue; and generating, by the one or more processors, and using the data quality evaluation system based at least in part on the data quality results, at least one of a data quality scorecard or an anomaly notification. 2. The computer-implemented method of claim 1, wherein the data quality evaluation system further determines the expected amount based at least in part on historical patterns of previous data received or processed by the data processing system. 3. The computer-implemented method of claim 1, further comprising identifying, by the one or more processors, using the data quality evaluation system, and from among a plurality of different sources, a particular source that is causing the actual amount of data to differ from the expected amount by more than the threshold degree. 4. The computer-implemented method of claim 1, wherein: the data processing system comprises a plurality of processing stages arranged in a pipeline, the method further comprises detecting, by the one or more processors, and using the data quality evaluation system, that a particular processing stage of the pipeline that receives input data is failing to provide expected data to a subsequent processing stage of the pipeline, and the data quality evaluation system detects the data quality issue and generates the data quality results based at least on part on detecting that the particular processing stage is failing to provide the expected data to the subsequent processing stage. 5. The computer-implemented method of claim 4, wherein: the data corresponds with claims associated with a benefit plan and a sponsor, and the plurality of processing stages, arranged in the pipeline, comprises: a first processing stage configured to receive the instances of the data from the one or more sources; a second processing stage configured to adjudicate the claims based on the instances of the data; a third processing stage configured to at least one of issue bills or make payments based on adjudication of the claims by the second processing stage; and a fourth processing stage configured to generate, as the output, reports associated with the adjudication of the claims. 6. 
The computer-implemented method of claim 1, wherein the data quality evaluation system comprises a data error detector configured to detect a second data quality issue by identifying data errors within the data upon receipt of the data or during processing of the data by the data processing system. 7. The computer-implemented method of claim 1, wherein the data quality evaluation system further determines the expected amount based at least in part on service level agreements, with the one or more sources, that define deadlines for the one or more sources to provide the data to the data processing system. 8. The computer-implemented method of claim 1, wherein the anomaly notification indicates an issue with a channel by which the data processing system receives the instances of the data from the one or more sources. 9. The computer-implemented method of claim 1, further comprising determining, by the one or more processors, scores for one or more data quality metrics based on the data quality results, wherein the scores are presented in the data quality scorecard. 10. The computer-implemented method of claim 1, further comprising: determining, by the one or more processors, scores for one or more data quality metrics based on the data quality results; determining, by the one or more processors, weights associated with the one or more data quality metrics; and combining, by the one or more processors, the scores into an overall score based on the weights, wherein the overall score is presented in the data quality scorecard. 11. The computer-implemented method of claim 1, wherein: the data processing system is associated with a benefit plan administrator that manages a benefit plan on behalf of a sponsor, the data is associated with the benefit plan, the output comprises a report associated with the benefit plan that is generated for the sponsor, the one or more validation sources comprise at least one of the sponsor or one or more other validation sources, and the validation data is based on information provided by the one or more sources to the one or more validation sources. 12. The computer-implemented method of claim 1, wherein the validation data is based on information provided by the one or more sources to the one or more validation sources. 13. The computer-implemented method of claim 1, wherein the validation data received from the one or more validation sources comprises a different data type than the data received or processed by the data processing system. 14. 
One or more computing devices, comprising: one or more processors; and memory storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: determining, using a data quality evaluation system, an expected amount of data received or processed by a data processing system, wherein: the data processing system generates output based at least in part on instances of the data received from one or more sources, and the data quality evaluation system determines the expected amount of the data based at least in part on validation data received from one or more validation sources different from the one or more sources; determining, using the data quality evaluation system, that an actual amount of the data received or processed by the data processing system differs from the expected amount by more than a threshold degree; detecting, using the data quality evaluation system, and based on determining that the actual amount differs from the expected amount by more than the threshold degree, a data quality issue associated with the data received or processed by the data processing system, wherein the data quality issue impacts quality of the output generated by the data processing system; generating, using the data quality evaluation system, data quality results associated with the data quality issue; and generating, using the data quality evaluation system based at least in part on the data quality results, at least one of a data quality scorecard or an anomaly notification. 15. The one or more computing devices of claim 14, wherein: the data processing system comprises a plurality of processing stages arranged in a pipeline, the data quality evaluation system detects that a particular processing stage of the pipeline that receives input data is failing to provide expected data to a subsequent processing stage of the pipeline, and the data quality evaluation system detects the data quality issue and generates the data quality results based at least in part on detecting that the particular processing stage is failing to provide the expected data to the subsequent processing stage. 16. The one or more computing devices of claim 14, wherein the data quality evaluation system further determines the expected amount based at least in part on service level agreements, with the one or more sources, that define deadlines for the one or more sources to provide the data to the data processing system. 17. The one or more computing devices of claim 14, wherein the anomaly notification indicates an issue with a channel by which the data processing system receives the instances of the data from the one or more sources. 18. 
One or more non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: determining, using a data quality evaluation system, an expected amount of data received or processed by a data processing system, wherein: the data processing system generates output based at least in part on instances of the data received from one or more sources, and the data quality evaluation system determines the expected amount of the data based at least in part on validation data received from one or more validation sources different from the one or more sources; determining, using the data quality evaluation system, that an actual amount of the data received or processed by the data processing system differs from the expected amount by more than a threshold degree; detecting, using the data quality evaluation system, and based on determining that the actual amount differs from the expected amount by more than the threshold degree, a data quality issue associated with the data received or processed by the data processing system, wherein the data quality issue impacts quality of the output generated by the data processing system; generating, using the data quality evaluation system, data quality results associated with the data quality issue; and generating, using the data quality evaluation system based at least in part on the data quality results, at least one of a data quality scorecard or an anomaly notification. 19. The one or more non-transitory computer-readable media of claim 18, wherein: the data processing system comprises a plurality of processing stages arranged in a pipeline, the data quality evaluation system detects that a particular processing stage of the pipeline that receives input data is failing to provide expected data to a subsequent processing stage of the pipeline, and the data quality evaluation system detects the data quality issue and generates the data quality results based at least in part on detecting that the particular processing stage is failing to provide the expected data to the subsequent processing stage. 20. The one or more non-transitory computer-readable media of claim 18, wherein the data quality evaluation system further determines the expected amount based at least in part on service level agreements, with the one or more sources, that define deadlines for the one or more sources to provide the data to the data processing system. “Omission of element and its function in combination is obvious expedient if the remaining elements perform same functions as before.” See In re Karlson (CCPA) 136 USPQ 184, decide Jan 16, 1963, Appl. No. 6857, U.S. Court of Customs and Patent Appeals. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1, 13 and 17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1 This part of the eligibility analysis evaluates whether the claim falls within any statutory category MPEP 2106.03. Step 2A Prong One This part of the eligibility analysis evaluates whether the claim recites a judicial exception. 
As explained in MPEP 2106.04(II) and the October 2019 Update, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. Step 2A Prong 2 This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. 2019 PEG. Step 2B This part of the eligibility analysis evaluates whether the claim as a whole amount to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. MPEP 2106.05. Step 1 Statutory Category: Claims 1-12 are recited as being directed to “a computer implemented method”. Claims 13-16 are recited as being directed to “a system comprising one or more computers …”. Claims 17-20 are recited as being directed to “one or more non-transitory storage medium storing computer executable instruction …” Thus claims 1, 13 and 17 have been identified to be directed towards the appropriate statutory category. Below is further analysis related to step 2. a). In analyzing under step 2A Prong One, Does the claim recite an abstract idea law of nature or natural phenomenon? Yes. Claims 1, 13 and 17 recites, detecting, by a computing system comprising a processor, a data quality issue associated with a data processing system, wherein: the data processing system comprises a pipeline of processing stages, the data quality issue impacts quality of final output generated by the data processing system, and the data quality issue is detected by determining that output of a first processing stage of the pipeline does not correspond with expected output used as input to a second processing stage of the pipeline; generating, by the computing system, data quality results associated with the data quality issue; and generating, by the computing system, and based at least in part on the data quality results, at least one of a data quality scorecard or an anomaly notification. As claim texts drafted by a set of very minimal limitations (or elements) of each of the three claim categories, detecting, by a computing system comprising a processor, a data quality issue … a pipeline of processing stages … generating, data quality results … generating, a data quality scorecard or an anomaly notification, are merely a process that, under its broadest reasonable interpretation, covers mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion), but for the recitation of processing unit, memory and a computer readable medium which are explicitly generic computing components, including: “detecting, by a computing system comprising a processor, a data quality issue associated with a data processing system,” as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a user can detect the quality of data received from various sources using his/her mind by observation and judgement. Therefore, the normalizing limitation is a mental process (including an observation, evaluation, judgment, opinion). 
Similarly, “the data processing system comprises a pipeline of processing stages, the data quality issue impacts quality of final output generated by the data processing system, and the data quality issue is detected by determining that output of a first processing stage of the pipeline does not correspond with expected output used as input to a second processing stage of the pipeline”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a user can process data in a series of steps or pipeline stage and see the impact of output of each stage and how it impacts the input of next stage, using his/her mind or with the aid of pen and paper, by observation and judgement. Therefore, the pipeline of processing stages is a mental process (including an observation, evaluation, judgment, opinion). Similarly, “generating, by the computing system, data quality results associated with the data quality issue; and generating, by the computing system, and based at least in part on the data quality results, at least one of a data quality scorecard or an anomaly notification”, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a user can generate data quality result and determine the quality of data by providing a score or notify administration if it has any anomalous data using his/her mind or with the aid of pen and paper. Therefore, generating data quality result and generating a quality score is a mental process (including an observation, evaluation, judgment, opinion). The claim recites three additional; elements: “data processing system, pipeline processing stage”. The data processing system, pipeline processing stage, these are a form of insignificant extra-solution activity, (see Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information)). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to the abstract idea. b) In analyzing under step 2A Prong Two, Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. This judicial exception is not integrated into a practical application. In particular, the claim only recites additional elements – “system comprising one or more computers”, “one or more non-transitory computer readable storage medium”, and “training machine learning model”. The additional components are generic computer components even being recited as additional limitations, however, do not preclude the claims from reciting an abstract idea. For instance, as the above detailed analysis on the minimal limitations as abstract ideas that can be performed mentally in mind by human, without reciting any “additional element” to integrate the judicial exception into a practical application. The processes of receiving necessities for performing an action and providing indication of completed such that it amounts no more than mere instructions to apply the exception using a generic computer component, processing unit(s), memory and computer readable medium for the processes. That is, the limitations represent well-understood, routine, conventional activity (See MPEP 2106.05(g) or 2106.05(d) for receiving or transmitting data over a network, e.g. 
see Intellectual Ventures v. Symantec; Storing and retrieving information in memory: Versata; Analyzing data: Genetic Techs; Determining: OIP Techs; Electronic recordkeeping: Alice Corp). Accordingly, even considering all the elements as additional elements do not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. As such, the claim is directed to an abstract idea. c) In analyzing under step 2B, does the claim recite additional elements that amount to significantly more than the judicial exception? NO The claim 1, 13 and 17 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, there is simply no additional elements adding to the already analyzed very few minimal steps of performing action. The steps, represent well-understood, routine, conventional activity previously known to the industry and are specified at a high level of generality, and in the context of the limitations reciting performing action that can be practically performed in the human mind and may be considered to fall within the mental process and mathematical concepts groupings. As such, the limitations represent well-understood, routine, conventional activity (See MPEP 2106.05(g) or 2106.05(d) for receiving or transmitting data over a network, e.g. see Intellectual Ventures v. Symantec; Storing and retrieving information in memory: Versata; Analyzing data: Genetic Techs; Determining: OIP Techs; Electronic recordkeeping: Alice Corp). The claims are not patent eligible. Further the limitations in the dependent claims 2-12, 14-16 and 18-20 are an extension of the abstract idea of claim 1, 13 and 17 above. Claim 2 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 2 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of wherein the expected output is based on one or more types of data within at least one of: the output of the first processing stage, or input data provided to the first processing stage, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, user can see and expect the output or input of the data quality result according to the data type, by evaluation and observation, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 3 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 3 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of wherein the expected output is based on amounts of data within at least one of: the output of the first processing stage, or input data provided to the first processing stage, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, user can imagine the expected output based on amount of data as an input or output, by evaluation and observation, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. 
Claim 4 is dependent on claim 3 and includes all the limitations of claim 1 and 3. Therefore, claim 4 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of wherein the data quality issue is detected by determining that a first number of data records, in the input data, does not correspond to a second number of data records in the output, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, user can imagine the number of input data and number of output data and determine data quality accordingly, by evaluation and observation, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 5 is dependent on claim 4 and includes all the limitations of claim 1, 3 and 4. Therefore, claim 4 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of wherein the data quality issue is detected by determining that a first number of data records, in the input data, does not correspond to a second number of data records in the output, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, user can imagine the number of input data and number of output data and determine data quality accordingly, by evaluation and observation, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 6 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 6 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of wherein the expected output is based on a processing time taken by the first processing stage to generate the output based on input data provided to the first processing stage, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, user can imagine how much time it takes to process data in the first stage, by evaluation and observation, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 7 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 6 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of predicting, by the computing system, using a machine learning model, one or more attributes of the expected output, wherein determining that the output of the first processing stage does not correspond with the expected output comprises determining that the output does not have the one or more attributes of the expected output, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a user can predict expected output, by judgment and observation, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 7 is dependent on claim 1 and includes all the limitations of claim 1. 
Therefore, claim 6 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of predicting, by the computing system, using a machine learning model, one or more attributes of the expected output, wherein determining that the output of the first processing stage does not correspond with the expected output comprises determining that the output does not have the one or more attributes of the expected output, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a user can predict expected output, by judgment and observation, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 8 is dependent on claim 7 and includes all the limitations of claim 1 and 7. Therefore, claim 8 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of wherein the one or more attributes of the expected output, predicted using the machine learning model, indicates at least one of a type of data or an amount of data likely to be included in the output of the first processing stage, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a user can predict expected output and indicate type of data or amount of data likely to include in output, by evaluation and judgment, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 9 is dependent on claim 7 and includes all the limitations of claim 1, 7 and 8. Therefore, claim 9 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of wherein the machine learning model is trained based on historical data indicating historical attributes of: historical input to the first processing stage; and historical output, of the first processing stage, that corresponds to the historical input, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a user can train a machine learning model using the historical input or historical output, by evaluation and judgment, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 10 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 10 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of wherein the data quality issue is further detected by determining that input data provided to the data processing system by one or more external sources does not correspond with at least one of historical patterns, a prediction by a machine learning model trained based on historical data, or validation data provided by a validation source different from the one or more external sources, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. 
For example, a user can determine the data quality issue by looking at historical patter or predicting, by evaluation and judgment, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 11 is dependent on claim 1 and includes all the limitations of claim 1. Therefore, claim 11 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of the data processing system is associated with a benefit plan administrator that manages a benefit plan on behalf of a sponsor, the data processing system processes data associated with the benefit plan, and the final output comprises a report associated with the benefit plan that is generated for the sponsor., as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a user can determine the data source such as data from benefit plan and process data to create a report by evaluation and observation, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claim 12 is dependent on claim 11 and includes all the limitations of claim 1 and 11. Therefore, claim 12 recites the same abstract idea of detecting data quality issue and generating data quality result with a scorecard or anomaly notification. The claim recites the additional limitations of the pipeline of processing stages comprises two or more of: a data import stage configured to obtain data associated with claims, corresponding to the benefit plan, from one or more sources, a claim adjudication stage configured to adjudicate the claims, a billing stage configured to issue bills based on adjudication of the claims, a payment stage configured to make payments based on adjudication of the claims, or a report generation stage that generates, as the final output, reports associated with the claims, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, a user can determine different processing stage for different data form different source such as data from billing, claim etc., and process data to create a report by evaluation and observation, using his/her mind. Hence, the limitation can be performed in human mind which is a mental process. Claims 14-16 recites similar limitation of claim 2 and 6-7 respectively and rejected for the same reason set forth to the rejection of claims 2 and 6-7 above, Claims 18-20 recites similar limitation of claim 2 and 6-7 respectively and rejected for the same reason set forth to the rejection of claims 2 and 6-7 above, Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-10 and 12-20 are rejected under 35 U.S.C. 
103 as being unpatentable over Haile (US 11,429,614 B2), in view of Hagenbuch et al (US 2020/0118653 A1). As per claim 1, Haile discloses: - a computer-implemented method, comprising: (Abstract, line 1-10, system and method for data quality monitoring are provided), - detecting, by a computing system comprising a processor, a data quality issue associated with a data processing system, wherein (Fig. 1, item 130, 135, column 2, line 8-25, monitoring data quality during data processing), - the data processing system comprises a pipeline of processing stages (Column 4, line 45-65, Fig. 1, item 110, data processing through processing of pipelines), - the data quality issue impacts quality of final output generated by the data processing system (column 2, line 30-50, differences in data type, data values (i.e., data quality issues) cause error (i.e., impact) on expected output (i.e., final output), Fig. 1-2), - the data quality issue is detected by determining that output of a first processing stage of the pipeline does not correspond with expected output (Fig. 1-3, column 9, line 15-55, data quality determines output of pipeline processing align (i.e., corresponding with) expected output), - generating, by the computing system, data quality results associated with the data quality issue (column 2, line 40-55, column 9, line 45-65, generating report with data quality such as error in data, Fig. 5, item 550, Fig. 1, item 145, 147), - generating, by the computing system, and based at least in part on the data quality results, at least one of a data quality scorecard or an anomaly notification (column 5, line 15-25, column 6, line 60-67, the data monitoring system can provide quality scoring that enables analysts to annotate or document confidence in their reports and visualizations of the data presented to the end-user), Haile does not explicitly disclose output used as input to a second processing stage of the pipeline. However, in the same field of endeavor Hagenbuch in an analogous art disclose output used as input to a second processing stage (Para [0008], [0046], output of one element in a data processing pipeline series (i.e., stage) is the input of next element in the pipeline series (i.e., stage), or anomaly notification, column 5, line 50-56). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the output of one stage of a pipeline is used as input of next stage of data processing pipeline taught by Hagenbuch as the means to process data in a series of data processing stage in a pipeline to determine the data quality issue in Haile, (Haile, column 2, line 8-25, column 9, line 15-55, Hagenbuch, Para [0008], [0046]). Haile and Hagenbuch are analogous prior art since they both deal with pipeline processing for data quality issue. A person of the ordinary skill in the art would have been motivated to make aforementioned modification to accurately align the output of data processing pipeline. This is because one aspect of Haile invention is to accurately detect different type of data type to process the data in the data processing pipeline for data quality issue as described at least in column 1, line 45-55]. Output of one processing stage is the input of next or another stage is part of this detecting data quality issue. However, Haile doesn’t specify any particular manner in output of one processing stage is used as an input of next or another stage in the data processing pipeline. 
This would have lead one of the ordinary skill in the art to seek and recognize the output of one processing stage is the input of next or another stage as taught by Hagenbuch. Hagenbuch describes how their data quality checking system analyzer data for consistency and accurately to process in the next stage of the pipeline as described at least in Para [0065], as desired by Haile. As per claim 2, rejection of claim 1 is incorporated, and further Haile discloses: - wherein the expected output is based on one or more types of data within at least one of: the output of the first processing stage, or input data provided to the first processing stage (Fig. 3, column 2, line 15-25, 35-55, expected output is based on type of data output of unprocessed data in first phase of data pipeline). As per claim 3, rejection of claim 1 is incorporated, and further Hagenbuch discloses: - wherein the expected output is based on amounts of data within at least one of: the output of the first processing stage, or input data provided to the first processing stage (Para [0023] [0042], output based on volume of data (i.e. amount of data), Para [0008], [0046], output of one element in a data processing pipeline series (i.e., stage) is the input of next element in the pipeline series (i.e., stage)). As per claim 4, rejection of claim 3 is incorporated, and further Hagenbuch discloses: - wherein the data quality issue is detected by determining that a first number of data records, in the input data, does not correspond to a second number of data records in the output (Para [0065], [0070] data quality determined by checking input record does not correspond to the expected value). As per claim 5, rejection of claim 4 is incorporated, and further Hagenbuch discloses: - wherein a determination that the first number of data records does not correspond to the second number of data records is based on at least one of a historical pattern or a prediction of a machine learning model indicating an expected number of data records in the output based on the first number of data records in the input data ([0063], [0065], determining first and second records not correspond based on previous trend (i.e., historical pattern). As per claim 6, rejection of claim 1 is incorporated, and further Hagenbuch discloses: - wherein the expected output is based on a processing time taken by the first processing stage to generate the output based on input data provided to the first processing stage (Para [0066], [0073], data processing pipeline for data quality over time) As per claim 7, rejection of claim 1 is incorporated, and further Haile discloses: - predicting, by the computing system, using a machine learning model, one or more attributes of the expected output (column 2, line 15-20, column 12, line 25-35, attribute of the expected output predicted using machine learning model, column 5, line 35-40), - wherein determining that the output of the first processing stage does not correspond with the expected output comprises determining that the output does not have the one or more attributes of the expected output (column 12, line 25-45, determine output does not have attribute to align with expected output). 
As per claim 8, rejection of claim 7 is incorporated, and further Haile discloses: - wherein the one or more attributes of the expected output, predicted using the machine learning model, indicates at least one of a type of data or an amount of data likely to be included in the output of the first processing stage (column 2, line 20-30, column 5, line 20-25, 55-65, column 7, line 25-35, attribute or metadata of expected output included in the output of pipeline).

As per claim 9, rejection of claim 7 is incorporated, and further Haile discloses: - wherein the machine learning model is trained based on historical data indicating historical attributes of: historical input to the first processing stage; and historical output, of the first processing stage, that corresponds to the historical input (column 5, line 5-10, column 10, line 35-55, historical input and historical output).

As per claim 10, rejection of claim 1 is incorporated, and further Haile discloses: - wherein the data quality issue is further detected by determining that input data provided to the data processing system by one or more external sources does not correspond with at least one of historical patterns, a prediction by a machine learning model trained based on historical data, or validation data provided by a validation source different from the one or more external sources (column 5, line 15-20, column 11, line 60-65, data quality issue determined by a data source validation).

As per claim 12, rejection of claim 11 is incorporated, and further Hagenbuch discloses: - wherein the pipeline of processing stages comprises two or more of: a data import stage configured to obtain data associated with claims, corresponding to the benefit plan, from one or more sources, a claim adjudication stage configured to adjudicate the claims, a billing stage configured to issue bills based on adjudication of the claims, a payment stage configured to make payments based on adjudication of the claims, or a report generation stage that generates, as the final output, reports associated with the claims (Para [0023], [0073], pipeline processing stages include a billing stage and a report stage).

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Haile (US 2021/0248144 A1), in view of Hagenbuch et al (US 2020/0118653 A1), as applied to claims 1, 12 and 17 above, and further in view of the Background of the invention.

As per claim 11, rejection of claim 1 is incorporated. The combined method of Haile and Hagenbuch does not explicitly disclose wherein: the data processing system is associated with a benefit plan administrator that manages a benefit plan on behalf of a sponsor, the data processing system processes data associated with the benefit plan, and the final output comprises a report associated with the benefit plan that is generated for the sponsor.
However, applicant’s own background of the invention teaches wherein: the data processing system is associated with a benefit plan administrator that manages a benefit plan on behalf of a sponsor (background, Para [0004], line 1-2), the data processing system processes data associated with the benefit plan, and the final output comprises a report associated with the benefit plan that is generated for the sponsor (background, Para [0004], line 6-7). Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Haile, as previously modified with Iyengar, with the teaching of applicant’s own Background by modifying Haile such that data quality issues are for care providers to define the accuracy and validity of claims. The motivation for doing so would be a data quality analysis technique that incrementally computes DQMs for new and updated data and provides a level of computing resource efficiency (Iyengar, column 6, line 55-65).

As per claims 13-16, claims 13-16 are system claims corresponding to method claims 1-2 and 6-7, respectively, and are rejected for the same reasons set forth in the rejection of claims 1-2 and 6-7 above.

As per claims 17-20, claims 17-20 are computer readable medium claims corresponding to method claims 1-2 and 6-7, respectively, and are rejected for the same reasons set forth in the rejection of claims 1-2 and 6-7 above.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED R UDDIN whose telephone number is (571) 270-3138. The examiner can normally be reached M-F: 9:00 AM-5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Apu Mofiz, can be reached at 571-272-4080. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMED R UDDIN/
Primary Examiner, Art Unit 2161
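For orientation on the rejected subject matter: the independent claims describe checking a pipeline stage's actual output against an expected output (derived from historical patterns, a machine learning prediction, or validation data) and raising a scorecard entry or anomaly notification when the deviation exceeds a threshold. The sketch below is a minimal illustration of that pattern only; it is not the applicant's implementation, and all names, ratios, weights, and thresholds are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical names throughout; illustrative only, not the claimed system's code.

@dataclass
class StageResult:
    stage: str
    records_in: int
    records_out: int

def expected_output_count(records_in: int, historical_ratio: float) -> float:
    """Expected records out of a stage, from a historical in/out ratio
    (claim 5 alternatively allows a machine-learning prediction)."""
    return records_in * historical_ratio

def detect_quality_issue(result: StageResult, historical_ratio: float,
                         threshold: float = 0.05) -> dict:
    """Flag a data quality issue when a stage's actual output differs from
    the expected output by more than a threshold degree."""
    expected = expected_output_count(result.records_in, historical_ratio)
    deviation = abs(result.records_out - expected) / max(expected, 1.0)
    return {
        "stage": result.stage,
        "expected": round(expected),
        "actual": result.records_out,
        "deviation": round(deviation, 3),
        "issue_detected": deviation > threshold,
    }

def scorecard(results: list[dict], weights: dict[str, float]) -> float:
    """Combine per-metric pass/fail scores into a weighted overall score (cf. claim 10)."""
    total_w = sum(weights.get(r["stage"], 1.0) for r in results)
    score = sum(weights.get(r["stage"], 1.0) * (0.0 if r["issue_detected"] else 1.0)
                for r in results)
    return score / total_w if total_w else 0.0

if __name__ == "__main__":
    # An adjudication stage that historically emits ~0.95 output records per
    # input record, but drops far more in this run: flagged as an anomaly.
    checks = [
        detect_quality_issue(StageResult("import", 10_000, 10_000), 1.0),
        detect_quality_issue(StageResult("adjudication", 10_000, 8_200), 0.95),
    ]
    for c in checks:
        if c["issue_detected"]:
            print(f"ANOMALY NOTIFICATION: {c}")
    print("Overall data quality score:", scorecard(checks, {"adjudication": 2.0}))
```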

Prosecution Timeline

Mar 10, 2025
Application Filed
Jan 24, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this examiner with similar technology

Patent 12602432
SUMMARY GENERATION FOR A DISTRIBUTED GRAPH DATABASE
2y 5m to grant • Granted Apr 14, 2026
Patent 12596676
RECORDS RETENTION MANAGEMENT
2y 5m to grant • Granted Apr 07, 2026
Patent 12596960
MISUSE INDEX FOR EXPLAINABLE ARTIFICIAL INTELLIGENCE IN COMPUTING ENVIRONMENTS
2y 5m to grant • Granted Apr 07, 2026
Patent 12585890
System and Method for Image Generation Using Neuroscience-Inspired Prompt Strategy
2y 5m to grant • Granted Mar 24, 2026
Patent 12566800
EFFICIENT AND SCALABLE DATA PROCESSING AND MODELING
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
99%
With Interview (+30.8%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 726 resolved cases by this examiner. Grant probability derived from career allow rate.
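The projection figures follow from the career counts shown in the Examiner Intelligence section above. A short sketch of that arithmetic; the with-interview adjustment is assumed here to be a simple additive lift capped at 99%, since the dashboard does not state its formula:

```python
# Reproducing the headline figures from the examiner's career data shown above.
# The with-interview adjustment is an assumption, not a documented formula.

granted, resolved = 564, 726          # career totals for this examiner
allow_rate = granted / resolved       # 0.777 -> reported as 78%

interview_lift = 0.308                # +30.8 points observed with an interview
with_interview = min(allow_rate + interview_lift, 0.99)   # capped -> 99%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Estimated with interview: {with_interview:.0%}")
```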
