DETAILED ACTION
1. The present application is being examined under the pre-AIA first to invent provisions.
2. This Office Action is in response to the REM filed on 11/04/2025.
3. Status of Claims: Claims 7-26 are pending in this Office Action.
4. Claims 7, 17 and 21 are independent claims.
5. This action is made Final.
Double Patenting
6. Claims 7-26 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims (1 and 2), (3,4), 1, 4, 1, 27, 28, 26, 9, 12, (1 and 2), 15, 1, 26, 17, 18, 9, 12, 13 and 20 of U.S. Patent No. 11,593,326. Although the claims at issue are not identical, they are not patentably distinct from each other.
Applicant respectfully requests that the instant nonstatutory double patenting rejection be held in abeyance until the claimed invention is deemed allowable and the claims are no longer subject to amendment.
Claim Rejections – 35 USC § 101
7. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
8. Claims 7-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim Interpretation: Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. See MPEP 2111.
Claim 7 recites the steps:
Step (a) recites “assessing, …, a metadata file for completion of fields defined by a standard.” The limitation does not place any limits on how the metadata file is assessed. The plain meaning and the broadest reasonable interpretation of “assessing” encompass analyzing and evaluating, and the plain meaning of “metadata” is consistent with the specification at [0003] as something that “provides information about one or more aspects of data.” Paragraph [0007] likewise explicitly indicates that “evaluating … metadata,” including for “completeness,” can be done by human review, i.e., a mental process.
Step (b) recites “evaluating the metadata file against evaluation criteria, wherein the evaluation criteria is a collection of algorithms for evaluating data in the fields for compliance with a plurality of rules of the standard, wherein the metadata file includes at least a first data in a first field and a second data in a second field.” Although the claim specifies that the evaluation is performed using a collection of algorithms, under its broadest reasonable interpretation the “evaluating” encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion.
Step (c) recites “wherein the metadata file includes at least a first data in a first field and a second data in a second field, wherein the evaluation includes determining whether the second data that is dependent on the first data complies with at least one of the plurality of rules of the standard, according to the evaluation criteria.” This step encompasses performing evaluation, judgment, and opinion to make a determination about the assessment of the metadata file. Under its broadest reasonable interpretation when read in light of the specification, the determining likewise encompasses mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion.
Step (d) recites “calculating at least one score for the metadata file based upon the assessment of the completion of required fields and the evaluation of the metadata file against the evaluation criteria of the plurality of rules of the standard.” This step recites a mathematical operation for calculating scores based on the evaluations, without placing any limitation on how the calculation is performed. Under its broadest reasonable interpretation, the “calculating” encompasses mental processes practically performed in the human mind.
Step (e) recites “presenting, …, a report of results of the assessment and the evaluation in a graphical user interface, the report including the at least one score for the metadata file.” This step encompasses displaying the result of the evaluation.
Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03.
The claim recites the steps or acts of assessing, evaluating, calculating, and presenting, and thus is a process (a series of steps or acts). A process is a statutory category of invention. (Step 1: YES).
Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim.
Steps (a)-(d) encompass mental processes practically performed in the human mind by observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.
That is, other than reciting “by at least one processor,” nothing in the claim element precludes the step from practically being performed in the mind. For example, but for the “by at least one processor” language, “assessing” and “evaluating” in the context of this claim encompass a user manually assessing a metadata file and evaluating its fields with their own eyes to see whether they comply with the rules of the standard, and then correcting the errors and warnings. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. (Step 2A, Prong One: YES).
Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).
The claim recites the additional element of step (e), “presenting, …, a report of results of the assessment and the evaluation in a graphical user interface, the report including the at least one score for the metadata file.” This limitation is mere data outputting recited at a high level of generality, and thus is insignificant extra-solution activity (post-solution activity of outputting data). See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exception require such data outputting, and, as such, this limitation does not impose any meaningful limits on the claim. It amounts to necessary data outputting. See MPEP 2106.05.
Further, the limitations are recited as being performed by a processor and using “a graphical user interface” to present the results of the evaluation. The processor and graphical user interface are recited at a high level of generality and are used as tools to perform the abstract idea, as discussed above in Step 2A, Prong One, such that they amount to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f).
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application (Step 2A, Prong Two: NO), and the claim is directed to the judicial exception. (Step 2A: YES).
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05.
As explained with respect to Step 2A, Prong Two, the additional element of step (e), which recites “presenting, …, a report of results of the assessment and the evaluation in a graphical user interface, the report including the at least one score for the metadata file,” was found to be insignificant extra-solution activity because it amounts to necessary data outputting.
However, a conclusion that an additional element is insignificant extra-solution activity in Step 2A, Prong Two should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g).
The data outputting activity in limitation (e) is recited at a high level of generality and is well-understood, routine, and conventional activity, as has been recognized by the courts.
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept. (Step 2B: NO). The claim is not eligible.
Claim 17 recites the same steps as claim 7, implemented on a system such that they are executable on a processor. The invention described by those steps is directed to an abstract idea for the reasons explained above.
Claim 21 recites the same steps as claim 7, stored on a non-transitory computer readable medium such that they are executable on a processor. The invention described by those steps is directed to an abstract idea for the reasons explained above.
The dependent claims merely incorporate additional elements that narrow the abstract idea without yielding an improvement to any technical field or to the computer itself, and without limitations beyond merely linking the idea to a particular technological environment. The steps in the dependent claims as drafted, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. For example, but for the “by a processor” language, “calculating a score” in the context of these claims encompasses a user assigning a score or a weight to the metadata file based on their own evaluation, using their own eyes. Accordingly, the dependent claims fall within the “Mental Processes” grouping of abstract ideas:
Claims 8, 18, 22: receiving, in the graphical user interface, an edit to the first data or the second data; and adjusting the at least one score for the metadata file to reflect the edit to the first data or the second data (insignificant pre-solution data gathering).
Claims 9, 23: presenting a suggested improvement to the first data or the second data corresponding to one or more data errors in the report of the results of the assessment and the evaluation in the graphical user interface, wherein the suggested improvements include an explanation of the suggested improvement to the first data or the second data; receiving, in the graphical user interface an edit to the first data or the second data (falls within the “Mental Processes” grouping of abstract ideas, one can manually explain in writing and suggest improvement).
Claim 10: automatically making changes to the first data or the second data according to the suggested improvements (falls within the “Mental Processes” grouping of abstract ideas, one can manually make changes to data).
Claims 11, 19: receiving a selection of one or more data errors in the report of the results of the evaluation in the graphical user interface; and presenting a subset of the one or more data errors resulting from the selection of the one or more data errors (falls within the “Mental Processes” grouping of abstract ideas).
Claims 12, 24: after the presenting the results of the assessment and the evaluation, receiving a selection of one or more data errors; applying an update to correct a selected error; adjusting the at least one score for the metadata file to reflect the correction of the selected error (falls within the “Mental Processes” grouping of abstract ideas).
Claims 13, 25: applying the update to a plurality of records having the selected error to correct the selected error across the plurality of records (falls within the “Mental Processes” grouping of abstract ideas).
Claims 14, 20, 26: after the presenting the results of the assessment and the evaluation, receiving inputs to navigate, sort, or filter to identify a subset of errors (falls within the “Mental Processes” grouping of abstract ideas).
Claim 15: wherein the at least one score includes a completeness component as judged by a percentage of the fields that include the data, and a quality component as judged by the evaluation of the metadata file against the evaluation criteria (falls within the “Mental Processes” grouping of abstract ideas).
Claim 16: wherein the metadata file includes the data pertaining to a publication (as per MPEP 2106.05(h) merely tying the abstract idea to a field of use. In particular, the data “pertaining to a publication” merely ties the abstract idea of metadata assessment, evaluation, and scoring to the field of publications such as books, but that does not render the claim eligible.).
Examiner Note
9. The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.
Claim Rejections - 35 USC § 103
10. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
11. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.
12. Claims 7-26 are rejected under 35 U.S.C. 103 as being unpatentable over Gitai et al. (US 20090171991 A1), hereinafter Gitai, in view of Liolios et al., hereinafter Liolios.
Liolios was cited in the IDS received 08/24/2023, NPL, Row 6.
Liolios et al., “The Metadata Coverage Index (MCI): A standardized metric for quantifying database metadata richness,” Standards in Genomic Sciences, 2012, 6:444-453.
13. Regarding claim 7, Gitai teaches a method comprising:
assessing, by at least one processor, a metadata file for completion of fields defined by a standard ([0022], [0026], [0027], [0029], Fig 1, [0035-0036], “Dependency Check (a standard) of the metadata file 100”, [0038], [0041], “A pattern analysis (a standard) may be used to determine how many different data patterns exist for data in a given field in the repository. A large number of patterns may indicate a problem with data validity or accuracy, particularly for highly formatted data fields, such as phone numbers, as shown for example in Report 600.”, [0024-0025], [0035], “a business rule (a standard) is often a more efficient description of the relationship of the data… a rule might be developed to ensure that when California is selected that the country is always set to the USA”, [0043], “To evaluate the efficiency of the data schema, one approach in keeping with the present invention is to apply the business rules of the repository strictly to the data itself… Data inaccuracy and invalidity can be highlighted by identifying data that violates the repository's business rules. Both dependency checks and redundancy checks, described elsewhere herein, may be used to identify business rule violations and find inefficiencies in the data schema.”, [0052], “analysis of the business rules of the database.”));
Examiner interpretation: Evaluating data fields for compliance with multiple rules of a standard involves using specific data quality dimensions as evaluation criteria, such as validity, accuracy, completeness, and consistency. Each criterion must be mapped to the specific rules of the applicable standard (e.g., GDPR, HIPAA).
evaluating the metadata file against evaluation criteria, wherein the evaluation criteria is a collection of algorithms for evaluating data in the fields for compliance with a plurality of rules of the standard ([0022], [0026-0027], “statistical check is to evaluate the lookup fields of the table or tables. FIG. 2 illustrates the evaluation of a table's lookup data field usage count and rate and unused lookup count.”, [0029], Fig 1, [0035], “Dependency Check”, [0036] “Data dependency analysis can also be used to detect and correct errors in data.... A dependency checks as described herein may also be used to find incorrect values, such as the entry that indicates the city of "City" (four entries) and one with the state of "Stockholm Lan" (one entry). All five of these entries clearly represent invalid data and need correction”, [0038], [0041], “A pattern analysis may be used to determine how many different data patterns exist for data in a given field in the repository. A large number of patterns may indicate a problem with data validity or accuracy, particularly for highly formatted data fields, such as phone numbers, as shown for example in Report 600.”, [0043] “To evaluate the efficiency of the data schema, one approach in keeping with the present invention is to apply the business rules of the repository strictly to the data itself”, [0044], “Taxonomy evaluation”.), wherein the metadata file includes at least a first data in a first field and a second data in a second field, wherein the evaluation includes determining whether the second data that is dependent on the first data complies with at least one of the plurality of rules of the standard, according to the evaluation criteria (Fig 1, [0035]-[0036], “evaluating the dependency between two columns or between two pairs of fields in a data repository. Where a 100% dependency exists, a business rule is often a more efficient description of the relationship of the data”, see also Fig 3 & 4 and [0038]).
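For clarity of the record, the kind of field-dependency rule Gitai describes at [0035] (e.g., a state of “California” requires a country of “USA”) can be sketched as follows. This is an illustrative sketch only, not code from either reference; the rule table and field names are hypothetical examples.

```python
# Illustrative only: a dependency check of the kind Gitai describes, where a
# second field's value must be consistent with a first field's value.
# The rules below are hypothetical examples, not taken from the reference.
DEPENDENCY_RULES = {
    # (first field, first value) -> (second field, required second value)
    ("state", "California"): ("country", "USA"),
    ("state", "Stockholm Lan"): ("country", "Sweden"),
}

def check_dependencies(record: dict) -> list[str]:
    """Return a list of dependency-rule violations found in one record."""
    violations = []
    for (first_field, first_value), (second_field, required) in DEPENDENCY_RULES.items():
        # A rule applies only when the first field holds the triggering value.
        if record.get(first_field) == first_value and record.get(second_field) != required:
            violations.append(
                f"{second_field}={record.get(second_field)!r} conflicts with "
                f"{first_field}={first_value!r} (expected {required!r})"
            )
    return violations
```

For example, a record with state “California” and country “Canada” would produce one violation, while a record satisfying the rule produces none.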
Gitai did not specifically teach calculating at least one score for the metadata file based upon the assessment of the completion of required fields and the evaluation of the metadata file against the evaluation criteria of the plurality of rules of the standard; presenting, by the at least one processor, a report of results of the assessment and the evaluation in a graphical user interface, the report including the at least one score for the metadata file.
However, Liolios teaches calculating at least one score for the metadata file based upon the assessment of the completion of required fields and the evaluation of the metadata file against the evaluation criteria of the plurality of rules of the standard; presenting, by the at least one processor, a report of results of the assessment and the evaluation in a graphical user interface, the report including the at least one score for the metadata file (page 1, “MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest).”, page 2, “MCI scores were calculated for each of the above collections as the total number of filled fields expressed as a percentage of the total fields available across all records. Scores were also calculated for individual records and for each field (i.e., each variable or column header in a spreadsheet).”, “Calculating MCI scores and comparison of metadata field, see also Fig 1, “Schematic representation of the MCI calculation procedure”, Table 1&2, Page 8, “MCI scores could be used for judging compliance with a given standard”).
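The MCI calculation Liolios describes, the number of filled fields expressed as a percentage of the total fields available across all records, can be sketched as follows. This is an illustrative sketch only; the record structure and field names are hypothetical, not taken from the reference.

```python
# Illustrative only: MCI as Liolios describes it -- filled fields as a
# percentage of all available (record, field) slots.
def mci_score(records: list[dict], fields: list[str]) -> float:
    """Return the percentage of field slots that are filled across all records."""
    total = len(records) * len(fields)
    if total == 0:
        return 0.0
    # Count a slot as filled when the field is present and non-empty.
    filled = sum(
        1
        for record in records
        for field in fields
        if record.get(field) not in (None, "")
    )
    return 100.0 * filled / total
```

For example, two records over the fields “title” and “author,” with three of the four slots filled, yield a score of 75.0.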
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Liolios into Gitai because both systems are related to managing metadata, and scoring metadata richness “enables filtering that supports downstream analysis” (Liolios, Abstract).
14. Regarding claim 8, Gitai and Liolios teach the invention as claimed in claim 7 above and Liolios further teaches receiving, in the graphical user interface, an edit to the first data or the second data; and adjusting the at least one score for the metadata file to reflect the edit to the first data or the second data (page 8, “MCI scores will ideally be used to make targeted improvements to databases over time. They could also be used over time to track the evolution of databases and their contents, for example, to signal significant updates in content even when the total number of entries remains the same, to report progress to funders, or to reward the work of curators who contribute the relevant information”, “MCI scores could be further refined in several ways, MCI scores could be used for judging compliance with a given standard. MCI scores could also be broken down to cover ‘required’ and ‘optional’ fields separately. Further refinement of MCI scores would require more thorough validation of metadata, making maximum use of mappings between minimal information requirements, recommended terminologies and any formats used.”).
15. Regarding claim 9, Gitai and Liolios teach the invention as claimed in claim 7 above and Gitai further teaches presenting a suggested improvement to the first data or the second data corresponding to one or more data errors in the report of the results of the assessment and the evaluation in the graphical user interface, wherein the suggested improvement includes an explanation of the suggested improvement to the first data or the second data; receiving, in the graphical user interface, an edit to the first data or the second data (Fig 4, issue description/How to resolve, [0031]).
Also, Liolios further teaches the limitation at (page 2, “highlight challenging-to acquire components of specifications or to quantify improvements in metadata reporting or database content (for example, through curation).”, page 6, “Improvements in MCI scores over time”, page 8, “MCI scores will ideally be used to make targeted improvements to databases over time. They could also be used over time to track the evolution of databases and their contents, for example, to signal significant updates in content even when the total number of entries remains the same, to report progress to funders, or to reward the work of curators who contribute the relevant information.”).
16. Regarding claim 10, Gitai and Liolios teach the invention as claimed in claim 9 above and Gitai further teaches automatically making changes to the first data or the second data according to the suggested improvements (Fig 4, issue description/How to resolve, [0031]).
Also, Liolios further teaches the limitation at (page 8, “MCI scores will ideally be used to make targeted improvements to databases over time. They could also be used over time to track the evolution of databases and their contents, for example, to signal significant updates in content even when the total number of entries remains the same, to report progress to funders, or to reward the work of curators who contribute the relevant information”, “MCI scores could be further refined in several ways, MCI scores could be used for judging compliance with a given standard. MCI scores could also be broken down to cover ‘required’ and ‘optional’ fields separately. Further refinement of MCI scores would require more thorough validation of metadata, making maximum use of mappings between minimal information requirements, recommended terminologies and any formats used.”).
17. Regarding claim 11, Gitai and Liolios teach the invention as claimed in claim 7 above and Gitai further teaches receiving a selection of one or more data errors in the report of the results of the evaluation in the graphical user interface ([0010], [0026], Fig 4, [0031], [0036], [0040], [0045], Fig 5 & 7, the tables include an arrow facing down to sort and group errors by type).
18. Regarding claim 12, Gitai and Liolios teach the invention as claimed in claim 7 above and Gitai further teaches after presenting the results of the assessment and the evaluation, receiving a selection of one or more data errors; applying an update to correct a selected error, and adjusting the at least one score for the metadata file to reflect the correction of the selected error ([0010], [0026], Fig 4, [0031], [0036], [0040], [0045], Fig 5 & 7, the tables include an arrow facing down to sort and group errors by type).
Also, Liolios further teaches the limitation adjusting the at least one score for the metadata file to reflect the correction of the selected error (page 2, “highlight challenging-to acquire components of specifications or to quantify improvements in metadata reporting or database content (for example, through curation).”, page 6, “Improvements in MCI scores over time”, page 8, “MCI scores will ideally be used to make targeted improvements to databases over time. They could also be used over time to track the evolution of databases and their contents, for example, to signal significant updates in content even when the total number of entries remains the same, to report progress to funders, or to reward the work of curators who contribute the relevant information.”).
19. Regarding claim 13, Gitai and Liolios teach the invention as claimed in claim 12 above and Gitai further teaches wherein the applying the update to correct the selected error includes applying the update to a plurality of records having the selected error to correct the selected error across the plurality of records ([0010], [0026], Fig 4, [0031], [0036], [0040], [0045], Fig 5 & 7, the tables include an arrow facing down to sort and group errors by type).
20. Regarding claim 14, Gitai and Liolios teach the invention as claimed in claim 7 above and Gitai further teaches after presenting the results of the assessment and the evaluation, receiving inputs to navigate, sort, or filter to identify a subset of errors ([0010], [0026], Fig 4, [0031], [0036], [0040], [0045], Fig 5 & 7, the tables include an arrow facing down to sort and group errors by type).
Also, Liolios further teaches the limitation at (page 1, “The scoring of records on the richness of their description provides a simple, objective proxy measure for quality that enables filtering that supports downstream analysis. Pivotally, such descriptions should spur on improvements…MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example; to filter, rank or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics of funded projects; or to quantify the value added by curation.”).
21. Regarding claim 15, Gitai and Liolios teach the invention as claimed in claim 7 above and Gitai further teaches wherein the at least one score includes a completeness component as judged by a percentage of the fields that include the data, and a quality component as judged by the evaluation of the metadata file against the evaluation criteria (Fig 4 (Issues description, How to resolve), [0031], FIG. 4 illustrates the type of Report 400 that may be shown to indicate that issues exist regarding the use of text fields and suggests methods by which schema inefficiency, verification failure, or data accuracy errors may be corrected.).
Also, Liolios further teaches the limitation at (page 1, “MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example; to filter, rank or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics of funded projects; or to quantify the value added by curation.”, page 2, “MCI scores were calculated for each of the above collections as the total number of filled fields expressed as a percentage of the total fields available across all records. Scores were also calculated for individual records and for each field (i.e., each variable or column header in a spreadsheet).”, “Calculating MCI scores and comparison of metadata field”, see also Fig 1, “Schematic representation of the MCI calculation procedure”, Table 1&2, page 8, “MCI scores could be used for judging compliance with a given standard”).
22. Regarding claim 16, Gitai and Liolios teach the invention as claimed in claim 7 above and Gitai further teaches wherein the metadata file includes the data pertaining to a publication (Fig 6).
Also, Liolios further teaches the limitation at (page 1, “the Genomes Online Database (GOLD), including records compliant with the ‘Minimum Information about a Genome Sequence’ (MIGS) standard developed by the Genomic Standards Consortium.”, Page 5, “publication of GOLD reported list”).
23. Regarding claim 17, Gitai teaches a system comprising: a storage configured to store instructions; and a processor configured to execute the instructions and cause the processor (computer system) to:
assess a metadata file for completion of fields defined by a standard ([0022], “evaluate the efficiency of the repository schema and to evaluate the accuracy and validity of the data and metadata within the repository”, [0026], “evaluate a data repository for metadata and data accuracy and validity is to evaluate the table structure of the repository. A repository may have one or more than one table, and the tables may be in a hierarchy or flat structure. Various statistical queries may be performed to evaluate the accuracy and validity of the data, metadata, and structure of the table hierarchy to provide the user with information at this level that may be used to correct the data and/or improve the structure of the data repository.”, [0027], “statistical check is to evaluate the lookup fields of the table or tables. FIG. 2 illustrates the evaluation of a table's lookup data field usage count and rate and unused lookup count.”, [0029], Fig 1, [0035], “Dependency Check”, [0036] “Data dependency analysis can also be used to detect and correct errors in data.... A dependency checks as described herein may also be used to find incorrect values, such as the entry that indicates the city of "City" (four entries) and one with the state of "Stockholm Lan" (one entry). All five of these entries clearly represent invalid data and need correction”, [0038], [0041], “A pattern analysis may be used to determine how many different data patterns exist for data in a given field in the repository. A large number of patterns may indicate a problem with data validity or accuracy, particularly for highly formatted data fields, such as phone numbers, as shown for example in Report 600.”, [0043] “To evaluate the efficiency of the data schema, one approach in keeping with the present invention is to apply the business rules of the repository strictly to the data itself”, [0044], “Taxonomy evaluation”.);
evaluate the metadata file against evaluation criteria, wherein the evaluation criteria is a collection of algorithms for evaluating data in the fields for compliance with a plurality of rules of the standard ([0022], [0026-0027], “statistical check is to evaluate the lookup fields of the table or tables. FIG. 2 illustrates the evaluation of a table's lookup data field usage count and rate and unused lookup count.”, [0029], Fig 1, [0035], “Dependency Check”, [0036] “Data dependency analysis can also be used to detect and correct errors in data.... A dependency checks as described herein may also be used to find incorrect values, such as the entry that indicates the city of "City" (four entries) and one with the state of "Stockholm Lan" (one entry). All five of these entries clearly represent invalid data and need correction”, [0038], [0041], “A pattern analysis may be used to determine how many different data patterns exist for data in a given field in the repository. A large number of patterns may indicate a problem with data validity or accuracy, particularly for highly formatted data fields, such as phone numbers, as shown for example in Report 600.”, [0043] “To evaluate the efficiency of the data schema, one approach in keeping with the present invention is to apply the business rules of the repository strictly to the data itself”, [0044], “Taxonomy evaluation”.), wherein the metadata file includes at least a first data in a first field and a second data in a second field, wherein the evaluation includes determining whether the second data that is dependent on the first data complies with at least one of the plurality of rules of the standard, according to the evaluation criteria (Fig 1, [0035]-[0036], “evaluating the dependency between two columns or between two pairs of fields in a data repository. Where a 100% dependency exists, a business rule is often a more efficient description of the relationship of the data”, see also Fig 3 & 4 and [0038]).
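For illustration only (not part of the record), the column-dependency evaluation Gitai describes at [0035]-[0036] can be sketched as follows. The field names and sample rows are hypothetical; the sketch computes, for each value of a first field, how often the dependent field carries its most common value, so a result of 1.0 corresponds to the "100% dependency" Gitai mentions and lower values flag candidate invalid entries:

```python
from collections import Counter, defaultdict

def dependency_rate(rows, key_field, dep_field):
    """Fraction of rows whose dep_field value is the most common dep_field
    value for their key_field value (1.0 indicates a 100% dependency)."""
    groups = defaultdict(Counter)
    for row in rows:
        groups[row[key_field]][row[dep_field]] += 1
    # Rows carrying the dominant dependent value for their key conform.
    conforming = sum(counts.most_common(1)[0][1] for counts in groups.values())
    return conforming / len(rows)

# Hypothetical sample data (field names invented for the sketch).
rows = [
    {"city": "Stockholm", "state": "Stockholm Lan"},
    {"city": "Stockholm", "state": "Stockholm Lan"},
    {"city": "Stockholm", "state": "SL"},  # outlier flagged by the check
    {"city": "Goteborg", "state": "Vastra Gotaland"},
]
print(dependency_rate(rows, "city", "state"))  # 0.75
```

A rate below 1.0 points at the minority entries (here the single "SL" row) as candidates for correction, mirroring the error-detection use of dependency analysis quoted above.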
Gitai does not specifically teach: calculate at least one score for the metadata file based upon the assessment of the completion of required fields and the evaluation of the metadata file against the evaluation criteria of the plurality of rules of the standard; present a report of results of the assessment and the evaluation in a graphical user interface, the report including the at least one score for the metadata file.
However, Liolios teaches calculate at least one score for the metadata file based upon the assessment of the completion of required fields and the evaluation of the metadata file against the evaluation criteria of the plurality of rules of the standard; present a report of results of the assessment and the evaluation in a graphical user interface, the report including the at least one score for the metadata file (page 1, “MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest).”, page 2, “MCI scores were calculated for each of the above collections as the total number of filled fields expressed as a percentage of the total fields available across all records. Scores were also calculated for individual records and for each field (i.e., each variable or column header in a spreadsheet).”, “Calculating MCI scores and comparison of metadata field”, see also Fig 1, “Schematic representation of the MCI calculation procedure”, Table 1&2, Page 8, “MCI scores could be used for judging compliance with a given standard”).
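For illustration only, the MCI calculation Liolios describes at page 2 (the total number of filled fields expressed as a percentage of the total fields available across all records) can be sketched as below. The record structure and field names are hypothetical, not drawn from the reference:

```python
def mci_score(records, fields):
    """Percentage of filled fields across all records, treating None or an
    empty string as unfilled, per the page-2 description quoted above."""
    total = len(records) * len(fields)
    filled = sum(
        1
        for record in records
        for field in fields
        if record.get(field) not in (None, "")
    )
    return 100.0 * filled / total if total else 0.0

# Hypothetical collection: two records, three fields of interest.
records = [
    {"organism": "E. coli", "habitat": "soil", "ph": ""},
    {"organism": "B. subtilis", "habitat": "", "ph": "7.0"},
]
print(mci_score(records, ["organism", "habitat", "ph"]))  # 4 of 6 filled: 66.66...
```

The same routine applied to a single-element record list yields the per-record scores Liolios also describes.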
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Liolios into the system of Gitai, because both systems are related to managing metadata and doing so would enable filtering that supports downstream analysis (Liolios, Abstract).
24. Regarding claim 18, Gitai and Liolios teach the invention as claimed in claim 17 above and Liolios further teaches wherein the processor is configured to execute the instructions and cause the processor to: receive, in the graphical user interface, an edit to the first data or the second data; and adjust the at least one score for the metadata file to reflect the edit to the first data or the second data (page 8, “MCI scores will ideally be used to make targeted improvements to databases over time. They could also be used over time to track the evolution of databases and their contents, for example, to signal significant updates in content even when the total number of entries remains the same, to report progress to funders, or to reward the work of curators who contribute the relevant information”, “MCI scores could be further refined in several ways, MCI scores could be used for judging compliance with a given standard. MCI scores could also be broken down to cover ‘required’ and ‘optional’ fields separately. Further refinement of MCI scores would require more thorough validation of metadata, making maximum use of mappings between minimal information requirements, recommended terminologies and any formats used.”).
25. Regarding claim 19, Gitai and Liolios teach the invention as claimed in claim 17 above and Gitai further teaches wherein the processor is configured to execute the instructions and cause the processor to: receive a selection of one or more data errors in the report of the results of the evaluation in the graphical user interface; and present a subset of the one or more data errors resulting from the selection of the one or more data errors (Fig 4, issue description/How to resolve, [0031]).
Also, Liolios further teaches the limitation (page 2, “highlight challenging-to acquire components of specifications or to quantify improvements in metadata reporting or database content (for example, through curation).”, page 6, “Improvements in MCI scores over time”, page 8, “MCI scores will ideally be used to make targeted improvements to databases over time. They could also be used over time to track the evolution of databases and their contents, for example, to signal significant updates in content even when the total number of entries remains the same, to report progress to funders, or to reward the work of curators who contribute the relevant information.”).
26. Regarding claim 20, Gitai and Liolios teach the invention as claimed in claim 17 above and Gitai further teaches after the presentation of the results of the assessment and the evaluation, receive inputs to navigate, sort, or filter to identify a subset of errors ([0010], [0026], Fig 4, [0031], [0036], [0040], [0045], Fig 5 & 7, the tables include an arrow facing down to sort and group errors by type).
Also, Liolios further teaches the limitation (page 1, “The scoring of records on the richness of their description provides a simple, objective proxy measure for quality that enables filtering that supports downstream analysis. Pivotally, such descriptions should spur on improvements…MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest). There are many potential uses for this simple metric: for example; to filter, rank or search for records; to assess the metadata availability of an ad hoc collection; to determine the frequency with which fields in a particular record type are filled, especially with respect to standards compliance; to assess the utility of specific tools and resources, and of data capture practice more generally; to prioritize records for further curation; to serve as performance metrics of funded projects; or to quantify the value added by curation.”).
27. Regarding claims 21-26, those claims recite a non-transitory computer readable medium storing instructions that perform the methods of claims 17, 18, 9, 12, 13 and 20, respectively, and are rejected under the same rationale.
Response to Amendments and Arguments
28. Applicant’s 35 U.S.C. § 101 arguments on the claims received 11/04/2025 have been fully considered but are not persuasive. The 35 U.S.C. § 101 rejection made against claims 7-26 is maintained. The Examiner has expanded the analysis for clarity; claims 7-26 fall within the “Mental Processes” grouping of abstract ideas (see rejection above).
29. Applicant respectfully requests that the instant nonstatutory double patenting rejection be held in abeyance until the claimed invention is deemed allowable and the claims are no longer subject to amendment.
30. Applicant’s 35 U.S.C. § 103 arguments on claims have been fully considered but are not persuasive.
In the Applicant Arguments/Remarks Made in an Amendment received 07/26/2024, Applicant argued that the combination of Gitai in view of Liolios does not teach the invention recited in Claims 7-26, for a number of reasons, including but not limited to, the following:
A- Pursuant to the requirements for establishing a prima facie case of obviousness under 35 U.S.C. § 103, all the claim limitations must be taught or suggested by the prior art. The determination of obviousness is made with respect to the subject matter as a whole, not separate pieces of the claim. Additionally, Examiners are cautioned to avoid hindsight and set aside knowledge of applicant's disclosure in reaching this determination. MPEP 2142.
B- First, Gitai does not disclose a standard, and especially does not disclose the claim limitation: "evaluating the metadata file against evaluation criteria, wherein the evaluation criteria is a collection of algorithms for evaluating data in the fields for compliance with a plurality of rules of the standard, wherein the metadata file includes at least a first data in a first field and a second data in a second field, wherein the evaluation includes determining whether the second data that is dependent on the first data complies with at least one of the plurality of rules of the standard according to the evaluation criteria.".
C- Second, Liolios does not disclose "calculating at least one score for the metadata file based upon the assessment of the completion of required fields and the evaluation of the metadata file against the evaluation criteria." Liolios describes MCI. MCI does not validate values against a standard's rules; it scores only the presence or absence of fields.
Examiner presents the following responses to Applicant’s arguments:
With respect to applicant’s argument A, Applicant's arguments regarding establishing a prima facie case of obviousness under 35 U.S.C. § 103 have been fully considered but they are not persuasive.
Gitai's system pertains to the field of computer systems. More particularly, but not by way of limitation, one or more embodiments enable modeling a master data repository to evaluate the efficiency of the repository schema and to evaluate the accuracy and validity of the data and metadata within the repository.
Liolios's system pertains to assessing the quality of data (metadata). More particularly, it provides automatic scoring of records on the richness of their description, which enables sorting by quality.
Both Gitai and Liolios are related to managing/evaluating data (metadata). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Liolios (“calculating a score for the metadata file based upon the assessment of the completion of required fields and the evaluation of the metadata file against the evaluation criteria; and presenting a report of results of the assessment and the evaluation”) into Gitai. Incorporating Liolios into Gitai would provide an objective measure for metadata using the Metadata Coverage Index (MCI) to filter, rank or search for records, and to assess the metadata quality of an ad hoc collection. Such an index provides a step towards putting metadata capture practices, and in the future standards compliance, into a quantitative and objective framework (Liolios, Abstract).
With respect to applicant’s argument B, Applicant argued that Gitai doesn't disclose a standard. Applicant's arguments have been fully considered but they are not persuasive.
Examiner interpretation:
By definition, evaluating data fields for compliance with multiple rules of a standard involves using specific data quality dimensions as evaluation criteria, such as validity, accuracy, completeness, and consistency. Each criterion must be mapped to the specific rules of the applicable standard (e.g., GDPR, HIPAA).
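For illustration only (not part of the record), this interpretation of evaluation criteria as a collection of algorithms mapped to rules of a standard can be sketched as follows; the rule names, pattern, and record are hypothetical:

```python
import re

# Hypothetical evaluation criteria: each named rule of a "standard" is
# implemented as a small checking algorithm over a field value.
criteria = {
    "phone_format": lambda v: bool(re.fullmatch(r"\d{3}-\d{3}-\d{4}", v or "")),
    "completeness": lambda v: v not in (None, ""),
}

def evaluate(record, field_rules):
    """Return, per field, the names of the rules its value violates."""
    return {
        field: [name for name in rules if not criteria[name](record.get(field))]
        for field, rules in field_rules.items()
    }

record = {"phone": "555-0100", "name": "Acme"}
print(evaluate(record, {"phone": ["phone_format", "completeness"],
                        "name": ["completeness"]}))
# "555-0100" is filled but does not match the ddd-ddd-dddd pattern,
# so "phone" violates only the phone_format rule; "name" passes.
```

The dictionary of checking functions plays the role of the claimed "collection of algorithms," and the per-field rule lists play the role of the mapping from criteria to the rules of the applicable standard.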
Gitai discloses a standard (equating “a standard” with business rules) in [0024]-[0025]; [0035], “a business rule is often a more efficient description of the relationship of the data… a rule might be developed to ensure that …”; [0043], “To evaluate the efficiency of the data schema … apply the business rules of the repository strictly to the data itself… identify business rule violations and find inefficiencies in the data schema.”; and [0052], “analysis of the business rules of the database.”
With respect to applicant’s argument C, Applicant argued that Liolios does not disclose "calculating at least one score for the metadata file based upon the assessment of the completion of required fields and the evaluation of the metadata file against the evaluation criteria." However, Liolios teaches calculating at least one score for the metadata file based upon the assessment of the completion of required fields and the evaluation of the metadata file against the evaluation criteria of the plurality of rules of the standard; presenting, by the at least one processor, a report of results of the assessment and the evaluation in a graphical user interface, the report including the at least one score for the metadata file (page 1, “MCI scores can be calculated across a database, for individual records or for their component parts (e.g., fields of interest).”, page 2, “MCI scores were calculated for each of the above collections as the total number of filled fields expressed as a percentage of the total fields available across all records. Scores were also calculated for individual records and for each field (i.e., each variable or column header in a spreadsheet).”, “Calculating MCI scores and comparison of metadata field”, see also Fig 1, “Schematic representation of the MCI calculation procedure”, Table 1&2, Page 8, “MCI scores could be used for judging compliance with a given standard”).
CONCLUSION
31. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Collins et al (US 20100211575 A1) discloses evaluating a media file's metadata to determine whether the media file is relevant to a mobile device user based on time, date, location, subject matter, or other criteria.
THIS ACTION IS MADE FINAL. Applicants are reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 C.F.R. § 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HICHAM SKHOUN whose telephone number is (571)272-9466. The examiner can normally be reached Monday-Friday, 10:00 am-6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng, can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https