Prosecution Insights
Last updated: April 19, 2026
Application No. 18/375,682

METHOD AND APPARATUS FOR INTEGRATED OPTIMIZATION-GUIDED INTERPOLATION

Status: Final Rejection (§101, §103, §112)
Filed: Oct 02, 2023
Examiner: BAKER, IRENE H
Art Unit: 2152
Tech Center: 2100 — Computer Architecture & Software
Assignee: Ram Pavement
OA Round: 6 (Final)

Grant Probability: 54% (Moderate)
OA Rounds: 7-8
To Grant: 3y 0m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 54% (grants 54% of resolved cases; 129 granted / 238 resolved; -0.8% vs TC avg)
Interview Lift: +26.7% (strong)
Avg Prosecution: 3y 0m (typical timeline; 32 currently pending)
Total Applications: 270 (career history, across all art units)
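The headline figures above can be cross-checked with simple arithmetic; the implied Tech Center average below is a back-of-envelope estimate derived from the reported -0.8% delta, not a figure from the source data:

```python
# Cross-check of the examiner statistics shown above.
granted, resolved = 129, 238

# Career allow rate: granted / resolved cases (~0.542, reported as 54%).
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")

# The card reports -0.8% vs the Tech Center average, implying a TC
# average estimate of roughly allow_rate + 0.8 points (~55%). This is
# an assumption about how the delta is defined, not source data.
tc_avg_estimate = allow_rate + 0.008
print(f"Implied TC average: {tc_avg_estimate:.1%}")
```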

Statute-Specific Performance

§101: 26.3% (-13.7% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 4.6% (-35.4% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Based on career data from 238 resolved cases; deltas are relative to the Tech Center average estimate.

Office Action

Grounds of rejection: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Introductory Remarks

In response to communications filed on 25 November 2025, claims 1 and 11 are amended per Applicant's request. Claims 3, 6, 13, and 16 are cancelled. No claims were withdrawn. No new claims were added. Therefore, claims 1-2, 4-5, 7-12, 14-15, and 17-20 are presently pending in the application, of which claims 1 and 11 are presented in independent form.

The previously raised objection to claims 1 and 11 is maintained. The previously raised §112 rejection of the pending claims is withdrawn in view of the amendments to the claims; a new ground of rejection has been issued. The previously raised §101 rejection of the pending claims is maintained. The previously raised §103 rejection of the pending claims is withdrawn in view of the amendments to the claims; a new ground of rejection has been issued.

Response to Arguments

Applicant’s arguments filed 25 November 2025 with respect to the objection of claims 1 and 11 (see Remarks, p. 11) have been fully considered but are not persuasive. Applicant’s arguments are not relevant to what was being claimed. The objection did not state that “the at least an OCR process” language was an issue; it stated that “the at an OCR process” was an issue. Applicant is therefore arguing based on claim language that did not exist. Therefore, the objection has been maintained. Furthermore, Applicant’s argument that “the at least an OCR process” is appropriate is unpersuasive. Even if Applicant were to amend the language to this, it would be rejected under 35 U.S.C. 112 for at least the following reasons: (1) lack of antecedent basis (as “an” OCR process had already been described earlier in the claim), and (2) indefiniteness, as this implies that there are multiple OCR processes, yet only one is described in both the claims and the Specification.

Applicant’s arguments filed 25 November 2025 with respect to the rejection of the claims under 35 U.S.C. 112 (see Remarks, p. 11-12) have been fully considered but are moot, as Applicant’s amendments have raised new issues.

Applicant’s arguments filed 25 November 2025 with respect to the rejection of the claims under 35 U.S.C. 101 (see Remarks, p. 12-19) have been fully considered but are not persuasive. Applicant’s argument that amended claim 1 “recites a specific set of outputs at each step and is not directed to a mental process…” (Remarks, p. 15) is unpersuasive. Firstly, Applicant utilizes the language “directed to” a mental process with respect to Step 2A, Prong One. This is improper, as Step 2A, Prong One determines whether the claim recites an abstract idea. Therefore, the basis for Applicant’s argument misinterprets §101. Secondly, simply having outputs does not move the claims outside the realm of abstract ideas. If those outputs are claimed in a merely functional manner, i.e., claiming the result or effect rather than a particular manner of achieving it (i.e., a “concrete embodiment” of that idea), such claims may be found to be abstract. See Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016), slip op. 12 (“[T]he essentially result-focused, functional character of claim language has been a frequent feature of claims held ineligible under § 101”).

Applicant’s argument that “Further, the output is not just a display of information, it is a second dataset containing classifier descriptors and data tags labeled by a subject matter expert” (see Remarks, p. 15) is unpersuasive.
The claims still recite an abstract idea regardless of whether it is a display of information or whether it contains certain types of information. For example, the former may be an insignificant extra-solution activity, and the latter may be an insignificant field-of-use limitation. Therefore, this argument is not persuasive with respect to Step 2A, Prong One.

Applicant’s argument that “The second dataset is used to perform a comparative process and generate a projected schedule using a performance analysis and the comparative process” (see Remarks, p. 15) is unpersuasive. The mere application of the claimed dataset to a comparative process does not amount to significantly more, as the comparative process does not particularly utilize any elements within a particular structure or function of the dataset to perform this comparative process. Rather, the manner in which the claimed second dataset is invoked merely instructs the claimed invention to utilize the comparative process using a particular source of information—here, the second dataset, with slightly more specific types of information (which is an insignificant field-of-use limitation).

Applicant’s argument that “In [McRO], the Federal Circuit held that using rules (even if mathematical) to achieve a specific technological result (automated lip-syncing) was patent eligible. Similarly, claim 1 as amended recites the use of specific image processing applications to achieve a technological result to generate a projected schedule and update project tracking and prompts for corrective actions” (see Remarks, p. 15) is unpersuasive. Similarly, Applicant’s argument that claim 1’s recitation of an importance metric machine learning model using training data and an image classifier does not recite certain methods of managing personal behavior or relationships (see Remarks, p. 15-16) is unpersuasive.

Firstly, Applicant had mentioned, in the paragraph prior to this argument, the steps of performing inferences, i.e., inferring missing features. However, Applicant conflated three different aspects, namely OCR recognition (which is well-understood, routine, and conventional), an image classifier, and inferring missing data. All of these steps were disclosed separately in the specification, not together. Even with this unsupported combination of elements, however, the question is whether there is any recitation of abstract ideas in the claim, and there is. Thus, Applicant is arguing with respect to limitations treated at later steps in the analysis, in which the more particular elements that were claimed were found to be insignificant extra-solution activities that were well-understood, routine, and conventional (e.g., with respect to the OCR process, which has been cited in at least 100 different patent art references, and the use of neural networks for performing machine learning, which is also common in the art). Similarly, the image classifier was not found to amount to significantly more, for at least the reasons set forth in the §101 rejection below. More specifically, as stated previously, none of these steps is actually involved in how the projected schedule, the updated project tracking, and the prompts for corrective action are actually generated. Rather, this step is, at best, an insignificant extra-solution activity that attempts to limit the claims to a particular field of use, describing the context rather than a particular manner of achieving the result.

Applicant’s arguments with respect to Step 2A, Prong Two (see Remarks, p. 16-17) are unpersuasive for at least the reasons set forth in the response to arguments above and those in the §101 rejection below. Applicant’s arguments with respect to Step 2B (see Remarks, p. 17) are unpersuasive for at least the reasons set forth above, and because Applicant is arguing against limitations that were already addressed in Step 2A, Prong One/Two, as seen above.

Applicant’s arguments filed 25 November 2025 with respect to the rejection of the claims under 35 U.S.C. 103 (see Remarks, p. 19-24) have been fully considered but are not persuasive. Applicant essentially argues that the amendments render the prior rejections moot. See Response at p. 19-24. The Examiner respectfully disagrees, and the rejections have been modified to conform to the current claim language.

A Note on Intended Use

The Examiner notes there are multiple elements in the claims that will be interpreted as intended use. A recitation directed to the manner in which a claimed apparatus is intended to be used does not distinguish the claimed apparatus from the prior art if the prior art has the capability to so perform; see MPEP 2114(II) and Ex parte Masham, 2 USPQ2d 1647 (Bd. Pat. App. & Inter. 1987). “Language that suggests or makes optional but does not require steps to be performed does not limit a claim to a particular structure, nor limit the scope of a claim or claim limitation”; see MPEP 2111.04. The Examiner notes the cited prior art has the capability to perform the limitations indicated as intended use. An incomplete list of the limitations that could be interpreted as intended use is as follows: claims 1 and 11 recite “post-processing an output of the matrix matching process to increase OCR accuracy by constraining the output to a lexicon containing a set of words whose occurrence is permitted”.

Claim Objections

Claims 1 and 11 are objected to because of the following informalities: the claims recite “[the second dataset] including the at least a portion of the second dataset converted into the machine-encoded text by the at an OCR process”. There is a grammatical error; e.g., this should be “the” OCR process. Appropriate correction is required.
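The lexicon limitation quoted above (constraining matrix-matching output to a set of permitted words) is commonly implemented as nearest-dictionary-word correction. A minimal, hypothetical sketch follows; the lexicon contents and the similarity cutoff are invented for illustration and are not taken from the application:

```python
import difflib

# Hypothetical lexicon of permitted words; the claims only require that
# OCR output be constrained to some lexicon of permitted words.
LEXICON = ["asphalt", "driveway", "pavement", "schedule", "project"]

def constrain_to_lexicon(word: str) -> str:
    """Snap a raw OCR token to the closest permitted word, if any."""
    if word in LEXICON:
        return word
    # cutoff=0.6 is an arbitrary illustrative similarity threshold.
    matches = difflib.get_close_matches(word, LEXICON, n=1, cutoff=0.6)
    return matches[0] if matches else word

# A typical OCR confusion: the digit "1" read in place of the letter "l".
print(constrain_to_lexicon("aspha1t"))  # -> asphalt
```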
Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-2, 4-5, 7-12, 14-15, and 17-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Independent claims 1 and 11 recite that the received second dataset (having an unknown degree of completion) “contain[s] at least an input glyph” and that this input glyph, which is part of the second dataset, is used in the step of “comparing pixels of at least one of the pre-processed images and the at least an input glyph to pixels of a stored glyph on a pixel-by-pixel basis”. However, there is no support for such limitations.
An “input glyph”, as described by the Specification, is used for implementing an OCR process which includes a matrix matching process by comparing an image to a stored glyph on a pixel-by-pixel basis. See Specification, [0020]. In this manner, the matrix matching process may be thought of as a function that takes, as an input parameter, a “glyph” (i.e., input glyph) isolated from the image component (i.e., “Matrix matching may rely on an input glyph being correctly isolated from the rest of the image component”) and compares it to the stored glyphs. Therefore, the second dataset does not “contain at least an input glyph”; rather, the second dataset contains, e.g., images that may contain elements of textual data (see Specification, [0031] (“…receiving second dataset 132 may include receiving at least an image…. Receiving second dataset 132 may include converting at least an image into one or more elements of textual data, and/or generating one or more elements of textual data using at least an image…”)). In other words, the “input glyph” is something that results from performing the OCR preprocessing step (i.e., it is an isolated portion of the received image); the second dataset does not contain the input glyph directly as claimed. Therefore, there is no support for the second dataset containing “at least an input glyph” as claimed.

Additionally, Applicant claims that two elements, i.e., “pixels of at least one of the pre-processed images” and “the at least an input glyph”, are compared to a stored glyph on a pixel-by-pixel basis. However, the Specification only compares one element to the stored glyph. See Specification, [0020]. Therefore, there is no support for such a limitation either.

Independent claims 1 and 11 further recite “using binarization to convert at least a portion of one of the images from color or greyscale to a binary image by separating text from a background of image component…”. “Binarization” does not involve “separating text from a background of image component”. This is unsupported by the Specification, which states at [0019], “Binarization may be performed as a simple way of separating text (or any other desired image component) from a background of image component”. Thus, the Specification indicates that binarization results in the separation of text from a background of image component, whereas the claimed limitation recites binarization as being performed by separating text from a background of image component. Therefore, the claimed limitation is not supported by the Specification.

Independent claims 1 and 11 further recite “implementing an OCR algorithm comprising a matrix matching process, wherein implementing the OCR algorithm comprises: comparing pixels…; and post-processing an output of the matrix matching process…; inputting the second dataset into an image classifier…; and outputting, the second dataset from the image classifier…”. The amended claim language is written as though it is part of the post-processing step. However, there is no support for the OCR process including the image classifier step. See, e.g., Specification, [0032] (“…In some embodiments, generating second dataset 132 may include generating the second dataset 132 using at least an image and an image classifier 128…. In some embodiments, generating second dataset 132 may include generating the second dataset 132 using the at least an image and an optical character recognition process….”). As seen, the Specification indicates that these are two separate processes; it does not indicate that the OCR process “further” includes the image classifier aspect, or that the image classifier is somehow a part of the OCR process.
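For reference, binarization as described at Specification [0019] (greyscale to binary) and matrix matching as described at [0020] (pixel-by-pixel comparison of an isolated glyph against stored glyphs) can be sketched roughly as follows. The 3x3 glyph patterns and the 0.5 threshold are invented for illustration and are not taken from the application:

```python
# Illustrative sketch of binarization followed by matrix matching.

def binarize(gray, threshold=0.5):
    """Convert a greyscale image (values 0.0-1.0) to a binary image."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def matrix_match(glyph, stored_glyphs):
    """Compare an input glyph to each stored glyph pixel by pixel and
    return the character whose stored glyph matches best."""
    def score(a, b):
        return sum(pa == pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return max(stored_glyphs, key=lambda ch: score(glyph, stored_glyphs[ch]))

# Invented 3x3 stored glyphs for two characters.
STORED = {
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "L": [[1, 0, 0], [1, 0, 0], [1, 1, 1]],
}

gray = [[0.1, 0.9, 0.2], [0.0, 0.8, 0.1], [0.2, 0.9, 0.0]]
glyph = binarize(gray)
print(matrix_match(glyph, STORED))  # -> I
```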
Independent claims 1 and 11 further recite “identify at least a missing feature in the second dataset including the at least a portion of the second dataset converted into the machine-encoded text by the at an [sic] OCR process that can be input to a comparative process but is not present in the second dataset using the classifier descriptors and data tags labeled by the subject matter expert”.

Firstly, a feature that is missing cannot be input into a comparative process. Rather, as claimed later, it is an interpolated feature that is input (“perform a comparative process using the first dataset and the interpolated second dataset…”). Thus, stating that the method identifies at least a missing feature “that can be input to a comparative process but is not present in the second dataset” indicates that a missing feature is used as input into the comparative process. This is not supported by the Specification and, indeed, contradicts the later step in which the interpolated second dataset (i.e., in which the missing feature is interpolated) is used in the comparative process. Essentially, what is being claimed is that a null, missing, or otherwise unavailable value is used in the comparison process despite it being missing, i.e., by way of the claim language “that can be input to a comparative process but is not present in the second dataset”. As stated previously, this is unsupported by the Specification.

Secondly, “using the classifier descriptors and data tags labeled by the subject matter expert” within the context of identifying the missing feature in the second dataset, including at least a portion that was converted into text by the OCR process, is not supported by the Specification. The Specification does not support the interpretation that this is related to the OCR process. As stated previously, the classifier descriptors and OCR processes are two separate processes, and they were not linked within the description of the Specification. Essentially, the OCR process is about identifying specific characters in the image, while the image classifier is used for identifying concepts within the images, e.g., “driveway”, “asphalt”, “20-year original age”, etc. See, e.g., Specification, [0028]. Similarly, there is no link in the Specification between the identification of the missing feature in the second dataset and the use of the classifier descriptors and data tags labeled by the subject matter expert. Essentially, three different embodiments are being improperly mixed: the first being OCR, the second being image/object recognition, and the third being data interpolation of missing data.

Lastly, the Specification lacks support for the limitation of “outputting, the second dataset from the image classifier containing classifier descriptors and data tags labeled by the subject matter expert”. The closest paragraph appears to be Specification, [0026], which states that “image classifier 128 may rely on prior training data executed within machine-learning processes in the form of a subject matter expert inputting pictures then methodically applying classifier descriptors and data tags”. This appears to refer to an unsupervised learning process, in which the machine-learning process “methodically appl[ies] classifier descriptors and data tags”, not the subject matter expert. However, Specification, [0080], states that “an initial set of samples may be performed to cover an initial heuristic and/or ‘first guess’ at an output and/or relationship, which may be seeded, without limitation, using expert input received according to any process as described herein”. This indicates a supervised learning process. As such, Applicant appears to be mixing embodiments, i.e., the majority being part of the unsupervised learning process and the remainder being part of the supervised learning process, which is not supported by the Specification.
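The supervised reading discussed above (a subject matter expert applying classifier descriptors and data tags to training pictures) implies training data shaped roughly as below. All names are illustrative, and the label values echo the Specification [0028] examples quoted earlier; under the unsupervised reading, by contrast, the process itself would have to derive labels with no expert input:

```python
from dataclasses import dataclass, field

# Hypothetical shape of one supervised training example: the picture is
# the input, and the expert-authored labels are the outputs.
@dataclass
class TrainingExample:
    picture: str                          # e.g. a file path or image id
    descriptors: list = field(default_factory=list)  # applied by the SME
    tags: list = field(default_factory=list)         # applied by the SME

# Under the supervised interpretation, the expert authors the labels:
training_data = [
    TrainingExample(
        picture="img_001.jpg",
        descriptors=["driveway", "asphalt"],
        tags=["20-year original age"],
    ),
]

print(training_data[0].descriptors)
```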
The dependent claims are rejected at least by virtue of their dependency on their respective independent claims, and for failing to cure the deficiencies of their respective independent claims.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-2, 4-5, 7-12, 14-15, and 17-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Independent claims 1 and 11 recite that the received second dataset (having an unknown degree of completion) “contain[s] at least an input glyph”. This contradicts the Specification, in which the second dataset appears to contain textual data that was extracted via OCR from images containing textual elements (the “input glyph” being part of the OCR process). Therefore, the metes and bounds cannot be ascertained as a result of this contradiction.

Additionally, Applicant claims that two elements, i.e., “pixels of at least one of the pre-processed images” and “the at least an input glyph”, are compared to a stored glyph on a pixel-by-pixel basis. However, the Specification only compares one element to the stored glyph. See Specification, [0020].
Therefore, it is unclear what is meant by “pixels of at least one of the pre-processed images” and/or “input glyph” within the context of the claims, and which of these corresponds to the Specification’s disclosure of “comparing an image to a stored glyph on a pixel-by-pixel basis”.

Independent claims 1 and 11 recite “using binarization to convert at least a portion of one of the images from color or greyscale to a binary image by separating text from a background of image component…”. Firstly, there is a lack of antecedent basis for this limitation. Secondly, “binarization” does not involve “separating text from a background of image component”. This is unsupported by the Specification, which states at [0019], “Binarization may be performed as a simple way of separating text (or any other desired image component) from a background of image component”. Thus, the Specification indicates that binarization results in the separation of text from a background of image component, whereas the claimed limitation recites binarization as being performed by separating text from a background of image component. Furthermore, even if one were to interpret binarization as including the step of separating text from a background of image component, it is unclear how binarization, which concerns the transformation of color representation, involves the separation of text from a background of image component as claimed, as the two pertain to different types of image processing. For purposes of examination, the interpretation from the Specification, [0019], as seen above, has been taken.

Independent claims 1 and 11 further recite “implementing an OCR algorithm comprising a matrix matching process, wherein implementing the OCR algorithm comprises: comparing pixels…; and post-processing an output of the matrix matching process…; inputting the second dataset into an image classifier…; and outputting, the second dataset from the image classifier…”. It is unclear whether the newly added language of “inputting the second dataset into an image classifier…” and “outputting, the second dataset from the image classifier” is part of the post-processing step, or separate. This confusion stems from (1) the additional indentation of the “inputting” and “outputting” steps, which makes them appear to be part of the “post-processing” step, and (2) the use of the word “and” following the “inputting” step, yet (3) the post-processing step does not utilize the language “comprising” or “including”, or some other language indicating that there are additional sub-steps associated with this particular step. Therefore, the metes and bounds of these limitations cannot be ascertained.

Independent claims 1 and 11 further recite “identify at least a missing feature in the second dataset including the at least a portion of the second dataset converted into the machine-encoded text by the at an [sic] OCR process that can be input to a comparative process but is not present in the second dataset using the classifier descriptors and data tags labeled by the subject matter expert”.

Firstly, the metes and bounds of “that can be input to a comparative process” cannot be ascertained. A feature that is missing cannot be input into a comparative process. Rather, as claimed later, it is an interpolated feature that is input (“perform a comparative process using the first dataset and the interpolated second dataset…”). Thus, it is unclear how a missing, null, or otherwise unavailable value can be compared to a value that does exist.

Secondly, “using the classifier descriptors and data tags labeled by the subject matter expert” within the context of identifying the missing feature in the second dataset, including at least a portion that was converted into text by the OCR process, does not make sense, nor does the Specification support the interpretation that this is related to the OCR process. As stated previously, the classifier descriptors and OCR processes are two separate processes, and they were not linked within the description of the Specification. Similarly, there is no link in the Specification between the identification of the missing feature in the second dataset and the use of the classifier descriptors and data tags labeled by the subject matter expert. Therefore, it is unclear what is being claimed here, as (1) it cannot be ascertained whether this limitation is meant to apply to the identification of a missing feature in the second dataset or to the OCR process, and (2) even if either interpretation were taken, it cannot be ascertained what is meant by this limitation within the context of either of those two steps.

Thirdly, it is unclear what is meant by the limitation of “outputting, the second dataset from the image classifier containing classifier descriptors and data tags labeled by the subject matter expert”. The closest paragraph appears to be Specification, [0026], which states that “image classifier 128 may rely on prior training data executed within machine-learning processes in the form of a subject matter expert inputting pictures then methodically applying classifier descriptors and data tags”. This appears to refer to an unsupervised learning process, in which the machine-learning process “methodically appl[ies] classifier descriptors and data tags”, not the subject matter expert. However, Specification, [0080], states that “an initial set of samples may be performed to cover an initial heuristic and/or ‘first guess’ at an output and/or relationship, which may be seeded, without limitation, using expert input received according to any process as described herein”. This indicates a supervised learning process. As such, Applicant appears to be mixing embodiments, i.e., the majority being part of the unsupervised learning process and the remainder being part of the supervised learning process. Therefore, it is unclear what is being claimed here. For purposes of examination, the interpretation from Specification, [0080], has been taken.

Lastly, Specification, [0028], states that “a ‘classifier descriptor’ is a type of data tag which is digitally attached to a picture, engagement, or user profile”. The distinction between “classifier descriptors” and “data tags” is therefore unclear within the context of the claimed invention, as there is no mention of other data tags, only the classifier descriptor being associated with the images.

The dependent claims are rejected at least by virtue of their dependency on their respective independent claims, and for failing to cure the deficiencies of their respective independent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-5, 7-12, 14-15, and 17-20 are rejected under 35 U.S.C. 101 because the claims are directed to a judicial exception (i.e., an abstract idea) without significantly more.
Independent claims 1 and 11 recite generating a first dataset (having a known degree of completion), which comprises identifying a type of project, selecting a representative stored candidate model as a function of the identified type of project, comparing at least a user input to the representative stored candidate model, and determining a required piece of information as a function of the comparison between the at least a user input and the representative stored candidate model; adding, to the second dataset (which contains pictures), classifier descriptors and data tags labeled by a subject matter expert; identifying a missing feature in a dataset using classifier descriptors and data tags labeled by a subject matter expert; determining the missing feature is a necessary feature based on some optimization criterion; interpolating at least an additional datum into the dataset, wherein the at least an additional datum is a substitute for the missing feature; and performing a comparative process using the first dataset and the interpolated second dataset.

The independent claims further recite determining a missing feature is necessary based on an importance metric using the identification of the at least a missing feature; comparing the importance metric to the threshold criterion; and determining that at least a missing feature is a necessary feature as a function of the comparison. This can be practically performed in the mind of a person (e.g., a person evaluating whether or not a feature is necessary based on, e.g., experience; because the person would ultimately decide “yes” or “no”, there is a threshold criterion for making such a decision, including weighing based on gut feelings, or other quantitative measures such as counting based on random sampling, etc.).
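The recited importance-metric steps (compute a metric for a missing feature, compare it to a threshold criterion, and interpolate a substitute datum if the feature is necessary) reduce to a short computation. The metric value, the threshold, and the mean-based interpolation rule below are all invented for illustration; the claims do not specify any of them:

```python
# Minimal sketch of the recited threshold step: compare an importance
# metric for a missing feature against a threshold criterion, then
# interpolate a substitute datum if the feature is deemed necessary.

def is_necessary(importance: float, threshold: float = 0.7) -> bool:
    """Determine whether a missing feature is a necessary feature."""
    return importance >= threshold

def interpolate(known_values: list) -> float:
    """Substitute datum for a missing feature; a simple mean is one
    possible interpolation rule (assumed here, not from the claims)."""
    return sum(known_values) / len(known_values)

missing_feature_importance = 0.85  # invented metric value
if is_necessary(missing_feature_importance):
    substitute = interpolate([10.0, 12.0, 14.0])
    print(f"interpolated substitute: {substitute}")  # -> 12.0
```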
Additionally, the independent claims’ recitation of using training data to correlate an identification of a feature input to an importance metric for training can also be performed in the mind of a person; e.g., a person can be shown features and make associations as to whether such features are important in classifying a particular dataset, e.g., a person reading a document can be trained to identify it as a legal document, a fictional short story, a non-fiction short story, a novel, etc. Similar identifications can be performed mentally by a person with regard to images, e.g., classifying an image as corresponding to a tree, a flower, a panther, a piece of jewelry, a car, or even a particular disease. A person may assess the characteristics that were used to make such a determination, e.g., doctors being able to point out which features were used to make a medical diagnosis based on a patient’s charts (textual data) and scans (image data). As such, these steps encompass an evaluation, observation, and/or judgment, including analyzing steps that can be practically performed in the mind of a person, which fall under the “Mental Processes” grouping of abstract ideas. The comparative process / performance analysis may also be regarded as “Certain Methods of Organizing Human Activity”, e.g., keeping track of the progress of a project, which is a human activity.

Dependent claims 4 and 14 recite classifying a dataset to a feature template, comparing the second dataset to the feature template, and identifying at least a missing feature based on the comparison. This is no different from a person being able to compare two sets of data (one being a feature template), whether tabular or image, and then being able to identify the differences, including missing information. Thus, such steps amount to an evaluation, observation, and/or judgment, which falls under the “Mental Processes” grouping of abstract ideas.
Because the claims do no more than cover performance of the limitations in the mind but for the recitation of generic computer components, the claims therefore fall within the “Mental Processes” grouping of abstract ideas (in addition to “Certain Methods of Organizing Human Activity”). Accordingly, the claims recite an abstract idea. The abstract idea is not integrated into a practical application. In particular, the claims recite various computing hardware components, which are recited at such a high level of generality and so generically that they represent no more than mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)). These limitations can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of a computer (see MPEP 2106.05(h)). The claims variously recite machine-learning components or other software components to implement the abstract idea. See, e.g., the claims to an optical character recognition process for performing the claimed conversion of image text to machine-encoded text (independent claims 1 and 11), “image classifier” (independent claims 1 and 11 and dependent claims 2 and 12), “template classifier” (dependent claims 4 and 14), “feature identification machine-learning model” (dependent claims 5 and 15), “importance metric machine-learning model” (dependent claims 7 and 17), “generative machine-learning model” using a “generative machine-learning algorithm” (dependent claims 9 and 19), and “machine-learning process” (dependent claims 10 and 20), in addition to training the various machine-learning models. Because such elements are recited so generically and at such a high level of generality, they amount to nothing more than an attempt to generally link the claims to a particular technological environment—namely, implementation via computers. 
Even though independent claims 1 and 11 recite that the OCR process is performed by inputting the second dataset into an image classifier that is trained with training data containing pictures as inputs and classifier descriptors and data tags as outputs labeled by a subject matter expert, that the second dataset output from the image classifier contains classifier descriptors and data tags labeled by the subject matter expert, that the classifier descriptors and data tags labeled by the subject matter expert are used for identifying at least a missing feature in the second dataset, that the determination that a feature is important is performed based on an iterative training of a machine learning model, and that a (first) dataset is “updated” to track progress of the project, these limitations are nothing more than an attempt to limit the claims to a particular technological environment—namely, implementation via computers. As stated above in Step One, a person could be trained with similar data to learn to associate certain features with certain importance metrics. A person could also make notes on the progress of a project, and thus “update” their dataset. Thus, attempting to narrow the claims to a “machine learning model” is nothing more than an insignificant field-of-use limitation, describing the context rather than a particular manner of achieving the result. Similarly, that data is updated is nothing more than an attempt to limit the claims to a particular technological field—namely, implementation via computers. Additionally, independent claims 1 and 11 recite more specific details regarding the steps utilized in OCR (for converting portions of the second dataset into machine-encoded text). These are insignificant extra-solution activities, as they are limited only to the certain types of information being utilized within the (second) dataset, but are unrelated to the rest of the determination processes. 
As such, such limitations amount to nothing more than tangential or nominal additions to the claims, i.e., insignificant extra-solution activities. Independent claims 1 and 11 further recite more specific details regarding the iterative training of the importance metric machine learning model using training data (i.e., “applied to an input layer of nodes comprising an identification of a feature input, one or more intermediate layers, and an output layer of nodes comprising an importance metric parameter output; adjusting the one or more connections and one or more weights between nodes in adjacent layers of the importance metric machine learning model to iteratively update the output layer of nodes by updating the training data applied to the input layer of nodes”). Firstly, the slightly more descriptive detail is nothing more than an insignificant extra-solution activity, describing a tangential/nominal addition to the claim that is still recited at a high level of generality. Secondly, the types of data involved in the training, e.g., the feature input and importance metric parameter output, are nothing more than insignificant field-of-use limitations, describing the context rather than a particular manner of achieving the result. Additionally, and more specifically, the claims variously recite insignificant field-of-use limitations, describing the context rather than a particular manner of achieving the result. 
Such limitations include “the second dataset containing at least an input glyph having an unknown degree of completion”, that the second dataset is generated/contains machine-encoded text generated from an “optical character recognition process”, that binarization is used to perform conversion from color or grayscale by separating text from a background of image component, that pixels of at least one of the pre-processed images and the at least an input glyph are compared to pixels of a stored glyph on a pixel-by-pixel basis, that the same font and scale is ascertained, that the identification of at least a missing feature in the second dataset uses the classifier descriptors and data tags labeled by the subject matter expert, and that the first and second datasets are involved in the comparative process (independent claims 1 and 11); that the additional datum that is interpolated is generated “as a function of the necessary feature”; that the second dataset is generated using “an image and an image classifier” (dependent claims 2 and 12); that the feature identification machine-learning model is trained “as a function of the at least an exemplary dataset” (dependent claims 5 and 15); that the missing feature is identified “using the feature identification machine-learning model and the second dataset” (dependent claims 5 and 15); that “each training example correlates an identification of a feature with an importance metric parameter”, that the importance metric machine-learning model is “train[ed] as a function of the plurality of training examples” and that the importance metric is generated “using the identification of the at least a missing feature and the importance metric machine-learning model” (dependent claims 7 and 17); that the interpolating at least an additional datum is “a function of the at least an exemplary dataset” (dependent claims 8 and 18); that a generative machine-learning model is trained “using the at least an exemplary dataset and a 
generative machine-learning algorithm” (dependent claims 9 and 19); and that the comparative process further comprises “a machine-learning process” (dependent claims 10 and 20). The claims also variously recite receiving data (claims 1-2, 5, 7-8, 11-12, 15, and 17-18), which is nothing more than an insignificant extra-solution activity. Additionally, independent claims 1 and 11 recite “generate a prompt” for a proposed corrective action, and “display[ing] a result of the comparative process”, which is also an insignificant post-solution activity, i.e., a tangential or nominal addition to the claim that is unrelated to how any of the identification / determination / analysis steps are performed. Similarly, because the displaying is being performed by a “remote device”, this is nothing more than an attempt to link the claims to a particular technological environment—implementation via computers. The claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements reciting the use of various computing software and hardware components amount to no more than mere instructions to apply the judicial exception using generic components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Additionally, with regards to the claims’ recitation of receiving and displaying data, such steps are well-understood, routine, and conventional activities within the computing industry. See MPEP 2106.05(d)(II) (“Receiving or transmitting data over a network, e.g., using the Internet to gather data”, with regards to requesting user input and receiving user input; “Presenting offers and gathering statistics” with regards to the displaying step / generating a prompt for a proposed corrective action step). 
Additionally, independent claims 1 and 11 recite well-understood, routine, and conventional activities within OCR. See the attached Wikipedia article on OCR, which describes the various steps typically undertaken with respect to OCR. Note that some of the Wikipedia article’s paragraphs were copied into the present application’s Specification with respect to the OCR aspects, and subsequently claimed herein. See also, e.g., prior art reference Golchha, [6:41-67]-[8:1-42] and [9:14-35], e.g., in the 103 rejection below, which discloses all of the claimed limitations with respect to the OCR process. See also Dohrn (US 2024/0143615 A1) at [0028-0030], Mara (US 2025/0037008 A1) at [0044-0046], Smith et al. (US 2024/0370748 A1) at [0025] and [0027-0028], Lombard et al. (US 2024/0362735 A1) at [0109] and [0111-0112], Turner (US 2024/0354185 A1) at [0025] and [0027-0028], Arriaga (US 12,061,622 B1) at [17:48-67]-[19:1-2], and roughly 100 other prior art references found during the search, all of which include exactly the same language with respect to the OCR steps disclosed by the present Specification at [0026] and [0028-0029] (and which thus disclose the well-understood, routine, and conventional nature of what is being claimed in the present application with respect to the OCR). 
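For illustration only (not drawn from the claims or any cited reference), the conventional matrix-matching OCR steps discussed above, i.e., binarization, pixel-by-pixel comparison of an input glyph against stored glyph templates, and post-processing constrained to a lexicon of permitted words, can be sketched in a few lines. The 3x3 glyph templates, the threshold value, and the helper names below are invented for this sketch:

```python
# Toy sketch of conventional matrix-matching OCR: binarization,
# pixel-by-pixel comparison against stored glyphs, lexicon post-processing.
# Templates and lexicon logic are invented for illustration.

TEMPLATES = {  # stored glyphs (1 = ink, 0 = background); same font/scale assumed
    "I": [(0, 1, 0), (0, 1, 0), (0, 1, 0)],
    "T": [(1, 1, 1), (0, 1, 0), (0, 1, 0)],
}

def binarize(gray, threshold=128):
    """Convert greyscale pixels to binary, separating dark text from a light background."""
    return [tuple(1 if px < threshold else 0 for px in row) for row in gray]

def match_glyph(binary):
    """Matrix matching: score each stored glyph pixel-by-pixel, keep the best."""
    def score(template):
        return sum(a == b for row_a, row_b in zip(binary, template)
                   for a, b in zip(row_a, row_b))
    return max(TEMPLATES, key=lambda ch: score(TEMPLATES[ch]))

def constrain_to_lexicon(word, lexicon):
    """Post-process: force the raw output to the nearest permitted word."""
    if word in lexicon:
        return word
    return min(lexicon, key=lambda w: sum(a != b for a, b in zip(w, word)))

# A dark vertical stroke on a light background binarizes to the "I" template.
gray_glyph = [(200, 30, 210), (220, 25, 205), (215, 40, 200)]
assert match_glyph(binarize(gray_glyph)) == "I"
```

Each step maps onto one claimed operation (binarization, pixel-by-pixel matrix matching, lexicon-constrained output), which is the sense in which the steps are generic building blocks rather than a particular improvement.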
Lastly, independent claims 1 and 11 recite well-understood, routine, and conventional activities within the realm of neural networks by reciting the training of the machine learning model using training data applied to an input layer of nodes, one or more intermediate layers, and an output layer of nodes comprising an output, and the adjusting of the one or more connections and one or more weights between nodes in adjacent layers of the machine learning model to iteratively update the output layer of nodes by updating the training data applied to the input layer of nodes. Essentially, Applicant is claiming some of the most basic functions of a neural network, which are well-understood, routine, and conventional within the realm of computing. Even as an ordered combination, the claims as a whole do not contain any additional elements that amount to significantly more. The claims do nothing more than provide a generic environment for performing various analyses, i.e., determination and identification steps, on the data, attempt to limit the claims to particular fields of use by describing the type of data involved, and generically recite various software and hardware components that are not integrated into a practical application of the idea. In particular, the claims are not limited to any particular manner by which the determination and identification steps are performed. Instead, the claims recite the steps at a high level of generality, without a specific means for performing the stated functions. The claimed steps are directed to the resulting goal or effect, rather than a particular manner by which a computer would implement those steps. In other words, at that level of generality, the claims do no more than describe a desired function or outcome, without providing any limiting detail that confines the claims to a particular solution to an identified problem. 
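As a point of reference, the basic neural-network functions discussed above (an input layer, an intermediate layer, an output layer, and iterative adjustment of the weights between adjacent layers from training data) can be illustrated with a toy sketch. The two-feature task, layer sizes, and learning rate below are arbitrary choices for illustration and are not taken from the application or the cited art:

```python
# Toy 2-2-1 feed-forward network trained by back-propagation, illustrating
# the conventional structure and weight-adjustment steps described above.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Input layer (2 nodes) -> intermediate layer (2 nodes) -> output layer (1 node).
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input->hidden
b_h = [random.uniform(-1, 1) for _ in range(2)]                       # hidden biases
w_ho = [random.uniform(-1, 1) for _ in range(2)]                      # hidden->output
b_o = random.uniform(-1, 1)                                           # output bias

def forward(x):
    hidden = [sigmoid(b_h[j] + sum(w_ih[j][i] * x[i] for i in range(2)))
              for j in range(2)]
    out = sigmoid(b_o + sum(w_ho[j] * hidden[j] for j in range(2)))
    return hidden, out

def train_step(x, target, lr=0.5):
    """One back-propagation step: adjust weights between adjacent layers."""
    global b_o
    hidden, out = forward(x)
    d_out = (out - target) * out * (1 - out)        # output-layer error term
    for j in range(2):
        d_hid = d_out * w_ho[j] * hidden[j] * (1 - hidden[j])
        w_ho[j] -= lr * d_out * hidden[j]           # hidden -> output weights
        b_h[j] -= lr * d_hid
        for i in range(2):
            w_ih[j][i] -= lr * d_hid * x[i]         # input -> hidden weights
    b_o -= lr * d_out

# Toy training data: an OR-like mapping from two feature indicators to a target.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
for _ in range(2000):
    for x, t in data:
        train_step(x, t)
assert loss() < before  # iterative weight adjustment reduced the training error
```

The entire mechanism fits in a few dozen lines of textbook code, which is consistent with characterizing these training steps as generic rather than as a particular technical solution.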
The purely functional nature of the claim confirms that it is directed to an abstract idea, not to a concrete embodiment of that idea (see Affinity Labs of Texas LLC v. Amazon.com Inc., 838 F.3d 1266 (Fed. Cir. 2016) at p. 7-8, citing Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir. 2016), slip op. 12 (“[T]he essentially result-focused, functional character of claim language has been a frequent feature of claims held ineligible under § 101”)). As a whole, the claims do not go beyond stating the relevant functions in general terms, without limiting them to a technical means for performing the functions that is arguably an advance over conventional database technologies. Therefore, for at least the aforementioned reasons, the claims are rejected under 35 U.S.C. 101 for being directed to a judicial exception (i.e., an abstract idea) without significantly more. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-2, 7, 10-12, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Dalyac et al. (“Dalyac”) (US 2018/0300576 A1), in view of Huts et al. (“Huts”) (US 2023/0067026 A1, incorporating by reference Achin et al. (“IBR-Achin”) (App. Ser. No. 15/790,803 or US 2018/0046926 A1 at [0002])), in further view of Subramanian et al. (“Subramanian”) (US 2021/0117878 A1), in further view of Golchha et al. 
(“Golchha”) (US 11,977,952 B1). Regarding claim 1: Dalyac teaches An apparatus for integrated optimization-guided interpolation in datasets, wherein the apparatus comprises: at least a processor, and a memory communicatively configuring the at least a processor, the memory containing instructions configuring the at least a processor to (Dalyac, [0027-0031], where the disclosed system may be embodied as an apparatus, e.g., as a computer program or computer program product for carrying out any of the disclosed methods, and may be expressed in terms of their corresponding structure, such as a suitably programmed processor and associated memory): generate a first dataset (Dalyac, [0052-0064], where the system will generate a dataset by pre-training a model on the best possible similar data (i.e., regarding volume and labels), and, as a last step, fine-tune the latest feature extraction model using some/all of the labelled dataset or feature set, until sufficient data and model quality is achieved), wherein generating the first dataset comprises: comparing at least two user inputs using the representative … candidate model; and determining a required piece of information as a function of the comparison between the at least two user inputs based on the representative … candidate model (Dalyac, [0085], where the system optimizes the re-training of a model (i.e., “representative…candidate model”) to take into account the new user input that includes a ranked list of images/image clusters (i.e., “comparing at least two user inputs to the representative…candidate model”). See also Dalyac, [0051-0086], where the system provides for semi-automatic labeling in which the user provides guidance via input/output 108 for modelling dataset 102 with computational model 106. 
The system initially pre-trains a model on the best possible similar data, and at Step 2, models the target data with the pre-trained model (i.e., “comparing at least two user inputs to the representative…candidate model”), and prepares the modelled target data for the user for review (i.e., “determining a required piece of information as a function of the comparison between the at least two user inputs and the representative…candidate model”) by extracting features of the target dataset with the model (referred to as the feature set), performing dimensionality reduction on the feature set, assigning labels to no/some/all feature points, and presenting a user interface to the user for browsing and editing the tagged feature set (in which the user browses through the labelled feature set to find regions to validate, and the user subsequently validates or corrects labels seen on the interface). The cycle is repeated from Step 2 (i.e., modeling the target data with the pre-trained model) (i.e., “comparing at least a user input to the representative…candidate model”); receive a second dataset … having an unknown degree of completion (Dalyac, [0107-0108], where the system performs vehicle damage estimation, where a user captures images 712 of a damaged vehicle and transmits the images to the system. The system uses a computational model 706 to evaluate the images 712 and produce a vehicle damage estimate. To produce a repair estimate, the system recognizes a set of damaged parts via deep learning, where for an image provided from a vehicle owner where no part labels are provided (i.e., “unknown degree of completion”), a fairly robust model for the image data is necessary), …; identify at least a missing feature in the second dataset (Dalyac, [0109-0111], where the system recognizes a set of damaged parts via deep learning. 
The system predicts a “repair”/“replace” label for each damaged part, as well as predicting a “not visible” (i.e., missing), “undamaged”, “repair”, or “replace” label for relevant internal parts, where internal parts were not directly observed, i.e., were missing. See also, e.g., Dalyac, [0165], where internal damage prediction can be implemented with predictive analytics such as regression models, where images of a damaged vehicle do not permit direct observation of internal parts); determine that at least a missing feature is a necessary feature by generating an importance metric using the at least a missing feature (Dalyac, [0169], where the system analyzes the number of internal parts that can be omitted from the regression model for performing internal damage prediction) … . Dalyac does not appear to explicitly teach [the second dataset] containing at least an input glyph; [wherein generating the first dataset comprises] identifying a type of project; selecting a representative stored candidate model as a function of the identified type of project; [a representative] stored [candidate model]; wherein receiving the second dataset comprises converting at least a portion of the second dataset into machine-encoded text by at least an optical character recognition (OCR) process, wherein converting the at least a portion of the second dataset into the machine-encoded text comprises converting images of text in the at least a portion of the second dataset into the machine-encoded text and further comprises: pre-processing image components of the images, wherein pre-processing the image components comprises: de-skewing at least one of the image components by applying a homography transform to the at least one of the image components; using binarization to convert at least a portion of one of the images from color or greyscale to a binary image by separating text from a background of image component and using normalization to normalize an aspect ratio of at least one 
of the image components; implementing an OCR algorithm comprising a matrix matching process, wherein implementing the OCR algorithm comprises: comparing pixels of at least one of the pre-processed images and the at least an input glyph to pixels of a stored glyph on a pixel-by-pixel basis; and ascertaining a same font and scale; and post-processing an output of the matrix matching process to increase OCR accuracy by constraining the output to a lexicon containing a set of words whose occurrence is permitted; inputting the second dataset into an image classifier, wherein the image classifier is trained with training data containing pictures as inputs and classifier descriptors and data tags as outputs labeled by a subject matter expert; and outputting, the second dataset from the image classifier containing classifier descriptors and data tags labeled by the subject matter expert; [the second dataset] including the at least a portion of the second dataset converted into the machine-encoded text by the at an OCR process that can be input to a comparative process but is not present in the second dataset using the classifier descriptors and data tags labeled by the subject matter expert; [determine that at least a missing feature is a necessary feature by generating an importance metric using the identification of the at least a missing feature] and comparing the importance metric to a threshold criterion, wherein generating the importance metric comprises: iteratively training an importance metric machine learning model using training data applied to an input layer of nodes comprising an identification of a feature input, one or more intermediate layers, and an output layer of nodes comprising an importance metric parameter output; adjusting one or more connections and one or more weights between nodes in adjacent layers of the importance metric machine learning model to iteratively update the one or more weights between nodes by updating the training data applied to 
the input layer of nodes; interpolate at least an additional datum into the second dataset, wherein the at least an additional datum is a substitute for the missing feature, wherein the at least an additional datum is generated as a function of the necessary feature; perform a comparative process using the first dataset and the interpolated second dataset, wherein the first dataset represents a project to be completed, wherein the second dataset represents data concerning the project, wherein the comparative process determines an extent to which the project represented by the second dataset has been completed according to the first dataset; and configure a remote device to display a result of the comparative process. Huts teaches [wherein generating the first dataset comprises] identifying a type of project; selecting a representative stored candidate model as a function of the identified type of project; [a representative] stored [candidate model] (IBR-Achin, [0144-0149] and [0163-0164], where the system determines the suitability of a predictive modeling procedure for a prediction problem based on the performance of similar predictive modeling procedures on similar prediction problems. The exploration engine 110 may use tools (included within a library of modeling techniques, i.e., “representative stored candidate model”) for assessing the similarities between prediction problems. The exploration engine 110 may calculate the suitability of the modeling procedure at issue based on performance of similar modeling procedures on the similar prediction problems, and the system subsequently selects a predictive model for the prediction problem based on the evaluations, e.g., scores, of the generated predictive models. See also IBR-Achin, [0042], where a memory stores a machine-executable module encoding a predictive modeling procedure (i.e., “stored candidate model”). 
Although IBR-Achin does not appear to explicitly state that the “prediction problem” corresponds to a project, IBR-Achin suggests in [0223] and [0228] that models may be associated with projects. Therefore, it would have been obvious to one of ordinary skill in the art to have substituted IBR-Achin’s “prediction problem” with IBR-Achin’s “project” because a “project” may be regarded as having an association with prediction problems. See, e.g., IBR-Achin, [0213], where users may manage multiple modeling projects within an organization, gain insights into the dataset and model results, and/or deploy completed models to produce predictions on new data. One would have found it obvious to perform such a substitution with predictably equivalent operating characteristics, because a project involves a prediction problem); [determine that at least a missing feature is a necessary feature by generating an importance metric using the identification of the at least a missing feature] and comparing the importance metric to a threshold criterion (Huts, [0163], where feature engineering operations are performed based on the predictive value (e.g., feature importance) of the features, where features are determined to be more/less important based on whether they are greater/less than a threshold value, e.g., a feature is classified as “more important” if the predictive value of the feature is greater than a threshold value, if the feature has one of the N highest predictive values among the features in the dataset, etc. See Huts, [0126], where feature engineering operations include infilling missing variable values, i.e., “that at least a missing feature is a necessary feature”. 
This indicates that Huts’s system will infill a missing variable value upon determination that the (missing) variable has a high enough importance, i.e., that the missing feature is “necessary”, as claimed), wherein generating the importance metric comprises: iteratively training an importance metric machine learning model using training data applied to an input layer of nodes comprising an identification of a feature input, one or more intermediate layers, and an output layer of nodes comprising an importance metric parameter output; adjusting one or more connections and one or more weights between nodes in adjacent layers of the importance metric machine learning model to iteratively update the one or more weights between nodes by updating the training data applied to the input layer of nodes (Huts, [0280], where feature impact, which is an estimate of the extent to which a feature F contributes to the performance (e.g., accuracy) of a model M, may be assessed based on, e.g., IBR-Achin, [0183], where variable importance, which measures the degree of significance each feature has in predicting a target, may be analyzed using gradient boosted trees, random forest (a form of machine learning), and/or other suitable techniques. See also IBR-Achin, [0383], where for calculating the importance of one feature for one model and/or modeling technique, the engine can iterate over models and/or modeling techniques to determine the relative importance of a feature across models and/or modeling techniques. See IBR-Achin, [0263] and [0271], where the system selects the response variable and chooses a primary fitting metric. The set of predictive models that may be chosen includes a neural network. See also IBR-Achin, [0380-0384], where engine 110 maintains a list of modeling techniques that generally produce illustrative results for feature importance, and when evaluating a dataset, the engine 110 may automatically run all or some of these modeling techniques. 
The importance of any feature may be calculated by engine 110 using universal partial dependence, where (1) the engine 110 obtains the accuracy metric (i.e., “importance metric parameter output”) for a predictive model fitted on the sample (i.e., “training data”) using the modeling technique (i.e., “an importance metric machine learning model”), where this fitting may be performed from scratch or use a previous fitting, and then (2) for a given feature (i.e., “identification of a feature input”), the engine 110 takes all its values across all observations (i.e., correlations between the feature and the corresponding model(s)), shuffles them, and reassigns them to the observations (i.e., “iteratively training an importance metric machine learning model using training data”). The engine may then rescore the model on the dataset with the shuffled feature values (i.e., “feature input”), producing a new value for the accuracy metric (i.e., “an importance metric parameter output”). The engine can iterate over features to determine the relative importance of features within a model and/or modeling technique, and/or vice versa (iterating over models and/or modeling techniques to determine the relative importance of a feature across models and/or modeling techniques). Although IBR-Achin does not appear to explicitly state that a neural network is utilized in determining feature importance, it would have been obvious to have substituted the modeling techniques of IBR-Achin’s disclosure in [0380-0384] for calculating feature importance with a neural network model, because IBR-Achin’s feature importance involves combining data from multiple domains, and, in general, feed-forward neural networks are particularly well-suited to handling data analytics problems that involve combining data from multiple domains (see, e.g., Huts, [0131]). 
Therefore, one of ordinary skill in the art would have found it obvious to perform such a substitution with the motivation of greater accuracy in determining feature importance via the use of a neural network. Furthermore, although IBR-Achin does not appear to explicitly state the structure of a neural network, one of ordinary skill in the art would have recognized that neural networks comprise at least three layers, including an input layer, an intermediate or hidden layer, and an output layer, where the input layer comprises a plurality of input neuron units (i.e., which, when taken in the context of IBR-Achin’s disclosure, represents the feature input), the intermediate layer comprises a plurality of intermediate neuron units, and the output layer comprises at least one output neuron unit (i.e., which, when taken in the context of IBR-Achin’s disclosure, represents the response variable or output, i.e., the feature’s importance). Furthermore, one of ordinary skill in the art would have recognized that training such a neural network would involve back-propagation, i.e., adjusting the output weighting coefficients and the intermediate weighting coefficients. 
Note that IBR-Achin’s disclosure of iterating through the features and the models (see IBR-Achin, [0380-0384] above) corresponds to the claimed limitation of “updating the training data applied to the input layer of nodes”); [and] interpolate at least an additional datum into the second dataset, wherein the at least an additional datum is a substitute for the missing feature, wherein the at least an additional datum is generated as a function of the necessary feature (Huts, [0126], where the feature engineering operations performed by the data preparation and feature engineering module 124 may include, e.g., infilling missing variable values, as well as feature selection operations including dropping uninformative features, dropping highly correlated features, replacing original features with top principal components, etc. Although Huts does not appear to explicitly state that the infilling of missing data values is generated “as a function of” important features, Huts discloses determining necessary features (see, e.g., Huts, [0126], [0163], [0183] and IBR-Achin, [0183] and [0380-0384] above), and also discloses feature selection (i.e., a consideration of the feature characteristics) when preparing data for modeling, such as dropping uninformative features, dropping highly correlated features, etc. 
Therefore, it would have been obvious to one of ordinary skill in the art to have modified Huts such that the feature selection, e.g., using feature importance, is utilized for determining which missing values to infill/impute with the motivation of greater efficiency and faster processing (e.g., firstly identifying which features are important enough to be selected for modeling and then filling in the missing features, rather than, e.g., filling in missing features for a dataset (which may be computationally costly) and then dropping some of those features during the feature selection process, which wastes processing resources, as the infilled missing values were calculated for features that were not utilized in the end for modeling)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Dalyac and Huts (hereinafter “Dalyac as modified”) with the motivation of (1) optimizing feature selection operations according to different modeling techniques, as different modeling techniques may produce different measures of the importance of the same feature for the same dataset (IBR-Achin, [0380]), (2) identifying features that are important to increase modeling accuracy, and (3) performing missing value imputation on only those features that are important, thereby saving computational costs and resources (i.e., instead of automatically performing imputation on all missing data values, which is much more computationally expensive), while allowing a model to still be trained in an accurate manner despite the absence of such important features. 
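For illustration only (outside the record), the efficiency rationale above — selecting important features first and imputing missing values only for those features, rather than imputing everything and then dropping columns — could be sketched as below; the feature names, importance scores, and the mean-fill strategy are invented:

```python
# Select-then-impute sketch: drop unimportant features first, then fill
# missing values (None) only in the retained columns with the column mean,
# so no imputation effort is spent on features discarded before modeling.
def select_then_impute(rows, importance, threshold):
    """rows: list of dicts (None marks a missing value);
    importance: feature -> score; keep features scoring >= threshold."""
    kept = [f for f, s in importance.items() if s >= threshold]
    out = [{f: r.get(f) for f in kept} for r in rows]
    for f in kept:
        present = [r[f] for r in out if r[f] is not None]
        mean = sum(present) / len(present) if present else 0.0
        for r in out:
            if r[f] is None:
                r[f] = mean  # impute only the selected features
    return out

data = [{"area": 10.0, "temp": None, "noise": 1.0},
        {"area": None, "temp": 20.0, "noise": 2.0},
        {"area": 30.0, "temp": 22.0, "noise": None}]
scores = {"area": 0.9, "temp": 0.7, "noise": 0.1}
clean = select_then_impute(data, scores, threshold=0.5)
```

The unimportant "noise" column is dropped without its missing value ever being computed, which is the cost saving the combination rationale relies on.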
Dalyac as modified does not appear to explicitly teach [the second dataset] containing at least an input glyph; wherein receiving the second dataset comprises converting at least a portion of the second dataset into machine-encoded text by at least an optical character recognition (OCR) process, wherein converting the at least a portion of the second dataset into the machine-encoded text comprises converting images of text in the at least a portion of the second dataset into the machine-encoded text and further comprises: pre-processing image components of the images, wherein pre-processing the image components comprises: de-skewing at least one of the image components by applying a homography transform to the at least one of the image components; using binarization to convert at least a portion of one of the images from color or greyscale to a binary image format by separating text from a background of image component and using normalization to normalize an aspect ratio of at least one of the image components; implementing an OCR algorithm comprising a matrix matching process, wherein implementing the OCR algorithm comprises: comparing pixels of at least one of the pre-processed images and the at least an input glyph to pixels of a stored glyph on a pixel-by-pixel basis; and ascertaining a same font and scale; and post-processing an output of the matrix matching process to increase OCR accuracy by constraining the output to a lexicon containing a set of words whose occurrence is permitted; inputting the second dataset into an image classifier, wherein the image classifier is trained with training data containing pictures as inputs and classifier descriptors and data tags as outputs labeled by a subject matter expert; and outputting, the second dataset from the image classifier containing classifier descriptors and data tags labeled by the subject matter expert; [the second dataset] including the at least a portion of the second dataset converted into the 
machine-encoded text by the at an OCR process that can be input to a comparative process but is not present in the second dataset using the classifier descriptors and data tags labeled by the subject matter expert; perform a comparative process using the first dataset and the interpolated second dataset, wherein the first dataset represents a project to be completed, wherein the second dataset represents data concerning the project, wherein the comparative process determines an extent to which the project represented by the second dataset has been completed according to the first dataset; and configure a remote device to display a result of the comparative process. Subramanian teaches perform a comparative process using the first dataset and the interpolated second dataset, wherein the first dataset represents a project to be completed, wherein the second dataset represents data concerning the project, wherein the comparative process determines an extent to which the project represented by the second dataset has been completed according to the first dataset (Subramanian, [0023], where the system tracks progress of a paving project at a worksite 112, where progress data may be compared with target data (or target goals) automatically generated by the system controller, including information indicative of the progress of the paving project and/or the productivity of one or more components of the paving system. See Subramanian, [0035], where target data may be calculated based in part on historical data of one or more paving projects; see also Subramanian, [0046], where historical data for one or more previous paving projects may be analyzed to generate the target data for the user, and use the historical data to determine weighting factors, constants, etc., to determine the target data and track the progress of the paving project. 
See also Subramanian, [0053], where the system may determine, based in part on the sensor data (i.e., “first dataset [that] represents a project to be completed”), the one or more parameters, the target data and/or historical data associated with the paving project 206 and/or other paving projects (i.e., “second dataset represents data concerning the project”), one or more components of the paving system 100 that are underperforming compared to the target data at 516 (i.e., “determines an extent to which the project represented by the second dataset has been completed according to the first dataset”). For example, the system controller may determine, based in part on sensor data, that the paving machine 106, the haul trucks 104, and/or the paving material plant 102 may be underperforming compared to the target data for the paving project 206, e.g., the haul trucks are spending too much time waiting at the paving material plant 102 to be loaded with paving material 108; or that there are too few haul trucks 104 in the paving system and are creating delays for the paving machine 106 (i.e., “a comparison of executed actions compared to project estimates”). The system controller 122 may compare the sensor data of one or more components (i.e., “first dataset”) with their respective target data (i.e., “second dataset”) to determine whether the one or more components are reaching the target data, and if not, then analyzing the sensor data to determine the one or more factors causing the component(s) to not reach the target data, e.g., determining from the historical data that the haul trucks typically spend less than one hour at a paving material plant 102 being loaded with paving material. See Huts, [0126] above with regards to the dataset being “interpolated”); and configure a remote device to display a result of the comparative process (Subramanian, [FIG. 
2] and [0035-0037] where the system displays information in an interface indicative of project status for one or more paving projects, including sensor data received from one or more components of the paving system, and a progress value that is generated from the sensor data, indicative of the paving project statuses, as well as a completion percentage 210 (which is based at least in part on target data). The completion percentage 210 may represent an actual amount of any one or more of the parameters completed by the paving system, i.e., an amount of the paving project 206 that is complete. The target data may also be represented by a projected (completion) bar 214 displayed adjacent to the completion bar 212). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Dalyac as modified and Subramanian (hereinafter “Dalyac as modified”) with the motivation of enabling users to evaluate productivity and progress of a project while the project is in progress (Subramanian, [0002]) more accurately (Subramanian, [0048]), as well as providing recommendations to users to overcome one or more inefficiencies (Subramanian, [0042]) to aid the user in managing the project. 
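For illustration only (not part of the record), the comparative step mapped to Subramanian above — comparing per-component sensor data against target data to flag underperforming components and derive a completion percentage — could be sketched as follows; the component names and all numbers are invented:

```python
# Compare sensor data (work completed) against target data (work expected)
# per component, returning an overall completion percentage and a list of
# components currently falling short of their targets.
def compare_progress(sensor, target):
    total_done = sum(sensor.values())
    total_target = sum(target.values())
    completion_pct = 100.0 * total_done / total_target
    under = sorted(c for c in sensor if sensor[c] < target[c])
    return completion_pct, under

sensor = {"paving_machine": 40.0, "haul_trucks": 25.0, "plant": 35.0}
target = {"paving_machine": 50.0, "haul_trucks": 50.0, "plant": 30.0}
pct, lagging = compare_progress(sensor, target)
```

Here the haul trucks and paving machine would be flagged as underperforming relative to target, analogous to Subramanian's [0053] example.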
Dalyac as modified does not appear to explicitly teach [the second dataset] containing at least an input glyph; wherein receiving the second dataset comprises converting at least a portion of the second dataset into machine-encoded text by at least an optical character recognition (OCR) process, wherein converting the at least a portion of the second dataset into the machine-encoded text comprises converting images of text in the at least a portion of the second dataset into the machine-encoded text and further comprises: pre-processing image components of the images, wherein pre-processing the image components comprises: de-skewing at least one of the image components by applying a homography transform to the at least one of the image components; using binarization to convert at least a portion of one of the images from color or greyscale to a binary image format by separating text from a background of image component and using normalization to normalize an aspect ratio of at least one of the image components; implementing an OCR algorithm comprising a matrix matching process, wherein implementing the OCR algorithm comprises: comparing pixels of at least one of the pre-processed images and the at least an input glyph to pixels of a stored glyph on a pixel-by-pixel basis; and ascertaining a same font and scale; and post-processing an output of the matrix matching process to increase OCR accuracy by constraining the output to a lexicon containing a set of words whose occurrence is permitted; inputting the second dataset into an image classifier, wherein the image classifier is trained with training data containing pictures as inputs and classifier descriptors and data tags as outputs labeled by a subject matter expert; and outputting, the second dataset from the image classifier containing classifier descriptors and data tags labeled by the subject matter expert; [and the second dataset] including the at least a portion of the second dataset converted into the 
machine-encoded text by the at an OCR process that can be input to a comparative process but is not present in the second dataset using the classifier descriptors and data tags labeled by the subject matter expert. Golchha teaches [the second dataset] containing at least an input glyph; wherein receiving the second dataset comprises converting at least a portion of the second dataset into machine-encoded text by at least an optical character recognition (OCR) process, wherein converting the at least a portion of the second dataset into the machine-encoded text comprises converting images of text in the at least a portion of the second dataset into the machine-encoded text (Golchha, [6:41-67]-[7:1-52], where the system converts data from a (received) physical document or an image to machine encoded text or binary code using an optical character recognition (OCR) system. A text recognition module 124 for automatically recognizing and extracting text from images, may include an optical character recognition (OCR) system which is configured to convert images of written text into machine-encoded text. See Dalyac, [0107-0108] above with respect to “receiving the second dataset” as claimed) and further comprises: pre-processing image components of the images, wherein pre-processing the image components comprises: de-skewing at least one of the image components by applying a homography transform to the at least one of the image components; using binarization to convert at least a portion of one of the images from color or greyscale to a binary image format by separating text from a background of image component and using normalization to normalize an aspect ratio of at least one of the image components (Golchha, [7:65-67]-[8:1-30], where OCR processes may employ pre-processing of image components, where pre-processing may include de-skew, binarization, and normalization. A de-skew process may include applying a homography transform to the image component to align text. 
A binarization process converts an image from color or greyscale to black-and-white, i.e., a binary image. Binarization may be performed as a simple way of separating text (or any other desired image component) from the background of the image component. A normalization process normalizes the aspect ratio of the image component); implementing an OCR algorithm comprising a matrix matching process, wherein implementing the OCR algorithm comprises: comparing pixels of at least one of the pre-processed images and the at least an input glyph to pixels of a stored glyph on a pixel-by-pixel basis; and ascertaining a same font and scale (Golchha, [8:31-42], where an OCR process will include an OCR algorithm, which includes a matrix-matching process. Matrix-matching involves comparing an image to a stored glyph on a pixel-by-pixel basis, and may rely on an input glyph being correctly isolated from the rest of the image component, as well as also relying on a stored glyph being in a similar font and at the same scale as input glyph. Note that “same” font is a narrower form of “similar” font, i.e., “same” being an exact match, whereas “similar” font encompasses both exact and close matches. One of ordinary skill in the art would have found it obvious to have modified Golchha to narrow to “same” fonts with the motivation of narrowing to more exact matches for more precise matching); and post-processing an output of the matrix matching process to increase OCR accuracy by constraining the output to a lexicon containing a set of words whose occurrence is permitted (Golchha, [9:14-35], where OCR may include post-processing, where OCR accuracy can be increased if output is constrained by a lexicon that includes a list or set of words allowed to occur (in a document). 
Note that although Golchha does not appear to explicitly state that the act of constraining the output to a lexicon is performed, utilizing instead the word “if”, the present application’s Specification, [0032] also utilizes the word “if”, and therefore one of ordinary skill in the art would have found it obvious to have actively performed the step of constraining the output to a lexicon with the motivation of increasing OCR accuracy); inputting the second dataset into an image classifier, wherein the image classifier is trained with training data containing pictures as inputs and classifier descriptors and data tags as outputs labeled by a subject matter expert; and outputting, the second dataset from the image classifier containing classifier descriptors and data tags labeled by the subject matter expert (Golchha, [21:30-61], where for training data that is not categorized, i.e., training data not formatted or containing descriptors for some elements of data, machine-learning algorithms may perform ad-hoc categorization and/or automated association of data in the data entry with descriptors or into a given format (i.e., “at least a portion of the second dataset…that…is not present in the second dataset using the classifier descriptors and data tags”). See Golchha, [18:49-67]-[19:1-10], where a machine learning model for performing classification may be trained using an initial set of samples to cover an initial heuristic and/or “first guess” at an output and/or relationship, which may be seeded using expert input received (i.e., “classifier descriptors and data tags labeled by the subject matter expert”). 
See Dalyac, [0107-0108] above with respect to “the second dataset” as claimed); [and] [the second dataset] including the at least a portion of the second dataset converted into the machine-encoded text by the at an OCR process that can be input to a comparative process but is not present in the second dataset using the classifier descriptors and data tags labeled by the subject matter expert (Golchha, [], where an OCR system uses image recognition algorithms to identify and convert scanned image characters into a scanned label 120 that includes machine encoded text, where the OCR software analyzes the scanned image, identifies the shape and patterns of the characters, and applies character recognition techniques to convert them into digital text. See Golchha above with respect to the dataset (training dataset) being labeled “using the classifier descriptors and data tags labeled by the subject matter expert” and which was “not present in the…dataset”. See Dalyac, [0107-0108] above with respect to “the second dataset” as claimed. See Subramanian above with respect to the dataset that “can be input to a comparative process”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Dalyac as modified and Golchha (hereinafter “Dalyac as modified”). 
Huts discloses at [0122] that the system may perform one or more computer vision tasks, and one of ordinary skill in the art would have recognized that OCR is a type of computer vision task.11 Therefore, it would have been obvious to one of ordinary skill in the art to have incorporated OCR into Dalyac as modified’s computer vision task in the case that the data contains relevant text, e.g., documents, thereby allowing data to be extracted from a wide variety of source types12, so that it can be read and interpreted by a computer or software system, as well as for further processing or integration into a digital system (Golchha, [6:41-67]-[7:1-11]), as well as enabling ad-hoc categorization thereby enabling automated association of data in the data entry with descriptors or into a given format, which reduces manual requirements, thereby increasing efficiency. The Examiner notes that the post-processing step of constraining the output to a lexicon containing a set of words whose occurrence is permitted “to increase OCR accuracy” has been considered as an intended use/result, and is not afforded patentable weight. The Examiner notes that “A claim containing a ‘recitation with respect to the manner in which a claimed apparatus is intended to be employed does not differentiate the claimed apparatus from a prior art apparatus’ if the prior art apparatus teaches all the structural limitations of the claim.” Ex parte Masham, 2 USPQ2d 1647 (Bd. Pat. App. & Inter. 1987), see MPEP 2114. The recited prior art has the capability to perform these intended use limitations, and therefore, the prior art meets the claimed limitations. See MPEP 2111.02; see also In re Schreiber, 128 F.3d 1473, 1477, 44 USPQ2d 1429, 1431 (Fed. Cir. 1997). 
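For illustration only (not part of the record), the OCR steps mapped to Golchha above — binarization to separate text from background, matrix matching of an input glyph against stored glyphs on a pixel-by-pixel basis, and post-processing that constrains output to a lexicon of permitted words — could be sketched as below; the glyph bitmaps, threshold, and lexicon are all invented:

```python
# Toy OCR sketch: 2x2 binary bitmaps stand in for stored glyphs of one
# font and scale; matching is a pixel-by-pixel mismatch count.
GLYPHS = {
    "C": [[1, 1], [1, 0]],
    "A": [[0, 1], [1, 1]],
    "T": [[1, 1], [0, 1]],
}

def binarize(gray, thresh=128):
    # separate text (dark pixels -> 1) from background (light pixels -> 0)
    return [[1 if px < thresh else 0 for px in row] for row in gray]

def match_glyph(bitmap):
    # matrix matching: fewest pixel-by-pixel mismatches wins
    def mismatches(g):
        return sum(a != b for ra, rb in zip(bitmap, g) for a, b in zip(ra, rb))
    return min(GLYPHS, key=lambda ch: mismatches(GLYPHS[ch]))

def constrain_to_lexicon(word, lexicon):
    # post-processing: only words whose occurrence is permitted may be output
    if word in lexicon:
        return word
    # otherwise fall back to the permitted word agreeing at the most positions
    return max(lexicon, key=lambda w: sum(a == b for a, b in zip(word, w)))

gray_glyphs = [  # invented greyscale scans of three glyphs (dark = ink)
    [[10, 20], [15, 200]],
    [[240, 30], [25, 20]],
    [[12, 18], [230, 40]],
]
word = "".join(match_glyph(binarize(g)) for g in gray_glyphs)
result = constrain_to_lexicon(word, {"CAT", "DOG"})
```

The de-skew (homography) and aspect-ratio normalization steps are omitted here for brevity; they would run before binarization in the pre-processing stage the claim recites.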
Regarding claim 2: Dalyac as modified teaches The apparatus of claim 1, wherein receiving the second dataset further comprises: generating the second dataset using at least an image of the images of the second dataset and an image classifier (Dalyac, [0165-0170], where the system predicts internal damage by considering the extent of damage of a part in order to determine a labor operation (repair, replace, do nothing). The output of a repair/replace classifier (trained on semi-automatically labelled data) could feed into this. See also Dalyac, [0138-0163], where a classifier (“part not visible”, “part undamaged”, “repair part”, and “replace part”) was previously trained. See also Dalyac, [0131], where a multi-instance learning (MIL) convolutional neural network enables multiple images to be analyzed, and ultimately the machine learning model can output that, e.g., a rear bumper is in need of repair). Although Dalyac as modified does not appear to explicitly state that the images are “of the second dataset” as claimed, the claimed steps would have been performed the same regardless of whether the images were part of the second dataset or came from some other source in generating the data for the second dataset. Therefore, one of ordinary skill in the art would have found it obvious to have modified Dalyac as modified with the motivation of converting the second dataset into a standard representation for easier comparison, as opposed to being composed of multiple data format types. Regarding claim 7: Dalyac as modified teaches The apparatus of claim 1, wherein generating the importance metric further comprises: receiving a plurality of training examples, wherein each training example correlates an identification of a feature with an importance metric parameter (Huts, [0291], where feature importance values may be based on SHAP values of a tree-based model’s features (i.e., “importance metric parameter”). 
The feature importance values may be determined based on selecting an absolute number of samples (i.e., “training examples”) and determining an average of the Shapley values for each feature of the selected samples (i.e., “wherein each training example correlates an identification of a feature with an importance metric parameter”). See also, e.g., IBR-Achin, [0380-0384] in claim 1 above with regards to the received “observations” (i.e., training examples), which relate features to an accuracy metric (i.e., “an importance metric parameter”)); training an importance metric machine-learning model as a function of the plurality of training examples (Huts, [0292], where the model development system may determine SHAP-based feature importance scores for one or more features of a data set during the model creation and evaluation phase. See also, e.g., IBR-Achin, [0380-0384] in claim 1 above with regards to the received “observations” (i.e., training examples), which relate features to an accuracy metric (i.e., “an importance metric parameter”), and are used to train a model for determining feature importance); and generating the importance metric using the identification of the at least a missing feature and the importance metric machine-learning model (Huts, [0326], where for each of the constituent image features, a feature importance score is determined, where the feature importance score is a univariate feature importance score, a feature impact score, or a Shapley value. See Huts, [0163], where feature engineering operations (such as infilling missing variable values; see Huts, [0126]), are performed based on an importance of the feature. This indicates that Huts’s system will infill a missing variable value upon determination that the variable has a high enough importance, i.e., that the missing feature is “necessary”, as claimed). 
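For illustration only (outside the record), the Huts [0291] approach described above — taking a feature's importance as the average of its Shapley values over a selected set of samples — could be sketched as follows; the attribution numbers are invented, and a real system would obtain them from a SHAP explainer rather than hand-written dicts:

```python
# Mean-absolute-Shapley sketch: given per-sample attribution values for each
# feature, the feature importance is the average absolute Shapley value.
def mean_abs_importance(shap_values):
    """shap_values: list of per-sample dicts {feature: Shapley value}.
    Returns feature -> mean absolute Shapley value."""
    features = shap_values[0].keys()
    n = len(shap_values)
    return {f: sum(abs(s[f]) for s in shap_values) / n for f in features}

samples = [{"age": 0.30, "zip": -0.02},
           {"age": -0.20, "zip": 0.04}]
imp = mean_abs_importance(samples)
```

A feature whose score clears a chosen threshold would then be treated as "important" (and, per the claim mapping, worth imputing when missing).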
Although Huts does not appear to explicitly state that the SHAP-based feature importance scores are based on a model, the use of such a machine learning model would have been suggested to one of ordinary skill in the art. See, e.g., Huts, [0163], where feature engineering classifies a feature as being “more important” if the predictive value of the feature is greater than a threshold value, implying that the “classification” involves some form of machine learning. Therefore, one of ordinary skill in the art would have found it obvious to modify Huts to explicitly include a machine-learning model for deriving feature importance with the motivation of responding to dynamic changes in data, in which features may become more or less important over time, i.e., creating a learning system that has dynamically-updated feature importance/selection capabilities. Regarding claim 10: Dalyac as modified teaches The apparatus of claim 1, wherein the comparative process further comprises a machine-learning process (Subramanian, [0019] and [0049], where the disclosed system controller 122, which is involved in performing the various disclosed steps of claim 1, may rely on neural networks, machine learning algorithms, etc., e.g., determining weighting factors, constants, and/or modifiers to use in one or more neural networks, etc., to determine the progress value). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Dalyac as modified and Subramanian with the motivation of automating analysis based on historical information in order to derive potentially more accurate projections. Regarding claim 11: Claim 11 recites substantially the same claim limitations as claim 1, and is rejected for the same reasons. Regarding claim 12: Claim 12 recites substantially the same claim limitations as claim 2, and is rejected for the same reasons. 
Regarding claim 17: Claim 17 recites substantially the same claim limitations as claim 7, and is rejected for the same reasons. Regarding claim 20: Claim 20 recites substantially the same claim limitations as claim 10, and is rejected for the same reasons. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Dalyac et al. (“Dalyac”) (US 2018/0300576 A1), in view of Huts et al. (“Huts”) (US 2023/0067026 A1, incorporating by reference Achin et al. (“IBR-Achin”) (App. Ser. No. 15/790,803 or US 2018/0046926 A1 at [0002])), in further view of Subramanian et al. (“Subramanian”) (US 2021/0117878 A1), in further view of Golchha et al. (“Golchha”) (US 11,977,952 B1), in further view of Ushiba et al. (“Ushiba”) (US 2013/0216141 A1). Regarding claim 4: Dalyac as modified teaches The apparatus of claim 1, but does not appear to explicitly teach wherein identifying at least a missing feature further comprises: classifying the second dataset to a feature template using a template classifier; comparing the second dataset to the feature template; and identifying at least a missing feature based on the comparison. Ushiba teaches classifying the second dataset to a feature template using a template classifier; comparing the second dataset to the feature template; and identifying at least a missing feature based on the comparison (Ushiba, [0012], where a pattern matching method is performed on an image using a template (i.e., “classifying the [data] to a feature template”) that is formed on the basis of design data, and a characteristic quantity of the image is obtained, where a position in which the characteristic quantity satisfies a certain condition is determined as a matching position, a matching position candidate, or an erroneous matching position. 
Although Ushiba does not appear to explicitly state that the method is performed by “a template classifier” as claimed, Ushiba discloses that the disclosed template matching may be performed by a computer program (see, e.g., Ushiba, [Abstract]). Therefore, Ushiba is equivalent to the claimed invention, as the claimed “template classifier” may be broadly construed as any sort of computer-implemented program or other component capable of carrying out the classification step). Although Ushiba discloses an “erroneous matching position” instead of a “missing feature” as claimed, it would have been obvious to one of ordinary skill in the art to have substituted Ushiba’s “erroneous matching position” with the missing feature disclosed by Dalyac as modified, because one of ordinary skill in the art would have recognized that missing features may be a form of error. Therefore, one of ordinary skill in the art would have found the substitution to have resulted in predictably equivalent operating results, which is that differences between the image and the template are identified, i.e., as Ushiba’s “error” may correspond to a missing feature. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Dalyac as modified and Ushiba with the motivation of potentially more accurate identification of missing data, e.g., by using pre-existing templates or previous data, the identification of missing data may be more accurate and/or faster, as opposed to the system attempting to learn without some historical basis. Regarding claim 14: Claim 14 recites substantially the same claim limitations as claim 4, and is rejected for the same reasons. Claims 5, 8-9, 15, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Dalyac et al. (“Dalyac”) (US 2018/0300576 A1), in view of Huts et al. (“Huts”) (US 2023/0067026 A1, incorporating by reference Achin et al. (“IBR-Achin”) (App. Ser. No. 
15/790,803 or US 2018/0046926 A1 at [0002])), in further view of Subramanian et al. (“Subramanian”) (US 2021/0117878 A1), in further view of Golchha et al. (“Golchha”) (US 11,977,952 B1), in further view of Mishra et al. (“Mishra”) (US 10,733,515 A1). Regarding claim 5: Dalyac as modified teaches The apparatus of claim 1, but does not appear to explicitly teach wherein identifying at least a missing feature further comprises: receiving at least an exemplary dataset; training a feature identification machine-learning model as a function of the at least an exemplary dataset; and identifying the at least a missing feature using the feature identification machine-learning model and the second dataset. Mishra teaches receiving at least an exemplary dataset (Mishra, [4:10-25], where the partition algorithm 106 divides the dataset into two subsets, one in which the data records are complete (i.e., “exemplary dataset”), and one in which the data records contain missing feature values); training a feature identification machine-learning model as a function of the at least an exemplary dataset; and identifying the at least a missing feature using the feature identification machine-learning model and the second dataset (Mishra, [4:26-33], where the training algorithm 108 trains a machine learning model, where the training algorithm 108 uses the data subset that has no missing values (i.e., “exemplary dataset”) to train the model 110. The trained machine learning model can then be used to impute values 114 to the data subset (that has missing feature values (i.e., “second dataset”). The imputed values 114 can then be added to the dataset stored in the database 112). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Dalyac as modified and Mishra (hereinafter “Dalyac as modified and by Mishra”) with the motivation of improving machine learning models despite missing features or instances (Mishra, [2:16-31]). Regarding claim 8: Dalyac as modified teaches The apparatus of claim 1, but does not appear to explicitly teach wherein interpolating at least an additional datum further comprises: receiving at least an exemplary dataset; and interpolating at least an additional datum as a function of the at least an exemplary dataset. Mishra teaches receiving at least an exemplary dataset; and interpolating at least an additional datum as a function of the at least an exemplary dataset (Mishra, [4:26-33], where the training algorithm 108 trains a machine learning model, where the training algorithm 108 uses the data subset that has no missing values (i.e., “exemplary dataset”) to train the model 110. The trained machine learning model can then be used to impute values 114 to the data subset (that has missing feature values (i.e., “second dataset”). The imputed values 114 can then be added to the dataset stored in the database 112. See Mishra, [FIG. 2] and [5:21-44], where a machine learning model that was trained on dataset A is applied to dataset B, and the machine learning model is applied to dataset B (at block 206) in order to compute, determine, or predict values for the missing feature values based upon the previous mapping of the input data features to the target features (as seen in dataset A)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teachings of Dalyac as modified and Mishra with the motivation of improving machine learning models despite missing features or instances (Mishra, [2:16-31]). 
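For illustration only (outside the record), the Mishra flow cited above — partitioning the dataset into a complete ("exemplary") subset and a subset with missing values, training a model on the complete subset, and imputing the missing values from it — could be sketched as below; the data and the one-feature least-squares model are invented stand-ins for Mishra's machine learning model:

```python
# Partition/train/impute sketch: fit y = a*x + b on records where y is
# present, then fill in y for records where it is missing (None).
def impute_missing(rows, x_key, y_key):
    complete = [r for r in rows if r[y_key] is not None]  # "exemplary" subset
    holes = [r for r in rows if r[y_key] is None]         # subset with gaps
    # train: one-feature least-squares fit on the complete subset
    n = len(complete)
    mx = sum(r[x_key] for r in complete) / n
    my = sum(r[y_key] for r in complete) / n
    a = (sum((r[x_key] - mx) * (r[y_key] - my) for r in complete)
         / sum((r[x_key] - mx) ** 2 for r in complete))
    b = my - a * mx
    for r in holes:                  # impute from the trained model
        r[y_key] = a * r[x_key] + b
    return rows

rows = [{"x": 1.0, "y": 2.0}, {"x": 2.0, "y": 4.0},
        {"x": 3.0, "y": 6.0}, {"x": 4.0, "y": None}]
rows = impute_missing(rows, "x", "y")
```

The imputed record can then rejoin the dataset for modeling, mirroring Mishra's addition of imputed values 114 back into the database 112.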
Regarding claim 9: Dalyac as modified and by Mishra teaches The apparatus of claim 5, wherein interpolating at least an additional datum further comprises: training a generative machine-learning model using the at least an exemplary dataset and a generative machine-learning algorithm; and interpolating at least an additional datum using the generative machine-learning model (Mishra, [4:26-33], where the training algorithm 108 trains a machine learning model, where the training algorithm 108 uses the data subset that has no missing values (i.e., “exemplary dataset”) to train the model 110. The trained machine learning model can then be used to impute values 114 to the data subset that has missing feature values (i.e., “second dataset”). The imputed values 114 can then be added to the dataset stored in the database 112. See Mishra, [2:32-43], where the machine-based method is used to generate missing values with a machine learning model (i.e., “generative machine-learning algorithm”)).

Regarding claim 15: Claim 15 recites substantially the same claim limitations as claim 5, and is rejected for the same reasons.

Regarding claim 18: Claim 18 recites substantially the same claim limitations as claim 8, and is rejected for the same reasons.

Regarding claim 19: Claim 19 recites substantially the same claim limitations as claim 9, and is rejected for the same reasons.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRENE BAKER, whose telephone number is (408) 918-7601. The examiner can normally be reached M-F, 8 AM-5 PM PT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, NEVEEN ABEL-JALIL, can be reached at (571) 270-0474. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IRENE BAKER/
Primary Examiner, Art Unit 2152
13 March 2026

Footnotes

1. See, e.g., Achin et al. (US 2018/0046926 A1) at [0228], where users may add models to a project, and thus because such tasks may be manually performed by users, such steps recite an abstract idea.

2. This relates to the “inputting the second dataset into an image classifier, wherein the image classifier is trained with training data containing pictures as inputs and classifier descriptors and data tags as outputs labeled by a subject matter expert; and outputting the second dataset from the image classifier containing classifier descriptors and data tags labeled by the subject matter expert”, as the only distinction between the claimed limitation and that which can be performed in the human mind is the use of an image classifier. However, the ability to take an image as input and generate classifier descriptors and data tags is an act that can be performed mentally in the mind of a person.

3. See also Intellectual Ventures I LLC v. Erie Indemnity Co., 850 F.3d 1315, 121 USPQ2d 1928 (Fed. Cir. 2017), in which the claimed mobile interface did “little more than provide a generic technological environment to allow users to access information”, which did not save the claims from being abstract, and thus failed under step one. See Id. at p. 24.

4. Sakoe (US 4,975,961 A) at [Background] (third paragraph) (“It is described by Lippmann that the neural network comprises at least three layers, such as an input layer, an intermediate or hidden layer, and an output layer. The input layer comprises a plurality of input neuron units. The intermediate layer comprises a plurality of intermediate neuron units, which may be greater in number than the input neuron units. The output layer comprises at least one output neuron unit. The neuron unit is alternatively called either a computational element or a node”).

5. Sakoe at [Background] (second paragraph) (“…The multi-layer neural network is described in the Lippmann article, pages 15 to 18, as well as a back-propagation training algorithm therefor”); [2:22-26] (“After the neural network is trained in compliance with the back-propagation training algorithm, the significant component is produced from one of the output neuron units that is assigned to the correct result of recognition of the input pattern”); and [7:55-66] (“According to the back-propagation training algorithm mentioned herein before, random numbers are used at first as the intermediate and the output weighting coefficients u and v. If the n-th output signal component is the sole significant component, the neural network is already ready for recognition of the particular word. If the n-th output signal component is not the sole significant component, the back-propagation training algorithm is executed in the known manner to train the neural network by adjusting the output weighting coefficients v and the intermediate weighting coefficients u”).

6. Because there is a list of images/image clusters, essentially, a user’s input for each of those images/image clusters is counted separately, i.e., if there are N images in a list, then there are N user inputs, not a single user input.

7. Note that the mapping is based on multiple iterations with the user. As seen in Dalyac, [0063], the cycle is repeated from Step 2 onward. Thus, at a subsequent cycle (i.e., not necessarily at the initial cycle), the user’s inputs are utilized.

8. See Sakoe, US 4,975,961 A, at [Background] (third paragraph).

9. Sakoe at [Background] (second paragraph) and [2:11-26].

10. Sakoe at [7:55-66].

11. Wang et al. (US 2018/0101726 A1) at [0003] (“Optical Character Recognition (OCR) is an important computer vision problem with a rich history”).

12. Wang et al. at [0004-0005] (“…Accurate OCR systems are also needed because of the ubiquity of imaging devices such as smart phones and other mobile devices that allow a vast number of people to scan or image a document containing text…. OCR forms the key first step in understanding text documents from their images or scans. OCR systems find use in extracting data from…documents…[and] can be used for license plate number recognition, books analysis…”).
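The three-layer network and back-propagation procedure quoted from Sakoe in footnotes 4-5 (random initial weighting coefficients u and v; a forward pass through input, intermediate, and output layers; then adjustment of the output coefficients v and intermediate coefficients u) can be illustrated with a minimal sketch. The layer sizes, sigmoid activation, learning rate, and bias handling here are assumptions for illustration, not details from Sakoe or Lippmann:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, T, hidden=4, lr=0.5, epochs=3000):
    """Back-propagation over a three-layer network: input layer ->
    intermediate layer (weights u) -> output layer (weights v), with
    u and v initially random, as the quoted passages describe."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # bias input (an assumption)
    u = rng.normal(0.0, 0.5, (Xb.shape[1], hidden))  # intermediate weighting coefficients
    v = rng.normal(0.0, 0.5, (hidden, T.shape[1]))   # output weighting coefficients
    for _ in range(epochs):
        H = sigmoid(Xb @ u)                  # intermediate-layer activations
        Y = sigmoid(H @ v)                   # output-layer activations
        dY = (Y - T) * Y * (1.0 - Y)         # output error term
        dH = (dY @ v.T) * H * (1.0 - H)      # error back-propagated to the hidden layer
        v -= lr * H.T @ dY                   # adjust output coefficients v ...
        u -= lr * Xb.T @ dH                  # ... and intermediate coefficients u
    return u, v

def predict(X, u, v):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return sigmoid(sigmoid(Xb @ u) @ v)
```

After training, the unit whose output is the sole significant component indicates the recognized pattern, per the quoted passage; here a single output unit is trained on a simple boolean target for brevity.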

Prosecution Timeline

Oct 02, 2023
Application Filed
Dec 16, 2023
Non-Final Rejection — §101, §103, §112
Feb 05, 2024
Interview Requested
Feb 16, 2024
Examiner Interview Summary
Feb 16, 2024
Applicant Interview (Telephonic)
Mar 21, 2024
Response Filed
Mar 30, 2024
Final Rejection — §101, §103, §112
May 03, 2024
Request for Continued Examination
May 07, 2024
Response after Non-Final Action
Jun 27, 2024
Non-Final Rejection — §101, §103, §112
Aug 05, 2024
Interview Requested
Aug 29, 2024
Applicant Interview (Telephonic)
Aug 30, 2024
Examiner Interview Summary
Sep 03, 2024
Response Filed
Jan 29, 2025
Final Rejection — §101, §103, §112
Apr 29, 2025
Request for Continued Examination
May 09, 2025
Response after Non-Final Action
May 22, 2025
Non-Final Rejection — §101, §103, §112
Nov 25, 2025
Response Filed
Mar 13, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602368
ANOMALY DETECTION DATA WORKFLOW FOR TIME SERIES DATA
2y 5m to grant Granted Apr 14, 2026
Patent 12591890
CONCURRENT STATE MACHINE PROCESSING USING A BLOCKCHAIN
2y 5m to grant Granted Mar 31, 2026
Patent 12566880
SEAMLESS UPDATING AND RECONCILIATION OF DATABASE IDENTIFIERS GENERATED BY DIFFERENT AGENT VERSIONS
2y 5m to grant Granted Mar 03, 2026
Patent 12566790
LAKEHOUSE METADATA CHANGE DETERMINATION METHOD, DEVICE, AND MEDIUM
2y 5m to grant Granted Mar 03, 2026
Patent 12536138
FILE SYSTEM REDIRECTOR SERVICE IN A SCALE OUT DATA PROTECTION APPLIANCE
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
54%
Grant Probability
81%
With Interview (+26.7%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 238 resolved cases by this examiner. Grant probability derived from career allow rate.
