Prosecution Insights
Last updated: April 19, 2026
Application No. 18/372,907

LAYERED PROCESSING FOR CONSTRAINED FUND OPTIMIZATION

Non-Final OA: §101, §103, §112
Filed: Sep 26, 2023
Examiner: HUDSON, MARLA LAVETTE
Art Unit: 3694
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: HRB Innovations Inc.
OA Round: 3 (Non-Final)

Grant Probability: 57% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 2y 6m
Grant Probability with Interview: 82%

Examiner Intelligence

Career Allow Rate: 57% (65 granted / 114 resolved; +5.0% vs TC avg)
Interview Lift: +25.5% (allowance rate across resolved cases with vs. without an interview)
Typical Timeline: 2y 6m average prosecution; 24 applications currently pending
Career History: 138 total applications across all art units
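These headline figures are simple ratios over the examiner's resolved cases. A minimal sketch (illustrative only; the values are taken from the panel above) shows how they relate. Note that recomputing the lift from the rounded 82% figure gives roughly +25.0%, so the displayed +25.5% is presumably derived from unrounded rates.

```python
# Illustrative only: reconstruct the headline metrics from the panel's figures.
granted, resolved = 65, 114

career_allow_rate = granted / resolved          # allowance rate over resolved cases
with_interview = 0.82                           # rounded rate shown for interviewed cases
interview_lift = with_interview - career_allow_rate

print(f"Career allow rate: {career_allow_rate:.1%}")  # 57.0%
print(f"Interview lift:    {interview_lift:+.1%}")    # +25.0% (panel shows +25.5%)
```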

Statute-Specific Performance

§101: 46.5% (+6.5% vs TC avg)
§103: 26.6% (-13.4% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)

Tech Center average is an estimate • Based on career data from 114 resolved cases
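The per-statute deltas are each consistent with a single Tech Center average estimate of about 40%; that common baseline is inferred from the displayed numbers, not stated by the source. A short sketch reproduces the panel:

```python
# Illustrative only: each "vs TC avg" delta is the statute rate minus a
# common ~40% Tech Center average estimate (inferred, not an official figure).
examiner_rates = {"§101": 46.5, "§103": 26.6, "§102": 5.3, "§112": 16.7}
TC_AVG = 40.0

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate}% ({delta:+.1f}% vs TC avg)")
```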

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

The following is an Office Action on the merits in response to the communication received on 12/19/25. Claim status:

Amended claims: 1-2, 6, 9, and 16
Canceled claims: 5 and 15
New claims: 21-22
Pending claims: 1-4, 6-14, and 16-22

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 6-14, and 16-22 are rejected under 35 U.S.C. § 101 because the claimed invention is not directed to statutory subject matter. Specifically, the invention of claims 1-4, 6-14, and 16-22 is directed to an abstract idea without significantly more. Independent claims 1, 9, and 16 are directed to one or more non-transitory computer-readable media (claim 1), a system (claim 9), and a method (claim 16). Therefore, on its face, each of claims 1, 9, and 16 is directed to a statutory category of invention under Step 1 of the 2019 PEG. However, each of claims 1, 9, and 16 is also directed to an abstract idea without significantly more, under Step 2A (Prong One and Prong Two) and Step 2B of the 2019 PEG, which is a judicial exception to 35 U.S.C. 101, as detailed below.
Using the language of independent claim 1 to illustrate, the claim recites the limitations of:

(i) determining an optimal outcome;
(ii) storing, at a primary layer, a calculation module configured to generate a user profile of a user;
(iii) storing, at a first sublayer of the primary layer, data points and a set of government laws, wherein the data points are indicative of the user and a user region associated with the user, and wherein the set of government laws comprise federal and state laws based on the user region, wherein the first sublayer is only accessible by the calculation module;
(iv) obtaining, by the calculation module at the primary layer, the data points and the set of government laws from the first sublayer;
(v) generating, by the calculation module at the primary layer, the user profile comprising the data points and the set of government laws;
(vi) obtaining, by an insight module from the calculation module at the primary layer, the user profile;
(vii) obtaining, by the insight module and from a second sublayer, opportunity data including opportunities to achieve potential user goals, wherein the second sublayer is only accessible by the insight module;
(viii) determining, by the insight module at the primary layer, insights from the user profile and the opportunity data;
(ix) obtaining, by a goal module at the primary layer, a set of goals from the user;
(x) generating an outcome objective based on the set of goals from the user and the insights;
(xi) determining a set of action items that maximizes the outcome objective while constraining the outcome objective to the set of government laws;
(xii) causing display of the set of action items to maximize the outcome objective for the user; and
(xiii) determining a current state of the set of goals; estimating an end state of the set of goals; determining an accuracy of the end state of the set of goals; causing displaying of the accuracy; and providing further guidance to the user based on the end state of the set of goals and the accuracy of the end state of the set of goals,

which, under the broadest reasonable interpretation, covers mental processes. (Independent claims 9 and 16 recite similar limitations and the analysis is the same.) That is, other than reciting at least one processor, one or more non-transitory computer-readable media, and the display, nothing in the claim precludes the steps from being directed to mental processes. If a claim limitation, under its BRI, covers mental processes but for the recitation of generic computer components, then the limitation falls within the “mental processes” grouping of abstract ideas. Therefore, claim 1 recites an abstract idea under Step 2A Prong One of the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).

This “mental process” is not integrated into a practical application under Step 2A Prong Two of the 2019 PEG. In particular, claim 1 recites only the following additional elements: at least one processor, one or more non-transitory computer-readable media, and the display. The at least one processor, one or more non-transitory computer-readable media, and display are recited at a high level of generality (i.e., as a generic computer performing generic computer functions) such that they amount to no more than instructions to apply the abstract idea with a computer (see MPEP 2106.05(h)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
Under Step 2B of the 2019 PEG, independent claim 1 does not include additional elements that are sufficient to amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of using at least one processor, one or more non-transitory computer-readable media, and the display to perform the steps recited above amount to no more than instructions to apply the abstract idea with a computer. The claims are not patent eligible.

The dependent claims have been given the full two-part analysis, including analyzing the additional limitations individually. The dependent claims, when analyzed individually, are also patent ineligible under 35 U.S.C. 101 for the same reasoning as above, and the additionally recited limitations fail to establish that the claims are not directed to an abstract idea. The additional limitations of the dependent claims, when considered individually, do not amount to significantly more than the abstract idea. Claims 2-4, 6-8, 10-14, and 17-22 merely further explain the abstract idea. When viewed individually, the additional limitations do not amount to a claim as a whole that is significantly more than the abstract idea. Accordingly, claims 1-4, 6-14, and 16-22 are ineligible.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-4, 6-14 and 16-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "causing displaying of the accuracy by the display" in the last sub-paragraph. There is insufficient antecedent basis for the "the display" limitation in the claim; the first recitation of "display" in the claim is a verb, not a noun. Claims 2-4, 6-8, and 21-22 depend from claim 1 and are rejected for the same reason.

Claim 9 recites the limitation "causing display of the task list comprising the set of action items by the display" in the last sub-paragraph. There is insufficient antecedent basis for the "the display" limitation in the claim; the first recitation of "display" in the claim is a verb, not a noun. Claims 10-14 depend from claim 9 and are rejected for the same reason.

Claim 16 recites the limitation "causing display of the accuracy by the display" in the next-to-last sub-paragraph. There is insufficient antecedent basis for the "the display" limitation in the claim; the first recitation of "display" in the claim is a verb, not a noun. Claims 17-20 depend from claim 16 and are rejected for the same reason.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6-14 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Goldman (U.S. Pat. No. 11,354,755), in view of Leitner (U.S. Pub. No. 2005/0086579) and Sowder (U.S. Pub. No. 2014/0067634).
With respect to claim 1: Goldman teaches: One or more non-transitory computer-readable media storing computer executable instructions that, when executed by at least one processor, performs a method of determining an optimal outcome, the method comprising: storing, {…..}, a calculation module configured to generate a user profile of a user; storing {…..}, data points and a set of government laws, wherein the data points are indicative of the user and a user region associated with the user, and wherein the set of government laws comprise federal and state laws based on the user region, {…..}; obtaining, by the calculation module {…..}, the data points and a set of government laws from the first sublayer {…..}; generating, by the calculation module {…..}, the user profile comprising the data points and the set of government laws (“Referring back to FIG. 12, after initiating the tax preparation software 100, the tax preparation software 100, in operation 1100, gathers or imports tax related data from the one or more data sources 48 as illustrated in FIGS. 7 and 8. Note that the gathering of tax related data from the one or more data sources 48 may occur at the time the tax preparation software 100 is run. Alternatively, the gathering of tax related data from the one or more data sources 48 may occur over a period of time. For example, data sources 48 may be periodically queried over time (e.g., during a tax reporting year) whereby updated information is stored in a database (not shown) or the like that is then accessed by the tax preparation software 100. This option may improve the efficiency and speed of tax return preparation as the information is already available” (Goldman Column 33 Lines 23-37) and “The tax return preparation system accesses taxpayer data comprising personal data and/or tax data regarding the taxpayer by any of the means described below, such as from prior year tax returns, third party databases, user inputs, etc. 
The system then generates a taxpayer data profile using the taxpayer data. For instance, the taxpayer data profile may include the taxpayer's age, occupation, place of residence, estimated income, etc.” (Goldman Column 2 Lines 34-42) and “FIG. 1 illustrates graphically how tax legislation/tax rules 10 are broken down into a completeness graph 12 and a tax calculation graph 14. In one aspect of the invention, tax legislation or rules 10 are parsed or broken into various topics. For example, there may be nearly one hundred topics that need to be covered for completing a federal tax return. When one considers both federal and state tax returns, there can be well over one hundred tax topics that need to be covered. When tax legislation or tax rules 10 are broken into various topics or sub-topics, in one embodiment of the invention, each particular topic (e.g., topics A, B) may each have their own dedicated completeness graph 12A, 12B and tax calculation graph 14A, 14B as seen in FIG. 1” (Goldman Column 10 Line 64 to Column 11 Line 9) and “There may be many different schemas 44 depending on the different tax jurisdiction. For example, Country A may have a tax schema 44 that varies from Country B. Different regions or states within a single country may even have different schemas 44. The systems and methods described herein are not limited to a particular schema 44 implementation. The schema 44 may contain all the data fields required to prepare and file a tax return with a government taxing authority” Goldman Column 15 Lines 40-48); determining a set of action items that maximizes the outcome objective while constraining the outcome objective to the set of government laws (“The predictive model generates as output(s) one or more predicted tax matters which are determined to be likely to be relevant to the taxpayer. 
The system may then determine tax questions to present to the user based at least in part upon the predicted tax matters determined by the predictive model” (Goldman Abstract) and “For example, AGI is a re-occurring tax concept that occurs in many places in the tax code. AGI is used not only for the mathematical computation of taxes but is also used, for example, to determine eligibility of certain tax deductions and credits. Thus, the AGI node is common to both the completion graph 12 and the tax calculation graph 14” Goldman Column 14 Lines 47-51); and causing display of the set of action items to maximize the outcome objective for the user (“The computing device executes a user interface manager configured to receive the one or more suggestions and present to a user one or more questions based on the one or more suggestions via a user interface, wherein a user response to the one or more questions is input to the shared data store. The user interface manager is configured to generate and display a question screen to the user” Goldman Column 1 Lines 60-67); determining a current state of the set of goals; estimating an end state of the set of goals; determining an accuracy of the end state of the set of goals; causing displaying of the accuracy by the display; and providing further guidance to the user based on the end state of the set of goals and the accuracy of the end state of the set of goals (“The import module 89 may also present prompts or questions to the user via a user interface presentation 84 generated by the user interface manager 82. For example, a question may ask the user to confirm the accuracy of the data. The user may also be given the option of whether or not to import the data from the data sources 48” (Goldman Column 16 Lines 31-36) and “It should also be understood that the estimation module 110 may rely on one or more inputs to arrive at an estimated value. 
For example, the estimation module 110 may rely on a combination of prior tax return data 116 in addition to online resources 118 to estimate a value. This may result in more accurate estimations by relying on multiple, independent sources of information. The UI control 80 may be used in conjunction with the estimation module 110 to select those sources of data to be used by the estimation module 110. For example, user input 114 will require input by the user of data using a user interface presentation 84. The UI control 80 may also be used to identify and select prior tax returns 116. Likewise, user names and passwords may be needed for online resources 118 and third party information 120 in which case UI control 80 will be needed to obtain this information from the user” (Goldman Column 30 Lines 37-52) and “A user interface presentation 84 may be pre-programmed interview screens that can be selected and provided to the generator element 85 for providing the resulting user interface presentation 84 or content or sequence of user interface presentations 84 to the user. User interface presentations 84 may also include interview screen templates, which are blank or partially completed interview screens that can be utilized by the generation element 85 to construct a final user interface presentation 84 on the fly during runtime” (Goldman Column 28 Lines 41-49) and “Still referring to FIG. 9, another attribute 122 may include a confirmation flag 128 that indicates that a taxpayer or user of the tax preparation software 100 has confirmed a particular entry. For example, confirmed entries may be given an automatic “high” confidence value as these are finalized by the taxpayer. Another attribute 122 may include a range of values 130 that expresses a normal or expected range of values for the particular data field. 
The range of values 130 may be used to identify erroneous estimates or data entry that appear to be incorrect because they fall outside an intended range of expected values. Some estimates, such as responses to Boolean expressions, do not have a range of values 130. In this example, for example, if the number of estimates dependents is more than five (5), the tax logic agent 60 may incorporate into the rules engine 64 attribute range information that can be used to provide non-binding suggestions to the UI control 80 recommending a question to ask the taxpayer about the high number of dependents (prompting user with “are you sure you have 7 dependents”). Statistical data may also be used instead of specific value ranges to identify suspect data. For example, standard deviation may be used instead of a specific range. When a data field exhibits statistical deviation beyond a threshold level, the rules engine 64 may suggest a prompt or suggestion 66 to determine whether the entry is a legitimate or not. Additional details regarding methods and systems that are used to identify suspect electronic tax data may be found in U.S. Pat. No. 8,346,635 which is incorporated by reference herein” (Goldman Column 31 Lines 33-61) and “The confidence level indicator 132 may take a number of different forms, however. For example, the confidence level indicator 132 may be in the form of a gauge or the like that such as that illustrated in FIG. 11. In the example, of FIG. 11, the confidence level indicator 132 is indicated as being “low.” Of course, the confidence level indicator 132 may also appear as a percentage (e.g., 0% being low confidence, 100% being high confidence) or as a text response (e.g., “low,” “medium,” and “high” or the like). Other graphic indicia may also be used for the confidence level indicator 132. For example, the color of a graphic may change or the size of the graphic may change as a function of level of confidence. Referring to FIG. 
11, in this instance, the user interface presentation 84 may also include hyperlinked tax topics 136 that are the primary sources for the low confidence in the resulting tax calculation. For example, the reason that the low confidence is given is that there is low confidence in the amount listed on the taxpayer's W-2 form that has been automatically imported into the shared data store 42. This is indicated by the “LOW” designation that is associated with the “earned income” tax topic. In addition, in this example, there is low confidence in the amount of itemized deductions being claimed by a taxpayer. This is seen with the “LOW” designation next to the “deductions” tax topic. Hyperlinks 136 are provided on the screen so that the user can quickly be taken to and address the key drivers in the uncertainty in the calculated tax liability” Goldman Column 32 Line 44 to Column 33 Line 2). Goldman does not teach but Leitner teaches: wherein the first sublayer is only accessible by the {…..} module (“Stackable execution elements such as DLLs can be deployed to various portions of the data processing component 204 for processing of credit data. The data processing component 204 can process the credit data through each of the execution elements until a set of result data is obtained. Note that each execution element can apply filters, routines, methods, techniques, logic, selection, assessment, or analysis as required” Leitner Pgh. [0121]); storing, at a primary layer, {…..} module {…..}; storing, at a first sublayer of the primary layer, {…..}; obtaining, by {…..} module at the primary layer, {…..} from the first sublayer; generating, by {…..} module at the primary layer, {…..}; obtaining, by {…..} module from the {…..} module at the primary layer, {…..}; obtaining, by {…..} module and from a second sublayer, {…..}; determining, by {…..} module at the primary layer, {…..}; obtaining, by {…..} module at the primary layer, {…..} (“In the embodiment shown in FIG. 
2, an Autopilot component 202 can include sub-components such as a criteria graphical user interface 210, a data access layer 212, a relational database management (RDBMS) system schema 214, metadata 216, a criteria/attribute translator 218, a code generator/compiler 220, and a runtime component 222” (Leitner Pgh. [0061]) and “In the embodiment shown in FIG. 2, a data access layer 212 can be an internal component that can process communications, such as messages, to and from the criteria graphical user interface 210. The data access layer can also extract and populate data received via the interface 210 into an associated memory 118, database or data storage device” (Leitner Pgh. [0063]) and “The server 304 can provide an abstraction layer for file and data layouts as well as a database description of attribute types, criteria, and criteria passing. In response to receiving the request document and associated criteria and attributes from the user 112 a-n, the server 304 can utilize particular DLLs or other executable components to access and filter credit data in one or more credit data sources 170 a-n, and to capture credit file layouts from the credit data, selected criteria, and selected and underlying data attributes in the databases using tables, or other devices and techniques. For example, the server 304 can combine pre-existing or generate new DLLs or other executable components for processing a request, such as a source select list, a utility select list such as general purpose utilities, a criteria module such as a criteria DLL for processing credit file analytics, and models such as a scoring model DLL. When the DLLs or other executable components are collected, then the project can be transmitted to the data processing component 204 via the data processing interface 308 for processing which is described in greater detail below” Leitner Pgh. 
[0085]); and wherein the second sublayer is only accessible by the {…..} module (“Stackable execution elements such as DLLs can be deployed to various portions of the data processing component 204 for processing of credit data. The data processing component 204 can process the credit data through each of the execution elements until a set of result data is obtained. Note that each execution element can apply filters, routines, methods, techniques, logic, selection, assessment, or analysis as required” Leitner Pgh. [0121]). It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Leitner’s teachings, in order “to assure that it will provide the desired data subsets, modeling, formatting and testing of programming that creates the output results in desired form” Leitner Pgh. [0063].

Goldman does not teach but Sowder teaches: obtaining, by an insight module from the calculation module {…..}, the user profile (“In another embodiment, the financial goal visualization system provider device may retrieve information from one or more user accounts of the user, use that information to determine the plurality of financial goals that are personalized to the user, and at block 102 the device may send those financial goals over the network to a user device to provide the plurality of financial goals to the user” Sowder Pgh. [0024] and “At block 114, the financial goal visualization system provider device retrieves financial data from the one or more user accounts over the network from, for example, associated account provider devices” Sowder Pgh. 
[0063]); obtaining, by the insight module {…..}, opportunity data including opportunities to achieve potential user goals; {…..}; determining, by the insight module {…..}, insights from the user profile and the opportunity data, {…..} (“In the examples provided below, the financial goals provided to the user include an education financial goal, a vacation financial goal, and a product financial goal, but the present disclosure is not limited to these examples, and a variety of financial goals know in the art will fall within its scope. Those financial goals may be the predetermined financial goals applicable to a variety of different users, or may be determined for a specific user from information retrieved from their user account (e.g., purchases for a child may indicate that an education financial goal is appropriate for the user, purchases associated with previous vacation spending may be used to determine a vacation financial goal appropriate for the user, product purchases older than a certain age may indicate that a product financial goal is appropriate for the user, etc.)” Sowder Pgh. [0025]); and obtaining, by a goal module {…..}, a set of goals from the user; generating an outcome objective based on the set of goals from the user and the insights (“Upon selection of one of the financial goal selectors 206, 208, or 210, the user may provide financial goal details for the determination of financial sub-goals (which are themselves financial goals) and/or financial goal statuses for the selected financial goal, associate images with those financial Sub-goals and/or financial goal statuses, and receive savings plans for the selected financial goal and/or its sub-goals in blocks 104,106, 108, 110, and 112 of the method 100” Sowder Pgh. 
[0027] and “In an embodiment, upon user selection of a financial goal, the financial goal visualization system provider may request a plurality of information about the selected financial goal from the user, and the user may provide that information in order to provide financial goal details for the selected financial goal to the financial goal visualization system provider” Sowder Pgh. [0029]). It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Sowder’s teachings, in order “to determine whether actions taken keep the financial goal on track” Sowder Pgh. [0004].

With respect to claim 2: Goldman teaches: wherein the set of action items is determined by at least one statistical or machine learning algorithm to maximize the outcome objective to realize the set of goals (“In additional aspects of the method for generating the database of tax correlation data for the statistical/life knowledge module, the method may utilize a training algorithm to determine the correlation between the taxpayer attribute and the tax related aspect. The training algorithm learns as it analyzes the data records, and uses the learned knowledge in analyzing additional data records accessed by the computing device. The training algorithm also trains future versions of the tax return preparation application to alter the user experience by modifying the content of tax questions and order of tax questions presented to a user based on taxpayer correlations and the quantitative relevancy scores. 
In another aspect, the method utilizes a scoring algorithm to determine the quantitative relevancy score” (Goldman Column 4 Lines 4-17) and “One embodiment of the present invention is directed to methods of using one or more predictive models for determining the relevancy and prioritizing the tax matters presented to a user in preparing an electronic tax return using the computerized tax return preparation system (such as by using the predictive models to determine suggested tax matters that are likely relevant to the taxpayer). The method of using predictive models may be alternative to, or in addition to, the methods of determining relevancy and prioritizing tax matters using the statistical/life knowledge module, as described above. As used herein, the term “predictive model” means an algorithm utilizing as input(s) taxpayer data comprising at least one of personal data and tax data regarding the particular taxpayer, and configured to generate as output(s) one or more tax matters (as defined above), which are predicted to be relevant to the taxpayer, the algorithm created using at least one of the predictive modeling techniques selected from the group consisting of: logistic regression; naive bayes; k-means classification; K-means clustering; other clustering techniques; k-nearest neighbor; neural networks; decision trees; random forests; boosted trees; k-nn classification; kd trees; generalized linear models; support vector machines; and substantial equivalents thereof. The algorithm may also be selected from any subgroup of these techniques, such as at least one of the predictive modeling techniques selected from the group consisting of decision trees; k-means classification; and support vector machines. The predictive model may be based on any suitable data, such as previously filed tax returns, user experiences with tax preparation applications, financial data from any suitable source, demographic data from any suitable source, and the like. 
Similar to the method using the statistical/life knowledge module, this method allows the system to obtain the required tax data for the taxpayer in a more efficient and tailored fashion for the particular taxpayer” Goldman Column 5 Lines 3-40). With respect to claim 3: Goldman does not teach but Sowder teaches: wherein the method further comprises updating at least one third-party application with an action item of the set of action items to realize at least one goal of the set of goals (“As the financial goal status changes according to financial data retrieved from the users accounts, the image displayed for may financial sub-goal may be changed, visually indicating the users progress towards their financial goals” Sowder Abstract). It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Sowder’s teachings, in order “to determine whether actions taken keep the financial goal on track” Sowder Pgh. [0004]. With respect to claim 4: Goldman teaches: wherein the method further comprises generating a task list comprising the set of action items, and wherein completion of the task list completes the set of goals (“Step 207 may include transmitting, via the one or more processors, a notification to the user device, wherein the notification is indicative of a suggested plan to achieve the first user financial goal, and wherein the notification is based on the determined activity and the first user financial goal. The suggested plan may identify at least one of a suggested duration of time or a suggested number of transactions to reach the first user financial goal” Goldman Column 6 Lines 60-67).
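For readers outside the art, the “scoring algorithm” and “training algorithm” Goldman describes (Column 4, Lines 4-17) can be sketched minimally as follows. The attribute names, tax matters, and correlation weights below are hypothetical illustrations only, not Goldman's actual implementation:

```python
# Sketch of a quantitative relevancy scorer: per-attribute correlation
# weights (learned offline by a training algorithm) are summed into a
# relevancy score for each tax matter, and matters are ranked for
# presentation. All names and weights are hypothetical.
CORRELATION_WEIGHTS = {
    "mortgage_interest_deduction": {"homeowner": 0.9, "itemizes": 0.6},
    "dependent_care_credit": {"has_dependents": 0.8, "dual_income": 0.4},
    "student_loan_interest": {"has_student_loans": 0.95},
}

def relevancy_scores(taxpayer_attributes):
    """Return {tax_matter: score} summing weights of attributes the taxpayer exhibits."""
    scores = {}
    for matter, weights in CORRELATION_WEIGHTS.items():
        scores[matter] = sum(
            w for attr, w in weights.items() if taxpayer_attributes.get(attr)
        )
    return scores

def rank_tax_matters(taxpayer_attributes, threshold=0.5):
    """Rank tax matters by score, keeping only those above a relevancy threshold."""
    scores = relevancy_scores(taxpayer_attributes)
    return sorted(
        (m for m, s in scores.items() if s >= threshold),
        key=lambda m: scores[m],
        reverse=True,
    )

profile = {"homeowner": True, "itemizes": True, "has_dependents": True}
print(rank_tax_matters(profile))
# mortgage_interest_deduction (1.5) outranks dependent_care_credit (0.8);
# student_loan_interest (0.0) falls below the threshold and is dropped.
```

Goldman's quoted passage adds that such scores can also reorder the questions the preparation application asks; the same ranked list would drive that ordering.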
With respect to claim 6: Goldman teaches: determining, {…..}, guidance for the user based on the insights; and providing the guidance to the user by the insight module (“Step 207 may include transmitting, via the one or more processors, a notification to the user device, wherein the notification is indicative of a suggested plan to achieve the first user financial goal, and wherein the notification is based on the determined activity and the first user financial goal. The suggested plan may identify at least one of a suggested duration of time or a suggested number of transactions to reach the first user financial goal” Goldman Column 6 Lines 60-67). Goldman does not teach but Leitner teaches: determining, at a second sublayer, {…..} wherein the sublayer is a first sublayer (“In the embodiment shown in FIG. 2, an Autopilot component 202 can include sub-components such as a criteria graphical user interface 210, a data access layer 212, a relational database management (RDBMS) system schema 214, metadata 216, a criteria/attribute translator 218, a code generator/compiler 220, and a runtime component 222” (Leitner Pgh. [0061]) and “In the embodiment shown in FIG. 2, a data access layer 212 can be an internal component that can process communications, such as messages, to and from the criteria graphical user interface 210. The data access layer can also extract and populate data received via the interface 210 into an associated memory 118, database or data storage device” Leitner Pgh. [0063]). It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Leitner’s teachings, in order “to assure that it will provide the desired data subsets, modeling, formatting and testing of programming that creates the output results in desired form” Leitner Pgh. [0063].
With respect to claim 7: Goldman teaches: determining an effect of the insights on the outcome objective; and generating the outcome objective only when the effect is above a minimum threshold (“The tax calculation engine 50 may ignore data entries having a confidence level below a pre-determined threshold. The estimation module 110 may generate a number of different estimates from a variety of different sources and then writes a composite estimate based on all the information from all the different sources. For example, sources having higher confidence levels 126 may be weighted more than other sources having lower confidence levels 126” (Goldman Column 31 Lines 24-32) and “Statistical data may also be used instead of specific value ranges to identify suspect data. For example, standard deviation may be used instead of a specific range. When a data field exhibits statistical deviation beyond a threshold level, the rules engine 64 may suggest a prompt or suggestion 66 to determine whether the entry is a legitimate or not” Goldman Column 31 Lines 54-57). With respect to claim 8: Goldman teaches: wherein the set of goals comprises one of a realizing a minimum tax refund, withholding an amount from income, maximizing use of a health savings account, allocating income to a retirement fund, and allocating income to various assets (“The tax calculation engine 50 may calculate a final tax due amount, a final refund amount, or one or more intermediary calculations (e.g., taxable income, AGI, earned income, un-earned income, total deductions, total credits, alternative minimum tax (AMT) and the like)” Goldman Column 17 Lines 42-47).
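The statistical screen Goldman describes for claim 7 (Column 31, Lines 54-57) — flagging a data field as suspect when it deviates from expected values by more than a threshold number of standard deviations — admits a minimal sketch. The field values below are hypothetical, not Goldman's data:

```python
import statistics

def is_suspect(value, historical_values, max_sigma=2.0):
    """True when `value` lies more than `max_sigma` standard deviations
    from the mean of `historical_values` (hypothetical threshold screen)."""
    mean = statistics.mean(historical_values)
    sigma = statistics.stdev(historical_values)
    if sigma == 0:
        return value != mean
    return abs(value - mean) / sigma > max_sigma

# Hypothetical prior wage entries for a taxpayer segment:
wages = [48_000, 52_000, 50_500, 49_200, 51_300]
print(is_suspect(50_000, wages))   # near the mean -> not suspect
print(is_suspect(500_000, wages))  # far outside -> suspect; prompt the user
```

In Goldman's terms, a True result is what would cause the rules engine to “suggest a prompt or suggestion 66 to determine whether the entry is a legitimate or not.”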
With respect to claim 9: Goldman teaches: A system for determining an optimal outcome, the system comprising: at least one processor; a datastore; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the at least one processor, perform a method of determining the optimal outcome, the method comprising: storing, {…..}, a calculation module configured to generate a user profile of a user; storing {…..}, data points and a set of government laws, wherein the data points are indicative of the user and a user region associated with the user, and wherein the set of government laws comprise federal and state laws based on the user region, {…..}; obtaining, by the calculation module {…..}, the data points and a set of government laws from the first sublayer {…..}; generating, by the calculation module {…..}, the user profile comprising the data points and the set of government laws (“Referring back to FIG. 12, after initiating the tax preparation software 100, the tax preparation software 100, in operation 1100, gathers or imports tax related data from the one or more data sources 48 as illustrated in FIGS. 7 and 8. Note that the gathering of tax related data from the one or more data sources 48 may occur at the time the tax preparation software 100 is run. Alternatively, the gathering of tax related data from the one or more data sources 48 may occur over a period of time. For example, data sources 48 may be periodically queried over time (e.g., during a tax reporting year) whereby updated information is stored in a database (not shown) or the like that is then accessed by the tax preparation software 100. 
This option may improve the efficiency and speed of tax return preparation as the information is already available” (Goldman Column 33 Lines 23-37) and “The tax return preparation system accesses taxpayer data comprising personal data and/or tax data regarding the taxpayer by any of the means described below, such as from prior year tax returns, third party databases, user inputs, etc. The system then generates a taxpayer data profile using the taxpayer data. For instance, the taxpayer data profile may include the taxpayer's age, occupation, place of residence, estimated income, etc.” (Goldman Column 2 Lines 34-42) and “FIG. 1 illustrates graphically how tax legislation/tax rules 10 are broken down into a completeness graph 12 and a tax calculation graph 14. In one aspect of the invention, tax legislation or rules 10 are parsed or broken into various topics. For example, there may be nearly one hundred topics that need to be covered for completing a federal tax return. When one considers both federal and state tax returns, there can be well over one hundred tax topics that need to be covered. When tax legislation or tax rules 10 are broken into various topics or sub-topics, in one embodiment of the invention, each particular topic (e.g., topics A, B) may each have their own dedicated completeness graph 12A, 12B and tax calculation graph 14A, 14B as seen in FIG. 1” (Goldman Column 10 Line 64 to Column 11 Line 9) and “There may be many different schemas 44 depending on the different tax jurisdiction. For example, Country A may have a tax schema 44 that varies from Country B. Different regions or states within a single country may even have different schemas 44. The systems and methods described herein are not limited to a particular schema 44 implementation. 
The schema 44 may contain all the data fields required to prepare and file a tax return with a government taxing authority” Goldman Column 15 Lines 40-48); determining a set of action items that maximizes the outcome objective while constraining the outcome objective to the set of government laws (“The predictive model generates as output(s) one or more predicted tax matters which are determined to be likely to be relevant to the taxpayer. The system may then determine tax questions to present to the user based at least in part upon the predicted tax matters determined by the predictive model” (Goldman Abstract) “For example, AGI is a re-occurring tax concept that occurs in many places in the tax code. AGI is used not only for the mathematical computation of taxes is also used, for example, to determine eligibility of certain tax deductions and credits. Thus, the AGI node is common to both the completion graph 12 and the tax calculation graph 14” Goldman Column 14 Lines 47-51); and generating a task list comprising the set of action items, wherein completion of the task list realizes the set of goals; and causing display of the task list comprising the set of action items by the display (“The computing device executes a user interface manager configured to receive the one or more suggestions and present to a user one or more questions based on the one or more suggestions via a user interface, wherein a user response to the one or more questions is input to the shared data store. The user interface manager is configured to generate and display a question screen to the user. The question screen includes a question for the user requesting tax data and is also configured to receive the tax data from the user in the form of input from the user. The user interface manager which receives the suggestion(s) selects one or more suggested questions to be presented to a user. 
Alternatively, the user interface manager may ignore the suggestion(s) and present a different question or prompt to the user” (Goldman Column 1 Lines 60-67) and “In additional aspects of the method for generating the database of tax correlation data for the statistical/life knowledge module, the method may utilize a training algorithm to determine the correlation between the taxpayer attribute and the tax related aspect. The training algorithm learns as it analyzes the data records, and uses the learned knowledge in analyzing additional data records accessed by the computing device. The training algorithm also trains future versions of the tax return preparation application to alter the user experience by modifying the content of tax questions and order of tax questions presented to a user based on taxpayer correlations and the quantitative relevancy scores. In another aspect, the method utilizes a scoring algorithm to determine the quantitative relevancy score” (Goldman Column 4 Lines 4-17) and “One embodiment of the present invention is directed to methods of using one or more predictive models for determining the relevancy and prioritizing the tax matters presented to a user in preparing an electronic tax return using the computerized tax return preparation system (such as by using the predictive models to determine suggested tax matters that are likely relevant to the taxpayer). The method of using predictive models may be alternative to, or in addition to, the methods of determining relevancy and prioritizing tax matters using the statistical/life knowledge module, as described above. 
As used herein, the term “predictive model” means an algorithm utilizing as input(s) taxpayer data comprising at least one of personal data and tax data regarding the particular taxpayer, and configured to generate as output(s) one or more tax matters (as defined above), which are predicted to be relevant to the taxpayer, the algorithm created using at least one of the predictive modeling techniques selected from the group consisting of: logistic regression; naive bayes; k-means classification; K-means clustering; other clustering techniques; k-nearest neighbor; neural networks; decision trees; random forests; boosted trees; k-nn classification; kd trees; generalized linear models; support vector machines; and substantial equivalents thereof. The algorithm may also be selected from any subgroup of these techniques, such as at least one of the predictive modeling techniques selected from the group consisting of decision trees; k-means classification; and support vector machines. The predictive model may be based on any suitable data, such as previously filed tax returns, user experiences with tax preparation applications, financial data from any suitable source, demographic data from any suitable source, and the like. Similar to the method using the statistical/life knowledge module, this method allows the system to obtain the required tax data for the taxpayer in a more efficient and tailored fashion for the particular taxpayer” (Goldman Column 5 Lines 3-40) and “A user interface presentation 84 may be pre-programmed interview screens that can be selected and provided to the generator element 85 for providing the resulting user interface presentation 84 or content or sequence of user interface presentations 84 to the user. 
User interface presentations 84 may also include interview screen templates, which are blank or partially completed interview screens that can be utilized by the generation element 85 to construct a final user interface presentation 84 on the fly during runtime” Goldman Column 28 Lines 41-49). Goldman does not teach but Leitner teaches: wherein the first sublayer is only accessible by the {…..} module (“Stackable execution elements such as DLLs can be deployed to various portions of the data processing component 204 for processing of credit data. The data processing component 204 can process the credit data through each of the execution elements until a set of result data is obtained. Note that each execution element can apply filters, routines, methods, techniques, logic, selection, assessment, or analysis as required” Leitner Pgh. [0121]); storing, at a primary layer, {…..} module {…..}; storing, at a first sublayer of the primary layer, {…..}; obtaining, by {…..} module at the primary layer, {…..} from the first sublayer; generating, by {…..} module at the primary layer, {…..}; obtaining, by {…..} module from the {…..} module at the primary layer, {…..}; obtaining, by {…..} module and from a second sublayer, {…..}; determining, by {…..} module at the primary layer, {…..}; obtaining, by {…..} module at the primary layer, {…..} (“In the embodiment shown in FIG. 2, an Autopilot component 202 can include sub-components such as a criteria graphical user interface 210, a data access layer 212, a relational database management (RDBMS) system schema 214, metadata 216, a criteria/attribute translator 218, a code generator/compiler 220, and a runtime component 222” (Leitner Pgh. [0061]) and “In the embodiment shown in FIG. 2, a data access layer 212 can be an internal component that can process communications, such as messages, to and from the criteria graphical user interface 210. 
The data access layer can also extract and populate data received via the interface 210 into an associated memory 118, database or data storage device” Leitner Pgh. [0063]); and wherein the second sublayer is only accessible by the {…..} module (“Stackable execution elements such as DLLs can be deployed to various portions of the data processing component 204 for processing of credit data. The data processing component 204 can process the credit data through each of the execution elements until a set of result data is obtained. Note that each execution element can apply filters, routines, methods, techniques, logic, selection, assessment, or analysis as required” Leitner Pgh. [0121]). It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Leitner’s teachings, in order “to assure that it will provide the desired data subsets, modeling, formatting and testing of programming that creates the output results in desired form” Leitner Pgh. [0063]. Goldman does not teach but Sowder teaches: obtaining, by an insight module from the calculation module {…..}, the user profile; determining, by the insight module {…..}, insights from the user profile and the opportunity data, {…..} (“In another embodiment, the financial goal visualization system provider device may retrieve information from one or more user accounts of the user, use that information to determine the plurality of financial goals that are personalized to the user, and at block 102 the device may send those financial goals over the network to a user device to provide the plurality of financial goals to the user” Sowder Pgh. [0024] and “At block 114, the financial goal visualization system provider device retrieves financial data from the one or more user accounts over the network from, for example, associated account provider devices” Sowder Pgh.
[0063]); obtaining, by the insight module {…..}, opportunity data including opportunities to achieve potential user goals; {…..}; determining, by the insight module {…..}, insights from the user profile and the opportunity data, {…..} (“In the examples provided below, the financial goals provided to the user include an education financial goal, a vacation financial goal, and a product financial goal, but the present disclosure is not limited to these examples, and a variety of financial goals know in the art will fall within its scope. Those financial goals may be the predetermined financial goals applicable to a variety of different users, or may be determined for a specific user from information retrieved from their user account (e.g., purchases for a child may indicate that an education financial goal is appropriate for the user, purchases associated with previous vacation spending may be used to determine a vacation financial goal appropriate for the user, product purchases older than a certain age may indicate that a product financial goal is appropriate for the user, etc.)” Sowder Pgh. [0025]); and obtaining, by a goal module {…..}, a set of goals from the user; generating an outcome objective based on the set of goals from the user and the insights (“Upon selection of one of the financial goal selectors 206, 208, or 210, the user may provide financial goal details for the determination of financial sub-goals (which are themselves financial goals) and/or financial goal statuses for the selected financial goal, associate images with those financial Sub-goals and/or financial goal statuses, and receive savings plans for the selected financial goal and/or its sub-goals in blocks 104,106, 108, 110, and 112 of the method 100” Sowder Pgh. 
[0027] and “In an embodiment, upon user selection of a financial goal, the financial goal visualization system provider may request a plurality of information about the selected financial goal from the user, and the user may provide that information in order to provide financial goal details for the selected financial goal to the financial goal visualization system provider” Sowder Pgh. [0029]). It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Sowder’s teachings, in order “to determine whether actions taken keep the financial goal on track” Sowder Pgh. [0004]. With respect to claim 10: Goldman does not teach but Sowder teaches: providing the guidance automatically by a graphical user interface on a mobile device, wherein the guidance comprises one or more insights of the insights (“As a result, the system and method of the present disclosure provides a visual way to approach the problem of saving for financial goals, as a user may associate Subsets of images with different financial goals, and the user will be presented with an image selected from those Subsets that allows the user to visualize the status of each financial goal and their progress toward them” Sowder Pgh. [0022]). It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Sowder’s teachings, in order “to determine whether actions taken keep the financial goal on track” Sowder Pgh. [0004].
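Claim 9 recites determining a set of action items that maximizes the outcome objective while constraining it to the set of government laws, then generating a task list from the selection. A minimal sketch of that kind of constrained selection follows; the action names, benefit figures, and contribution caps are hypothetical stand-ins, not the claimed implementation:

```python
# Candidate actions, each with an estimated benefit toward the outcome
# objective and a category whose cap models the statutory constraint.
# All figures are hypothetical.
CANDIDATES = [
    {"action": "contribute_to_ira", "category": "ira", "amount": 7_000, "benefit": 1_400},
    {"action": "extra_ira_contribution", "category": "ira", "amount": 3_000, "benefit": 400},
    {"action": "fund_hsa", "category": "hsa", "amount": 4_150, "benefit": 900},
]

# Hypothetical per-category limits standing in for the "set of government laws".
LIMITS = {"ira": 7_000, "hsa": 4_150}

def build_task_list(candidates, limits):
    """Greedily keep the highest-benefit actions whose running per-category
    totals stay within the caps; return the kept actions as a task list."""
    spent = {category: 0 for category in limits}
    tasks = []
    for item in sorted(candidates, key=lambda c: c["benefit"], reverse=True):
        category = item["category"]
        if spent[category] + item["amount"] <= limits[category]:
            spent[category] += item["amount"]
            tasks.append(item["action"])
    return tasks

print(build_task_list(CANDIDATES, LIMITS))
# The over-cap IRA top-up is excluded; the two in-limit actions remain.
```

Completing every item on the returned list realizes the modeled goals, mirroring the claim language “wherein completion of the task list realizes the set of goals.”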
With respect to claim 11: Goldman teaches: wherein the set of goals includes a tax withholding and a tax refund minimum (“The tax calculation engine 50 may calculate a final tax due amount, a final refund amount, or one or more intermediary calculations (e.g., taxable income, AGI, earned income, un-earned income, total deductions, total credits, alternative minimum tax (AMT) and the like)” (Goldman Column 17 Lines 42-47) and “For instance, the services engine 90 may identify that a taxpayer has incurred penalties for underpayment of estimates taxes and may recommend to the taxpayer to increase his or her withholdings or estimated tax payments for the following tax year” Goldman Column 29 Lines 23-26). With respect to claim 12: Goldman teaches: wherein the outcome objective maximizes a tax refund while constraining the tax withholding below a specified value (“The tax calculation engine 50 may calculate a final tax due amount, a final refund amount, or one or more intermediary calculations (e.g., taxable income, AGI, earned income, un-earned income, total deductions, total credits, alternative minimum tax (AMT) and the like)” (Goldman Column 17 Lines 42-47) and “For instance, the services engine 90 may identify that a taxpayer has incurred penalties for underpayment of estimates taxes and may recommend to the taxpayer to increase his or her withholdings or estimated tax payments for the following tax year” Goldman Column 29 Lines 23-26). With respect to claim 13: Goldman teaches: determining a current state of the tax refund; estimating an end state of the tax refund; determining an accuracy of the end state of the tax refund; and providing further guidance to the user based on the end state of the tax refund (The import module 89 may also present prompts or questions to the user via a user interface presentation 84 generated by the user interface manager 82. For example, a question may ask the user to confirm the accuracy of the data.
The user may also be given the option of whether or not to import the data from the data sources 48 (Goldman Column 16 Lines 31-36) and “It should also be understood that the estimation module 110 may rely on one or more inputs to arrive at an estimated value. For example, the estimation module 110 may rely on a combination of prior tax return data 116 in addition to online resources 118 to estimate a value. This may result in more accurate estimations by relying on multiple, independent sources of information. The UI control 80 may be used in conjunction with the estimation module 110 to select those sources of data to be used by the estimation module 110. For example, user input 114 will require input by the user of data using a user interface presentation 84. The UI control 80 may also be used to identify and select prior tax returns 116. Likewise, user names and passwords may be needed for online resources 118 and third party information 120 in which case UI control 80 will be needed to obtain this information from the user” (Goldman Column 30 Lines 37-52). With respect to claim 14: Goldman does not teach but Sowder teaches: wherein the set of goals further comprises allocating funds to various assets held by the user (“In an embodiment, a selected or provided savings plan may be used by the financial goal visualization system provider to set up automatic transfers of money between user accounts in order to save for the financial goal as defined by the user according to the savings plan selected. In an embodiment, savings plans for a plurality of financial goals may be prioritized such that the financial goal visualization system provider may determine which of the plurality of financial goals to save for in the event there are limited funds in the user accounts” Sowder Pgh. [0039]). 
It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Sowder’s teachings, in order “to determine whether actions taken keep the financial goal on track” Sowder Pgh. [0004]. With respect to claim 16: Goldman teaches: A method of determining an optimal outcome, the method comprising: storing, {…..}, a calculation module configured to generate a user profile of a user; storing {…..}, data points and a set of government laws, wherein the data points are indicative of the user and a user region associated with the user, and wherein the set of government laws comprise federal and state laws based on the user region, {…..}; obtaining, by the calculation module {…..}, the data points and a set of government laws from the first sublayer {…..}; generating, by the calculation module {…..}, the user profile comprising the data points and the set of government laws (“Referring back to FIG. 12, after initiating the tax preparation software 100, the tax preparation software 100, in operation 1100, gathers or imports tax related data from the one or more data sources 48 as illustrated in FIGS. 7 and 8. Note that the gathering of tax related data from the one or more data sources 48 may occur at the time the tax preparation software 100 is run. Alternatively, the gathering of tax related data from the one or more data sources 48 may occur over a period of time. For example, data sources 48 may be periodically queried over time (e.g., during a tax reporting year) whereby updated information is stored in a database (not shown) or the like that is then accessed by the tax preparation software 100.
This option may improve the efficiency and speed of tax return preparation as the information is already available” (Goldman Column 33 Lines 23-37) and “The tax return preparation system accesses taxpayer data comprising personal data and/or tax data regarding the taxpayer by any of the means described below, such as from prior year tax returns, third party databases, user inputs, etc. The system then generates a taxpayer data profile using the taxpayer data. For instance, the taxpayer data profile may include the taxpayer's age, occupation, place of residence, estimated income, etc.” (Goldman Column 2 Lines 34-42) and “FIG. 1 illustrates graphically how tax legislation/tax rules 10 are broken down into a completeness graph 12 and a tax calculation graph 14. In one aspect of the invention, tax legislation or rules 10 are parsed or broken into various topics. For example, there may be nearly one hundred topics that need to be covered for completing a federal tax return. When one considers both federal and state tax returns, there can be well over one hundred tax topics that need to be covered. When tax legislation or tax rules 10 are broken into various topics or sub-topics, in one embodiment of the invention, each particular topic (e.g., topics A, B) may each have their own dedicated completeness graph 12A, 12B and tax calculation graph 14A, 14B as seen in FIG. 1” (Goldman Column 10 Line 64 to Column 11 Line 9) and “There may be many different schemas 44 depending on the different tax jurisdiction. For example, Country A may have a tax schema 44 that varies from Country B. Different regions or states within a single country may even have different schemas 44. The systems and methods described herein are not limited to a particular schema 44 implementation. 
The schema 44 may contain all the data fields required to prepare and file a tax return with a government taxing authority” Goldman Column 15 Lines 40-48); determining a set of action items that maximizes the outcome objective while constraining the outcome objective to the set of government laws (“The predictive model generates as output(s) one or more predicted tax matters which are determined to be likely to be relevant to the taxpayer. The system may then determine tax questions to present to the user based at least in part upon the predicted tax matters determined by the predictive model” (Goldman Abstract) “For example, AGI is a re-occurring tax concept that occurs in many places in the tax code. AGI is used not only for the mathematical computation of taxes is also used, for example, to determine eligibility of certain tax deductions and credits. Thus, the AGI node is common to both the completion graph 12 and the tax calculation graph 14” Goldman Column 14 Lines 47-51); presenting the set of action items to the user by a graphical user interface (“The computing device executes a user interface manager configured to receive the one or more suggestions and present to a user one or more questions based on the one or more suggestions via a user interface, wherein a user response to the one or more questions is input to the shared data store. The user interface manager is configured to generate and display a question screen to the user” Goldman Column 1 Lines 60-67); and determining a current state of the set of goals; estimating an end state of the set of goals; determining an accuracy of the end state of the set of goals; causing display of the accuracy by the display; and providing further guidance to the user based on the end state of the set of goals and the accuracy of the end state of the set of goals (“The import module 89 may also present prompts or questions to the user via a user interface presentation 84 generated by the user interface manager 82. 
For example, a question may ask the user to confirm the accuracy of the data. The user may also be given the option of whether or not to import the data from the data sources 48 (Goldman Column 16 Lines 31-36) and “It should also be understood that the estimation module 110 may rely on one or more inputs to arrive at an estimated value. For example, the estimation module 110 may rely on a combination of prior tax return data 116 in addition to online resources 118 to estimate a value. This may result in more accurate estimations by relying on multiple, independent sources of information. The UI control 80 may be used in conjunction with the estimation module 110 to select those sources of data to be used by the estimation module 110. For example, user input 114 will require input by the user of data using a user interface presentation 84. The UI control 80 may also be used to identify and select prior tax returns 116. Likewise, user names and passwords may be needed for online resources 118 and third party information 120 in which case UI control 80 will be needed to obtain this information from the user” (Goldman Column 30 Lines 37-52) and “A user interface presentation 84 may be pre-programmed interview screens that can be selected and provided to the generator element 85 for providing the resulting user interface presentation 84 or content or sequence of user interface presentations 84 to the user. User interface presentations 84 may also include interview screen templates, which are blank or partially completed interview screens that can be utilized by the generation element 85 to construct a final user interface presentation 84 on the fly during runtime” (Goldman Column 28 Lines 41-49) and “Still referring to FIG. 9, another attribute 122 may include a confirmation flag 128 that indicates that a taxpayer or user of the tax preparation software 100 has confirmed a particular entry. 
For example, confirmed entries may be given an automatic “high” confidence value as these are finalized by the taxpayer. Another attribute 122 may include a range of values 130 that expresses a normal or expected range of values for the particular data field. The range of values 130 may be used to identify erroneous estimates or data entry that appear to be incorrect because they fall outside an intended range of expected values. Some estimates, such as responses to Boolean expressions, do not have a range of values 130. In this example, for example, if the number of estimates dependents is more than five (5), the tax logic agent 60 may incorporate into the rules engine 64 attribute range information that can be used to provide non-binding suggestions to the UI control 80 recommending a question to ask the taxpayer about the high number of dependents (prompting user with “are you sure you have 7 dependents”). Statistical data may also be used instead of specific value ranges to identify suspect data. For example, standard deviation may be used instead of a specific range. When a data field exhibits statistical deviation beyond a threshold level, the rules engine 64 may suggest a prompt or suggestion 66 to determine whether the entry is a legitimate or not. Additional details regarding methods and systems that are used to identify suspect electronic tax data may be found in U.S. Pat. No. 8,346,635 which is incorporated by reference herein” (Goldman Column 31 Lines 33-61) and “The confidence level indicator 132 may take a number of different forms, however. For example, the confidence level indicator 132 may be in the form of a gauge or the like that such as that illustrated in FIG. 11. In the example, of FIG. 
11, the confidence level indicator 132 is indicated as being “low.” Of course, the confidence level indicator 132 may also appear as a percentage (e.g., 0% being low confidence, 100% being high confidence) or as a text response (e.g., “low,” “medium,” and “high” or the like). Other graphic indicia may also be used for the confidence level indicator 132. For example, the color of a graphic may change or the size of the graphic may change as a function of level of confidence. Referring to FIG. 11, in this instance, the user interface presentation 84 may also include hyperlinked tax topics 136 that are the primary sources for the low confidence in the resulting tax calculation. For example, the reason that the low confidence is given is that there is low confidence in the amount listed on the taxpayer's W-2 form that has been automatically imported into the shared data store 42. This is indicated by the “LOW” designation that is associated with the “earned income” tax topic. In addition, in this example, there is low confidence in the amount of itemized deductions being claimed by a taxpayer. This is seen with the “LOW” designation next to the “deductions” tax topic. Hyperlinks 136 are provided on the screen so that the user can quickly be taken to and address the key drivers in the uncertainty in the calculated tax liability” Goldman Column 32 Line 44 to Column 33 Line 2). Goldman does not teach but Leitner teaches: wherein the first sublayer is only accessible by the {…..} module (“Stackable execution elements such as DLLs can be deployed to various portions of the data processing component 204 for processing of credit data. The data processing component 204 can process the credit data through each of the execution elements until a set of result data is obtained. Note that each execution element can apply filters, routines, methods, techniques, logic, selection, assessment, or analysis as required” Leitner Pgh. 
[0121]); storing, at a primary layer, {…..} module {…..}; storing, at a first sublayer of the primary layer, {…..}; obtaining, by {…..} module at the primary layer, {…..} from the first sublayer; generating, by {…..} module at the primary layer, {…..}; obtaining, by {…..} module from the {…..} module at the primary layer, {…..}; obtaining, by {…..} module and from a second sublayer, {…..}; determining, by {…..} module at the primary layer, {…..}; obtaining, by {…..} module at the primary layer, {…..} (“In the embodiment shown in FIG. 2, an Autopilot component 202 can include sub-components such as a criteria graphical user interface 210, a data access layer 212, a relational database management (RDBMS) system schema 214, metadata 216, a criteria/attribute translator 218, a code generator/compiler 220, and a runtime component 222” (Leitner Pgh. [0061]) and “In the embodiment shown in FIG. 2, a data access layer 212 can be an internal component that can process communications, such as messages, to and from the criteria graphical user interface 210. The data access layer can also extract and populate data received via the interface 210 into an associated memory 118, database or data storage device” Leitner Pgh. [0063]); and wherein the second sublayer is only accessible by the {…..} module (“Stackable execution elements such as DLLs can be deployed to various portions of the data processing component 204 for processing of credit data. The data processing component 204 can process the credit data through each of the execution elements until a set of result data is obtained. Note that each execution element can apply filters, routines, methods, techniques, logic, selection, assessment, or analysis as required” Leitner Pgh. [0121]). 
It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Leitner’s teachings, in order “to assure that it will provide the desired data subsets, modeling, formatting and testing of programming that creates the output results in desired form” Leitner Pgh. [0063]. Goldman does not teach but Sowder teaches: obtaining, by an insight module from the calculation module {…..}, the user profile; determining, by the insight module {…..}, insights from the user profile and the opportunity data, {…..} (“In another embodiment, the financial goal visualization system provider device may retrieve information from one or more user accounts of the user, use that information to determine the plurality of financial goals that are personalized to the user, and at block 102 the device may send those financial goals over the network to a user device to provide the plurality of financial goals to the user” Sowder Pgh. [0024] and “At block 114, the financial goal visualization system provider device retrieves financial data from the one or more user accounts over the network from, for example, associated account provider devices” Sowder Pgh. [0063]); obtaining, by the insight module {…..}, opportunity data including opportunities to achieve potential user goals; {…..}; determining, by the insight module {…..}, insights from the user profile and the opportunity data, {…..} (“In the examples provided below, the financial goals provided to the user include an education financial goal, a vacation financial goal, and a product financial goal, but the present disclosure is not limited to these examples, and a variety of financial goals know in the art will fall within its scope. 
Those financial goals may be the predetermined financial goals applicable to a variety of different users, or may be determined for a specific user from information retrieved from their user account (e.g., purchases for a child may indicate that an education financial goal is appropriate for the user, purchases associated with previous vacation spending may be used to determine a vacation financial goal appropriate for the user, product purchases older than a certain age may indicate that a product financial goal is appropriate for the user, etc.)” Sowder Pgh. [0025]); and obtaining, by a goal module {…..}, a set of goals from the user; generating an outcome objective based on the set of goals from the user and the insights (“Upon selection of one of the financial goal selectors 206, 208, or 210, the user may provide financial goal details for the determination of financial sub-goals (which are themselves financial goals) and/or financial goal statuses for the selected financial goal, associate images with those financial Sub-goals and/or financial goal statuses, and receive savings plans for the selected financial goal and/or its sub-goals in blocks 104,106, 108, 110, and 112 of the method 100” Sowder Pgh. [0027] and “In an embodiment, upon user selection of a financial goal, the financial goal visualization system provider may request a plurality of information about the selected financial goal from the user, and the user may provide that information in order to provide financial goal details for the selected financial goal to the financial goal visualization system provider” Sowder Pgh. [0029]). It would have been obvious to one of ordinary skill in the art to have modified Goldman’s teachings to incorporate Sowder’s teachings, in order “to determine whether actions taken keep the financial goal on track” Sowder Pgh. [0004]. 
With respect to claim 17: Goldman teaches: wherein the set of action items is determined by at least one statistical or machine learning algorithm analyzing the outcome objective (“In additional aspects of the method for generating the database of tax correlation data for the statistical/life knowledge module, the method may utilize a training algorithm to determine the correlation between the taxpayer attribute and the tax related aspect. The training algorithm learns as it analyzes the data records, and uses the learned knowledge in analyzing additional data records accessed by the computing device. The training algorithm also trains future versions of the tax return preparation application to alter the user experience by modifying the content of tax questions and order of tax questions presented to a user based on taxpayer correlations and the quantitative relevancy scores. In another aspect, the method utilizes a scoring algorithm to determine the quantitative relevancy score” (Goldman Column 4 Lines 4-17) and “One embodiment of the present invention is directed to methods of using one or more predictive models for determining the relevancy and prioritizing the tax matters presented to a user in preparing an electronic tax return using the computerized tax return preparation system (such as by using the predictive models to determine suggested tax matters that are likely relevant to the taxpayer). The method of using predictive models may be alternative to, or in addition to, the methods of determining relevancy and prioritizing tax matters using the statistical/life knowledge module, as described above. 
As used herein, the term “predictive model” means an algorithm utilizing as input(s) taxpayer data comprising at least one of personal data and tax data regarding the particular taxpayer, and configured to generate as output(s) one or more tax matters (as defined above), which are predicted to be relevant to the taxpayer, the algorithm created using at least one of the predictive modeling techniques selected from the group consisting of: logistic regression; naive bayes; k-means classification; K-means clustering; other clustering techniques; k-nearest neighbor; neural networks; decision trees; random forests; boosted trees; k-nn classification; kd trees; generalized linear models; support vector machines; and substantial equivalents thereof. The algorithm may also be selected from any subgroup of these techniques, such as at least one of the predictive modeling techniques selected from the group consisting of decision trees; k-means classification; and support vector machines. The predictive model may be based on any suitable data, such as previously filed tax returns, user experiences with tax preparation applications, financial data from any suitable source, demographic data from any suitable source, and the like. Similar to the method using the statistical/life knowledge module, this method allows the system to obtain the required tax data for the taxpayer in a more efficient and tailored fashion for the particular taxpayer” Goldman Column 5 Lines 3-40). 
With respect to claim 18: Goldman teaches: wherein the set of goals comprises one of a realizing a minimum tax refund, withholding an amount from income, maximizing use of a health savings account, allocating income to a retirement fund, and allocating income to various assets (“The tax calculation engine 50 may calculate a final tax due amount, a final refund amount, or one or more intermediary calculations (e.g., taxable income, AGI, earned income, un-earned income, total deductions, total credits, alternative minimum tax (AMT) and the like) Goldman Column 17 Lines 42-47). With respect to claim 19: Goldman teaches: comparing the accuracy to a threshold value: and if the accuracy is less than the threshold value, determining an updated current state of a tax refund; estimating an updated end state of the tax refund; determining an updated accuracy of the end state of the tax refund; and providing additional further guidance to the user based on the updated end state of the tax refund and the updated accuracy of the updated end state of the tax refund (“The import module 89 may also present prompts or questions to the user via a user interface presentation 84 generated by the user interface manager 82. For example, a question may ask the user to confirm the accuracy of the data. The user may also be given the option of whether or not to import the data from the data sources 48 (Goldman Column 16 Lines 31-36) and “It should also be understood that the estimation module 110 may rely on one or more inputs to arrive at an estimated value. For example, the estimation module 110 may rely on a combination of prior tax return data 116 in addition to online resources 118 to estimate a value. This may result in more accurate estimations by relying on multiple, independent sources of information. The UI control 80 may be used in conjunction with the estimation module 110 to select those sources of data to be used by the estimation module 110. 
For example, user input 114 will require input by the user of data using a user interface presentation 84. The UI control 80 may also be used to identify and select prior tax returns 116. Likewise, user names and passwords may be needed for online resources 118 and third party information 120 in which case UI control 80 will be needed to obtain this information from the user” (Goldman Column 30 Lines 37-52) and “A user interface presentation 84 may be pre-programmed interview screens that can be selected and provided to the generator element 85 for providing the resulting user interface presentation 84 or content or sequence of user interface presentations 84 to the user. User interface presentations 84 may also include interview screen templates, which are blank or partially completed interview screens that can be utilized by the generation element 85 to construct a final user interface presentation 84 on the fly during runtime” (Goldman Column 28 Lines 41-49) and “Still referring to FIG. 9, another attribute 122 may include a confirmation flag 128 that indicates that a taxpayer or user of the tax preparation software 100 has confirmed a particular entry. For example, confirmed entries may be given an automatic “high” confidence value as these are finalized by the taxpayer. Another attribute 122 may include a range of values 130 that expresses a normal or expected range of values for the particular data field. The range of values 130 may be used to identify erroneous estimates or data entry that appear to be incorrect because they fall outside an intended range of expected values. Some estimates, such as responses to Boolean expressions, do not have a range of values 130. 
In this example, for example, if the number of estimates dependents is more than five (5), the tax logic agent 60 may incorporate into the rules engine 64 attribute range information that can be used to provide non-binding suggestions to the UI control 80 recommending a question to ask the taxpayer about the high number of dependents (prompting user with “are you sure you have 7 dependents”). Statistical data may also be used instead of specific value ranges to identify suspect data. For example, standard deviation may be used instead of a specific range. When a data field exhibits statistical deviation beyond a threshold level, the rules engine 64 may suggest a prompt or suggestion 66 to determine whether the entry is a legitimate or not. Additional details regarding methods and systems that are used to identify suspect electronic tax data may be found in U.S. Pat. No. 8,346,635 which is incorporated by reference herein” (Goldman Column 31 Lines 33-61) and “The confidence level indicator 132 may take a number of different forms, however. For example, the confidence level indicator 132 may be in the form of a gauge or the like that such as that illustrated in FIG. 11. In the example, of FIG. 11, the confidence level indicator 132 is indicated as being “low.” Of course, the confidence level indicator 132 may also appear as a percentage (e.g., 0% being low confidence, 100% being high confidence) or as a text response (e.g., “low,” “medium,” and “high” or the like). Other graphic indicia may also be used for the confidence level indicator 132. For example, the color of a graphic may change or the size of the graphic may change as a function of level of confidence. Referring to FIG. 11, in this instance, the user interface presentation 84 may also include hyperlinked tax topics 136 that are the primary sources for the low confidence in the resulting tax calculation. 
For example, the reason that the low confidence is given is that there is low confidence in the amount listed on the taxpayer's W-2 form that has been automatically imported into the shared data store 42. This is indicated by the “LOW” designation that is associated with the “earned income” tax topic. In addition, in this example, there is low confidence in the amount of itemized deductions being claimed by a taxpayer. This is seen with the “LOW” designation next to the “deductions” tax topic. Hyperlinks 136 are provided on the screen so that the user can quickly be taken to and address the key drivers in the uncertainty in the calculated tax liability” Goldman Column 32 Line 44 to Column 33 Line 2). With respect to claim 20: Goldman teaches: wherein the method further comprises generating a task list comprising the set of action items, and wherein completion of the task list completes the set of goals (“Step 207 may include transmitting, via the one or more processors, a notification to the user device, wherein the notification is indicative of a suggested plan to achieve the first user financial goal, and wherein the notification is based on the determined activity and the first user financial goal. The suggested plan may identify at least one of a suggested duration of time or a suggested number of transactions to reach the first user financial goal” Goldman Column 6 Lines 60-67).

Allowable Subject Matter

Claims 21-22 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Response to Arguments

Applicant's arguments filed 12/19/25 have been fully considered but they are not persuasive.

35 USC § 101

The Applicant states that “The invention is not directed to an abstract concept” (page 11) and that the Claims are “not simply data analysis utilizing known techniques but a technical solution to a technical problem” (page 13). 
The Examiner disagrees with these statements because the claims improve only the abstract idea itself: they present a business solution to the business problem of optimizing the tax filing process. The applicant has not shown how the claims improve a computer or other technology, invoke a particular machine, transform matter, or provide more than a general link between the abstract idea and the technology. See MPEP 2106.05(a)-(c) and (e). The Examiner also disagrees with the statement that “The claim limitations cannot reasonable be considered a Mental Process” (page 13). The claim amendments further define and recite a narrower abstract idea, one that can be performed with pen and paper. The data structures are conventional, and are arranged and used in a conventional manner. The Claims do not provide an improvement over prior systems and only add details to the abstract idea; they do not address a problem particular to computer networks and merely apply the abstract idea on generic computer components. The amended claims make the abstract idea more specific, and optimizing the tax filing process is not an unconventional activity. Applicant’s remarks on why these limitations provide a practical application fail to identify any technical improvement in the specification provided by the claimed machine learning system; therefore the claims do not recite an inventive concept or significantly more.

35 USC § 103

The amended claim language is taught in the references of record as indicated above in the Office action. Dependent claims 21-22 would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARLA HUDSON whose telephone number is (571)272-1063. The examiner can normally be reached M-F 9:30 a.m. - 5:30 p.m. ET. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bennett Sigmond can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.H./Examiner, Art Unit 3694 /BENNETT M SIGMOND/Supervisory Patent Examiner, Art Unit 3694

Prosecution Timeline

Sep 26, 2023
Application Filed
May 01, 2025
Non-Final Rejection — §101, §103, §112
Jul 22, 2025
Interview Requested
Jul 30, 2025
Applicant Interview (Telephonic)
Jul 31, 2025
Examiner Interview Summary
Aug 08, 2025
Response Filed
Sep 20, 2025
Final Rejection — §101, §103, §112
Dec 30, 2025
Request for Continued Examination
Dec 31, 2025
Response after Non-Final Action
Jan 08, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561744
DIFFERENTIAL EVOLUTION ALGORITHM TO ALLOCATE RESOURCES
2y 5m to grant Granted Feb 24, 2026
Patent 12530723
Optimization and Prioritization of Account Directed Distributions in an Asset Management System
2y 5m to grant Granted Jan 20, 2026
Patent 12469033
SERVICES FOR ENTITY TRUST CONVEYANCES
2y 5m to grant Granted Nov 11, 2025
Patent 12417504
CONTROL METHOD, CONTROLLER, DATA STRUCTURE, AND POWER TRANSACTION SYSTEM
2y 5m to grant Granted Sep 16, 2025
Patent 12387197
SECURE COMMUNICATIONS BETWEEN FUELING STATION COMPONENTS
2y 5m to grant Granted Aug 12, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
82%
With Interview (+25.5%)
2y 6m
Median Time to Grant
High
PTA Risk
Based on 114 resolved cases by this examiner. Grant probability derived from career allow rate.
