Prosecution Insights
Last updated: April 19, 2026
Application No. 18/599,017

GENERATION OF ENTITY PLANS COMPRISING GOALS AND NODES

Non-Final OA: §101, §102, §103
Filed: Mar 07, 2024
Examiner: HATCHER, DEIRDRE D
Art Unit: 3625
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Iterate Studio Inc.
OA Round: 1 (Non-Final)
Grant Probability: 28% (At Risk)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 3y 10m
Grant Probability With Interview: 53%

Examiner Intelligence

Career Allow Rate: 28% (98 granted / 357 resolved; -24.5% vs TC avg)
Interview Lift: +25.9% among resolved cases with interview (strong +26% lift)
Avg Prosecution: 3y 10m (typical timeline)
Total Applications: 402 across all art units (45 currently pending)
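The headline figures above can be reproduced from the raw counts shown (a hedged sanity check; the tool's exact rounding rules and the assumption that the with-interview probability is the base probability plus the interview lift in percentage points are inferences, not documented behavior):

```python
# Reconstructing the dashboard's headline figures from its raw counts.
# Assumption: "Career Allow Rate" = granted / resolved, displayed rounded,
# and "With Interview" = base grant probability + interview lift (points).
granted, resolved = 98, 357
career_allow_rate = 100 * granted / resolved   # ~27.5%, displayed as 28%

base_probability = 28.0   # dashboard's grant probability for this application
interview_lift = 25.9     # percentage points, per "Interview Lift"
with_interview = base_probability + interview_lift  # ~53.9%, displayed as 53%
```

The small mismatches (27.5% vs 28%, 53.9% vs 53%) suggest the tool rounds, or blends case-specific factors into, the displayed figures.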

Statute-Specific Performance

§101: 40.0% (+0.0% vs TC avg)
§103: 37.1% (-2.9% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 357 resolved cases
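Notably, the four "vs TC avg" deltas are all consistent with a single Tech Center average estimate of 40.0% (a back-of-envelope check, assuming each delta is the examiner's rate minus the TC average):

```python
# Recover the implied Tech Center average for each statute.
# Assumption: delta = examiner_rate - tc_average, so tc_average = rate - delta.
stats = {  # statute: (examiner rate %, delta vs TC avg in points)
    "101": (40.0, 0.0),
    "103": (37.1, -2.9),
    "102": (8.4, -31.6),
    "112": (11.9, -28.1),
}
tc_average = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
# Every statute's implied TC average comes out to 40.0%.
```

That uniform 40.0% baseline suggests the "Tech Center average estimate" is a single aggregate figure rather than a per-statute benchmark.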

Office Action

§101 §102 §103
DETAILED ACTION

This communication is a Non-Final Rejection Office Action in response to the 3/7/2024 filing of Application 18/599,017. Claims 1-20 are now presented. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. When considering subject matter eligibility under 35 U.S.C. 101, in step 1 it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, in step 2A prong 1 it must then be determined whether the claim recites a judicial exception (i.e., law of nature, natural phenomenon, or abstract idea). If the claim recites a judicial exception, under step 2A prong 2 it must additionally be determined whether the claim recites additional elements that integrate the judicial exception into a practical application. If a claim does not integrate the abstract idea into a practical application, under step 2B it must then be determined whether the claim provides an inventive concept.

In the instant case, Claims 1-11 are directed toward a method for displaying, in a user interface, a track that represents a goal; displaying, in the user interface and within the track, a node that represents a desired action or an area of interest associated with the track; displaying, in the user interface with the node, a status of the node; and displaying, in the user interface with the node, a resource associated with the node.
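The step 1 / step 2A / step 2B sequence the Office Action walks through can be sketched as a simple decision procedure (an illustrative sketch of the MPEP 2106 flow only; the function and parameter names are hypothetical, and each boolean stands in for a legal judgment, not a computation):

```python
# Hypothetical sketch of the Alice/Mayo eligibility flow in MPEP 2106.
# Each argument represents the answer to one legal question in the analysis.
def eligible_under_101(statutory_category: bool,
                       recites_judicial_exception: bool,
                       integrates_practical_application: bool,
                       provides_inventive_concept: bool) -> bool:
    if not statutory_category:               # Step 1: process, machine,
        return False                         # manufacture, or composition?
    if not recites_judicial_exception:       # Step 2A, prong 1
        return True
    if integrates_practical_application:     # Step 2A, prong 2
        return True
    return provides_inventive_concept        # Step 2B

# The rejection here effectively answers (yes, yes, no, no) -> ineligible.
```

As applied in this Office Action, the examiner answers step 1 and prong 1 "yes" and prong 2 and step 2B "no", which is why the claims fall out of the flow as ineligible.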
Claims 12-16 are directed toward a system for receiving, via a first user interface, an indication to add a goal to a plan; receiving, via the first user interface, data describing the plan; displaying, in a second user interface, a track that represents the goal; receiving, via the second user interface, an indication to add a node to the track, the node representing a desired action or an area of interest associated with the track; receiving, via a third user interface, data describing the node; displaying, in the second user interface and within the track, the node; and displaying, in the second user interface within the node, one or more icons that represent at least one of: a status of the node; a resource is associated with the node; or a category is associated with the node.

Claims 17-20 are directed toward a method comprising: receiving an indication to add a goal to a plan; receiving data describing the goal; associating the data describing the goal with a track; receiving an indication to add a node within the track, the node representing a desired action or an area of interest associated with the goal; receiving data describing the node; generating the plan based on the goal, the data describing the plan, and the data describing the node; and outputting the plan.

As such, each of the Claims is directed to one of the four statutory categories of invention. MPEP 2106.04 II. A. explains that in step 2A prong 1 Examiners are to determine whether a claim recites a judicial exception. MPEP 2106.04(a) explains that: To facilitate examination, the Office has set forth an approach to identifying abstract ideas that distills the relevant case law into enumerated groupings of abstract ideas. The enumerated groupings are firmly rooted in Supreme Court precedent as well as Federal Circuit decisions interpreting that precedent, as is explained in MPEP § 2106.04(a)(2).
This approach represents a shift from the former case-comparison approach that required examiners to rely on individual judicial cases when determining whether a claim recites an abstract idea. By grouping the abstract ideas, the examiners’ focus has been shifted from relying on individual cases to generally applying the wide body of case law spanning all technologies and claim types. The enumerated groupings of abstract ideas are defined as:

1) Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations (see MPEP § 2106.04(a)(2), subsection I);

2) Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions) (see MPEP § 2106.04(a)(2), subsection II); and

3) Mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion) (see MPEP § 2106.04(a)(2), subsection III).

As per step 2A prong 1 of the eligibility analysis, claim 1 recites the abstract idea of managing goals and implementation plans, which falls into the abstract idea category of certain methods of organizing human activity. The elements of Claim 1 that represent the abstract idea include: A method for providing a plan, the method comprising: displaying a track that represents a goal; displaying, within the track, a node that represents a desired action or an area of interest associated with the track; displaying, with the node, a status of the node; and displaying, with the node, a resource associated with the node. MPEP 2106.04(a)(2) II.
states: The phrase "methods of organizing human activity" is used to describe concepts relating to: fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).

The Supreme Court has identified a number of concepts falling within the "certain methods of organizing human activity" grouping as abstract ideas. In particular, in Alice, the Court concluded that the use of a third party to mediate settlement risk is a ‘‘fundamental economic practice’’ and thus an abstract idea. 573 U.S. at 219–20, 110 USPQ2d at 1982. In addition, the Court in Alice described the concept of risk hedging identified as an abstract idea in Bilski as ‘‘a method of organizing human activity’’. Id. Previously, in Bilski, the Court concluded that hedging is a ‘‘fundamental economic practice’’ and therefore an abstract idea. 561 U.S. at 611–612, 95 USPQ2d at 1010.

In the instant case, the steps of displaying a track that represents a goal; displaying, within the track, a node that represents a desired action or an area of interest associated with the track; displaying, with the node, a status of the node; and displaying, with the node, a resource associated with the node are directed to goal tracking, which is a method of organizing human activity.
The elements of Claim 12 that represent the abstract idea include: A system, comprising: receiving an indication to add a goal to a plan; receiving data describing the plan; displaying, in a second user interface, a track that represents the goal; receiving an indication to add a node to the track, the node representing a desired action or an area of interest associated with the track; receiving data describing the node; displaying, within the track, the node; and displaying, within the node, one or more icons that represent at least one of: a status of the node; a resource is associated with the node; or a category is associated with the node.

In the instant case, the steps of receiving an indication to add a goal to a plan; receiving data describing the plan; displaying, in a second user interface, a track that represents the goal; receiving an indication to add a node to the track, the node representing a desired action or an area of interest associated with the track; receiving data describing the node; displaying, within the track, the node; and displaying, within the node, one or more icons that represent at least one of: a status of the node; a resource is associated with the node; or a category is associated with the node are directed to goal tracking, which is a method of organizing human activity.

Under step 2A prong 2, the examiner must then determine if the recited abstract idea is integrated into a practical application.
MPEP 2106.04 states: Limitations the courts have found indicative that an additional element (or combination of elements) may have integrated the exception into a practical application include:

• An improvement in the functioning of a computer, or an improvement to other technology or technical field, as discussed in MPEP §§ 2106.04(d)(1) and 2106.05(a);
• Applying or using a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, as discussed in MPEP § 2106.04(d)(2);
• Implementing a judicial exception with, or using a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, as discussed in MPEP § 2106.05(b);
• Effecting a transformation or reduction of a particular article to a different state or thing, as discussed in MPEP § 2106.05(c); and
• Applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception, as discussed in MPEP § 2106.05(e).

The courts have also identified limitations that did not integrate a judicial exception into a practical application:

• Merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f);
• Adding insignificant extra-solution activity to the judicial exception, as discussed in MPEP § 2106.05(g); and
• Generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).

In the instant case, this judicial exception is not integrated into a practical application.
In particular, Claims 1 and 12 recite the additional elements of: a computer-implemented method; and a system, comprising: a processing element; and a memory component storing instructions that, when executed by the processing element, cause operations to be performed, the operations comprising the recited steps; and a user interface. However, the computer elements (the computer and a memory component storing instructions that, when executed by the processing element, cause operations to be performed, the operations comprising the abstract idea) are recited at a high level of generality (i.e., as a generic processor performing generic computer functions) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Further, the user interface amounts to a conventional display and does not amount to a technical improvement. Viewing the generic user interface in combination with the generic computer does not add more than viewing the elements individually. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

In step 2B, the examiner must determine whether the claim adds a specific limitation other than what is well-understood, routine, conventional activity in the field - see MPEP 2106.05(d). As discussed with respect to Step 2A Prong Two, the processing circuitry in the claim amounts to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in 2B, i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Further, MPEP 2106.05(a) states that instructions to display two sets of information on a computer display in a non-interfering manner, without any limitations specifying how to achieve the desired result (see Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 1344-45, 127 USPQ2d 1553, 1559-60 (Fed. Cir. 2018)), are not sufficient to show an improvement in computer functionality. The instant case is similar, as the recited user interface does not recite limitations specifying how to achieve the desired result. Viewing the generic user interface in combination with the generic computer does not add more than viewing the elements individually. Accordingly, the additional elements do not provide an inventive concept.

Further, Claims 2-10 and 13-16 further limit the methods of organizing human activity already rejected in the parent claim, but fail to remedy the deficiencies of the parent claim as they do not impose any additional elements that amount to significantly more than the abstract idea itself. Accordingly, the Examiner concludes that there are no meaningful limitations in claims 1-11 or 12-16 that transform the judicial exception into a patent-eligible application such that the claim amounts to significantly more than the judicial exception itself. Claims 17-20 are rejected for similar reasons as claims 1-10. The analysis above applies to all statutory categories of invention. The presentment of claim 1 or 12 otherwise styled as a method, or system, for example, would be subject to the same analysis.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Ali US 20150347987 A1 in view of Matsuoka US 2022/0405687 A1.

As per Claim 1, Ali teaches a computer-implemented method for providing a plan, the method comprising: displaying, in a user interface, a track that represents a goal (see Ali Fig. 10, which displays several interface elements that represent tracks representing goals (items 1006, 1008)); displaying, in the user interface and within the track, a node that represents a desired action or an area of interest associated with the track (see Ali Fig. 10, which displays several interface elements that represent nodes (items 1028, 1030, 1032, 1036)); and displaying, in the user interface with the node, a status of the node (see Ali para. 204, which teaches the goal component may optionally display a progress indicator, such as indicator 1038, associated with a goal or sub-goal).
The progress indicator 1038 indicates how much progress or how close the user is toward accomplishing the goal or sub-goal. Ali does not teach displaying, in the user interface with the node, a resource associated with the node. However, Matsuoka Fig. 5 teaches resources (i.e., Calvin Community College) associated with nodes. Both Ali and Matsuoka are directed to goal display interfaces. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali to include displaying, in the user interface with the node, a resource associated with the node as taught by Matsuoka to provide information to assist a user in achieving their goals (as suggested by para. 127).

As per Claim 2, Ali teaches the computer-implemented method of claim 1, wherein: the user interface is a first user interface; and the computer-implemented method further comprises at least one of: receiving, via a second user interface, an identifier for the plan; or receiving, via the second user interface, a type for the plan. Ali para. 194 teaches a user selects “add new goal” 1004 to open a create goals field. The user may create the goal by entering a textual title or description of the goal. In one embodiment, goals pane 1000 may include “enter title of new goal” 1050. The user selects “enter title of new goal” 1050 to create a title and/or description for a newly created goal.

As per Claim 3, Ali teaches the computer-implemented method of claim 1, further comprising receiving, via the user interface, an indication to add the node to the track prior to displaying the node. Ali para. 196 teaches the user may select add a sub-goal 1010 to create a new sub-goal associated with goal 1006, edit 1012 to edit goal 1006, or delete 1014 to delete goal 1006. The user may create one or more user selected sub-goals for each goal.
The set of sub-goals may include a single sub-goal, two or more sub-goals, as well as no sub-goals. However, in this embodiment, the goals component of the daily digital planner only permits the user to add three (3) sub-goals to each goal.

As per Claim 4, Ali teaches the computer-implemented method of claim 1, further comprising receiving, via the user interface, an indication to add the goal to the plan prior to displaying the track that represents the goal. Ali para. 194 teaches a user selects “add new goal” 1004 to open a create goals field. The user may create the goal by entering a textual title or description of the goal. In one embodiment, goals pane 1000 may include “enter title of new goal” 1050. The user selects “enter title of new goal” 1050 to create a title. Further, para. 196 teaches the user may select add a sub-goal 1010 to create a new sub-goal associated with goal 1006, edit 1012 to edit goal 1006, or delete 1014 to delete goal 1006. The user may create one or more user selected sub-goals for each goal. The set of sub-goals may include a single sub-goal, two or more sub-goals, as well as no sub-goals. However, in this embodiment, the goals component of the daily digital planner only permits the user to add three (3) sub-goals to each goal.

As per Claim 11, Ali teaches the computer-implemented method of claim 1, wherein: the user interface is a first user interface; and the computer-implemented method further comprises at least one of: receiving, via a second user interface, a tag for the plan; receiving, via the second user interface, a name for the plan; or receiving, via the second user interface, a summary of the plan. Ali para. 195 teaches in another embodiment, the user may select the goal from a template list of pre-defined goals. A goal may be any goal selected or created by the user. For example, a goal may be, without limitation, to lose a specific amount of weight, complete a project, buy a home, or any other user created goal.
In this example, the user has created goal 1006 and goal 1008.

Claims 5-7 are rejected under 35 U.S.C. 103 as being unpatentable over Ali US 20150347987 A1 in view of Matsuoka US 2022/0405687 A1 as applied to claim 1, and in further view of Srivastava US 2022/0292393 A1.

As per Claim 5, Ali does not teach the computer-implemented method of claim 1, wherein: the user interface is a first user interface; and the computer-implemented method further comprises: receiving, via a second user interface, entity characteristics of an entity associated with the plan, the entity characteristics including an entity type and entity financial information. However, Srivastava para. 1 teaches an initiative plan can be a formal written document containing goals of an entity, methods for attaining those goals, and a time frame for achievement of the goals. The initiative plan can also describe a nature of the entity, background information on the entity, financial projections of the entity, strategies the entity intends to implement to achieve the goals, and/or the like. Further, para. 20 teaches, as shown by reference number 110, the planning system processes the client data, with a first machine learning model, to determine current state data identifying a current state of the client. The first machine learning model may include an inference engine that utilizes predictive modeling to generate an output identifying a current state of the client. The current state of the client may enable a user (e.g., a consultant) to understand the business nuances of the client. In some implementations, the first machine learning model (e.g., the inference engine) may analyze reports (e.g., annual financial reports) of the client and may integrate with external services (e.g., a portfolio analysis service) to generate a detailed analysis of the current state of the client.

Regarding generating a recommendation for the plan based on the entity type and the entity financial information:
Srivastava para. 92 teaches, as further shown in FIG. 5, process 500 may include processing the initiatives, the benefits of the initiatives, the priorities of the initiatives, and the costs of the initiatives, with the second machine learning model, to generate an initiative plan for solving a problem associated with the problem statement (block 560). For example, the device may process the initiatives, the benefits of the initiatives, the priorities of the initiatives, and the costs of the initiatives, with the second machine learning model, to generate an initiative plan for solving a problem associated with the problem statement, as described above. The initiative plan may include a roll out plan for the initiative plan, an implementation strategy for the initiative plan, and/or a cost-benefit analysis for the initiative plan.

Both Ali and Srivastava are directed to goal display interfaces. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali to include the user interface is a first user interface; and the computer-implemented method further comprises: receiving, via a second user interface, entity characteristics of an entity associated with the plan, the entity characteristics including an entity type and entity financial information; and generating a recommendation for the plan based on the entity type and the entity financial information, as taught by Srivastava, to improve accuracy of initiative plans (see para. 67).

As per Claim 6, Ali does not teach the computer-implemented method of claim 5, wherein: generating the recommendation comprises generating the recommendation using a machine learning model. However, Srivastava para.
12 teaches the planning system may process the initiatives, the benefits of the initiatives, the priorities of the initiatives, and the costs of the initiatives, with the second machine learning model, to generate an initiative plan for solving a problem associated with the problem statement.

Regarding the limitation that the computer-implemented method further comprises: receiving entity data for multiple entities, the entity data including, for each entity, the entity type, the entity financial information, and an entity plan; and determining a financial metric or a trackable metric for each entity of the multiple entities: Srivastava para. 12 teaches some implementations described herein relate to a planning system that utilizes machine learning models to intelligently generate an initiative plan. For example, the planning system may receive client data identifying current operations of a client. The planning system may process the client data, with a first machine learning model, to determine current state data identifying a current state of the client. The planning system may process the current state data and prior client data, with a second machine learning model, to determine a problem statement for the client and future state data identifying a future state of the client. The prior client data may identify challenges, capabilities, processes, and/or key performance indicators (KPIs) associated with other clients. The planning system may utilize the second machine learning model to identify initiatives for the client, and costs of the initiatives, based on the problem statement, the current state data, and the future state data. The planning system may utilize the second machine learning model to assign benefits and priorities to the initiatives based on the costs of the initiatives, the problem statement, the current state data, and the future state data.
The planning system may process the initiatives, the benefits of the initiatives, the priorities of the initiatives, and the costs of the initiatives, with the second machine learning model, to generate an initiative plan for solving a problem associated with the problem statement.

Regarding generating, using the entity data and the financial metric or the trackable metric for each entity of the multiple entities, a training dataset: Srivastava para. 13 teaches in this way, the planning system utilizes machine learning models to intelligently generate an initiative plan. In its entirety, the initiative plan may serve as a roadmap (e.g., a plan) that provides direction to the client. The planning system may provide initiative plans much quicker than current techniques, and may provide benchmarks associated with a peer set, industry best practices, KPIs, value drivers, and/or the like for each initiative plan across a business, business units, a geography, a country, and/or the like. The planning system may include machine learning models that process client data from public websites, investors, annual reports, and/or the like, that utilize latest content from similar clients when determining the initiative plan, and/or the like. This, in turn, conserves computing resources, networking resources, human resources, and/or the like that would otherwise have been wasted in reduced work productivity, lost opportunities for the business, generating incorrect business plans, making poor decisions based on the incorrect business plans, and/or the like. Para. 54 teaches FIG. 2 is a diagram illustrating an example 200 of training and using a machine learning model (e.g., the first machine learning model and/or the second machine learning model) in connection with intelligently generating initiative plans. The machine learning model training and usage described herein may be performed using a machine learning system.
The machine learning system may include or may be included in a computing device, a server, a cloud computing environment, and/or the like, such as the planning system described in more detail elsewhere herein. Para. 27 teaches, in some implementations, the first machine learning model may include a predictive model, and the planning system may utilize the predictive model to determine the state data based on the analyzed client data (e.g., the vectors, the values, the cosine similarity matrix, the clusters, and/or the like). The state data may indicate a current state of the client. For example, the state data may include data identifying current financials of the client, systems diagnostic information associated with the client, process diagnostic information associated with the client, data diagnostic information associated with the client, KPIs associated with the client, information comparing the current state (e.g., KPIs, current financial data, market capitalization data, and/or the like) of the client to current states of other entities similar to the client (e.g., associated with the same business industry, the same market, the same geographic location, the same quantity of employees, the same gross revenue, and/or the like), and/or another type of information indicating a current state of the client. Para. 55 teaches, as shown by reference number 205, a machine learning model may be trained using a set of observations. The set of observations may be obtained from historical data, such as data gathered during one or more processes described herein. In some implementations, the machine learning system may receive the set of observations (e.g., as input) from the planning system, as described elsewhere herein.

Regarding training, using the training dataset, the machine learning model to generate recommendations for entities based on entity types and entity financial information: Para.
13 teaches in this way, the planning system utilizes machine learning models to intelligently generate an initiative plan. In its entirety, the initiative plan may serve as a roadmap (e.g., a plan) that provides direction to the client. The planning system may provide initiative plans much quicker than current techniques, and may provide benchmarks associated with a peer set, industry best practices, KPIs, value drivers, and/or the like for each initiative plan across a business, business units, a geography, a country, and/or the like. The planning system may include machine learning models that process client data from public websites, investors, annual reports, and/or the like, that utilize latest content from similar clients when determining the initiative plan, and/or the like. This, in turn, conserves computing resources, networking resources, human resources, and/or the like that would otherwise have been wasted in reduced work productivity, lost opportunities for the business, generating incorrect business plans, making poor decisions based on the incorrect business plans, and/or the like. Para. 54 teaches FIG. 2 is a diagram illustrating an example 200 of training and using a machine learning model (e.g., the first machine learning model and/or the second machine learning model) in connection with intelligently generating initiative plans. The machine learning model training and usage described herein may be performed using a machine learning system. The machine learning system may include or may be included in a computing device, a server, a cloud computing environment, and/or the like, such as the planning system described in more detail elsewhere herein.

Both Ali and Srivastava are directed to goal display interfaces.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali to include wherein: generating the recommendation comprises generating the recommendation using a machine learning model; and the computer-implemented method further comprises: receiving entity data for multiple entities, the entity data including, for each entity, the entity type, the entity financial information, and an entity plan; determining a financial metric or a trackable metric for each entity of the multiple entities; generating, using the entity data and the financial metric or the trackable metric for each entity of the multiple entities, a training dataset; and training, using the training dataset, the machine learning model to generate recommendations for entities based on entity types and entity financial information as taught by Srivastava to improve accuracy of initiative plans (see para. 67).
As per claim 7, Ali does not teach the computer-implemented method of claim 5, wherein: the machine learning model includes a natural language processing model; and
However, Srivastava para. 56 teaches [0056]: As shown by reference number 210, the set of observations includes a feature set. The feature set may include a set of variables, and a variable may be referred to as a feature. A specific observation may include a set of variable values (or feature values) corresponding to the set of variables. In some implementations, the machine learning system may determine variables for a set of observations and/or variable values for a specific observation based on input received from the planning system.
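A feature set of the kind Srivastava para. 56 describes might, for illustration, be extracted from unstructured text with simple natural language processing. The sketch below assumes a fixed keyword vocabulary; `VOCAB` and `extract_features` are hypothetical names, and a production system would use a trained NLP model rather than a keyword count.

```python
# Illustrative sketch only: deriving a feature set (variable -> value pairs)
# from unstructured text via toy natural language processing. The vocabulary
# and function names are assumptions, not anything in Srivastava.
from collections import Counter
import re

VOCAB = {"revenue", "growth", "initiative", "risk"}

def extract_features(text: str) -> dict:
    """Tokenize, lowercase, and count occurrences of vocabulary terms."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t in VOCAB)
    return {term: counts.get(term, 0) for term in sorted(VOCAB)}

feats = extract_features("Revenue growth drives each initiative; growth offsets risk.")
```

Each resulting dictionary is one observation's feature values in the sense of reference number 210.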
For example, the machine learning system may identify a feature set (e.g., one or more features and/or feature values) by extracting the feature set from structured data, by performing natural language processing to extract the feature set from unstructured data, by receiving input from an operator, and/or the like.
the computer-implemented method further comprises automatically generating a prompt to suggest a track or a node to include in the plan.
However, Srivastava para. 63 teaches [0063]: As an example, the trained machine learning model 225 may predict a value of initiatives X for the target variable of the initiatives for the new observation, as shown by reference number 235. Based on this prediction, the machine learning system may provide a first recommendation, may provide output for determination of a first recommendation, may perform a first automated action, may cause a first automated action to be performed (e.g., by instructing another device to perform the automated action), and/or the like.
Both Ali and Srivastava are directed to goal display interfaces. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali to include wherein: the machine learning model includes a natural language processing model; and the computer-implemented method further comprises automatically generating a prompt to suggest a track or a node to include in the plan as taught by Srivastava to improve accuracy of initiative plans (see para. 67).
Claim(s) 8 is/are rejected under 35 U.S.C.
103 as being unpatentable over Ali US 20150347987 A1 in view of Matsuoka US 2022/0405687 A1 as applied to claim 1 and in further view of Sandhu US 20070192130 A1.
As per claim 8, Ali does not teach the computer-implemented method of claim 1, further comprising associating the node with the resource prior to displaying the resource, wherein: the resource is represented in the user interface by an icon; and the icon is associated with a uniform resource locator.
However, Sandhu para. 41 teaches after searching the appropriate database or databases, in step 306, the website may display the results 208 of the user's 202 search. The results 208 can include matching service providers 206 who are ranked according to their respective average hourly value ratings. Additionally, as described above, the service providers 206 may be ranked or displayed in any other of a variety of manners. Additionally, the ranking and displaying of the service providers 206 may include hyperlinks associated with each of the service providers 206. After viewing the list, the user 202, in step 308, may obtain more information about one or more service providers 206 by clicking hyperlinks associated with the names of the service providers 206. The hyperlinks may lead to outside websites owned and maintained by the service providers 206 or may lead to profile pages that are part of the interface and which may contain a variety of information. The profile pages may contain, for example, information about past jobs performed, biographical data and skill-related information. Other data related to past jobs may also be available. This data can include average hourly value ratings, average actual hourly fees, total compensation earned, total hours worked and/or billed and any other information about one or more particular jobs. Depending on the application, all of the information that may be housed on the profile pages may or may not be available to all users 202 of the website at all times.
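The claim 8 limitation mapped to Sandhu — a resource associated with a node before display, represented in the user interface by an icon that carries a uniform resource locator (cf. Sandhu's hyperlinked service-provider listings) — might be modeled as below. This is an illustrative data-model sketch only; `Resource`, `Node`, the icon path, and the example URL are all hypothetical.

```python
# Illustrative sketch only: a node-resource association where the resource
# is shown as an icon bound to a URL. Names and values are assumptions.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    icon: str   # e.g., an image path rendered in the UI
    url: str    # the uniform resource locator behind the icon

@dataclass
class Node:
    title: str
    resources: list = field(default_factory=list)

    def associate(self, resource: Resource):
        """Associate the resource with the node prior to displaying it."""
        self.resources.append(resource)

    def render(self) -> str:
        """Render the node with one linked icon per associated resource."""
        icons = " ".join(f"[{r.icon} -> {r.url}]" for r in self.resources)
        return f"{self.title}: {icons}"

node = Node("Hire integration partner")
node.associate(Resource("Provider profile", "icons/link.svg", "https://example.com/profile"))
```

Activating the rendered icon would follow the URL, much as clicking a Sandhu hyperlink leads to a provider's profile page.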
In other exemplary embodiments, the information available to users 202 of the website may be limited to a predetermined amount and scope of data.
Both Ali and Sandhu are directed to goal display interfaces. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali to include associating the node with the resource prior to displaying the resource, wherein: the resource is represented in the user interface by an icon; and the icon is associated with a uniform resource locator as taught by Sandhu to provide an easy and quick way for a user to connect with a resource.
Claim(s) 9, 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ali US 20150347987 A1 in view of Matsuoka US 2022/0405687 A1 in view of Sandhu US 20070192130 A1 as applied to claim 8 and in further view of Dyer US 2019/0057335 A1.
As per claim 9, Ali does not teach the computer-implemented method of claim 8, wherein the resource includes a different entity, the method further comprising: identifying the different entity based on determining that the different entity is capable of performing the desired action or that the different entity is associated with the area of interest; and recommending the identified different entity for inclusion in the plan.
However, Dyer para. 71 teaches The resource configuration layer 118 accesses the project plan database 230 to identify project tasks that have been completed and project tasks that remain to be completed. The resource configuration 118 is prompted by an update to a project profile to re-evaluate the assigned resources to a project task, and generate a recommended modification to the assigned resources. The recommended modifications are presented, for example, through the digital project scheduler 500 as a “recommended resources” message included in the fourth task field 522.
The recommended resources message is interpreted as a message that although resources 1) claims database, and 2) Tester A, are scheduled and assigned to implement Task 4, the resource configuration layer 118 has determined that actually the 1) claims database, and 2) Tester B are the better project team for Task 4. This recommendation for changing the combination of resources to implement Task 4 is a calculation made by the resource configuration layer 118 based on available information gathered from one or more of the project plan database 230, the machine learning engine 103, and the learned project plan database 107.
Both Ali in view of Sandhu and Dyer are directed to goal display interfaces. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali in view of Sandhu to include wherein the resource includes a different entity, the method further comprising: identifying the different entity based on determining that the different entity is capable of performing the desired action or that the different entity is associated with the area of interest; and recommending the identified different entity for inclusion in the plan as taught by Dyer to facilitate more efficient, accurate, consistent, and precise execution of complex projects using disparate geographically distributed resources (see para. 32).
As per claim 10, Ali does not teach the computer-implemented method of claim 9, wherein the recommending of the identified different entity is based on an estimate of a performance of the entity.
However, Dyer para. 71 teaches The resource configuration layer 118 accesses the project plan database 230 to identify project tasks that have been completed and project tasks that remain to be completed.
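The Dyer para. 71 behavior relied on for claims 9-10 — re-evaluating assigned resources after a project-profile update and recommending the entity with the better estimated performance (Tester B over Tester A) — reduces to a simple selection over performance estimates. The sketch below is illustrative only; the scores and the `recommend_resource` helper are hypothetical, not Dyer's calculation.

```python
# Illustrative sketch only: recommending the different entity with the
# highest estimated performance for a remaining task. Toy numbers.
def recommend_resource(candidates: dict) -> str:
    """Return the candidate with the highest estimated performance score."""
    return max(candidates, key=candidates.get)

# Hypothetical per-candidate performance estimates for the remaining task.
estimates = {"Tester A": 0.71, "Tester B": 0.88}
best = recommend_resource(estimates)
```

In Dyer, the analogous estimate is computed by the resource configuration layer 118 from the project plan database, the machine learning engine, and the learned project plan database.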
The resource configuration 118 is prompted by an update to a project profile to re-evaluate the assigned resources to a project task, and generate a recommended modification to the assigned resources. The recommended modifications are presented, for example, through the digital project scheduler 500 as a “recommended resources” message included in the fourth task field 522. The recommended resources message is interpreted as a message that although resources 1) claims database, and 2) Tester A, are scheduled and assigned to implement Task 4, the resource configuration layer 118 has determined that actually the 1) claims database, and 2) Tester B are the better project team for Task 4. This recommendation for changing the combination of resources to implement Task 4 is a calculation made by the resource configuration layer 118 based on available information gathered from one or more of the project plan database 230, the machine learning engine 103, and the learned project plan database 107. Both Ali in view of Sandhu and Dyer are directed to goal display interfaces. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali in view of Sandhu to include wherein the recommending of the identified different entity is based on an estimate of a performance of the entity as taught by Dyer to facilitate more efficient, accurate, consistent, and precise execution of complex projects using disparate geographically distributed resources (see para. 32). Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 12, 13, 14 is/are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Ali US 2015/0347987 A1. As per Claim 12 Ali teaches A system, comprising: a processing element; and a memory component storing instructions, that when executed by the processing element, cause operations to be performed, the operations comprising: Ali para. 259-260 receiving, via a first user interface, an indication to add a goal to a plan; Ali para. 196 teaches the user may select add a sub-goal 1010 to create a new sub-goal associated with goal 1006, edit 1012 to edit goal 1006, or delete 1014 to delete goal 1006. The user may create one or more user selected sub-goals for each goal. The set of sub-goals may include a single sub-goal, two or more sub-goals, as well as no sub-goals. However, in this embodiment, the goals component of the daily digital planner only permits the user to add three (3) sub-goals to each goal. receiving, via the first user interface, data describing the plan; Ali para. 195 teaches in another embodiment, the user may select the goal from a template list of pre-defined goals. A goal may be any goal selected or created by the user. For example, a goal may be, without limitation, to lose a specific amount of weight, complete a project, buy a home, or any other user created goal. 
In this example, the user has created goal 1006 and goal 1008. displaying, in a second user interface, a track that represents the goal; Ali para. 193-196 teaches turning now to FIG. 10, a pictorial illustration of a goals management pane associated with a goals modular component of a daily digital planner is shown in accordance with one embodiment. Goals pane 1000 is a panel presenting a set of user selected goals, goal related tasks associated with each user selected goal, and a goal progress report. A goals modular component, such as goals 216 in FIG. 2, generates goals pane 1000 in response to the user selecting goals tab 1002. A user selects “add new goal” 1004 to open a create goals field. The user may create the goal by entering a textual title or description of the goal. In one embodiment, goals pane 1000 may include “enter title of new goal” 1050. The user selects “enter title of new goal” 1050 to create a title and/or description for a newly created goal. In another embodiment, the user may select the goal from a template list of pre-defined goals. A goal may be any goal selected or created by the user. For example, a goal may be, without limitation, to lose a specific amount of weight, complete a project, buy a home, or any other user created goal. In this example, the user has created goal 1006 and goal 1008. receiving, via the second user interface, an indication to add a node to the track, the node representing a desired action or an area of interest associated with the track; Ali para. 197 teaches in this non-limiting example, goal 1006 includes three sub-goals, namely sub-goal 1016, sub-goal 1018, and sub-goal 1020. Goal 1008 includes only a single sub-goal. The user may select add a task 1022 to create a new task associated with sub-goal 1016. The user may also select to edit 1024 sub-goal 1016 or delete 1026 sub-goal 1016. receiving, via a third user interface, data describing the node; Ali para. 
197 teaches in this non-limiting example, goal 1006 includes three sub-goals, namely sub-goal 1016, sub-goal 1018, and sub-goal 1020. Goal 1008 includes only a single sub-goal. The user may select add a task 1022 to create a new task associated with sub-goal 1016. The user may also select to edit 1024 sub-goal 1016 or delete 1026 sub-goal 1016. displaying, in the second user interface and within the track, the node; and see Ali Fig. 10 displaying, in the second user interface within the node, one or more icons that represent at least one of: a status of the node; See Ali para. 204 that teaches the goal component may optionally display a progress indicator, such as indicator 1038, associated with a goal or sub-goal. The progress indicator 1038 indicates how much progress or how close the user is toward accomplishing the goal or sub-goal. a resource is associated with the node; or a category is associated with the node. As per Claim 13 Ali teaches the system of claim 12, wherein the data describing the plan comprises at least one of: a plan identifier; a plan type; a plan name; a summary of the plan; a tag; or a partner entity associated with the plan. Ali para. 194 teaches a user selects “add new goal” 1004 to open a create goals field. The user may create the goal by entering a textual title or description of the goal. In one embodiment, goals pane 1000 may include “enter title of new goal” 1050. The user selects “enter title of new goal” 1050 to create a title and/or description for a newly created goal. As per Claim 14 Ali teaches the system of claim 12, wherein the data describing the node comprises at least one of: the status of the node; the category associated with the node; a title; a description; or the resource associated with the node. Ali para. 198 teaches the goals module permits a user to create a set of tasks associated with each sub-goal. The set of tasks include tasks and activities intended to assist the user in achieving the goal. 
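The goal/sub-goal/task hierarchy described in Ali's Fig. 10 discussion (paras. 194-198, quoted in this section) can be sketched as a small data model. This is illustrative only, not Ali's code; the class names and `add_sub_goal` helper are hypothetical, and the three-sub-goal cap follows the quoted statement that the goals component "only permits the user to add three (3) sub-goals to each goal."

```python
# Illustrative sketch only: Ali-style goal -> sub-goal -> task hierarchy
# with the stated limit of three sub-goals per goal. Names are assumptions.
from dataclasses import dataclass, field

MAX_SUB_GOALS = 3  # per the quoted embodiment in Ali

@dataclass
class SubGoal:
    title: str
    tasks: list = field(default_factory=list)  # tasks assisting the goal

@dataclass
class Goal:
    title: str
    sub_goals: list = field(default_factory=list)

    def add_sub_goal(self, sub_goal: SubGoal) -> bool:
        """Add a sub-goal unless the three-sub-goal limit is reached."""
        if len(self.sub_goals) >= MAX_SUB_GOALS:
            return False
        self.sub_goals.append(sub_goal)
        return True

goal = Goal("Complete a project")
results = [goal.add_sub_goal(SubGoal(f"Sub-goal {i}")) for i in range(4)]
```

The fourth addition is refused, mirroring the embodiment's cap; a set of tasks (possibly empty, single, or multiple) hangs off each sub-goal.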
The set of tasks may include a single task, as well as two or more tasks. In this example, sub-goal 1016 includes tasks 1028-1030. Sub-goal 1018 includes a single task 1034. Likewise, the user has assigned a single task to sub-goal 1020. The user may select add a task 1036 to create a new task for the sub-goal associated with goal 1008 or schedule a task to occur on a particular date and/or time in the calendar module. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 15, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ali US 20150347987 A1 in view of Phillips US 2014/0358828 A1. As per Claim 15 Ali does not teach the system of claim 12, wherein the memory component stores further instructions for generating a recommendation for the plan based on an existing plan. Phillips para. 
72 teaches for example, in the depicted embodiment, the supported attributes or values for the features 128, 130, 132, 134, 136, 138 vary based on a selected goal, such as a zip code (e.g., a customer zip code, a store zip code), an age group (e.g., a customer age group, a target age group), a dollar amount (e.g., a purchase amount, a profit amount, a goal amount, an action cost, a budget), an integer value (e.g., a target group size, a number of sales, a number of actions), or the like. In certain embodiments, a goal may comprise business goals or desired outcomes, such as goals to double sales, to increase repeat purchases, or the like. A goal, as used herein, may include a desired, intended, or selected outcome or result of one or more actions or, action plans, or other events. A goal may include a business goal (e.g., a sales goal, a marketing goal, a corporate goal, an IT goal, a customer service goal, or the like) a personal goal, a medical goal, a fitness goal, a political goal, an organization goal, a team goal, an economic goal, a short-term goal, a long-term goal, a custom goal, a predefined goal, or another type of goal, based on a context in which the action plan module 102 operate. Para. 106 teaches in one embodiment, the recommended action module 204 is configured to select a suggested or optimal action plan for the action plan interface module 206 to initially display to a user prior to the action plan interface module 206 receiving user input. The recommended action module 204 may then update the displayed action plan dynamically as the action plan interface module 206 receives user input adjusting a machine learning parameter for the action plan or the like. Para. 
116 teaches in one embodiment, the pre-compute module 208 is configured to store machine learning results in a results data structure for the machine learning module 202 and/or the recommended action module 204, allowing them to determine action plans without determining new machine learning results, but instead using cached, pre-computed machine learning results. The pre-compute module 208 may store machine learning results in a results data structure indexed or accessible by feature (e.g., actionable, non-actionable), by different instances or data values of data, or by other machine learning parameters. As described below with regard to the update module 212, the update module 212 may update or adjust an action plan in response to a user or other client 104 providing user input adjusting a machine learning parameter, such as a value for a feature (e.g., actionable, non-actionable), a recommended action, a target for a recommended action, a count for a recommended action, an action time for a recommended action, a cost for a recommended action, a predicted outcome for a recommended action, or another machine learning parameter associated with an action or action plan, using either pre-computed machine learning results from the pre-compute module 208 and/or dynamically determined machine learning results from the machine learning module 202.
Both Ali and Phillips are directed to goal display interfaces. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali to include wherein the memory component stores further instructions for generating a recommendation for the plan based on an existing plan as taught by Phillips to produce more accurate, realistic, correct results (see para. 99).
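The pre-compute behavior quoted from Phillips para. 116 — storing machine learning results in a results data structure indexed by machine learning parameters, so later adjustments are answered from cache rather than recomputed — can be sketched as a memoizing store. The sketch is illustrative only; `PreComputeStore` and the toy `score_plan` stand in for Phillips's pre-compute module 208 and machine learning module 202.

```python
# Illustrative sketch only: a results cache keyed by machine learning
# parameters, serving repeated parameter adjustments without recomputation.
class PreComputeStore:
    """Cache of results indexed by a tuple of machine learning parameters."""

    def __init__(self, compute):
        self._compute = compute
        self._results = {}
        self.misses = 0

    def get(self, **params):
        key = tuple(sorted(params.items()))
        if key not in self._results:
            self.misses += 1                      # dynamic computation path
            self._results[key] = self._compute(**params)
        return self._results[key]                 # cached thereafter

def score_plan(budget, actions):
    """Toy stand-in for a machine learning result."""
    return budget / actions

store = PreComputeStore(score_plan)
first = store.get(budget=1000, actions=4)   # computed dynamically
second = store.get(budget=1000, actions=4)  # served from the cache
```

Only the first lookup computes; the repeat is a cache hit, matching the "cached, pre-computed machine learning results" behavior the rejection relies on.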
As per Claim 16 Ali does not teach the system of claim 15, wherein the further instructions for generating the recommendation for the plan comprises a machine learning model. Phillips para. 72 teaches for example, in the depicted embodiment, the supported attributes or values for the features 128, 130, 132, 134, 136, 138 vary based on a selected goal, such as a zip code (e.g., a customer zip code, a store zip code), an age group (e.g., a customer age group, a target age group), a dollar amount (e.g., a purchase amount, a profit amount, a goal amount, an action cost, a budget), an integer value (e.g., a target group size, a number of sales, a number of actions), or the like. In certain embodiments, a goal may comprise business goals or desired outcomes, such as goals to double sales, to increase repeat purchases, or the like. A goal, as used herein, may include a desired, intended, or selected outcome or result of one or more actions or, action plans, or other events. A goal may include a business goal (e.g., a sales goal, a marketing goal, a corporate goal, an IT goal, a customer service goal, or the like) a personal goal, a medical goal, a fitness goal, a political goal, an organization goal, a team goal, an economic goal, a short-term goal, a long-term goal, a custom goal, a predefined goal, or another type of goal, based on a context in which the action plan module 102 operate. Para. 106 teaches in one embodiment, the recommended action module 204 is configured to select a suggested or optimal action plan for the action plan interface module 206 to initially display to a user prior to the action plan interface module 206 receiving user input. The recommended action module 204 may then update the displayed action plan dynamically as the action plan interface module 206 receives user input adjusting a machine learning parameter for the action plan or the like. Para. 
116 teaches in one embodiment, the pre-compute module 208 is configured to store machine learning results in a results data structure for the machine learning module 202 and/or the recommended action module 204, allowing them to determine action plans without determining new machine learning results, but instead using cached, pre-computed machine learning results. The pre-compute module 208 may store machine learning results in a results data structure indexed or accessible by feature (e.g., actionable, non-actionable), by different instances or data values of data, or by other machine learning parameters. As described below with regard to the update module 212, the update module 212 may update or adjust an action plan in response to a user or other client 104 providing user input adjusting a machine learning parameter, such as a value for a feature (e.g., actionable, non-actionable), a recommended action, a target for a recommended action, a count for a recommended action, an action time for a recommended action, a cost for a recommended action, a predicted outcome for a recommended action, or another machine learning parameter associated with an action or action plan, using either pre-computed machine learning results from the pre-compute module 208 and/or dynamically determined machine learning results from the machine learning module 202.
Both Ali and Phillips are directed to goal display interfaces. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali to include wherein the further instructions for generating the recommendation for the plan comprises a machine learning model as taught by Phillips to produce more accurate, realistic, correct results (see para. 99).
As per Claim 17 Ali teaches A method, comprising: receiving an indication to add a goal to a plan; Ali para.
194 teaches a user selects “add new goal” 1004 to open a create goals field. The user may create the goal by entering a textual title or description of the goal. In one embodiment, goals pane 1000 may include “enter title of new goal” 1050. The user selects “enter title of new goal” 1050 to create a title and/or description for a newly created goal. receiving data describing the goal; Ali para. 194 teaches a user selects “add new goal” 1004 to open a create goals field. The user may create the goal by entering a textual title or description of the goal. In one embodiment, goals pane 1000 may include “enter title of new goal” 1050. The user selects “enter title of new goal” 1050 to create a title and/or description for a newly created goal. associating the data describing the goal with a track; Ali para. 194 teaches a user selects “add new goal” 1004 to open a create goals field. The user may create the goal by entering a textual title or description of the goal. In one embodiment, goals pane 1000 may include “enter title of new goal” 1050. The user selects “enter title of new goal” 1050 to create a title and/or description for a newly created goal. receiving an indication to add a node within the track, the node representing a desired action or an area of interest associated with the goal; Ali para. 197 teaches In this non-limiting example, goal 1006 includes three sub-goals, namely sub-goal 1016, sub-goal 1018, and sub-goal 1020. Goal 1008 includes only a single sub-goal. The user may select add a task 1022 to create a new task associated with sub-goal 1016. The user may also select to edit 1024 sub-goal 1016 or delete 1026 sub-goal 1016. receiving data describing the node; Ali para. 197 teaches In this non-limiting example, goal 1006 includes three sub-goals, namely sub-goal 1016, sub-goal 1018, and sub-goal 1020. Goal 1008 includes only a single sub-goal. The user may select add a task 1022 to create a new task associated with sub-goal 1016. 
The user may also select to edit 1024 sub-goal 1016 or delete 1026 sub-goal 1016.
Ali does not teach generating the plan based on the goal, the data describing the plan, and the data describing the node; and outputting the plan.
However, Phillips para. 72 teaches for example, in the depicted embodiment, the supported attributes or values for the features 128, 130, 132, 134, 136, 138 vary based on a selected goal, such as a zip code (e.g., a customer zip code, a store zip code), an age group (e.g., a customer age group, a target age group), a dollar amount (e.g., a purchase amount, a profit amount, a goal amount, an action cost, a budget), an integer value (e.g., a target group size, a number of sales, a number of actions), or the like. In certain embodiments, a goal may comprise business goals or desired outcomes, such as goals to double sales, to increase repeat purchases, or the like. A goal, as used herein, may include a desired, intended, or selected outcome or result of one or more actions or, action plans, or other events. A goal may include a business goal (e.g., a sales goal, a marketing goal, a corporate goal, an IT goal, a customer service goal, or the like) a personal goal, a medical goal, a fitness goal, a political goal, an organization goal, a team goal, an economic goal, a short-term goal, a long-term goal, a custom goal, a predefined goal, or another type of goal, based on a context in which the action plan module 102 operate. Para. 106 teaches in one embodiment, the recommended action module 204 is configured to select a suggested or optimal action plan for the action plan interface module 206 to initially display to a user prior to the action plan interface module 206 receiving user input. The recommended action module 204 may then update the displayed action plan dynamically as the action plan interface module 206 receives user input adjusting a machine learning parameter for the action plan or the like. Para.
116 teaches in one embodiment, the pre-compute module 208 is configured to store machine learning results in a results data structure for the machine learning module 202 and/or the recommended action module 204, allowing them to determine action plans without determining new machine learning results, but instead using cached, pre-computed machine learning results. The pre-compute module 208 may store machine learning results in a results data structure indexed or accessible by feature (e.g., actionable, non-actionable), by different instances or data values of data, or by other machine learning parameters. As described below with regard to the update module 212, the update module 212 may update or adjust an action plan in response to a user or other client 104 providing user input adjusting a machine learning parameter, such as a value for a feature (e.g., actionable, non-actionable), a recommended action, a target for a recommended action, a count for a recommended action, an action time for a recommended action, a cost for a recommended action, a predicted outcome for a recommended action, or another machine learning parameter associated with an action or action plan, using either pre-computed machine learning results from the pre-compute module 208 and/or dynamically determined machine learning results from the machine learning module 202.
Both Ali and Phillips are directed to goal display interfaces. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant’s invention to modify the teachings of Ali to include generating the plan based on the goal, the data describing the plan, and the data describing the node; and outputting the plan as taught by Phillips to produce more accurate, realistic, correct results (see para. 99).
As per Claim 18 Ali teaches the method of claim 17, wherein: the data describing the plan comprises at least one of: a plan identifier; a plan type; a plan name; a summary of the plan; a tag; or a partner entity associated with the plan; and Ali para. 195 teaches In another embodiment, the user may select the goal from a template list of pre-defined goals. A goal may be any goal selected or created by the user. For example, a goal may be, without limitation, to lose a specific amount of weight, complete a project, buy a home, or any other user created goal. In this example, the user has created goal 1006 and goal 1008. The user may select add a sub-goal 1010 to create a new sub-goal associated with goal 1006, edit 1012 to edit goal 1006, or delete 1014 to delete goal 1006. The user may create one or more user selected sub-goals for each goal. The set of sub-goals may include a single sub-goal, two or more sub-goals, as well as no sub-goals. However, in this embodiment, the goals component of the daily digital planner only permits the user to add three (3) sub-goals to each goal. the data describing the node comprises at least one of: the status of the node; the category associated with the node; a title; a description; or the resource associated with the node. Ali para. 198 teaches the goals module permits a user to create a set of tasks associated with each sub-goal. The set of tasks include tasks and activities intended to assist the user in achieving the goal. The set of tasks may include a single task, as well as two or more tasks. In this example, sub-goal 1016 includes tasks 1028-1030. Sub-goal 1018 includes a single task 1034. Likewise, the user has assigned a single task to sub-goal 1020. The user may select add a task 1036 to create a new task for the sub-goal associated with goal 1008 or schedule a task to occur on a particular date and/or time in the calendar module. 
As per Claim 19, Ali teaches the method of claim 17, wherein outputting the plan comprises displaying, in a user interface, the plan, the displaying comprising: displaying, in the user interface, the track; and displaying, in the user interface, the node within the track. See Ali Fig. 10.

As per Claim 20, Ali does not teach the method of claim 17, further comprising generating a recommended plan for the plan prior to associating the data describing the plan with a track, the recommended plan generated by a machine learning model. However, Phillips para. 72 teaches, for example, in the depicted embodiment, the supported attributes or values for the features 128, 130, 132, 134, 136, 138 vary based on a selected goal, such as a zip code (e.g., a customer zip code, a store zip code), an age group (e.g., a customer age group, a target age group), a dollar amount (e.g., a purchase amount, a profit amount, a goal amount, an action cost, a budget), an integer value (e.g., a target group size, a number of sales, a number of actions), or the like. In certain embodiments, a goal may comprise business goals or desired outcomes, such as goals to double sales, to increase repeat purchases, or the like. A goal, as used herein, may include a desired, intended, or selected outcome or result of one or more actions, action plans, or other events. A goal may include a business goal (e.g., a sales goal, a marketing goal, a corporate goal, an IT goal, a customer service goal, or the like), a personal goal, a medical goal, a fitness goal, a political goal, an organization goal, a team goal, an economic goal, a short-term goal, a long-term goal, a custom goal, a predefined goal, or another type of goal, based on a context in which the action plan module 102 operates.

Phillips para. 106 teaches in one embodiment, the recommended action module 204 is configured to select a suggested or optimal action plan for the action plan interface module 206 to initially display to a user prior to the action plan interface module 206 receiving user input. The recommended action module 204 may then update the displayed action plan dynamically as the action plan interface module 206 receives user input adjusting a machine learning parameter for the action plan or the like.

Phillips para. 116 teaches in one embodiment, the pre-compute module 208 is configured to store machine learning results in a results data structure for the machine learning module 202 and/or the recommended action module 204, allowing them to determine action plans without determining new machine learning results, but instead using cached, pre-computed machine learning results. The pre-compute module 208 may store machine learning results in a results data structure indexed or accessible by feature (e.g., actionable, non-actionable), by different instances or data values of data, or by other machine learning parameters. As described below with regard to the update module 212, the update module 212 may update or adjust an action plan in response to a user or other client 104 providing user input adjusting a machine learning parameter, such as a value for a feature (e.g., actionable, non-actionable), a recommended action, a target for a recommended action, a count for a recommended action, an action time for a recommended action, a cost for a recommended action, a predicted outcome for a recommended action, or another machine learning parameter associated with an action or action plan, using either pre-computed machine learning results from the pre-compute module 208 or dynamically determined machine learning results from the machine learning module 202.

Both Ali and Phillips are directed to goal display interfaces.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the Applicant's invention to modify the teachings of Ali to include generating a recommended plan for the plan prior to associating the data describing the plan with a track, the recommended plan generated by a machine learning model, as taught by Phillips, to produce more accurate, realistic, correct results (see para. 99).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEIRDRE D HATCHER, whose telephone number is (571) 270-5321. The examiner can normally be reached Monday-Friday, 8-4:30.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian Epstein, can be reached at 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DEIRDRE D HATCHER/
Primary Examiner, Art Unit 3625

Prosecution Timeline

Mar 07, 2024
Application Filed
Feb 07, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591902
METHOD FOR PREDICTING BUSINESS PERFORMANCE USING MACHINE LEARNING AND APPARATUS USING THE SAME
2y 5m to grant Granted Mar 31, 2026
Patent 12572867
DIGITAL PROCESSING SYSTEMS AND METHODS FOR MANAGING WORKFLOWS
2y 5m to grant Granted Mar 10, 2026
Patent 12536488
DETERMINING MACHINE LEARNING MODEL ANOMALIES AND IMPACT ON BUSINESS OUTPUT DATA
2y 5m to grant Granted Jan 27, 2026
Patent 12530703
DELIVERY OF DATA-DRIVEN & CROSS-PLATFORM EXPERIENCES BASED ON BEHAVIORAL COHORTS & IDENTITY RESOLUTION
2y 5m to grant Granted Jan 20, 2026
Patent 12462210
Performance Measuring System Measuring Sustainable Development Relevant Properties Of An Object, and Method Thereof
2y 5m to grant Granted Nov 04, 2025
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
28%
Grant Probability
53%
With Interview (+25.9%)
3y 10m
Median Time to Grant
Low
PTA Risk
Based on 357 resolved cases by this examiner. Grant probability derived from career allow rate.
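
The projection figures above appear to follow simple arithmetic: the career allow rate is granted cases over resolved cases, and the interview figure adds the observed lift in percentage points (28% + 25.9 points ≈ 53%, consistent with the value shown). A minimal sketch of that additive model follows; the dashboard's actual methodology is not disclosed, and all names here are illustrative assumptions:

```python
# Illustrative sketch only — not the product's actual model.

def career_allow_rate(granted: int, resolved: int) -> float:
    """Share of this examiner's resolved cases that ended in a grant."""
    return granted / resolved

def probability_with_interview(base_rate: float, lift_points: float) -> float:
    """Apply the observed interview lift, given in percentage points, capped at 100%."""
    return min(base_rate + lift_points / 100, 1.0)

base = career_allow_rate(98, 357)                    # ≈ 0.274
boosted = probability_with_interview(0.28, 25.9)     # 0.28 + 0.259 = 0.539
```

Note that 98/357 ≈ 27.4% while the headline figure is 28%, so the dashboard likely applies its own rounding or weighting on top of the raw career rate.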
