Prosecution Insights
Last updated: April 19, 2026
Application No. 18/601,750

APPARATUS AND A METHOD FOR THE IDENTIFICATION OF DYNAMIC SUB-TARGETS

Final Rejection (§101, §103, §112)

Filed: Mar 11, 2024
Examiner: PUJOLS-CRUZ, MARJORIE
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: The Strategic Coach Inc.
OA Round: 8 (Final)

Grant Probability: 18% (At Risk)
OA Rounds: 9-10
To Grant: 3y 2m
With Interview: 46%

Examiner Intelligence

Career Allow Rate: 18% (25 granted / 136 resolved; -33.6% vs TC avg)
Interview Lift: +27.9% (resolved cases with interview)
Avg Prosecution: 3y 2m (typical timeline); 50 currently pending
Total Applications: 186 across all art units
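The allow-rate and interview-lift figures above are simple ratios. A quick sketch shows how they relate; the 46% with-interview rate comes from the summary card, and its exact pre-rounding value is an assumption:

```python
# Career allow rate: granted / resolved cases, per the examiner's career data.
granted, resolved = 25, 136
allow_rate = granted / resolved              # 0.1838... -> shown as 18%

# Interview lift: allow rate among resolved cases with an interview (46%,
# assumed here to be ~0.463 before rounding) minus the career baseline.
rate_with_interview = 0.463
lift = rate_with_interview - allow_rate      # 0.2791... -> shown as +27.9%

print(f"allow rate {allow_rate:.1%}, interview lift {lift:+.1%}")
```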

Statute-Specific Performance

§101: 38.7% (-1.3% vs TC avg)
§103: 43.3% (+3.3% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)

Tech Center averages are estimates; based on career data from 136 resolved cases.
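If each delta above is the examiner's rate minus the Tech Center average, the implied averages can be recovered with a one-line check (the labels and the delta convention are read off the table, not stated by the source):

```python
# Per-statute examiner rates and deltas vs. the Tech Center average,
# copied from the table above (values in percent).
stats = {
    "101": (38.7, -1.3),
    "103": (43.3, +3.3),
    "102": (9.4, -30.6),
    "112": (6.6, -33.4),
}

# If delta = examiner_rate - tc_avg, the implied TC averages are:
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(tc_avg)  # every statute implies the same 40.0% Tech Center average
```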

Office Action

§101 §103 §112
DETAILED ACTION

This communication is a Final Office Action rejection on the merits. Claims 1-2, 4-12, and 14-20 are currently pending and have been addressed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 12/11/2025 (related to the 103 Rejection) have been fully considered but are moot in view of new grounds of rejection. Applicant's amendments necessitated the new ground(s) of rejection presented in this Office action. A rejection based on newly cited reference(s) follows.

Applicant's arguments filed on 12/11/2025 (related to the 112 Rejection) have been fully considered and are persuasive only for claim 1. The term "measuring a relevance of the market data" has been removed from claim 1. Therefore, the 112 Rejection has been withdrawn for claim 1. However, claim 11 still maintains the 112 Rejection since the term "measuring a relevance of the market data" was not removed. See the updated 112 Rejection.

Applicant's arguments filed on 12/11/2025 (related to the 101 Rejection) have been fully considered but they are not persuasive. Applicant states, on pages 13-18, that independent claim 1 is directed to specific computer-implemented operations that materially improve data processing in addition to machine learning refinement, and is not directed to an abstract idea. Examiner respectfully disagrees with Applicant. The amended claim 1 limitations are still considered to be abstract ideas because they are directed to "certain methods of organizing human activity," which include "commercial or legal interactions." In this case, using correlations to identify a set of dynamic sub-targets (e.g., identify market strategies that increase sales and/or profit of products) is a marketing activity.
If a claim limitation, under its broadest reasonable interpretation, covers commercial or legal interactions, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Applicant further states, on pages 18-25, that these limitations of Applicant's representative claim 1, individually and as an ordered combination with other claim elements, provide a solution to a problem rooted in technology in the field of data optimization. Applicant submits that Applicant's representative claim 1 provides a technological solution to a problem by automatically processing and transforming data (e.g., pixels of images) for downstream analysis by utilizing a specific multi-stage technical scheme (e.g., optical character recognition (OCR)). Also, Applicant respectfully asserts that these limitations are not "well-understood, routine, [and] conventional activities." Id. The references of record in this matter also do not contain any such characterization, and there is no court case or printed publication supporting the conclusion that the above-described limitations are "well-understood, routine, [and] conventional." Applicant respectfully submits, therefore, that the recitation of the above limitations, both individually and as an ordered combination with other claim elements, amounts to "significantly more" than any allegedly abstract idea for at least this reason. Additionally, as discussed below, Applicant's claims amount to an "inventive concept" because they are not taught by the relevant art. Examiner respectfully disagrees with Applicant.
Step 2A, Prong One: As previously stated, the amended claim 1 limitations are still considered to be abstract ideas because they are directed to "certain methods of organizing human activity," which include "commercial or legal interactions." In this case, using correlations to identify a set of dynamic sub-targets (e.g., identify market strategies that increase sales and/or profit of products) is a marketing activity. If a claim limitation, under its broadest reasonable interpretation, covers commercial or legal interactions, then it falls within the "certain methods of organizing human activity" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two: Claim 1 includes additional elements: an application-specific integrated circuit that includes a rewritable read-only memory (ROM) and circuitry; a processor; a web-crawler; an optical character reader (OCR); a sub-target machine learning model; a status machine learning model; and a display device. The processor is merely used to perform a plurality of steps. The ROM is merely used to store a plurality of parameters (Paragraph 0075). The circuitry is merely used to instantiate mathematical operations and input and/or output of data to or from models (Paragraph 0075). The memory is merely used to store instructions to perform tasks disclosed in the disclosure (Paragraph 0010). The web-crawler is merely used to search a plurality of industry-specific websites to gather market data (Paragraph 0017). The OCR is merely used to employ pre-processing of image components. Pre-processing may include, without limitation, de-skew, de-speckle, binarization, line removal, layout analysis or "zoning," line and word detection, script recognition, character isolation or "segmentation," and normalization. In some embodiments, an OCR process will include an OCR algorithm. Exemplary OCR algorithms include matrix-matching processes and/or feature extraction processes. Matrix matching may involve comparing an image to a stored glyph on a pixel-by-pixel basis. In some cases, matrix matching may also be known as "pattern matching," "pattern recognition," and/or "image correlation" (Paragraphs 0023-0024).

The sub-target machine learning model is merely used to: generate dynamic sub-targets by identifying a plurality of static targets and product data correlated to examples of a first set of dynamic sub-targets (Paragraph 0043); be iteratively retrained using updated sub-target training data (Paragraph 0045); update the training data by removing or adding correlations of user data to a path or resources as indicated by the feedback (Paragraphs 0046-0047); and use the feedback to calculate an accuracy score (Paragraph 0048). The status machine learning model is merely used to generate a static target status tailored to the first set of dynamic sub-targets, wherein the static training data may be iteratively updated (Paragraph 0051). For example, the status machine learning model uses the updated parameters to predict a new static target (e.g., predict a new sales/profit goal using the new marketing strategies). The display device is merely used to provide graphical representations of aspects of the present disclosure (Paragraph 0113). These elements of "processor," "ROM," "circuitry," "web-crawler," "OCR," "machine learning," and "display device" are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element (MPEP 2106.05(f)). In this case, the machine learning model includes inputs (e.g., entity data inputs and static target inputs) and outputs (e.g., sub-target outputs such as marketing strategies).
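The matrix-matching process described above (comparing a binarized image to stored glyphs pixel by pixel) amounts to a nearest-template search. A minimal sketch with toy 3x3 glyph bitmaps; the glyph shapes and scoring are illustrative, not taken from the application:

```python
# Minimal matrix-matching (template-matching) OCR sketch: classify a
# binarized character image by counting pixel disagreements against a
# library of stored glyph bitmaps.
GLYPHS = {
    "I": ((0, 1, 0),
          (0, 1, 0),
          (0, 1, 0)),
    "L": ((1, 0, 0),
          (1, 0, 0),
          (1, 1, 1)),
}

def match_glyph(image):
    """Return the glyph whose bitmap differs from `image` in the fewest pixels."""
    def distance(glyph):
        return sum(p != q
                   for row_a, row_b in zip(image, GLYPHS[glyph])
                   for p, q in zip(row_a, row_b))
    return min(GLYPHS, key=distance)

noisy_l = ((1, 0, 0),
           (1, 0, 0),
           (1, 1, 0))        # one pixel flipped from "L"
print(match_glyph(noisy_l))  # -> L
```

Feature extraction, the other exemplary algorithm named in the specification, would instead compare derived descriptors (strokes, loops, line crossings) rather than raw pixels.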
Although the machine learning model receives feedback over time to improve an accuracy score (Paragraphs 0044-0048), the claim and specification do not include any specific details about how the trained machine learning model operates or how the neural network is configured to perform specific functions, which is merely claiming the idea of a solution or outcome (MPEP 2106.05(a)). Rather, "receiving data to improve accuracy of the machine learning" is just an inherent characteristic of training a machine learning model over time. See Example 47 of the July 2024 AI Subject Matter Eligibility update. Also, the web-crawler is considered "field of use" since it is just used to gather data for updating the machine learning model, but the web-crawler itself is not improved (MPEP 2106.05(h)). Lastly, the transformation using the OCR is considered an extra-solution activity of gathering data since it only contributes nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step). See MPEP 2106.05(c), particular transformation.

Step 2B: As discussed above with respect to integration of the abstract idea into a practical application, the claim describes how to generally "apply" the concept of identifying dynamic sub-targets (e.g., market strategies) and updating a static target (e.g., goal) based on updated correlations. The specification shows that the processor is merely used to perform steps (a)-(i). The ROM is merely used to store a plurality of parameters (Paragraph 0075). The circuitry is merely used to instantiate mathematical operations and input and/or output of data to or from models (Paragraph 0075). The memory is merely used to store instructions to perform tasks disclosed in the disclosure (Paragraph 0010). The web-crawler is merely used to search a plurality of industry-specific websites to gather market data (Paragraph 0017). The OCR is merely used to employ pre-processing of image components.
Pre-processing may include, without limitation, de-skew, de-speckle, binarization, line removal, layout analysis or "zoning," line and word detection, script recognition, character isolation or "segmentation," and normalization. In some embodiments, an OCR process will include an OCR algorithm. Exemplary OCR algorithms include matrix-matching processes and/or feature extraction processes. Matrix matching may involve comparing an image to a stored glyph on a pixel-by-pixel basis. In some cases, matrix matching may also be known as "pattern matching," "pattern recognition," and/or "image correlation" (Paragraphs 0023-0024). The sub-target machine learning model is merely used to: generate dynamic sub-targets by identifying a plurality of static targets and product data correlated to examples of a first set of dynamic sub-targets (Paragraph 0043); be iteratively retrained using updated sub-target training data (Paragraph 0045); update the training data by removing or adding correlations of user data to a path or resources as indicated by the feedback (Paragraphs 0046-0047); and use the feedback to calculate an accuracy score (Paragraph 0048). The status machine learning model is merely used to generate a static target status tailored to the first set of dynamic sub-targets, wherein the static training data may be iteratively updated (Paragraph 0051). For example, the status machine learning model uses the updated parameters to predict a new static target (e.g., predict a new sales/profit goal using the new marketing strategies). The display device is merely used to provide graphical representations of aspects of the present disclosure (Paragraph 0113). As discussed in Step 2A, Prong Two above, the recitation of a processor and machine learning amounts to no more than mere instructions to apply the exception using a generic computer component.
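The training-data update the claims recite (score outputs from user feedback, drop low-accuracy examples, substitute replacements, then retrain) is ordinary dataset curation. A minimal sketch, with the function name, field names, and 0-1 score scale invented for illustration:

```python
# Sketch of the recited feedback loop: score each training example from user
# feedback, replace low-accuracy outputs while keeping their correlated
# inputs, and return the curated set for retraining.
def update_training_data(examples, feedback, replacements, threshold=0.5):
    """examples: list of (inputs, output); feedback: {output: accuracy score}."""
    curated = []
    for inputs, output in examples:
        score = feedback.get(output, 1.0)   # unreviewed outputs are kept
        if score < threshold:
            # swap in the replacement output, preserving the input correlation
            output = replacements.get(output, output)
        curated.append((inputs, output))
    return curated

data = [("q1 static target", "strategy-a"), ("q2 static target", "strategy-b")]
scores = {"strategy-a": 0.2, "strategy-b": 0.9}
print(update_training_data(data, scores, {"strategy-a": "strategy-c"}))
# -> [('q1 static target', 'strategy-c'), ('q2 static target', 'strategy-b')]
```

This is exactly the "performing repetitive calculations" characterization the examiner applies: the loop is generic bookkeeping, independent of any particular model architecture.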
Also, the steps of "automatically retrieving records," "updating the sub-target training data," and "retraining with the updated training data to replace low quality output with a new dynamic output" are considered a well-understood, routine, and conventional function since they are just "performing repetitive calculations" (MPEP 2106.05(d)). The web-crawler and the display device are considered a conventional computer function of "receiving and transmitting over a network" (MPEP 2106.05(d)). Lastly, using an OCR algorithm such as a matrix-matching process is considered a well-known algorithm in the art of OCR (e.g., matrix matching may involve comparing an image to a stored glyph on a pixel-by-pixel basis). The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. Thus, the claim is ineligible.

Independent claim 11 recites similar features and therefore is rejected for the same reasons as independent claim 1. Claims 2, 4-10, 12, and 14-20 are rejected for having the same deficiencies as those set forth with respect to the claims from which they depend, independent claims 1 and 11.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The term "measuring a relevance of the market data" in claim 11 is a relative term which renders the claim indefinite. The term "relevance" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For examination purposes the term "relevance" has been construed to be obtaining data for the most popular products. Claims 12 and 14-20 are rejected for having the same deficiencies as those set forth with respect to the claim from which they depend, independent claim 11.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-12, and 14-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.

Independent Claim 1

Step One - First, pursuant to Step 1 in the January 2019 Revised Patent Subject Matter Eligibility Guidance ("2019 PEG"), 84 Fed. Reg. 53, claim 1 is directed to an apparatus, which is a statutory category.
Step 2A, Prong One - Claim 1 recites: An apparatus for an identification of dynamic sub-targets, wherein the apparatus comprises to: storing a plurality of parameters, wherein the plurality of parameters includes the at least a parameter of each node of the plurality of nodes, to perform a mathematical operation on inputs to the node using at least a parameter of the plurality of parameters; receive a plurality of entity data comprising a plurality of product data associated with an entity, wherein receiving the plurality of entity data comprises: automatically retrieving one or more entity records, retrieving and indexing market data; and generating demand data based on the market data; converting at least a portion of the one or more entity records into machine encoded text; identify one or more static targets as a function of the plurality of entity data and the converted at least a portion of the one or more entity records using a first objective function, wherein identifying the one or more static targets comprises: receiving a target area from a user; selecting a target metric as a function of the target area; generating the first objective function as a function the target metric; and selecting the one or more static targets as a function of optimizing the first objective function; identify a first set of dynamic sub-targets as a function of the one or more static targets and the plurality of product data, wherein identifying the first set of dynamic sub-targets comprises: the parameters to instantiate a sub-target machine learning model comprising a neural network; training a sub-target machine learning model as a function of sub-target training data comprising a plurality of examples of entity data inputs and static target inputs correlated to dynamic sub-target outputs, wherein training further comprises updating the plurality of parameters; receiving user inputs comprising user feedback indicating a quality of previous dynamic sub-target outputs 
generated based on previous entity data inputs and static target inputs; updating the sub-target training data as a function of the user feedback, wherein updating the sub-target training data as a function of user feedback comprises: identifying the quality of the dynamic sub-target output as a function of the user feedback, wherein identifying the quality of the dynamic sub-target output comprises generating an accuracy score of the dynamic sub-target output; removing a low quality dynamic sub-target output from the training data, wherein the low quality dynamic sub- target output indicates a low accuracy score; replacing the low quality dynamic sub-target output with a new dynamic sub-target output; retraining the sub-target machine learning model using the static target inputs correlated to the new dynamic sub-target output as a function of the accuracy score; and retraining the sub-target machine learning model a function of modified correlations of examples of entity data inputs and static target inputs and dynamic sub-target outputs by updating the parameters, wherein integrates a feedback loop mechanism to allow the user to provide input on analysis, interpretation, and recommendations; identify at least one target path as a function of the first set of dynamic sub-targets; iteratively determine a static target status as a function of the first set of dynamic sub-targets and the one or more static targets using a status machine learning model comprising: receiving static training data, wherein the static training data correlates a plurality of the first set of dynamic sub-target data and static target data to a plurality of examples of static target data; training, iteratively, the status machine learning model using the static training data, wherein training the status machine learning model includes retraining the status machine learning model with feedback from previous iterations of the status machine learning model; and generating the static target 
status using the trained status machine learning model; identify a second set of dynamic sub-targets as a function of the static target status and the plurality of product data; and generate a target report as a function of the static target status and the second set of dynamic sub-targets, wherein is further configured to communicate a displayable image to provide a graphical representation to the user, to iteratively determine the static target status as a function of the first set of dynamic sub-targets and the one or more static targets, wherein the static target status is data associated with s reflection of the overall progress and performance of an entity in achieving its long-term objectives, to continuously update the static target status, wherein the static target status is updated in real time. These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity” which include “commercial or legal interactions.” In this case, using correlations to identify a set of dynamic sub-targets (e.g., identify market strategies that increase sales and/or profit of products) is a marketing activity. If a claim limitation, under its broadest reasonable interpretation, covers “commercial or legal interactions,” then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Step 2A Prong 2 - The judicial exception is not integrated into a practical application. 
Claim 1 includes additional elements: an application-specific integrated circuit that includes a rewritable read-only memory (ROM) and circuitry; a processor; a memory; automatically retrieving one or more entity records using a web crawler, wherein the web crawler is additionally configured to systematically browse the internet by visiting a plurality of URLs; an optical character reader (OCR), wherein converting the at least a portion of the one or more entity records into the machine-encoded text comprises converting images of text in the at least a portion of the one or more entity records into the machine-encoded text and further comprises: pre-processing the images of text by de-skewing at least one image component associated with the at least a portion of the one or more entity records by applying a transform operation to the at least one image component; and implementing an OCR algorithm comprising a matrix matching process by comparing pixels of the pre-processed images to pixels of a stored glyph on a pixel-by-pixel basis; a sub-target machine learning model; a status machine learning model; a user interface; and a display device.

The processor is merely used to perform any method, method step, or sequence of method steps in any embodiment described in this disclosure (Paragraph 0009). The ROM is merely used to store a plurality of parameters (Paragraph 0075). The circuitry is merely used to instantiate mathematical operations and input and/or output of data to or from models (Paragraph 0075). The memory is merely used to store instructions to perform tasks disclosed in the disclosure (Paragraph 0010). The web-crawler is merely used to search a plurality of industry-specific websites to gather market data (Paragraph 0017). The OCR is merely used to employ pre-processing of image components.
Pre-processing may include, without limitation, de-skew, de-speckle, binarization, line removal, layout analysis or "zoning," line and word detection, script recognition, character isolation or "segmentation," and normalization. In some embodiments, an OCR process will include an OCR algorithm. Exemplary OCR algorithms include matrix-matching processes and/or feature extraction processes. Matrix matching may involve comparing an image to a stored glyph on a pixel-by-pixel basis. In some cases, matrix matching may also be known as "pattern matching," "pattern recognition," and/or "image correlation" (Paragraphs 0023-0024). The sub-target machine learning model is merely used to: generate dynamic sub-targets by identifying a plurality of static targets and product data correlated to examples of a first set of dynamic sub-targets (Paragraph 0043); be iteratively retrained using updated sub-target training data (Paragraph 0045); update the training data by removing or adding correlations of user data to a path or resources as indicated by the feedback (Paragraphs 0046-0047); and use the feedback to calculate an accuracy score (Paragraph 0048). The status machine learning model is merely used to generate a static target status tailored to the first set of dynamic sub-targets, wherein the static training data may be iteratively updated (Paragraph 0051). In this case, the status machine learning model uses the updated parameters to predict a new static target (e.g., predict a new sales/profit goal using the new marketing strategies). The user interface is merely used to input progress of one or more dynamic sub-targets or static targets (Paragraph 0095). The display device is merely used to provide graphical representations of aspects of the present disclosure (Paragraph 0113). Merely stating that a step is performed by a computer component amounts to "apply it" on a computer (MPEP 2106.05(f)).
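De-skewing "by applying a transform operation," as recited, is conventionally a rotation by the negative of the estimated skew angle. A geometric sketch of that transform; the skew angle is simply given here, whereas real pipelines estimate it (e.g., from a projection profile or Hough transform):

```python
import math

# De-skew sketch: rotate pixel coordinates by the negative of the estimated
# skew angle so that skewed text lines return to the horizontal.
def deskew_point(x, y, skew_degrees):
    """Rotate (x, y) about the origin to undo a skew of `skew_degrees`."""
    theta = math.radians(-skew_degrees)
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# A point lying on a baseline skewed by 5 degrees maps back to the x-axis:
x, y = deskew_point(math.cos(math.radians(5)), math.sin(math.radians(5)), 5)
# (x, y) is approximately (1.0, 0.0)
```

Applied per pixel (with interpolation), this is the de-skew "transform operation" the claim element names; de-speckle, binarization, and the other listed pre-processing steps are likewise standard image operations.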
These elements of "processor," "ROM," "circuitry," "memory," "web-crawler," "OCR," "sub-target machine learning," "status machine learning," "user interface," and "display device" are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element. Also, the web-crawler is considered "field of use" since it is just used to collect data (e.g., the plurality of entity data), but the web-crawler itself is not improved (MPEP 2106.05(h)). Lastly, the transformation using the OCR is considered an extra-solution activity of gathering data since it only contributes nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step). See MPEP 2106.05(c), particular transformation. Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally "apply" the concept of identifying dynamic sub-targets (e.g., market strategies) and updating a static target (e.g., goal) based on updated correlations. The specification shows that the processor is merely used to perform any method, method step, or sequence of method steps in any embodiment described in this disclosure (Paragraph 0009). The ROM is merely used to store a plurality of parameters (Paragraph 0075). The circuitry is merely used to instantiate mathematical operations and input and/or output of data to or from models (Paragraph 0075). The memory is merely used to store instructions to perform tasks disclosed in the disclosure (Paragraph 0010).
The web-crawler is merely used to search a plurality of industry-specific websites to gather market data (Paragraph 0017). The OCR is merely used to employ pre-processing of image components. Pre-processing may include, without limitation, de-skew, de-speckle, binarization, line removal, layout analysis or "zoning," line and word detection, script recognition, character isolation or "segmentation," and normalization. In some embodiments, an OCR process will include an OCR algorithm. Exemplary OCR algorithms include matrix-matching processes and/or feature extraction processes. Matrix matching may involve comparing an image to a stored glyph on a pixel-by-pixel basis. In some cases, matrix matching may also be known as "pattern matching," "pattern recognition," and/or "image correlation" (Paragraphs 0023-0024). The sub-target machine learning model is merely used to: generate dynamic sub-targets by identifying a plurality of static targets and product data correlated to examples of a first set of dynamic sub-targets (Paragraph 0043); be iteratively retrained using updated sub-target training data (Paragraph 0045); update the training data by removing or adding correlations of user data to a path or resources as indicated by the feedback (Paragraphs 0046-0047); and use the feedback to calculate an accuracy score (Paragraph 0048). The status machine learning model is merely used to generate a static target status tailored to the first set of dynamic sub-targets, wherein the static training data may be iteratively updated (Paragraph 0051). In this case, the status machine learning model uses the updated parameters to predict a new static target (e.g., predict a new sales/profit goal using the new marketing strategies). The user interface is merely used to input progress of one or more dynamic sub-targets or static targets (Paragraph 0095). The display device is merely used to provide graphical representations of aspects of the present disclosure (Paragraph 0113).
Also, the steps of "identify a second set of dynamic sub-targets" and "iteratively determine a static target status" are considered a conventional activity since they are merely "performing repetitive calculations" (see MPEP 2106.05(d)). The web-crawler and the display device are considered a conventional computer function of "receiving and transmitting over a network" (MPEP 2106.05(d)). Lastly, using an OCR algorithm such as a matrix-matching process is considered a well-known algorithm in the art of OCR (e.g., matrix matching may involve comparing an image to a stored glyph on a pixel-by-pixel basis). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Independent claim 11 is directed to a method at Step One, which is a statutory category. Claim 11 recites similar limitations as claim 1 and therefore is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. The claim is not patent eligible.

Dependent claims 2, 4-5, 8-10, 12, 14-15, and 18-20 are not directed to any additional claim elements. Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above, such as: to identify a target group as a function of the entity data; to iteratively generate updated product data as a function of the first set of dynamic sub-targets and the static target status; to identify the second set of dynamic sub-targets as a function of the updated product data; analysis steps to identify the one or more static targets; wherein iteratively determining the static target status comprises updating the static target status as a function of the second set of dynamic sub-targets; and wherein the static target status comprises one or more vital metrics associated with the one or more static targets.
These processes are similar to the abstract idea noted in the independent claims because they further the limitations of the independent claims, which are directed to "certain methods of organizing human activity" that include "commercial or legal interactions." In addition, no additional elements are integrated into the abstract idea. Therefore, the claims still recite an abstract idea that can be grouped into a method of organizing human activity.

Dependent claims 6-7 and 16-17 are directed to additional elements such as: using tracking cookies; and using a chatbot. The cookies and chatbot are merely used to generate a plurality of entity data (Paragraph 0096). These are considered "field of use" (MPEP 2106.05(h)) at Step 2A, Prong Two, as they are just used to receive information and do not improve the technology. At Step 2B, they are considered "insignificant extra-solution activity" since the functions described are just "mere data gathering" (MPEP 2106.05(g)) for use in an analysis. Thus, nothing in the claims adds significantly more to the abstract idea. The claims are ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 4-6, 9-12, 14-16, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Seybert et al. (US 2022/0058661 A1), in view of Lah (US 11,494,721 B1), and in further view of Figueroa et al. (US 2024/0177113 A1) and Everest (US 12,242,945 B1).

Regarding claim 1 (Currently Amended), Seybert et al. discloses an apparatus for an identification of dynamic sub-targets, wherein the apparatus comprises (Paragraph 0002, This disclosure relates generally to the technical field of market research, and, more particularly, to methods, systems, articles of manufacture, and apparatus to identify market strategies; as explained in Paragraph 0038 of Applicant's specification, dynamic sub-targets might involve implementing marketing strategies to boost their sales): an application-specific integrated circuit instantiating a plurality of neural network nodes, wherein: the application-specific integrated circuit includes a rewritable read-only memory (ROM) storing a plurality of parameters, wherein the plurality of parameters includes the at least a parameter of each node of the plurality of nodes; the application-specific integrated circuit includes circuitry for each node of the plurality of nodes, the circuitry configured to perform a mathematical operation on inputs to the node using at least a parameter of the plurality of parameters stored in the ROM; and at least a processor communicatively connected to the application-specific integrated circuit (see Figure 2; Paragraph 0034, In the illustrated example of FIG.
1, the action determiner 114 accesses and analyzes the data stored in the client databases 102, 104, 106, 108 to determine a target market strategy for price, promotion, new products, assortment, and/or in-market execution. In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0093, Thus, for example, any of the example data accessor 202, the example data lake 204, the example model trainer 205, the example target principle generator 206, the example pricing determiner 208, the example promotion determiner 210, the example assortment determiner 212, the example new product determiner 214, the example in-store execution determiner 216, the example execution analyzer 218, the example pricing analyzer 220, the example promotion analyzer 222, the example assortment analyzer 224, the example new product analyzer 226, the example execution analyzer 228, the example score generator 230, the example output generator 232 and/or, more generally, the example action determiner 114 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)); Paragraph 0094, The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor and/or processor circuitry, such as the processor 2112 shown in the example processor platform 2100 discussed below in connection with FIG. 21. 
The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 2112, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 2112 and/or embodied in firmware or dedicated hardware; Paragraph 0116, The processor platform 2100 can be a self-learning machine (e.g., a neural network); Examiner notes that a neural network comprises nodes and edges); and a memory communicatively connected to the at least a processor, the memory containing instructions configuring the at least a processor to (Paragraph 0116, FIG. 21 is a block diagram of an example processor platform 2100 structured to execute the instructions of FIGS. 15-20 to implement the action determiner 114 of FIGS. 1 and/or 2; Paragraph 0118, The processor 2112 of the illustrated example includes a local memory 2113 (e.g., a cache). The processor 2112 of the illustrated example is in communication with a main memory including a volatile memory 2114 and a non-volatile memory 2116 via a bus 2118): receive a plurality of entity data comprising a plurality of product data associated with an entity (Paragraph 0025, The real-time market data can include anything from measuring sales performances of retail companies to optimizing in-store execution such as price, promotion and assortment; Paragraph 0028. In the illustrated example of FIG. 1, the respective client databases 102, 104, 106, 108 contain product information for associated individual clients (e.g., different retail chains, different brands, etc.). That is, the client databases 102, 104, 106, 108 store point of sale (POS) data. In examples disclosed herein, the client database 102 stores market data such as universal product code (UPC) level data including volumetric sales, price data, promotion data, and/or audit data. 
The client database 102 can store retail chain data (e.g., data from Target®, Walmart®, etc.) and/or independent retail data. For example, the client database 102 can cover grocery data, drug data, military commissary data, liquor data, etc.), …; identify one or more static targets as a function of the plurality of entity data … using a first objective function, wherein identifying the one or more static targets comprises (Paragraph 0039, The example target principle generator 206 applies the machine learning model to determine target business strategies. For example, the target principle generator 206 determines guidelines (e.g., principles, rules, target metrics, parameter, etc.) for one or more market levers at the market, account, and/or store level): receiving a target area from a user; selecting a target metric as a function of the target area (Paragraph 0034, The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; In this case, the target area is financial performance and the target metric is sales and/or profit of products. See Paragraph 0033 in Applicant’s specification); generating the first objective function as a function of the target metric; and selecting the one or more static targets as a function of optimizing the first objective function (Paragraph 0034, The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities.
That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; Paragraph 0040, For example, the pricing determiner 208 analyzes sub-levers of the price lever (e.g., target price gaps, recommended price strategy, everyday price thresholds, target price positions, target price velocity, target historical price changes, etc.). For example, the example pricing determiner 208 determines a target everyday price for a product to increase (e.g., maximize) profit and volume growth; As stated in Paragraph 0033 of Applicant’s specification, the first objective function is a mathematical formula that defines a measure of performance that needs to be either maximized or minimized for a static target); identify a first set of dynamic sub-targets as a function of the one or more static targets and the plurality of product data (Paragraph 0034, In the illustrated example of FIG. 1, the action determiner 114 accesses and analyzes the data stored in the client databases 102, 104, 106, 108 to determine a target market strategy for price, promotion, new products, assortment, and/or in-market execution. In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy. The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. 
That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; Paragraph 0038, The example model trainer 205 trains a machine learning model to identify target strategies for one or more market levers (e.g., price, promotion, new products, assortment, and/or in-market execution; As stated in Paragraph 0038 of Applicant’s specification, the dynamic sub-targets might involve implementing marketing strategies to boost their sales), wherein identifying the first set of dynamic sub-targets comprises: configuring the parameters of the rewritable ROM to instantiate a sub-target machine learning model comprising a neural network (see Figure 2; Paragraph 0034, In the illustrated example of FIG. 1, the action determiner 114 accesses and analyzes the data stored in the client databases 102, 104, 106, 108 to determine a target market strategy for price, promotion, new products, assortment, and/or in-market execution. In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0093, Thus, for example, any of the example data accessor 202, the example data lake 204, the example model trainer 205, the example target principle generator 206, the example pricing determiner 208, the example promotion determiner 210, the example assortment determiner 212, the example new product determiner 214, the example in-store execution determiner 216, the example execution analyzer 218, the example pricing analyzer 220, the example promotion analyzer 222, the example assortment analyzer 224, the example new product analyzer 226, the example execution analyzer 228, the example score generator 230, the example output generator 232 and/or, more generally, the example action determiner 114 could be implemented by one or more analog or digital circuit(s), logic circuits, 
programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)); Paragraph 0094, The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor and/or processor circuitry, such as the processor 2112 shown in the example processor platform 2100 discussed below in connection with FIG. 21. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 2112, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 2112 and/or embodied in firmware or dedicated hardware; Paragraph 0116, The processor platform 2100 can be a self-learning machine (e.g., a neural network)); training the sub-target machine learning model as a function of sub-target training data comprising a plurality of examples of entity data inputs and static target inputs correlated to dynamic sub-target outputs (Paragraph 0034, In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0038, In the illustrated example of FIG. 2, the action determiner 114 includes an example model trainer 205. In some examples, the model trainer 205 includes means for model training (sometimes referred to herein as model training means). The example means for model training is hardware.
The example model trainer 205 trains a machine learning model to identify target strategies for one or more market levers (e.g., price, promotion, new products, assortment, and/or in-market execution)), wherein training further comprises updating the plurality of parameters in the rewritable ROM (see Figure 2; Paragraph 0034, In the illustrated example of FIG. 1, the action determiner 114 accesses and analyzes the data stored in the client databases 102, 104, 106, 108 to determine a target market strategy for price, promotion, new products, assortment, and/or in-market execution. In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0093, Thus, for example, any of the example data accessor 202, the example data lake 204, the example model trainer 205, the example target principle generator 206, the example pricing determiner 208, the example promotion determiner 210, the example assortment determiner 212, the example new product determiner 214, the example in-store execution determiner 216, the example execution analyzer 218, the example pricing analyzer 220, the example promotion analyzer 222, the example assortment analyzer 224, the example new product analyzer 226, the example execution analyzer 228, the example score generator 230, the example output generator 232 and/or, more generally, the example action determiner 114 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)); Paragraph 0094, The machine readable instructions may be one or more executable programs or portion(s) of an executable 
program for execution by a computer processor and/or processor circuitry, such as the processor 2112 shown in the example processor platform 2100 discussed below in connection with FIG. 21. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 2112, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 2112 and/or embodied in firmware or dedicated hardware; Paragraph 0116, The processor platform 2100 can be a self-learning machine (e.g., a neural network)); … identify at least one target path as a function of the first set of dynamic sub-targets (Paragraph 0039, In some examples, the target principles are conditions deemed “optimal” by the target principle generator 206. In the illustrated example of FIG. 2, the target principle generator 206 includes an example pricing determiner 208, an example promotion determiner 210, an example assortment determiner 212, an example new product determiner 214, and an example execution determiner 216 (sometimes referred to as an in-store execution determiner 216). For example, the target principle generator 206 determines target principles for the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever; Paragraph 0040, For example, the pricing determiner 208 analyzes sub-levers of the price lever (e.g., target price gaps, recommended price strategy, everyday price thresholds, target price positions, target price velocity, target historical price changes, etc.).
For example, the example pricing determiner 208 determines a target everyday price for a product to increase (e.g., maximize) profit and volume growth); iteratively determine a static target status as a function of the first set of dynamic sub-targets and the one or more static targets using a status machine learning model comprising (Paragraph 0034, In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0057, In the illustrated example of FIG. 2, the action determiner 114 includes an example execution analyzer 218. In some examples, the execution analyzer 218 includes means for comparing data (sometimes referred to herein as data comparing means). The example means for comparing data is hardware. The example execution analyzer 218 analyzes real-time market data (e.g., POS data, etc.) to identify products in which the in-market strategies and executions differ from the target principles determined by the example target principle generator 206. That is, in some examples, the execution analyzer 218 analyzes the real-time market data based on the levers analyzed by the target principle generator 206 (e.g., the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever).
However, the execution analyzer 218 can analyze the real-time market data based on any suitable lever and/or sub-lever analyzed by the target principle generator 206; Paragraph 0058, The example pricing analyzer 220 compares the real-time market data to the target price principle determined by the pricing determiner 208; Table 11, lower price to be competitive; Examiner interprets “changing the price” as “the first set of dynamic sub-targets”); receiving static training data, wherein the static training data correlates a plurality of the first set of dynamic sub-target data and static target data to a plurality of examples of static target data; training, iteratively, the status machine learning model using the static training data, … (Paragraph 0034, In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy. The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; Paragraph 0038, The example model trainer 205 trains a machine learning model to identify target strategies for one or more market levers (e.g., price, promotion, new products, assortment, and/or in-market execution; Paragraph 0039, The example target principle generator 206 applies the machine learning model to determine target business strategies. For example, the target principle generator 206 determines guidelines (e.g., principles, rules, target metrics, parameter, etc.) for one or more market levers at the market, account, and/or store level. 
In some examples, the target principles are conditions deemed “optimal” by the target principle generator 206; see Figure 3 and related text in Paragraph 0067, In the example identify phase 306, the example execution analyzer 218 (FIG. 2) compares the target principles to in-market execution data to identify levers that are out of compliance. In the example score phase 308, the example score generator 230 (FIG. 2) generates scores for the levers based on whether the levers are out of compliance. For example, the score generator 230 aggregates the lever scores to generate account scores and/or market scores. That is, the example score generator 230 identifies levers with the highest opportunity to optimize market strategies (e.g., levers with relatively low scores); Examiner notes that Seybert is continuously optimizing marketing strategies based on the real-time market data, wherein the real-time data is used to determine the status/compliance of the marketing strategies); and generating the static target status using the trained status machine learning model (Paragraph 0039, The example target principle generator 206 applies the machine learning model to determine target business strategies. For example, the target principle generator 206 determines guidelines (e.g., principles, rules, target metrics, parameter, etc.) for one or more market levers at the market, account, and/or store level. In some examples, the target principles are conditions deemed “optimal” by the target principle generator 206); identify a second set of dynamic sub-targets as a function of the static target status and the plurality of product data (Paragraph 0057, In the illustrated example of FIG. 2, the action determiner 114 includes an example execution analyzer 218. In some examples, the execution analyzer 218 includes means for comparing data (sometimes referred to herein as data comparing means). The example means for comparing data is hardware. 
The example execution analyzer 218 analyzes real-time market data (e.g., POS data, etc.) to identify products in which the in-market strategies and executions differ from the target principles determined by the example target principle generator 206. That is, in some examples, the execution analyzer 218 analyzes the real-time market data based on the levers analyzed by the target principle generator 206 (e.g., the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever). However, the execution analyzer 218 can analyze the real-time market data based on any suitable lever and/or sub-lever analyzed by the target principle generator 206; Paragraph 0059, The example promotion analyzer 222 compares the real-time market data to the target promotion principle determined by the example promotion determiner 210; Table 12, timing best offer; see Figure 3 and related text in Paragraph 0067, In the example identify phase 306, the example execution analyzer 218 (FIG. 2) compares the target principles to in-market execution data to identify levers that are out of compliance. In the example score phase 308, the example score generator 230 (FIG. 2) generates scores for the levers based on whether the levers are out of compliance. For example, the score generator 230 aggregates the lever scores to generate account scores and/or market scores. That is, the example score generator 230 identifies levers with the highest opportunity to optimize market strategies (e.g., levers with relatively low scores); Examiner notes that Seybert is continuously optimizing marketing strategies based on real-time market data. 
Therefore, the new marketing strategies with the highest opportunity are interpreted as the second set of dynamic sub-targets); and generate a target report as a function of the static target status and the second set of dynamic sub-targets (Paragraph 0065, Additionally or alternatively, the output generator 232 generates a report card, intelligent dashboard, etc. including aggregate report cards by market, brand, etc. of the opportunities of each product. Additionally or alternatively, the output generator 232 provides a recommended adjustment of a lever for the user to execute. For example, the output generator 232 generates the report 116 (FIG. 1) to display on the user device 118 (FIG. 1). In some examples, the output generator 232 causes a change in an advertised price of the product, releases an advertisement for broadcast having the updated price, etc. In still other examples, the output generator 232 generates control instructions to cause an advertisement, cause a price change in a retailer computer system, cause a temporary price change in a market of interest, etc.; Paragraph 0067. For example, the output generator 232 generates a report card displaying the levers of the account and/or product that require action (e.g., are out of compliance with the target principles; Paragraph 0078, At block 718, the first market analyst views the report (e.g., the insights banner 710, the driving force report 712, the grade report 714, and/or the insights report 716). For example, the first market analyst can determine market strategies that are working (e.g., levers with a relatively high grade) and market strategies that are not working (e.g., levers with a relatively low grade). At block 720, the first market analyst selects an opportunity. For example, the first market analyst selects a lever with a relatively low grade. 
For example, the first market analyst selects a first opportunity with a score of “C” and does not select a second opportunity with a score of “A”), wherein the apparatus is further configured to communicate a displayable image to a display device to provide a graphical representation to the user (Paragraph 0121, One or more output devices 2124 are also connected to the interface circuit 2120 of the illustrated example. The output devices 2124 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. The interface circuit 2120 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor; As stated in Applicant’s specification, Paragraph 0113, examples of a display device that can communicate a displayable image include a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof), wherein the processor is configured to iteratively determine the static target status as a function of the first set of dynamic sub-targets and the one or more static targets, wherein the static target status is data associated with a reflection of the overall progress and performance of an entity in achieving its long-term objectives, wherein the processor is configured to continuously update the static target status, wherein the static target status is updated in real time (Paragraph 0034, In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy. The action determiner 114 scores (e.g., ranks, prioritizes, etc.) 
the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; Paragraph 0038, The example model trainer 205 trains a machine learning model to identify target strategies for one or more market levers (e.g., price, promotion, new products, assortment, and/or in-market execution; Paragraph 0039, The example target principle generator 206 applies the machine learning model to determine target business strategies. For example, the target principle generator 206 determines guidelines (e.g., principles, rules, target metrics, parameter, etc.) for one or more market levers at the market, account, and/or store level. In some examples, the target principles are conditions deemed “optimal” by the target principle generator 206; see Figure 3 and related text in Paragraph 0067, In the example identify phase 306, the example execution analyzer 218 (FIG. 2) compares the target principles to in-market execution data to identify levers that are out of compliance. In the example score phase 308, the example score generator 230 (FIG. 2) generates scores for the levers based on whether the levers are out of compliance. For example, the score generator 230 aggregates the lever scores to generate account scores and/or market scores. That is, the example score generator 230 identifies levers with the highest opportunity to optimize market strategies (e.g., levers with relatively low scores); Paragraph 0094, Processor; Examiner notes that Seybert is continuously optimizing marketing strategies based on the real-time market data, wherein the real-time data is used to determine the status/compliance of the marketing strategies). Although Seybert et al. 
discloses training a sub-target machine learning model as a function of sub-target training data comprising a plurality of examples of entity data inputs and static target inputs correlated to dynamic sub-target outputs (Paragraph 0038, training a machine learning model to identify target strategies that increase sales and/or profit of products; As stated in Paragraph 0038 of Applicant’s specification, the dynamic sub-targets might involve implementing marketing strategies to boost their sales), Seybert et al. does not specifically disclose receiving user inputs or retraining the sub-target machine learning model by providing feedback indicating a quality of previous dynamic sub-target outputs. However, Lah discloses … wherein identifying the first set of dynamic sub-targets comprises: configuring the parameters of the rewritable ROM to instantiate a sub-target machine learning model comprising a …; training the sub-target machine learning model as a function of sub-target training data comprising a plurality of examples of entity data inputs and static target inputs correlated to dynamic sub-target outputs, wherein training further comprises updating the plurality of parameters in the rewritable ROM; receiving user inputs comprising user feedback indicating a quality of previous dynamic sub-target outputs generated based on previous entity data inputs and static target inputs using a user interface (Column 7, lines 1-4, A company may therefore use the system to achieve certain goals, which may be predefined by the company. The company may wish to achieve one or more goals such as, without limitation, maximize sales or profits; Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning.
As the system acquires additional data, it will be able to use that data to make comparative analyses on other businesses who have implemented similar strategies and determine what has the highest probability of working best for different types of companies across different types of industries. For example, the strategy implementation and corresponding result may be fed back into the models that correlate actions, results, external data, and/or other information used to learn from and develop strategies described herein. In some instances, such monitoring may also include re-scoring some or all channels, and the overall score. Such re-scores may trigger additional or alternative suggestions; Column 17, lines 36-43, In an implementation, the AI portal 134 may provide one or more interfaces, such as graphical user interfaces, to obtain goals, datasets, business processes, and/or other training parameters for training a model or otherwise creating a company instance for modeling. As such, the company may specify its goals and provide information used by the system to learn how to achieve those goals based on machine learned models trained and refined by the AI engine 136; Column 27, lines 56-67, Figure 7 illustrates configuration of rules for action automation; Column 29, lines 22-67, As another example, processor(s) 112 may be programmed by one or more additional instructions that may perform some or all of the functionality attributed herein to one of the instructions. The various instructions described herein may be stored in a storage device 114, which may comprise random access memory (RAM), read only memory (ROM), and/or other memory. 
The storage device may store the computer program instructions (e.g., the aforementioned instructions) to be executed by processor 112 as well as data that may be manipulated by processor 112); updating the sub-target training data as a function of the user feedback, wherein updating the sub-target training data as a function of user feedback comprises: identifying the quality of the dynamic sub-target output as a function of the user feedback, wherein identifying the quality of the dynamic sub-target output comprises generating an accuracy … of the dynamic sub-target output (Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning. As the system acquires additional data, it will be able to use that data to make comparative analyses on other businesses who have implemented similar strategies and determine what has the highest probability of working best for different types of companies across different types of industries. For example, the strategy implementation and corresponding result may be fed back into the models that correlate actions, results, external data, and/or other information used to learn from and develop strategies described herein. In some instances, such monitoring may also include re-scoring some or all channels, and the overall score. 
Such re-scores may trigger additional or alternative suggestions; Column 24, lines 19-30, Relying on correlative data may be reasonably acceptable if that is all the data available, but true cause and effect is a much stronger base for accurate forecasting; Column 25, lines 7-20, In some instances, the AI engine 136 either may not have enough data or not have strong enough interpretation of the data to make accurate action predictions); removing a low quality dynamic sub-target output from the training data, …; replacing the low quality dynamic sub-target output with a new dynamic sub-target output (Column 5, lines 37-60, In other words, the strategy and suggestion engine may correlate various observed weather conditions with observed business performance (e.g., sales) and observed actions (e.g., marketing activities). Using the modeling, strategy and suggestion engine may suggest that the company suspend certain actions such as marketing activities because their effectiveness during inclement weather is reduced (e.g. deviating from a threshold amount) and external data indicating that inclement weather is expected. In some instances, strategy and suggestion engine may make time-bound suggestions, such as to suspend marketing activities for four days until the inclement weather is expected to pass. Because strategy and suggestion engine may operate in real-time, these and other suggestions may be updated based on updated internalized data and/or updated external data; Examiner notes that Lah adds or removes certain actions based on the effectiveness of the marketing activities/strategies. 
Examiner interprets the effectiveness of the marketing activities/strategies as the quality of the dynamic sub-targets); … retraining the sub-target machine learning model using the static target inputs correlated to the new dynamic sub-target output as a function of the accuracy …; and retraining the sub-target machine learning model a function of modified correlations of examples of entity data inputs and static target inputs and dynamic sub-target outputs by updating the parameters in the rewritable ROM, wherein the processor integrates a feedback loop mechanism (Column 17, lines 18-34, The system may analyze the corpus of data and run continuous correlation analysis to determine the direction of value changes of one metric relative to a matrix of other metrics; In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning. As the system acquires additional data, it will be able to use that data to make comparative analyses on other businesses who have implemented similar strategies and determine what has the highest probability of working best for different types of companies across different types of industries. For example, the strategy implementation and corresponding result may be fed back into the models that correlate actions, results, external data, and/or other information used to learn from and develop strategies described herein. In some instances, such monitoring may also include re-scoring some or all channels, and the overall score. 
Such re-scores may trigger additional or alternative suggestions; Column 25, lines 7-20, In some instances, the AI engine 136 either may not have enough data or not have strong enough interpretation of the data to make accurate action predictions; Column 29, lines 22-67, As another example, processor(s) 112 may be programmed by one or more additional instructions that may perform some or all of the functionality attributed herein to one of the instructions. The various instructions described herein may be stored in a storage device 114, which may comprise random access memory (RAM), read only memory (ROM), and/or other memory. The storage device may store the computer program instructions (e.g., the aforementioned instructions) to be executed by processor 112 as well as data that may be manipulated by processor 112) to allow the user to provide input on analysis, wherein the processor integrates a feedback loop mechanism to allow the user to provide input on analysis, interpretation, and recommendation (Column 17, lines 36-43, In an implementation, the AI portal 134 may provide one or more interfaces, such as graphical user interfaces, to obtain goals, datasets, business processes, and/or other training parameters for training a model or otherwise creating a company instance for modeling. As such, the company may specify its goals and provide information used by the system to learn how to achieve those goals based on machine learned models trained and refined by the AI engine 136; Column 27, lines 40-44, In an operation 342, process 300 may include identifying strategy suggestions to implement. Such identification may be based on input received from the company via the dashboard interface and/or automated selection without such input; Column 27, lines 45-49, In operations 344 and 346, process 300 may include implementing and monitoring the strategy implementation. 
The monitored strategy implementation may be used to feedback into the model so that the model may learn from this data; Column 27, lines 50-55, FIGS. 4-6 depict various user interfaces, including a dashboard interface for configuring channels for data connectors, displaying and receiving acceptances of suggested actions, and displaying customer-specific information of a company, according to various implementations of the invention; Examiner notes that a dashboard interface is used to provide inputs from a user (e.g., company) related to the analysis, interpretation, and recommendations (e.g., feedback of which suggestions should be implemented)). identify at least one target path as a function of the first set of dynamic sub-targets (Column 10, lines 42-51, In an implementation, the strategy suggestion generator 128 may use output from the decision engine 126 to present comprehensive marketing strategy improvements across some or all channels 101 related to the company. In some instances, the strategy suggestion generator 128 may generate suggestions for a given channel 101 once it is decided that performance with respect to that channel should be increased. Alternatively or additionally, the strategy suggestion generator 128 may suggest an action that would achieve one or more goals specified in the user-defined rule; see Figure 5, Post content on both twitter and Facebook encouraging people to purchase your Weekend Island Tour, which will increase your sales by 20%); iteratively determine a static target status as a function of the first set of dynamic sub-targets and the one or more static targets using a status machine learning model comprising (Column 6, lines 48-52, As used herein, the term “effectiveness” refers to an ability to achieve certain goals.
Thus, to increase effectiveness of marketing strategies means to be able to either achieve or exceed certain goals of those marketing strategies; Column 6, lines 56-67, Depending on various factors such as the domain of a company (e.g., the industry to which a given company belongs), date, season, weather, target demographics, content of data transmissions, and/or other factors, some networked electronic channels and/or content may be more effective at achieving certain goals (e.g., marketing or sales goals) than others. Because of the variety of different networked electronic channels and the various factors that influence whether or not use of a given networked electronic channel will facilitate achievement of those goals, the system may employ computerized artificial intelligence to parse, learn from and adapt to these and other variables; Column 17, lines 64-67 & Column 18, lines 1-19, The AI engine 136 may train the models by creating goals for the customer instance, assigning potential actions to achieve the goals, determining event sensitivity, measuring effectiveness (e.g., success) of the actions, monitoring success metrics, altering future system behavior, and/or using other processes. Creating goals may include setting a goal priority for the goals and setting rules that alter goal prioritization. Assigning potential actions may include creating actions based by defining action properties (which may include setting ratios and correlations between property values and creating forecasting models for action properties), defining action sequence, defining action start time content, defining action duration, determining cost of actions (which may include determining one-times cost vs. ongoing costs and diminishing costs over time), and determining action targets. Determining event sensitivity may include identifying external factors that affect the outcome of goals and modifying action selection based on event sensitivity. 
Measuring success may include defining goal success and defining success measurement mechanisms. Monitoring success metrics and altering future system behavior may be based on correlated data metrics and cause and effect analysis; Column 18, lines 60-64, In some instances, goals may not be static, but may change over time depending on certain conditions. The AI engine 136 may account for the realignment of goals based on certain calculable parameters; Examiner notes that the “static target status” may be changed over time based on external factors and effectiveness of the marketing strategies): receiving static training data, wherein the static training data correlates a plurality of the first set of dynamic sub-target data and static target data to a plurality of examples of static target data; training, iteratively, the status machine learning model using the static training data, wherein training the status machine learning model includes retraining the status machine learning model with feedback from previous iterations of the status machine learning model (Column 3, lines 31-34, The system may analyze the corpus of data and run continuous correlation analysis to determine the direction of value changes of one metric relative to a matrix of other metrics; Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning. As the system acquires additional data, it will be able to use that data to make comparative analyses on other businesses who have implemented similar strategies and determine what has the highest probability of working best for different types of companies across different types of industries. 
For example, the strategy implementation and corresponding result may be fed back into the models that correlate actions, results, external data, and/or other information used to learn from and develop strategies described herein. In some instances, such monitoring may also include re-scoring some or all channels, and the overall score. Such re-scores may trigger additional or alternative suggestions; Column 18, lines 16-19, Monitoring success metrics and altering future system behavior may be based on correlated data metrics and cause and effect analysis; Figure 5, Our projections indicate that you will increase your sales of this product by 20% this month by following this strategy); and generating the static target status using the trained status machine learning model (Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning; Column 18, lines 60-64, In some instances, goals may not be static, but may change over time depending on certain conditions. The AI engine 136 may account for the realignment of goals based on certain calculable parameters); identify a second set of dynamic sub-targets as a function of the static target status and the plurality of product data (Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning; Column 18, lines 60-64, In some instances, goals may not be static, but may change over time depending on certain conditions. 
The AI engine 136 may account for the realignment of goals based on certain calculable parameters; Examiner interprets “better strategies developed by the machine learning” as the “second set of dynamic sub-targets”); and generate a target [output] as a function of the static target status and the second set of dynamic sub-targets, wherein the apparatus is further configured to communicate a displayable [dashboard] to a display device to provide a graphical representation to the user, wherein the processor is configured to iteratively determine the static target status as a function of the first set of dynamic sub-targets and the one or more static targets, wherein the static target status is data associated with a reflection of the overall progress and performance of an entity in achieving its long-term objectives, wherein the processor is configured to continuously update the static target status, wherein the static target status is updated in real time (Column 5, lines 4-12, Various actions may be suggested based on real-time data from the various channels. As used herein, the term “real-time” may refer to receipt data by the strategy and suggestion engine from the connectors that triggers processing to generate a suggestion to perform an action. Put another way, in some instances, strategy and suggestion engine may attempt to determine suggestions upon receipt of data relating to the company as monitored and aggregated from one or more of the channels; Column 6, lines 26-29, FIG. 5 depicts a screenshot of a dashboard interface for displaying and receiving acceptances of suggested actions, according to an implementation of the invention; Column 10, lines 42-51, In an implementation, the strategy suggestion generator 128 may use output from the decision engine 126 to present comprehensive marketing strategy improvements across some or all channels 101 related to the company. 
In some instances, the strategy suggestion generator 128 may generate suggestions for a given channel 101 once it is decided that performance with respect to that channel should be increased. Alternatively or additionally, the strategy suggestion generator 128 may suggest an action that would achieve one or more goals specified in the user-defined rules; Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning; Column 25, lines 40-48, Client devices 170 may be configured as a personal computer (e.g., a desktop computer, a laptop computer, etc.), a smartphone, a tablet computing device, and/or other device that can be programmed to interface with computer system 110 (e.g., using the dashboard). Although not illustrated in FIG. 1, end user devices 140 may include one or more physical processors programmed by computer program instructions). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the sub-target machine learning model used for optimizing an objective function (e.g., provide marketing strategies that increase sales and/or profit) of the invention of Seybert et al. to further specify how the sub-target machine learning model is iteratively trained over time by correlating entity data inputs to dynamic sub-target outputs (e.g., how the marketing strategies are correlated to sales/revenue of the entity) of the invention of Lah because doing so would allow the machine learning model to monitor the results of the strategy implementation in order to learn from the results and develop better strategies (see Lah, Column 17, lines 18-34) and change goals over time depending on certain conditions (Column 18, lines 60-64).
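For context only, the feedback-loop mechanism recited in the limitations above (scoring the quality of prior dynamic sub-target outputs from user feedback, removing and replacing low-quality examples in the training data, and retraining when accuracy falls below a threshold) can be sketched as follows. This is a minimal hypothetical illustration; every function name, the data layout, and the 0.5 threshold are assumptions for exposition and are not drawn from the claims or the cited references.

```python
# Hypothetical sketch of the claimed feedback loop: user feedback assigns an
# accuracy score to each prior sub-target output; low-scoring examples are
# removed from the training data, replaced, and the model is retrained.
ACCURACY_THRESHOLD = 0.5  # assumed cutoff for a "low quality" output

def update_training_data(training_data, feedback):
    """Remove examples whose sub-target output scored below the threshold
    and replace them with the corrected output supplied via user feedback."""
    updated = []
    for example in training_data:
        score = feedback.get(example["id"], {}).get("accuracy", 1.0)
        if score < ACCURACY_THRESHOLD:
            # replace the low-quality dynamic sub-target output
            new_output = feedback[example["id"]]["replacement"]
            updated.append({**example, "sub_target": new_output})
        else:
            updated.append(example)
    return updated

def retrain_if_needed(model_accuracy, training_data, train_fn):
    """Retrain only when overall model accuracy falls below the threshold;
    returns the retrained model, or None if no retraining was triggered."""
    if model_accuracy < ACCURACY_THRESHOLD:
        return train_fn(training_data)
    return None
```

On this sketch, each pass through `update_training_data` followed by `retrain_if_needed` corresponds to one iteration of the continuous feedback loop the references describe.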
Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. The combination of Seybert et al. and Lah discloses training and retraining a sub-target machine learning model as a function of sub-target training data comprising a plurality of examples of entity data inputs and static target inputs correlated to dynamic sub-target outputs receiving user inputs (see Seybert et al., Paragraph 0038, training a machine learning model to identify target strategies that increase sales and/or profit of products; see Lah, Column 17, lines 18-34, monitor the results of the strategy implementation in order to learn from the results and develop better strategies; As stated in Paragraph 0038 of Applicant’s specification, the dynamic sub-targets might involve implementing marketing strategies to boost their sales). Although the combination of Seybert et al. and Lah further discloses updating marketing strategies based on the quality of the dynamic sub-target output as a function of the user feedback (see Lah, Column 6, lines 40-52, effectiveness score), and accuracy of the model (see Lah, Column 25, lines 7-20, In some instances, the AI engine 136 either may not have enough data or not have strong enough interpretation of the data to make accurate action predictions), the combination of Seybert et al. and Lah does not specifically disclose automatically retraining the machine learning model when the accuracy score is below a threshold. Also, although the combination of Seybert et al.
and Lah further discloses receiving a plurality of entity data comprising a plurality of product data associated with an entity (see Seybert et al., see Figure 2 and related text in Paragraph 0025, The real-time market data can include anything from measuring sales performances of retail companies to optimizing in-store execution such as price, promotion and assortment), the combination of Seybert et al. and Lah does not specifically disclose wherein the product data is retrieved by using a web crawler. However, Figueroa et al. discloses to: receive a plurality of entity data comprising a plurality of product data associated with an entity, wherein receiving the plurality of entity data comprises: automatically retrieving one or more entity records using a web crawler, wherein the web crawler is additionally configured to systematically browse the internet by visiting a plurality of URLs, retrieving and indexing market data; and generating demand data based on the market data (Paragraph 0038, In some embodiments, the provided digital shelf analytic system may provide a predictive model for recommendations to improve sales performance (e.g. performance score). In some cases, the weighting coefficients may be generated by the predictive model. In some cases, the score may be the output of the predictive model, as a proxy for potential sales performance. A predictive model may be a trained model or machine learning algorithm trained model. The predictive model may be improved or optimized continuously using a combination of public and private data collected. In some cases, the input data to the predictive model may comprise data related to the various factors affecting the score as described above. In some cases, the input data may include user provided data related to a given product or brand (e.g., keyword/search terms for the product, product description, categories, images, etc). 
The input data may include raw data such as marketplace data that may be obtained automatically using techniques such as image recognition, parsing HTM, URL, watermark decoded from product image, image fingerprints, text fingerprints, cookie data, and the like. For example, Amazon pages for the most popular products or products in the same category may be crawled to independently compile data that cross-references Amazon ASINs to GTINs, manufacturers' model numbers, and other identifying data. In some cases, the input data may be retrieved from an external data source such as public and commercial brand database; In this case, sales performance is equivalent to demand data since it’s measuring sales/purchases/transactions for a specific product); … updating the sub-target training data as a function of the user feedback, wherein updating the sub-target training data as a function of user feedback comprises: identifying the quality of the dynamic sub-target output as a function of the user feedback, wherein identifying the quality of the dynamic sub-target output comprises generating an accuracy score of the dynamic sub-target output (Paragraph 0040, In some cases, the output of the predictive model may comprise recommendation information. The recommendation information may comprise information about improving sales performance. For example, the recommendation may comprise a recommended keyword/search terms of the product, description about the product, presentation of the product (e.g., image, video, etc), price, marketing strategy (e.g., paid presence), and the like. In some cases, the recommendations may be quantified so that one or more actions as recommended are executable to the user; Paragraph 0088, Data monitored by the model monitor system may include data involved in model training and during production. 
The data at model training may comprise, for example, training, test and validation data, predictions, or statistics that characterize the above datasets (e.g., mean, variance and higher order moments of the data sets). Data involved in production time may comprise time, input data, predictions made, and confidence bounds of predictions made. In some embodiments, the ground truth data (e.g., user provided recommendations) may also be monitored. The ground truth data may be monitored to evaluate the accuracy of a model and/or trigger retraining of the model. In some cases, users may provide ground truth data (e.g., user provided feedback) to the predictive model creation and management system 1000 after a model is in deployment phase. The model monitor system may monitor changes in data such as changes in ground truth data, or when new training data or prediction data becomes available; Paragraph 0089, As described above, the plurality of predictive models (e.g., model for predicting recommendations, weights, or scores) may be individually monitored or retrained upon detection of the model performance is below a threshold. During prediction time, predictions may be associated with the model in order to track data drift or to incorporate feedback from new ground truth data). removing a low quality dynamic sub-target output from the training data, wherein the low quality dynamic sub-target output indicates a low accuracy score; replacing the low quality dynamic sub-target output with a new dynamic sub-target output (Paragraph 0069, In some cases, the predictive model may be continually trained and improved using proprietary data or relevant data (e.g., user provided data, new data collected from ecommerce channels) so that the output can be better adapted to the specific product, ecommerce channel or a brand. 
In some cases, a predictive model may be pre-trained and implemented on the existing ecommerce system, and the pre-trained model may undergo continual re-training that involves continual tuning of the predictive model or a component of the predictive model (e.g., classifier) to adapt to changes in the implementation environment over time (e.g., changes in the marketplace data, model performance, user-specific data, etc). In some cases, the training data may be created based on user input including but not limited to, sales information (e.g., cost of product) and related recommendation; Paragraph 0088, Data monitored by the model monitor system may include data involved in model training and during production. The data at model training may comprise, for example, training, test and validation data, predictions, or statistics that characterize the above datasets (e.g., mean, variance and higher order moments of the data sets). Data involved in production time may comprise time, input data, predictions made, and confidence bounds of predictions made. In some embodiments, the ground truth data (e.g., user provided recommendations) may also be monitored. The ground truth data may be monitored to evaluate the accuracy of a model and/or trigger retraining of the model. In some cases, users may provide ground truth data (e.g., user provided feedback) to the predictive model creation and management system 1000 after a model is in deployment phase. The model monitor system may monitor changes in data such as changes in ground truth data, or when new training data or prediction data becomes available; Paragraph 0089, As described above, the plurality of predictive models (e.g., model for predicting recommendations, weights, or scores) may be individually monitored or retrained upon detection of the model performance is below a threshold. 
During prediction time, predictions may be associated with the model in order to track data drift or to incorporate feedback from new ground truth; Examiner notes that Figueroa is “updating the recommendations (e.g., marketing strategies/activities)” based on accuracy of predictions. Examiner interprets “updating recommendations” as “removing and replacing the dynamic sub-target” since it’s only using the marketing parameters that optimize a marketing campaign) retraining the sub-target machine learning model using the static target inputs correlated to the new dynamic sub-target output as a function of the accuracy score (Paragraph 0040, In some cases, the output of the predictive model may comprise recommendation information. The recommendation information may comprise information about improving sales performance. For example, the recommendation may comprise a recommended keyword/search terms of the product, description about the product, presentation of the product (e.g., image, video, etc), price, marketing strategy (e.g., paid presence), and the like; Paragraph 0078, The customer response score measures how positively customers respond to the selected product. The customer response may be related to sales rank, sales velocity, ratings, review sentiment, product page conversion rates and the like; Paragraph 0088, Data monitored by the model monitor system may include data involved in model training and during production. The data at model training may comprise, for example, training, test and validation data, predictions, or statistics that characterize the above datasets (e.g., mean, variance and higher order moments of the data sets). Data involved in production time may comprise time, input data, predictions made, and confidence bounds of predictions made. In some embodiments, the ground truth data (e.g., user provided recommendations) may also be monitored. 
The ground truth data may be monitored to evaluate the accuracy of a model and/or trigger retraining of the model. In some cases, users may provide ground truth data (e.g., user provided feedback) to the predictive model creation and management system 1000 after a model is in deployment phase. The model monitor system may monitor changes in data such as changes in ground truth data, or when new training data or prediction data becomes available; Paragraph 0089, As described above, the plurality of predictive models (e.g., model for predicting recommendations, weights, or scores) may be individually monitored or retrained upon detection of the model performance is below a threshold. During prediction time, predictions may be associated with the model in order to track data drift or to incorporate feedback from new ground truth). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the sub-target machine learning model used for optimizing an objective function (e.g., provide marketing strategies that increase sales and/or profit), wherein the sub-target machine learning model is iteratively trained over time by correlating entity data inputs to dynamic sub-target outputs of the invention of Seybert et al. and Lah to further retrain the sub-target machine learning model when the accuracy score is below a threshold of the invention of Figueroa et al. because doing so would allow the plurality of models to monitor changes in data and retrain upon detection of model performance below a threshold (see Figueroa et al., Paragraphs 0088-0089). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. The combination of Seybert et al.
and Lah further discloses receiving a plurality of entity data comprising a plurality of product data associated with an entity (see Seybert et al., see Figure 2 and related text in Paragraph 0025, The real-time market data can include anything from measuring sales performances of retail companies to optimizing in-store execution such as price, promotion and assortment). Although Figueroa et al. further discloses wherein the plurality of entity data may be received automatically using techniques such as image recognition (see Paragraph 0038), the combination of Seybert et al., Lah, and Figueroa et al. does not specifically disclose wherein the image recognition technique is using an optical character reader (OCR). However, Everest discloses to: receive a plurality of entity data comprising a plurality of product data associated with an entity, wherein receiving the plurality of entity data comprises: automatically retrieving one or more entity records using a web crawler, wherein the web crawler is additionally configured to systematically browse the internet by visiting a plurality of URLs, retrieving and indexing market data (Column 11, lines 35-46, In one or more embodiments, resource data 120 may be retrieved from one or more data acquisition systems 124. “Data acquisition system” for the purposes of this disclosure is software or an algorithm that is used to gather data from various sources. For example, data acquisition system 124 may include a web crawler. A “web crawler,” as used herein, is a program that systematically browses the internet for the purpose of web indexing. Web crawler may be seeded with platform URLs, wherein the crawler may then visit the next related URL, retrieve the content, index the content, and/or measures the relevance of the content to the topic of interest; Column 23, lines 24-30, In a non-limiting example, virtual activity data may be received, by processor 108, from virtual environment. 
Data related to user's activity in virtual environment such as, without limitation, online browsing, online shopping, social media posting, and the like may be collected by processor 108 as user profile 136); … converting at least a portion of the one or more entity records into machine encoded text using an optical character reader (OCR), wherein converting the at least a portion of the one or more entity records into the machine-encoded text comprises converting images of text in the at least a portion of the one or more entity records into the machine-encoded text and further comprises: pre-processing the images of text by de-skewing at least one image component associated with the at least a portion of the one or more entity records by applying a transform operation to the at least one image component; and implementing an OCR algorithm comprising a matrix matching process by comparing pixels of the pre-processed images to pixels of a stored glyph on a pixel-by-pixel basis; … and the converted at least a portion of the one or more entity records … (Column 12, lines 37-44, A user may input digital records and/or scanned physical documents that have been converted to digital documents, wherein data set 120 may include data that have been converted into machine readable text. In some embodiments, optical character recognition or optical character reader (OCR) includes automatic conversion of images of written (e.g., typed, handwritten, or printed text) into machine-encoded text; Column 13, lines 4-12, Still referring to FIG. 1, in some cases, OCR processes may employ pre-processing of image components. Pre-processing process may include without limitation de-skew, de-speckle, binarization, line removal, layout analysis or “zoning,” line and word detection, script recognition, character isolation or “segmentation,” and normalization. 
In some cases, a de-skew process may include applying a transform (e.g., homography or affine transform) to the image component to align text; Column 13, lines 36-47, Still referring to FIG. 1, in some embodiments, an OCR process will include an OCR algorithm. Exemplary OCR algorithms include matrix-matching process and/or feature extraction processes. Matrix matching may involve comparing an image to a stored glyph on a pixel-by-pixel basis. In some cases, matrix matching may also be known as “pattern matching,” “pattern recognition,” and/or “image correlation.” Matrix matching may rely on an input glyph being correctly isolated from the rest of the image component. Matrix matching may also rely on a stored glyph being in a similar font and at the same scale as input glyph. Matrix matching may work best with typewritten text; Column 23, lines 24-30, In a non-limiting example, virtual activity data may be received, by processor 108, from virtual environment. Data related to user's activity in virtual environment such as, without limitation, online browsing, online shopping, social media posting, and the like may be collected by processor 108 as user profile 136). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the sub-target machine learning model used for optimizing an objective function (e.g., provide marketing strategies that increase sales and/or profit), wherein a plurality of entity data is received using different techniques such as image recognition of the invention of Seybert et al., Lah, and Figueroa et al. to further specify wherein the image is converted to machine-encoded text using an optical character reader of the invention of Everest because doing so would allow the machine learning model to receive data that is already converted into machine-encoded text (see Everest, Column 12, lines 37-44). 
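For context, the matrix-matching OCR step quoted above (comparing a pre-processed glyph image to a stored glyph on a pixel-by-pixel basis) can be sketched as follows. This is a minimal illustration of the general technique only, not Everest's disclosed implementation; the function names, the binary-bitmap representation, and the toy glyph store are all assumptions introduced for clarity.

```python
# Illustrative sketch of matrix-matching ("pattern matching") OCR:
# a candidate glyph bitmap is compared to each stored glyph template
# pixel by pixel, and the best-scoring template's character is chosen.
# Bitmaps are same-scale lists of rows of 0/1 pixels, per the cited
# assumption that input and stored glyphs share font and scale.

def pixel_match_score(candidate, template):
    """Fraction of pixel positions on which the two bitmaps agree."""
    total = sum(len(row) for row in candidate)
    agree = sum(
        1
        for c_row, t_row in zip(candidate, template)
        for c_px, t_px in zip(c_row, t_row)
        if c_px == t_px
    )
    return agree / total

def classify_glyph(candidate, glyph_store):
    """Return the character whose stored glyph best matches the candidate."""
    return max(glyph_store, key=lambda ch: pixel_match_score(candidate, glyph_store[ch]))

# Hypothetical 3x3 glyph store: a vertical bar "I" and a solid block "B".
glyphs = {
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "B": [[1, 1, 1], [1, 1, 1], [1, 1, 1]],
}
noisy_i = [[0, 1, 0], [1, 1, 0], [0, 1, 0]]  # an "I" with one flipped pixel
print(classify_glyph(noisy_i, glyphs))  # prints "I"
```

As the reference notes, this approach presumes the input glyph has already been isolated and de-skewed (e.g., by the affine transform pre-processing step quoted above); feature-extraction OCR algorithms relax the same-font, same-scale requirement.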
Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claim 11 (Currently Amended), Seybert et al. discloses a method for an identification of dynamic sub-targets, wherein the method comprises (Paragraph 0002, This disclosure relates generally to the technical field of market research, and, more particularly, to methods, systems, articles of manufacture, and apparatus to identify market strategies; As explained in Paragraph 0038 of Applicant’s specification, dynamic sub-targets might involve implementing marketing strategies to boost their sales): providing an apparatus including a processor and an application-specific integrated circuit communicatively connected to the processor, the application-specific integrated circuit instantiating a plurality of neural network nodes, wherein: the application-specific integrated circuit includes a rewritable read-only memory (ROM) storing a plurality of parameters, wherein the plurality of parameters includes the at least a parameter of each node of the plurality of nodes; the application-specific integrated circuit includes circuitry for each node of the plurality of nodes, the circuitry configured to perform a mathematical operation on inputs to the node using at least a parameter retrieved from memory (see Figure 2; Paragraph 0034, In the illustrated example of FIG. 1, the action determiner 114 accesses and analyzes the data stored in the client databases 102, 104, 106, 108 to determine a target market strategy for price, promotion, new products, assortment, and/or in-market execution. 
In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0093, Thus, for example, any of the example data accessor 202, the example data lake 204, the example model trainer 205, the example target principle generator 206, the example pricing determiner 208, the example promotion determiner 210, the example assortment determiner 212, the example new product determiner 214, the example in-store execution determiner 216, the example execution analyzer 218, the example pricing analyzer 220, the example promotion analyzer 222, the example assortment analyzer 224, the example new product analyzer 226, the example execution analyzer 228, the example score generator 230, the example output generator 232 and/or, more generally, the example action determiner 114 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)); Paragraph 0094, The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor and/or processor circuitry, such as the processor 2112 shown in the example processor platform 2100 discussed below in connection with FIG. 21. 
The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 2112, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 2112 and/or embodied in firmware or dedicated hardware; Paragraph 0116, The processor platform 2100 can be a self-learning machine (e.g., a neural network); Examiner notes that a neural network comprises nodes and edges); and receiving, using at least the processor, a plurality of entity data comprising a plurality of product data associated with an entity (Figure 21, item 2100, processor platform; Paragraph 0025, The real-time market data can include anything from measuring sales performances of retail companies to optimizing in-store execution such as price, promotion and assortment; Paragraph 0028, In the illustrated example of FIG. 1, the respective client databases 102, 104, 106, 108 contain product information for associated individual clients (e.g., different retail chains, different brands, etc.). That is, the client databases 102, 104, 106, 108 store point of sale (POS) data. In examples disclosed herein, the client database 102 stores market data such as universal product code (UPC) level data including volumetric sales, price data, promotion data, and/or audit data. The client database 102 can store retail chain data (e.g., data from Target®, Walmart®, etc.) and/or independent retail data. 
For example, the client database 102 can cover grocery data, drug data, military commissary data, liquor data, etc.), …; identifying, using at least a processor, one or more static targets as a function of the plurality of entity data … using a first objective function, wherein identifying the one or more static targets comprises (Figure 21, item 2100, processor platform; Paragraph 0039, The example target principle generator 206 applies the machine learning model to determine target business strategies. For example, the target principle generator 206 determines guidelines (e.g., principles, rules, target metrics, parameter, etc.) for one or more market levers at the market, account, and/or store level): receiving a target area from a user; selecting a target metric as a function of the target area (Paragraph 0034, The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; In this case, the target area is financial performance and the target metric is sales and/or profit of products. See Paragraph 0033 in Applicant’s specification); generating a first objective function as a function of the target metric; and selecting the one or more static targets as a function of optimizing the first objective function (Paragraph 0034, The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. 
That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; Paragraph 0040, For example, the pricing determiner 208 analyzes sub-levers of the price lever (e.g., target price gaps, recommended price strategy, everyday price thresholds, target price positions, target price velocity, target historical price changes, etc.). For example, the example pricing determiner 208 determines a target everyday price for a product to increase (e.g., maximize) profit and volume growth; As stated in Paragraph 0033 of Applicant’s specification, the first objective function is a mathematical formula that defines a measure of performance that needs to be either maximized or minimized for a static target); identifying, using the at least a processor, a first set of dynamic sub-targets as a function of the one or more static targets and the plurality of product data (Figure 21, item 2100, processor platform; Paragraph 0034, In the illustrated example of FIG. 1, the action determiner 114 accesses and analyzes the data stored in the client databases 102, 104, 106, 108 to determine a target market strategy for price, promotion, new products, assortment, and/or in-market execution. In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy. The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. 
That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; Paragraph 0038, The example model trainer 205 trains a machine learning model to identify target strategies for one or more market levers (e.g., price, promotion, new products, assortment, and/or in-market execution; As stated in Paragraph 0038 of Applicant’s specification, the dynamic sub-targets might involve implementing marketing strategies to boost their sales), wherein identifying the first set of dynamic sub-targets comprises: configuring the parameters of the rewritable ROM to instantiate a sub-target machine learning model comprising a neural network (see Figure 2; Paragraph 0034, In the illustrated example of FIG. 1, the action determiner 114 accesses and analyzes the data stored in the client databases 102, 104, 106, 108 to determine a target market strategy for price, promotion, new products, assortment, and/or in-market execution. In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0093, Thus, for example, any of the example data accessor 202, the example data lake 204, the example model trainer 205, the example target principle generator 206, the example pricing determiner 208, the example promotion determiner 210, the example assortment determiner 212, the example new product determiner 214, the example in-store execution determiner 216, the example execution analyzer 218, the example pricing analyzer 220, the example promotion analyzer 222, the example assortment analyzer 224, the example new product analyzer 226, the example execution analyzer 228, the example score generator 230, the example output generator 232 and/or, more generally, the example action determiner 114 could be implemented by one or more analog or digital circuit(s), logic circuits, 
programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)); Paragraph 0094, The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by a computer processor and/or processor circuitry, such as the processor 2112 shown in the example processor platform 2100 discussed below in connection with FIG. 21. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 2112, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 2112 and/or embodied in firmware or dedicated hardware; Paragraph 0116, The processor platform 2100 can be a self-learning machine (e.g., a neural network)); training the sub-target machine learning model as a function of sub-target training data comprising a plurality of examples of entity data inputs and static target inputs correlated to dynamic sub-target outputs (Paragraph 0034, In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0038, In the illustrated example of FIG. 2, the action determiner 114 includes an example model trainer 205. In some examples, the model trainer 205 includes means for model training (sometimes referred to herein as model training means). The example means for model training is hardware. 
The example model trainer 205 trains a machine learning model to identify target strategies for one or more market levers (e.g., price, promotion, new products, assortment, and/or in-market execution)), wherein training further comprises updating the plurality of parameters in the rewritable ROM (see Figure 2; Paragraph 0034, In the illustrated example of FIG. 1, the action determiner 114 accesses and analyzes the data stored in the client databases 102, 104, 106, 108 to determine a target market strategy for price, promotion, new products, assortment, and/or in-market execution. In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0093, Thus, for example, any of the example data accessor 202, the example data lake 204, the example model trainer 205, the example target principle generator 206, the example pricing determiner 208, the example promotion determiner 210, the example assortment determiner 212, the example new product determiner 214, the example in-store execution determiner 216, the example execution analyzer 218, the example pricing analyzer 220, the example promotion analyzer 222, the example assortment analyzer 224, the example new product analyzer 226, the example execution analyzer 228, the example score generator 230, the example output generator 232 and/or, more generally, the example action determiner 114 could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), programmable controller(s), graphics processing unit(s) (GPU(s)), digital signal processor(s) (DSP(s)), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)); Paragraph 0094, The machine readable instructions may be one or more executable programs or portion(s) of an executable 
program for execution by a computer processor and/or processor circuitry, such as the processor 2112 shown in the example processor platform 2100 discussed below in connection with FIG. 21. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a DVD, a Blu-ray disk, or a memory associated with the processor 2112, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor 2112 and/or embodied in firmware or dedicated hardware; Paragraph 0116, The processor platform 2100 can be a self-learning machine (e.g., a neural network)); … identifying, using the at least a processor, at least one target path as a function of the first set of dynamic sub-targets (Paragraph 0039, In some examples, the target principles are conditions deemed “optimal” by the target principle generator 206. In the illustrated example of FIG. 2, the target principle generator 206 includes an example pricing determiner 208, an example promotion determiner 210, an example assortment determiner 212, an example new product determiner 214, and an example execution determiner 216 (sometimes referred-to as an in-store execution determiner 216). For example, the target principle generator 206 determines target principles for the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever; Paragraph 0040, For example, the pricing determiner 208 analyzes sub-levers of the price lever (e.g., target price gaps, recommended price strategy, everyday price thresholds, target price positions, target price velocity, target historical price changes, etc.). 
For example, the example pricing determiner 208 determines a target everyday price for a product to increase (e.g., maximize) profit and volume growth); iteratively determining, using at least a processor, a static target status comprising data associated with progress of the user in achieving a long-term objective as a function of the first set of dynamic sub-targets and the one or more static targets using a status machine learning model comprising (Paragraph 0034, In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy; Paragraph 0057, In the illustrated example of FIG. 2, the action determiner 114 includes an example execution analyzer 218. In some examples, the execution analyzer 218 includes means for comparing data (sometimes referred to herein as data comparing means). The example means for comparing data is hardware. The example execution analyzer 218 analyzes real-time market data (e.g., POS data, etc.) to identify products in which the in-market strategies and executions differ from the target principles determined by the example target principle generator 206. That is, in some examples, the execution analyzer 218 analyzes the real-time market data based on the levers analyzed by the target principle generator 206 (e.g., the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever). 
However, the execution analyzer 218 can analyze the real-time market data based on any suitable lever and/or sub-lever analyzed by the target principle generator 206; Paragraph 0058, The example pricing analyzer 220 compares the real-time market data to the target price principle determined by the pricing determiner 208; Table 11, lower price to be competitive; Examiner interprets “changing the price” as “the first set of dynamic sub-targets”); receiving static training data, wherein the static training data correlates a plurality of the first set of dynamic sub-target data and static target data to a plurality of examples of static target data; training, iteratively, the status machine learning model using the static training data, … (Paragraph 0034, In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy. The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; Paragraph 0038, The example model trainer 205 trains a machine learning model to identify target strategies for one or more market levers (e.g., price, promotion, new products, assortment, and/or in-market execution; Paragraph 0039, The example target principle generator 206 applies the machine learning model to determine target business strategies. For example, the target principle generator 206 determines guidelines (e.g., principles, rules, target metrics, parameter, etc.) for one or more market levers at the market, account, and/or store level. 
In some examples, the target principles are conditions deemed “optimal” by the target principle generator 206; see Figure 3 and related text in Paragraph 0067, In the example identify phase 306, the example execution analyzer 218 (FIG. 2) compares the target principles to in-market execution data to identify levers that are out of compliance. In the example score phase 308, the example score generator 230 (FIG. 2) generates scores for the levers based on whether the levers are out of compliance. For example, the score generator 230 aggregates the lever scores to generate account scores and/or market scores. That is, the example score generator 230 identifies levers with the highest opportunity to optimize market strategies (e.g., levers with relatively low scores); Examiner notes that Seybert is continuously optimizing marketing strategies based on the real-time market data, wherein the real-time data is used to determine the status/compliance of the marketing strategies); and generating the static target status using the trained status machine learning model (Paragraph 0039, The example target principle generator 206 applies the machine learning model to determine target business strategies. For example, the target principle generator 206 determines guidelines (e.g., principles, rules, target metrics, parameter, etc.) for one or more market levers at the market, account, and/or store level. In some examples, the target principles are conditions deemed “optimal” by the target principle generator 206); identifying, using at least a processor, a second set of dynamic sub-targets as a function of the static target status and the plurality of product data (Figure 21, item 2100, processor platform; Paragraph 0057, In the illustrated example of FIG. 2, the action determiner 114 includes an example execution analyzer 218. In some examples, the execution analyzer 218 includes means for comparing data (sometimes referred to herein as data comparing means). 
The example means for comparing data is hardware. The example execution analyzer 218 analyzes real-time market data (e.g., POS data, etc.) to identify products in which the in-market strategies and executions differ from the target principles determined by the example target principle generator 206. That is, in some examples, the execution analyzer 218 analyzes the real-time market data based on the levers analyzed by the target principle generator 206 (e.g., the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever). However, the execution analyzer 218 can analyze the real-time market data based on any suitable lever and/or sub-lever analyzed by the target principle generator 206; Paragraph 0059, The example promotion analyzer 222 compares the real-time market data to the target promotion principle determined by the example promotion determiner 210; Table 12, timing best offer; see Figure 3 and related text in Paragraph 0067, In the example identify phase 306, the example execution analyzer 218 (FIG. 2) compares the target principles to in-market execution data to identify levers that are out of compliance. In the example score phase 308, the example score generator 230 (FIG. 2) generates scores for the levers based on whether the levers are out of compliance. For example, the score generator 230 aggregates the lever scores to generate account scores and/or market scores. That is, the example score generator 230 identifies levers with the highest opportunity to optimize market strategies (e.g., levers with relatively low scores); Examiner notes that Seybert is continuously optimizing marketing strategies based on real-time market data. 
Therefore, the new marketing strategies with the highest opportunity are interpreted as the second set of dynamic sub-targets); and generating, using at least a processor, a target report as a function of the static target status and the second set of dynamic sub-targets (Paragraph 0065, Additionally or alternatively, the output generator 232 generates a report card, intelligent dashboard, etc. including aggregate report cards by market, brand, etc. of the opportunities of each product. Additionally or alternatively, the output generator 232 provides a recommended adjustment of a lever for the user to execute. For example, the output generator 232 generates the report 116 (FIG. 1) to display on the user device 118 (FIG. 1). In some examples, the output generator 232 causes a change in an advertised price of the product, releases an advertisement for broadcast having the updated price, etc. In still other examples, the output generator 232 generates control instructions to cause an advertisement, cause a price change in a retailer computer system, cause a temporary price change in a market of interest, etc.; Paragraph 0067, For example, the output generator 232 generates a report card displaying the levers of the account and/or product that require action (e.g., are out of compliance with the target principles); Paragraph 0078, At block 718, the first market analyst views the report (e.g., the insights banner 710, the driving force report 712, the grade report 714, and/or the insights report 716). For example, the first market analyst can determine market strategies that are working (e.g., levers with a relatively high grade) and market strategies that are not working (e.g., levers with a relatively low grade). At block 720, the first market analyst selects an opportunity. For example, the first market analyst selects a lever with a relatively low grade. 
For example, the first market analyst selects a first opportunity with a score of “C” and does not select a second opportunity with a score of “A”), wherein the target report comprises predictions related to an entity's future cash flow and includes tracking of the entity’s current cash flow (Paragraph 0025, These insights may also provide sales predictions based on the changes in a client's offerings, pricings, and/or marketing; Paragraph 0067, For example, the output generator 232 generates a report card displaying the levers of the account and/or product that require action (e.g., are out of compliance with the target principles); Paragraph 0078, At block 718, the first market analyst views the report (e.g., the insights banner 710, the driving force report 712, the grade report 714, and/or the insights report 716). For example, the first market analyst can determine market strategies that are working (e.g., levers with a relatively high grade) and market strategies that are not working (e.g., levers with a relatively low grade). At block 720, the first market analyst selects an opportunity. For example, the first market analyst selects a lever with a relatively low grade. For example, the first market analyst selects a first opportunity with a score of “C” and does not select a second opportunity with a score of “A”; Paragraph 0091, FIG. 13 illustrates example net profit data 1300 used by the example system of FIG. 1 to identify an action for a marketing strategy. In some examples, the pricing determiner 208 (FIG. 2) generates the net profit data 1300. For example, the pricing determiner 208 determines the target principle for the optimal price gaps sub-lever based on the net profit data 1300. 
For example, the pricing determiner 208 determines the net profit data 1300 for a pair of items (e.g., a pair of internal competitor items, a pair of external competitor items) using Monte Carlo simulations based on the 5th to 95th percentile price gap between the pair of items. The pricing determiner 208 determines the optimal price gap by identifying an example profit maximizing point 1302; Examiner notes that Seybert et al. is recommending strategies that maximize the cash flow (e.g., profit/sales)); recommending, using the at least a processor, cash flow optimization strategies to the user (Paragraph 0091, FIG. 13 illustrates example net profit data 1300 used by the example system of FIG. 1 to identify an action for a marketing strategy. In some examples, the pricing determiner 208 (FIG. 2) generates the net profit data 1300. For example, the pricing determiner 208 determines the target principle for the optimal price gaps sub-lever based on the net profit data 1300. For example, the pricing determiner 208 determines the net profit data 1300 for a pair of items (e.g., a pair of internal competitor items, a pair of external competitor items) using Monte Carlo simulations based on the 5th to 95th percentile price gap between the pair of items. The pricing determiner 208 determines the optimal price gap by identifying an example profit maximizing point 1302); and communicating a displayable image to a display device to provide a graphical representation to the user (Paragraph 0121, One or more output devices 2124 are also connected to the interface circuit 2120 of the illustrated example. The output devices 2124 can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display (LCD), a cathode ray tube display (CRT), an in-place switching (IPS) display, a touchscreen, etc.), a tactile output device, a printer and/or speaker. 
The interface circuit 2120 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor; As stated in Applicant’s specification, Paragraph 0113, examples of a display device that can communicate a displayable image include a liquid crystal display (LCD), a cathode ray tube (CRT), a plasma display, a light emitting diode (LED) display, and any combinations thereof), wherein the processor is configured to iteratively determine the static target status as a function of the first set of dynamic sub-targets and the one or more static targets, wherein the static target status is data associated with the reflection of the overall progress and performance of an entity in achieving its long-term objectives, wherein the processor is configured to continuously update the static target status, wherein the static target status is updated in real time (Paragraph 0034, In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy. The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; Paragraph 0038, The example model trainer 205 trains a machine learning model to identify target strategies for one or more market levers (e.g., price, promotion, new products, assortment, and/or in-market execution; Paragraph 0039, The example target principle generator 206 applies the machine learning model to determine target business strategies. For example, the target principle generator 206 determines guidelines (e.g., principles, rules, target metrics, parameter, etc.) 
for one or more market levers at the market, account, and/or store level. In some examples, the target principles are conditions deemed “optimal” by the target principle generator 206; see Figure 3 and related text in Paragraph 0067, In the example identify phase 306, the example execution analyzer 218 (FIG. 2) compares the target principles to in-market execution data to identify levers that are out of compliance. In the example score phase 308, the example score generator 230 (FIG. 2) generates scores for the levers based on whether the levers are out of compliance. For example, the score generator 230 aggregates the lever scores to generate account scores and/or market scores. That is, the example score generator 230 identifies levers with the highest opportunity to optimize market strategies (e.g., levers with relatively low scores); Paragraph 0094, Processor; Examiner notes that Seybert is continuously optimizing marketing strategies based on the real-time market data, wherein the real-time data is used to determine the status/compliance of the marketing strategies). Although Seybert et al. discloses training a sub-target machine learning model as a function of sub-target training data comprising a plurality of examples of entity data inputs and static target inputs correlated to dynamic sub-target outputs receiving user inputs (Paragraph 0038, training a machine learning model to identify target strategies that increase sales and/or profit of products; As stated in Paragraph 0038 of Applicant’s specification, the dynamic sub-targets might involve implementing marketing strategies to boost their sales), Seybert et al. does not specifically disclose retraining the sub-target machine learning model by providing feedback indicating a quality of previous dynamic sub-target outputs.
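As an aside on the pricing mechanics quoted above, the simulate-then-maximize procedure Seybert describes in Paragraph 0091 (Monte Carlo simulation over the 5th-to-95th-percentile price gap, followed by selection of a profit-maximizing point) can be sketched as follows. The linear demand model, every parameter value, and both function names are illustrative assumptions introduced here; nothing below is disclosed by the reference itself.

```python
import random

def simulate_profit(price_gap, demand_elasticity=-0.4, base_demand=1000.0,
                    unit_margin=1.0, n_trials=4000):
    """Estimate expected net profit at a candidate price gap via Monte Carlo.

    The demand model and all parameters are hypothetical stand-ins for
    Seybert's net profit data 1300; they only illustrate the
    simulate-then-maximize pattern described in Paragraph 0091.
    """
    total = 0.0
    for _ in range(n_trials):
        noise = random.gauss(0.0, 0.1)  # per-trial demand uncertainty
        demand = base_demand * max(1.0 + demand_elasticity * price_gap + noise, 0.0)
        total += demand * (unit_margin + price_gap)  # profit for this trial
    return total / n_trials

def optimal_price_gap(low_pct_gap, high_pct_gap, steps=50):
    """Sweep candidate gaps between the 5th- and 95th-percentile bounds and
    return the one with the highest simulated expected profit -- the
    analogue of the profit maximizing point 1302 in the quoted passage."""
    span = high_pct_gap - low_pct_gap
    gaps = [low_pct_gap + span * i / (steps - 1) for i in range(steps)]
    return max(gaps, key=simulate_profit)
```

Under these assumed parameters, expected profit is a concave function of the gap, so the sweep lands at an interior optimum between the two percentile bounds rather than at either endpoint.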
However, Lah discloses a method for an identification of dynamic sub-targets, wherein the method comprises: providing an apparatus including a processor… a rewritable read-only access memory; and receiving, using the at least a processor, a plurality of entity data comprising a plurality of product data associated with an entity, wherein the processor integrates a feedback loop mechanism to allow the user to provide input on analysis, interpretation, and recommendations (Column 17, lines 18-34, The system may analyze the corpus of data and run continuous correlation analysis to determine the direction of value changes of one metric relative to a matrix of other metrics; In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning. As the system acquires additional data, it will be able to use that data to make comparative analyses on other businesses who have implemented similar strategies and determine what has the highest probability of working best for different types of companies across different types of industries. For example, the strategy implementation and corresponding result may be fed back into the models that correlate actions, results, external data, and/or other information used to learn from and develop strategies described herein. In some instances, such monitoring may also include re-scoring some or all channels, and the overall score. Such re-scores may trigger additional or alternative suggestions; Column 17, lines 36-43, In an implementation, the AI portal 134 may provide one or more interfaces, such as graphical user interfaces, to obtain goals, datasets, business processes, and/or other training parameters for training a model or otherwise creating a company instance for modeling. 
As such, the company may specify its goals and provide information used by the system to learn how to achieve those goals based on machine learned models trained and refined by the AI engine 136; Column 27, lines 40-44, In an operation 342, process 300 may include identifying strategy suggestions to implement. Such identification may be based on input received from the company via the dashboard interface and/or automated selection without such input; Column 27, lines 45-49, In operations 344 and 346, process 300 may include implementing and monitoring the strategy implementation. The monitored strategy implementation may be used to feedback into the model so that the model may learn from this data; Column 27, lines 50-55, FIGS. 4-6 depict various user interfaces, including a dashboard interface for configuring channels for data connectors, displaying and receiving acceptances of suggested actions, and displaying customer-specific information of a company, according to various implementations of the invention; Column 29, lines 22-67, As another example, processor(s) 112 may be programmed by one or more additional instructions that may perform some or all of the functionality attributed herein to one of the instructions. The various instructions described herein may be stored in a storage device 114, which may comprise random access memory (RAM), read only memory (ROM), and/or other memory. 
The storage device may store the computer program instructions (e.g., the aforementioned instructions) to be executed by processor 112 as well as data that may be manipulated by processor 112; Examiner notes that a dashboard interface is used to provide inputs from a user (e.g., company) related to the analysis, interpretation, and recommendations (e.g., feedback of which suggestions should be implemented)); … wherein identifying the first set of dynamic sub-targets comprises: configuring the parameters of the rewritable ROM to instantiate a sub-target machine learning model comprising a …; training the sub-target machine learning model as a function of sub-target training data comprising a plurality of examples of entity data inputs and static target inputs correlated to dynamic sub-target outputs, wherein training further comprises updating the plurality of parameters in the rewritable ROM; receiving user inputs comprising user feedback indicating a quality of previous dynamic sub-target outputs generated based on previous entity data inputs and static target inputs using a user interface (Column 7, lines 1-4, A company may therefore use the system to achieve certain goals, which may be predefined by the company. The company may wish to achieve one or more goals such as, without limitation, maximize sales or profits; Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning. As the system acquires additional data, it will be able to use that data to make comparative analyses on other businesses who have implemented similar strategies and determine what has the highest probability of working best for different types of companies across different types of industries. 
For example, the strategy implementation and corresponding result may be fed back into the models that correlate actions, results, external data, and/or other information used to learn from and develop strategies described herein. In some instances, such monitoring may also include re-scoring some or all channels, and the overall score. Such re-scores may trigger additional or alternative suggestions; Column 17, lines 36-43, In an implementation, the AI portal 134 may provide one or more interfaces, such as graphical user interfaces, to obtain goals, datasets, business processes, and/or other training parameters for training a model or otherwise creating a company instance for modeling. As such, the company may specify its goals and provide information used by the system to learn how to achieve those goals based on machine learned models trained and refined by the AI engine 136; Column 27, lines 56-67, Figure 7 illustrates configuration of rules for action automation; Column 29, lines 22-67, As another example, processor(s) 112 may be programmed by one or more additional instructions that may perform some or all of the functionality attributed herein to one of the instructions. The various instructions described herein may be stored in a storage device 114, which may comprise random access memory (RAM), read only memory (ROM), and/or other memory. 
The storage device may store the computer program instructions (e.g., the aforementioned instructions) to be executed by processor 112 as well as data that may be manipulated by processor 112); updating the sub-target training data as a function of the user feedback, wherein updating the sub-target training data as a function of user feedback comprises: identifying the quality of the dynamic sub-target output as a function of the user feedback, wherein identifying the quality of the dynamic sub-target output comprises generating an accuracy … of the dynamic sub-target output (Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning. As the system acquires additional data, it will be able to use that data to make comparative analyses on other businesses who have implemented similar strategies and determine what has the highest probability of working best for different types of companies across different types of industries. For example, the strategy implementation and corresponding result may be fed back into the models that correlate actions, results, external data, and/or other information used to learn from and develop strategies described herein. In some instances, such monitoring may also include re-scoring some or all channels, and the overall score. 
Such re-scores may trigger additional or alternative suggestions; Column 24, lines 19-30, Relying on correlative data may be reasonably acceptable if that is all the data available, but true cause and effect is a much stronger base for accurate forecasting; Column 25, lines 7-20, In some instances, the AI engine 136 either may not have enough data or not have strong enough interpretation of the data to make accurate action predictions); removing a low quality dynamic sub-target output from the training data, …; replacing the low quality dynamic sub-target output with a new dynamic sub-target output (Column 5, lines 37-60, In other words, the strategy and suggestion engine may correlate various observed weather conditions with observed business performance (e.g., sales) and observed actions (e.g., marketing activities). Using the modeling, strategy and suggestion engine may suggest that the company suspend certain actions such as marketing activities because their effectiveness during inclement weather is reduced (e.g. deviating from a threshold amount) and external data indicating that inclement weather is expected. In some instances, strategy and suggestion engine may make time-bound suggestions, such as to suspend marketing activities for four days until the inclement weather is expected to pass. Because strategy and suggestion engine may operate in real-time, these and other suggestions may be updated based on updated internalized data and/or updated external data; Examiner notes that Lah adds or removes certain actions based on the effectiveness of the marketing activities/strategies. 
Examiner interprets the effectiveness of the marketing activities/strategies as the quality of the dynamic sub-targets); retraining the sub-target machine learning model using the static target inputs correlated to the new dynamic sub-target output; and retraining the sub-target machine learning model as a function of modified correlations of examples of entity data inputs and static target inputs and dynamic sub-target outputs by updating the parameters in the rewritable ROM (Column 17, lines 18-34, The system may analyze the corpus of data and run continuous correlation analysis to determine the direction of value changes of one metric relative to a matrix of other metrics; In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning. As the system acquires additional data, it will be able to use that data to make comparative analyses on other businesses who have implemented similar strategies and determine what has the highest probability of working best for different types of companies across different types of industries. For example, the strategy implementation and corresponding result may be fed back into the models that correlate actions, results, external data, and/or other information used to learn from and develop strategies described herein. In some instances, such monitoring may also include re-scoring some or all channels, and the overall score. Such re-scores may trigger additional or alternative suggestions; Column 29, lines 22-67, As another example, processor(s) 112 may be programmed by one or more additional instructions that may perform some or all of the functionality attributed herein to one of the instructions.
The various instructions described herein may be stored in a storage device 114, which may comprise random access memory (RAM), read only memory (ROM), and/or other memory. The storage device may store the computer program instructions (e.g., the aforementioned instructions) to be executed by processor 112 as well as data that may be manipulated by processor 112). identifying, using the at least a processor, at least one target path as a function of the first set of dynamic sub- targets (Column 10, lines 42-51, In an implementation, the strategy suggestion generator 128 may use output from the decision engine 126 to present comprehensive marketing strategy improvements across some or all channels 101 related to the company. In some instances, the strategy suggestion generator 128 may generate suggestions for a given channel 101 once it is decided that performance with respect to that channel should be increased. Alternatively or additionally, the strategy suggestion generator 128 may suggest an action that would achieve one or more goals specified in the user-defined rule; see Figure 5, Post content on both twitter and Facebook encouraging people to purchase your Weekend Island Tour, which will increase your sales by 20%); iteratively determining, using the at least a processor, a static target status comprising data associated with progress of the user in achieving a long-term objective as a function of the first set of dynamic sub-targets and the one or more static targets using a status machine learning model comprising (Column 6, lines 48-52, As used herein, the term “effectiveness” refers to an ability to achieve certain goals. 
Thus, to increase effectiveness of marketing strategies means to be able to either achieve or exceed certain goals of those marketing strategies; Column 6, lines 56-67, Depending on various factors such as the domain of a company (e.g., the industry to which a given company belongs), date, season, weather, target demographics, content of data transmissions, and/or other factors, some networked electronic channels and/or content may be more effective at achieving certain goals (e.g., marketing or sales goals) than others. Because of the variety of different networked electronic channels and the various factors that influence whether or not use of a given networked electronic channel will facilitate achievement of those goals, the system may employ computerized artificial intelligence to parse, learn from and adapt to these and other variables; Column 17, lines 64-67 & Column 18, lines 1-19, The AI engine 136 may train the models by creating goals for the customer instance, assigning potential actions to achieve the goals, determining event sensitivity, measuring effectiveness (e.g., success) of the actions, monitoring success metrics, altering future system behavior, and/or using other processes. Creating goals may include setting a goal priority for the goals and setting rules that alter goal prioritization. Assigning potential actions may include creating actions based by defining action properties (which may include setting ratios and correlations between property values and creating forecasting models for action properties), defining action sequence, defining action start time content, defining action duration, determining cost of actions (which may include determining one-times cost vs. ongoing costs and diminishing costs over time), and determining action targets. Determining event sensitivity may include identifying external factors that affect the outcome of goals and modifying action selection based on event sensitivity. 
Measuring success may include defining goal success and defining success measurement mechanisms. Monitoring success metrics and altering future system behavior may be based on correlated data metrics and cause and effect analysis; Column 18, lines 60-64, In some instances, goals may not be static, but may change over time depending on certain conditions. The AI engine 136 may account for the realignment of goals based on certain calculable parameters; Examiner notes that the “static target status” may be changed over time based on external factors and effectiveness of the marketing strategies): receiving static training data, wherein the static training data correlates a plurality of the first set of dynamic sub-target data and static target data to a plurality of examples of static target data; training, iteratively, the status machine learning model using the static training data, wherein training the status machine learning model includes retraining the status machine learning model with feedback from previous iterations of the status machine learning model (Column 3, lines 31-34, The system may analyze the corpus of data and run continuous correlation analysis to determine the direction of value changes of one metric relative to a matrix of other metrics; Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning. As the system acquires additional data, it will be able to use that data to make comparative analyses on other businesses who have implemented similar strategies and determine what has the highest probability of working best for different types of companies across different types of industries. 
For example, the strategy implementation and corresponding result may be fed back into the models that correlate actions, results, external data, and/or other information used to learn from and develop strategies described herein. In some instances, such monitoring may also include re-scoring some or all channels, and the overall score. Such re-scores may trigger additional or alternative suggestions; Column 18, lines 16-19, Monitoring success metrics and altering future system behavior may be based on correlated data metrics and cause and effect analysis; Figure 5, Our projections indicate that you will increase your sales of this product by 20% this month by following this strategy); and generating the static target status using the trained status machine learning model (Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning; Column 18, lines 60-64, In some instances, goals may not be static, but may change over time depending on certain conditions. The AI engine 136 may account for the realignment of goals based on certain calculable parameters); identifying, using the at least a processor, a second set of dynamic sub-targets as a function of the static target status and the plurality of product data (Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning; Column 18, lines 60-64, In some instances, goals may not be static, but may change over time depending on certain conditions. 
The AI engine 136 may account for the realignment of goals based on certain calculable parameters; Examiner interprets “better strategies developed by the machine learning” as the “second set of dynamic sub-targets”); generating, using the at least a processor, a target [output] as a function of the static target status and the second set of dynamic sub-targets, wherein the target report comprises predictions related to an entity's future cash flow and includes tracking of the entity's current cash flow; recommending, using the at least a processor, cash flow optimization strategies to the user; and communicating a displayable [dashboard] to a display device to provide a graphical representation to the user, wherein the processor is configured to iteratively determine the static target status as a function of the first set of dynamic sub-targets and the one or more static targets, wherein the static target status is data associated with a reflection of the overall progress and performance of an entity in achieving its long-term objectives, wherein the processor is configured to continuously update the static target status, wherein the static target status is updated in real time (Figure 5, Our projections indicate that you will increase your sales of this product by 20% this month by following this strategy; Column 5, lines 4-12, Various actions may be suggested based on real-time data from the various channels. As used herein, the term “real-time” may refer to receipt data by the strategy and suggestion engine from the connectors that triggers processing to generate a suggestion to perform an action. Put another way, in some instances, strategy and suggestion engine may attempt to determine suggestions upon receipt of data relating to the company as monitored and aggregated from one or more of the channels; Column 6, lines 26-29, FIG. 
5 depicts a screenshot of a dashboard interface for displaying and receiving acceptances of suggested actions, according to an implementation of the invention; Column 10, lines 42-51, In an implementation, the strategy suggestion generator 128 may use output from the decision engine 126 to present comprehensive marketing strategy improvements across some or all channels 101 related to the company. In some instances, the strategy suggestion generator 128 may generate suggestions for a given channel 101 once it is decided that performance with respect to that channel should be increased. Alternatively or additionally, the strategy suggestion generator 128 may suggest an action that would achieve one or more goals specified in the user-defined rules; Column 17, lines 18-34, In an implementation, strategy implementation and monitoring engine 132 may monitor the results of the strategy implementation in order to learn from the results and develop better strategies. The results may facilitate continuous feedback loop and machine learning; Column 25, lines 40-48, Client devices 170 may be configured as a personal computer (e.g., a desktop computer, a laptop computer, etc.), a smartphone, a tablet computing device, and/or other device that can be programmed to interface with computer system 110 (e.g., using the dashboard). Although not illustrated in FIG. 1, end user devices 140 may include one or more physical processors programmed by computer program instructions; Examiner interprets “projection of sales” as the “entity’s future cash flow”). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the sub-target machine learning model used for optimizing an objective function (e.g., provide marketing strategies that increase sales and/or profit) of the invention of Seybert et al.
to further specify how the sub-target machine learning model is iteratively trained over time by correlating entity data inputs to dynamic sub-target outputs (e.g., how the marketing strategies are correlated to sales/revenue of the entity) of the invention of Lah because doing so would allow the machine learning to monitor the results of the strategy implementation in order to learn from the results and develop better strategies (see Lah, Column 17, lines 18-34) and change goals over time depending on certain conditions (Column 18, lines 60-64). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. The combination of Seybert et al. and Lah discloses training and retraining a sub-target machine learning model as a function of sub-target training data comprising a plurality of examples of entity data inputs and static target inputs correlated to dynamic sub-target outputs receiving user inputs (see Seybert et al., Paragraph 0038, training a machine learning model to identify target strategies that increase sales and/or profit of products; see Lah, Column 17, lines 18-34, monitor the results of the strategy implementation in order to learn from the results and develop better strategies; As stated in Paragraph 0038 of Applicant’s specification, the dynamic sub-targets might involve implementing marketing strategies to boost their sales). Although the combination of Seybert et al.
and Lah further discloses updating marketing strategies based on the quality of the dynamic sub-target output as a function of the user feedback (see Lah, Column 6, lines 40-52, effectiveness score), and accuracy of the model (see Lah, Column 25, lines 7-20, In some instances, the AI engine 136 either may not have enough data or not have strong enough interpretation of the data to make accurate action predictions), the combination of Seybert et al. and Lah does not specifically disclose automatically retraining the machine learning model when the accuracy score is below a threshold. Also, although the combination of Seybert et al. and Lah further discloses receiving a plurality of entity data comprising a plurality of product data associated with an entity (see Seybert et al., Figure 2 and related text in Paragraph 0025, The real-time market data can include anything from measuring sales performances of retail companies to optimizing in-store execution such as price, promotion and assortment), the combination of Seybert et al. and Lah does not specifically disclose wherein the product data is retrieved by using a web crawler. However, Figueroa et al. discloses receiving, using the at least a processor, a plurality of entity data comprising a plurality of product data associated with an entity, wherein the processor integrates a feedback loop mechanism …, wherein receiving the plurality of entity data comprises: automatically retrieving one or more entity records using a web crawler, wherein the web crawler is additionally configured to systematically browse the internet by visiting a plurality of URLs, retrieving and indexing market data, and measuring a relevance of the market data to the entity; and generating demand data based on the market data (Paragraph 0038, In some embodiments, the provided digital shelf analytic system may provide a predictive model for recommendations to improve sales performance (e.g. performance score).
In some cases, the weighting coefficients may be generated by the predictive model. In some cases, the score may be the output of the predictive model, as a proxy for potential sales performance. A predictive model may be a trained model or machine learning algorithm trained model. The predictive model may be improved or optimized continuously using a combination of public and private data collected. In some cases, the input data to the predictive model may comprise data related to the various factors affecting the score as described above. In some cases, the input data may include user provided data related to a given product or brand (e.g., keyword/search terms for the product, product description, categories, images, etc). The input data may include raw data such as marketplace data that may be obtained automatically using techniques such as image recognition, parsing HTML, URL, watermark decoded from product image, image fingerprints, text fingerprints, cookie data, and the like. For example, Amazon pages for the most popular products or products in the same category may be crawled to independently compile data that cross-references Amazon ASINs to GTINs, manufacturers' model numbers, and other identifying data.
In some cases, the input data may be retrieved from an external data source such as public and commercial brand database; Examiner interprets the “most popular products” as the “relevance of the market data.” Also, sales performance is equivalent to demand data since it’s measuring sales/purchases/transactions for a specific product); … updating the sub-target training data as a function of the user feedback, wherein updating the sub-target training data as a function of user feedback comprises: identifying the quality of the dynamic sub-target output as a function of the user feedback, wherein identifying the quality of the dynamic sub-target output comprises generating an accuracy score of the dynamic sub-target output (Paragraph 0040, In some cases, the output of the predictive model may comprise recommendation information. The recommendation information may comprise information about improving sales performance. For example, the recommendation may comprise a recommended keyword/search terms of the product, description about the product, presentation of the product (e.g., image, video, etc), price, marketing strategy (e.g., paid presence), and the like. In some cases, the recommendations may be quantified so that one or more actions as recommended are executable to the user; Paragraph 0088, Data monitored by the model monitor system may include data involved in model training and during production. The data at model training may comprise, for example, training, test and validation data, predictions, or statistics that characterize the above datasets (e.g., mean, variance and higher order moments of the data sets). Data involved in production time may comprise time, input data, predictions made, and confidence bounds of predictions made. In some embodiments, the ground truth data (e.g., user provided recommendations) may also be monitored. The ground truth data may be monitored to evaluate the accuracy of a model and/or trigger retraining of the model. 
In some cases, users may provide ground truth data (e.g., user provided feedback) to the predictive model creation and management system 1000 after a model is in deployment phase. The model monitor system may monitor changes in data such as changes in ground truth data, or when new training data or prediction data becomes available; Paragraph 0089, As described above, the plurality of predictive models (e.g., model for predicting recommendations, weights, or scores) may be individually monitored or retrained upon detection of the model performance is below a threshold. During prediction time, predictions may be associated with the model in order to track data drift or to incorporate feedback from new ground truth data). removing a low quality dynamic sub-target output from the training data, wherein the low quality dynamic sub-target output indicates a low accuracy score; replacing the low quality dynamic sub-target output with a new dynamic sub-target output (Paragraph 0069, In some cases, the predictive model may be continually trained and improved using proprietary data or relevant data (e.g., user provided data, new data collected from ecommerce channels) so that the output can be better adapted to the specific product, ecommerce channel or a brand. In some cases, a predictive model may be pre-trained and implemented on the existing ecommerce system, and the pre-trained model may undergo continual re-training that involves continual tuning of the predictive model or a component of the predictive model (e.g., classifier) to adapt to changes in the implementation environment over time (e.g., changes in the marketplace data, model performance, user-specific data, etc). In some cases, the training data may be created based on user input including but not limited to, sales information (e.g., cost of product) and related recommendation; Paragraph 0088, Data monitored by the model monitor system may include data involved in model training and during production. 
The data at model training may comprise, for example, training, test and validation data, predictions, or statistics that characterize the above datasets (e.g., mean, variance and higher order moments of the data sets). Data involved in production time may comprise time, input data, predictions made, and confidence bounds of predictions made. In some embodiments, the ground truth data (e.g., user provided recommendations) may also be monitored. The ground truth data may be monitored to evaluate the accuracy of a model and/or trigger retraining of the model. In some cases, users may provide ground truth data (e.g., user provided feedback) to the predictive model creation and management system 1000 after a model is in deployment phase. The model monitor system may monitor changes in data such as changes in ground truth data, or when new training data or prediction data becomes available; Paragraph 0089, As described above, the plurality of predictive models (e.g., model for predicting recommendations, weights, or scores) may be individually monitored or retrained upon detection of the model performance is below a threshold. During prediction time, predictions may be associated with the model in order to track data drift or to incorporate feedback from new ground truth; Examiner notes that Figueroa is “updating the recommendations (e.g., marketing strategies/activities)” based on accuracy of predictions. Examiner interprets “updating recommendations” as “removing and replacing the dynamic sub-target” since it’s only using the marketing parameters that optimize a marketing campaign) retraining the sub-target machine learning model using the static target inputs correlated to the new dynamic sub-target output (Paragraph 0040, In some cases, the output of the predictive model may comprise recommendation information. The recommendation information may comprise information about improving sales performance. 
For example, the recommendation may comprise a recommended keyword/search terms of the product, description about the product, presentation of the product (e.g., image, video, etc), price, marketing strategy (e.g., paid presence), and the like; Paragraph 0078, The customer response score measures how positively customers respond to the selected product. The customer response may be related to sales rank, sales velocity, ratings, review sentiment, product page conversion rates and the like; Paragraph 0088, Data monitored by the model monitor system may include data involved in model training and during production. The data at model training may comprise, for example, training, test and validation data, predictions, or statistics that characterize the above datasets (e.g., mean, variance and higher order moments of the data sets). Data involved in production time may comprise time, input data, predictions made, and confidence bounds of predictions made. In some embodiments, the ground truth data (e.g., user provided recommendations) may also be monitored. The ground truth data may be monitored to evaluate the accuracy of a model and/or trigger retraining of the model. In some cases, users may provide ground truth data (e.g., user provided feedback) to the predictive model creation and management system 1000 after a model is in deployment phase. The model monitor system may monitor changes in data such as changes in ground truth data, or when new training data or prediction data becomes available; Paragraph 0089, As described above, the plurality of predictive models (e.g., model for predicting recommendations, weights, or scores) may be individually monitored or retrained upon detection of the model performance is below a threshold. During prediction time, predictions may be associated with the model in order to track data drift or to incorporate feedback from new ground truth). 
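The monitor-and-retrain behavior mapped above (removing a low-accuracy output from the training data, replacing it, and retraining when performance falls below a threshold) can be illustrated with a short sketch. The data structure, field names, and threshold value are hypothetical, not drawn from Figueroa.

```python
# Illustrative sketch of the update step described above: training examples
# whose accuracy score falls below a threshold are removed and replaced.
# All names and values here are hypothetical examples.
THRESHOLD = 0.7

def update_training_data(training_data, replacements):
    """Drop low-accuracy outputs and substitute new ones before retraining."""
    kept = [ex for ex in training_data if ex["accuracy"] >= THRESHOLD]
    removed = len(training_data) - len(kept)
    kept.extend(replacements[:removed])  # one replacement per removed example
    return kept

data = [{"target": "lower_price", "accuracy": 0.9},
        {"target": "new_keywords", "accuracy": 0.4}]
new = [{"target": "bundle_promotion", "accuracy": 1.0}]
updated = update_training_data(data, new)
# The 0.4-accuracy example is removed and replaced by the new example.
```

A retraining call on `updated` would then correspond to the "retraining the sub-target machine learning model" limitation mapped above.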
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the sub-target machine learning model used for optimizing an objective function (e.g., provide marketing strategies that increase sales and/or profit), wherein the sub-target machine learning model is iteratively trained over time by correlating entity data inputs to dynamic sub-target outputs of the invention of Seybert et al. and Lah to further retrain the sub-target machine learning model when the accuracy score is below a threshold of the invention of Figueroa et al. because doing so would allow the plurality of models to monitor changes in data and retrain upon detection of model performance below a threshold (see Figueroa et al., Paragraphs 0088-0089). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. The combination of Seybert et al. and Lah further discloses receiving a plurality of entity data comprising a plurality of product data associated with an entity (see Seybert et al., Figure 2 and related text in Paragraph 0025, The real-time market data can include anything from measuring sales performances of retail companies to optimizing in-store execution such as price, promotion and assortment). Although Figueroa et al. further discloses wherein the plurality of entity data may be received automatically using techniques such as image recognition (see Paragraph 0038), the combination of Seybert et al., Lah, and Figueroa et al. does not specifically disclose wherein the image recognition technique uses an optical character reader (OCR). 
However, Everest discloses receiving, using the at least a processor, a plurality of entity data comprising a plurality of product data associated with an entity, …, wherein receiving the plurality of entity data comprises: automatically retrieving one or more entity records using a web crawler, wherein the web crawler is additionally configured to systematically browse the internet by visiting a plurality of URLs, retrieving and indexing market data, … (Column 11, lines 35-46, In one or more embodiments, resource data 120 may be retrieved from one or more data acquisition systems 124. “Data acquisition system” for the purposes of this disclosure is software or an algorithm that is used to gather data from various sources. For example, data acquisition system 124 may include a web crawler. A “web crawler,” as used herein, is a program that systematically browses the internet for the purpose of web indexing. Web crawler may be seeded with platform URLs, wherein the crawler may then visit the next related URL, retrieve the content, index the content, and/or measures the relevance of the content to the topic of interest; Column 23, lines 24-30, In a non-limiting example, virtual activity data may be received, by processor 108, from virtual environment. 
Data related to user's activity in virtual environment such as, without limitation, online browsing, online shopping, social media posting, and the like may be collected by processor 108 as user profile 136); … converting at least a portion of the one or more entity records into machine encoded text using an optical character reader (OCR), wherein converting the at least a portion of the one or more entity records into the machine-encoded text comprises converting images of text in the at least a portion of the one or more entity records into the machine-encoded text and further comprises: pre-processing the images of text by de-skewing at least one image component associated with the at least a portion of the one or more entity records by applying a transform operation to the at least one image component; and implementing an OCR algorithm comprising a matrix matching process by comparing pixels of the pre-processed images to pixels of a stored glyph on a pixel-by-pixel basis; … and the converted at least a portion of the one or more entity records … (Column 12, lines 37-44, A user may input digital records and/or scanned physical documents that have been converted to digital documents, wherein data set 120 may include data that have been converted into machine readable text. In some embodiments, optical character recognition or optical character reader (OCR) includes automatic conversion of images of written (e.g., typed, handwritten, or printed text) into machine-encoded text; Column 13, lines 4-12, Still referring to FIG. 1, in some cases, OCR processes may employ pre-processing of image components. Pre-processing process may include without limitation de-skew, de-speckle, binarization, line removal, layout analysis or “zoning,” line and word detection, script recognition, character isolation or “segmentation,” and normalization. 
In some cases, a de-skew process may include applying a transform (e.g., homography or affine transform) to the image component to align text; Column 13, lines 36-47, Still referring to FIG. 1, in some embodiments, an OCR process will include an OCR algorithm. Exemplary OCR algorithms include matrix-matching process and/or feature extraction processes. Matrix matching may involve comparing an image to a stored glyph on a pixel-by-pixel basis. In some cases, matrix matching may also be known as “pattern matching,” “pattern recognition,” and/or “image correlation.” Matrix matching may rely on an input glyph being correctly isolated from the rest of the image component. Matrix matching may also rely on a stored glyph being in a similar font and at the same scale as input glyph. Matrix matching may work best with typewritten text; Column 23, lines 24-30, In a non-limiting example, virtual activity data may be received, by processor 108, from virtual environment. Data related to user's activity in virtual environment such as, without limitation, online browsing, online shopping, social media posting, and the like may be collected by processor 108 as user profile 136). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the sub-target machine learning model used for optimizing an objective function (e.g., provide marketing strategies that increase sales and/or profit), wherein a plurality of entity data is received using different techniques such as image recognition of the invention of Seybert et al., Lah, and Figueroa et al. to further specify wherein the image is converted to machine-encoded text using an optical character reader of the invention of Everest because doing so would allow the machine learning model to receive data that is already converted into machine-encoded text (see Everest, Column 12, lines 37-44). 
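For readers unfamiliar with the technique, the matrix-matching OCR process quoted from Everest (comparing a pre-processed input glyph to stored glyphs on a pixel-by-pixel basis) can be sketched as follows. The toy 3×3 glyph bitmaps and function names are hypothetical illustrations, not taken from the reference.

```python
# Illustrative sketch of matrix-matching OCR: the input glyph image is
# compared to each stored glyph pixel by pixel, and the best match wins.
# Glyph bitmaps below are toy examples, not from the Everest reference.
GLYPHS = {
    "I": [[0, 1, 0],
          [0, 1, 0],
          [0, 1, 0]],
    "L": [[1, 0, 0],
          [1, 0, 0],
          [1, 1, 1]],
}

def match_glyph(pixels):
    """Return the stored glyph character with the most matching pixels."""
    def matches(template):
        return sum(p == t for row_p, row_t in zip(pixels, template)
                   for p, t in zip(row_p, row_t))
    return max(GLYPHS, key=lambda ch: matches(GLYPHS[ch]))

scanned = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
assert match_glyph(scanned) == "I"
```

As the quoted passage notes, this approach presupposes that the input glyph has been correctly isolated and de-skewed (e.g., by an affine transform) and is at the same scale and in a similar font as the stored glyphs.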
Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claims 2 and 12 (Previously Presented), which depend from claims 1 and 11, the combination of Seybert et al., Lah, Figueroa et al., and Everest discloses all the limitations in claims 1 and 11. Seybert et al. further discloses wherein identifying the first set of dynamic sub-targets comprises: generating a second objective function as a function of an exemplary set of dynamic sub-targets and the one or more static targets (Paragraph 0034, That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products; Paragraph 0040, For example, the example pricing determiner 208 determines a target everyday price for a product to increase (e.g., maximize) profit and volume growth); optimizing the second objective function (Paragraph 0034, In the illustrated example of FIG. 1, the action determiner 114 accesses and analyzes the data stored in the client databases 102, 104, 106, 108 to determine a target market strategy for price, promotion, new products, assortment, and/or in-market execution. In examples disclosed herein, the action determiner 114 uses one or more machine learning queries to continuously monitor real-time market data and compare the real-time market data to the target market strategy. The action determiner 114 scores (e.g., ranks, prioritizes, etc.) the accounts and/or products of retailers and/or manufacturers based on compliance to the target market strategy to prioritize focus against the highest leverage opportunities. That is, the action determiner 114 determines and ranks one or more actions to increase sales and/or profit of products. 
The action determiner 114 generates an example report 116 displaying the accounts, products, and/or levers where an action is recommended and the specific action to take by lever and sub-lever); and identifying the first set of dynamic sub-targets as a function of an optimized second objective function (Paragraph 0038, The example model trainer 205 trains a machine learning model to identify target strategies for one or more market levers (e.g., price, promotion, new products, assortment, and/or in-market execution); Paragraph 0039, The example target principle generator 206 applies the machine learning model to determine target business strategies. For example, the target principle generator 206 determines guidelines (e.g., principles, rules, target metrics, parameter, etc.) for one or more market levers at the market, account, and/or store level. In some examples, the target principles are conditions deemed “optimal” by the target principle generator 206. In the illustrated example of FIG. 2, the target principle generator 206 includes an example pricing determiner 208, an example promotion determiner 210, an example assortment determiner 212, an example new product determiner 214, and an example execution determiner 216 (sometimes referred-to as an in-store execution determiner 216). For example, the target principle generator 206 determines target principles for the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever. However, the target principle generator 206 can additionally or alternatively determine target principles for any suitable lever and/or sub-lever at the market, account, and/or store level; Examiner interprets “adjusting the price and/or promotion to increase sales and/or profit” as “the set of dynamic sub-targets” since those actions are marketing strategies used to boost/increase sales). 
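The optimization step mapped above (scoring and ranking candidate actions so that the highest-leverage action is selected as the dynamic sub-target) can be illustrated with a minimal sketch. The candidate actions and their estimated values are hypothetical, not drawn from Seybert.

```python
# Illustrative sketch of optimizing an objective over candidate actions,
# in the spirit of the action determiner's ranking described above.
# Action names and "expected_lift" values are hypothetical examples.
def rank_actions(candidates):
    """Rank candidate actions by their estimated objective value, best first."""
    return sorted(candidates, key=lambda a: a["expected_lift"], reverse=True)

candidates = [
    {"action": "lower_price", "expected_lift": 0.12},
    {"action": "add_promotion", "expected_lift": 0.20},
    {"action": "expand_assortment", "expected_lift": 0.05},
]
best = rank_actions(candidates)[0]["action"]  # "add_promotion"
```

Under this reading, "optimizing the second objective function" amounts to selecting the argmax over the candidate set, with the remaining ranking used to prioritize further actions.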
Regarding claims 4 and 14 (Original), which depend from claims 1 and 11, the combination of Seybert et al., Lah, Figueroa et al., and Everest discloses all the limitations in claims 1 and 11. Seybert et al. further discloses wherein the memory further instructs the at least a processor to identify a target group as a function of the entity data (Paragraph 0054, In examples disclosed herein, the new product determiner 214 determines which retailers and/or stores first introduce new products and identifies the minimum rate of sale for an item to be launched in those stores based on historical new products. For example, the new product determiner 214 determines to introduce a new product in a specific retail store based on demographic data of shoppers of that retail store and a comparison of the hurdle rate for that store versus the expected sales of the new product. If the expected sales of the new product is higher than the hurdle rate and a product currently on the shelf can be found to be removed such that the sales of the new product is greater than the lost sales from delisting the product, the new product determiner 214 will identify that store as an opportunity for the new product; In this case, “identify a target group” is interpreted as evaluating sales of the product based on demographic data of shoppers). Regarding claims 5 and 15 (Original), which depend from claims 1 and 11, the combination of Seybert et al., Lah, Figueroa et al., and Everest discloses all the limitations in claims 1 and 11. Seybert et al. further discloses wherein the memory further instructs the at least a processor to: iteratively generate updated product data as a function of the first set of dynamic sub-targets and the static target status (Paragraph 0057, In the illustrated example of FIG. 2, the action determiner 114 includes an example execution analyzer 218. 
In some examples, the execution analyzer 218 includes means for comparing data (sometimes referred to herein as data comparing means). The example means for comparing data is hardware. The example execution analyzer 218 analyzes real-time market data (e.g., POS data, etc.) to identify products in which the in-market strategies and executions differ from the target principles determined by the example target principle generator 206. That is, in some examples, the execution analyzer 218 analyzes the real-time market data based on the levers analyzed by the target principle generator 206 (e.g., the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever). However, the execution analyzer 218 can analyze the real-time market data based on any suitable lever and/or sub-lever analyzed by the target principle generator 206; Paragraph 0058, The example pricing analyzer 220 compares the real-time market data to the target price principle determined by the pricing determiner 208; Table 11, lower price to be competitive; Examiner interprets “changing the price” as “the first set of dynamic sub-targets”); and identify the second set of dynamic sub-targets as a function of the updated product data (Paragraph 0058, For example, the pricing analyzer 220 analyzes the price gap sub-lever to determine whether the gap between the product and competitor product is above or below a price gap threshold (e.g., the price gap threshold corresponding to conditions deemed “optimal”). For example, the pricing analyzer 220 determines whether the gap between the product and the competitor product is 10% higher than the target price gap. In some examples, the pricing analyzer 220 determines flag criteria based on alternative thresholds than those illustrated in Table 11 (e.g., gap more than 15%, etc.). 
In some examples, both the pricing determiner 208 and the pricing analyzer 220 are constantly monitoring real-time market data and making changes to the target principle determined by the pricing determiner 208 and the compliance determined by the pricing analyzer 220; Examiner notes that Seybert et al. can identify new marketing strategies as a function of the updated product data). Regarding claims 6 and 16 (Original), which depend from claims 1 and 11, the combination of Seybert et al., Lah, Figueroa et al., and Everest discloses all the limitations in claims 1 and 11. Seybert et al. further discloses wherein receiving the plurality of entity data comprises generating a plurality of entity data … (Paragraph 0025, The real-time market data can include anything from measuring sales performances of retail companies to optimizing in-store execution such as price, promotion and assortment; Paragraph 0028, In the illustrated example of FIG. 1, the respective client databases 102, 104, 106, 108 contain product information for associated individual clients (e.g., different retail chains, different brands, etc.). That is, the client databases 102, 104, 106, 108 store point of sale (POS) data. In examples disclosed herein, the client database 102 stores market data such as universal product code (UPC) level data including volumetric sales, price data, promotion data, and/or audit data. The client database 102 can store retail chain data (e.g., data from Target®, Walmart®, etc.) and/or independent retail data. For example, the client database 102 can cover grocery data, drug data, military commissary data, liquor data, etc.). Although Seybert et al. discloses receiving and generating a plurality of entity data (e.g., sales information for a specific product or entity/retail), Seybert et al. does not specifically disclose wherein the data is generated using a plurality of tracking cookies. However, Figueroa et al. 
discloses wherein receiving the plurality of entity data comprises generating a plurality of entity data using a plurality of tracking cookies (Paragraph 0038, The predictive model may be improved or optimized continuously using a combination of public and private data collected. In some cases, the input data to the predictive model may comprise data related to the various factors affecting the score as described above. In some cases, the input data may include user provided data related to a given product or brand (e.g., keyword/search terms for the product, product description, categories, images, etc). The input data may include raw data such as marketplace data that may be obtained automatically using techniques such as image recognition, parsing HTML, URL, watermark decoded from product image, image fingerprints, text fingerprints, cookie data, and the like). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system used for the identification of dynamic sub-targets based on a plurality of entity data of the invention of Seybert et al. to further specify wherein the plurality of entity data is generated by using a plurality of tracking cookies of the invention of Figueroa et al. because doing so would allow the system to obtain input data automatically using techniques such as cookie data (see Figueroa et al., Paragraph 0038). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claims 9 and 19 (Original), which depend from claims 1 and 11, the combination of Seybert et al., Lah, Figueroa et al., and Everest discloses all the limitations in claims 1 and 11. Seybert et al. 
further discloses wherein iteratively determining the static target status comprises updating static target status as a function of the second set of dynamic sub-targets (Paragraph 0057, In the illustrated example of FIG. 2, the action determiner 114 includes an example execution analyzer 218. In some examples, the execution analyzer 218 includes means for comparing data (sometimes referred to herein as data comparing means). The example means for comparing data is hardware. The example execution analyzer 218 analyzes real-time market data (e.g., POS data, etc.) to identify products in which the in-market strategies and executions differ from the target principles determined by the example target principle generator 206. That is, in some examples, the execution analyzer 218 analyzes the real-time market data based on the levers analyzed by the target principle generator 206 (e.g., the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever). However, the execution analyzer 218 can analyze the real-time market data based on any suitable lever and/or sub-lever analyzed by the target principle generator 206; Paragraph 0058, The example pricing analyzer 220 compares the real-time market data to the target price principle determined by the pricing determiner 208; Table 11, lower price to be competitive; Examiner notes that Seybert et al. further updates the price and/or promotion in response to determining that the real-time market data differs from the target principle). Regarding claims 10 and 20 (Original), which depend from claims 1 and 11, the combination of Seybert et al., Lah, Figueroa et al., and Everest discloses all the limitations in claims 1 and 11. Seybert et al. further discloses wherein the static target status comprises one or more vital metrics associated with the one or more static targets (Paragraph 0057, In the illustrated example of FIG. 
2, the action determiner 114 includes an example execution analyzer 218. In some examples, the execution analyzer 218 includes means for comparing data (sometimes referred to herein as data comparing means). The example means for comparing data is hardware. The example execution analyzer 218 analyzes real-time market data (e.g., POS data, etc.) to identify products in which the in-market strategies and executions differ from the target principles determined by the example target principle generator 206. That is, in some examples, the execution analyzer 218 analyzes the real-time market data based on the levers analyzed by the target principle generator 206 (e.g., the pricing lever, the promotion lever, the assortment lever, the new product lever, and/or the execution lever). However, the execution analyzer 218 can analyze the real-time market data based on any suitable lever and/or sub-lever analyzed by the target principle generator 206; Paragraph 0058, The example pricing analyzer 220 compares the real-time market data to the target price principle determined by the pricing determiner 208; Table 11, lower price to be competitive; Examiner interprets the “sales for a specific product” as the “one or more vital metrics”). Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Seybert et al. (US 2022/0058661 A1), in view of Lah (US 11,494,721 B1), in further view of Figueroa et al. (US 2024/0177113 A1), Everest (US 12,242,945 B1), and Adabi (US 2021/0133785 A1). Regarding claims 7 and 17 (Original), which depend from claims 1 and 11, the combination of Seybert et al., Lah, Figueroa et al., and Everest discloses all the limitations in claims 1 and 11. Seybert et al. 
further discloses wherein receiving the plurality of entity data comprises generating a plurality of entity data … (Paragraph 0025, The real-time market data can include anything from measuring sales performances of retail companies to optimizing in-store execution such as price, promotion and assortment; Paragraph 0028, In the illustrated example of FIG. 1, the respective client databases 102, 104, 106, 108 contain product information for associated individual clients (e.g., different retail chains, different brands, etc.). That is, the client databases 102, 104, 106, 108 store point of sale (POS) data. In examples disclosed herein, the client database 102 stores market data such as universal product code (UPC) level data including volumetric sales, price data, promotion data, and/or audit data. The client database 102 can store retail chain data (e.g., data from Target®, Walmart®, etc.) and/or independent retail data. For example, the client database 102 can cover grocery data, drug data, military commissary data, liquor data, etc.). Although Seybert et al. discloses receiving and generating a plurality of entity data (e.g., sales information for a specific product or entity/retail), Seybert et al. does not specifically disclose wherein the data is generated using a chatbot. However, Adabi discloses wherein receiving the plurality of entity data comprises generating a plurality of entity data using a chatbot (Paragraph 0190, Chatbots may be used in some implementations to interact with customers and to send ads to customers and perform other marketing and advertising aspects. 
Chatbots provide personalized assistance, enhance customer service, provide product recommendations, process orders, share brand and/or product updates, provide in-store assistance and navigation, offer promotions based on location, automate processes, enable discovery, and/or support storytelling, depending on the implementation; Paragraph 0193, At 3410, a customer and an agent, such as a human agent and/or a virtual agent (e.g., a chatbot) establish a call (e.g., an audio phone call). The customer may contact the contact center with an inquiry, complaint, feedback, etc. and be put in touch with an agent of the contact center; Paragraph 0194, At 3420, during the call, depending on the implementation, the call is monitored for context, keywords, tones, emotions, aspects of demographics, aspects of psychographics, etc. Alternatively or additionally, a DNA, fingerprint, and/or one or more segments (as those terms are used herein) pertaining to the customer may be retrieved. The DNA, fingerprint, and/or one or more segments may be of the customer itself, or may be of one or more other customers who have been determined to have similar characteristics (e.g., personalities, demographics, psychographics, etc.) with the customer who is on the call with the agent; Paragraph 0195, At 3430, based on the monitoring performed at 3420, one or more ads are determined that target the customer. The ad(s) may be determined based on one or more of context, keywords, tones, emotions, aspects of demographics, aspects of psychographics, etc., a DNA, fingerprint, and/or one or more segments pertaining to the call and/or the customer; Paragraph 0199, Data and statistics based on the customer selections may be tracked, stored, maintained, and/or updated). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system used for the identification of dynamic sub-targets based on a plurality of entity data of the invention of Seybert et al. 
to further specify wherein the plurality of entity data is generated by using a chatbot of the invention of Adabi because doing so would allow the system to track, store, maintain, and update statistics based on the customer selections (see Adabi, Paragraph 0199). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Seybert et al. (US 2022/0058661 A1), in view of Lah (US 11,494,721 B1), in further view of Figueroa et al. (US 2024/0177113 A1), Everest (US 12,242,945 B1), and Hoech (US 2023/0410015 A1). Regarding claims 8 and 18 (Original), which depend from claims 1 and 11, the combination of Seybert et al., Lah, Figueroa et al., and Everest discloses all the limitations in claims 1 and 11. Seybert et al. further discloses wherein identifying the one or more static targets comprises: comparing the plurality of entity data associated with the entity to a plurality of entity data associated with a second entity; … (Paragraph 0024, Market data and analytics can deliver actionable insights for a company and provide better knowledge as to how that company pairs up against competitors and similar markets based on real-time market data). Although Seybert et al. discloses comparing the plurality of entity data associated with the entity to a plurality of entity data associated with a second entity (Paragraph 0048), Seybert et al. does not specifically disclose wherein the one or more static targets are identified as a function of the comparison. 
However, Hoech discloses wherein identifying the one or more static targets comprises: comparing the plurality of entity data associated with the entity to a plurality of entity data associated with a second entity; and identifying the one or more static targets as a function of the comparison (Paragraph 0023, One technique that can be useful for revenue forecasting, in the case where there is little or no historical sales data, can be based on applying benchmark revenue data from comparable products or services. The comparable products and services can be provided by the enterprise, by competitive enterprises, and so on. This technique can also consider seasonal variations in sales, changes in economic indicators, and the like. However, the historical data mentioned previously, and the benchmark revenue data may still not provide sufficient insight for providing revenue forecasts. Forecasts based on these datasets can miss key elements such as whether advertising campaigns are meeting sales objectives. Instead, a predictive growth algorithm can be selected for predicting future revenue. Enterprise revenue goals can be developed based on predicted growth, and the predicted growth can correspond to input pipelines. An input pipeline can be based on a visualization tool that can be used to illustrate the progress of potential customers through a sales technique or “pipeline”. The pipeline can be used to track progress, to indicate when an additional action or actions are required, etc.; Paragraph 0035, The flow 200 includes applying relevant benchmark revenue data from comparables and/or importing historical output revenue benchmark data 210. The relevant benchmark data from comparables can include data associated with comparable products and services offered by an enterprise, comparable products and services offered by a competitor, and so on. The benchmark data can include data from market trials. 
The benchmark data can be used when historical data is not available. The historical data can include data from a previous version of a product or service, previous sales of the current product or service, etc. In a usage example, a new company has developed a product and has readied it for sale. The new company does not have historical data, so it must estimate sales based on competitor products, test sales, product research, and the like. In a second usage example, an established company has a new product for sale. Sales estimates for the new product can be based on sales of various versions of the product, similar products offered by the enterprise, etc.).

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the system used for the identification of dynamic sub-targets based on a plurality of entity data, wherein a static target is defined for a product/entity (e.g., a target market strategy), of the invention of Seybert et al. to further specify wherein the static strategy is identified by comparing the plurality of entity data associated with the product/entity to a plurality of entity data associated with a second product/entity of the invention of Hoech, because doing so would allow the system to apply relevant benchmark revenue data from comparables and/or import historical output revenue benchmark data (see Hoech, Paragraph 0035). Further, the claimed invention is merely a combination of old elements; in combination, each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Guo (CN 116883045 A) – discloses deep learning technology that trains on and learns from a large amount of reach data, continuously improving its automatic flow-strategy generation capability so as to provide optimized automatic marketing strategies for the enterprise (see at least Page 3, Disclosure of Invention).

De Mauro (De Mauro, A., Sestino, A. and Bacconi, A., 2022. Machine learning and artificial intelligence use in marketing: a general taxonomy. Italian Journal of Marketing, 2022(4), pp. 439-457) – discloses patterns in how ML and AI can support marketing strategies. From a consumer's perspective, marketing efforts should be directed both to drive personalized actions required by consumer-related peculiarities and to enrich the overall customer journey. From a business perspective, machine learning can be exploited for consumer sensing and market understanding (and, thus, to ultimately improve decision-making processes) and also for supporting dynamic pricing and media optimization strategies, ultimately impacting financial results (see at least Conclusion).

Liu (CN 117194779 A) – discloses a marketing system optimization method, device, computer device, and storage medium based on artificial intelligence, so as to solve the problem that marketing-strategy performance predictions have poor accuracy and cannot be automatically optimized and adjusted according to user requirements; Pages 6-7, Steps S10-S80, With continued reference to FIG. 2, a flowchart of one embodiment of a method of system security monitoring computation according to the present application is shown.
The method for optimizing a marketing system based on artificial intelligence comprises the following steps: step S40, performing prediction accuracy evaluation on the marketing prediction model, obtaining a prediction accuracy score, and comparing the prediction accuracy score with a preset standard score; step S60, if the prediction accuracy score is smaller than the standard score, iteratively adjusting the model parameters and the feature engineering strategy of the marketing prediction model until the prediction accuracy score is greater than or equal to the standard score, and thereafter taking the adjusted marketing prediction model as the optimized marketing prediction model; step S70, performing marketing prediction according to the standard sample data, the user portrait, the optimized marketing prediction model, and the preset marketing strategy to obtain the marketing prediction data (in this embodiment, the marketing strategy is used for deployment in various financial marketing scenarios, such as product marketing, advertisement marketing, and service marketing, and the preset standard score and marketing index can be set according to actual conditions); and step S80, comparing the marketing prediction data with a preset marketing index to obtain a comparison result, and adjusting the marketing strategy according to the comparison result to obtain an optimized marketing system (see at least Page 3, Disclosure of Invention).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARJORIE PUJOLS-CRUZ, whose telephone number is (571) 272-4668. The examiner can normally be reached Monday through Thursday, 7:30 AM - 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patricia H. Munson, can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /M.P./Examiner, Art Unit 3624 /PATRICIA H MUNSON/Supervisory Patent Examiner, Art Unit 3624

Prosecution Timeline

Mar 11, 2024
Application Filed
May 20, 2024
Non-Final Rejection — §101, §103, §112
Jul 02, 2024
Applicant Interview (Telephonic)
Jul 02, 2024
Examiner Interview Summary
Jul 02, 2024
Response Filed
Jul 18, 2024
Final Rejection — §101, §103, §112
Sep 03, 2024
Request for Continued Examination
Sep 04, 2024
Response after Non-Final Action
Sep 13, 2024
Non-Final Rejection — §101, §103, §112
Sep 26, 2024
Examiner Interview Summary
Sep 26, 2024
Applicant Interview (Telephonic)
Oct 02, 2024
Response Filed
Oct 17, 2024
Final Rejection — §101, §103, §112
Jan 21, 2025
Applicant Interview (Telephonic)
Jan 21, 2025
Examiner Interview Summary
Jan 22, 2025
Request for Continued Examination
Jan 23, 2025
Response after Non-Final Action
Feb 10, 2025
Non-Final Rejection — §101, §103, §112
Mar 10, 2025
Interview Requested
Mar 18, 2025
Examiner Interview Summary
Mar 18, 2025
Applicant Interview (Telephonic)
May 14, 2025
Response Filed
May 22, 2025
Final Rejection — §101, §103, §112
Aug 22, 2025
Request for Continued Examination
Aug 30, 2025
Response after Non-Final Action
Sep 08, 2025
Non-Final Rejection — §101, §103, §112
Nov 03, 2025
Interview Requested
Nov 10, 2025
Examiner Interview Summary
Nov 10, 2025
Applicant Interview (Telephonic)
Dec 11, 2025
Response Filed
Jan 06, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12106240
SYSTEMS AND METHODS FOR ANALYZING USER PROJECTS
2y 5m to grant Granted Oct 01, 2024
Patent 12014298
AUTOMATICALLY SCHEDULING AND ROUTE PLANNING FOR SERVICE PROVIDERS
2y 5m to grant Granted Jun 18, 2024
Patent 11966927
Multi-Task Deep Learning of Client Demand
2y 5m to grant Granted Apr 23, 2024
Patent 11941651
LCP Pricing Tool
2y 5m to grant Granted Mar 26, 2024
Patent 11847602
SYSTEM AND METHOD FOR DETERMINING AND UTILIZING REPEATED CONVERSATIONS IN CONTACT CENTER QUALITY PROCESSES
2y 5m to grant Granted Dec 19, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
18%
Grant Probability
46%
With Interview (+27.9%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
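The headline figures above are internally consistent if the interview lift is treated as an additive percentage-point adjustment to the examiner's career allow rate. The dashboard's actual model is not disclosed, so the sketch below is only an illustrative reconstruction from the numbers shown on this page:

```python
# Illustrative reconstruction of the dashboard's headline figures.
# Assumption (not confirmed by the source): the "+27.9% Interview Lift"
# is added as percentage points on top of the career allow rate.

granted, resolved = 25, 136        # "25 granted / 136 resolved"
interview_lift = 0.279             # "+27.9% Interview Lift"

base_rate = granted / resolved     # career allow rate, ~0.184
with_interview = base_rate + interview_lift

print(f"{base_rate:.0%}")          # 18%  (matches "Grant Probability")
print(f"{with_interview:.0%}")     # 46%  (matches "With Interview")
```

Under this additive-lift reading, 18.4% + 27.9 points ≈ 46%, matching the "With Interview (+27.9%)" projection.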
