Prosecution Insights
Last updated: April 19, 2026
Application No. 19/000,421

SYSTEM AND METHOD FOR IN-STORE CUSTOMER FEEDBACK COLLECTION AND UTILIZATION

Non-Final Office Action: §101, §103, §112
Filed: Dec 23, 2024
Examiner: GARCIA-GUERRA, DARLENE
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Blue Boat Data Inc.
OA Round: 1 (Non-Final)

Grant Probability: 23% (At Risk)
OA Rounds: 1-2
Time to Grant: 4y 6m
With Interview: 57%

Examiner Intelligence

Career Allow Rate: 23% (grants only 23% of cases; 119 granted / 523 resolved; -29.2% vs TC avg)
Interview Lift: +34.1% on resolved cases with interview (strong lift)
Typical Timeline: 4y 6m avg prosecution; 53 applications currently pending
Career History: 576 total applications across all art units
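The headline figures above follow from simple arithmetic on the examiner's career counts. A minimal sketch reproducing them (the lift methodology is an assumption here: plain percentage-point difference between the with-interview allow rate and the overall career rate; the dashboard's exact method is not stated):

```python
# Examiner career totals from the panel above.
granted, resolved = 119, 523
career_allow_rate = granted / resolved          # ~0.228, displayed as 23%

# Assumed lift definition: with-interview rate minus overall rate,
# in percentage points (0.57 is the "With Interview" figure above).
with_interview = 0.57
interview_lift = with_interview - round(career_allow_rate, 2)  # ~0.34

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"Interview lift: {interview_lift:+.1%}")
```

The 23% card and the roughly +34-point lift both fall out of these two inputs; the dashboard's +34.1% suggests it computes the difference on unrounded rates.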

Statute-Specific Performance

§101: 36.6% (-3.4% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§112: 16.2% (-23.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 523 resolved cases
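The four deltas are each consistent with a single Tech Center average estimate of 40.0% (in every row, rate minus delta equals 40.0). A quick consistency check, assuming delta = examiner rate minus TC average:

```python
# Verifies the "vs TC avg" deltas above. Assumption: each delta is the
# examiner's per-statute rate minus one shared TC average estimate.
tc_avg = 40.0  # implied by every row: rate - delta = 40.0
rows = {
    "§101": (36.6, -3.4),
    "§103": (42.3, +2.3),
    "§102": (2.6, -37.4),
    "§112": (16.2, -23.8),
}
for statute, (rate, delta) in rows.items():
    assert abs((rate - tc_avg) - delta) < 1e-9
    print(f"{statute}: {rate}% ({delta:+}% vs TC avg)")
```

That all four rows imply exactly 40.0% supports reading the deltas as measured against one estimated TC-wide baseline rather than per-statute averages.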

Office Action

DETAILED ACTION

Notice to Applicant

1. The following is a NON-FINAL Office action upon examination of application number 19/000,421. Claims 1-20 are pending in this application and have been examined on the merits discussed below.

2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

3. Application 19/000,421, filed 12/23/2024, is a Continuation of application 17/672,790, filed 02/16/2022. Application 17/672,790 claims priority from Provisional Application 63/212,123, filed 06/18/2021.

Information Disclosure Statement

4. The information disclosure statement (IDS) filed on 07/31/2025 has been acknowledged. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

5. The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

6. Claims 1-20 are rejected under 35 U.S.C.
112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for pre-AIA the inventor(s), at the time the application was filed, had possession of the claimed invention.

7. Claims 1/11 recite “validate the AI model output based on a positive response and invalidate the AI model output based on a negative response.” There does not appear to be any written description in the Specification supporting “validate the AI model output based on a positive response and invalidate the AI model output based on a negative response.” The Specification only mentions the term “validate” in one instance, in paragraph 0046, which indicates: “At 232, the newly registered feedback data on the hypotheses are transmitted to the backend system 110 for further storage and analysis. For example, regarding the mentioned exemplary hypothesis, the backend system 110 may validate or evaluate the hypothesis according to the received feedback and update the hypothesis.” The Specification is silent regarding validating the AI model output based on a positive response and invalidating the AI model output based on a negative response. At most, the Specification describes “validating the hypothesis according to received feedback.” “Even if a claim is supported by the Specification, the language of the Specification, to the extent possible, must describe the claimed invention so that one skilled in the art can recognize what is claimed. The appearance of mere indistinct words in a specification or a claim, even an original claim, does not necessarily satisfy that requirement.” Enzo Biochem, Inc. v. Gen-Probe, Inc., 323 F.3d 956, 968, 63 USPQ2d 1609, 1616 (Fed. Cir. 2002). 35 U.S.C.
112(a) requires that the “specification shall contain a written description of the invention.” This requirement is separate and distinct from the enablement requirement. See, e.g., Vas-Cath, Inc. v. Mahurkar, 935 F.2d 1555, 1560, 19 USPQ2d 1111, 1114 (Fed. Cir. 1991). See also Univ. of Rochester v. G.D. Searle & Co., 358 F.3d 916, 920-23, 69 USPQ2d 1886, 1890-93 (Fed. Cir. 2004) (discussing history and purpose of the written description requirement). To satisfy the written description requirement, a patent specification must describe the claimed invention in sufficient detail that one skilled in the art can reasonably conclude that the inventor had possession of the claimed invention. See, e.g., Moba, B.V. v. Diamond Automation, Inc., 325 F.3d 1306, 1319, 66 USPQ2d 1429, 1438 (Fed. Cir. 2003); Vas-Cath, Inc. v. Mahurkar, 935 F.2d at 1563, 19 USPQ2d at 1116. However, a showing of possession alone does not cure the lack of a written description. Enzo Biochem, Inc. v. Gen-Probe, Inc., 323 F.3d 956, 969-70, 63 USPQ2d 1609, 1617 (Fed. Cir. 2002). An original claim may lack written description support when (1) the claim defines the invention in functional language specifying a desired result but the disclosure fails to sufficiently identify how the function is performed or the result is achieved, or (2) a broad genus claim is presented but the disclosure only describes a narrow species with no evidence that the genus is contemplated. See Ariad Pharm., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1349-50 (Fed. Cir. 2010) (en banc). The written description requirement is not necessarily met when the claim language appears in ipsis verbis in the specification. “Even if a claim is supported by the specification, the language of the specification, to the extent possible, must describe the claimed invention so that one skilled in the art can recognize what is claimed.
The appearance of mere indistinct words in a specification or a claim, even an original claim, does not necessarily satisfy that requirement.” Enzo Biochem, Inc. v. Gen-Probe, Inc., 323 F.3d 956, 968, 63 USPQ2d 1609, 1616 (Fed. Cir. 2002). Applying the above legal principles to the facts of the case at hand, the Examiner concludes that the Applicants' disclosure fails to sufficiently disclose possession at the time of the invention. Thus, the disclosure in the specification and drawings fails to demonstrate that the applicant was in possession of the invention and/or had reduced the invention to practice and/or that the invention was ready for patenting at the time of the filing of the instant application. Because the disclosure of the instant application fails to adequately describe the structure and functionality described above, it fails to clearly convey the information that the applicant has invented the subject matter which is claimed. Accordingly, claims 1-20 are rejected under 112(a). Applicant is reminded that this is a written description rejection, not an enablement rejection. All claims dependent from the above rejected claims are also rejected due to dependency.

8. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

9. Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention.

10.
Claim 1 recites “receive a response from a customer based on one or more hypotheses;…and transmit the one or more hypotheses to the frontend system; analyze a response to determine at least one action to be taken…”. Claim 1 recites “receive a response from a customer” and later recites “analyze a response”. However, the claim does not clarify whether “a response” being analyzed is the same “response” previously received from the customer, a different response generated by the system, or a response to a specific hypothesis presented to the customer, thereby rendering the claim scope ambiguous. Independent claim 11 recites similar limitations to those discussed above and is therefore found to be indefinite for the same reasons as claim 1. Appropriate correction is required.

11. Claim 1 recites “wherein the at least one processor is configured to: in response to receiving the customer data, calculate the one or more hypotheses defined by a structured data model including at least the set of structured data fields, wherein calculate is performed based on executing one or more operations configured to: input the set of structured data fields to an AI model trained to output one or more hypotheses in response to the input of the set of structured data fields, access an output of one or more hypotheses produced from the AI model, and transmit the one or more hypotheses to the frontend system…” The claim contains two separate recitations of “calculate” without explicitly linking the second “calculate” to the first. It is unclear whether the second “calculate” limitation is a new, independent calculation or merely describes the method of performing the first calculation. This renders the claim indefinite. Appropriate correction is required.

12. Claim 7 recites “determine the at least one action item including…” The phrase “the at least one action item” lacks antecedent basis, and therefore renders the claim indefinite.
While claim 1 introduces “at least one action,” claims 1/7 do not introduce “at least one action item.” Dependent claim 17 recites similar limitations to those discussed above and is therefore found to be indefinite for the same reasons as claim 7. Appropriate correction is required.

Claim Rejections - 35 USC § 101

13. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

14. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The eligibility analysis in support of these findings is provided below, in accordance with MPEP 2106.

With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the system (claims 1-10) and method (claims 11-20) are directed to at least one potentially eligible category of subject matter (i.e., machine and process, respectively). Thus, Step 1 of the Subject Matter Eligibility test for claims 1-20 is satisfied.

With respect to Step 2A Prong One, it is next noted that the claims recite an abstract idea that falls into the “Certain Methods of Organizing Human Activity” grouping set forth in MPEP 2106, because the claims recite steps for collecting customer feedback data from a customer regarding the product or service, which encompasses activity for managing personal behavior or relationships or interactions, as well as commercial interactions (e.g., advertising, marketing or sales activities or behaviors).
With respect to independent claim 1, the limitations reciting the abstract idea are indicated in bold below:

a frontend system communicatively connected to a backend system and configured to: tailor a visual display of a first user interface to limit data entry of customer data to a set of structured data fields; display a first screen of the first user interface tailored to capture respective ones of the set of structured data fields, the first screen configured to limit data entry to the set of structured data fields; receive a response from a customer based on one or more hypotheses; and the backend system comprising at least one processor, where the backend system is configured to store system data, wherein the system data comprises, at least, data related to a plurality of alternative products or services; wherein the at least one processor is configured to: in response to receiving the customer data, calculate the one or more hypotheses defined by a structured data model including at least the set of structured data fields, wherein calculate is performed based on executing one or more operations configured to: input the set of structured data fields to an AI model trained to output one or more hypotheses in response to the input of the set of structured data fields, access an output of one or more hypotheses produced from the AI model, and transmit the one or more hypotheses to the frontend system; analyze a response to determine at least one action to be taken; validate the AI model output based on a positive response and invalidate the AI model output based on a negative response; and transmit the at least one action to the frontend system for execution.
Considered together, these steps set forth an abstract idea of managing personal behavior/relationships/interactions via rules or instructions that simply manage customer feedback at a point-of-sale location (see Specification at paragraph [0004]: “A system for managing customer feedback regarding a product or service at a point-of-sale location is provided.”), and also set forth an abstract idea of managing marketing efforts (see Specification at paragraph [0005]: “The action to be taken comprises one or more of: optimizing offerings related to the product or service; generating insight reports about the product or service; optimizing marketing efforts related to the product or service; optimizing affinity models related to the product or service; and optimizing inventory management tasks related to the product or service.”), which falls under the realm of managing commercial interactions (e.g., marketing or sales activities or behaviors), thus falling under the “Certain Methods of Organizing Human Activity” grouping set forth in MPEP 2106.

Therefore, because the limitations above set forth activities falling within the “Certain Methods of Organizing Human Activity” abstract idea grouping described in MPEP 2106, the additional elements recited in the claims are further evaluated, individually and in combination, under Step 2A Prong Two and Step 2B below.

Independent claim 11 recites similar limitations to those discussed above and is therefore found to recite the same or substantially the same abstract idea as claim 1.

With respect to Step 2A Prong Two, the judicial exception is not integrated into a practical application.
With respect to the independent claims, the additional elements are: a frontend system communicatively connected to a backend system and configured to tailor a visual display of a first user interface and display a first screen of the first user interface, the backend system comprising at least one processor, and an AI model trained (claim 1); and tailoring, by at least one processor, a visual display of a first user interface, displaying, by the at least one processor, a first screen of the first user interface, and an AI model trained (claim 11).

These additional elements have been evaluated, but fail to integrate the abstract idea into a practical application because they amount to using generic computing elements or computer-executable instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), and merely serve to link the use of the judicial exception to a particular technological environment. See MPEP 2106.05(f) and 2106.05(h).

Even if the “receive” step is evaluated as an additional element, this step amounts at most to insignificant extra-solution data gathering activity, which is not indicative of a practical application, as noted in MPEP 2106.05(g). Similarly, even if the “transmit” and “output” steps are interpreted as being conducted by a computer or network, this activity amounts to insignificant extra-solution activity accomplished via receiving/transmitting data, which is not enough to amount to a practical application. See MPEP 2106.05(g).
In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply or use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.

Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception.

With respect to Step 2B of the eligibility inquiry, it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. With respect to the independent claims, the additional elements are: a frontend system communicatively connected to a backend system and configured to tailor a visual display of a first user interface and display a first screen of the first user interface, the backend system comprising at least one processor, and an AI model trained (claim 1); and tailoring, by at least one processor, a visual display of a first user interface, displaying, by the at least one processor, a first screen of the first user interface, and an AI model trained (claim 11).
These elements have been considered individually and in combination, but fail to add significantly more to the claims because they amount to using generic computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), and merely serve to link the use of the judicial exception to a particular technological environment, which does not amount to significantly more than the abstract idea itself. Notably, Applicant’s Specification acknowledges that the claimed invention relies on nothing more than a general purpose computer executing instructions to implement the invention (Specification at paragraph [0024]: e.g., “The backend system 110, includes a processor 112 such as a computer or microcontroller configured to perform processing operations related to the received feedback from the customer...”). Accordingly, the generic computer involvement in performing the claim steps merely serves to generally link the use of the judicial exception to a particular technological environment, which does not add significantly more to the claim. See, e.g., Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976.

With respect to the “receive,” “transmit,” and “output” steps, it is noted that receiving/transmitting data has been recognized as well-understood, routine, and conventional, and is thus insufficient to add significantly more to the abstract idea. See MPEP 2106.05(d) (receiving or transmitting data over a network, e.g., using the Internet to gather data); Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir.
2014) (computer receives and sends information over a network).

Even if the AI model were evaluated as an element beyond software/code for a generic computer to execute, it is noted that the claimed use of Artificial Intelligence (AI) is recited at a high level of generality, such that these elements amount to well-understood, routine, and conventional activity in the art, which fails to add significantly more to the claims. See, e.g., Magdon-Ismail et al., US 2009/0055270 (paragraph 39: “Both local and central engines may incorporate analysis techniques, such as artificial intelligence, machine learning and other techniques, which are well known in the art”). See also Muchkaev, US 2010/0287011 (paragraph 47: “artificial intelligence algorithm such as a search algorithm, a learning algorithm, or any other artificial intelligence algorithm commonly known in the art”).

In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application or that, as an ordered combination, amount to significantly more than the abstract idea itself.
Dependent claims 2-10 and 12-20 recite the same abstract idea as recited in the independent claims, and when evaluated under Step 2A Prong One are found to merely recite details that serve to narrow the same abstract idea recited in the independent claims, accompanied by the same generic computing elements or software as those addressed above in the discussion of the independent claims, which is not sufficient to amount to a practical application or add significantly more, or other additional elements that fail to amount to a practical application or add significantly more, as noted above.

In particular, dependent claims 2-10 recite “wherein the set of structured data fields includes at least a product field, a product attribute field, a product attribute value field, and an action field,” “wherein the input to the model includes any data for the product field, product attribute field, the product attribute value field, and the action field,” “wherein the set of structured data fields reflect summarized hypotheses data include at least a product field, a product attribute field, a product attribute value field, and an action field,” “transmit collected customer responses including the set of structured data fields,” “wherein the one or more hypotheses are indicative of a question or action regarding a target product or service based on the customer data,” “determine the at least one action item including an automatic identification of new and non-existing inventory associated with the customer data,” “output a hypothesis that specifies criteria for a new product,” “output a hypothesis that identifies a missing product from inventory,” and “identify whether a current product arrangement at a retail location associated with the frontend system matches a consumer demand based, at least in part, on the collected customer data consisting of the structured data model.” However, these limitations cover organizing human activity, since they flow directly from the customer
feedback involving human interaction, which encompasses activity for managing personal behavior or relationships or interactions (e.g., following rules or instructions), which is part of the same abstract idea as addressed in the independent claims that falls within the “Certain Methods of Organizing Human Activity” abstract idea grouping. Accordingly, these steps are part of the same abstract idea(s) set forth in the independent claims. Dependent claims 12-20 have been evaluated as well, but are subject to similar findings as claims 2-10.

The dependent claims recite additional elements of: an AI model is trained (claims 3, 8-10, 13, 18-20). However, when evaluated under Step 2A Prong Two and Step 2B, these additional elements do not amount to a practical application or significantly more, since they merely require generic computing devices (or computer-implemented instructions/code), which, as noted in the discussion of the independent claims above, is not enough to render the claims eligible. Even if the AI model were evaluated as an element beyond software/code for a generic computer to execute, it is noted that the claimed use of Artificial Intelligence (AI) is recited at a high level of generality, such that these elements amount to well-understood, routine, and conventional activity in the art, which fails to add significantly more to the claims. See, e.g., Magdon-Ismail et al., US 2009/0055270 (paragraph 39: “Both local and central engines may incorporate analysis techniques, such as artificial intelligence, machine learning and other techniques, which are well known in the art.”). See also Anders et al., US 2020/0020015 (paragraph 101: “inferences may be performed by any combination of means known in the art, such as by pattern-matching, text analytics, semantic analytics, statistical methods, artificial intelligence, Bayesian analysis, machine learning, or keyword searching”).
The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide generic computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to a practical application or significantly more than the abstract idea itself. For more information, see MPEP 2106.

Claim Rejections - 35 USC § 103

15. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

16. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

17. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3.
Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

18. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

19. Claims 1, 5-7, 10-11, 15-17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Montero et al., Pub. No. US 2021/0103945 A1 [hereinafter Montero], in view of Brandish et al., Pub. No. US 2017/0178205 A1 [hereinafter Brandish].

As per claim 1, Montero teaches a system for determining at least one action using an artificial intelligence (“AI”) model (paragraphs 0038, 0040, 0130, 0230), the system (paragraphs 0038, 0130, 0226, 0230), comprising: a frontend system communicatively connected to a backend system and configured (paragraph 0136, discussing that the test promotion may be received at a user's smart phone or tablet, in a computer-implemented account associated with the user that is a member of the segmented subpopulation to be tested, via one or more social media sites, or displayed on electronic pricing tags within a retailer's physical store. The test promotion is presented to the user. The user's response to the test promotion is obtained and transmitted to a database for analysis; paragraph 0137, discussing the administering step from the forward-looking promotion optimization system perspective.
The test promotions are generated using the test promotion generation server…The test promotions are provided to the users (e.g., transmitted or emailed to the user's smart phone or tablet or computer,…, displayed in the physical retailer)…The system receives the user's responses and stores the user's responses in the database for later analysis) to: provide a visual display of a first user interface (paragraph 0130, discussing that the test promotions may be administered via electronic pricing tags displayed within a physical retail location…The responses may be obtained at the point of sale terminal, or via a website or program,…, or via an app implemented on smart phones used by the individuals, for example; paragraph 0139, discussing various example methods for communicating the test promotions to individuals of the segmented subpopulations being tested. As shown in FIG. 5, the test promotions may be displayed on a webpage when the individual accesses his shopping or loyalty account via a computer or smart phone or tablet; paragraph 0142, discussing a test promotion administration module for administering the plurality of test promotions...There is also shown a module 812, representing the software/hardware module for receiving the responses. Module 812 may represent, for example, the point of sale terminal in a store,…, an app on a smart phone, a webpage displayed on a computer…, etc. where user responses can be received); receive a response from a customer based on one or more hypotheses (paragraph 0031, discussing that consumers are asked to state preferences. 
In an example conjoint study, a consumer may be approached at the store and asked a series of questions...Questions may be asked include, for example, “do you prefer Brand X or Brand Y”… or “do you prefer chocolate cookies or oatmeal cookies”…The consumer may state his preference on each of the questions posed; paragraph 0102, discussing that the test promotions themselves may be formulated to isolate specific test promotion variables (such as the aforementioned potato chip brown paper packaging or the 16-oz size packaging); paragraph 0130, discussing that the test promotions may be administered via electronic pricing tags displayed within a physical retail location…The responses may be obtained [i.e., receiving a response from a customer] at the point of sale terminal, or via a website or program,…, or via an app implemented on smart phones used by the individuals, for example; paragraph 0137, discussing that the test promotions are generated using the test promotion generation server…The test promotions are provided to the users…The system receives the user's responses; paragraph 0142, discussing a test promotion administration module for administering the plurality of test promotions...There is also shown a module 812, representing the software/hardware module for receiving the responses. Module 812 may represent, for example, the point of sale terminal in a store,…, an app on a smart phone, a webpage displayed on a computer…, etc. 
where user responses can be received; paragraphs 0081, 0088, 0134); and the backend system comprising at least one processor (paragraph 0136, discussing that the test promotion is received from the test promotion generation server…The user's response to the test promotion is obtained and transmitted to a database for analysis; paragraph 0143, discussing that the database is shown, representing the data store for user data and/or test promotion and/or general public promotion data and/or response data; paragraph 0227, discussing that FIG. 27B is an example of a block diagram for Computer System 2700. Attached to System Bus 2720 are a wide variety of subsystems. Processor(s) 2722 (also referred to as central processing units, or CPUs) are coupled to storage devices, including Memory 2724...), where the backend system is configured to store system data (paragraph 0129, discussing that the test promotions are administered to individuals and the individual responses are obtained and recorded in a database; paragraph 0143, discussing that one or more of modules 802-812 may be implemented on one or more servers. A database 814 is shown, representing the data store for user data and/or test promotion and/or general public promotion data and/or response data. Database 814 may be implemented by a single database or by multiple databases. The servers and database(s) may be coupled together using a local area network, an intranet, the internet, or any combination thereof; paragraph 0151), wherein the system data comprises, at least, data related to a plurality of alternative products or services (paragraph 0151, discussing that a database provides the server information regarding promotional variables that are to be altered to effectively test promotions within the retailer; paragraph 0111, discussing that test promotions 102a-102d are shown testing three test promotion variables X, Y, and Z, which may represent for example the size of the packaging (e.g., 12 oz. 
versus 16 oz.), the manner of display (e.g., at the end of the aisle versus on the shelf), and the discount (e.g., 10% off versus 2-for-1). These promotion variables are of course only illustrative and almost any variable involved in producing, packaging, displaying, promoting, discounting, etc. of the packaged product may be deemed a test promotion variable if there is an interest in determining how the consumer would respond to variations of one or more of the test promotion variables; paragraph 0159, discussing that promotions relevant to the products nearby may be transmitted to the device for display; paragraphs 0137, 0143); wherein the at least one processor (paragraph 0227, discussing that processor(s) 2722 (also referred to as central processing units, or CPUs) are coupled to storage devices, including Memory 2724...; paragraphs 0228, 0229, 0234, 0236) is configured to: in response to receiving the customer data, calculate the one or more hypotheses defined by a structured data model (paragraph 0079, discussing intelligent test designs for most effective experimentation of promotions and base pricing to more efficiently identify a highly effective general promotion and/or base prices. Such systems and methods assist administrator users to generate and deploy advertising campaigns, and optimize prices throughout the retailer…In one or more embodiments, the inventive forward-looking promotion optimization (FL-PO) involves obtaining actual revealed preferences from individual consumers of the segmented subpopulations being tested through deployment in physical retail spaces. 
As such, some of the disclosure will focus upon mechanisms of forward-looking promotional optimizations, in order to understand the context within which the intelligent promotional design system excels; paragraph 0081, discussing that within the forward-looking promotion optimization, the revealed preferences are obtained when the individual consumers respond to specifically designed actual test promotions. The revealed preferences may be collected at a physical retailer based upon transaction records. For example, when a consumer responds in a physical store through completion of a transaction, to a test promotion that offers 20% off a particular consumer packaged goods (CPG) item, that response is tracked in his individual computer-implemented account, or in a transaction record; paragraph 0095, discussing that the promotion involving discounting 2-for-1 potato chips in brown paper bag packaging may be tested further to validate the hypothesis that such a combination elicits a more desirable response than the response from test promotions using only brown paper bag packaging or from test promotions using only 2-for-1 discounts. As many of the “winning” test promotion variable values may be identified and combined in a single promotion as desired. At some point, a combination of “winning” test promotion variables may be employed to create the general public promotion, in one or more embodiments; paragraph 0134, discussing that promotion testing using the test promotions on the segmented subpopulations occurs in parallel to the release of a general public promotion and may continue in a continual fashion to validate correlation hypotheses and/or to derive new general public promotions based on the same or different analysis results.
If iterative promotion testing involving correlation hypotheses uncovered by analysis engine is desired, the same test promotions or new test promotions may be generated and executed against the same segmented subpopulations or different segmented subpopulations as needed…; paragraph 0216, discussing applying rules to discard outlier data, then on the remaining data apply a machine learning algorithm to identify other data that may be removed. The machine learning model may generate a confidence value of whether the data should be discarded; paragraphs 0090, 0093, 0129, 0207, 0215), wherein calculate is performed based on executing one or more operations configured to: input data to an AI model trained to output one or more hypotheses in response to the input (paragraph 0104, discussing that there is provided a promotional idea module for generating ideas for promotional concepts to test. The promotional idea generation module relies on a series of pre-constructed sentence structures that outline typical promotional constructs; paragraph 0114, discussing that the segmentation criteria are of course only illustrative and almost any demographics, behavioral, attitudinal, whether self-described, objective, interpolated from data sources (including past purchase or current purchase data), etc. may be used as segmentation criteria if there is an interest in determining how a particular subpopulation would likely respond to a test promotion; paragraph 0126, discussing that it is envisioned that dozens, hundreds, or even thousands of these test promotions may be administered concurrently or staggered in time to the dozens, hundreds or thousands of segmented subpopulations. Further, the large number of test promotions executed (or iteratively executed) improves the statistical validity of the correlations ascertained by analysis engine. This is because the number of variations in test promotion variable values, subpopulation attributes, etc.
can be large, thus yielding rich and granulated result data. The data-rich results enable the analysis engine to generate highly granular correlations between test promotion variables, subpopulation attributes, and type/degree of responses, as well as track changes over time. In turn, these more accurate/granular correlations help improve the probability that a general public promotion created from these correlations would likely elicit the desired response from the general public. It would also, over time, create promotional profiles for specific categories, brands, retailers, and individual shoppers where, e.g., shopper 1 prefers contests and shopper 2 prefers instant financial savings; paragraph 0140, discussing various example promotion-significant responses. As mentioned, redemption of the test offer is one strong indication of interest in the promotion. However, other consumer actions responsive to the receipt of a promotion may also reveal the level of interest/disinterest and may be employed by the analysis engine to ascertain which test promotion variable is likely or unlikely to elicit the desired response; paragraph 0215, discussing that the system may instead rely upon artificial intelligence-based models for computing the optimal pricing. These machine learned models may rely upon neural networks, Siamese networks, deep learning techniques or recurrent neural networks, for example; paragraph 0225, discussing that after the prices have thus been deployed, newer transaction data may be collected. New transaction data is leveraged to train and update the elasticity models and other machine learning algorithms. With these updated models, the process can iteratively repeat with even better price optimization. Eventually the true “optimal” price can thus be identified. This new optimal price may then be deployed across all stores of the retailer.
Subsequent testing is also performed), access an output of one or more hypotheses produced from the AI model (paragraph 0079, discussing obtaining actual revealed preferences from individual consumers of the segmented subpopulations being tested through deployment in physical retail spaces. As such, some of the following disclosure will focus upon mechanisms of forward-looking promotional optimizations, in order to understand the context within which the intelligent promotional design system excels, particularly within physical retail spaces; paragraph 0081, discussing that within the forward-looking promotion optimization, the revealed preferences are obtained when the individual consumers respond to specifically designed actual test promotions. The revealed preferences may be collected at a physical retailer based upon transaction records. For example, when a consumer responds in a physical store through completion of a transaction, to a test promotion that offers 20% off a particular consumer packaged goods (CPG) item, that response is tracked in his individual computer-implemented account, or in a transaction record; paragraph 0095, discussing that the promotion involving discounting 2-for-1 potato chips in brown paper bag packaging may be tested further to validate the hypothesis that such a combination elicits a more desirable response than the response from test promotions using only brown paper bag packaging or from test promotions using only 2-for-1 discounts. As many of the “winning” test promotion variable values may be identified and combined in a single promotion as desired.
At some point, a combination of “winning” test promotion variables may be employed to create the general public promotion; paragraphs 0090, 0129), and transmit the one or more hypotheses to the frontend system (paragraph 0079, discussing intelligent test designs for most effective experimentation of promotions and base pricing to more efficiently identify a highly effective general promotion and/or base prices. Such systems and methods assist administrator users to generate and deploy advertising campaigns, and optimize prices throughout the retailer…In one or more embodiments, the inventive forward-looking promotion optimization (FL-PO) involves obtaining actual revealed preferences from individual consumers of the segmented subpopulations being tested through deployment in physical retail spaces. As such, some of the following disclosure will focus upon mechanisms of forward-looking promotional optimizations, in order to understand the context within which the intelligent promotional design system excels; paragraph 0095, discussing that the promotion involving discounting 2-for-1 potato chips in brown paper bag packaging may be tested further to validate the hypothesis that such a combination elicits a more desirable response than the response from test promotions using only brown paper bag packaging or from test promotions using only 2-for-1 discounts. As many of the “winning” test promotion variable values may be identified and combined in a single promotion as desired. At some point, a combination of “winning” test promotion variables may be employed to create the general public promotion, in one or more embodiments; paragraph 0134, discussing that promotion testing using the test promotions on the segmented subpopulations occurs in parallel to the release of a general public promotion and may continue in a continual fashion to validate correlation hypotheses and/or to derive new general public promotions based on the same or different analysis results.
If iterative promotion testing involving correlation hypotheses uncovered by analysis engine is desired, the same test promotions or new test promotions may be generated and executed against the same segmented subpopulations or different segmented subpopulations as needed. As mentioned, iterative promotion testing may validate the correlation hypotheses, serve to eliminate “false positives” and/or uncover combinations of test promotion variables that may elicit even more favorable or different responses from the test subjects); analyze a response to determine at least one action to be taken (paragraph 0195, discussing that the promotion involving discounting 2-for-1 potato chips in brown paper bag packaging may be tested further to validate the hypothesis that such a combination elicits a more desirable response than the response from test promotions using only brown paper bag packaging or from test promotions using only 2-for-1 discounts. As many of the “winning” test promotion variable values may be identified and combined in a single promotion as desired. At some point, a combination of “winning” test promotion variables (involving one, two, three, or more “winning” test promotion variables) may be employed to create the general public promotion; paragraph 0099, discussing that the analysis results from executing the plurality of test promotions are employed to generate future general public promotions. In this manner, data regarding the “expected” efficacy of the proposed general public promotion is obtained even before the proposed general public promotion is released to the public; paragraph 0134, discussing that promotion testing using the test promotions on the segmented subpopulations occurs in parallel to the release of a general public promotion and may continue in a continual fashion to validate correlation hypotheses and/or to derive new general public promotions based on the same or different analysis results. 
If iterative promotion testing involving correlation hypotheses uncovered by analysis engine 132 is desired, the same test promotions or new test promotions may be generated and executed against the same segmented subpopulations or different segmented subpopulations as needed…Iterative promotion testing may validate the correlation hypotheses, serve to eliminate “false positives” and/or uncover combinations of test promotion variables that may elicit even more favorable or different responses from the test subjects), validate the AI model output based on a positive response and invalidate the AI model output based on a negative response (paragraph 0091, discussing that promotion testing may be iterated over and over with different subpopulations (segmented using the same or different segmenting criteria) and different test promotions (devised using the same or different combinations of test promotion variables) in order to validate one or more of the test promotion response analysis result(s) prior to the formation of the generalized public promotion. In this manner, “false positives” may be reduced; paragraph 0095, discussing that the promotion involving discounting 2-for-1 potato chips in brown paper bag packaging may be tested further to validate the hypothesis that such a combination elicits a more desirable response than the response from test promotions using only brown paper bag packaging or from test promotions using only 2-for-1 discounts; paragraph 0134, discussing that promotion testing using the test promotions on the segmented subpopulations occurs in parallel to the release of a general public promotion and may continue in a continual fashion to validate correlation hypotheses and/or to derive new general public promotions based on the same or different analysis results.
If iterative promotion testing involving correlation hypotheses uncovered by analysis engine 132 is desired, the same test promotions or new test promotions may be generated and executed against the same segmented subpopulations or different segmented subpopulations as needed (paths 216/222/226 or 216/224/226 or 216/222/224/226). As mentioned, iterative promotion testing may validate the correlation hypotheses, serve to eliminate “false positives” and/or uncover combinations of test promotion variables that may elicit even more favorable or different responses from the test subjects; paragraph 0147, discussing that if the goal is to maximize profit for the sale of a certain newly created brand of potato chips, embodiments of the invention optimally and adaptively, without requiring human intervention, plan the test promotions, iterate through the test promotions to test the test promotion variables in the most optimal way, learn and validate such that the most result-effective set of test promotions can be derived, and provide such result-effective set of test promotions as recommendations for generalized public promotion to achieve the goal of maximizing profit for the sale of the newly created brand of potato chips; paragraph 0182, discussing that even after roll out, the determinations made during optimization of any variables are routinely and continually reexamined, retested and validated. This ensures that any errors in the testing are corrected for; paragraph 0200, discussing that spurious elastic effects may be filtered out, and overfitting to errors may be avoided by reducing the number of individually estimated elasticities by simple aggregation techniques, by also adjusting the statistical level of significance for assessing statistical effects (e.g., Bonferroni adjustment, etc.)
and finally by cross-validating models and their elasticity estimates through sampling techniques; paragraphs 0093, 0112, 0214); and transmit the at least one action to the frontend system for execution (paragraph 0179, discussing rolling out of pricing policies to a larger set of retailer establishments. This may include merely rolling out these pricing and promotion findings to other retail stores that are similar, or may be rolled out to a wider segment of brick-and-mortar retail locations; paragraph 0193, discussing providing retail stores instructions on prices that should be implemented). Montero does not explicitly teach tailor a visual display of a first user interface to limit data entry of customer data to a set of structured data fields; display a first screen of the first user interface tailored to capture respective ones of the set of structured data fields, the first screen configured to limit data entry to the set of structured data fields; by a structured data model including at least the set of structured data fields; input the set of structured data fields to a model to output in response to the input of the set of structured data fields. However, Brandish, in the analogous art of customer feedback systems, teaches these concepts.
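As an editorial illustration only (not part of the cited record), the iterate-test-validate workflow the examiner attributes to Montero (paragraphs 0091, 0134, 0147) can be sketched as follows: deploy test promotions, measure the response, keep hypotheses that elicit a positive response, and discard those that do not. All function and variable names below are hypothetical and are not drawn from either reference.

```python
# Sketch (assumed names) of Montero's iterative hypothesis-validation loop:
# a positive response lift validates a hypothesis; a negative one invalidates
# it, helping to weed out "false positives" before a general public promotion.

def validate_hypotheses(hypotheses, measure_lift, threshold=0.0):
    """Split hypotheses into validated and invalidated lists based on response lift."""
    validated, invalidated = [], []
    for hypothesis in hypotheses:
        lift = measure_lift(hypothesis)  # e.g., redemption-rate change in the test subpopulation
        (validated if lift > threshold else invalidated).append(hypothesis)
    return validated, invalidated

# Hypothetical example: the 2-for-1 discount shows a positive lift, the
# brown-paper-bag packaging alone does not.
observed = {"2-for-1 discount": 0.12, "brown paper bag": -0.03}
kept, dropped = validate_hypotheses(observed, observed.get)
```

In a real deployment the lift measurement would come from transaction records rather than a fixed table; the loop would then be re-run against the same or different segmented subpopulations, as the cited paragraphs describe.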
Brandish teaches: tailor a visual display of a first user interface to limit data entry of customer data to a set of structured data fields (paragraph 0001, discussing that the invention relates to electronic systems, methods, apparatus and user interfaces for structuring and obtaining user feedback; paragraph 0011, discussing configuring, by a computer, an application-specific feedback survey based on one or more options for content to be included in the application and based on one or more selections; paragraphs 0018-0022, discussing an interface coupled to be in communication with the processor to: configure an application-specific feedback survey based on the one or more options for content to be included in the application and based on the one or more selections;…; wherein the processor: presents the configured feedback survey to one or more users accessing the application; and receives structured feedback from the one or more users via the configured feedback survey; paragraph 0028, discussing that configuring the application-specific feedback survey comprises selecting one or more user interface (UI) elements for inclusion in the feedback survey, each UI element related to an aspect of the website or mobile application upon which feedback from users is to be sought; paragraph 0071, discussing configuring by a computer an application-specific feedback survey based on one or more options for content to be included in the application and based on one or more selections. For example, when the product owner logs into their account, they are presented with an admin portal view which enables the product owner to select one or more user interface (UI) elements to be included in the feedback survey.
The UI elements can be predetermined and/or specific according to the aspect of the website or mobile application upon which the product owner is seeking feedback from users…; paragraph 0085); display a first screen of the first user interface tailored to capture respective ones of the set of structured data fields, the first screen configured to limit data entry to the set of structured data fields (paragraph 0066, discussing a system for managing the solicitation and receipt of structured user feedback...The system comprises an apparatus in the form of a data storage unit, such as a server, having stored therein data relating to one or more application-specific feedback surveys, each survey based on the one or more options for content to be included in the respective application and based on the one or more respective selections; paragraph 0071, discussing configuring by a computer an application-specific feedback survey based on one or more options for content to be included in the application and based on one or more selections. For example, when the product owner logs into their account, they are presented with an admin portal view which enables the product owner to select one or more user interface (UI) elements to be included in the feedback survey. The UI elements can be predetermined and/or specific according to the aspect of the website or mobile application upon which the product owner is seeking feedback from users…; paragraph 0073, discussing that when the feedback survey is published, the user can access the structured questions of the survey when accessing the relevant part, page or section of the product owner's website/application. 
Hence, the users are being asked specific questions based on the applicable UI element(s) about the product owner's website/application when exposed to the relevant part(s), page(s) or section(s) of the product owner's website/application; paragraph 0085, discussing that various screenshots show examples of a product owner admin portal that enables a digital product owner to configure the UI elements according to the questions to be asked in the structured feedback survey and the format and number of the response options. The admin portal thus enables product owners to formulate particular questions regarding specific aspects of the online or mobile application and limit the scope of responses via the format and/or number of the response options); by a structured data model including at least the set of structured data fields (paragraph 0082, discussing examples of receiving structured user feedback via “learning” questions relating to the mobile sports application...FIG. 5A shows the same output as FIG. 4A and FIG. 5B shows a very similar welcome screen to that shown in FIG. 4B, which is displayed after the structured feedback survey tab or other icon is selected by the user or after a predetermined time has elapsed or after a predetermined period of inactivity. In FIG. 5C, the survey displays the same “radio” question as shown in FIG. 4C, but with fewer options. In FIG. 5D the survey seeks simple approval/disapproval of the current content of the application. In FIG. 5E the survey seeks user feedback regarding content the user wishes to see more of in the application in the form of check boxes 512. In FIG. 5F, the survey seeks ratings from users on specific content of the application via drop-down menus allowing one of a predetermined number of scores to be selected. FIGS.
5G and 5H solicit and receive user feedback in the form of a rating of the user's experience of the application as a whole); and input the set of structured data fields to a model to output in response to the input of the set of structured data fields (paragraph 0023, discussing that the method includes re-configuring the application-specific feedback survey, or configuring a further application-specific feedback survey, based on the structured feedback from the one or more users and presenting a re-configured feedback survey or further feedback survey to a user accessing the application; paragraph 0077, discussing that the method can include re-configuring the application-specific feedback survey, or configuring a further application-specific feedback survey, based on the structured feedback from the one or more users and presenting a re-configured feedback survey, or a further application-specific feedback survey, to a user accessing the application; paragraph 0083, discussing that the formulation of the “learning” questions and their structure is facilitated by the product owner admin portal…The aforementioned screenshots demonstrate the flexibility provided by embodiments of the invention in the type, variety and structure of the questions that can be presented to users as part of the feedback survey to solicit specific responses to specific aspects of the website/mobile application). Montero is directed toward a method and apparatus for the generation of and testing of promotions and base pricing within a brick-and-mortar retailer. Brandish relates to consumer feedback. Therefore, they are deemed to be analogous as they both are directed towards solutions for customer data analysis.
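As an editorial illustration only (not part of the cited record), the limitation Brandish is cited for (paragraphs 0071, 0085: the product owner configures the survey's fields and response options, and responses are confined to that structured set) can be sketched as follows. The field names, option values, and helper function are hypothetical, not drawn from the reference.

```python
# Sketch (assumed names) of limiting data entry to a configured set of
# structured data fields, in the spirit of Brandish's admin-portal survey
# configuration: only configured fields with in-range values are accepted.

ALLOWED_FIELDS = {
    "rating": {"1", "2", "3", "4", "5"},  # drop-down score (cf. FIG. 5F)
    "approval": {"yes", "no"},            # approve/disapprove (cf. FIG. 5D)
}

def filter_response(raw_response: dict) -> dict:
    """Keep only configured fields whose values fall within the allowed options."""
    return {field: value for field, value in raw_response.items()
            if field in ALLOWED_FIELDS and value in ALLOWED_FIELDS[field]}

# Free text and out-of-range values are dropped, leaving only structured fields.
structured = filter_response({"rating": "4", "free_text": "great!", "approval": "maybe"})
```

The design point the claim mapping turns on is that the interface constrains entry up front, so every stored response is already a structured record suitable for downstream analysis.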
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Montero with Brandish because the references are analogous art, both being directed to solutions for customer data analysis, which falls within applicant’s field of endeavor (customer feedback collection), and because modifying Montero to include Brandish’s features of tailoring a visual display of a first user interface to limit data entry of customer data to a set of structured data fields; displaying a first screen of the first user interface tailored to capture respective ones of the set of structured data fields, the first screen configured to limit data entry to the set of structured data fields; by a structured data model including at least the set of structured data fields; and inputting the set of structured data fields to a model to output in response to the input of the set of structured data fields, in the manner claimed, would serve the motivation of facilitating improved structured feedback from users (Brandish at paragraph 0081) and providing product owners with accurate feedback (Brandish at paragraph 0097); or in the pursuit of testing customers' preferences and discovering what it is about a product that gives it appeal, thereby aiding in planning marketing and advertising strategies and in making changes in a product to increase its sales; and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. As per claim 5, the Montero-Brandish combination teaches the system of claim 1.
Montero further teaches wherein the frontend system is configured to transmit collected customer responses to the backend system (paragraph 0031, discussing that consumers are asked to state preferences. In an example conjoint study, a consumer may be approached at the store and asked a series of questions... Questions asked may include, for example, “do you prefer Brand X or Brand Y”… or “do you prefer chocolate cookies or oatmeal cookies”…The consumer may state his preference on each of the questions posed; paragraph 0102, discussing that the test promotions themselves may be formulated to isolate specific test promotion variables (such as the aforementioned potato chip brown paper packaging or the 16-oz size packaging); paragraph 0104, discussing that there is provided a promotional idea module for generating ideas for promotional concepts to test. The promotional idea generation module relies on a series of pre-constructed sentence structures that outline typical promotional constructs; paragraph 0130, discussing that the test promotions may be administered via electronic pricing tags displayed within a physical retail location…The responses may be obtained at the point of sale terminal, or via a website or program,…, or via an app implemented on smart phones used by the individuals, for example; paragraph 0136, discussing that the test promotion is received from the test promotion generation server…In step 304, the test promotion is presented to the user. In step 306, the user's response to the test promotion is obtained and transmitted to a database for analysis [i.e., transmit collected customer feedback data to the backend system]; paragraph 0142). Montero does not explicitly teach that the collected customer responses include the set of structured data fields. However, Brandish, in the analogous art of feedback management systems, teaches this concept.
Brandish teaches: collected customer responses including the set of structured data fields (paragraph 0001, discussing that the invention relates to electronic systems, methods, apparatus and user interfaces for structuring and obtaining user feedback; paragraphs 0018-0022, discussing an interface coupled to be in communication with the processor to: configure an application-specific feedback survey based on the one or more options for content to be included in the application and based on the one or more selections;…; wherein the processor: presents the configured feedback survey to one or more users accessing the application; and receives structured feedback from the one or more users via the configured feedback survey; paragraph 0028, discussing that configuring the application-specific feedback survey comprises selecting one or more user interface (UI) elements for inclusion in the feedback survey, each UI element related to an aspect of the website or mobile application upon which feedback from users is to be sought; paragraph 0066, discussing a system for managing the solicitation and receipt of structured user feedback...The system comprises an apparatus in the form of a data storage unit, such as a server, having stored therein data relating to one or more application-specific feedback surveys, each survey based on the one or more options for content to be included in the respective application and based on the one or more respective selections; paragraph 0082, discussing examples of receiving structured user feedback via “learning” questions relating to the mobile sports application...FIG. 5A shows the same output as FIG. 4A and FIG. 5B shows a very similar welcome screen to that shown in FIG. 4B, which is displayed after the structured feedback survey tab or other icon is selected by the user or after a predetermined time has elapsed or after a predetermined period of inactivity. In FIG. 5C, the survey displays the same “radio” question as shown in FIG.
4C, but with fewer options. In FIG. 5D the survey seeks simple approval/disapproval of the current content of the application. In FIG. 5E the survey seeks user feedback regarding content the user wishes to see more of in the application in the form of check boxes 512. In FIG. 5F, the survey seeks ratings from users on specific content of the application via drop-down menus allowing one of a predetermined number of scores to be selected. FIGS. 5G and 5H solicit and receive user feedback in the form of a rating of the user's experience of the application as a whole; paragraph 0085, discussing that various screenshots show examples of a product owner admin portal that enables a digital product owner to configure the UI elements according to the questions to be asked in the structured feedback survey and the format and number of the response options. The admin portal thus enables product owners to formulate particular questions regarding specific aspects of the online or mobile application and limit the scope of responses via the format and/or number of the response options; paragraph 0073). Montero is directed toward a method and apparatus for the generation and testing of promotions and base pricing within brick and mortar retailers. Brandish relates to consumer feedback. Therefore, they are deemed to be analogous as they both are directed towards solutions for customer data analysis. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Montero with Brandish because the references are analogous art, both being directed to solutions for customer data analysis, which falls within applicant’s field of endeavor (customer feedback collection), and because modifying Montero to include Brandish’s feature of collected customer responses including the set of structured data fields, in the manner claimed, would serve the motivation of facilitating improved structured feedback from users (Brandish at paragraph 0081) and providing product owners with accurate feedback (Brandish at paragraph 0097); or in the pursuit of testing customers' preferences and discovering what it is about a product that gives it appeal, thereby aiding in planning marketing and advertising strategies and in making changes in a product to increase its sales; and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. As per claim 6, the Montero-Brandish combination teaches the system of claim 1. Montero further teaches wherein the one or more hypotheses are indicative of a question or action regarding a target product or service based on the customer data (paragraph 0031, discussing that consumers are asked to state preferences. 
In an example conjoint study, a consumer may be approached at the store and asked a series of questions... Questions that may be asked include, for example, “do you prefer Brand X or Brand Y”… or “do you prefer chocolate cookies or oatmeal cookies”… The consumer may state his preference on each of the questions posed; paragraph 0137, discussing that the test promotions are generated using the test promotion generation server. In step 314, the test promotions are provided to the users… In step 316, the system receives the user's responses; paragraph 0096, discussing that stated preference data is obtained when the consumer states what he would hypothetically do in response to, for example, a hypothetically posed conjoint test question; paragraph 0103, discussing that the test promotion response data may be analyzed to answer questions related to specific subpopulation attribute(s) or specific test promotion variable(s). With embodiments of the invention, it is now possible to answer, from the test subpopulation response data, questions such as “How deep of a discount is required to increase by 10% the volume of potato chip purchased by buyers who are 18-25 year-old male shopping on a Monday?” or to generate test promotions specifically designed to answer such a question). As per claim 7, the Montero-Brandish combination teaches the system of claim 1. Montero further teaches wherein the at least one processor is configured to determine the at least one action item including an automatic identification of new and non-existing inventory associated with the customer data (paragraph 0225, discussing that after the prices have thus been deployed, newer transaction data may be collected. New transaction data is leveraged to train and update the elasticity models and other machine learning algorithms. With these updated models, the process can iteratively repeat with even better price optimization. Eventually the true “optimal” price can thus be identified. 
This new optimal price may then be deployed across all stores of the retailer. Subsequent testing is also performed, but on a less frequent basis in order to ensure public shopping habits, new product offerings, competitors, or other factors have not altered the optimal price in a significant manner. If such a situation is found, the system may enter a more intensive testing regime again to determine what the new ‘optimal’ price is). As per claim 10, the Montero-Brandish combination teaches the system of claim 1. Montero further teaches wherein the AI model is trained to identify whether a current product arrangement at a retail location associated with the frontend system matches a consumer demand based, at least in part, on the collected customer data consisting of the structured data model (paragraph 0019, discussing that a typical promotion optimization method may involve examining the sales volume of a particular CPG item over time (e.g., weeks). The sales volume may be represented by a demand curve as a function of time, for example. A demand curve lift (excess over baseline) or dip (below baseline) for a particular time period would be examined to understand why the sales volume for that CPG item increases or decreases during such time period; paragraph 0020, discussing that FIG. 1 shows an example demand curve 102 for Brand X cookies over some period of time. Two lifts 110 and 114 and one dip 112 in demand curve 102 are shown in the example of FIG. 1. Lift 110 shows that the demand for Brand X cookies exceeds the baseline at least during week 2. By examining the promotion effort that was undertaken at that time (e.g., in the vicinity of weeks 1-4 or week 2) for Brand X cookies, marketers have in the past attempted to judge the effectiveness of the promotion effort on the sales volume. 
If the sales volume is deemed to have been caused by the promotion effort and delivers certain financial performance metrics, that promotion effort is deemed to have been successful and may be replicated in the future in an attempt to increase the sales volume. On the other hand, dip 112 is examined in an attempt to understand why the demand falls off during that time (e.g., weeks 3 and 4 in FIG. 1). If the decrease in demand was due to the promotion in week 2 (also known as consumer pantry loading or retailer forward-buying, depending on whether the sales volume shown reflects the sales to consumers or the sales to retailers), this decrease in weeks 3 and 4 should be counted against the effectiveness of week 2; paragraph 0040, discussing that this model may leverage inputs including product volume levels based on historical day, date and store measurements, competitive price, promotions, and product stocking metrics, for example. Subsequently these adjusted logs are leveraged to generate an elasticity model for each product. The elasticity model may be single variate, or multivariate when including cross elasticity effects. These elasticity models are again generated using machine learning algorithms; paragraph 0165, discussing that after the prices are updated, the transaction data for the items is collected. This includes sales volumes over time, changes in basket composition, etc. This data may be collected for a set period or may be tied to a transaction number. For example, some items are deemed very low volume, such as shoe polish in the grocery store. Under normal circumstances, volumes for such a product are measured in the single digits per week. The item itself costs the retailer money to stock (given the loss of shelf space) but may be deemed valuable to the retailer by providing a “one stop shop” for consumers. 
For such an item, modifying the price for a few days (or even weeks) may be insufficient to gain statistically useful information regarding the promotional variable change. Thus, for lower volume products, it may be more advantageous to set a statistically meaningful number of transactions (say 400 for example) and only modify the price once this number of transactions has been met. Additionally, for long lasting products, it may be advantageous to also have prolonged testing periods in order to ascertain demand. For example, a Glade Plug In cartridge is intended to last 30 days. If promoted on one day, and most consumers are not in need of the item since their last cartridge is still operating, the short promotional testing may not adequately capture the impact of the promotion; paragraph 0167, discussing that the optimal price point for every product within a category is set by maximizing the overall objective function of that category which will include product self-elasticities and cross-product elasticities influencing the demand of one product in that category versus another. For example, as the system tests prices for shredded cheese, maybe moving price up on Sargento shredded cheese, the substitutability of this category may see shoppers buy more of Kraft shredded cheese. As a result the cross-elastic effect is taken into account and both Sargento and Kraft's prices will be tested and an optimum will be determined for both brands and that optimum will be tested as well to validate the projection. All price changes will be guided by the objective function which in this case would be to grow volume in the shredded cheese category while maintaining a certain level of margin; paragraphs 0204, 0209). Claim 11 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 1, as discussed above. 
Further, as per claim 11 the Montero-Brandish combination recites a computer implemented method for determining at least one action using an artificial intelligence ("AI") model (Montero, paragraph 0038: “systems and methods for the generation and testing of optimal base prices within brick and mortar retailers”; paragraph 0139; paragraph 0215, discussing that in addition, or as an alternative, to the pricing optimization methods proposed above, the system may instead rely upon artificial intelligence-based models for computing the optimal pricing. These machine learned models may rely upon neural networks, Siamese networks, deep learning techniques or recurrent neural networks). Claim 15 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 5, as discussed above. Claim 16 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 6, as discussed above. Claim 17 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 7, as discussed above. Claim 20 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 10, as discussed above. 20. Claims 2-4 and 12-14 are rejected under 35 U.S.C. 103 as being unpatentable over Montero in view of Brandish, in further view of Agarwal et al., Pub. No.: US 2017/0140007 A1, [hereinafter Agarwal]. As per claim 2, the Montero-Brandish combination teaches the system of claim 1. 
Although not explicitly taught by Montero, Brandish in the analogous art of feedback management systems teaches wherein the set of structured data fields includes at least a product field, a product attribute field, and a product attribute value field (paragraph 0001, discussing that the invention relates to electronic systems, methods, apparatus and user interfaces for structuring and obtaining user feedback; paragraph 0028, discussing that configuring the application-specific feedback survey comprises selecting one or more user interface (UI) elements for inclusion in the feedback survey, each UI element related to an aspect of the website or mobile application upon which feedback from users is to be sought; paragraph 0066, discussing a system for managing the solicitation and receipt of structured user feedback... The system comprises an apparatus in the form of a data storage unit, such as a server, having stored therein data relating to one or more application-specific feedback surveys, each survey based on the one or more options for content to be included in the respective application and based on the one or more respective selections; paragraph 0082, discussing examples of receiving structured user feedback via “learning” questions relating to the mobile sports application... FIG. 5A shows the same output as FIG. 4A and FIG. 5B shows a very similar welcome screen to that shown in FIG. 4B, which is displayed after the structured feedback survey tab or other icon is selected by the user or after a predetermined time has elapsed or after a predetermined period of inactivity. In FIG. 5C, the survey displays the same “radio” question as shown in FIG. 4C, but with fewer options. In FIG. 5D the survey seeks simple approval/disapproval of the current content of the application. In FIG. 5E the survey seeks user feedback regarding content the user wishes to see more of in the application in the form of check boxes 512. In FIG. 
5F, the survey seeks ratings from users on specific content of the application via drop-down menus allowing one of a predetermined number of scores to be selected. FIGS. 5G and 5H solicit and receive user feedback in the form of a rating of the user's experience of the application as a whole; paragraph 0085, discussing that various screenshots show examples of a product owner admin portal that enables a digital product owner to configure the UI elements according to the questions to be asked in the structured feedback survey and the format and number of the response options. The admin portal thus enables product owners to formulate particular questions regarding specific aspects of the online or mobile application and limit the scope of responses via the format and/or number of the response options; paragraph 0087, discussing that the user interface includes a menu to select the format of questions required, such as multiple choice with many answers, yes/no, dropdown ratings etc.…; paragraphs 0005, 0073). Montero is directed toward a method and apparatus for the generation and testing of promotions and base pricing within brick and mortar retailers. Brandish relates to consumer feedback. Therefore, they are deemed to be analogous as they both are directed towards solutions for customer data analysis. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Montero with Brandish because the references are analogous art, both being directed to solutions for customer data analysis, which falls within applicant’s field of endeavor (customer feedback collection), and because modifying Montero to include Brandish’s feature wherein the set of structured data fields includes at least a product field, a product attribute field, and a product attribute value field, in the manner claimed, would serve the motivation of facilitating improved structured feedback from users (Brandish at paragraph 0081) and providing product owners with accurate feedback (Brandish at paragraph 0097); or in the pursuit of testing customers' preferences and discovering what it is about a product that gives it appeal, thereby aiding in planning marketing and advertising strategies and in making changes in a product to increase its sales; and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. The Montero-Brandish combination does not explicitly teach wherein the set of structured data fields includes at least an action field. However, Agarwal in the analogous art of predictive modeling teaches these concepts. Agarwal teaches: wherein the set of structured data fields includes at least an action field (paragraph 0039, discussing that user interfaces and controls can be provided in connection with an example model generator allowing human or automated users to input data to populate and be used in an instantiation of a plan model. 
In some instances, source data can also be collected, requested, retrieved, or otherwise accessed to populate attribute fields, build logic of the plan model, calculate results from the logic within a given plan model, or be otherwise used to generate an instantiation of a particular plan model for addition to the set of plan models; paragraph 0040, discussing that particular instances of a plan model or a particular set of attribute values of a particular plan model can be adopted by an organization as a model of a current working plan, goal, assumption, or approach to be considered by the organization both in its analysis of other business scenarios as well as drive the real world behavior and decision-making of the organization; paragraph 0045, discussing that predictive analytics searches, supported by search engine 240, can include searches for facts that do not yet exist and that involve predictive computations based on assumptions or other information modeled in the enterprise data model; paragraph 0046, discussing that the search engine can also identify previously generated scenarios or plans to recommend possible actions that have in the past proved to be more successful. The recommendation results can be presented in the GUI as a set of settings, assumptions, and measure values that represent the recommendation. The same can be graphically represented within one or more of the supported infographic formats supported by the GUI engine). The Montero-Brandish combination describes features related to customer data analysis. Agarwal is directed toward a planning system that provides an interactive planning and visualization interface. Therefore, they are deemed to be analogous as they both are directed towards solutions for customer data analysis. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the Montero-Brandish combination with Agarwal because the references are analogous art, both being directed to solutions for customer data analysis, which falls within applicant’s field of endeavor (customer feedback collection), and because modifying the Montero-Brandish combination to include Agarwal’s feature wherein the set of structured data fields includes at least an action field, in the manner claimed, would serve the motivation of improving, guiding, and constraining construction and selection of planning and goal scenarios, analyses, and other uses of a plan model (Agarwal at paragraph 0076); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. As per claim 3, the Montero-Brandish-Agarwal combination teaches the system of claim 2. Montero further teaches wherein the input to the AI model includes any data for the product field, product attribute field, the product attribute value field, and the action field (paragraph 0104, discussing that there is provided a promotional idea module for generating ideas for promotional concepts to test. The promotional idea generation module relies on a series of pre-constructed sentence structures that outline typical promotional constructs; paragraph 0114, discussing that the segmentation criteria are of course only illustrative and almost any demographics, behavioral, attitudinal, whether self-described, objective, interpolated from data sources (including past purchase or current purchase data), etc. 
may be used as segmentation criteria if there is an interest in determining how a particular subpopulation would likely respond to a test promotion; paragraph 0122, discussing that response analysis may employ any analysis technique (including statistical analysis) that may reveal the type and degree of correlation between test promotion variables, subpopulation attributes, and promotion responses; paragraph 0126, discussing that it is envisioned that dozens, hundreds, or even thousands of these test promotions may be administered concurrently or staggered in time to the dozens, hundreds or thousands of segmented subpopulations. Further, the large number of test promotions executed (or iteratively executed) improves the statistical validity of the correlations ascertained by the analysis engine. This is because the number of variations in test promotion variable values, subpopulation attributes, etc. can be large, thus yielding rich and granulated result data. The data-rich results enable the analysis engine to generate highly granular correlations between test promotion variables, subpopulation attributes, and type/degree of responses, as well as track changes over time. In turn, these more accurate/granular correlations help improve the probability that a general public promotion created from these correlations would likely elicit the desired response from the general public. It would also, over time, create promotional profiles for specific categories, brands, retailers, and individual shoppers where, e.g., shopper 1 prefers contests and shopper 2 prefers instant financial savings; paragraph 0140, discussing various example promotion-significant responses. As mentioned, redemption of the test offer is one strong indication of interest in the promotion. 
However, other consumer actions responsive to the receipt of a promotion may also reveal the level of interest/disinterest and may be employed by the analysis engine to ascertain which test promotion variable is likely or unlikely to elicit the desired response; paragraph 0215, discussing that the system may instead rely upon artificial intelligence-based models for computing the optimal pricing. These machine learned models may rely upon neural networks, Siamese networks, deep learning techniques or recurrent neural networks, for example; paragraph 0225, discussing that after the prices have thus been deployed, newer transaction data may be collected. New transaction data is leveraged to train and update the elasticity models and other machine learning algorithms. With these updated models, the process can iteratively repeat with even better price optimization. Eventually the true “optimal” price can thus be identified. This new optimal price may then be deployed across all stores of the retailer. Subsequent testing is also performed). Examiner notes that Brandish, in addition to Montero as cited above, also teaches any data for the product field, product attribute field, and the product attribute value field (paragraph 0340, discussing configuring by a computer an application-specific feedback survey based on one or more options for content to be included in the application and based on one or more selections. For example, when the product owner logs into their account, they are presented with an admin portal view (described further herein) which enables the product owner to select one or more user interface (UI) elements to be included in the feedback survey. The UI elements can be predetermined and/or specific according to the aspect of the website or mobile application upon which the product owner is seeking feedback from users. 
For example, the UI elements can relate to the main menu/navigation of their product, news items, videos, game centre, subscriber/membership, discounts/offers, statistics, betting, awards etc. or any aspect of the product owner's website or mobile application; paragraph 0074, discussing that the feedback survey is presented to a user when the user is accessing any part, page or section or multiple parts/pages/sections of the product owner's website/application. This can be the case when, for example, the feedback survey comprises structured questions applicable to multiple parts/pages/sections of the product owner's website/application). As per claim 4, the Montero-Brandish combination teaches the system of claim 1. Although not explicitly taught by the Montero-Brandish combination, Agarwal in the analogous art of predictive modeling teaches wherein the set of structured data fields reflect summarized hypotheses data include at least a product field, a product attribute field, a product attribute value field, and an action field (paragraph 0039, discussing that user interfaces and controls can be provided in connection with an example model generator allowing human or automated users to input data to populate and be used in an instantiation of a plan model. 
In some instances, source data can also be collected, requested, retrieved, or otherwise accessed to populate attribute fields, build logic of the plan model, calculate results from the logic within a given plan model, or be otherwise used to generate an instantiation of a particular plan model for addition to the set of plan models; paragraph 0040, discussing that particular instances of a plan model or a particular set of attribute values of a particular plan model can be adopted by an organization as a model of a current working plan, goal, assumption, or approach to be considered by the organization both in its analysis of other business scenarios as well as drive the real world behavior and decision-making of the organization; paragraph 0045, discussing that predictive analytics searches, supported by search engine 240, can include searches for facts that do not yet exist and that involve predictive computations based on assumptions or other information modeled in the enterprise data model; paragraph 0046, discussing that the search engine can also identify previously generated scenarios or plans to recommend possible actions that have in the past proved to be more successful. The recommendation results can be presented in the GUI as a set of settings, assumptions, and measure values that represent the recommendation. The same can be graphically represented within one or more of the supported infographic formats supported by the GUI engine). The Montero-Brandish combination describes features related to customer data analysis. Agarwal is directed toward a planning system that provides an interactive planning and visualization interface. Therefore, they are deemed to be analogous as they both are directed towards solutions for customer data analysis. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the Montero-Brandish combination with Agarwal because the references are analogous art, both being directed to solutions for customer data analysis, which falls within applicant’s field of endeavor (customer feedback collection), and because modifying the Montero-Brandish combination to include Agarwal’s feature wherein the set of structured data fields reflect summarized hypotheses data include at least a product field, a product attribute field, a product attribute value field, and an action field, in the manner claimed, would serve the motivation of improving, guiding, and constraining construction and selection of planning and goal scenarios, analyses, and other uses of a plan model (Agarwal at paragraph 0076); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 12 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 2, as discussed above. Claim 13 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 3, as discussed above. Claim 14 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 4, as discussed above. 21. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Montero in view of Brandish, in further view of Steingrimsson et al., Pub. No.: US 2019/0087529 A1, [hereinafter Steingrimsson]. 
As per claim 8, the Montero-Brandish combination teaches the system of claim 1, but it does not explicitly teach wherein the AI model is trained to output a hypothesis that specifies criteria for a new product. However, Steingrimsson in the analogous art of retail planning systems teaches this concept (paragraph 0006: “This invention presents a high-level framework for applying artificial intelligence (predictive analytics) to numerous areas of engineering product design, to mission or to retail planning.”; paragraph 0099, discussing a framework for utilizing big data analytics to harvest repositories of known good designs for the purpose of assisting with specific areas in engineering product design. The framework can be applied to product design in general, as well as to mission and retail planning... The framework for predictive analytics assumes that, during the course of design projects, design information is captured in structured fashion… Project binders from past design projects are then archived in databases and made available to designers working on new design projects… The system is trained so that it can provide the best possible guiding information, such as for new product design, and sanitize the decisions made on the new projects. On the new projects, the system helps identify anomalies, defined in terms of deviations from the guiding reference, and prompt for investigation; paragraph 0206, discussing that for new designs, designers could extract the design vector, x, from the new requirements, apply to the system model, and get the guiding design, y, as an output. The guiding design, y, could be a reference (starting point) for design of the new product. Such reference may help improve the fidelity of design decisions. If design decisions cause the product to deviate significantly from the reference, y, explanations are likely warranted). The Montero-Brandish combination describes features related to customer data analysis. 
Steingrimsson is directed toward applying artificial intelligence to aid with product design, mission or retail planning. Therefore, they are deemed to be analogous as they both are directed towards solutions for product planning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the Montero-Brandish combination with Steingrimsson because the references are analogous art, both being directed to solutions for data analysis, which falls within applicant’s field of endeavor (customer feedback collection), and because modifying the Montero-Brandish combination to include Steingrimsson’s feature wherein the AI model is trained to output a hypothesis that specifies criteria for a new product, in the manner claimed, would serve the motivation of allowing companies to analyze the data collected, in search for insights, that can help them provide better value to their customers, and help them make better business decisions (Steingrimsson at paragraph 0007); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 18 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 8, as discussed above. 22. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Montero in view of Brandish, in further view of Ramakrishnan et al., Pub. No.: US 2021/0125255 A1, [hereinafter Ramakrishnan]. As per claim 9, the Montero-Brandish combination teaches the system of claim 1, but it does not explicitly teach wherein the AI model is trained to output a hypothesis that identifies a missing product from inventory. 
However, Ramakrishnan, in the analogous art of product recommendation systems, teaches this concept (paragraph 0011, discussing a method of automated product recommendation. The method includes generating an entry in a mirror-cart linked with a user account, the entry corresponding to a physical item selected for purchase by the user in a retail location. The method includes identifying a recipe associated with the item, which recipe identifies other items that can be combined with the selected item to create a product, and comparing items identified in the recipe to items contained in the mirror-cart to identify a missing item, which missing item is identified in the recipe and is not contained in the mirror-cart. The method includes prompting the user to purchase the missing item; paragraph 0014, discussing that prompting the user to purchase the missing item includes recommending purchase of the missing item. In some embodiments, prompting the user to purchase the missing item includes requesting a user input indicating whether the user desires to purchase the missing item, receiving a response to the request for user input, and prompting the user to purchase the missing item when the user response indicates a desire to purchase the missing item; paragraph 0094, discussing the direct identification by the server of potential missing items from the mirror-cart. In some embodiments, for example, the machine learning model can be trained to identify potential missing items based on features generated from the items in the mirror-cart, and in some embodiments, these features can be generated based on items in the mirror-cart as well as based on historic data relating to the user's previous purchases, which historic data is contained in the account database; paragraph 0101, discussing selecting missing items via use of a machine learning model trained to output missing items based on ingested features).
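For orientation, the recipe-comparison step the examiner cites from Ramakrishnan (paragraphs 0011 and 0094) amounts to a set difference between a recipe's item list and the mirror-cart contents. A minimal illustrative sketch of that comparison, with all names hypothetical and not drawn from the reference:

```python
def find_missing_items(recipe_items, mirror_cart_items):
    """Return recipe items not present in the mirror-cart,
    i.e. candidates to prompt the user to purchase."""
    return sorted(set(recipe_items) - set(mirror_cart_items))

# Example: a pasta recipe vs. what the shopper has scanned so far.
recipe = ["pasta", "tomato sauce", "parmesan", "basil"]
cart = ["pasta", "tomato sauce"]
print(find_missing_items(recipe, cart))  # ['basil', 'parmesan']
```

Ramakrishnan's claimed embodiments additionally use a trained model over cart features and purchase history; the set difference above only sketches the baseline comparison the abstract describes.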
The Montero-Brandish combination describes features related to customer data analysis. Ramakrishnan is directed toward systems and methods for automated product recommendation. Therefore, the references are deemed to be analogous, as both are directed toward solutions for product planning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the Montero-Brandish combination with Ramakrishnan because the references are analogous art, both being directed to solutions for data analysis within applicant’s field of endeavor (customer feedback collection), and because modifying the Montero-Brandish combination to include Ramakrishnan’s feature wherein the AI model is trained to output a hypothesis that identifies a missing product from inventory, in the manner claimed, would serve the motivation of providing automated product recommendations (Ramakrishnan at paragraph 0011). The combination is further obvious because the claimed invention is merely a combination of old elements; in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Claim 19 recites substantially similar limitations and stands rejected via the art citations and rationale applied to claim 9, as discussed above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Whiting et al., Pub. No.: US 2019/0228736 A1 – describes a system that enables a variety of client devices having different display features to present a survey question to a user using a layout customized based on specific characteristics corresponding to each of the various client devices.

Hudda et al., Pub. No.: US 2021/0264507 A1 – describes a product feedback page of a network-based commerce website.

Khoury et al., Pub. No.: US 2021/0224858 A1 – describes an artificial intelligence (AI) module, such as an AI module that is configured for implementing one or more evaluations and/or scoring protocols for evaluating and scoring collected media elements.

Agrawal, Narendra, and Stephen A. Smith. "Optimal inventory management for a retail chain with diverse store demands." European Journal of Operational Research 225.3 (2013): 393-403 – describes a dynamic stochastic optimization model that determines the total order size and the optimal inventory allocation across nonidentical stores in each period.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Darlene Garcia-Guerra, whose telephone number is (571) 270-3339. The examiner can normally be reached M-F, 7:30 a.m.-5:00 p.m. EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian M. Epstein, can be reached at 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Darlene Garcia-Guerra/
Primary Examiner, Art Unit 3625

Prosecution Timeline

Dec 23, 2024
Application Filed
Feb 03, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602305
CUSTOMER JOURNEY PREDICTION AND RECOMMENDATION SYSTEMS AND METHODS
2y 5m to grant · Granted Apr 14, 2026
Patent 12591927
SYSTEMS AND METHODS FOR DETERMINING A GRAPHICAL USER INTERFACE FOR GOAL DEVELOPMENT
2y 5m to grant · Granted Mar 31, 2026
Patent 12591845
METHOD AND ARRANGEMENT FOR CARRYING OUT CONSTRUCTION MEASURES
2y 5m to grant · Granted Mar 31, 2026
Patent 12572876
SYSTEM AND METHOD FOR OBTAINING AUDIT EVIDENCE
2y 5m to grant · Granted Mar 10, 2026
Patent 12572866
STORE MANAGEMENT SYSTEM AND STORE MANAGEMENT METHOD
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
23%
Grant Probability
57%
With Interview (+34.1%)
4y 6m
Median Time to Grant
Low
PTA Risk
Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
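As a sanity check, the projection figures above are mutually consistent: 119 grants out of 523 resolved cases rounds to the 23% career allow rate, and adding the +34.1-point interview lift yields the 57% with-interview figure. A minimal sketch of that arithmetic (illustrative only; it assumes the lift is additive in percentage points, and the tool's actual model is not disclosed):

```python
granted, resolved = 119, 523      # examiner's career totals from the dashboard
allow_rate = granted / resolved   # base grant probability without an interview
interview_lift = 0.341            # +34.1 percentage points, assumed additive
with_interview = allow_rate + interview_lift

print(round(allow_rate * 100))      # 23
print(round(with_interview * 100))  # 57
```

The same base rate also underlies the "23% Grant Probability" shown in the summary panel, since the tool states it is derived directly from the career allow rate.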
