Prosecution Insights
Last updated: April 19, 2026
Application No. 18/478,944

UNIFIED SYSTEM FOR COMPREHENSIVE BRAND EXPERIENCE AND CUSTOMER EXPERIENCE ANALYSIS AND MEASUREMENT

Final Rejection — §101 §103 §112
Filed: Sep 29, 2023
Examiner: GARCIA-GUERRA, DARLENE
Art Unit: 3625
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Proper Villains LLC
OA Round: 2 (Final)
Grant Probability: 23% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 6m
Grant Probability With Interview: 57%

Examiner Intelligence

Career Allow Rate: 23% (119 granted / 523 resolved; -29.2% vs TC avg)
Interview Lift: +34.1% (resolved cases with interview)
Avg Prosecution: 4y 6m (53 currently pending)
Total Applications: 576 (across all art units)
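The card figures above are simple derived statistics. A minimal sketch reproducing them from the raw counts shown; reading the interview lift as the percentage-point gap between the 57% with-interview figure and the baseline allow rate is an assumption, though it matches the +34.1% card closely:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# 119 granted out of 523 resolved -> ~22.8%, displayed as 23% on the card.
rate = allow_rate(119, 523)

# Interview lift, interpreted here (assumption) as the percentage-point
# difference between the 57% with-interview grant probability and the
# baseline rate; this lines up with the +34.1% figure above.
lift = 57.0 - rate

print(f"allow rate: {rate:.1f}%  interview lift: {lift:+.1f} pts")
```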

Statute-Specific Performance

§101: 36.6% (-3.4% vs TC avg)
§103: 42.3% (+2.3% vs TC avg)
§102: 2.6% (-37.4% vs TC avg)
§112: 16.2% (-23.8% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 523 resolved cases
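The per-statute deltas above are internally consistent with a single Tech Center baseline. A small sketch back-calculating that estimate from the card data (all values transcribed from this page, not from any external source):

```python
# Examiner's per-statute rates and their deltas vs. the TC average,
# transcribed from the Statute-Specific Performance card above.
examiner = {"§101": 36.6, "§103": 42.3, "§102": 2.6, "§112": 16.2}
delta = {"§101": -3.4, "§103": +2.3, "§102": -37.4, "§112": -23.8}

# Back out the implied TC average for each statute: examiner rate minus delta.
tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
print(tc_avg)
```

Every statute backs out to the same 40.0%, which suggests the Tech Center average shown on the card is a single TC-wide estimate rather than a separate per-statute baseline.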

Office Action

§101 §103 §112
DETAILED ACTION

Notice to Applicant

The following is a FINAL Office action upon examination of application number 18/478,944, filed on 09/29/2023. Claims 1-18 are pending in this application and have been examined on the merits discussed below. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Application 18/478,944, filed 09/29/2023, claims priority from Provisional Application 63/378,065, filed 10/01/2022.

Response to Amendment

In the response filed November 24, 2025, Applicant amended claims 1, 3-10, and 12-18, and did not cancel any claims. No new claims were presented for examination.

5. Applicant's amendments to claims 1 and 10 are hereby acknowledged. The amendments are sufficient to overcome the previously issued claim objections; accordingly, these objections have been removed.

6. Applicant's amendments to claims 8 and 17 are hereby acknowledged. The amendments are sufficient to overcome the previously issued claim rejection under 35 U.S.C. 112(b); accordingly, this rejection has been withdrawn.

7. Applicant's amendments to the claims are hereby acknowledged. The amendments are not sufficient to overcome the previously issued claim rejection under 35 U.S.C. 101; accordingly, this rejection has been maintained.

Response to Arguments

8. Applicant's arguments filed November 24, 2025, have been fully considered.

9.
Applicant submits “The computer-readable storage medium of claim 1 does not fall within the "enumerated sub-groupings" as it does not include "of fundamental economic principles or practices, commercial or legal interactions, and managing personal behavior and relationships or interactions between people" and instead involves the storage of a template, the automatic generation of multiple different sets of questions from the stored template, providing targeted sets of questions to multiple groups of users and received user selectable responses via a communication interface, and transmission of a report for display. The computer-readable medium enables a central system to store the template, communicate with multiple remote user terminals. MPEP 2106.04(a)(2)(C) states that the "sub-grouping managing personal behavior or relationships or interactions between people' include social activities, teaching, and following rules or instructions" none of which are present in claim 1.” [Applicant’s Remarks, 11/24/2025, pages 8-9] The Examiner respectfully disagrees. 
In response, it is noted that the claim limitations “store a template for sets of questions for a multiple element brand design assessment; automatically generate, from the stored template, multiple different sets of questions to assess experience from different groups associated with a brand design, wherein the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design; provide targeted set of questions for the multiple element brand design assessment; receive a set of user selectable responses in response to the provided targeted sets of questions; combine scores based on the set of user selectable responses received to generate metrics for assessed elements of the multiple element brand design assessment, wherein a compilation of scores is automatically performed and saved as responses arrive; update the saved compilation of scores as additional responses for a respective type of user are received; and transmit for display a report comprising the metrics for the assessed elements of the multiple element brand design assessment,” when evaluated under Step 2A Prong One, are part of the abstract idea itself, i.e., are steps within the “Certain methods of organizing human activity” group within the enumerated groupings of abstract ideas set forth in MPEP 2106. For example, the above steps cover embodiments for organizing human activity given that the sequence of activities pertaining to assessing brand experience and customer experience falls squarely within the realm of “managing personal behavior or relationships or interactions between people” or “following rules or instructions,” as explained by the “Certain Methods of Organizing Human Activity” abstract idea grouping set forth in MPEP 2106.
Next, it is noted that the defined sequence of activities for assessing experience from different groups associated with a brand design, when read in light of the Specification, is plainly generated for the primary purpose of managing human behavior, i.e., that of the customer, as discussed throughout the Specification (See at least paragraph 0003, e.g., “Aspects disclosed herein include a method, system, and computer-readable media for assessing brand experience and customer experience as a tool to measure and align business intentions with customer experiences. A central processing system provides.”). Therefore, the primary purpose of the claimed invention is unequivocally assessing customer experience. Accordingly, Applicant’s argument is not persuasive because the claims have been shown to recite an abstract idea via limitations falling under the “Certain methods of organizing human activity” grouping set forth in MPEP 2106, i.e., limitations that set forth steps for managing personal behavior or relationships or interactions between people, including following rules or instructions. The Office maintains that the claims recite an abstract idea falling under “Certain methods of organizing human activity.”

10. Applicant submits “even if the claims are considered to be directed to an abstract idea under Step 1 and Step 2A, Prong I, the Applicant submits that the claim as a whole integrates the judicial exception (to which the Applicant does not concede) into a practical application under Step 2A Prong II. See MPEP § 2106.04(d).” [Applicant’s Remarks, 11/24/2025, page 9] In response to Applicant’s argument that “the claim as a whole integrates the judicial exception (to which the Applicant does not concede) into a practical application under Step 2A Prong II.
See MPEP § 2106.04(d),” it is noted that the additional elements in amended claim 1 are: computer executable code, processor circuitry, a central processing system, a communication interface, a central system, multiple remote user terminals, a first set of multiple remote user terminals, a second set of multiple remote user terminals, a third set of multiple remote user terminals of the third type of users, and each of the multiple remote user terminals, which merely serve to tie the abstract idea to a particular technological environment (a computer-based operating environment) via generic computing hardware and software/instructions, which is not sufficient to amount to a practical application, as noted in MPEP 2106.05. Applicant has provided no facts or evidence, nor a persuasive line of reasoning, showing how the additional elements integrate the abstract idea into a practical application. It is also noted that the claims are devoid of any discernible change, transformation, or improvement to a computer (software or hardware) or to any existing technology. Applicant has not shown that any specific technological improvement is achieved within the scope of the claims. It bears emphasis that no code, processor circuitry, central processing system, interface, central system, remote user terminals, or other technological elements are modified or improved upon in any discernible manner. Instead, the result produced by the claims is simply information relating to a report comprising the metrics for the assessed elements of the multiple element brand design assessment, which is not a technical result or improvement.
Furthermore, the additional elements fail to integrate the abstract idea into a practical application because they fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. For the reasons above, this argument is found unpersuasive.

11. Applicant submits “When considered in combination, the additional elements of at least the storage, communication interface, and transmission for display, similar to BASCOM, the present claims recite a "technology based solution" of a system that enables storage of a template and the automatic generation of different sets of questions, exchange of information via a communication interface, saving/update as responses are received, and transmission of a report for display. Thus, the aspects presented herein provide for the collection of information to enable a report with a visual display of the information via a unique lens, as described in connection with FIG. 6 of the present application.
See MPEP 2106.05.” [Applicant’s Remarks, 11/24/2025, page 10] Applicant alludes to Step 2B of the eligibility inquiry by suggesting that “When considered in combination, the additional elements of at least the storage, communication interface, and transmission for display, similar to BASCOM, the present claims recite a "technology based solution" of a system that enables storage of a template and the automatic generation of different sets of questions, exchange of information via a communication interface, saving/update as responses are received, and transmission of a report for display.” The Examiner respectfully disagrees. In response to Applicant’s citation to BASCOM, the Examiner points out that the Federal Circuit found that the claims in BASCOM included a “non-conventional and non-generic arrangement” of the additional elements, including installation of a filtering tool at a specific location, remote from end-users, with customizable filtering features specific to each end user. In contrast, Applicant's claims have not been shown to encompass a “non-conventional and non-generic arrangement” of the additional elements. The additional elements of computer executable code, processor circuitry, a central processing system, a communication interface, a central system, multiple remote user terminals, a first set of multiple remote user terminals, a second set of multiple remote user terminals, a third set of multiple remote user terminals of the third type of users, and each of the multiple remote user terminals have not been shown to be used or arranged in any unconventional manner or non-generic manner, and therefore the analogy to the CAFC’s eligibility rationale in the BASCOM decision is not persuasive. Moreover, in response to Applicant’s argument that “the aspects presented herein provide for the collection of information to enable a report with a visual display of the information via a unique lens, as described in connection with FIG. 
6 of the present application,” it is noted that the claimed “transmit for display a report” merely amounts to presentation of results of abstract idea processing. Merely transmitting information and displaying results, even if characterized as via “a unique lens,” does not integrate the abstract idea into a practical application. Displaying analyzed information itself does not improve the functioning of the computer or another technology. For the reasons above, in addition to the reasons set forth below in the updated §101 rejection, the arguments and amendments are not sufficient to overcome the §101 rejection.

12. Applicant submits “The cited references fail to disclose or suggest at least "automatically generate, from the stored template, multiple different sets of questions to assess experience from different groups associated with a brand design, wherein the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design," in combination with the other aspects of claim 1.” [Applicant’s Remarks, 11/24/2025, page 11] In response to the Applicant’s argument that “the cited references fail to disclose or suggest at least "automatically generate, from the stored template, multiple different sets of questions to assess experience from different groups associated with a brand design, wherein the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design," in combination with the other aspects of claim 1,” it is noted that this argument is a mere allegation of patentability by the Applicant with no supporting rationale or explanation.
Merely stating that the claims do not teach a feature does not offer any insight as to why the specific sections of the prior art relied upon by the Examiner fail to disclose the claimed features. Applicant's arguments amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Moreover, the Examiner notes that the limitations argued by Applicant were newly added to the claims in the response filed 11/24/2025 and have been addressed in the updated rejection below. Applicant’s argument has been considered, but it pertains to amendments to independent claim 1 that are believed to be addressed via the updated ground of rejection under §103 set forth in the instant Office action, which incorporates a new reference and new citations to address the amended limitations in claim 1 and supports a conclusion of obviousness of the amended claims.

13. Applicant submits “Jay does not disclose or suggest that "multiple different sets of questions" are automatically generated from a stored template "to assess experience from different groups associated with a brand design" as in amended claim 1 (emphasis added).
Furthermore, Jay does not disclose or suggest that "the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design," (as in amended claim 1).” [Applicant’s Remarks, 11/24/2025, page 12] As best understood by the Examiner, Applicant argues that Jay does not disclose or suggest “automatically generate, from the stored template, multiple different sets of questions to assess experience from different groups associated with a brand design, wherein the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design.” In response, the Examiner notes that the limitations argued by Applicant were newly added to the claims in the response filed 11/24/2025 and have been addressed in the updated rejection below. Applicant’s argument has been considered, but it pertains to amendments to independent claim 1 that are believed to be addressed via the updated ground of rejection under §103 set forth in the instant Office action, which incorporates a new reference and new citations to address the amended limitations in claim 1 and supports a conclusion of obviousness of the amended claims.

14.
Applicant submits “In addition to the automatic generation of the different sets of questions, amended claim 1 further includes "provide, via a communication interface, targeted sets of questions for a multiple element brand design assessment from a central system to multiple remote user terminals, wherein a first set of targeted customer questions are transmitted to a first set of multiple remote user terminals of the first type of users, a second set of targeted culture questions are transmitted to a second set of multiple remote user terminals of the second type of users, and a third set of targeted community questions are transmitted to a third set of multiple remote user terminals of the third type of users," which is not disclosed or suggested by the cited references.” [Applicant’s Remarks, 11/24/2025, page 12] In response to the Applicant’s argument that the cited references do not disclose or suggest “provide, via a communication interface, targeted sets of questions for a multiple element brand design assessment from a central system to multiple remote user terminals, wherein a first set of targeted customer questions are transmitted to a first set of multiple remote user terminals of the first type of users, a second set of targeted culture questions are transmitted to a second set of multiple remote user terminals of the second type of users, and a third set of targeted community questions are transmitted to a third set of multiple remote user terminals of the third type of users,” it is noted that this argument is a mere allegation of patentability by the Applicant with no supporting rationale or explanation. Merely stating that the claims do not teach a feature does not offer any insight as to why the specific sections of the prior art relied upon by the Examiner fail to disclose the claimed features. 
Applicant's arguments amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. Moreover, the Examiner notes that the limitations argued by Applicant were newly added to the claims in the response filed 11/24/2025 and have been addressed in the updated rejection below. Applicant’s argument has been considered, but it pertains to amendments to independent claim 1 that are believed to be addressed via the updated ground of rejection under §103 set forth in the instant Office action, which incorporates a new reference and new citations to address the amended limitations in claim 1 and supports a conclusion of obviousness of the amended claims.

15. Applicant submits “The audit unit in Jay does not disclose or suggest providing targeted sets of questions to the different types of users, as in claim 1. In contrast to the targeted sets of questions in claim 1, the footprints cited in Jay are determined by the audit unit "based on the IO audit." See Office Action at page 16 citing Jay.” [Applicant’s Remarks, 11/24/2025, page 12] The Examiner respectfully disagrees. With respect to the §103 rejection of independent claim 1, Applicant argues that “Jay does not disclose or suggest providing targeted sets of questions to the different types of users, as in claim 1.” However, Jay expressly discloses an IO (impact-oriented) survey unit that generates questions from a template (col. 45, lines 5-10; col. 53, lines 51-65). The survey unit enables brands to configure surveys and structure questions to obtain desired insights. Users access and participate in “Active” quizzes/surveys via the system interface, as described in col. 39, lines 22-43, which necessarily requires transmitting the survey questions to user devices over a network interface.
Moreover, Jay teaches that the IO survey unit renders each question with predetermined response options (col. 45, lines 27-31), and that surveys are configured by the brand for presentation in the digital marketplace (col. 53, lines 51-56). This demonstrates that the system provides specific (i.e., targeted) survey question sets, generated from templates and delivered to users’ terminals for participation. Thus, given the broadest reasonable interpretation consistent with the specification in construing the claimed invention, it is the Examiner’s position that the disclosure of Jay teaches, or at least suggests, the disputed limitation. Accordingly, this argument is found unpersuasive.

16. Applicant’s remaining arguments either logically depend from the above-rejected arguments, in which case they too are unpersuasive for the reasons set forth above, or they are directed to features which have been newly added via amendment. Therefore, this is now the Examiner's first opportunity to consider these limitations, and as such any arguments regarding these limitations would be inappropriate since they have not yet been examined. A full rejection of these limitations will be presented later in this Office Action.

Claim Objections

17. Claims 3 and 4 are objected to because of the following informalities: typographical errors. Claim 3 was amended to recite “wherein the multiple remote user terminals are associated with with a brand, product, service, or organization that is a subject of the multiple element brand design assessment.” Claim 3 should recite “wherein the multiple remote user terminals are associated with a brand, product, service, or organization that is a subject of the multiple element brand design assessment.” Appropriate correction is required.
Claim 4 was amended to recite “wherein the targeted sets of questions include one or more question for each of multiple focus areas including a fiscal area, a cultural area, a sociological area, and a contextual area.” Claim 4 should recite “wherein the targeted sets of questions include one or more questions for each of multiple focus areas including a fiscal area, a cultural area, a sociological area, and a contextual area.” Appropriate correction is required.

Claim Rejections - 35 USC § 112

18. The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

19. Claims 10-18 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which they depend, or for failing to include all the limitations of the claim upon which they depend.

20. Claim 10 was amended to recite “wherein a compilation of scores is automatically performed and saved as responses arrive; updating the stored compilation of scores as additional responses for a respective type of user are received.” The phrase “the stored compilation of scores” lacks antecedent basis, and therefore renders the claim indefinite.
It is unclear whether “stored” and “saved” refer to the same compilation, or whether a different compilation is intended. Appropriate correction is required.

21. All claims dependent from the above rejected claims are also rejected due to dependency.

Claim Rejections - 35 USC § 101

22. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

23. Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-patentable subject matter. The claims are directed to an abstract idea without significantly more.

24. Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application. The eligibility analysis in support of these findings is provided below, in accordance with MPEP 2106. With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the computer program product (claims 1-9) and the method (claims 10-18) are directed to at least one potentially eligible category of subject matter (i.e., an article of manufacture and a process, respectively). Thus, Step 1 of the Subject Matter Eligibility test for claims 1-18 is satisfied. With respect to Step 2A Prong One, it is next noted that the claims recite abstract ideas that fall into (1) “Certain Methods of Organizing Human Activity,” by setting forth steps for managing commercial interactions (e.g., marketing or sales activities or behaviors; business relations); and (2) “Mathematical Concepts,” such as mathematical relationships, formulas, and calculations, per the enumerated groupings of abstract ideas set forth in MPEP 2106.
With respect to independent claim 1, the limitations reciting the abstract idea are indicated in bold below: computer executable code for information modeling, processor circuitry causes a central processing system to: store a template for sets of questions for a multiple element brand design assessment; automatically generate, from the stored template, multiple different sets of questions to assess experience from different groups associated with a brand design, wherein the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design; provide, via a communication interface, targeted sets of questions for the multiple element brand design assessment from a central system to multiple remote user terminals, wherein a first set of targeted customer questions are transmitted to a first set of multiple remote user terminals of the first type of users, a second set of targeted culture questions are transmitted to a second set of multiple remote user terminals of the second type of users, and a third set of targeted community questions are transmitted to a third set of multiple remote user terminals of the third type of users; receive, via the communication interface, a set of user selectable responses from each of the first set of multiple remote user terminals, the second set of multiple remote user terminals, and the third set of multiple remote user terminals in response to the provided targeted sets of questions; combine scores based on the set of user selectable responses received from each of the multiple remote user terminals to generate metrics for assessed elements of the multiple element brand design assessment, wherein a compilation of scores is automatically performed and saved as responses arrive; update the saved compilation of scores as additional responses for a respective type of
user are received; and transmit for display a report comprising the metrics for the assessed elements of the multiple element brand design assessment. These limitations recite steps which encompass activity for managing personal behavior or relationships or interactions (e.g., following rules or instructions), and managing commercial interactions, and also recite limitations falling within the Mathematical Concepts abstract idea grouping. Because the above-noted limitations recite steps falling within the Certain methods of organizing human activity and Mathematical Concepts abstract idea groupings of MPEP 2106, they have been determined to recite at least one abstract idea when evaluated under Step 2A Prong One of the eligibility inquiry. Independent claim 10 recites limitations similar to those recited in claim 1 and is therefore found to recite the same abstract idea. With respect to Step 2A Prong Two, the judicial exception is not integrated into a practical application.
With respect to the independent claims, the additional elements recited in the claims are: computer executable code, processor circuitry, a central processing system, a communication interface, a central system, multiple remote user terminals, a first set of multiple remote user terminals, a second set of multiple remote user terminals, a third set of multiple remote user terminals of the third type of users, each of the first set of multiple remote user terminals, the second set of multiple remote user terminals, and the third set of multiple remote user terminals, and each of the multiple remote user terminals (claim 1); a communication interface, a central system, multiple remote user terminals, a first set of multiple remote user terminals, a second set of multiple remote user terminals, a third set of multiple remote user terminals of the third type of users, each of the first set of multiple remote user terminals, the second set of multiple remote user terminals, and the third set of multiple remote user terminals, and each of the multiple remote user terminals (claim 10). These additional elements have been evaluated, but fail to integrate the abstract idea into a practical application because they amount to using generic computing elements or computer-executable instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment. See MPEP 2106.05(f) and 2106.05(h). Even if the step for outputting is not deemed part of the abstract idea, this step is at most directed to insignificant extra-solution activity, which is not sufficient to amount to a practical application. See MPEP 2106.05(g).
In addition, these limitations fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, fail to effect a transformation of a particular article to a different state or thing, and fail to apply/use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amount to significantly more than the judicial exception. With respect to Step 2B, it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. 
With respect to the independent claims, the additional elements recited in the claims are: computer executable code, processor circuitry, a central processing system, a communication interface, a central system, multiple remote user terminals, a first set of multiple remote user terminals, a second set of multiple remote user terminals, a third set of multiple remote user terminals of the third type of users, each of the first set of multiple remote user terminals, the second set of multiple remote user terminals, and the third set of multiple remote user terminals, and each of the multiple remote user terminals (claim 1); a communication interface, a central system, multiple remote user terminals, a first set of multiple remote user terminals, a second set of multiple remote user terminals, a third set of multiple remote user terminals of the third type of users, each of the first set of multiple remote user terminals, the second set of multiple remote user terminals, and the third set of multiple remote user terminals, and each of the multiple remote user terminals (claim 10). These elements have been considered individually and in combination, but fail to add significantly more to the claims because they amount to using generic computing elements or instructions (software) to perform the abstract idea, similar to adding the words “apply it” (or an equivalent), which merely serves to link the use of the judicial exception to a particular technological environment and does not amount to significantly more than the abstract idea itself. Notably, Applicant’s Specification suggests that virtually any type of computing device can be used to implement the claimed invention (Specification at paragraph [0072]). Accordingly, the generic computer involvement in performing the claim steps merely serves to generally link the use of the judicial exception to a particular technological environment, which does not add significantly more to the claim. 
See, e.g., Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976. Even if the steps for transmitting are not deemed part of the abstract idea, these steps are at most directed to insignificant extra-solution activity, which has been recognized as well-understood, routine, and conventional, and thus insufficient to add significantly more to the abstract idea. See MPEP 2106.05(d) - Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application of the abstract idea or that, as an ordered combination, amount to significantly more than the abstract idea itself. 
Dependent claims 2-9 and 11-18 recite the same abstract ideas as recited in the independent claims, and have been found to recite additional details that are part of the abstract idea itself (when analyzed under Step 2A Prong One), along with, at most, additional elements that fail to integrate the abstract idea into a practical application or add significantly more. In particular, dependent claims 2-9 and 11-18 further narrow the abstract ideas recited in independent claim 1 by reciting additional details or steps that set forth mathematical relationships, formulas, and calculations, which therefore fall under the “Mathematical Concepts” group; and also recite limitations that fall under the “Certain methods of organizing human activity” abstract idea grouping. For example, dependent claims 2-9 recite “wherein the multiple element brand design assessment comprises a life-centered brand design assessment,” “wherein the multiple remote user terminals are associated with a brand, product, service, or organization that is a subject of the multiple element brand design assessment,” “wherein the targeted sets of questions include one or more question for each of multiple focus areas including a fiscal area, a cultural area, a sociological area, and a contextual area,” “wherein the targeted sets of questions include one or more questions for each element in a category associated with the multiple focus areas,” “wherein the targeted sets of questions include the one or more questions to elicit a user selectable response relating to a brand, product, service, or organization for one or more elements including feasible, rational, understandable, thoughtful, beautiful, desirable, inclusive, equitable, accessible, adaptable, sustainable, or viable,” “wherein the targeted sets of questions include the one or more questions to elicit a user selectable response relating to a brand, product, service, or organization for each of element of feasible, rational, understandable, 
thoughtful, beautiful, desirable, inclusive, equitable, accessible, adaptable, sustainable, and viable,” “wherein the metrics include a combined score for each element, a maximum possible score for each element, and a deviation score for each element,” and “wherein the targeted sets of questions provided are based on the template”; however, these steps can be accomplished via mathematical calculations and are also directed to “certain methods of organizing human activity.” As described above, dependent claims 2-9 and 11-18 further narrow the abstract ideas recited in independent claim 1 by reciting additional details or steps that set forth mathematical relationships, formulas, and calculations, as well as steps/details directly in support of organizing human activity. Accordingly, these steps are part of the same abstract idea(s) set forth in the independent claims. The other dependent claims have been evaluated as well, but similar to claims 2-9, these claims also recite details of the abstract ideas themselves accompanied by, at most, generic computer implementation, which is not enough to transform the claims into a practical application of the abstract idea or amount to significantly more than the abstract idea itself. See MPEP 2106.05(f),(h). See also, Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976. When evaluated under Step 2A Prong Two and Step 2B, the additional elements do not amount to a practical application or significantly more since they merely require generic computing devices (or computer-implemented instructions/code), which, as noted in the discussion of the independent claims above, is not enough to render the claims eligible. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. 
Their collective functions merely provide generic computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to a practical application or significantly more than the abstract idea itself. For more information, see MPEP 2106. Claim Rejections - 35 USC § 103 25. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. 26. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 27. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 28. Claims 1-7, 9-16, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Jay et al., Patent No.: US 11,790,418 B1, [hereinafter Jay], in view of Copeland et al., Pub. 
No.: US 2022/0270117 A1, [hereinafter Copeland], in further view of Nowak et al., Pub. No.: US 2014/0358636 A1, [hereinafter Nowak]. As per claim 1, Jay teaches a non-transitory computer-readable medium storing computer executable code for information modeling, the code when executed by processor circuitry causes a central processing system (col. 7, lines 4-8: “In yet another aspect, a non-transitory storage medium is described herein. The non-transitory storage medium stores a sequence of instructions, which when executed by a processor causes it to…”) to: store a template for sets of questions for a multiple element brand design assessment (col. 45, lines 5-16, discussing that the IO survey unit generates the IO survey questions based on a template and/or custom built. The IO survey unit further enables the user to configure the settings of the quiz/survey. The IO survey unit receives the responses from the user and verifies the responses; col. 54, lines 12-22, discussing that the IO survey unit, via user interface view in FIG. 15b, enables the user to select the template to create the IO survey. The IO survey unit enables the user to select the template from the options such as net promoter score (NPS), rating, scored, binary, standard, matching. The NPS type quiz receives responses in 0-10 scale. The result of the NPS score would be the % of promoters (participants who answered 9-10 the most)−% of detractors (participants who answered 1-6 the most) resulting in the NPS “Score” i.e., NPS=% of promoters−% of detractors=has to equal or be greater than 50); provide, via a communication interface, targeted sets of questions for the multiple element brand design assessment from a central system to multiple remote user terminals (col. 39, lines 22-43, discussing that the “IO Quiz/survey” lists quizzes and surveys, run by the shop, with their corresponding status. The status comprises one of Active, Closed, Pending Results. 
The users can participate in Active Quizzes/surveys. The users can also see cumulative/collective results for Closed quizzes/surveys; col. 45, lines 5-10, discussing that the IO (impact oriented) survey unit generates the IO survey questions based on a template and/or custom built. The IO survey unit further enables the user to configure the settings of the survey; col. 45, lines 27-31, discussing that every question, rendered by the IO survey unit, in the IO survey has a predetermined set of responses; col. 53, lines 51-65, discussing that FIGS. 15a-15c illustrate user interface views of configuring an impact oriented (IO) survey. The IO survey unit enables the brands the ability to create interesting, interactive surveys that result in quality insights and actual rewards that foster customer goodwill and continued participation. The IO survey unit enables brands to structure interesting surveys…The IO survey unit creates the surveys from a template or custom built. The IO survey unit further depicts a brand logo and a name on top of the screen to give an immersive brand experience in the digital marketplace); receive, via the communication interface, a set of user selectable responses from each of the multiple remote user terminals in response to the provided targeted sets of questions (col. 40, lines 49-56, discussing that the “IO Quiz/survey” lists quizzes and surveys, run by the shop, with their corresponding status. The status comprises one of Active, Closed, Pending Results. The users can participate in Active Quizzes/surveys. The users can also see cumulative/collective results for Closed quizzes/surveys. The users can also further see which quizzes/surveys just closed out and the ones that just ended but are pending results being published for public view; col. 
45, lines 9-10, discussing that the IO survey unit receives the responses from the user and verifies the responses); combine results based on the set of user selectable responses received from each of the multiple remote user terminals to generate metrics for assessed elements of the multiple element brand design assessment, wherein a compilation is automatically performed and saved as responses arrive (col. 9, lines 1-3, discussing calculating an impact oriented (IO) metrics associated with the IO profile based on the evaluation of the IO profile in real-time; col. 39, lines 3-21, discussing a shop dashboard navigation of a digital marketplace from a seller's perspective. The shop dashboard comprises “My shop” that enables the seller to upload new products and view their live listings...The shop dashboard navigation further comprises listings such as “IO profile”, “IO survey”,…, and “settings”…The “IO profile” listing is an overview of the brand's footprints and their overall value to society, planet, consumers, and the digital marketplace community. The “IO profile” lists out the Metrics and insights into the brand's behavior; col. 40, lines 49-56, discussing that the “IO Quiz/survey” lists quizzes and surveys, run by the shop, with their corresponding status. The status comprises one of Active, Closed, Pending Results. The users can participate in Active Quizzes/surveys. The users can also see cumulative/collective results for Closed quizzes/surveys. The users can also further see which quizzes/surveys just closed out and the ones that just ended but are pending results being published for public view; col. 54, lines 51-58, discussing that the IO survey unit automatically synchronizes the end consumer's profile information to the responses when the user enters into the survey. The IO survey unit publicly displays the brand's aggregate results to the IO profile when the Quiz/survey limit is reached. 
The IO survey unit enables the brand to disclose results of the Quiz/survey questionnaire, and the individual responses to the questionnaire subject to consent by the end consumer; col. 55, lines 36-42, discussing that the profile creating unit updates the IO profile of at least one of the seller, and the buyer. The profile creating unit tracks an activity performed by the seller that can alter information in the IO profile. The profile creating unit then records the activity through the blockchain network and updates the IO profile based on the activity executed; col. 59, lines 11-16, discussing that IO Metrics of the brand is composed of eight categories or “footprints” of evaluation. Each footprint is composed of five levels of scrutiny that come with corresponding points. Each footprint requires total transparency and proactive disclosures from the brand; col. 39, lines 22-43); transmit for display a report comprising the metrics for the assessed elements of the multiple element brand design assessment (col. 39, lines 3-21, discussing a shop dashboard navigation of a digital marketplace from a seller's perspective. The shop dashboard comprises “My shop” that enables the seller to upload new products and view their live listings...The shop dashboard navigation further comprises listings such as “IO profile”, “IO survey”,…, and “settings”…The “IO profile” listing is an overview of the brand's footprints and their overall value to society, planet, consumers, and the digital marketplace community. The “IO profile” lists out the Metrics and insights into the brand's behavior; col. 54, lines 51-58, discussing that the IO survey unit automatically synchronizes the end consumer's profile information to the responses when the user enters into the survey. The IO survey unit publicly displays the brand's aggregate results to the IO profile when the survey limit is reached. The IO survey unit enables the brand to disclose results of the survey questionnaire). 
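For reference, the NPS-style scoring that the quoted passage of Jay describes (percentage of promoters minus percentage of detractors on a 0-10 scale, with a pass threshold of 50) can be sketched as below. The function name is illustrative only, and the detractor range of 1-6 is taken from the quote rather than from the conventional NPS definition:

```python
def nps_score(responses):
    """Compute an NPS-style score from 0-10 survey responses.

    Per the passage quoted from Jay: promoters answered 9-10,
    detractors answered 1-6, and NPS = % promoters - % detractors
    (which, per the quote, must equal or exceed 50 to pass).
    """
    total = len(responses)
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if 1 <= r <= 6)
    return 100 * promoters / total - 100 * detractors / total

# Illustrative check against the quoted pass threshold.
passes = nps_score([10, 9, 9, 8, 7, 3]) >= 50
```

This is only a sketch of the arithmetic the reference describes, not an implementation drawn from Jay's disclosure.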
While Jay describes combining results, Jay does not explicitly teach combine scores based on the set of user selectable responses received from each of the multiple remote user terminals to generate metrics for assessed elements of the multiple element brand design assessment, and a compilation of scores. Jay does not explicitly teach automatically generate, from the stored template, multiple different sets of questions to assess experience from different groups associated with a brand design, wherein the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design; wherein a first set of targeted customer questions are transmitted to a first set of multiple remote user terminals of the first type of users, a second set of targeted culture questions are transmitted to a second set of multiple remote user terminals of the second type of users, and a third set of targeted community questions are transmitted to a third set of multiple remote user terminals of the third type of users; receive, via the communication interface, a set of user selectable responses from each of the first set of multiple remote user terminals, the second set of multiple remote user terminals, and the third set of multiple remote user terminals in response to the provided targeted sets of questions; combine scores based on the set of user selectable responses received from each of the multiple remote user terminals to generate metrics for assessed elements of the multiple element brand design assessment, a compilation of scores; and update the saved compilation of scores as additional responses for a respective type of user are received. 
Copeland in the analogous art of brand analysis systems teaches: combine scores based on the set of user selectable responses received from each of the multiple remote user terminals to generate metrics for assessed elements of the multiple element brand design assessment, and a compilation of scores (paragraph 0036, discussing the process steps for determining a value return index (VRI) performed by VRI system. The VRI system is a tool that accurately quantifies the financial impact of the upper part of the purchasing funnel by generating scores across several categories (e.g., environmentalism, affordability, customer service, patriotism, equality and diversity). VRI helps a brand understand what elements of the brand are driving revenue, losing revenue and how the brand compares to its competitors. For example, a brand might learn that their efforts towards equality are driving significant revenue to the business and therefore, the brand needs to continue and increase strategies toward equality. The VRI system is critical for driving the strategy of a brand, brand spending, targeting the type of consumers, and the messaging of the brand in advertisements; paragraph 0037, discussing that the process begins wherein brand values are selected and entered into servers along with brand industry competitors. Each brand will define its core company values as part of its advertising strategy and campaigns. Examples of such values may include environmentalism, affordability, customer service, patriotism, equality, sustainability, innovation, security, health, quality, and diversity. Other values may be used. For example, Costco may believe diversity and equality in hiring is paramount to its marketing strategy and future marketing campaigns. These values can also be referred to as attributes, differentiators or characteristics of a brand. As part of this step, brands will also identify its company competitors. 
For example, if the brand is in the fast-food industry, the brand might look at McDonald's, Taco Bell, Burger King, and Wendy's as competitors; paragraph 0038, discussing that given the values and competitors, the process execution proceeds to step 402 wherein user (consumer) data (many users) is solicited and received relating to the brand values and brand competitors as well as demographic data of those users. This data may be solicited via electronic surveys, emails, portal access or other means known to those skilled in the art. The user data will help understand user perception of the brand; paragraph 0051). Jay is directed to customer analysis systems. Copeland is directed to consumer-focused marketing modeling. Therefore, they are deemed to be analogous as they both are directed towards solutions for customer analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Jay with Copeland because the references are analogous art because they are both directed to solutions for customer analysis, which falls within applicant’s field of endeavor (method and system for assessing brand experience and customer experience), and because modifying Jay to include Copeland’s feature for combining scores based on the set of user selectable responses received from each of the multiple remote user terminals to generate metrics for assessed elements of the multiple element brand design assessment, and a compilation of scores, in the manner claimed, would serve the motivation of better predicting if a user will purchase a brand or not (Copeland at paragraph 0044); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. 
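As an illustration of the claimed "combine scores ... to generate metrics" and "update the saved compilation of scores as additional responses ... are received" limitations discussed above, an incrementally updated compilation can be sketched as follows. The class and field names are hypothetical and are not drawn from Jay, Copeland, Nowak, or the application:

```python
from collections import defaultdict

class ScoreCompilation:
    """Hypothetical running compilation of per-element scores.

    Each incoming set of user-selectable responses maps assessed
    elements (e.g., "equality", "sustainability") to numeric scores;
    the compilation is updated incrementally as each response
    arrives, rather than recomputed from scratch.
    """
    def __init__(self):
        self.totals = defaultdict(float)   # element -> summed score
        self.counts = defaultdict(int)     # element -> response count

    def add_response(self, response):
        # Update the saved compilation as an additional response arrives.
        for element, score in response.items():
            self.totals[element] += score
            self.counts[element] += 1

    def metrics(self):
        # Combined (mean) score per assessed element.
        return {e: self.totals[e] / self.counts[e] for e in self.totals}

comp = ScoreCompilation()
comp.add_response({"equality": 4, "sustainability": 2})
comp.add_response({"equality": 2})
```

This sketch shows only the general data-aggregation pattern the claim language describes; the actual scoring categories and weighting in the cited references and the application may differ.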
The Jay-Copeland combination does not explicitly teach automatically generate, from the stored template, multiple different sets of questions to assess experience from different groups associated with a brand design, wherein the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design; wherein a first set of targeted customer questions are transmitted to a first set of multiple remote user terminals of the first type of users, a second set of targeted culture questions are transmitted to a second set of multiple remote user terminals of the second type of users, and a third set of targeted community questions are transmitted to a third set of multiple remote user terminals of the third type of users; receive, via the communication interface, a set of user selectable responses from each of the first set of multiple remote user terminals, the second set of multiple remote user terminals, and the third set of multiple remote user terminals in response to the provided targeted sets of questions; and update the saved compilation of scores as additional responses for a respective type of user are received. However, Nowak in the analogous art of survey segmentation systems teaches these concepts. 
Nowak teaches: automatically generate, from the stored template, multiple different sets of questions to assess experience from different groups associated with a brand design, wherein the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design (paragraph 0010, discussing a method of creating survey segments and assigning users to the survey segments [i.e., different groups]; paragraph 0019, discussing that demographic information is used to construct one or more selectively sampled survey groups from users assigned to an ad hoc survey layer. When a new user joins the social networking system, the user creates a user profile, into which various details about the user are provided. For example, the user may provide his or her name, profile picture, city of residence, contact information, birth date, gender, marital status, family status, employment, educational background, preferences, interests, and other demographic information. Using the information users supply about themselves, among other information determined by the social networking system via the user's social network, survey creators can create ad hoc surveys that sample from the users assigned to one of the ad hoc survey layers. Surveys are offered to users that, for example, are known to use certain features of the social networking system, or are presumed to use certain types of products. Additionally, users are selected based on certain demographic characteristics, such as gender, age, geographic region, spoken language, etc. A user's eligibility for an ad hoc survey offering is determined by the creator of the ad hoc survey; paragraph 0029, discussing that ad hoc survey layers are used for custom surveys targeted based on user data. 
In one embodiment, layer 2 is an ad hoc survey layer, where approximately 49% of the users in the survey pool are assigned. Ad hoc survey layers can be subdivided into multiple layers. Users in the one or more ad hoc layers are selected for custom surveys based on various selection criteria, and each custom survey can have unique selection criteria. Users in the one or more ad hoc layers that are identified as users of a specific product, functionality, or interface of the social networking system are offered surveys based on that specific product, functionality, or interface. For example, users of the Spotify digital music service may be offered surveys about the Spotify service. Users of the Instagram photo sharing service may be offered surveys about the Instagram service. In another example, users of a messenger service that is internal to the social networking system may be offered surveys about the messenger service. Additionally, users of the various client devices are offered surveys on the interfaces to the social networking system available on their particular client device; paragraph 0030, discussing that users can be selected for surveys based on a determined pattern of usage of the social networking system, or based on demographic information about the user, or various other selection criteria. For example, users can be offered a survey of political opinion, and survey responses can be reported, displayed, or analyzed based on various elements of demographic information, or other characteristics of the surveyed users. Additionally, product developers for the social networking system can use one or more of the ad hoc layers for demographically targeted custom surveys, to select users who speak a specific language, or users from a specific nation or geographic region; paragraph 0048, discussing an exemplary method of creating survey segments and assigning users to the survey segments. 
At the beginning of a survey period, the social networking system uses one or more hash functions to compute a hash value of the user ID of each user that will be added to the survey pool…The social networking system assigns the user to a first survey layer using the hash value computed by the one or more hash functions; paragraph 0050, 0054); wherein a first set of targeted customer questions are transmitted to a first set of multiple remote user terminals of the first type of users, a second set of targeted culture questions are transmitted to a second set of multiple remote user terminals of the second type of users, and a third set of targeted community questions are transmitted to a third set of multiple remote user terminals of the third type of users (paragraph 0024, discussing that surveyed layers include a first survey layer (e.g., layer 1) which, in one embodiment, is a main survey layer, a second survey layer (e.g., layer 2) which, in one embodiment, is a product survey layer, and a third survey layer (e.g., layer 3) which, in one embodiment, is an ad hoc survey layer. Layer 1 and layer 2 each make up approximately twenty-four point five percent of the survey pool, while layer 3 makes up approximately forty-nine percent of the survey pool; paragraph 0029, discussing that ad hoc survey layers are used for custom surveys targeted based on user data. In one embodiment, layer 2 is an ad hoc survey layer, where approximately 49% of the users in the survey pool are assigned. Ad hoc survey layers can be subdivided into multiple layers. Users in the one or more ad hoc layers are selected for custom surveys based on various selection criteria, and each custom survey can have unique selection criteria. Users in the one or more ad hoc layers that are identified as users of a specific product, functionality, or interface of the social networking system are offered surveys based on that specific product, functionality, or interface. 
For example, users of the Spotify digital music service may be offered surveys about the Spotify service. Users of the Instagram photo sharing service may be offered surveys about the Instagram service. In another example, users of a messenger service that is internal to the social networking system may be offered surveys about the messenger service. Additionally, users of the various client devices are offered surveys on the interfaces to the social networking system available on their particular client device [i.e., multiple remote user terminals]; paragraph 0045, discussing that an additional group of user experience surveys are offered to an additional group of users, (e.g., User Group "2") which may not have characteristics similar to those of other user groups, such as User Group "1" or User Group "3"; paragraph 0049, discussing that multiple ad hoc layers are used to limit the number of users available to a custom survey creator, for example, if the survey creator wishes to send a survey request to all eligible users at once. Additionally, the multiple ad hoc layers may be used to provide a more specific pool of users within the users across the various ad hoc layers. 
For example, an ad hoc custom survey on the messenger service may offer surveys to users in the ad hoc #1 layer, while a custom survey of users of the Instagram photo sharing service offers surveys to users in the ad hoc #2 layer; paragraph 0056, discussing that custom surveys are offered to users in the ad hoc survey layer; paragraph 0044); receive, via the communication interface, a set of user selectable responses from each of the first set of multiple remote user terminals, the second set of multiple remote user terminals, and the third set of multiple remote user terminals in response to the provided targeted sets of questions (paragraph 0050, discussing that in some instances, it is beneficial to custom survey creators to have a survey pool that is representative of a given population, such as the general user base of the social networking system. It is also beneficial to enable survey creators to request a survey pool that is representative of the population of a given nation or geographic region. In some instances, a specific demographic group will provide the best survey data for a custom survey creator that is marketing a specific product to a specific group of people. In one embodiment, custom ad hoc groups are selected for a particular survey, or group of surveys, by grouping users with particular characteristics based on the set of users in one or all of the one or more ad hoc layers. 
The custom ad hoc groups can be defined to contain a representative sample of a given population, or can be demographically targeted at a specific type of user; paragraph 0058, discussing that as survey results are collected, the responses from the various users are assessed to determine if one or more sample groups are deviating from the desired sampling distribution in relation to targeted usage characteristics or demographic characteristics of the sampled users; paragraph 0052); update the saved compilation of scores as additional responses for a respective type of user are received (paragraph 0037, discussing that dynamic re-sampling of the survey pool for an ad hoc survey is available to custom survey creators. As users submit survey responses during a survey period, the demographic information of the users that respond to the surveys can be analyzed. If it appears the survey responses over-sample one or more demographic groups, while under-sampling other demographic groups, the survey pool is adjusted to maintain the desired demographic sample group for the survey; paragraph 0052, discussing that in addition to post hoc weighting, an embodiment of the social networking system implements a dynamic re-sampling method during the run period of custom, ad hoc surveys, such that the survey offerings are dynamically adjusted during the survey period to maintain the desired sample distribution. For example, if a scenario arises such that User "C" 708 accepts an offer to take a survey, and the user completes the ad hoc Messenger survey 702, a statistically appropriate number of males between the age of 18 and 36 will have answered the survey.
If the survey is currently under-sampling females between the age of 18 and 36, additional users in the under-sampled demographic group (e.g., User "A" 704, and User "D" 710) are offered the survey if those users have not previously declined to take that survey, or if the users have not previously completed a survey within the survey cool-down period; paragraph 0058, discussing that as survey results are collected, the responses from the various users are assessed to determine if one or more sample groups are deviating from the desired sampling distribution in relation to targeted usage characteristics or demographic characteristics of the sampled users; paragraph 0059). The Jay-Copeland combination describes features related to customer analysis. Nowak is directed to survey segmentation. Therefore, they are deemed to be analogous as they both are directed towards solutions for customer analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the Jay-Copeland combination with Nowak because the references are analogous art, both being directed to solutions for customer analysis, which falls within applicant’s field of endeavor (method and system for assessing brand experience and customer experience), and because modifying the Jay-Copeland combination to include Nowak’s features for automatically generating, from the stored template, multiple different sets of questions to assess experience from different groups associated with a brand design, wherein the different groups include a first type of user designated as a customer, a second type of user designated as associated with a culture of the brand design, and a third type of user designated as a community associated with the brand design; wherein a first set of targeted customer questions are transmitted to a first set of multiple remote user terminals of the first type of users, a second set of targeted
culture questions are transmitted to a second set of multiple remote user terminals of the second type of users, and a third set of targeted community questions are transmitted to a third set of multiple remote user terminals of the third type of users; receive, via the communication interface, a set of user selectable responses from each of the first set of multiple remote user terminals, the second set of multiple remote user terminals, and the third set of multiple remote user terminals in response to the provided targeted sets of questions; and update the saved compilation of scores as additional responses for a respective type of user are received, in the manner claimed, would serve the motivation of enhancing the analysis by seeking specific feedback from the users (Nowak at paragraph 0002); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. As per claim 2, the Jay-Copeland-Nowak combination teaches the non-transitory computer-readable medium of claim 1. Jay further teaches wherein the multiple element brand design assessment comprises a life-centered brand design assessment (col. 58, lines 37-57, discussing that the ethical footprint is provided by the IO profile creating unit based on tax havens, governance, ethics, culture, and sustainability. The ethical footprint is provided by the IO profile creating unit, based on determining the following: Does the business have the necessary support and infrastructure to develop a strong sustainability program?...How far is the company's daily in-house footprint sustainable? In the form of a publicly viewable checklist. Fair wages? Ethical treatment of labor? The ethical footprint seal lights up in five levels based on the number of items checked on the list. 
The ethical footprint requires the brand to actively disclose insights on the governance, and daily logistical commitment to sustainability, etc. Brand Profile Standing is defined by both User Generated Content (UGC) Reviews and IO Audits and Assessment from the digital marketplace; col. 60, lines 53-67, discussing that FIG. 19m illustrates targets for achieving an ethical footprint by the brand. The ethical footprint provides the way for the seller to map a brand's footprint regionally and globally based on their business size, to understand their carbon footprint, and sustainability capacity; col. 35, lines 9-34). As per claim 3, the Jay-Copeland-Nowak combination teaches the non-transitory computer-readable medium of claim 1. Jay further teaches wherein the multiple remote user terminals are associated with a brand, product, service, or organization that is a subject of the multiple element brand design assessment (col. 35, lines 9-34, discussing that the audit unit performs the IO audit on the activity performed by the seller, and the IO profile of the seller in real-time. The audit unit determines impact oriented (IO) metrics achieved by the IO profile of the seller as IO footprints based on the IO audit. The IO footprints comprise at least one of a planetary footprint, a people footprint, a problems without footprint, a community footprint, a product footprint, an offset footprint, an ethical footprint, and a company footprint…The audit unit provides an impact oriented (IO) seal to the seller based on the IO footprints…The audit unit renders the IO seal to the brand of the seller when the brand has hit predetermined targets for their respective company size. The audit unit determines whether the brand has met the criteria requested by each seal for each of the five levels. The audit unit then credits the IO seal at that level to the brand upon confirming that the brand has met the criteria…; col.
39, lines 3-21, discussing that the shop dashboard navigation further comprises listings such as “IO profile”, “IO survey”,…, and “settings”…The “IO profile” listing is an overview of the brand's footprints and their overall value to society, planet, consumers, and the digital marketplace community. The “IO profile” lists out the Metrics and insights into the brand's behavior; col. 40, lines 49-56, discussing that the users can also see cumulative/collective results for Closed surveys; col. 54, lines 51-58, discussing that the IO survey unit automatically synchronizes the end consumer's profile information to the responses when the user enters into the survey. The IO survey unit publicly displays the brand's aggregate results to the IO profile when the Quiz/survey limit is reached. The IO survey unit enables the brand to disclose results of the Quiz/survey questionnaire, and the individual responses to the questionnaire subject to consent by the end consumer). As per claim 4, the Jay-Copeland-Nowak combination teaches the non-transitory computer-readable medium of claim 1. Jay further teaches wherein the targeted sets of questions include one or more questions for each of multiple focus areas including a fiscal area, a cultural area, a sociological area, and a contextual area (col. 35, lines 9-34, discussing that the audit unit performs the IO audit on the activity performed by the seller, and the IO profile of the seller in real-time. The audit unit determines impact oriented (IO) metrics achieved by the IO profile of the seller as IO footprints based on the IO audit.
The IO footprints comprise at least one of a planetary footprint, a people footprint, a problems without footprint, a community footprint, a product footprint, an offset footprint, an ethical footprint, and a company footprint…The audit unit provides an impact oriented (IO) seal to the seller based on the IO footprints…The audit unit renders the IO seal to the brand of the seller when the brand has hit predetermined targets for their respective company size. The audit unit determines whether the brand has met the criteria requested by each seal for each of the five levels. The audit unit then credits the IO seal at that level to the brand upon confirming that the brand has met the criteria…; col. 35, lines 41-59, discussing that the third party audit may be submitted by filling the following form fields: “Accreditation Type or Level” “Accreditation Title” “Clause” “Context” “Context of Organization” “Audit Outline” and then tabulating findings under three columns “Audit Question” “Audit Evidence” “Audit Summary”; col. 37, lines 26-41, discussing that the impact focus wallet comprises one of a planetary pillar wallet, a people pillar wallet, and problems without borders pillar wallet. The planetary pillar wallet comprises a climate action wallet, a fauna wallet, a flora wallet, an ocean wallet, a land wallet, and a freshwater wallet. The people pillar wallet comprises a hunger wallet, a poverty wallet, a renewable energy wallet, an economic empowerment wallet, a public health wallet, an education wallet, a social equity wallet, a cultural equity wallet, a clean water wallet, a waste management and sanitation wallet, an innovative infrastructure wallet, and a resource resilience and recovery wallet. The problems without borders pillar wallet comprises partnership and industry wallet; a politics, policy, and legislation wallet; an enabling access wallet; an international crises wallet; a religious charities wallet; and a cradle to cradle wallet; col. 
58, lines 37-57, discussing that the ethical footprint is provided by the IO profile creating unit based on tax havens, governance, ethics, culture, and sustainability. The ethical footprint is provided by the IO profile creating unit, based on determining the following: Does the business have the necessary support and infrastructure to develop a strong sustainability program?...How far is the company's daily in-house footprint sustainable? In the form of a publicly viewable checklist. Fair wages? Ethical treatment of labor? The ethical footprint seal lights up in five levels based on the number of items checked on the list. The ethical footprint requires the brand to actively disclose insights on the governance, and daily logistical commitment to sustainability, etc. Brand Profile Standing is defined by both User Generated Content (UGC) Reviews and IO Audits and Assessment from the digital marketplace; col. 57, lines 32-39, discussing that the people footprint comprises a hunger footprint, a poverty footprint, a renewable energy footprint, an economic empowerment footprint, a public health footprint, an education footprint, a social equity footprint, a cultural equity footprint, a clean water footprint, a waste management and sanitation footprint, an innovative infrastructure footprint, and a resource resilience and recovery footprint). As per claim 5, the Jay-Copeland-Nowak combination teaches the non-transitory computer-readable medium of claim 4. Jay further teaches wherein the targeted sets of questions include one or more questions for each element in a category associated with the multiple focus areas (col. 39, lines 22-43, discussing that the “IO Quiz/survey” lists quizzes and surveys, run by the shop, with their corresponding status. The status comprises one of Active, Closed, Pending Results. The users can participate in Active Quizzes/surveys. The users can also see cumulative/collective results for Closed quizzes/surveys; col. 
45, lines 5-10, discussing that the IO (impact oriented) survey unit generates the IO survey questions based on a template and/or custom built. The IO survey unit further enables the user to configure the settings of the survey; col. 45, lines 27-31, discussing that every question, rendered by the IO survey unit, in the IO survey has a predetermined set of responses; col. 46, lines 10-20, discussing that each question, rendered by the IO survey unit, can have as many responses to define as many choices as felt necessary by the quiz master/content creator. No limits on the number of questions or the number of responses each question can have; col. 53, lines 51-65, discussing that FIGS. 15a-15c illustrate user interface views of configuring an impact oriented (IO) survey. The IO survey unit enables the brands the ability to create interesting, interactive surveys that result in quality insights and actual rewards that foster customer goodwill and continued participation. The IO survey unit enables brands to structure interesting surveys…The IO survey unit creates the surveys from a template or custom built. The IO survey unit further depicts a brand logo and a name on top of the screen to give an immersive brand experience in the digital marketplace; col. 61, lines 28-36, discussing targets for achieving a company footprint by the brand. The ethical footprint has various levels. The IO profile creating unit requires information regarding actively disclosing insights on the governance and daily logistical commitment to sustainability. For each level, the IO profile creating unit requires a list of details by rendering a list of questions).
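The template-driven survey generation described in the Jay passages cited above (questions built from a stored template, a predetermined set of selectable responses per question, and no limit on the number of questions or responses) can be sketched as follows. This is an illustrative sketch only; the class names and sample question are hypothetical and are not Jay's implementation.

```python
# Illustrative sketch of template-driven survey generation as described in the
# cited Jay passages (col. 45): questions come from a stored template and/or
# are custom built, and every question carries a predetermined set of
# user-selectable responses. All names and sample data here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    responses: list[str]  # predetermined set of user-selectable responses

@dataclass
class SurveyTemplate:
    name: str
    questions: list[Question] = field(default_factory=list)

    def add_question(self, text: str, responses: list[str]) -> None:
        # No limit on the number of questions, or responses per question.
        self.questions.append(Question(text, responses))

def generate_survey(template: SurveyTemplate, custom=()) -> list[Question]:
    # A survey may be template-based, custom built, or both.
    return list(template.questions) + list(custom)

nps_template = SurveyTemplate("NPS")
nps_template.add_question(
    "How likely are you to recommend this brand?",
    [str(i) for i in range(11)],  # 0-10 scale of selectable responses
)
survey = generate_survey(nps_template)
```

The predetermined response set is what makes each answer a "user selectable response" in the claim's sense: the terminal returns one of the enumerated choices rather than free text.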
Examiner notes that Copeland, in addition to Jay as cited above, also teaches wherein the targeted sets of questions include one or more questions for each element in a category associated with the multiple focus areas (paragraph 0038, discussing that given the values and competitors, the process execution proceeds to step 402 wherein user (consumer) data (many users) is solicited and received relating to the brand values and brand competitors as well as demographic data of those users. This data may be solicited via electronic surveys, emails, portal access or other means known to those skilled in the art. The user data will help understand user perception of the brand…Any number of questions may be used that have a material effect on an outcome. The nature of the questions or solicitation may involve numeral ratings, binary choices (yes/no). These are examples. The user data may also include (1) meta data such as time of obtained data, browser used, length of time to enter the data and (2) geolocation data of user providing the data to verify or support the accuracy of the purchases of the users). As per claim 6, the Jay-Copeland-Nowak combination teaches the non-transitory computer-readable medium of claim 5. Jay further teaches wherein the targeted sets of questions include the one or more questions to elicit a user selectable response relating to a brand, product, service, or organization for one or more elements including feasible, rational, understandable, thoughtful, beautiful, desirable, inclusive, equitable, accessible, adaptable, sustainable, or viable (col. 58, lines 37-57, discussing that the ethical footprint is provided by the IO profile creating unit based on tax havens, governance, ethics, culture, and sustainability. The ethical footprint is provided by the IO profile creating unit, based on determining the following: Does the business have the necessary support and infrastructure to develop a strong sustainability program? 
Does the daily office space walk the talk? Does the office building recycle? What does it recycle? What soap is being used in the office cafeteria in the kitchen? Smart bulbs? Water conservation? Single use cups and disposable plastics allowed? How far is the company's daily in-house footprint sustainable? In the form of a publicly viewable checklist. Fair wages? Ethical treatment of labor? The ethical footprint seal lights up in five levels based on the number of items checked on the list. The ethical footprint requires the brand to actively disclose insights on the governance, and daily logistical commitment to sustainability, etc. Brand Profile Standing is defined by both User Generated Content (UGC) Reviews and IO Audits and Assessment from the digital marketplace; col. 61, lines 28-35). As per claim 7, the Jay-Copeland-Nowak combination teaches the non-transitory computer-readable medium of claim 5. Jay further teaches wherein the targeted sets of questions include the one or more questions to elicit a user selectable response relating to a brand, product, service, or organization for each element of sustainable (col. 58, lines 37-57, discussing that the ethical footprint is provided by the IO profile creating unit based on tax havens, governance, ethics, culture, and sustainability. The ethical footprint is provided by the IO profile creating unit, based on determining the following: Does the business have the necessary support and infrastructure to develop a strong sustainability program? Does the daily office space walk the talk? Does the office building recycle? What does it recycle? What soap is being used in the office cafeteria in the kitchen? Smart bulbs? Water conservation? Single use cups and disposable plastics allowed? How far is the company's daily in-house footprint sustainable? In the form of a publicly viewable checklist. Fair wages? Ethical treatment of labor?
The ethical footprint seal lights up in five levels based on the number of items checked on the list. The ethical footprint requires the brand to actively disclose insights on the governance, and daily logistical commitment to sustainability, etc. Brand Profile Standing is defined by both User Generated Content (UGC) Reviews and IO Audits and Assessment from the digital marketplace; col. 61, lines 28-35). Jay does not explicitly teach wherein the targeted sets of questions include the one or more questions to elicit a user selectable response relating to a brand, product, service, or organization for each element of feasible, rational, understandable, thoughtful, beautiful, desirable, inclusive, equitable, accessible, adaptable, and viable. However, Copeland in the analogous art of brand analysis systems teaches this concept. Copeland teaches: wherein the targeted sets of questions include the one or more questions to elicit a user selectable response relating to a brand, product, service, or organization for each element of feasible, rational, understandable, thoughtful, beautiful, desirable, inclusive, equitable, accessible, adaptable, and viable (paragraph 0036, discussing that FIG. 4 depicts a flow diagram (flowchart) of the process steps for determining a value return index (VRI) performed by the VRI system…The VRI system is a tool that accurately quantifies the financial impact of the upper part of the purchasing funnel by generating scores across several categories (e.g., environmentalism, affordability, customer service, patriotism, equality and diversity). VRI helps a brand understand what elements of the brand are driving revenue, losing revenue and how the brand compares to its competitors. For example, a brand might learn that their efforts towards equality are driving significant revenue to the business and therefore, the brand needs to continue and increase strategies toward equality.
The VRI system is critical for driving the strategy of a brand, brand spending, targeting the type of consumers, and the messaging of the brand in advertisements; paragraph 0037, discussing that the process begins with step 400 wherein brand values are selected and entered into servers 104 along with brand industry competitors. Each brand will define its core company values as part of its advertising strategy and campaigns. Examples of such values may include environmentalism, affordability, customer service, patriotism, equality, sustainability, innovation, security, health, quality, and diversity. Other values may be used. For example, Costco may believe diversity and equality in hiring is paramount to its marketing strategy and future marketing campaigns. These values can also be referred to as attributes, differentiators or characteristics of a brand. As part of this step, brands will also identify its company competitors. For example, if the brand is in the fast-food industry, the brand might look at McDonald's, Taco Bell, Burger King, and Wendy's as competitors; paragraph 0038, discussing that given the values and competitors, the process execution proceeds to step 402 wherein user (consumer) data (many users) is solicited and received relating to the brand values and brand competitors as well as demographic data of those users. This data may be solicited via electronic surveys, emails, portal access or other means known to those skilled in the art. The user data will help understand user perception of the brand. For example, data may include answers to a question such as “Which of the following stores do you think treats their employees well?” A) Walmart, B) Target, C) Costco, D) Aldi, E) Kroger, F) None of the above. Any number of questions may be used that have a material effect on an outcome. The nature of the questions or solicitation may involve numeral ratings, binary choices (yes/no). These are examples. 
The user data may also include (1) meta data such as time of obtained data, browser used, length of time to enter the data and (2) geolocation data of user providing the data to verify or support the accuracy of the purchases of the users). Jay is directed to customer analysis systems. Copeland is directed to consumer-focused marketing modeling. Therefore, they are deemed to be analogous as they both are directed towards solutions for customer analysis. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Jay with Copeland because the references are analogous art, both being directed to solutions for customer analysis, which falls within applicant’s field of endeavor (method and system for assessing brand experience and customer experience), and because modifying Jay to include Copeland’s feature wherein the targeted sets of questions include the one or more questions to elicit a user selectable response relating to a brand, product, service, or organization for each element of feasible, rational, understandable, thoughtful, beautiful, desirable, inclusive, equitable, accessible, adaptable, and viable, in the manner claimed, would serve the motivation of better predicting if a user will purchase a brand or not (Copeland at paragraph 0044); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. As per claim 9, the Jay-Copeland-Nowak combination teaches the non-transitory computer-readable medium of claim 1. Jay further teaches wherein the targeted sets of questions provided to the multiple remote user terminals are based on the template (col.
45, lines 5-16, discussing that the IO survey unit generates the IO survey questions based on a template and/or custom built. The IO survey unit further enables the user to configure the settings of the quiz/survey. The IO survey unit receives the responses from the user and verifies the responses; col. 54, lines 12-22, discussing that the IO survey unit, via user interface view in FIG. 15b, enables the user to select the template to create the IO survey. The IO survey unit enables the user to select the template from the options such as net promoter score (NPS), rating, scored, binary, standard, matching. The NPS type quiz receives responses in 0-10 scale. The result of the NPS score would be the % of promoters (participants who answered 9-10 the most)−% of detractors (participants who answered 1-6 the most) resulting in the NPS “Score”, i.e., NPS=% of promoters−% of detractors, which has to equal or be greater than 50). Claim 10 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 1, as discussed above. Further, as per claim 10 the Jay-Copeland-Nowak combination teaches a method of assessing brand experience and customer experience (Jay, col. 14, lines 13-19; col. 15, lines 57-64). Claim 11 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 2, as discussed above. Claim 12 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 3, as discussed above. Claim 13 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 4, as discussed above. Claim 14 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 5, as discussed above. Claim 15 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 6, as discussed above.
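The NPS template scoring quoted from Jay above (col. 54, lines 12-22) reduces to simple arithmetic. A minimal sketch, with hypothetical response data; the promoter (9-10) and detractor (1-6) ranges follow the cited passage, not the conventional NPS definition (which counts 0-6 as detractors):

```python
# Minimal arithmetic sketch of the NPS template scoring described in Jay:
# NPS = % of promoters - % of detractors, with a passing threshold of 50.
# Score ranges follow the cited passage: promoters answered 9-10,
# detractors answered 1-6. The response data below is hypothetical.
def nps(responses: list[int]) -> float:
    promoters = sum(1 for r in responses if 9 <= r <= 10)
    detractors = sum(1 for r in responses if 1 <= r <= 6)
    pct = 100.0 / len(responses)
    return promoters * pct - detractors * pct

scores = [10, 9, 9, 10, 8, 7, 3, 5]  # hypothetical 0-10 responses
result = nps(scores)                 # 4 promoters, 2 detractors of 8 -> 25.0
passing = result >= 50               # threshold quoted from Jay -> False here
```

Responses of 7 or 8 count toward neither group, which is why they appear in neither range test above.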
Claim 16 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 7, as discussed above. Claim 18 recites substantially similar limitations that stand rejected via the art citations and rationale applied to claim 9, as discussed above. 29. Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Jay in view of Copeland, in view of Nowak, in further view of Akkiraju et al., Pub. No.: US 2017/0061497 A1, [hereinafter Akkiraju]. As per claim 8, the Jay-Copeland-Nowak combination teaches the non-transitory computer-readable medium of claim 7. Although not explicitly taught by Jay, Copeland in the analogous art of brand analysis systems teaches wherein the metrics include a combined score for each element (paragraph 0036, discussing the process steps for determining a value return index (VRI) performed by VRI system. The VRI system is a tool that accurately quantifies the financial impact of the upper part of the purchasing funnel by generating scores across several categories (e.g., environmentalism, affordability, customer service, patriotism, equality and diversity). VRI helps a brand understand what elements of the brand are driving revenue, losing revenue and how the brand compares to its competitors. For example, a brand might learn that their efforts towards equality are driving significant revenue to the business and therefore, the brand needs to continue and increase strategies toward equality. The VRI system is critical for driving the strategy of a brand, brand spending, targeting the type of consumers, and the messaging of the brand in advertisements; paragraph 0037, discussing that the process begins wherein brand values are selected and entered into servers along with brand industry competitors. Each brand will define its core company values as part of its advertising strategy and campaigns. 
Examples of such values may include environmentalism, affordability, customer service, patriotism, equality, sustainability, innovation, security, health, quality, and diversity. Other values may be used. For example, Costco may believe diversity and equality in hiring is paramount to its marketing strategy and future marketing campaigns. These values can also be referred to as attributes, differentiators or characteristics of a brand. As part of this step, brands will also identify its company competitors. For example, if the brand is in the fast-food industry, the brand might look at McDonald's, Taco Bell, Burger King, and Wendy's as competitors; paragraph 0038, discussing that given the values and competitors, the process execution proceeds to step 402 wherein user (consumer) data (many users) is solicited and received relating to the brand values and brand competitors as well as demographic data of those users. This data may be solicited via electronic surveys, emails, portal access or other means known to those skilled in the art. The user data will help understand user perception of the brand; paragraph 0051). Jay is directed to customer analysis systems. Copeland is directed to consumer-focused marketing modeling. Therefore, they are deemed to be analogous as they both are directed towards solutions for customer analysis. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Jay with Copeland because the references are analogous art, both being directed to solutions for customer analysis, which falls within applicant’s field of endeavor (method and system for assessing brand experience and customer experience), and because modifying Jay to include Copeland’s feature wherein the metrics include a combined score for each element, in the manner claimed, would serve the motivation of better predicting if a user will purchase a brand or not (Copeland at paragraph 0044); and further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. The Jay-Copeland combination does not explicitly teach wherein the metrics include a maximum possible score for each element, and a deviation score for each element. However, Akkiraju in the analogous art of brand personality inference systems teaches this concept. Akkiraju teaches: wherein the metrics include a maximum possible score for each element, and a deviation score for each element (paragraph 0074, discussing that users that are participants in the survey are presented with a standardized electronic questionnaire that is directed to their perceptions of a brand. The participants rate how descriptive the various personality traits of a brand personality scale (BPS) are of the brand in question, e.g., using the Aaker BPS, the participant rates the 42 traits of the Aaker BPS with regard to how well they believe the traits are descriptive of, or are associated with, the particular brand.
Various ranges of scoring with regard to each of the traits may be provided, e.g., a 0 to 7 scale with 7 being maximally descriptive and 0 being not descriptive of the brand. The traits may be arranged in random order to control order effects. Duplicative questions may be included to filter low quality responses; paragraph 0090, discussing that statistical analysis may be applied to the collected counts of instances of keywords for each of the LIWC categories to generate a plurality of statistical descriptors. In one illustrative embodiment, 60 LIWC categories are utilized with 7 statistical descriptors for each of the 60 LIWC categories: mean, 5th to 95th percentile, variance, skew, kurtosis, minimum, and maximum. These statistical descriptors indicate for each LIWC categories, the most predictive keywords corresponding to the LIWC category and the degree or confidence of the occurrence of these keywords being predictive of the corresponding LIWC category. Thus, a combination of the LIWC categories and statistical descriptors may be utilized for each of the principle driving factors of brand personality to devise a brand personality model for modeling brand personality…; paragraph 0124, discussing that the brand personality singularity metrics consider the variances of brand personality scales and the changes of these variances over time. The “variance” of a brand personality scale itself is the variance between the individual brand personality traits that make up the brand personality scale. 
The variance may be calculated with regard to groupings of traits, e.g., dividing a brand personality scale having 42 brand personality traits into three groups of 14 brand personality traits each; paragraph 0125, discussing that the variances between groups of brand personality traits and within groups of brand personality traits may be used to calculate the degree of singularity, e.g., a ratio of the variance between the group and other groups to the variance within the group. For example, the variance between groups may be generated by computing the mean of each group and then computing the variance between these three groups. The variances within each group may be calculated by computing the variance between each pair of members of the group and then computing the mean of the variances).

The Jay-Copeland-Nowak combination describes features related to customer analysis. Akkiraju is directed to mechanisms to implement a brand personality inference engine. They are therefore deemed analogous, as both are directed toward solutions for customer analysis.
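The between-group and within-group variance computation that the examiner quotes from Akkiraju (paragraphs 0124-0125) can be sketched in a few lines. The function name, grouping, and input values below are illustrative assumptions for reading the citation, not part of the prosecution record:

```python
from itertools import combinations
from statistics import mean, pvariance

def singularity_ratio(trait_scores, group_size=14):
    """Illustrative degree-of-singularity sketch: the ratio of the
    between-group variance to the mean within-group variance, for a
    scale such as the 42-trait Aaker BPS split into 3 groups of 14."""
    groups = [trait_scores[i:i + group_size]
              for i in range(0, len(trait_scores), group_size)]
    # Between-group variance: compute the mean of each group, then
    # the variance between those group means.
    between = pvariance([mean(g) for g in groups])
    # Within-group variance: for each group, take the variance of
    # every pair of members, average those, then average across groups.
    within = mean(
        mean(pvariance(pair) for pair in combinations(g, 2))
        for g in groups
    )
    return between / within if within else float("inf")

# Hypothetical 0-7 trait ratings: identical scores within each group
# maximize singularity (no within-group spread).
print(singularity_ratio([1.0] * 14 + [4.0] * 14 + [7.0] * 14))
```

On this reading, a brand whose trait groups have distinct means but little internal spread scores as highly "singular," matching the ratio described in paragraph 0125.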
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the Jay-Copeland-Nowak combination with Akkiraju because the references are analogous art, both being directed to solutions for customer analysis, which falls within applicant's field of endeavor (method and system for assessing brand experience and customer experience), and because modifying the Jay-Copeland-Nowak combination to include Akkiraju's feature wherein the metrics include a maximum possible score for each element, and a deviation score for each element, in the manner claimed, would serve the motivation of facilitating brand personality assessment and the generation of recommendations for performing actions to improve the perception of brand personality (Akkiraju at paragraph 0049). The combination is further obvious because the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 17 recites substantially similar limitations and stands rejected on the art citations and rationale applied to claim 8, as discussed above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

MCCABE et al., Pub. No. US 2024/0346534 A1 – describes a system and method for collecting customer responses.

Torbey et al., Pub. No. US 2012/0324353 A1 – describes customer and user experience analysis.

Plummer et al., Patent No. US 8,478,621 B1 – describes a method of accumulating, weighting, and presenting customer experience data.

Froman et al., Pub. No. US 2021/0090097 A1 – describes that clients want to deploy surveys to respondents via email and SMS, with the ability to define delivery criteria based on profiling answers; if a survey is going out to a client's own community, the client often wants some control over the messaging.

Tsai, Yi-Ching, Hui-Chen Chang, and Kung-Chung Ho. "A study of the relationship among brand experiences, self-concept congruence, customer satisfaction, and brand preference." Contemporary Management Research 11.2 (2015) – investigates the relationship among brand experiences, self-concept congruence, customer satisfaction, and brand preference.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DARLENE GARCIA-GUERRA, whose telephone number is (571) 270-3339. The examiner can normally be reached M-F 7:30 a.m.-5:00 p.m. EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Brian M. Epstein, can be reached at (571) 270-5389. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Darlene Garcia-Guerra/ Primary Examiner, Art Unit 3625

Prosecution Timeline

Sep 29, 2023
Application Filed
May 19, 2025
Non-Final Rejection — §101, §103, §112
Nov 24, 2025
Response Filed
Mar 04, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602305
CUSTOMER JOURNEY PREDICTION AND RECOMMENDATION SYSTEMS AND METHODS
2y 5m to grant · Granted Apr 14, 2026
Patent 12591927
SYSTEMS AND METHODS FOR DETERMINING A GRAPHICAL USER INTERFACE FOR GOAL DEVELOPMENT
2y 5m to grant · Granted Mar 31, 2026
Patent 12591845
METHOD AND ARRANGEMENT FOR CARRYING OUT CONSTRUCTION MEASURES
2y 5m to grant · Granted Mar 31, 2026
Patent 12572876
SYSTEM AND METHOD FOR OBTAINING AUDIT EVIDENCE
2y 5m to grant · Granted Mar 10, 2026
Patent 12572866
STORE MANAGEMENT SYSTEM AND STORE MANAGEMENT METHOD
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
23%
Grant Probability
57%
With Interview (+34.1%)
4y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
