Prosecution Insights
Last updated: April 19, 2026
Application No. 18/658,025

SYSTEMS AND METHODS FOR MACHINE LEARNING MODEL TO CALCULATE USER ELASTICITY AND GENERATE RECOMMENDATIONS USING HETEROGENEOUS DATA

Status: Non-Final OA (§101, §DP)
Filed: May 08, 2024
Examiner: DETWEILER, JAMES M
Art Unit: 3621
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: ZS Associates Inc.
OA Round: 3 (Non-Final)
Grant Probability: 38% (At Risk)
Predicted OA Rounds: 3-4
Time to Grant: 2y 12m
Grant Probability With Interview: 83%

Examiner Intelligence

- Career Allow Rate: 38% (grants only 38% of cases; 193 granted / 502 resolved; -13.6% vs TC avg)
- Interview Lift: +44.2% (strong lift, measured across resolved cases with vs. without interview)
- Typical Timeline: 2y 12m average prosecution; 39 applications currently pending
- Career History: 541 total applications across all art units

Statute-Specific Performance

- §101: 30.7% (-9.3% vs TC avg)
- §103: 34.2% (-5.8% vs TC avg)
- §102: 7.1% (-32.9% vs TC avg)
- §112: 23.3% (-16.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 502 resolved cases.
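The figures above are internally consistent: every per-statute delta implies the same Tech Center average. A minimal sketch reproducing them, assuming the counts shown in the report (193 granted of 502 resolved) and an implied 40% TC average inferred from the deltas — an assumption here, not a published USPTO figure:

```python
# Sketch: reproduce the dashboard's examiner statistics from the raw counts.
# The counts (193 granted / 502 resolved) come from the report above; the
# 40% Tech Center average is inferred from the per-statute deltas and is an
# assumption, not an official USPTO figure.

granted, resolved = 193, 502
allow_rate = granted / resolved                 # career allowance rate
print(f"Career allow rate: {allow_rate:.1%}")   # ~38.4%, shown as 38%

tc_avg = 0.40  # implied by every delta (e.g., 30.7% - 40% = -9.3%)
per_statute = {"101": 0.307, "103": 0.342, "102": 0.071, "112": 0.233}
for statute, rate in per_statute.items():
    print(f"\u00a7{statute}: {rate:.1%} ({rate - tc_avg:+.1%} vs TC avg)")
```

The 38.4% career rate rounds to the 38% displayed, and each computed delta matches the one shown in the table.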

Office Action

§101 §DP
DETAILED ACTION

Status of the Application

In the response filed on May 12, 2025, the Applicant amended claims 1 and 11. Claims 1-20 are pending and currently under consideration for patentability.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments and Arguments

Applicant’s arguments with respect to the rejection of claims 1-20 under 35 U.S.C. 101 have been fully considered and are not persuasive. The rejections of claims 1-20 under 35 U.S.C. 101 are maintained accordingly. Applicant specifically argues that 1) “generating a training dataset including features and a set of elasticity scores is not advertising, nor is it a "sales activity or behavior." Instead, the claims are directed to training a machine-learning model to generate uplift scores for a set of users. The claims do not recite any type of advertisement, marketing, or performance of any action that could be construed as a sales activity or behavior. Accordingly, the claims do not recite an abstract idea.”

Examiner respectfully disagrees with Applicant’s first argument. Generating a set of offer recommendations for a plurality of users (e.g., based on a set of generated features based on interaction data of the users) and generating and/or updating a model configured to generate uplift scores (a measure of the impact on purchasing probability of users due to offers being presented) using a generated training dataset are both advertising, marketing, or sales activities. Generating a set of offers of items (e.g., item discounts) is undeniably an advertising, marketing, or sales activity.
Generating and updating a model that is configured to generate uplift scores for users is also an advertising, marketing, or sales activity, as the purpose of such a model is to identify optimal users to provide with offer recommendations (as evidenced by Applicant’s specification and claim 3). Doing so can help increase or maximize revenue for the entity generating the offer recommendations (e.g., per [0068], [0072], and [0083] of Applicant’s published disclosure). The step of generating the training dataset is part of this process. That the model is required to be a machine-learning model provides nothing more than mere instructions to implement an abstract idea on a generic computer and further serves merely to generally link the use of the judicial exception to a particular technological environment or field of use. The focus of the claim as a whole is directed to a result or effect that itself is the abstract idea.

Applicant specifically argues that 2) “These claim features provide a technical improvement to systems that generate uplift scores. As explained in paragraph [0124] of the as-filed Specification, the "machine learning model may be periodically and/or continuously trained. For instance, as the recommendations (or other predictions and derived information) are presented to the end-user, the system may monitor the end-user's behavior (e.g., whether a recommendation was accepted/rejected or whether a predicted attribute was revised). The monitored data may be fed back into the machine learning model to improve its accuracy. The machine learning model can re-calibrate itself accordingly, such that the results are customized for the end-user." See amended independent claim 1. These approaches improve the accuracy of the machine-learning model and therefore accuracy of generated uplift scores.”

Examiner respectfully disagrees with Applicant’s second argument.
Improving the accuracy of a model (e.g., a model that predicts/calculates uplift scores) by updating the model with new data does not amount to an improvement to computer functionality/capabilities or to a computer-related technology or technological environment, and does not amount to a technology-based solution to a technology-based problem. Updating the model (and/or generating the training dataset) is also not an “additional” element in the claims (it is part of the abstract idea). At most, the ordered combination of claim elements is directed to a non-technical improvement to an abstract idea itself (e.g., an improved business process for generating uplift scores or offer recommendations for users).

Applicant specifically argues that 3) “Applicant submits that the claims recite additional elements that amount to significantly more than any purported abstract idea. In particular, the claims recite unconventional operations that cannot be considered to routine or well-understood, such as "generat[ing] the training dataset to include the set of features of the plurality of users and a set of elasticity scores corresponding to the plurality of users determined from interactions with the set offer recommendations," and "updat[ing] a machine-learning model to generate uplift scores for users using the training dataset," where "the uplift scores representing an impact on purchasing probability due to offers being presented," as recited in amended independent claim 1.”

Examiner respectfully disagrees with Applicant’s third argument. Generating the training dataset and updating the model are not “additional” elements in the claim. These steps are part of the abstract idea. Furthermore, the search for an inventive concept should not be confused with a novelty or non-obviousness determination. See Mayo, 566 U.S.
at 91, 101 USPQ2d at 1973 (rejecting "the Government’s invitation to substitute §§ 102, 103, and 112 inquiries for the better established inquiry under § 101"). As made clear by the courts, the "‘novelty’ of any element or steps in a process, or even of the process itself, is of no relevance in determining whether the subject matter of a claim falls within the § 101 categories of possibly patentable subject matter." Intellectual Ventures I v. Symantec Corp., 838 F.3d 1307, 1315, 120 USPQ2d 1353, 1358 (Fed. Cir. 2016).

Applicant’s arguments with respect to the rejection of amended claims 1 and 11 under 35 U.S.C. § 103 have been considered and are persuasive. Examiner notes, however, that Applicant’s arguments with respect to Valentine are not persuasive. Applicant argues “The Office Action cites to paragraphs [0123]-[0125] and [0145]-[0147] of Valentine as allegedly teaching a model that generates forecast sales lift for each user/segment using elasticity scores for given product-user/segment combinations." See Office Action, page 14. Paragraphs [0123]-[0125] appear to describe a "coefficient estimator 308" which uses "imputed variables and data to estimate coefficients." Paragraphs [0145]-[0147] of Valentine appear to describe a plot, shown in FIG. 30 of Valentine, that indicates "average category lift by segment for a proposed promotional activity," which in one example "may indicate the lift for a given category for a 10% decrease in price.
However, merely using a coefficient estimator 308 to "estimate coefficients," and generating plots relating to lift and proposed promotional activities, as described in Valentine, does not teach or suggest generating a "training dataset to include [a] set of features of [a] plurality of users and a set of elasticity scores corresponding to the plurality of users determined from interactions with [a] set offer recommendations," and updating "a machine-learning model to generate uplift scores for users using the training dataset," where "the uplift scores represent[ing] an impact on purchasing probability due to offers being presented," as claimed. Valentine is simply silent to the generation of any such training dataset, much less a training dataset including a set of elasticity scores and features as claimed.”

Applicant has oversimplified the teachings of Valentine. Paragraphs [0144]-[0146] do not merely disclose generating plots relating to lift. Paragraph [0144] explains that the system may generate sets of elasticity values for different customer segments for different products using transaction log data for these customers over a certain period of time. These correspond to the set of elasticity scores corresponding to the plurality of users as claimed. Paragraphs [0145]-[0147] explain that this set of elasticity values “enable the analysis and forecasting of the impact of price…on distinct consumer segments. For example sales lift for a price change for numerous segments. It may be seen here that different segments react differently to price change than other segments”. Paragraph [0012] further explains that the aggregated transaction data can be analyzed to generate “elasticity coefficients”, and that “these coefficients may be utilized to generate…lifts and demand models by an optimization engine”.
Clearly, the elasticity coefficients (elasticity scores) are used (e.g., as input) to generate lift model(s) (e.g., the lift models used to predict sales lift for a price change as discussed in paragraphs [0145]-[0147] and as illustrated in Fig. 28 (which shows different uplift scores for different segments of users for different offer price differences) and Fig. 30). Paragraphs [0123]-[0125] discuss the model generation process, and suggest that the modeling engine can use regression or other “machine learning” techniques to create the models, which can “use their knowledge about the prior distribution of coefficients to guide the model estimation”. Machine learning inherently requires a training dataset comprising the inputs used to make a prediction of a certain output. Generating (i.e., training) a lift model (which generates predicted lift values) using machine learning and utilizing elasticity coefficients clearly uses a set of elasticity coefficients (or “knowledge about the prior distribution of coefficients”) as training data to generate the model. Therefore, although not explicitly discussing a step of generating a “training dataset”, Valentine nevertheless discloses this feature based on the description of the types of models being generated (lift models), the data inputs used to generate these models (elasticity coefficients), and the way the models may be generated (regression or other forms of machine learning, which inherently require a training dataset of the predictor variables). Although the Examiner believes Valentine discloses generating a training dataset including a set of elasticity scores corresponding to the plurality of users, the training dataset used to generate the machine learning model does not itself comprise the set of features as well.
Nor is it reasonable to interpret the interactions used to generate the elasticity scores as interactions specifically with the set of generated offer recommendations, per Applicant’s disclosure. Applicant’s claims and original disclosure inform the broadest reasonable interpretation of the claimed uplift scores and elasticity scores. There are several examples in the prior art of systems configured to generate such elasticity scores for segments of customers using various machine learning models (e.g., neural networks), to generate such uplift scores for customer segments using various machine learning models (e.g., neural networks), to generate elasticity scores and uplift scores for various customer segments, and/or to use uplift scores and/or elasticity scores to segment customers and/or to identify targets for marketing campaigns. Because elasticity scores and uplift scores are very similar, it is not common for elasticity scores to be used to train or update a machine learning model to generate uplift scores. It should be noted that the prior art does disclose using elasticity scores as inputs to models used to generate uplift values. However, the specific requirements of Applicant’s claims distinguish them from the prior art. Applicant’s claims require a combination of generating a set of offer recommendations and generating a training dataset to include at least both i) the set of features of the plurality of users and ii) a set of elasticity scores corresponding to the plurality of users determined from interactions with the set of offer recommendations. A model used to generate uplift scores for all possible offer values (or prices/discounts not corresponding to a previously generated set of offer recommendations) would not read on the claimed invention. Furthermore, the claims require updating a machine-learning model (requiring the model to have already been trained, per [0124] of Applicant’s disclosure).
Finally, the generated uplift scores are specifically representative of an impact on purchasing probability due to offers being presented. While individual features may be known per se, there is no teaching or suggestion absent Applicant’s own disclosure to combine these features in the way that is claimed other than with impermissible hindsight.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Claim(s) 11-20 is/are drawn to methods (i.e., a process), while claim(s) 1-10 is/are drawn to systems (i.e., a machine/manufacture). As such, claims 1-20 is/are drawn to one of the statutory categories of invention (Step 1: YES).

Step 2A - Prong One: In prong one of step 2A, the claim(s) is/are analyzed to evaluate whether it/they recite(s) a judicial exception.
Claim 1 (representative of independent claim 11) recites/describes the following steps: generate a set of features for a training dataset, the set of features generated based on interaction data of a plurality of users; generate a set of offer recommendations for the plurality of users based on the set of features; generate the training dataset to include the set of features of the plurality of users and a set of elasticity scores corresponding to the plurality of users determined from interactions with the set of offer recommendations; and update a model to generate uplift scores for users using the training dataset, the uplift scores representing an impact on purchasing probability due to offers being presented. These steps, under their broadest reasonable interpretation, describe or set forth generating a set of features based on interaction data of a plurality of users, generating a set of offer recommendations (e.g., product advertisements/offers/promotions/prices) for the plurality of users based on the generated set of features, and updating an uplift model based on a generated training dataset (which includes the set of features of the plurality of users and a set of elasticity scores corresponding to the plurality of users determined from interactions with the set of offer recommendations), which amounts to a fundamental economic principle or practice and/or commercial or legal interactions (specifically, an advertising, marketing, or sales activity or behavior). Generating a set of offers of items (e.g., item discounts) is undeniably an advertising, marketing, or sales activity. Generating and updating a model that is configured to generate uplift scores for users is also an advertising, marketing, or sales activity, as the purpose of such a model is to identify optimal users to provide with offer recommendations (as evidenced by Applicant’s specification and claim 3).
Doing so can help increase or maximize revenue for the entity generating the offer recommendations (e.g., per [0068], [0072], and [0083] of Applicant’s published disclosure). The step of generating the training dataset is part of this process. These limitations therefore fall within the “certain methods of organizing human activity” subject matter grouping of abstract ideas.

Additionally and/or alternatively, each of the above-recited steps, under their broadest reasonable interpretation, encompasses a human manually (e.g., in their mind, or using paper and pen) generating a set of features based on interaction data of a plurality of users, generating a set of offer recommendations (e.g., product advertisements/offers/promotions/prices) for the plurality of users based on the generated set of features, and updating an uplift model based on a generated training dataset (which includes the set of features of the plurality of users and a set of elasticity scores corresponding to the plurality of users determined from interactions with the set of offer recommendations), i.e., one or more concepts performed in the human mind, such as one or more observations, evaluations, judgments, or opinions, but for the recitation of generic computer components. If one or more claim limitations, under their broadest reasonable interpretation, cover performance of the limitation(s) in the mind but for the recitation of generic computer components, then they fall within the “mental processes” subject matter grouping of abstract ideas. As such, the Examiner concludes that claim 1 recites an abstract idea (Step 2A – Prong One: YES). Independent claim 11 recites/describes nearly identical steps (and therefore also recites limitations that fall within this subject matter grouping of abstract ideas), and this claim is therefore determined to recite an abstract idea under the same analysis.
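For orientation only, the claim 1 pipeline the rejection paraphrases (interaction data → features → offer recommendations → training dataset with elasticity scores → updated uplift model) can be sketched as follows. All data, names, thresholds, and the single-weight "model" here are hypothetical illustrations, not Applicant's actual implementation or anything disclosed in the prior art:

```python
# Illustrative sketch only: the four claimed steps, with made-up data and a
# toy one-parameter "model update" standing in for the machine-learning model.

# 1) Generate features from user interaction data (hypothetical click history).
interactions = {"u1": [1, 0, 1], "u2": [0, 0, 1]}
features = {u: sum(c) / len(c) for u, c in interactions.items()}

# 2) Generate offer recommendations based on the features (threshold is arbitrary).
offers = {u: ("10% discount" if f > 0.4 else "5% discount")
          for u, f in features.items()}

# 3) Build the training dataset: each user's feature paired with an elasticity
#    score derived from their interaction with the recommended offer (made up).
elasticity = {"u1": 0.8, "u2": 0.3}
training_dataset = [(features[u], elasticity[u]) for u in interactions]

# 4) "Update" a model to generate uplift scores (impact on purchase probability
#    of an offer being presented) -- here, one weight nudged per sample.
weight = 1.0
for x, y in training_dataset:
    weight += 0.1 * (y - weight * x) * x   # toy gradient-style update

uplift_scores = {u: weight * features[u] for u in features}
print(offers, uplift_scores)
```

The sketch is only meant to make the claimed data flow concrete; whether such a flow is an abstract idea is exactly the legal dispute in this action.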
Each of the dependent claims likewise recites/describes these steps (by incorporation, and therefore also recites limitations that fall within this subject matter grouping of abstract ideas), and these claims are therefore determined to recite an abstract idea under the same analysis. Any element(s) recited in a dependent claim that are not specifically identified/addressed by the Examiner under step 2A (prong two) or step 2B of this analysis shall be understood to be an additional part of the abstract idea recited by that particular claim. The same reasoning is similarly applicable to the limitations in the remaining dependent claims, and their respective limitations are not reproduced here for the sake of brevity.

Step 2A - Prong Two: In prong two of step 2A, an evaluation is made whether a claim recites any additional element, or combination of additional elements, that integrates the exception into a practical application of that exception. An “additional element” is an element that is recited in the claim in addition to (beyond) the judicial exception (i.e., an element/limitation that sets forth an abstract idea is not an additional element). The phrase “integration into a practical application” is defined as requiring an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception.
The claim(s) recite the following additional elements/limitations:
- “a system, comprising: one or more processors coupled to non-transitory memory, the one or more processors configured to” (independent claim 1)
- “by one or more processors coupled to non-transitory memory… by the one or more processors… by the one or more processors” (independent claim 11)
- “a machine-learning model” (independent claims 1 and 11)
- “wherein the one or more processors are further configured to” (dependent claims 2-9)
- “by the one or more processors” (dependent claims 12-19)
- “the machine-learning model” (dependent claims 2, 4, 6, 12, 14, and 16)
- “in a graphical user interface” (dependent claims 6 and 16)
- “in the graphical user interface” (dependent claims 7 and 17)

The requirement to execute the claimed steps/functions using “a system, comprising: one or more processors coupled to non-transitory memory, the one or more processors configured to” (independent claim 1) and/or “by one or more processors coupled to non-transitory memory… by the one or more processors… by the one or more processors” (independent claim 11) and/or “wherein the one or more processors are further configured to” (dependent claims 2-9) and/or “by the one or more processors” (dependent claims 12-19) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer.
Applicant’s own disclosure explains that these elements may be embodied as a general-purpose computer (e.g., see paragraphs [0046]-[0051] “can include…a computing device 160, and/or a server 170…includes a memory…a communications interface…and a processor…random access memory (RAM) a read-only memory (ROM), a hard drive…and/or the like…can store…one or more software modules and/or code that includes instructions to cause the processor to execute one or more processes…processor can be…any suitable processing device…general-purpose processor, a central processing unit (CPU)…”, [0085]-[0090] “in some instances, the computing device can be/include, for example, a personal computer, a laptop, a smartphone…server…”, [0104]-[0108] “in some embodiments, the devices can be implemented on a single hardware device…or a software platform…can be performed by any processor or computer discussed and/or shown herein… “ and [0128]-[0129] “general-purpose processor…” of the published disclosure). This/these limitation(s) do/does not impose any meaningful limits on practicing the abstract idea, and therefore do/does not integrate the abstract idea into a practical application (see MPEP 2106.05(f)). The requirement for the recited model to be “a machine-learning model” (independent claims 1 and 11) and “the machine-learning model” (dependent claims 2, 4, 6, 12, 14, and 16) provides nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f) and the July 2024 Subject Matter Eligibility Examples and corresponding analysis. 
MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. The machine-learning model is used to generally apply the abstract idea without placing any limits on how the machine-learning model functions. Rather, these limitations only recite the outcome (“to generate uplift scores”) and do not include any details about how the machine-learning model accomplishes these functions. That a machine is required to learn the model invokes computers or other machinery merely as a tool to perform an existing process (i.e., learning some statistical model/correlation). Furthermore, the machine-learning model is recited at a high level of generality. See MPEP 2106.05(f) and the July 2024 Subject Matter Eligibility Examples and corresponding analysis. This/these limitation(s) do/does not impose any meaningful limits on practicing the abstract idea, and therefore do/does not integrate the abstract idea into a practical application (see MPEP 2106.05(f)). The recited additional element(s) of “in a graphical user interface” (dependent claims 6 and 16) and/or “in the graphical user interface” (dependent claims 7 and 17) serve(s) merely to generally link the use of the judicial exception to a particular technological environment or field of use.
Specifically, it/they serve(s) to limit the application of the abstract idea to computing environments, such as distributed computing environments and/or the internet, where information is represented digitally, exchanged between computers over a network, and presented using graphical user interfaces. This reasoning was demonstrated in Intellectual Ventures I LLC v. Capital One Bank (Fed. Cir. 2015), where the court determined that “an abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment, such as the Internet [or] a computer.” This/these limitation(s) do/does not impose any meaningful limits on practicing the abstract idea, and therefore do/does not integrate the abstract idea into a practical application (see MPEP 2106.05(g)). The recitation of “a machine-learning model” (independent claims 1 and 11) and/or “the machine-learning model” (dependent claims 2, 4, 6, 12, 14, and 16) also merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional element “a machine-learning model” limits the identified judicial exceptions to computing environments where models/correlations are learned using computers (i.e., machines), this type of limitation merely confines the use of the abstract idea to a particular technological environment (machine-learned models) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h) and the July 2024 Subject Matter Eligibility Examples and corresponding analysis. This/these limitation(s) do/does not impose any meaningful limits on practicing the abstract idea, and therefore do/does not integrate the abstract idea into a practical application (see MPEP 2106.05(h)).
Furthermore, although the claims recite a specific sequence of computer-implemented functions, and although the specification suggests certain functions may be advantageous for various reasons (e.g., business reasons), the Examiner has determined that the ordered combination of claim elements (i.e., the claims as a whole) is not directed to an improvement to computer functionality/capabilities or to a computer-related technology or technological environment, and does not amount to a technology-based solution to a technology-based problem. For example, Applicant’s as-filed specification suggests that it is advantageous for advertisers/businesses to analyze historical user data (e.g., interaction data of a plurality of users) to generate a set of features for a training dataset, generate a set of offer recommendations for the plurality of users based on the set of features, and update a model (e.g., a model configured to generate uplift scores for users based on a set of elasticity scores determined from interactions with the set of offer recommendations, the uplift scores representing an impact on purchasing probability due to offers being presented), because doing so can help to effectively identify users that will react to an offer recommendation (e.g., reward/discount) in a way that is positive with respect to a desired performance indicator (e.g., engage with the offer, purchase a product, increase revenue) for targeting with the offer recommendation(s) (see, for example, paragraphs [0004], [0068], [0072], and [0093] of Applicant’s published disclosure). These are non-technical, subjective business advantages/improvements. At most, the ordered combination of claim elements is directed to a non-technical improvement to an abstract idea itself (e.g., an improved way of generating a set of offer recommendations for users). Dependent claims 10 and 20 fail to include any additional elements.
In other words, each of the limitations/elements recited in respective dependent claims 10 and 20 is/are further part of the abstract idea as identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea recited in each respective claim). For example, claim 10 recites “wherein the interaction data of the plurality of users comprises heterogeneous data including at least one of multiple data types or originating from multiple data sources”. This is an abstract limitation which further sets forth the abstract idea encompassed by claim 10. This limitation is not an “additional element”, and therefore it is not subject to further analysis under Step 2A - Prong Two or Step 2B. The same logic applies to each of the other dependent claims, whose limitations are not being repeated here for the sake of brevity and clarity. The Examiner has therefore determined that the additional elements, or combination of additional elements, do not integrate the abstract idea into a practical application. Accordingly, the claim(s) is/are directed to an abstract idea (Step 2A – Prong Two: NO).

Step 2B: In step 2B, the claims are analyzed to determine whether any additional element, or combination of additional elements, is/are sufficient to ensure that the claims amount to significantly more than the judicial exception. This analysis is also termed a search for an "inventive concept." An "inventive concept" is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself. Alice Corp., 134 S. Ct. at 2355, 110 USPQ2d at 1981 (citing Mayo, 566 U.S.
at 72-73, 101 USPQ2d at 1966). As discussed above in “Step 2A – Prong Two”, the requirement to execute the claimed steps/functions using “a system, comprising: one or more processors coupled to non-transitory memory, the one or more processors configured to” (independent claim 1) and/or “by one or more processors coupled to non-transitory memory… by the one or more processors… by the one or more processors” (independent claim 11) and/or “wherein the one or more processors are further configured to” (dependent claims 2-9) and/or “by the one or more processors” (dependent claims 12-19) and/or “a machine-learning model” (independent claims 1 and 11) and/or “the machine-learning model” (dependent claims 2, 4, 6, 12, 14, and 16) is equivalent to adding the words “apply it” on a generic computer and/or mere instructions to implement the abstract idea on a generic computer. These limitations therefore do not qualify as “significantly more” (see MPEP 2106.05(f)). As discussed above in “Step 2A – Prong Two”, the recited additional element(s) of “in a graphical user interface” (dependent claims 6 and 16) and/or “in the graphical user interface” (dependent claims 7 and 17) and/or “a machine-learning model” (independent claims 1 and 11) and/or “the machine-learning model” (dependent claims 2, 4, 6, 12, 14, and 16) serve(s) merely to generally link the use of the judicial exception to a particular technological environment or field of use. These limitations therefore do not qualify as “significantly more” (see MPEP 2106.05(g)). Viewing the additional limitations in combination also shows that they fail to ensure the claims amount to significantly more than the abstract idea.
When considered as an ordered combination, the additional components of the claims add nothing that is not already present when considered separately, and thus simply append the abstract idea with words equivalent to "apply it" on a generic computer and/or mere instructions to implement the abstract idea on a generic computer, and generally link the abstract idea to a particular technological environment or field of use. Dependent claims 10 and 20 fail to include any additional elements. In other words, each of the limitations/elements recited in respective dependent claims 10 and 20 is further part of the abstract idea as identified by the Examiner for each respective dependent claim (i.e., they are part of the abstract idea identified by the Examiner to which each respective claim is directed). The Examiner has therefore determined that no additional element, or combination of additional claim elements, is sufficient to ensure the claims amount to significantly more than the abstract idea identified above (Step 2B: NO).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of non-statutory anticipation-type double patenting as being unpatentable over claims 1-18 of US Patent No. 12,033,177 (corresponding to co-pending US Application No. 18/372,616). Although the conflicting claims are not identical, they are not patentably distinct from each other. Each of the instant claims is anticipated by at least one claim of US Patent No. 12,033,177.
The exact limitations of each of these claims are not reproduced here for clarity and brevity, as the Examiner believes the anticipation would be self-evident to a PHOSITA. It is further noted that Applicant has previously filed a Terminal Disclaimer for each of the previous patents in the family chain.

Claims 1-20 are rejected on the ground of non-statutory anticipation-type double patenting as being unpatentable over claims 1-20 of US Patent No. 11,803,871 (corresponding to co-pending US Application No. 17/545,221). Although the conflicting claims are not identical, they are not patentably distinct from each other. Each of the instant claims is anticipated by at least one claim of US Patent No. 11,803,871. The exact limitations of each of these claims are not reproduced here for clarity and brevity, as the Examiner believes the anticipation would be self-evident to a PHOSITA. It is further noted that Applicant has previously filed a Terminal Disclaimer for each of the previous patents in the family chain.

Indication of Novel and Non-Obvious Subject Matter

Independent claims 1 and 11 recite novel and non-obvious subject matter. Each of the dependent claims similarly recites novel and non-obvious subject matter by virtue of its dependency on one of these claims. The following is an examiner's statement of reasons for indication of novel and non-obvious subject matter:

The closest prior art of record is Mena (U.S. PG Pub No. 2007/0011224, January 11, 2007 - hereinafter "Mena"); Valentine et al. (U.S. PG Pub No. 2011/0131079, June 2, 2011 - hereinafter "Valentine"); Xu et al. (U.S. PG Pub No. 2016/0189207, June 30, 2016 - hereinafter "Xu"); Hines et al. (U.S. Patent No. 8,170,823, May 1, 2012 - hereinafter "Hines"); Vitaladevuni et al. (U.S. Patent No. 10,354,184, July 16, 2019 - hereinafter "Vitaladevuni"); Fano et al. (U.S. PG Pub No. 2005/0189414, September 1, 2005 - hereinafter "Fano"); Michaud et al. (U.S. PG Pub No. 2010/0191570, July 29, 2010 - hereinafter "Michaud"); Fahner et al. (U.S. PG Pub No. 2012/0158474, June 21, 2012); Friedman et al. (U.S. PG Pub No. 2020/0234365, July 23, 2020); Jai et al. (U.S. PG Pub No. 2020/0134628, April 30, 2020); Zheng et al. (U.S. Patent No. 9,208,444, December 8, 2015); Lei et al. (U.S. PG Pub No. 2021/0334830, October 28, 2021); and "Modeling the Distribution of Price Sensitivity and Implications for Optimal Retail Pricing" (Blattberg, Robert C. et al., published in Journal of Business and Economic Statistics, February 1995).

Mena discloses clustering/segmenting customers based on customer transaction data, using a neural network to generate predictive scores for customer propensity to purchase in response to promotional offers, further segmenting the customers based on their predicted propensity to buy/respond to marketing offers, and identifying target users based thereon for generating/transmitting a recommendation offer.

Valentine discloses clustering/segmenting customers based on customer transaction data and calculating a set of elasticity scores for one or more segments of users using machine learning algorithms.

Xu discloses training a neural network based on historic offers provided to historic users and a subset of the historic offers that were accepted by the historic users, the neural network trained using customer data to output uplift scores for users based on their associated data.

Hines discloses training a model to generate uplift scores using elasticity scores as input, and iteratively updating the model using responses to promotions and corresponding elasticity scores calculated based on these responses.

Vitaladevuni discloses training a neural network to determine a user's purchase probability based on changes in product price, using user-specific learned pricing sensitivities as input.

Fano discloses calculating, using the set of elasticity scores, a threshold for identifying various customer segments.
Michaud discloses presenting a graphical indication of a distribution of elasticity scores among a set of users.

Fahner discloses a machine-learned model which calculates a score (CEI) representative of a lift due to a coupon for respective users, which factors in the amount of the discount and takes into consideration the user's price sensitivities ([0035]-[0036], [0022], [0025]); the model is trained based on historical offer acceptances.

Friedman discloses wherein the set of elasticity scores is used to calculate a threshold for identifying the segments ([0050] "….assigning said customer to one of said plurality of price-sensitivity segments…taking into account said price-sensitivity score…each of price-sensitivity segments is stored as a plurality of price-sensitivity thresholds…over the range of possible price-sensitivity scores").

Jai discloses ranking, by the processor, each feature within the set of features, wherein the processor uses a subset of the set of features in accordance with their respective ranking to generate the graph ([0070] feature rankings presented to user to select features to use).

Zheng discloses presenting histograms showing the distribution of various customer scores within a particular customer segment in order to provide retailers additional insights regarding the distribution of certain scores within a particular customer segment (Fig. 3B).

Lei discloses pre-processing heterogeneous customer data, using ML to extract significant predictor features related to purchase propensity and demand, generating various clusters/segments using the extracted features, and running additional ML models on top of the extracted features and clusters/segments to derive propensity/demand scores for various customer subsets for use in deploying optimized promotions for certain customer subsets.
"Modeling the Distribution of Price Sensitivity and Implications for Optimal Retail Pricing" discloses deriving price sensitivity values for various households/segments, determining the distribution of these sensitivity values, and using these insights to price products more optimally and thereby increase revenue.

As per Claims 1 and 11, the closest prior art of record, taken either individually or in combination with other prior art of record, fails to teach or suggest the specific combination of "generate a set of offer recommendations for the plurality of users based on the set of features; generate the training dataset to include the set of features of the plurality of users and a set of elasticity scores corresponding to the plurality of users determined from interactions with the set of offer recommendations; and update a machine-learning model to generate uplift scores for users based on a set of elasticity scores determined from interactions with the set of offer recommendations using the training dataset, the uplift scores representing an impact on purchasing probability due to offers being presented."

Applicant's claims and original disclosure inform the broadest reasonable interpretation of the claimed uplift scores and elasticity scores. There are several examples in the prior art of systems configured to generate such elasticity scores for segments of customers using various machine learning models (e.g., neural networks), to generate such uplift scores for customer segments using various machine learning models (e.g., neural networks), to generate elasticity scores and uplift scores for various customer segments, and/or to use uplift scores and/or elasticity scores to segment customers and/or to identify targets for marketing campaigns.
However, while individual features may be known per se, there is no teaching or suggestion, absent Applicant's own disclosure, to combine these features in the way that is claimed (e.g., training a neural network to generate/output uplift scores based at least on elasticity scores as input, where the elasticity scores are specifically a numeric value representing a magnitude of change in purchasing probability for a respective user relative to a magnitude of change in price, and where the generated uplift scores are specifically representative of an impact on purchasing probability for one or more users due to offers being presented to the one or more users), other than with impermissible hindsight.

Claims 2-10 and 12-19 depend upon claims 1 and 11, include all the limitations of claims 1 and 11, and are novel and non-obvious for the same reasons.

Conclusion

No claim is allowed. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAMES M DETWEILER, whose telephone number is (571) 272-4704. The examiner can normally be reached Monday-Friday from 8 AM to 5 PM ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Waseem Ashraf, can be reached at (571) 270-3948. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/JAMES M DETWEILER/
Primary Examiner, Art Unit 3621
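As characterized in the examiner's statement of reasons, the claimed elasticity score is a magnitude of change in purchasing probability relative to a magnitude of change in price, and the claimed uplift score is the impact on purchasing probability due to an offer being presented. A minimal sketch of those two definitions follows; all names and numbers are hypothetical illustrations, not the applicant's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class OfferInteraction:
    """One user's observed response to an offer (hypothetical fields)."""
    p_buy_base: float    # estimated purchase probability with no offer presented
    p_buy_offer: float   # estimated purchase probability with the offer presented
    base_price: float    # regular item price
    offer_price: float   # discounted price in the offer

def elasticity_score(x: OfferInteraction) -> float:
    # Claim language: magnitude of change in purchasing probability
    # relative to magnitude of change in price.
    d_prob = abs(x.p_buy_offer - x.p_buy_base)
    d_price = abs(x.offer_price - x.base_price)
    return d_prob / d_price if d_price else 0.0

def uplift_score(x: OfferInteraction) -> float:
    # Claim language: impact on purchasing probability due to the
    # offer being presented.
    return x.p_buy_offer - x.p_buy_base

interaction = OfferInteraction(p_buy_base=0.10, p_buy_offer=0.25,
                               base_price=20.0, offer_price=15.0)
print(round(elasticity_score(interaction), 4))  # 0.03 (0.15 probability / $5 price)
print(round(uplift_score(interaction), 4))      # 0.15
```

In the claimed system these scores would be produced per user by the machine-learning model trained on the generated dataset, rather than computed directly as above; the sketch only fixes the quantities the claim language describes.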

Prosecution Timeline

May 08, 2024: Application Filed
Feb 07, 2025: Non-Final Rejection — §101, §DP
May 01, 2025: Examiner Interview Summary
May 01, 2025: Applicant Interview (Telephonic)
May 12, 2025: Response Filed
May 30, 2025: Final Rejection — §101, §DP
Aug 04, 2025: Applicant Interview (Telephonic)
Aug 04, 2025: Examiner Interview Summary
Aug 04, 2025: Response after Non-Final Action
Sep 15, 2025: Request for Continued Examination
Oct 01, 2025: Response after Non-Final Action
Dec 17, 2025: Non-Final Rejection — §101, §DP
Mar 25, 2026: Applicant Interview (Telephonic)
Mar 25, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596943
MACHINE-LEARNED MODEL INCLUDING INCREMENTALITY ESTIMATION
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586132
METHOD AND APPARATUS FOR FACILITATING MERCHANT SELF SERVICE WITH RESPECT TO FINANCING AND CONTENT SUPPORT FOR MARKETING EVENTS
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12567087
COORDINATED DEPLOYMENT OF ENLARGED GRAPHICAL COMMUNICATIONS IN DISPENSING ENVIRONMENTS
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12567006
SYSTEM AND METHOD FOR MACHINE LEARNING-BASED DELIVERY TAGGING
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12530705
MERCHANT LOYALTY PLATFORM
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 38%
With Interview: 83% (+44.2%)
Median Time to Grant: 2y 12m
PTA Risk: High
Based on 502 resolved cases by this examiner. Grant probability derived from career allow rate.
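The projection figures can be cross-checked with quick arithmetic. This sketch assumes the interview lift is a simple difference between the with-interview grant rate and the overall career allow rate, which is one plausible reading; the report does not state its exact methodology:

```python
# Figures taken from the examiner panel above.
granted, resolved = 193, 502
career_allow_rate = granted / resolved
print(round(career_allow_rate * 100, 1))  # 38.4, reported as "38%"

# If the 83% "With Interview" figure is a grant rate among interviewed
# cases, subtracting the stated +44.2% lift recovers an implied
# without-interview rate close to the career average.
with_interview = 0.83
without_interview = with_interview - 0.442
print(round(without_interview * 100, 1))  # 38.8, near the 38.4% career rate
```

The small gap between 38.4% and 38.8% suggests the lift is computed over a subset of cases (resolved cases with an interview) rather than the full career history.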
