Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Application
This non-final office action is in response to Applicant’s response to the restriction requirement filed on 11/24/2025. Applicant elected Group II with traverse. Upon further consideration of the claims and review of the prior art, Examiner is persuaded by Applicant that a serious search burden is not present. Therefore, the restriction requirement has been withdrawn. Claims 1-20 are currently pending and have been examined below.
Claim Objections
Claim 19 is objected to because of the following informalities: “with plurality of consumers” in line 11 should be replaced with “with the plurality of consumers” or equivalent. Appropriate correction is required.
Claim Rejections – 35 U.S.C. 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1-14
Per step 1 of the eligibility analysis set forth in MPEP § 2106, subsection III, the claims are directed to a process, machine, or manufacture.
Per step 2A Prong One, Claim 1 recites specific limitations which fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2) as follows:
receiving data from a consumer including identifying information of the consumer;
transform the data from an input format to a proprietary intermediate format that reduces the data to descriptive features, the descriptive features not including the identifying information;
evaluating the reduced data;
sending an eligibility notification to the consumer based on a result of the evaluation.
As noted above, these limitations fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2). Specifically, these limitations fall within the grouping Certain Methods of Organizing Human Activity (i.e., fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)). That is, the limitations recite a method to verify consumer eligibility for gated offers by evaluating anonymized consumer data for offer eligibility and notifying eligible consumers, which is an advertising/marketing activity that falls within the certain methods of organizing human activity grouping. Additionally, the steps above also fall within the mental processes grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. Specifically, a human being can mentally (or with pen and paper) transform consumer data into an intermediate format with descriptive features that excludes consumer identifying information, evaluate the data for offer eligibility, and notify a consumer of eligibility based on the result of the evaluation. Accordingly, claim 1 recites an abstract idea.
Per step 2A Prong Two, the Examiner finds that the judicial exception is not integrated into a practical application. Claim 1 recites the additional limitations of:
applying one or more feature extractors [that transform the data from an input format to a proprietary intermediate] digital [format];
evaluating the reduced data using a trained machine-learning (ML) model, the trained ML model trained on descriptive features extracted from similar consumer data.
The additional limitations, when viewed individually and when viewed as an ordered combination, and pursuant to the broadest reasonable interpretation, do not integrate the abstract idea into a practical application because each of the additional elements is recited at a high level of generality, implementing the abstract idea on a computer (i.e., “apply it”) or generally linking the use of the judicial exception to a particular technological environment. Specifically:
With respect to “applying one or more feature extractors [that transform the data from an input format to a proprietary intermediate] digital [format]”, Examiner notes that this limitation is claimed at a high level of generality. The recitation of claim limitations that attempt to cover any solution to an identified problem, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words “apply it”. Here, paragraph [0014] of Applicant’s specification recites that “a proposed data anonymization system includes one or more "feature extractors" (e.g., software subroutines) that perform an automated task to precompute or transform data from an input format (e.g., plain text files, XML, JSON, images, etc.) to a proprietary intermediate digital format, either compressed or uncompressed, that excludes sensitive data.” At this level of generality, the recitation of generic software to convert data from one format (e.g., plain text) to another unspecified proprietary format does not restrict how the result is accomplished or describe a specific mechanism to accomplish the result beyond the use of a generic software subroutine to convert between two formats. Accordingly, this limitation is at the “apply it” level and does not integrate the abstract idea into a practical application.
With respect to “evaluating the reduced data using a trained machine-learning (ML) model, the trained ML model trained on descriptive features extracted from similar consumer data”, Examiner notes that this limitation is recited at a high level of generality, using a generic trained machine learning model to evaluate the data. The claims do not specify the type of machine learning model used, how the model is trained, or the specific inputs or outputs to the model beyond specifying that the model evaluates the reduced data. The generic use of a machine learning model to perform the claimed limitations merely generally applies the abstract idea without placing any limits on how the machine learning model functions. Further, the recitation of claim limitations that attempt to cover any solution to an identified problem, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words “apply it”. See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). Here, Examiner notes that paragraph [0031] of Applicant’s specification recites that “the AI/ML models include AI, ML, deep learning (DL), and/or other neural network models.” At this level of generality, the recitation of claim limitations that attempt to cover any solution to an identified problem (i.e., performing the claimed evaluation step using a generic machine learning model) merely generally links the abstract idea to a technical field/environment, namely a generic computing environment applying machine learning.
Additionally, see Recentive Analytics, Inc. v. Fox Corp. et al., No. 2023-2437, slip op. at 18 (Fed. Cir. Apr. 18, 2025), holding that claims “that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” Here, Examiner takes the position that utilizing a generic machine learning model to perform the claimed evaluation step is the mere application of generic machine learning to a new data environment. Because no improvement to the underlying machine learning models is disclosed, this limitation does not integrate the abstract idea into a practical application.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are recited at a high level of generality, implementing the abstract idea on a computer (i.e., “apply it”) or generally linking the use of the judicial exception to a particular technological environment. The same analysis applies here in Step 2B, i.e., mere instructions to apply an exception in a particular technological environment cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Dependent claims 2-14 merely further narrow the abstract idea and/or generally link the abstract idea to a particular technological environment / apply it and therefore do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Claims 15-18
Per step 1 of the eligibility analysis set forth in MPEP § 2106, subsection III, the claims are directed to a process, machine, or manufacture.
Per step 2A Prong One, Claim 15 recites specific limitations which fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2) as follows:
receive data from a consumer including identifying information of the consumer;
extract a set of descriptive features from the data, the descriptive features not including the identifying information;
classify descriptive features of the set of descriptive features; and
send a notification to the consumer based on the classification.
As noted above, these limitations fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2). Specifically, these limitations fall within the grouping Certain Methods of Organizing Human Activity (i.e., fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)). That is, the limitations recite a method to anonymize consumer data, classify consumer features based on the anonymized data, and notify the consumer of offer eligibility based on the classification, which is an advertising/marketing activity that falls within the certain methods of organizing human activity grouping. Additionally, the steps above also fall within the mental processes grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. Specifically, a human being can mentally (or with pen and paper) extract descriptive features from consumer data, remove consumer identifying information from the extracted features, classify the descriptive features, and notify a consumer based on the classification. Accordingly, claim 15 recites an abstract idea.
Per step 2A Prong Two, the Examiner finds that the judicial exception is not integrated into a practical application. Claim 15 recites the additional limitations of:
[classify descriptive features of the set of descriptive features] using a trained machine-learning (ML) model.
The additional limitations, when viewed individually and when viewed as an ordered combination with the abstract limitations, and pursuant to the broadest reasonable interpretation, do not integrate the abstract idea into a practical application because each of the additional elements is recited at a high level of generality, implementing the abstract idea on a computer (i.e., “apply it”) or generally linking the use of the judicial exception to a particular technological environment. Specifically:
With respect to “classify descriptive features of the set of descriptive features using a trained machine-learning (ML) model”, Examiner notes that this limitation is recited at a high level of generality, using a generic trained machine learning model to classify descriptive features. The claims do not specify the type of machine learning model used, how the model is trained, or the specific inputs or outputs to the model beyond specifying that the model classifies the descriptive features. The generic use of a machine learning model to perform the claimed limitations merely generally applies the abstract idea without placing any limits on how the machine learning model functions. The recitation of claim limitations that attempt to cover any solution to an identified problem, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words “apply it”. See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). Here, Examiner notes that paragraph [0031] of Applicant’s specification recites that “the AI/ML models include AI, ML, deep learning (DL), and/or other neural network models.” At this level of generality, the recitation of claim limitations that attempt to cover any solution to an identified problem (i.e., performing the claimed classification step using a generic machine learning model) merely generally links the abstract idea to a technical field/environment, namely a generic computing environment applying machine learning. Additionally, see Recentive Analytics, Inc. v. Fox Corp. et al., No. 2023-2437, slip op. at 18 (Fed. Cir. Apr. 18, 2025), holding that claims “that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” Here, Examiner takes the position that utilizing a generic machine learning model to perform the claimed classification step is the mere application of generic machine learning to a new data environment. Because no improvement to the underlying machine learning models is disclosed, this limitation does not integrate the abstract idea into a practical application.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are recited at a high level of generality, implementing the abstract idea on a computer (i.e., “apply it”) or generally linking the use of the judicial exception to a particular technological environment. The same analysis applies here in Step 2B, i.e., mere instructions to apply an exception in a particular technological environment cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Dependent claims 16-18 merely further narrow the abstract idea and/or generally link the abstract idea to a particular technological environment / apply it and therefore do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Claims 19-20
Per step 1 of the eligibility analysis set forth in MPEP § 2106, subsection III, the claims are directed to a process, machine, or manufacture.
Per step 2A Prong One, Claim 19 recites specific limitations which fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2) as follows:
receiving a set of consumer data including identifying information of a plurality of consumers;
reducing the set of consumer data to a set of descriptive features, the descriptive features not including the identifying information;
assigning each descriptive feature of the set of descriptive features an identification (ID) code;
classifying the set of descriptive features into predetermined consumer categories;
correlating the classified set of descriptive features with plurality of consumers using the ID codes assigned to the descriptive features; and
sending one or more consumers of the plurality of consumers a notification of eligibility for a gated offer based on the classification.
As noted above, these limitations fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.04(a)(2). Specifically, these limitations fall within the grouping Certain Methods of Organizing Human Activity (i.e., fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)). That is, the limitations recite a method to anonymize consumer data and determine descriptive features, assign ID codes to the features, classify consumer features into consumer categories, correlate the classified set of descriptive features with the plurality of consumers using the ID codes, and send consumers eligibility notifications for gated offers based on the classification, which is an advertising/marketing activity that falls within the certain methods of organizing human activity grouping. Additionally, the steps above also fall within the mental processes grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III. Specifically, a human being can mentally (or with pen and paper) anonymize consumer data and determine descriptive features, assign ID codes to the features, classify consumer features into consumer categories, correlate the classified set of descriptive features with the plurality of consumers using the ID codes, and notify eligible consumers of gated offers based on the classification. Accordingly, claim 19 recites an abstract idea.
Per step 2A Prong Two, the Examiner finds that the judicial exception is not integrated into a practical application. Claim 19 recites the additional limitation of:
[classifying the set of descriptive features into predetermined consumer categories] using a trained machine-learning (ML) model.
The additional limitations, when viewed individually and when viewed as an ordered combination with the abstract limitations, and pursuant to the broadest reasonable interpretation, do not integrate the abstract idea into a practical application because each of the additional elements is recited at a high level of generality, implementing the abstract idea on a computer (i.e., “apply it”) or generally linking the use of the judicial exception to a particular technological environment. Specifically:
With respect to “classifying the set of descriptive features into predetermined consumer categories using a trained machine-learning (ML) model”, Examiner notes that this limitation is recited at a high level of generality, using a generic trained machine learning model to classify descriptive features into predetermined consumer categories. The claims do not specify the type of machine learning model used, how the model is trained, or the specific inputs or outputs to the model beyond specifying that the model classifies the descriptive features. The generic use of a machine learning model to perform the claimed limitations merely generally applies the abstract idea without placing any limits on how the machine learning model functions. The recitation of claim limitations that attempt to cover any solution to an identified problem, with no restriction on how the result is accomplished and no description of the mechanism for accomplishing the result, does not integrate a judicial exception into a practical application or provide significantly more because this type of recitation is equivalent to the words “apply it”. See Electric Power Group, LLC v. Alstom, S.A., 830 F.3d 1350, 1356, 119 USPQ2d 1739, 1743-44 (Fed. Cir. 2016); Intellectual Ventures I v. Symantec, 838 F.3d 1307, 1327, 120 USPQ2d 1353, 1366 (Fed. Cir. 2016); Internet Patents Corp. v. Active Network, Inc., 790 F.3d 1343, 1348, 115 USPQ2d 1414, 1417 (Fed. Cir. 2015). Here, Examiner notes that paragraph [0031] of Applicant’s specification recites that “the AI/ML models include AI, ML, deep learning (DL), and/or other neural network models.” At this level of generality, the recitation of claim limitations that attempt to cover any solution to an identified problem (i.e., performing the claimed classification step using a generic machine learning model) merely generally links the abstract idea to a technical field/environment, namely a generic computing environment applying machine learning.
Additionally, see Recentive Analytics, Inc. v. Fox Corp. et al., No. 2023-2437, slip op. at 18 (Fed. Cir. Apr. 18, 2025), holding that claims “that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” Here, Examiner takes the position that utilizing a generic machine learning model to perform the claimed classification step is the mere application of generic machine learning to a new data environment. Because no improvement to the underlying machine learning models is disclosed, this limitation does not integrate the abstract idea into a practical application.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements are recited at a high level of generality, implementing the abstract idea on a computer (i.e., “apply it”) or generally linking the use of the judicial exception to a particular technological environment. The same analysis applies here in Step 2B, i.e., mere instructions to apply an exception in a particular technological environment cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept in Step 2B.
Dependent claim 20 merely further narrows the abstract idea and/or generally links the abstract idea to a particular technological environment / applies it and therefore does not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 2, 4-7, and 14-18 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication Number 20200242466 (“Mohassel”) in view of US Patent Number 11941655 (“Smaniotto”).
Claim 1
As per claim 1, Mohassel teaches a method to verify consumer eligibility for gated offers, comprising:
receiving data from a consumer including identifying information of the consumer ([0002] “account holder information.” And, [0046] “a given number of participating computers, p1, p2, . . . , pN, (also referred to as clients) each have private data, respectively d1, d2, . . . , dN.” And, [0014] “The private data from multiple sources can be secret shared.” And, [0056] “a data client can secret-share its private data.” And, [0228] “each client can generate shares of its own private data and then send each share to one of the servers.”);
applying one or more feature extractors that transform the data from an input format to a proprietary intermediate digital format that reduces the data to descriptive features, the descriptive features not including the identifying information ([0015] “the private input data can be represented as integers . . . multiplying these integers (and other intermediate values) and integer-represented weights.” And, [0230] “truncate the results so as to reduce storage size.” And, [0231] “the features can be stored as integers.” And, [0040] “The private data from multiple sources can be secret shared.” And, [0041] “the secret-shared parts can be multiplied by weights and functions applied to them in a privacy-preserving manner.” And, [0058] “encode the secret as an arbitrary length binary number.” And, [0059] “intermediate values can occur during the training and/or evaluation of the model. Examples of intermediate values include the output of a node in a neural network, an inner product of input values and weights prior to evaluation by a logistic function . . . The intermediate values are sensitive because they can also reveal information about the data. Thus, every intermediate value can remain secret-shared.” And, [0042] “secret-shared result (e.g., the delta value for updating the weights) can be truncated by truncating the secret-shared parts at the training computers, thereby allowing efficient computation and limiting the amount of memory for storing the integer values.” And, [0248] “apply the final (optimized) weight parts of the model to the d feature and intermediate values.”);
evaluating the reduced data using a trained machine-learning (ML) model, the trained ML model trained on descriptive features extracted from similar consumer data ([0105] “the evaluation algorithm takes {circumflex over (x)} and F as input.” And, [0109] “runs the evaluation algorithm.” And, [0233] “evaluated across the training samples, e.g., to provide a current accuracy of the mode.” And, [0003] “Process starts with training data, shown as existing records.” And, [0004] “a learning process can be used to train the model. Learning module is shown receiving existing records and providing model after training has been performed.” And, [0059] “intermediate values can occur during the training and/or evaluation of the model. Examples of intermediate values include the output of a node in a neural network, an inner product of input values and weights prior to evaluation by a logistic function, etc.” And, [0048] “a set of clients C1, . . . , Cm want to train various models on their joint data . . . the data can be horizontally or vertically partitioned, or be secret-shared among the clients, e.g., as part of a previous computation.” Examiner interprets the partitioned joint data from the set of clients as similar consumer data.).
Mohassel does not explicitly teach but Smaniotto teaches:
sending an eligibility notification to the consumer based on a result of the evaluation ([col. 11, line 65 – col. 12, line 3] “identifying a set of products that may be eligible for offers.” And, [col. 2, line 35 – line 40] “analyze the set of data using the machine learning model to determine a digital offer for an additional product associated with the entity or an additional entity, and avail the digital offer for review by the individual via an electronic device.” And, [col. 6, lines 41-57] “input this data into a trained machine learning model, which may be configured to output a set of digital offers that may entice the set of individuals to make additional product purchases. The set of electronic devices may access and display the set of digital offers for review.” And, see Figure 4B displaying the available offers on the user device.).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify Mohassel to include sending an eligibility notification to the consumer based on a result of the evaluation, as taught by Smaniotto, in order to “manage and facilitate digital rewards and/or offers for users, such as the set of users associated with the electronic devices” ([col. 5, lines 25-35]) and to “ensure accurate, automatic, and efficient determination of digital offers” ([col. 4, lines 33-50]).
Claim 2
As per claim 2, Mohassel does not explicitly teach but Smaniotto teaches:
wherein evaluating the reduced data using the trained ML model further comprises classifying the descriptive features into one of a plurality of predefined categories, and determining in real time at a point of sale whether the consumer is eligible for a gated offer based on the classification ([claim 1] “a first machine learning model using a first set of training data identifying at least (i) a set of product catalogs, and (ii) a categorization of a purchased set of products within the set of product catalogs.” And [col. 11 line 54 – col. 12, line 3] “identify a segment of users that meets a given budget requirement for a given product category over a given period of time, and/or that meets any specified goal(s) of a marketing campaign.” And, [col. 11, line 65 – col. 12, line 3] “identifying a set of products that may be eligible for offers.” And, [col. 2, line 35 – line 40] “analyze the set of data using the machine learning model to determine a digital offer for an additional product associated with the entity or an additional entity, and avail the digital offer for review by the individual via an electronic device.” And, [col. 6, lines 41-57] “input this data into a trained machine learning model, which may be configured to output a set of digital offers that may entice the set of individuals to make additional product purchases. The set of electronic devices may access and display the set of digital offers for review.” And, see Figure 4B displaying the available offers on the user device.).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify the combination of Mohassel and Smaniotto to include wherein evaluating the reduced data using the trained ML model further comprises classifying the descriptive features into one of a plurality of predefined categories, and determining in real time at a point of sale whether the consumer is eligible for a gated offer based on the classification as taught by Smaniotto in order to “manage and facilitate digital rewards and/or offers for users, such as the set of users associated with the electronic devices” ([col. 5, lines 25-35]) and “ensure accurate, automatic, and efficient determination of digital offers” ([col. 4, lines 33-50]).
Claim 4
As per claim 4, Mohassel further teaches:
wherein the reduced data is a compressed, lower-dimension version of the data ([0058] “encode the secret as an arbitrary length binary number.” And, [0059] “intermediate values can occur during the training and/or evaluation of the model. Examples of intermediate values include the output of a node in a neural network, an inner product of input values and weights prior to evaluation by a logistic function . . . The intermediate values are sensitive because they can also reveal information about the data. Thus, every intermediate value can remain secret-shared.” And, [0042] “secret-shared result (e.g., the delta value for updating the weights) can be truncated by truncating the secret-shared parts at the training computers, thereby allowing efficient computation and limiting the amount of memory for storing the integer values.”).
Claim 5
As per claim 5, Mohassel does not explicitly teach but Smaniotto teaches:
wherein the identifying information includes one or more of:
a name of the consumer; an address, phone number, and/or email of the consumer; an Internet Protocol (IP) address of the consumer; credit and/or financial information of the consumer; an identification number or code of the consumer; and an image of the consumer ([col. 4, lines 1-10] “tracks customer interactions and behavior across different channels, such as email.” And, [col. 10, lines 35-42] “identified from a given user's purchase history, e-mail, digital image.”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify the combination of Mohassel and Smaniotto to include wherein the identifying information includes one or more of: a name of the consumer; an address, phone number, and/or email of the consumer; an Internet Protocol (IP) address of the consumer; credit and/or financial information of the consumer; an identification number or code of the consumer; and an image of the consumer as taught by Smaniotto in order to “manage and facilitate digital rewards and/or offers for users, such as the set of users associated with the electronic devices” ([col. 5, lines 25-35]) and “ensure accurate, automatic, and efficient determination of digital offers” ([col. 4, lines 33-50]).
Claim 6
As per claim 6, Mohassel further teaches:
wherein transforming the data from the input format to the proprietary intermediate digital format further comprises transforming the data in a non-invertible manner, where no inverse transformation exists that could generate the data in the input format from the reduced data in the proprietary intermediate digital format ([0015] “the private input data can be represented as integers (e.g., by shifting bits of floating-point numbers) . . . multiplying these integers (and other intermediate values) and integer-represented weights.” Examiner interprets shifted floating point numbers / secret shared parts as data transformed in a non-invertible manner. And, [0231] “the features can be stored as integers.” And, [0040] “The private data from multiple sources can be secret shared.” And, [0041] “the secret-shared parts can be multiplied by weights and functions applied to them in a privacy-preserving manner.” And, [0058] “encode the secret as an arbitrary length binary number.” And, [0059] “intermediate values can occur during the training and/or evaluation of the model. Examples of intermediate values include the output of a node in a neural network, an inner product of input values and weights prior to evaluation by a logistic function . . . The intermediate values are sensitive because they can also reveal information about the data. Thus, every intermediate value can remain secret-shared.” And, [0042] “secret-shared result (e.g., the delta value for updating the weights) can be truncated by truncating the secret-shared parts at the training computers, thereby allowing efficient computation and limiting the amount of memory for storing the integer values.” And, [0248] “apply the final (optimized) weight parts of the model to the d feature and intermediate values.”).
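By way of illustration only (a hypothetical sketch by the editor, not drawn from Mohassel or any cited reference): a non-invertible transformation is one in which distinct inputs map to the same output, so no inverse transformation can recover the original input data. Coarse bucketing is a minimal example of such a many-to-one mapping:

```python
# Hypothetical sketch: reduce raw numeric values to coarse buckets.
# Bucketing is many-to-one, so no inverse transformation exists that
# could regenerate the original input values from the reduced output.

def reduce_non_invertible(values, bucket_size=10):
    """Map each value to the floor of its bucket; information is discarded."""
    return [(v // bucket_size) * bucket_size for v in values]

reduced = reduce_non_invertible([3, 17, 42, 49])
print(reduced)  # [0, 10, 40, 40] -- distinct inputs 42 and 49 collapse together
```

Because 42 and 49 both reduce to 40, an observer holding only the reduced data cannot reconstruct the inputs, which parallels the claimed non-invertible intermediate format.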
Claim 7
As per claim 7, Mohassel further teaches:
wherein reducing the data to descriptive features further comprises converting the data into a one-dimensional or multi-dimensional vector of numerical values ([0015] “the private input data can be represented as integers . . . multiplying these integers (and other intermediate values) and integer-represented weights.” And, [0231] “the features can be stored as integers.” And, [0058] “encode the secret as an arbitrary length binary number.” And, [0059] “intermediate values can occur during the training and/or evaluation of the model. Examples of intermediate values include the output of a node in a neural network, an inner product of input values and weights prior to evaluation by a logistic function . . . The intermediate values are sensitive because they can also reveal information about the data. Thus, every intermediate value can remain secret-shared.” Examiner interprets an inner product of input values as an example vector of numerical values. And, [0042] “secret-shared result (e.g., the delta value for updating the weights) can be truncated by truncating the secret-shared parts at the training computers, thereby allowing efficient computation and limiting the amount of memory for storing the integer values.” And, [0248] “apply the final (optimized) weight parts of the model to the d feature and intermediate values.”).
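For illustration only (a hypothetical sketch, not taken from the cited references; the field names are invented for the example): converting a consumer record into a one-dimensional vector of numerical values while dropping identifying fields might look like the following:

```python
# Hypothetical sketch: reduce a record to a 1-D vector of numerical values,
# excluding identifying fields such as name and email.

def to_feature_vector(record):
    """Keep only numeric, non-identifying descriptive features, in key order."""
    identifying = {"name", "email", "ip_address"}
    return [float(v) for k, v in sorted(record.items())
            if k not in identifying and isinstance(v, (int, float))]

vec = to_feature_vector({"name": "Jane", "age": 34, "purchases": 12,
                         "email": "j@x.com"})
print(vec)  # [34.0, 12.0]
```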
Claim 14
As per claim 14, Mohassel further teaches:
further comprising storing the reduced data, and retraining the ML model based on the reduced data ([0042] “memory for storing the integer values.” And [0230] “truncate the results so as to reduce storage size.” And, [0228] “training computers store secret-shared private data.” And, [0233] “evaluated across the training samples, e.g., to provide a current accuracy of the mode.” And, [0003] “Process starts with training data, shown as existing records.” And, [0004] “a learning process can be used to train the model. Learning module is shown receiving existing records and providing model after training has been performed.” And, [0059] “intermediate values can occur during the training and/or evaluation of the model. Examples of intermediate values include the output of a node in a neural network, an inner product of input values and weights prior to evaluation by a logistic function, etc.” And, [0048] “a set of clients C1, . . . , Cm want to train various models on their joint data . . . the data can be horizontally or vertically partitioned, or be secret-shared among the clients, e.g., as part of a previous computation.” And, [0014] “iterative updates of the weights based on error differences in a current predicted output and the known outputs of the data sample.”).
Claim 15
As per claim 15, Mohassel teaches a data anonymization system, comprising a processor and a non-transitory memory storing instructions that when executed, cause the processor to:
receive data from a consumer including identifying information of the consumer ([0002] “account holder information.” And, [0046] “a given number of participating computers, p1, p2, . . . , pN, (also referred to as clients) each have private data, respectively d1, d2, . . . , dN.” And, [0014] “The private data from multiple sources can be secret shared.” And, [0056] “a data client can secret-share its private data.” And, [0228] “each client can generate shares of its own private data and then send each share to one of the servers.”);
extract a set of descriptive features from the data, the descriptive features not including the identifying information ([0015] “the private input data can be represented as integers . . . multiplying these integers (and other intermediate values) and integer-represented weights.” And, [0231] “the features can be stored as integers.” And, [0040] “The private data from multiple sources can be secret shared.” And, [0041] “the secret-shared parts can be multiplied by weights and functions applied to them in a privacy-preserving manner.” And, [0058] “encode the secret as an arbitrary length binary number.” And, [0042] “secret-shared result (e.g., the delta value for updating the weights) can be truncated by truncating the secret-shared parts at the training computers, thereby allowing efficient computation and limiting the amount of memory for storing the integer values.”);
classify descriptive features of the set of descriptive features using a trained machine-learning (ML) model ([0087] “classification problems with two classes, the output label y is binary.” And, [0231] “the features can be stored as integers.” And, [0229] “The output Y of a training sample can correspond to a known classification that is determined by a separate mechanism, e.g., based on information that is obtained after the d features.” And, [0105] “the evaluation algorithm takes {circumflex over (x)} and F as input.” And, [0109] “runs the evaluation algorithm.” And, [0233] “evaluated across the training samples, e.g., to provide a current accuracy of the mode.” And, [0003] “Process starts with training data, shown as existing records.” And, [0048] “a set of clients C1, . . . , Cm want to train various models on their joint data . . . the data can be horizontally or vertically partitioned, or be secret-shared among the clients, e.g., as part of a previous computation.”).
Mohassel does not explicitly teach but Smaniotto teaches:
send a notification to the consumer based on the classification ([col. 11, line 65 – col. 12, line 3] “identifying a set of products that may be eligible for offers.” And, [col. 2, line 35 – line 40] “analyze the set of data using the machine learning model to determine a digital offer for an additional product associated with the entity or an additional entity, and avail the digital offer for review by the individual via an electronic device.” And, [col. 6, lines 41-57] “input this data into a trained machine learning model, which may be configured to output a set of digital offers that may entice the set of individuals to make additional product purchases. The set of electronic devices may access and display the set of digital offers for review.” And, see Figure 4B displaying the available offers on the user device.).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify Mohassel to include send a notification to the consumer based on the classification as taught by Smaniotto in order to “manage and facilitate digital rewards and/or offers for users, such as the set of users associated with the electronic devices” ([col. 5, lines 25-35]) and “ensure accurate, automatic, and efficient determination of digital offers” ([col. 4, lines 33-50]).
Claim 16
As per claim 16, Mohassel further teaches:
wherein the instructions further cause the processor to perform one or more of:
convert the data into a one-dimensional or multi-dimensional vector of numerical values;
filter the data using one or more digital signal processing filters;
calculate descriptive statistics of the data;
reduce image data of the data to a frequency distribution of intensity values for a plurality of color channels of the image data; and
generate a differential output of data elements included in two or more fields of an IP message of the data using one or more of precomputed lookup tables, public databases, and/or public registries, the differential output comprising a relationship between a first data element included in a metadata field of the IP message and a second data element included in a message field of the IP messages ([0015] “the private input data can be represented as integers . . . multiplying these integers (and other intermediate values) and integer-represented weights.” And, [0231] “the features can be stored as integers.” And, [0058] “encode the secret as an arbitrary length binary number.” And, [0059] “intermediate values can occur during the training and/or evaluation of the model. Examples of intermediate values include the output of a node in a neural network, an inner product of input values and weights prior to evaluation by a logistic function . . . The intermediate values are sensitive because they can also reveal information about the data. Thus, every intermediate value can remain secret-shared.” Examiner interprets an inner product of input values as an example vector of numerical values. And, [0042] “secret-shared result (e.g., the delta value for updating the weights) can be truncated by truncating the secret-shared parts at the training computers, thereby allowing efficient computation and limiting the amount of memory for storing the integer values.” And, [0248] “apply the final (optimized) weight parts of the model to the d feature and intermediate values.”).
Claim 17
As per claim 17, Mohassel further teaches:
wherein the reduced data is stored in a database of the data anonymization system ([0230] “truncate the results so as to reduce storage size.” And, [0042] “secret-shared result (e.g., the delta value for updating the weights) can be truncated by truncating the secret-shared parts at the training computers, thereby allowing efficient computation and limiting the amount of memory for storing the integer values.” And, [0053] “servers . . . can collectively store all of the private data.”).
Claim 18
As per claim 18, Mohassel further teaches:
wherein the reduced data is transmitted between a data acquisition system of the data anonymization system and a data processing system of the data anonymization system ([0052] “a two-server architecture for use in training a machine learning model using secret shared data from data clients.” And, [0230] “truncate the results so as to reduce storage size.” And, [0010] “an untrusted server collects and combines the encrypted data from multiple clients, and transfers the data to a trusted client.” And, [0108] “The receiver can obtain the corresponding random numbers from the sender via oblivious transfer, and thus the sender does not know the receiver's input.” And, [0230] “truncate the results so as to reduce storage size.” And, [0042] “secret-shared result (e.g., the delta value for updating the weights) can be truncated by truncating the secret-shared parts.” And, [0126] “receiver sending the sender's encrypted share.”).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication Number 20200242466 (“Mohassel”) in view of US Patent Publication Number 11941655 (“Smaniotto”) as applied to claim 2 above, and in further view of US Patent Publication Number 11847246 (“Rodgers”).
Claim 3
As per claim 3, Mohassel does not explicitly teach but Rodgers teaches
wherein the ML model is trained via a training procedure comprising:
extracting descriptive features from a set of consumer data similar to the received data; assigning an identification (ID) code to the extracted descriptive features; labeling the descriptive features with ground truth labels, the ground truth labels correlated with the descriptive features using the ID code; training the ML model on the labeled descriptive features ([col. 1, lines 44-47] “generating a token representative of private data.” And, [col. 1, lines 35-40] “associating the token and associated entity with additional corresponding data as feature data for training of a machine learning system . . . the token may be a number, string, hash.” Examiner interprets the token assigned to the feature data as the identification code. And, [col. 1, lines 39-45] “The token provides a mechanism whereby the organization can communicate event or attribute labels to the machine learning system without revealing to the learning machine system the meaning of the event or attribute labels.” And, [col. 2, lines 15-16] “The token may be representative of membership in a group.” And, [col. 7, lines 30-35] “the trained machine learning model assigns the entity to the group.” And, [col. 6, lines 55-60] “using tokens to identify entities who are members of a group . . . The groups may be, for example, millennials who have a credit card balance, first time home buyers, couples with dual income and no children, active car buyers, etc.” Examiner interprets the labeled groups as ground truth labels.).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify the combination of Mohassel and Smaniotto to include wherein the ML model is trained via a training procedure comprising: extracting descriptive features from a set of consumer data similar to the received data; assigning an identification (ID) code to the extracted descriptive features; labeling the descriptive features with ground truth labels, the ground truth labels correlated with the descriptive features using the ID code; training the ML model on the labeled descriptive features as taught by Rodgers in order to “improv[e] the trained machine learning system using feedback” (Rodgers [col. 3, lines 15-16]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication Number 20200242466 (“Mohassel”) in view of US Patent Publication Number 11941655 (“Smaniotto”) as applied to claim 7 above, and in further view of US Patent Application Publication Number 20210056459 (“Yerli”).
Claim 8
As per claim 8, Mohassel does not explicitly teach but Yerli teaches:
wherein reducing the data to descriptive features further comprises filtering the converted data using one or more digital signal processing filters ([0072] “required during data preparation including . . . digital signal processing algorithms such as raw data reduction or filtering.” And, [0076] “converting raw data into machine-usable data sets feasible for application as machine learning data sets. Data preparation may include techniques known in the art, such as data pre-processing, which may be used . . . techniques such as . . . data transformation, and data reduction . . . which may be used to convert the raw data into a suitable format.”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify the combination of Mohassel and Smaniotto to include wherein reducing the data to descriptive features further comprises filtering the converted data using one or more digital signal processing filters as taught by Yerli so that “there is no need for manually labeling the data . . . enabling a faster and more cost-efficient way of training of machine learning algorithms” (Yerli [0020]).
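For illustration only (a hypothetical sketch, not taken from Yerli or the other cited references): a moving-average filter is one common digital signal processing filter of the kind that could be applied during data reduction:

```python
# Hypothetical sketch: smooth a numeric signal with a sliding-window mean,
# a simple example of a digital signal processing filter used in data reduction.

def moving_average(signal, window=3):
    """Return the windowed means of the signal; shorter inputs yield []."""
    if len(signal) < window:
        return []
    return [sum(signal[i:i + window]) / window
            for i in range(len(signal) - window + 1)]

print(moving_average([1, 2, 3, 4, 5]))  # [2.0, 3.0, 4.0]
```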
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication Number 20200242466 (“Mohassel”) in view of US Patent Publication Number 11941655 (“Smaniotto”) as applied to claim 7 above, and in further view of US Patent Application Publication Number 20230176888 (“Garg”).
Claim 9
As per claim 9, Mohassel does not explicitly teach but Garg teaches:
wherein reducing the data to descriptive features further comprises computing descriptive statistics of the data ([0066] “one or more features extracted . . . suitable features include . . . descriptive statistics . . . and a reduced-dimensionality version of the restricted data.”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify the combination of Mohassel and Smaniotto to include wherein reducing the data to descriptive features further comprises computing descriptive statistics of the data as taught by Garg so that “inferences drawn from at least a portion of the . . . data” (Garg [0066]) are more readily made.
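For illustration only (a hypothetical sketch, not taken from Garg): reducing raw data to a handful of descriptive statistics, rather than retaining the underlying records, might look like the following:

```python
# Hypothetical sketch: summarize raw values as descriptive statistics,
# discarding the individual records themselves.
import statistics

def descriptive_stats(values):
    """Return a small summary of the data in place of the raw values."""
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "stdev": statistics.stdev(values),
        "min": min(values),
        "max": max(values),
    }

stats = descriptive_stats([10, 20, 20, 30])
print(stats["mean"], stats["median"])  # 20 20.0
```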
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication Number 20200242466 (“Mohassel”) in view of US Patent Publication Number 11941655 (“Smaniotto”) as applied to claim 7 above, and in further view of US Patent Application Publication Number 20060077409 (“Hoshii”).
Claim 10
As per claim 10, Mohassel does not explicitly teach but Hoshii teaches:
wherein the data includes image data, and reducing the image data to descriptive features further comprises reducing the image data to a frequency distribution of intensity values for a plurality of color channels of the image data ([0059] “frequency distributions of gray levels of red, green, and blue represented by color data items constituting the input image data.” And, [0017] “a frequency distribution is produced using each of components of first color image data, that is, each of color data items constituting first color image data. A highly frequently used color is identified based on the distribution.” And, [abstract] “reductions in the number of calculations and an amount of data.”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify the combination of Mohassel and Smaniotto to include wherein the data includes image data, and reducing the image data to descriptive features further comprises reducing the image data to a frequency distribution of intensity values for a plurality of color channels of the image data as taught by Hoshii resulting in “reductions in the number of calculations and an amount of data” (Hoshii [abstract]).
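For illustration only (a hypothetical sketch, not taken from Hoshii; pixels are modeled as (R, G, B) tuples rather than read from an actual image file): reducing image data to a frequency distribution of intensity values per color channel might look like the following:

```python
# Hypothetical sketch: reduce image data to per-channel frequency
# distributions (histograms) of intensity values.
from collections import Counter

def channel_histograms(pixels):
    """Return one intensity histogram per color channel of (R, G, B) pixels."""
    channels = {"red": Counter(), "green": Counter(), "blue": Counter()}
    for r, g, b in pixels:
        channels["red"][r] += 1
        channels["green"][g] += 1
        channels["blue"][b] += 1
    return channels

hist = channel_histograms([(255, 0, 0), (255, 10, 0), (0, 10, 255)])
print(hist["red"][255])  # 2
```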
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication Number 20200242466 (“Mohassel”) in view of US Patent Publication Number 11941655 (“Smaniotto”) as applied to claim 1 above, and in further view of US Patent Application Publication Number 20080244744 (“Thomas”).
Claim 11
As per claim 11, Mohassel does not explicitly teach but Thomas teaches:
wherein the data includes an Internet Protocol (IP) message, and reducing the data to descriptive features further comprises generating a differential or summary output of data elements included in two or more fields of the IP message using one or more of precomputed lookup tables, public databases, and/or public registries ([0013] “sampling previous host reputation on IP address . . . correlator process includes sample data . . . using an infrastructure that aggregates and shares reputation and fingerprint data.” And, [0086] “Header fields and IP address.” And, [0029] “The table below lists examples of host action and host attribute.” And, [0072] “Content Style Sheet element, javascript, flash or other HTML element from the ‘fingerprinter.’” And, [0073] “fingerprint is stored in a database.”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify the combination of Mohassel and Smaniotto to include wherein the data includes an Internet Protocol (IP) message, and reducing the data to descriptive features further comprises generating a differential or summary output of data elements included in two or more fields of the IP message using one or more of precomputed lookup tables, public databases, and/or public registries as taught by Thomas “in order to preserve anonymity” (Thomas [0097]) and to “result[] in rapid transfer of information such as computer data, voice or other multimedia information” (Thomas [0005]).
Claim 12
As per claim 12, Mohassel does not explicitly teach but Thomas teaches:
wherein generating the differential or summary output of the data elements included in the two or more fields of the IP message further comprises determining a relationship between a first data element included in a metadata field of the IP message and a second data element included in a message field of the IP message, where either or both of the first data element and the second data element include
plain text, HTML/XML, JSON, image, audio, and/or binary data ([0013] “sampling previous host reputation on IP address . . . correlator process includes sample data . . . using an infrastructure that aggregates and shares reputation and fingerprint data.” And, [0034] “attributes collected using javascript, flash, HTML.” And, [0086] “Header fields and IP address.” And, [0029] “The table below lists examples of host action and host attribute.” And, [0072] “Content Style Sheet element, javascript, flash or other HTML element from the ‘fingerprinter.’” And, [0073] “fingerprint is stored in a database.”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify the combination of Mohassel, Smaniotto, and Thomas to include wherein generating the differential or summary output of the data elements included in the two or more fields of the IP message further comprises determining a relationship between a first data element included in a metadata field of the IP message and a second data element included in a message field of the IP message, where either or both of the first data element and the second data element include plain text, HTML/XML, JSON, image, audio, and/or binary data as taught by Thomas “in order to preserve anonymity” (Thomas [0097]) and to “result[] in rapid transfer of information such as computer data, voice or other multimedia information” (Thomas [0005]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication Number 20200242466 (“Mohassel”) in view of US Patent Publication Number 11941655 (“Smaniotto”) in view of US Patent Application Publication Number 20080244744 (“Thomas”) as applied to claim 12 above, and in further view of US Patent Application Publication Number 20190268421 (“Markuze”).
Claim 13
As per claim 13, Mohassel does not explicitly teach but Markuze teaches:
wherein determining a relationship between a first data element included in a metadata field of the IP message and a second data element included in a message field of the IP message further comprises calculating a physical distance between a first location indicated by an IP address of the IP message and a second location implied or stated in the message field of the IP message ([0338] “identify an approximate location for each IP address, and then uses the identified locations of the IP addresses to quantify a distance between two IP addresses.” And, [0150] “P header, the TCP header and payload of the original data message, as well as a padding field, which includes the next field.”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify the combination of Mohassel, Smaniotto, and Thomas to include wherein determining a relationship between a first data element included in a metadata field of the IP message and a second data element included in a message field of the IP message further comprises calculating a physical distance between a first location indicated by an IP address of the IP message and a second location implied or stated in the message field of the IP message as taught by Markuze “in order to improve redundancy and high availability” (Markuze [0012]).
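For illustration only (a hypothetical sketch, not taken from Markuze; the coordinates are hardcoded example values, whereas a real system would resolve them via a geolocation lookup): calculating a physical distance between a location indicated by an IP address and a location stated in the message field could use the haversine formula:

```python
# Hypothetical sketch: great-circle distance between two lat/lon points,
# e.g., one inferred from an IP address and one stated in the message body.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Return the great-circle distance in kilometers (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius

# Example: approximate coordinates for New York and Los Angeles
d = haversine_km(40.7128, -74.0060, 34.0522, -118.2437)
print(round(d))  # roughly 3,900-3,950 km
```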
Claims 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Publication Number 11847246 (“Rodgers”) in view of US Patent Application Publication Number 20200242466 (“Mohassel”) and in further view of US Patent Publication Number 11941655 (“Smaniotto”).
Claim 19
As per claim 19, Rodgers teaches a method for targeting potential customers with gated offers,
comprising:
receiving a set of consumer data including identifying information of a plurality of consumers ([col. 1, lines 18-21] “communicating personally identifiable information between entities.” And, [col. 3, lines 25-30] “determine information about a group of people. These people may be, for example, its customers, employees, potential customers, people in a particular geographic area, etc. In this example, the organization wants to know which customers are likely to get married within a particular time (for example, within the next six months).” And, [col. 4, lines 50-55] “For example, if a customer ‘John Smith’ has a shared id of 14521, was married on Mar. 1, 2000, and the married event was identified using the token ‘A24’, then the list may include an entry: ‘14521, A24, 03/01/2000’.” And, [col. 4, lines 55-60] “the computer may send the list to a machine learning system.”);
assigning each descriptive feature of the set of descriptive features an identification (ID) code ([col. 1, lines 44-47] “generating a token representative of private data.” And, [col. 1, lines 35-40] “associating the token and associated entity with additional corresponding data as feature data for training of a machine learning system . . . the token may be a number, string, hash.” Examiner interprets the token assigned to the feature data as the identification code. And, [col. 1, lines 39-45] “The token provides a mechanism whereby the organization can communicate event or attribute labels to the machine learning system without revealing to the learning machine system the meaning of the event or attribute labels.”);
classifying the set of descriptive features into predetermined consumer categories using a trained machine-learning (ML) model ([col. 1, lines 35-40] “associating the token and associated entity with additional corresponding data as feature data for training of a machine learning system . . . the token may be a number, string, hash.” [col. 2, lines 15-16] “The token may be representative of membership in a group.” And, [col. 7, lines 30-35] “the trained machine learning model assigns the entity to the group.” And, [col. 6, lines 55-60] “using tokens to identify entities who are members of a group . . . The groups may be, for example, millennials who have a credit card balance, first time home buyers, couples with dual income and no children, active car buyers, etc.” Examiner interprets the groups as the consumer categories.);
correlating the classified set of descriptive features with plurality of consumers using the ID codes assigned to the descriptive features ([col. 1, lines 35-40] “associating the token and associated entity with additional corresponding data as feature data for training of a machine learning system . . . the token may be a number, string, hash.” And, [col. 2, lines 2-8] “receiving from a trained learning machine information identifying one or more entities likely to be associated with a token, where the trained learning machine is trained to make inferences about entities based on a token that is indicative of private data.” And, [col. 2, lines 15-16] “The token may be representative of membership in a group.” And, [col. 7, lines 30-35] “the trained machine learning model assigns the entity to the group.” And, [col. 6, lines 55-60] “using tokens to identify entities who are members of a group . . . The groups may be, for example, millennials who have a credit card balance, first time home buyers, couples with dual income and no children, active car buyers, etc.” Examiner notes that consumers are correlated with the descriptive features of the relevant groups based on the token (i.e., ID code).);
Rodgers does not explicitly teach but Mohassel teaches:
reducing the set of consumer data to a set of descriptive features, the descriptive features not including the identifying information ([0015] “the private input data can be represented as integers . . . multiplying these integers (and other intermediate values) and integer-represented weights.” And, [0230] “truncate the results so as to reduce storage size.” And, [0231] “the features can be stored as integers.” And, [0040] “The private data from multiple sources can be secret shared.” And, [0041] “the secret-shared parts can be multiplied by weights and functions applied to them in a privacy-preserving manner.” And, [0058] “encode the secret as an arbitrary length binary number.” And, [0059] “intermediate values can occur during the training and/or evaluation of the model. Examples of intermediate values include the output of a node in a neural network, an inner product of input values and weights prior to evaluation by a logistic function . . . The intermediate values are sensitive because they can also reveal information about the data. Thus, every intermediate value can remain secret-shared.” And, [0042] “secret-shared result (e.g., the delta value for updating the weights) can be truncated by truncating the secret-shared parts at the training computers, thereby allowing efficient computation and limiting the amount of memory for storing the integer values.” And, [0248] “apply the final (optimized) weight parts of the model to the d feature and intermediate values.”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify Rodgers to include reducing the set of consumer data to a set of descriptive features, the descriptive features not including the identifying information as taught by Mohassel in order “to more efficiently train a machine learning model while preserving data privacy of various data sources” (Mohassel [0013]).
Rodgers does not explicitly teach but Smaniotto teaches:
sending one or more consumers of the plurality of consumers a notification of eligibility for a gated offer based on the classification ([col. 11, line 65 – col. 12, line 3] “identifying a set of products that may be eligible for offers.” And, [col. 2, line 35 – line 40] “analyze the set of data using the machine learning model to determine a digital offer for an additional product associated with the entity or an additional entity, and avail the digital offer for review by the individual via an electronic device.” And, [col. 6, lines 41-57] “input this data into a trained machine learning model, which may be configured to output a set of digital offers that may entice the set of individuals to make additional product purchases. The set of electronic devices may access and display the set of digital offers for review.” And, see Figure 4B displaying the available offers on the user device. And, [claim 1] “a first machine learning model using a first set of training data identifying at least (i) a set of product catalogs, and (ii) a categorization of a purchased set of products within the set of product catalogs.” And, [col. 11, line 54 – col. 12, line 3] “identify a segment of users that meets a given budget requirement for a given product category over a given period of time, and/or that meets any specified goal(s) of a marketing campaign.”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to modify Rodgers and Mohassel to include sending one or more consumers of the plurality of consumers a notification of eligibility for a gated offer based on the classification as taught by Smaniotto in order to “manage and facilitate digital rewards and/or offers for users, such as the set of users associated with the electronic devices” ([col. 5, lines 25-35]) and to “ensure accurate, automatic, and efficient determination of digital offers” ([col. 4, lines 33-50]).
Claim 20
As per claim 20, Rodgers further teaches:
wherein reducing the set of consumer data to a set of descriptive features includes one or more of:
converting the data into a one-dimensional or multi-dimensional vector of numerical values;
filtering the data using one or more digital signal processing filters;
calculating descriptive statistics of the data;
reducing image data of the data to a frequency distribution of intensity values for a plurality of color channels of the image data; and
generating a differential output of data elements included in two or more fields of an IP message of the data using one or more of precomputed lookup tables, public databases, and/or public registries, the differential output comprising a relationship between a first data element included in a metadata field of the IP message and a second data element included in a message field of the IP messages ([col. 5, lines 65-67] “the trainer may calculate numerical representations of training data (e.g., in vector form).” And, [col. 3, lines 25-30] “determine information about a group of people. These people may be, for example, its customers, employees, potential customers, people in a particular geographic area, etc. In this example, the organization wants to know which customers are likely to get married within a particular time (for example, within the next six months).” And, [col. 4, lines 50-55] “For example, if a customer “John Smith” has a shared id of 14521, was married on Mar. 1, 2000, and the married event was identified using the token “A24”, then the list may include an entry: ‘14521, A24, 03/01/2000’.” And, [col. 4, lines 55-60] “the computer may send the list to a machine learning system.”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Patent Application Publication Number 20180285969 (“Busch”) discloses using machine learning to predict consumer eligibility.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALLAN J WOODWORTH, II whose telephone number is (571)272-6904. The examiner can normally be reached Mon-Fri 9:00-5:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ilana Spar can be reached on (571) 270-7537. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALLAN J WOODWORTH, II/Primary Examiner, Art Unit 3622