Prosecution Insights
Last updated: April 19, 2026
Application No. 17/899,355

COMPUTER-BASED SYSTEMS HAVING TECHNOLOGICALLY IMPROVED MACHINE LEARNING RECOMMENDATION ENGINES CONFIGURED/PROGRAMMED TO UTILIZE DYNAMIC VARIABLE RATIO FEEDBACK AND METHODS OF USE THEREOF

Status: Non-Final Office Action (§ 101, § 102, § 103)
Filed: Aug 30, 2022
Examiner: SMITH, KEVIN LEE
Art Unit: 2122
Tech Center: 2100 — Computer Architecture & Software
Assignee: Capital One Services LLC
OA Round: 1 (Non-Final)

Grant Probability: 37% (At Risk)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 4y 8m
Grant Probability With Interview: 55%
Examiner Intelligence

Career Allow Rate: 37% (49 granted / 134 resolved; -18.4% vs Tech Center average)
Interview Lift: +18.0% allowance among resolved cases with an interview versus without
Avg Prosecution: 4y 8m typical timeline
Currently Pending: 45 applications
Total Applications: 179 (career history, across all art units)

Statute-Specific Performance

§101: 30.7% (-9.3% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§103: 36.4% (-3.6% vs TC avg)
§112: 17.3% (-22.7% vs TC avg)

Tech Center averages are estimates; based on career data from 134 resolved cases.
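The headline figures above can be cross-checked from the raw counts in this report. The sketch below is illustrative only: the Tech Center averages are not stated directly in the report and are derived here from the reported per-statute deltas, and the 55% with-interview probability is treated as the 37% baseline plus the examiner's +18-point interview lift.

```python
# Cross-check the dashboard's headline numbers from its raw counts.
granted, resolved = 49, 134
career_rate = 100 * granted / resolved
assert round(career_rate) == 37  # "37% Career Allow Rate"

# "+18.0% Interview Lift": the 37% baseline plus the lift matches the
# 55% grant probability quoted for the with-interview scenario.
assert 37 + 18 == 55

# Statute-specific allowance rates and reported deltas vs the Tech
# Center average (percentage points); the implied TC average is
# rate - delta, which comes out to 40.0 for every statute.
rates = {"101": 30.7, "102": 10.1, "103": 36.4, "112": 17.3}
deltas = {"101": -9.3, "102": -29.9, "103": -3.6, "112": -22.7}
tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # {'101': 40.0, '102': 40.0, '103': 40.0, '112': 40.0}
```

That every implied Tech Center average lands on 40.0 suggests the dashboard computes its deltas against a single TC-wide baseline rather than per-statute cohorts.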

Office Action

Rejections under 35 U.S.C. §§ 101, 102, and 103
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2. This communication is in response to the Applicant's submission filed 30 August 2022, in which: Claims 1-20 are pending. Claims 1-20 are rejected.

Information Disclosure Statement

3. An information disclosure statement was submitted on 30 August 2022. The submission complies with the provisions of 37 CFR 1.97. Accordingly, the Examiner considered the information disclosure statement.

Drawings

4. The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description:

Fig. 3, reference "310" is not mentioned in the description.
Fig. 3, reference "320" is not mentioned in the description.
Fig. 3, reference "340" is not mentioned in the description.
Fig. 3, reference "350" is not mentioned in the description.
Fig. 3, reference "360" is not mentioned in the description.
Fig. 3, reference "370" is not mentioned in the description.

Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either "Replacement Sheet" or "New Sheet" pursuant to 37 CFR 1.121(d). If the changes are not accepted by the Examiner, the Applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 U.S.C. § 101

5. 35 U.S.C.
§ 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

6. Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 recites a method, which is a process, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitations of "[(b)] utilizing, by the at least one processor, a feedback machine learning model to predict an average feedback attribute and an average feedback variability attribute based at least in part on the event data," "[(e)] generating, by the at least one processor, a feedback probability distribution," and "[(f)] generating, by the at least one processor, a new event feedback for at least one new event." These activities of "[(b)] utilizing" and "[(e), (f)] generating" are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)).

The claim recites more specifics or details of the abstract idea of "[(b)] utilizing," namely "[(b.1)] wherein the average feedback attribute comprises an average event feedback percentage," and "[(b.2)] wherein the average feedback variability attribute comprises an average event feedback variability percentage," which accordingly are merely more specific to the abstract idea. The claim also recites more specifics or details of the abstract idea of "[(e)] generating . . . a feedback probability distribution," "based at least in part on: [(e.1)] iv) the at least one new event attribute, [(e.2)] v) the average feedback attribute, and [(e.3)] vi) the average feedback variability attribute of the feedback data entry," and accordingly is merely more specific to the abstract idea. The claim further recites more specifics or details of the abstract idea of "[(f)] generating . . . a new event feedback," "based at least in part on: [(f.1)] iii) the at least one new event attribute, and [(f.2)] iv) at least one random selection from the feedback probability distribution," and accordingly is merely more specific to the abstract idea. Accordingly, claim 1 recites an abstract idea.

Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include "at least one processor" and a "computing device," which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), and do not serve to integrate the abstract idea into a practical application. The claim also recites a "feedback machine learning model," which is recited at a high level of generality and is accordingly a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application. The claim recites additional elements of "[(a)] receiving, by at least one processor, event data comprising at least one event data entry that represents at least one event," and "[(d)] receiving, by the at least one processor, at least one new event indication associated with the user profile." The activities of "[(a), (d)] receiving" are insignificant extra-solution activities of mere data gathering, (MPEP § 2106.05(g)), that do not serve to integrate the abstract idea into a practical application.
The claim also recites "[(c)] generating, by the at least one processor, a feedback data entry in a user profile to store the average feedback attribute and the average feedback variability attribute in association with the user profile." The activity of "[(c)] generating . . . to store" is the insignificant extra-solution activity of providing for data gathering and storage, (MPEP § 2106.05(g)), that does not serve to integrate the abstract idea into a practical application. The claim also recites the additional element of "[(g)] instructing, by the at least one processor, to display the new event feedback for the at least one new event indication on a computing device associated with the user profile." The activity of "[(g)] instructing . . . to display" is a post-solution insignificant extra-solution activity of outputting a data display instruction, (MPEP § 2106.05(g)), that does not serve to integrate the abstract idea into a practical application. The claim recites more details or specifics of the additional element of "[(a)] receiving," namely "[(a.1)] wherein the at least one event data entry comprises at least one event attribute," and of "[(d)] receiving," namely "[(d.1)] wherein the at least one new event indication indicates at least one new event and at least one new event attribute of the at least one new event," which are merely more specific to the respective additional element. Accordingly, claim 1 is directed to an abstract idea.

Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements include "at least one processor" and a "computing device," which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea.
The claim also recites a "feedback machine learning model," which is recited at a high level of generality and is accordingly a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. The claim recites additional elements of "[(a)] receiving, by at least one processor, event data comprising at least one event data entry that represents at least one event," and "[(d)] receiving, by the at least one processor, at least one new event indication associated with the user profile." The activities of "[(a), (d)] receiving" are well-understood, routine, and conventional activities of receiving data over a network, (MPEP § 2106.05(d) sub II.i), that do not amount to significantly more than the abstract idea. The claim also recites "[(c)] generating, by the at least one processor, a feedback data entry in a user profile to store the average feedback attribute and the average feedback variability attribute in association with the user profile." The activity of "[(c)] generating . . . to store" is a well-understood, routine, and conventional activity of storing information in memory, (MPEP § 2106.05(d) sub II.iv), that does not amount to significantly more than the abstract idea. The claim also recites the additional element of "[(g)] instructing, by the at least one processor, to display the new event feedback for the at least one new event indication on a computing device associated with the user profile." The activity of "[(g)] instructing . . . to display" is a well-understood, routine, and conventional activity of transmitting data over a network, (MPEP § 2106.05(d) sub II.i), that does not amount to significantly more than the abstract idea.
The claim recites more details or specifics of the additional element of "[(a)] receiving," namely "[(a.1)] wherein the at least one event data entry comprises at least one event attribute," and of "[(d)] receiving," namely "[(d.1)] wherein the at least one new event indication indicates at least one new event and at least one new event attribute of the at least one new event," which are merely more specific to the respective additional element. Accordingly, claim 1 is subject-matter ineligible.

Claim 11 recites a system, which is a product, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, under Step 2A Prong One, the claim recites the limitations of "[(b)] utilize a feedback machine learning model to predict an average feedback attribute and an average feedback variability attribute based at least in part on the event data," "[(e)] generate a feedback probability distribution," and "[(f)] generate a new event feedback for at least one new event." These activities of "[(b)] utilize" and "[(e), (f)] generate" are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claim recites more specifics or details of the abstract idea of "[(b)] utilize," namely "[(b.1)] wherein the average feedback attribute comprises an average event feedback percentage," and "[(b.2)] wherein the average feedback variability attribute comprises an average event feedback variability percentage," which accordingly are merely more specific to the abstract idea.
The claim also recites more specifics or details of the abstract idea of "[(e)] generate a feedback probability distribution," "based at least in part on: [(e.1)] iv) the at least one new event attribute, [(e.2)] v) the average feedback attribute, and [(e.3)] vi) the average feedback variability attribute of the feedback data entry," and accordingly is merely more specific to the abstract idea. The claim further recites more specifics or details of the abstract idea of "[(f)] generate a new event feedback," "based at least in part on: [(f.1)] iii) the at least one new event attribute, and [(f.2)] iv) at least one random selection from the feedback probability distribution," and accordingly is merely more specific to the abstract idea. Accordingly, claim 11 recites an abstract idea.

Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include "at least one processor configured to execute software instructions, wherein upon execution the software instructions cause the at least one processor" and a "computing device," which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), and do not serve to integrate the abstract idea into a practical application. The claim also recites a "feedback machine learning model," which is recited at a high level of generality and is accordingly a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application.
The claim recites additional elements of "[(a)] receive event data comprising at least one event data entry that represents at least one event," and "[(d)] receive at least one new event indication associated with the user profile." The activities of "[(a), (d)] receive" are insignificant extra-solution activities of mere data gathering, (MPEP § 2106.05(g)), that do not serve to integrate the abstract idea into a practical application. The claim also recites "[(c)] generate a feedback data entry in a user profile to store the average feedback attribute and the average feedback variability attribute in association with the user profile." The activity of "[(c)] generate . . . to store" is the insignificant extra-solution activity of providing for data gathering and storage, (MPEP § 2106.05(g)), that does not serve to integrate the abstract idea into a practical application. The claim also recites the additional element of "[(g)] instruct to display the new event feedback for the at least one new event indication on a computing device associated with the user profile." The activity of "[(g)] instruct to display" is a post-solution insignificant extra-solution activity of outputting a data display instruction, (MPEP § 2106.05(g)), that does not serve to integrate the abstract idea into a practical application. The claim recites more details or specifics of the additional element of "[(a)] receive," namely "[(a.1)] wherein the at least one event data entry comprises at least one event attribute," and of "[(d)] receive," namely "[(d.1)] wherein the at least one new event indication indicates at least one new event and at least one new event attribute of the at least one new event," which are merely more specific to the respective additional element. Accordingly, claim 11 is directed to an abstract idea.

Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself.
The additional elements include "at least one processor configured to execute software instructions, wherein upon execution the software instructions cause the at least one processor" and a "computing device," which are generic computer components used to implement the abstract idea, (MPEP § 2106.05(f)), that do not amount to significantly more than the abstract idea. The claim also recites a "feedback machine learning model," which is recited at a high level of generality and is accordingly a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea. The claim recites additional elements of "[(a)] receive event data comprising at least one event data entry that represents at least one event," and "[(d)] receive at least one new event indication associated with the user profile." The activities of "[(a), (d)] receive" are well-understood, routine, and conventional activities of receiving data over a network, (MPEP § 2106.05(d) sub II.i), that do not amount to significantly more than the abstract idea. The claim also recites "[(c)] generate a feedback data entry in a user profile to store the average feedback attribute and the average feedback variability attribute in association with the user profile." The activity of "[(c)] generate . . . to store" is a well-understood, routine, and conventional activity of storing information in memory, (MPEP § 2106.05(d) sub II.iv), that does not amount to significantly more than the abstract idea. The claim also recites the additional element of "[(g)] instruct to display the new event feedback for the at least one new event indication on a computing device associated with the user profile." The activity of "[(g)] instruct to display" is a well-understood, routine, and conventional activity of transmitting data over a network, (MPEP § 2106.05(d) sub II.i), that does not amount to significantly more than the abstract idea.
The claim recites more details or specifics of the additional element of "[(a)] receive," namely "[(a.1)] wherein the at least one event data entry comprises at least one event attribute," and of "[(d)] receive," namely "[(d.1)] wherein the at least one new event indication indicates at least one new event and at least one new event attribute of the at least one new event," which are merely more specific to the respective additional element. Accordingly, claim 11 is subject-matter ineligible.

Claim 2 depends from claim 1. Claim 12 depends from claim 11. The claims further recite "[(h)] determining, by the at least one processor, a target variability." The activity of "[(h)] determining" is a limitation that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claims recite more details or specifics of the abstract idea of "[(h)] determining," where "the average feedback attribute comprising at least one of: a maximum feedback rate, a minimum feedback rate, or a number of standard deviations," and accordingly, are merely more specific to the abstract idea. The additional elements of the claims do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a) – (h)), and thus the claims recite no more than the abstract idea. Thus, claims 2 and 12 are subject-matter ineligible.

Claim 3 depends directly or indirectly from claim 1. Claim 13 depends directly or indirectly from claim 11.
The claims further recite "[(i)] generating, by the at least one processor, a user-specific variable rate feedback record linked to the user profile." The activity of "[(i)] generating" is a limitation that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claims recite more details or specifics of the abstract idea of "[(i)] generating," namely "[(i.1)] wherein the user-specific variable rate feedback record comprises the average feedback attribute and a target variability attribute specifying the target variability," and accordingly, are merely more specific to the abstract idea. The additional elements of the claims do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a) – (h)), and thus the claims recite no more than the abstract idea. Thus, claims 3 and 13 are subject-matter ineligible.

Claim 4 depends from claim 1. Claim 14 depends from claim 11. The claims recite more details or specifics of the abstract idea of "[(b)] utilizing . . . to predict," "wherein the average feedback attribute comprises at least one: [(b.3)] a target frequency comprising an average frequency of applying the new event feedback in response to the at least one new event, or [(b.3)] a target feedback quantity an average quantity of the new event feedback in response to the at least one new event," and accordingly, are merely more specific to the abstract idea.
The additional elements of the claims do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a) – (h)), and thus the claims recite no more than the abstract idea. Thus, claims 4 and 14 are subject-matter ineligible.

Claim 5 depends from claim 1. Claim 15 depends from claim 11. The claims further recite the limitations of "[(h)] determining, by the at least one processor, at least one engagement metric measuring user engagement based at least in part on the at least one new event," "[(i)] comparing, by the at least one processor, the at least one engagement metric with at least one threshold engagement value," and "[(j)] determining, by the at least one processor, a modification to the average feedback attribute based at least in part on comparing the at least one engagement metric with at least one threshold engagement value." The activities of "[(h), (j)] determining" and "[(i)] comparing" are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The additional elements of the claims do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a) – (h)), and thus the claims recite no more than the abstract idea. Thus, claims 5 and 15 are subject-matter ineligible.

Claim 6 depends directly or indirectly from claim 1. Claim 16 depends directly or indirectly from claim 11.
The claims further recite the limitation "[(k)] utilizing, by the at least one processor, the feedback machine learning model to determine the modification to the average feedback attribute based at least in part on model parameters and the at least one engagement metric." The activity of "[(k)] utilizing . . . to determine the modification" can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The additional elements of the claims do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a) – (h)), and thus the claims recite no more than the abstract idea. Thus, claims 6 and 16 are subject-matter ineligible.

Claim 7 depends directly or indirectly from claim 1. Claim 17 depends directly or indirectly from claim 11. The claims recite more details or specifics of the additional element of "the feedback machine learning model," namely "[(b)] wherein the feedback machine learning model comprises at least one reinforcement model," and accordingly, are merely more specific to the additional element. Thus, claims 7 and 17 are subject-matter ineligible.

Claim 8 depends directly or indirectly from claim 1. Claim 18 depends directly or indirectly from claim 11. The claims further recite the limitation of "[(l)] producing, by the at least one processor, a training dataset that correlates the event data with previous modifications to the average feedback attribute." The plain meaning of the term "producing" is making or creating something actively, such as selecting data for a training use.
Thus, under its broadest reasonable interpretation, "[(l)] producing" can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, is a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claims also recite the limitation "[(m)] training, by the at least one processor, the feedback machine learning model based at least in part on the training dataset," which is the use of a generic computer component (the feedback machine learning model) to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application under Step 2A Prong Two, nor amounts to significantly more than the abstract idea under Step 2B. Thus, claims 8 and 18 are subject-matter ineligible.

Claim 9 depends from claim 1. Claim 19 depends from claim 11. The claims recite more details or specifics of the abstract idea of "[(e)] generating . . . a feedback probability distribution," namely "wherein the feedback probability distribution [(e.4)] comprises a normal distribution," and accordingly, are merely more specific to the abstract idea. The additional elements of the claims do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a) – (h)), and thus the claims recite no more than the abstract idea. Thus, claims 9 and 19 are subject-matter ineligible.

Claim 10 depends from claim 1. The claim recites more details or specifics of the abstract idea of "[(e)] generating . . . a feedback probability distribution," namely "wherein the feedback probability distribution [(e.4)] comprises a gamma distribution," and accordingly, is merely more specific to the abstract idea.
The additional elements of the claim do not serve to integrate the abstract idea into a practical application, (see MPEP § 2106.04(d)), nor do the additional elements amount to significantly more than the abstract idea, (MPEP § 2106.05 sub I; see also MPEP § 2106.05(a) – (h)), and thus the claim recites no more than the abstract idea. Thus, claim 10 is subject-matter ineligible.

Claim 20 recites a method, which is a process, and thus one of the statutory categories of patentable subject matter. (35 U.S.C. § 101). However, the claim recites the limitations of "[(c)] generating, by the at least one processor, a feedback probability distribution," and "[(d)] generating, by the at least one processor, a new event feedback for at least one new event." The activities of "[(c), (d)] generating" are limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions, and accordingly, are a mental process, (MPEP § 2106.04(a)(2) sub III), which is one of the groupings of abstract ideas. (MPEP § 2106.04(a)(2)). The claim recites more details or specifics of the abstract idea of "[(c)] generating . . . a feedback probability distribution," that is "based at least in part on: [(c)] vii) the at least one new event attribute, [(c)] viii) the average feedback attribute, and [(c)] ix) the average feedback variability attribute of the feedback data entry," and of the abstract idea of "[(d)] generating, by the at least one processor, a new event feedback" that is "based at least in part on: [(d)] v) the at least one new event attribute, and [(d)] vi) at least one random selection from the feedback probability distribution," and accordingly, are respectively merely more specific to the abstract idea. Accordingly, claim 20 recites an abstract idea.
Under Step 2A Prong Two, the claim as a whole is not integrated into a practical application, because the additional elements recited in the claim beyond the identified judicial exception include "at least one processor," which is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not serve to integrate the abstract idea into a practical application. The claim also recites the limitation of "[(a)] receiving, by the at least one processor, at least one new event indication associated with a user profile," which is a pre-solution insignificant extra-solution activity of mere data gathering, (MPEP § 2106.05(g)), that does not serve to integrate the abstract idea into a practical application. The claim also recites the limitation of "[(e)] updating, by the at least one processor, the user profile with a feedback score indicative of the new event feedback," which is a post-solution insignificant extra-solution activity of outputting data, (MPEP § 2106.05(g)), that does not serve to integrate the abstract idea into a practical application. The claim also recites more details or specifics of the additional element of "[(a)] receiving . . . at least one new event indication," namely "[(a.1)] wherein the at least one new event indication indicates at least one new event and at least one new event attribute of the at least one new event," and accordingly, is merely more specific to the additional element. Accordingly, claim 20 is directed to the abstract idea.

Finally, under Step 2B, the additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. The additional elements include "at least one processor," which is a generic computer component used to implement the abstract idea, (MPEP § 2106.05(f)), that does not amount to significantly more than the abstract idea.
The claim also recites the limitation of "[(a)] receiving, by the at least one processor, at least one new event indication associated with a user profile," in which the activity of "[(a)] receiving" is a well-understood, routine, and conventional activity of receiving data over a network, (MPEP § 2106.05(d) sub II.i), that does not amount to significantly more than the abstract idea. The claim also recites the limitation of "[(e)] updating, by the at least one processor, the user profile with a feedback score indicative of the new event feedback," in which the activity of "[(e)] updating" is a well-understood, routine, and conventional activity of storing information in memory, (MPEP § 2106.05(d) sub II.iv), that does not amount to significantly more than the abstract idea. The claim also recites more details or specifics of the additional element of "[(a)] receiving . . . at least one new event indication," namely "[(a.1)] wherein the at least one new event indication indicates at least one new event and at least one new event attribute of the at least one new event," and accordingly, is merely more specific to the additional element. Accordingly, claim 20 is subject-matter ineligible.

Claim Rejections - 35 U.S.C. § 102

7. The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

8. Claims 1-4, 11-14, and 20 are rejected under 35 U.S.C. § 102(a)(2) as being anticipated by US Patent 12033222 to Sivaraman et al. [hereinafter Sivaraman].
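As orientation for the element-by-element mapping that follows, the claimed flow of claims 1 and 11 (steps (a) through (g) as quoted in the § 101 analysis) can be sketched in code. This is a minimal illustration, not the Applicant's implementation: every name is invented, the "feedback machine learning model" of step (b) is stubbed out as a simple mean-and-spread calculation, and the feedback probability distribution of step (e) is assumed normal (the species recited in claim 9).

```python
import random

def predict_feedback_stats(event_data):
    # (b) stand-in for the claimed feedback machine learning model:
    # predict an average feedback attribute and an average feedback
    # variability attribute from the received event data.
    values = [e["feedback_pct"] for e in event_data]
    avg = sum(values) / len(values)
    variability = (sum((v - avg) ** 2 for v in values) / len(values)) ** 0.5
    return avg, variability

def generate_new_event_feedback(event_data, new_event, rng=random.Random(0)):
    # (a) event data has been received; (b) predict the attributes.
    avg, variability = predict_feedback_stats(event_data)
    # (c) store both attributes in a feedback data entry of the user profile.
    profile = {"feedback_entry": {"avg": avg, "variability": variability}}
    # (d)-(e) on a new event indication, build a feedback probability
    # distribution from the new event attribute and the stored attributes
    # (here: a normal distribution whose mean is shifted by a hypothetical
    # per-event attribute).
    mean = avg + new_event["attribute_shift"]
    # (f) generate the new event feedback via a random selection from
    # that distribution.
    feedback = rng.gauss(mean, variability)
    # (g) a real system would then instruct a display of this value on
    # the user's computing device.
    return profile, feedback

events = [{"feedback_pct": 40.0}, {"feedback_pct": 60.0}]
profile, fb = generate_new_event_feedback(events, {"attribute_shift": 5.0})
print(profile["feedback_entry"])  # {'avg': 50.0, 'variability': 10.0}
```

The sampling step (f) is what makes the recited feedback "variable ratio": repeated calls with the same profile yield different feedback values drawn from the same distribution.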
Regarding claims 1 and 11, Sivaraman teaches [a] method (Sivaraman 2:7-8 teaches a “method”) of claim 1, and [a] system (Sivaraman 3:46-47 teaches “a system that implements machine learning”) of claim 11, comprising: [(a)] receiving, by at least one processor, event data comprising at least one event data entry that represents at least one event (Sivaraman, Fig. 3, teaches an input valuation system [Examiner annotation in dashed-line text boxes]: [annotated Fig. 3 image reproduced in the Office Action] (Sivaraman 5:56-60 teaches “[i]nput can be provided [(that is, receiving)] from customers in the form of comments or a rating with respect to a purchased product [(that is, receiving . . . event data comprising at least one event data entry that represents at least one event)]. The value of the user input can be assessed with respect to feedback from other users including whether or not, or to what degree, they agree with the user input”); [(a.1)] wherein the at least one event data entry comprises at least one event attribute (Sivaraman 14:37-93 teaches “the identity of a particular user or features of the user captured in a user profile can be provided as input to the machine learning model [(that is, “identity” or “features” is the at least one event data entry comprises at least one event attribute)]”); [(b)] utilizing, by the at least one processor, a feedback machine learning model to predict (Sivaraman, Fig. 6, teaches machine learning to determine customer behavior [Examiner annotations in dashed-line text boxes]: [annotated Fig. 6 image reproduced in the Office Action] an average feedback attribute (Sivaraman 9:36-40 teaches “an aggregate score can be computed based on scores associated with multiple instances of user input. 
By way of example, user input scores of eighty and twenty can be averaged to produce an aggregate score [(that is, “averaged to produce an aggregate score” is an average feedback attribute)] for a user of fifty”) and an average feedback variability attribute (Sivaraman 13:51-55 teaches “a score of eighty can represent a valid and valuable input. By contrast a score of twenty can denote an invalid and unvaluable input corresponding to an outlier. Further, the score can reflect an aggregate of feedback [(that is, an average)] with respect to multiple inputs, comments, or ratings of a user [(that is, an outlier score indicating “valid and valuable” versus “invalid and unvaluable” are an average feedback variability attribute)]”) based at least in part on the event data (Sivaraman 15:59-65 teaches “learned behavior of a customer can be utilized to predict the likely reaction of others to products or services without the customer actually trying the products or services. A simple description of a product or service may be provided to a machine learning process [(that is, a feedback machine learning model)] to determine whether or not the customer would be likely to give the product or service a high rating [(that is, the provided “description” is based at least in part on the event data)]”) (Sivaraman 5:56-67 teaches that “[i]nput can be provided from customers in the form of comments or a rating with respect to a purchased product [(that is, “a purchased product” is based at least in part on the event data)]. The value of the user input can be assessed with respect to feedback from other users including whether or not, or to what degree, they agree with the user input. The value can be captured as a numerical score, or crowd guarantee score, or any other means of scoring or rating (e.g., stars, scale . . . ). 
In this manner, determination of reliability of a user, or user input, can be enabled based on feedback provided by other users [(that is, “reliability” is an average feedback variability attribute)]. In other words, the score represents a probability that a user is similar to other users, for example with respect to an opinion regarding a product or service”; Sivaraman 6:10-14 teaches the “input valuation system 106 can determine the score computed based on feedback by other users on user input of another and provide at least the score to the recommendation system 104 for further use and processing”); [(b.1)] wherein the average feedback attribute comprises an average event feedback percentage (Sivaraman 13:47-53 teaches “the score captures feedback on the input to aid in valuation of the input. In accordance with one implementation, the score can be expressed as a percentage or value out of a total of one hundred. For example, a score of eighty can represent a valid and valuable input. By contrast a score of twenty can denote an invalid and unvaluable input corresponding to an outlier”); [(b.2)] wherein the average feedback variability attribute comprises an average event feedback variability percentage (Sivaraman 9:17-23 teaches “the score can indicate whether other users agree or disagree with user input. In one instance, the score can be a number representing a percentage of other users who agree with the input and/or disagree with the input [(that is, the percentage of “other users” who agree or disagree is an average feedback variability percentage)]. 
For example, if eight other users agree with the input and two people disagree, the score can be eighty for eighty percent or eighty out of one hundred”); [(c)] generating, by the at least one processor, a feedback data entry in a user profile to store the average feedback attribute and the average feedback variability attribute in association with the user profile (Sivaraman 10:33-37 teaches “input valuation can be utilized to render a machine learning process performed by the recommendation system more efficient as well as accurate. For example, a score representing a level of agreement of other users can be linked to a user profile”; Sivaraman 10:50-54 teaches the “recommendation system 104 can essentially employ machine learning to learn behavior of users with respect to products or services based on a user profile including information regarding financial transactions, ratings, and score”); [(d)] receiving, by the at least one processor, at least one new event indication associated with the user profile (Sivaraman 6:7-10 teaches “predictions, including suggested products of interest, can be improved by including high quality data and excluding low quality data, for instance by weighting data based on score [(that is, “suggested products of interest” is receiving . . . at least one new event indication associated with the user profile)]”; Sivaraman 5:38-42 teaches “the recommendation system 104 can analyze purchase history of a user, determine information regarding the amount of money a customer spends on various products and frequent purchases, and predict and recommend bank products to the user based on the analysis”; Sivaraman 10:62-65 teaches “a prediction can be made that a user would likely give a product or service a high rating. 
Predictions that can be made based on the learned behavior can form a basis for one or more recommendations or suggestions”), [(d.1)] wherein the at least one new event indication indicates at least one new event and at least one new event attribute of the at least one new event (Sivaraman 16:22-25 teaches “aspects can be directed to a single entity. For example, a single bank can provide features to allow users to discover, evaluate and receive recommendations regarding products of the bank”); [(e)] generating, by the at least one processor, a feedback probability distribution (Sivaraman, Fig. 5, teaches a machine learning algorithm output of customer ratings of products or services 620 [Examiner annotations in dashed-line text boxes]: [annotated Fig. 5 image reproduced in the Office Action] Sivaraman 11:55-63 teaches that the “score can thus appropriately weight customer data such that if the score is high, representing a high level of agreement with other customers, the data is of higher value than others with respect to predicting future behavior of customers. By contrast, a lower score, representing a low level of agreement with other customers, can be used to reduce the value with respect to prediction. Output of the machine learning algorithm 600, in one instance, can be customer ratings of products or services 620 [(that is, the “customer rating output” is generating . . . a feedback probability distribution)]”) based at least in part on: [(e.1)] iv) the at least one new event attribute (Sivaraman 16:22-25 teaches “aspects can be directed to a single entity. 
For example, a single bank can provide features to allow users to discover, evaluate and receive recommendations regarding products of the bank [(that is, such “recommendations” have the at least one new event attribute)]”), [(e.2)] v) the average feedback attribute (see above, Sivaraman 9:36-40), and [(e.3)] vi) the average feedback variability attribute (see above, Sivaraman 13:51-55) of the feedback data entry; [(f)] generating, by the at least one processor, a new event feedback for at least one new event (Sivaraman 15:59-65 teaches that at “numeral 970 [of Fig. 9], learned behavior of a customer can be utilized to predict the likely reaction of others to products or services without the customer actually trying the products or services. A simple description of a product or service may be provided to a machine learning process to determine whether or not the customer would be likely to give the product or service a high rating [(that is, “likely to give . . . a high rating” is generating . . . a new event feedback for at least one new event)]”) based at least in part on: [(f.1)] iii) the at least one new event attribute (Sivaraman 15:62-63 teaches a “simple description of a product or service [(that is, “description” is the at least one new event attribute)] may be provided to a machine learning process”), and [(f.2)] iv) at least one random selection from the feedback probability distribution (Sivaraman 10:58 to 11:1 teaches “reaction from a user to a product or service, which the user has not tried or a potential future product or service, can be determined based on a description of the product or service, and learned behavior of the user. For example, a prediction can be made that a user would likely give a product or service a high rating [(that is, generating . . . a new event feedback for at least one new event )]. Predictions that can be made based on the learned behavior can form a basis for one or more recommendations or suggestions. 
For example, a prediction that indicates a user will respond positively to a product or service can be utilized as a basis to recommend the product or service to the user”); and [(g)] instructing, by the at least one processor, to display the new event feedback for the at least one new event indication on a computing device associated with the user profile (Sivaraman, claim 1, teaches to “convey, for display on a device and via a communication connection, the first recommendation in the electronic social network [(that is, instructing . . . to display the new event feedback for the at least one new event indication on a computing device associated with the user profile)]”). Regarding claims 2 and 12, Sivaraman teaches all of the limitations of claims 1 and 11, as described above in detail. Sivaraman teaches - determining, by the at least one processor, a target variability of the average feedback attribute (Sivaraman 6:65 to 7:7 teaches “the financial social network can identify other users that purchase the product or service and user input comprising a review of the product or service [(that is, “review of the product or service” is the average feedback attribute)]. In accordance with one aspect, identification of other users 110 that purchased the product or service can be filtered based on a score representing agreement with the input provided by a number of other users of the social network. For example, the score can be compared to a predetermined threshold capturing a baseline for trustworthiness or agreement associated with validating the user input or user [(that is, a “baseline” is determining . . . a target variability of the average feedback attribute)]”) comprising at least one of: a maximum feedback rate, a minimum feedback rate, or a number of standard deviations (Sivaraman 10:36-44 teaches “a score representing a level of agreement of other users can be linked to a user profile. 
Rather than employing all data regarding a product or service, data can be restricted to users with scores above a predetermined threshold [(that is, a number of standard deviations)]. In this manner, the data of a subset of users [(that is, a number)] can be employed as representative of other users. Consequently, the amount of data that is utilized to generate and train a model is reduced and outliers are eliminated causing the predictions to be more accurate”). Regarding claims 3 and 13, Sivaraman teaches all of the limitations of claims 2 and 12, respectively, as described above in detail. Sivaraman teaches - generating, by the at least one processor, a user-specific variable rate feedback record linked to the user profile (Sivaraman 10:36-37 teaches “a score representing a level of agreement of other users can be linked to a user profile [(that is, linked to the user profile)]”; Sivaraman 9:18-25 teaches a “score can be a number representing a percentage of other users who agree with the input and/or disagree with the input [(that is, “percentage agree and/or disagree” is a user-specific variable rate feedback record)]. For example, if eight other users agree with the input and two people disagree, the score can be eighty for eighty percent or eighty out of one hundred. 
The score can thus be termed a validation score associated with determining whether or not user input is valid in the view of other users [(that is, the “validation score” is generating a user-specific variable rate feedback record linked to the user profile)]”), wherein the user-specific variable rate feedback record comprises the average feedback attribute (see above, Sivaraman 9:36-40 (average feedback attribute)) and a target variability attribute specifying the target variability (Sivaraman 10:38-41 teaches “data can be restricted to users with scores above a predetermined threshold [(that is, a “predetermined threshold” is a target variability attribute specifying the target variability)]. In this manner, the data of a subset of users can be employed as representative of other users”). Regarding claims 4 and 14, Sivaraman teaches all of the limitations of claims 1 and 11, respectively, as set out above in detail. Sivaraman teaches - [wherein the average feedback attribute comprises at least one of:] a target frequency comprising an average frequency of applying the new event feedback in response to the at least one new event, or a target feedback quantity comprising an average quantity of the new event feedback in response to the at least one new event (Sivaraman 10:37-44 teaches “[r]ather than employing all data regarding a product or service, data can be restricted to users with scores above a predetermined threshold [(that is, the “predetermined threshold” is a target feedback quantity of the new event feedback in response to the at least one new event)]. In this manner, the data of a subset of users can be employed as representative of other users. Consequently, the amount of data that is utilized to generate and train a model is reduced and outliers are eliminated causing the predictions to be more accurate [(that is, “predictions” are the new event feedback in response to the at least one new event)]”). Claim Rejections – 35 U.S.C. § 103 9. 
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 10. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 11. This application currently names joint inventors. In considering patentability of the claims the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention. 12. Claims 5-8 and 15-18 are rejected under 35 U.S.C. § 103 as being unpatentable over US Patent 12033222 to Sivaraman et al. [hereinafter Sivaraman] in view of US Published Application 20190205402 to Sernau et al. [hereinafter Sernau]. 
Regarding claims 5 and 15, Sivaraman teaches all of the limitations of claims 1 and 11, respectively, as described above in detail. Though Sivaraman teaches generating a recommendation conveyed to users of the social network based on the score and user profile of the user associated with the score; Sivaraman, however, does not explicitly teach - further comprising: determining, by the at least one processor, at least one engagement metric measuring user engagement based at least in part on the at least one new event; comparing, by the at least one processor, the at least one engagement metric with at least one threshold engagement value; and determining, by the at least one processor, a modification to the average feedback attribute based at least in part on comparing the at least one engagement metric with at least one threshold engagement value. But Sernau teaches - further comprising: determining, by the at least one processor, at least one engagement metric measuring user engagement based at least in part on the at least one new event (Sernau ¶ 0026 teaches “the machine-learning model 250 may also take as input user data associated with the user for whom the content item was presented [(that is, “content item presented” is inherently based at least in part on the at least one new event)]. In particular embodiments, each training sample in the training data set may include (1) a training content item (which may correspond to the training content items used for training the first machine-learning model 230) with attributes of the custom types, (2) user data associated with a user to whom the training content item was presented, and (3) a ranking metric (e.g., whether the user clicked or downloaded the content item) [(that is, determining . . . 
at least one engagement metric measuring user engagement based at least in part on the at least one new event)] that represents how the training content item should be ranked (e.g., this may be considered as the ground truth or label of the training sample)”); comparing, by the at least one processor, the at least one engagement metric with at least one threshold engagement value (Sernau ¶ 0036 teaches “the system may compare the ranking score of the content item with the ranking scores of other content items, respectively, to determine which of the content items have the highest-ranking scores. As another example, the system may determine whether the ranking score generated by the third machine-learning model is above a certain predetermined threshold value [(that is, comparing . . . the at least one engagement metric with at least one threshold engagement value)]”); and determining, by the at least one processor, a modification to the average feedback attribute based at least in part on comparing the at least one engagement metric with at least one threshold engagement value (Sernau ¶ 0021 teaches a “ranking system 200 may further take into consideration context information surrounding a particular ranking request or the context in which content item is to be displayed. . . . [A] social-networking platform may identify posts, newsfeeds, or videos that the user recently viewed or engaged with (e.g., by commenting, “liking,” sharing, etc.) [(that is, to “identify . . . recently viewed or engaged with” is determining . . . a modification to the average feedback attribute)]. Such user context information may be used by the ranking system 200 to predict which of the content items 211 would likely be of interest to the user [(that is, “recently viewed or engaged” inherently is based at least in part on comparing the at least one engagement metric with at least one threshold engagement value)]”). Sivaraman and Sernau are from the same or similar field of endeavor. 
Sivaraman teaches soliciting feedback from other users of a social network regarding the input, and a score can be generated for a user that represents a level of agreement of the other users. Sernau teaches that overall affinity may change based on continued monitoring of the actions or relationships associated with the social-graph entity, where affinity measures the strength of a relationship or level of interest between particular objects associated with the online social network, such as users, concepts, content, actions, advertisements, or other objects associated with the online social network. Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Sivaraman pertaining to a user and others’ feedback with the ranking metric and affinity changes of Sernau. The motivation to do so is because “[a] ranking system may need to be sufficiently robust to handle content from a variety of sources.” (Sernau ¶ 0012). Regarding claims 6 and 16, the combination of Sivaraman and Sernau teaches all of the limitations of claims 5 and 15, as described above in detail. Sernau teaches - further comprising utilizing, by the at least one processor, the feedback machine learning model to determine the modification to the average feedback attribute based at least in part on model parameters and the at least one engagement metric (Sernau ¶ 0060 teaches “social-networking system 460 may measure or quantify social-graph affinity using an affinity coefficient (which may be referred to herein as “coefficient”). 
The coefficient may represent or quantify the strength of a relationship between particular objects associated with the online social network”; Sernau ¶ 0061 teaches “social-networking system 460 may determine [affinity] coefficients using machine-learning algorithms trained on historical actions and past user responses, or data farmed from users by exposing them to various options and measuring responses [(that is, an “affinity coefficient” is utilizing . . . the feedback machine learning model to determine the modification to the average feedback attribute based at least in part on model parameters and the at least one engagement metric)]”). Regarding claims 7 and 17, the combination of Sivaraman and Sernau teaches all of the limitations of claims 6 and 16, respectively, as described above in detail. Sivaraman teaches - wherein the feedback machine learning model comprises at least one reinforcement model (Sivaraman 10:7-9 teaches “[a]ny number of types of models can be generated including supervised learning, unsupervised learning, and reinforcement learning types [(that is, the feedback machine learning model comprises at least one reinforcement model)]”). Regarding claims 8 and 18, the combination of Sivaraman and Sernau teaches all of the limitations of claims 6 and 16, respectively, as described above in detail. Sivaraman teaches - further comprising: producing, by the at least one processor, a training dataset that correlates the event data with previous modifications to the average feedback attribute (Sivaraman 10:5-6 teaches a “machine learning model learns on its own from experience [(that is, “experience” is inherently previous modifications to the average feedback attribute)]”; Sivaraman 10:19-21 teaches “the training component 420 provides training data to a machine learning model comprising data and a label describing the data [(that is, producing . . . 
a training dataset that correlates the event data with previous modifications to the average feedback attribute)]”); and training, by the at least one processor, the feedback machine learning model based at least in part on the training dataset (Sivaraman 10:17-26 teaches a “training component 420 is configured to train a model produced by the generation component 410. In a supervised learning context, the training component 420 provides training data to a machine learning model comprising data and a label describing the data. The training data can be utilized by the model to automatically learn from the training data. . . . In one instance, a portion of training data can be withheld with respect to training and instead utilized to evaluate performance and fine tune machine learning parameters [(that is, training . . . the feedback machine learning model based at least in part on the training dataset)]”). 13. Claims 9 and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over US Patent 12033222 to Sivaraman et al. [hereinafter Sivaraman] in view of US Published Application 20140207718 to Bento Ayres Pereira et al. [hereinafter Pereira]. Regarding claims 9 and 19, Sivaraman teaches all of the limitations of claims 1 and 11, respectively, as described above in detail. Though Sivaraman teaches an output probability distribution; Sivaraman, however, does not explicitly teach - wherein the feedback probability distribution comprises a normal distribution. But Pereira teaches - wherein the feedback probability distribution comprises a normal distribution (Pereira ¶ 0071 teaches “all user/movie pairs (ii) in the training set. The distribution seems to be well approximated by a normal distribution, FIG. 5 (b) shows the distribution of residuals for a single user (user with ID 56094 in the training set). 
This still roughly agrees with a Gaussian distribution, although not as closely as for the overall distribution [(that is, a normal distribution)]”; [Examiner notes that a normal distribution is also known as the Gaussian distribution, which is a continuous probability distribution that is symmetric about the mean]). Sivaraman and Pereira are from the same or similar field of endeavor. Sivaraman teaches soliciting feedback from other users of a social network regarding the input, and a score can be generated for a user that represents a level of agreement of the other users as a feedback distribution. Pereira teaches identifying contextual information of a group of users, gathering user access data of the users on the basis of the contextual information of the group of users, analyzing temporal information of the user access data, and identifying particular users in the group of users on the basis of the analyzed temporal information and the contextual information. Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Sivaraman pertaining to a user feedback probability distribution of a machine learning model with the Gaussian distribution (normal distribution output) of Pereira. The motivation to do so is because “incorporation of contextual information is likely to play an ever-increasing role in recommendation systems because of the broad availability of such information, and the need for more accurate systems.” (Pereira ¶ 0004). 14. Claim 10 is rejected under 35 U.S.C. § 103 as being unpatentable over US Patent 12033222 to Sivaraman et al. [hereinafter Sivaraman] in view of Gharibshah et al., “User Response Prediction in Online Advertising,” arXiv (2021) [hereinafter Gharibshah]. Regarding claim 10, Sivaraman teaches all of the limitations of claim 1, as described above in detail. 
Though Sivaraman teaches an output probability distribution; Sivaraman, however, does not explicitly teach - wherein the feedback probability distribution comprises a gamma distribution. But Gharibshah teaches - wherein the feedback probability distribution comprises a gamma distribution (Gharibshah p. 11, “4.1.2 Representative Hierarchy based CTR Prediction Frameworks,” third paragraph, teaches that “to reduce the variance made by the sparse clicks and/or impressions [(that is, “clicks and/or impressions” are feedback)], a sampling approach is used to alleviate the rarity issue via negative sampling of majority class, i.e. webpages without a click response. To control the effect of the bias made by sampling, a two-step method is used to predict the click-through rate. In the first step, a maximum entropy model [(that is, the output distribution of the “maximum entropy model” is a gamma distribution)] . . . [the estimate it] produces is optimized based on an iterative proportional fitting method to estimate the actual number of impressions at all defined levels in the hierarchical structure”; [Examiner notes that a gamma distribution is the maximum entropy probability distribution for a random variable X]). Sivaraman and Gharibshah are from the same or similar field of endeavor. Sivaraman teaches soliciting feedback from other users of a social network regarding the input, and a score can be generated for a user that represents a level of agreement of the other users as a feedback distribution. Gharibshah teaches a comprehensive review of user response prediction in online advertising and related recommender applications having an essential goal of a thorough understanding of online advertising platforms, stakeholders, data availability, and typical ways of user response prediction. 
Thus, it would have been obvious to a person having ordinary skill in the art as of the effective filing date of the Applicant’s invention to modify Sivaraman pertaining to a user feedback probability distribution of a machine learning model with the maximum entropy model (gamma distribution output) of Gharibshah. The motivation to do so is to “control the effect of the bias made by sampling by a two-step method to predict the click-through rate [including] a maximum entropy model.” (Gharibshah p. 11, “4.1.2 Representative Hierarchy based CTR Prediction Frameworks,” third paragraph). Conclusion 15. The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure: (US Published Application 20180293488 to Dang et al.) teaches predicted ratings may be provided to marketing organizations or other entities that seek to generate personalized recommendations for individuals. The predicted ratings may enable more accurate and/or more deeply personalized recommendations to be generated for individuals, with respect to goods and/or services to be offered, travel destinations to be suggested, suggested media consumption options, suggested locations to visit, and so forth. (Tanrisevdi et al., “A supervised data mining approach for predicting comment card ratings,” Emerald Publishing Limited (Jan 2022)) teaches a novel overall rating prediction technique allowing hotel management to improve their operations by evaluating guest feedback through HCCs more effectively and quickly. 16. Any inquiry concerning this communication or earlier communications from the Examiner should be directed to KEVIN L. SMITH whose telephone number is (571) 272-5964. Normally, the Examiner is available on Monday-Thursday 0730-1730. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s supervisor, KAKALI CHAKI, can be reached at 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.L.S./
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122
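A note on the examiner’s statistical aside in the §103 rejection (“a gamma distribution is the maximum entropy probability distribution for a random variable X”): the statement only holds under specific constraints, which the Office Action leaves implicit. The standard result (general probability theory, not taken from the cited references) can be stated as:

```latex
\textbf{Claim.} Among all probability densities $f$ supported on $(0,\infty)$
with fixed mean $\mathbb{E}[X]=\mu$ and fixed log-moment $\mathbb{E}[\ln X]=c$,
the differential entropy $h(f) = -\int_0^\infty f(x)\,\ln f(x)\,dx$ is
maximized by the gamma density
\[
  f(x) \;=\; \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x},
  \qquad x > 0,
\]
with shape $\alpha>0$ and rate $\beta>0$ chosen so that
$\alpha/\beta = \mu$ and $\psi(\alpha) - \ln\beta = c$, where $\psi$ denotes
the digamma function. Without the $\mathbb{E}[\ln X]$ constraint, the
maximum-entropy distribution on $(0,\infty)$ with fixed mean is instead the
exponential distribution.
```

This distinction may matter in responding to the rejection, since “maximum entropy model” in the CTR literature does not by itself imply a gamma-distributed output.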

Prosecution Timeline

Aug 30, 2022
Application Filed
Feb 22, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591815
METHOD AND SYSTEM FOR UPDATING MACHINE LEARNING BASED CLASSIFIERS FOR RECONFIGURABLE SENSORS
2y 5m to grant Granted Mar 31, 2026
Patent 12585917
REINFORCEMENT LEARNING USING ADVANTAGE ESTIMATES
2y 5m to grant Granted Mar 24, 2026
Patent 12547759
PRIVACY PRESERVING MACHINE LEARNING MODEL TRAINING
2y 5m to grant Granted Feb 10, 2026
Patent 12530613
SYSTEMS AND METHODS FOR PERFORMING QUANTUM EVOLUTION IN QUANTUM COMPUTATION
2y 5m to grant Granted Jan 20, 2026
Patent 12518214
DISTRIBUTED MACHINE LEARNING SYSTEMS INCLUDING GENERATION OF SYNTHETIC DATA
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
37%
Grant Probability
55%
With Interview (+18.0%)
4y 8m
Median Time to Grant
Low
PTA Risk
Based on 134 resolved cases by this examiner. Grant probability derived from career allow rate.
