Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is in response to an AMENDMENT entered December 16, 2025 for the patent application 18/612,119.
Status of Claims
Claims 1 – 3, 6 - 13 and 16 - 20 are pending in the application.
Claims 1, 11 and 20 are currently amended in the application.
Claims 4, 5, 14 and 15 are cancelled in the application without prejudice or disclaimer.
Response to Arguments
Examiner would like to point out that the Supreme Court in KSR International Co. v. Teleflex Inc. described seven rationales to support rejections under 35 U.S.C. 103:
Combining prior art elements according to known methods to yield predictable results;
Simple substitution of one known element for another to obtain predictable results;
Use of known technique to improve similar devices (methods, or products) in the same way;
Applying a known technique to a known device (method, or product) ready for improvement to yield predictable results;
“Obvious to try” – choosing from a finite number of identified, predictable solutions, with a reasonable expectation of success;
Known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations would have been predictable to one of ordinary skill in the art; and
Some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention.
Prior art is not limited just to the references being applied, but includes the understanding of one of ordinary skill in the art. The prior art reference (or references when combined) need not teach or suggest all the claim limitations; however, Office personnel must explain why the difference(s) between the prior art and the claimed invention would have been obvious to one of ordinary skill in the art. The “mere existence of differences between the prior art and an invention does not establish the invention’s nonobviousness.” See Dann v. Johnston, 425 U.S. 219, 230 (1976).
Applicant's arguments filed with an Amendment on December 16, 2025 have been fully considered but they are not persuasive.
Applicant Argument:
“Applicant next directs the Examiner's attention to the discussion of "certain methods of organizing human activity" in the M.P.E.P, which states that "not all methods of organizing human activity are abstract ideas." …Applicant next directs the Examiner's attention to the Federal Circuit's decision in DDR Holdings, LLC v. Hotels.com (hereinafter, DDR).“, (see page 12 of the Remarks).
Examiner’s Response:
Examiner respectfully disagrees. The 2019 Revised Patent Subject Matter Eligibility Guidance states that one category of abstract ideas is a "Fundamental Economic Practice," which covers concepts relating to the economy and commerce, such as agreements between people in the form of contracts, legal obligations, and business relations. A method, system and apparatus for automatically recommending personalized estimates of amounts of time needed to complete a task falls under the category of concepts relating to business relations and commerce. Abstract ideas are not limited to ideas that may be characterized as economic principles; nor are abstract ideas limited to the examples set forth in Alice, i.e., fundamental economic practices, certain methods of organizing human activities, an idea of itself, and mathematical relationships or formulae. See Alice, 134 S. Ct. at 2350, 2356. The additional elements of the peer-to-peer terminals are being used as tools to implement the identified abstract idea. The Federal Circuit's ruling interpreting Alice in DDR Holdings, LLC v. Hotels.com, L.P. held that if claims are directed to a business challenge particular to the Internet and/or are necessarily rooted in computer technology, they might survive the Alice two-step analysis. There, the patents were directed to “generating a composite web page that combines certain visual elements of a 'host' website with content of a third-party merchant." Effectively, the claimed invention allowed a host website to retain visitors when they clicked on a merchant's advertisement rather than being taken to that merchant's website. The court found the claims patent eligible under Section 101, in part, because the claims recited "a specific way to automate the creation of a composite web page by an 'outsource provider' that incorporates elements from multiple sources in order to solve a problem faced by websites on the Internet."
The court stated that although the claims address a business challenge (retaining website visitors), it is a challenge particular to the Internet. This is not the case with the claims of this application. Improving a technological field, computer functionality or technology does not include using generic computer elements to do more efficiently what could otherwise be done by hand (SiRF). Further, where nothing in the claims or specification provides any detail about how the computer functions are performed, one may presume those functions can be performed by any generic computer with conventional programming (TriPlay).
Applicant Argument:
“While Applicant believes the claims do not recite an abstract idea, Applicant asserts that any alleged abstract idea is integrated into a practical application.“, (see page 13 of the Remarks).
Examiner’s Response:
Examiner respectfully disagrees. Step 2A requires a two-prong analysis: Prong One asks whether the claim recites an abstract idea, and Prong Two asks whether the claim is directed to that abstract idea or instead integrates it into a practical application. Under Prong Two there must be a technological improvement (integration of the abstract idea into a practical application). There is no technological improvement here because a technical problem is not being solved; rather, a financial problem is being solved, as stated in Applicant’s specification at paras. [0017] - [0019], [0027] and [0048]. Under Step 2A, for the abstract idea to be integrated into a practical application, it must be shown that a technological solution is being carried out by the claimed subject matter. The problem being solved is not technological in nature, but financial. An improved abstract idea is still an abstract idea. Since no technological improvement is provided by the claims, there is no integration into a practical application. The claims are merely a solution to a financial problem and are not technical in nature.
Applicant Argument:
“Step 2B - The Claims Recite an Inventive Concept that Amounts to Significantly More
than an Abstract Idea…. With respect to BASCOM“, (see page 17 of the Remarks).
Examiner’s Response:
Examiner respectfully disagrees. The claims do not include, individually or as an ordered combination, limitations that are “significantly more” than the abstract idea itself. This includes analysis as to whether there is an improvement to either the “computer itself,” "another technology,” or the "technical field,” or significantly more than what is “well-understood, routine, or conventional” in the related arts. The USPTO 2019 Revised Patent Subject Matter Eligibility Guidance lists, in its tables and appendices, examples of what has been considered by the Courts to be generic computer functioning and not “significantly more” than what is well-understood, routine, and conventional in the field of endeavor. These examples, a non-exhaustive illustrative list, constitute the basis for the findings below. In step two, as per Alice, consideration of the elements of each claim both individually and “as an ordered combination” is set forth to determine whether the additional elements “transform the nature of the claim” into a patent-eligible application. Alice, 134 S. Ct. at 2355. In Bascom, the filtering process was directed toward solving a logistical problem that went beyond simple filtering: the problem was not with the data, but with locating a tool for filtering Internet content on each local computer, where each tool on an individual computer was subject to being modified or thwarted by computer-literate end users, and where installing a specific tool on every end-user device was to be avoided. The claimed limitations here are not directed toward solving any problem that goes beyond simply filtering and discarding data according to parameters. Bascom identified that filtering data was an abstract concept. The ordered combination is not directed toward an improvement of a technology and does not solve a technical problem. The Federal Circuit focused on the particular arrangement of generic and conventional components in the claims.
They further stated: "An inventive concept that transforms the abstract idea into a patent-eligible invention must be significantly more than the abstract idea itself, and cannot be simply an instruction to implement or apply the abstract idea on a computer." Slip Op. at 14. The court further stated on page 15: "The inventive concept described and claimed in the '606 patent is the installation of a filtering tool at a specific location, remote from the end-users, with customizable filtering features specific to each end user." On page 18, Bascom states, “The ‘606 patent is instead claiming a technology-based solution (not an abstract idea-based solution implemented with general technical components in a conventional way) to filter content on the Internet that overcomes existing problems with other Internet filtering systems.”
Re: Claim 1, the applicant asserts that cited prior art do not teach – “providing a plurality of time-to-complete (TTC) models trained at different quantile levels to a machine learning model configured to select one TTC model of the plurality of TTC models as a selected TTC model to generate an estimated amount of time needed for a current user to complete the task.“, (see page 19 of the Remarks).
The Examiner respectfully disagrees; Rezaeian does disclose “providing a plurality of time-to-complete (TTC) models trained at different quantile levels to a machine learning model configured to select one TTC model of the plurality of TTC models as a selected TTC model to generate an estimated amount of time needed for a current user to complete the task.“ at (Rezaeian, [0102] - At block 850, the learner system may identify a plurality of selection models. Each selection model of the plurality of selection models may be associated with a protocol for selecting one or more tasks from the set of tasks. For example, the protocol may be or may include the execution of a multi-armed bandit algorithm or model that, when executed, automatically selects an incomplete subset of tasks from the set of available tasks. An accuracy or performance value may be determined for each selection model of the plurality of selection models. For example, an accuracy may be determined on whether or not a user selects a task from the incomplete subset of tasks that are presented on the intelligent UI. If a task from the incomplete subset of tasks is selected by a user during a time period (e.g., during a session in which the user is navigating the intelligent UI, over an hour, over a day, over a week, over a month, etc.), then a feedback signal may be transmitted back to the learner system to indicate that the multi-armed bandit model accurately predicted that the user would select that task. If a task from the incomplete subset of tasks is not selected over the time period (the same or a different time period), then the multi-armed bandit model that selected that task did not accurately predict that the user would complete that task.), with further emphasis at (Rezaeian, [0058], [0059] - The pair-wise quantitative comparative assessment can include, for example, generating a similarity metric using the representative messages and determining whether the metric exceeds a threshold metric (e.g., that is predefined, a default number, or identified by a user). The similarity metric may be based on (for example) whether the representative messages include a same (or similar) number of components, number of variable (or non-variable) components, content of each of one or more non-variable components, characteristic (e.g., format, character type or length) of one or more variable components, and so on. The similarity metric may be based on generating a correlation coefficient between the inter-cluster messages or by performing a clustering technique using a larger set of messages to an extent to which representative messages of the clusters are assigned to a same cluster or share components (e.g., if a technique includes using a component analysis, such as principal component analysis or independent component analysis.).
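For illustration only, the multi-armed bandit selection-and-feedback protocol that Rezaeian [0102] describes can be sketched as an epsilon-greedy selector. All names, the epsilon-greedy strategy, and the reward scheme below are assumptions for illustration; this is not code from Rezaeian:

```python
import random

class MultiArmedBanditSelector:
    """Illustrative epsilon-greedy bandit: selects an incomplete subset of
    tasks to present and updates per-task accuracy estimates from user
    feedback signals (hypothetical sketch, not Rezaeian's implementation)."""

    def __init__(self, tasks, epsilon=0.1, seed=0):
        self.tasks = list(tasks)
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # Running estimate of how often each presented task is selected.
        self.value = {t: 0.0 for t in self.tasks}
        self.count = {t: 0 for t in self.tasks}

    def select(self, k):
        """Pick an incomplete subset of k tasks to present on the UI."""
        if self.rng.random() < self.epsilon:
            return self.rng.sample(self.tasks, k)   # explore
        ranked = sorted(self.tasks, key=lambda t: self.value[t], reverse=True)
        return ranked[:k]                           # exploit

    def feedback(self, task, selected_by_user):
        """Feedback signal: reward 1.0 if the user picked the presented
        task during the time period, else 0.0 (incremental mean update)."""
        reward = 1.0 if selected_by_user else 0.0
        self.count[task] += 1
        self.value[task] += (reward - self.value[task]) / self.count[task]
```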
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim(s) 1 – 3, 6 - 13 and 16 - 20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1 – 3, 6 - 13 and 16 - 20 are directed to a method, a system, or a computer readable medium, which are statutory categories of invention. (Step 1: YES).
The Examiner has identified method claim 1 as the claim that represents the claimed invention for analysis; it is similar to system claim 11 and computer-readable-medium claim 20. Claim 1 recites the limitations of:
( A ) providing a plurality of time-to-complete (TTC) models trained at different quantile levels to a machine learning model configured to select one TTC model of the plurality of TTC models as a selected TTC model to generate an estimated amount of time needed for a current user to complete the task;
( B ) obtaining the estimated amount of time from the selected TTC model;
( C ) obtaining feedback data regarding the estimated amount of time obtained from the selected TTC model, wherein the feedback data indicates whether the current user completed the task; and
( D ) training the machine learning model based, at least in part, on the feedback data, wherein the training comprises:
in response to the feedback data indicating the current user completed the task, training the machine learning model comprises incrementing a confidence variable associated with the machine learning model; or
in response to the feedback data indicating the current user did not complete the task, training the machine learning model comprises decrementing the confidence variable associated with the machine learning model.
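Limitation ( D )'s increment/decrement training step can be sketched as follows. This is a minimal illustration under the assumption that the feedback data arrives as a flag named `completed`; it is not the application's actual implementation:

```python
def train_on_feedback(confidence, feedback_data):
    """Sketch of limitation (D): adjust the machine learning model's
    confidence variable based on whether the feedback data indicates
    that the current user completed the task."""
    if feedback_data["completed"]:
        # User completed the task: increment the confidence variable.
        return confidence + 1
    # User did not complete the task: decrement the confidence variable.
    return confidence - 1
```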
These limitations, without the bolded limitations above, cover performance of the limitations as certain methods of organizing human activity under their broadest reasonable interpretation.
More specifically, these limitations cover performance of the limitations as a fundamental economic practice.
In summary, because the claim 1 limitations, under their broadest reasonable interpretation, cover performance of a fundamental economic practice, they fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. Claims 11 and 20 are also abstract for similar reasons. (Step 2A-Prong 1: YES. The claims are abstract).
The use of the machine learning model or any of the bolded limitations in claim 1 merely applies generic computer components to the recited abstract limitations. Similar arguments apply to claims 11 and 20.
Therefore, the above mentioned judicial exception is not integrated into a practical application by merely applying generic computer components (bolded elements).
Furthermore, the “providing” and “obtaining” steps are recited at a high level of generality and amount to mere data gathering/transmitting, which are forms of insignificant extra-solution activity (See MPEP 2106.05(g): CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375 (Fed. Cir. 2011); and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015)).
In addition, as supported by the specification, the computer hardware is recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f), where applying a computer or using a computer is not indicative of a practical application).
Claim 1, limitations ( A ) – ( C ) above are supported in Applicant’s specification at para. [0025], which discloses “The computing environment includes a training pipeline 110. The training pipeline 110 includes aspects related to generating training data and then using that training data to train a plurality of time-to-complete (TTC) models 118 that, once trained, may be provided as inputs to a machine learning model 112. For example, the processes described with respect to the training pipeline 110 may be performed by a model training component running on one or more physical computing devices (e.g., the same as or separate from the server 104). Furthermore, the training pipeline 110 may represent operations that are performed initially in order to generate the machine learning model 112. Additionally, the training pipeline 110 may represent operations that are performed iteratively over time, such as in real-time and/or at regular intervals, to re-train the machine learning model 112.“.
Also, claim 1, limitations ( B ) and ( D ) above are supported in Applicant’s specification at para. [0053], which discloses “Operation 302 includes providing a plurality of time-to-complete (TTC) models (e.g., TTC models 118 illustrated in FIG. 2) trained at different quantile levels to a machine learning model (e.g., machine learning model 112 illustrated in FIG. 2) configured to select one TTC model of the plurality of TTC models as a selected TTC model to generate an estimated amount of time needed for a current user to complete the task. For instance, the plurality of TTC models trained at the different quantile levels may include a first TTC model trained at a first quantile level (e.g., 50% quantile level) to estimate an average time needed to complete the task, a second TTC model trained at a second quantile level (e.g., top 25% quantile level) to estimate an above-average time needed to complete the task, and a third TTC model trained at a third quantile level (e.g., top 70% quantile level) to estimate a below-average time needed to complete the task.“. Similar arguments apply to claims 11 and 20.
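The quantile-level TTC models described in para. [0053] can be illustrated with a simplified stand-in in which each “model” is just the nearest-rank empirical quantile of historical completion times. This is a hypothetical simplification for illustration; the function and parameter names are assumed, not taken from the application:

```python
def train_quantile_ttc_models(historical_times, quantile_percents=(25, 50, 70)):
    """Illustrative stand-in for TTC models 'trained at different quantile
    levels': each 'model' is the nearest-rank empirical quantile of
    historical completion times at its level."""
    ordered = sorted(historical_times)
    n = len(ordered)
    models = {}
    for p in quantile_percents:
        # Nearest-rank method: the p-th percentile is the ceil(p*n/100)-th
        # smallest observation (integer arithmetic avoids float rounding).
        rank = max((p * n + 99) // 100, 1)
        models[p] = ordered[min(rank, n) - 1]
    return models
```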
Accordingly, these additional elements, when considered separately and as an ordered combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Therefore, claims 1, 11 and 20 are directed to an abstract idea without a practical application. (Step 2A-Prong 2: NO. The additional claimed elements are not integrated into a practical application).
The claims 1, 11 and 20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an “inventive concept”) to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements (bolded elements above) amount to no more than mere instructions to apply the abstract idea using generic computer components. In conclusion, merely "applying" the exception using generic computer components cannot provide an inventive concept. Therefore, the claims 1, 11 and 20 are not patent eligible under 35 USC 101. (Step 2B: NO. The claims do not provide significantly more).
Dependent Claims
Dependent claims 2 – 3, 6 – 10, 12, 13 and 16 - 19 are also rejected under 35 U.S.C. 101. These claims further define the abstract idea or further define the extra-solution activities that are present in independent claim 1; thus the abstract idea corresponds to certain methods of organizing human activity as presented above. Claims 2 – 3, 6 – 10, 12, 13 and 16 - 19 clearly further define the abstract idea as stated above and further define extra-solution activities such as presenting data and transmitting/receiving data.
Furthermore, dependent claims 2 – 3, 6 – 10, 12, 13 and 16 - 19 do not include any additional elements that integrate the abstract idea into a practical application or are sufficient to amount to significantly more than the judicial exception when considered both individually and as an ordered combination.
Regarding claims 2 and 12, these claims merely recite additional steps that amount to no more than insignificant extra-solution activity. Specifically, claim 2 states “wherein the feedback data comprises an actual amount of time the current user took to complete the task.” These steps amount to no more than mere data gathering/analysis, which is a form of insignificant extra-solution activity (See MPEP 2106.05(g): CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375 (Fed. Cir. 2011); and OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015)). Such limitations do not integrate the abstract idea into a practical application, or amount to significantly more than the abstract idea, because the courts have found the concept of data gathering to be well-understood, routine, and conventional activity (See MPEP 2106.05(d): OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363 (Fed. Cir. 2015); and buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014)). Similar arguments can be made for claim 12.
Regarding claims 3 and 13, these claims merely recite, "when the actual amount of time the current user took to complete the task differs from the estimated amount of time obtained from the selected TTC model, the training comprises training the machine learning model to select a different TTC model as the selected TTC model for a subsequent user having one or more features in common with the current user.“. This does not integrate the abstract idea into a practical application because it does not impose any meaningful limitation on practicing the abstract idea. Similar arguments can be made for claim 13.
Regarding claims 6 and 17, these claims merely recite, “providing one or more contextual features as an input to the machine learning model, and providing feature data for the current user as an input to the machine learning model.". These limitations amount to no more than gathering/storing data, which is a form of insignificant extra-solution activity (See MPEP 2106.05(g)(3)(iii): OIP Techs., 788 F.3d at 1363). This does not integrate the abstract idea into a practical application because it has been determined, by the courts, that the concept of storing data is well-understood, routine, and conventional activity (See MPEP 2106.05(d)(II): Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334 (Fed. Cir. 2015)). Similar arguments can be made for claim 17.
Regarding claim 7, this claim merely recites, “wherein the one or more contextual features comprise at least one of a current time, a current day, or a current month.”. This does not integrate the abstract idea into a practical application because it does not impose any meaningful limitation on practicing the abstract idea.
Regarding claims 8 and 18, these claims merely add further description to the process of “wherein the machine learning model is configured to select the one TTC model as the selected TTC model based on at least one of the feature data for the current user or the one or more contextual features,” which amounts to no more than gathering/storing data, which is a form of insignificant extra-solution activity (See MPEP 2106.05(g)(3)(iii): OIP Techs., 788 F.3d at 1363). This does not integrate the abstract idea into a practical application because it has been determined, by the courts, that the concept of storing data is well-understood, routine, and conventional activity (See MPEP 2106.05(d)(II): Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334 (Fed. Cir. 2015)). Similar arguments can be made for claim 18.
Regarding claims 9 and 19, these claims merely add further description to the process of “wherein the machine learning model is configured to assign a relevancy score to each of the TTC models based, at least in part, on the one or more contextual features; and select the one TTC model of the plurality of TTC models as the selected TTC model based, at least in part, on the relevancy score assigned to each of the TTC models, wherein the relevancy score for the one TTC model is higher than the relevancy score for every other TTC model of the plurality of TTC models.". This does not integrate the abstract idea into a practical application because it does not impose any meaningful limitation on practicing the abstract idea. Similar arguments can be made for claim 19.
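The relevancy-score selection recited in claims 9 and 19 can be sketched as a simple argmax over scored models. This is illustrative only; `score_fn` is a hypothetical scoring function supplied by the caller, not an element of the claims:

```python
def select_ttc_model(ttc_models, contextual_features, score_fn):
    """Sketch of claims 9 and 19: assign a relevancy score to each TTC
    model based on the contextual features, then select the model whose
    relevancy score is higher than every other model's."""
    scores = {name: score_fn(name, contextual_features) for name in ttc_models}
    selected = max(scores, key=scores.get)  # argmax over relevancy scores
    return selected, scores
```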
Regarding claims 10 and 16, these claims merely add further description to the process of “wherein the plurality of TTC models comprise: a first TTC model trained at a first quantile level to estimate an average time for completing the task; a second TTC model trained at a second quantile level to estimate an above-average time for completing the task; and a third TTC model trained at a third quantile level to estimate a below-average time for completing the task,” which amounts to no more than gathering/storing data, which is a form of insignificant extra-solution activity (See MPEP 2106.05(g)(3)(iii): OIP Techs., 788 F.3d at 1363). This does not integrate the abstract idea into a practical application because it has been determined, by the courts, that the concept of storing data is well-understood, routine, and conventional activity (See MPEP 2106.05(d)(II): Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334 (Fed. Cir. 2015)). Similar arguments can be made for claim 16.
As a result, such limitations do not overcome the requirements as described above. Therefore, claims 2 – 3, 6 – 10, 12, 13 and 16 - 19 are directed to an abstract idea. Thus, claims 1 – 3, 6 - 13 and 16 – 20 are not patent eligible.
Claim Rejections – 35 USC §103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 – 3, 6 - 13 and 16 - 20 are rejected under 35 U.S.C. 103 as being obvious over Amir H. Rezaeian et al. (Pub. No. 2020/0125586 A1 – hereinafter, Rezaeian) in view of Dwipam Katariya et al. (Pub. No. 2024/0061845 A1 – hereinafter, Katariya) and further in view of Rachael M Dickens et al. (Pub. No. 2020/0380406 A1 – hereinafter, Dickens).
Re: Claim 1, Rezaeian discloses a method for automatically recommending personalized estimates of amounts of time needed to complete a task, the method comprising:
providing a plurality of time-to-complete (TTC) models trained at different quantile levels to a machine learning model configured to select one TTC model of the plurality of TTC models as a selected TTC model to generate an estimated amount of time needed for a current user to complete the task (Rezaeian, [0102] - At block 850, the learner system may identify a plurality of selection models. Each selection model of the plurality of selection models may be associated with a protocol for selecting one or more tasks from the set of tasks. For example, the protocol may be or may include the execution of a multi-armed bandit algorithm or model that, when executed, automatically selects an incomplete subset of tasks from the set of available tasks. An accuracy or performance value may be determined for each selection model of the plurality of selection models. For example, an accuracy may be determined on whether or not a user selects a task from the incomplete subset of tasks that are presented on the intelligent UI. If a task from the incomplete subset of tasks is selected by a user during a time period (e.g., during a session in which the user is navigating the intelligent UI, over an hour, over a day, over a week, over a month, etc.), then a feedback signal may be transmitted back to the learner system to indicate that the multi-armed bandit model accurately predicted that the user would select that task. If a task from the incomplete subset of tasks is not selected over the time period (the same or a different time period), then the multi-armed bandit model that selected that task did not accurately predict that the user would complete that task.);
obtaining the estimated amount of time from the selected TTC model (Rezaeian, [0062] - As another example, a condition may define an event that corresponds to a degree to which a quantity of log messages being assigned to a given threshold is changing, such as by identifying a threshold for a slope of a time series or a threshold for a difference in counts or percentages or log message assigned to the cluster between two time bins. As yet another example, a condition may define an event that corresponds to multiple cluster assignments, such as an event that indicates that a time series of each of the multiple clusters has a similar shape (e.g., by determining whether curve-fit coefficients are similar enough to be within a threshold amount, by determining whether a time of one or more peaks in time series are within a defined threshold time, determining whether a correlation coefficient between time series of the clusters exceeds a threshold, and/or determining whether a difference between a variability of a time series of each of the individual clusters and a variability of a sum of the time series exceeds a threshold value).);
obtaining feedback data regarding the estimated amount of time obtained from the selected TTC model, wherein the feedback data indicates whether the current user completed the task (Rezaeian, [0142] - At operation 1046, the customer's subscription order may be managed and tracked by an order management and monitoring module 1026. In some instances, order management and monitoring module 1026 may be configured to collect usage statistics for the services in the subscription order, such as the amount of storage used, the amount data transferred, the number of users, and the amount of system up time and system down time.); (Rezaeian, [0101] – At block 840, a full set of tasks available to be performed may be stored at the performable tasks data store. The full set of performable tasks may correspond to a specific application. The performable tasks data store may store executable code that, when executed, causes a task to be performed. Further, the performable task data store may store multiple full sets of tasks for various applications, such that a single set of tasks corresponds to a single application. Each task of the set of tasks may include one or more actions performable using an application of the one or more applications. As a non-limiting example, a task for completing a form may include four separate actions, including retrieving the form, displaying the form, automatically completing at least a portion of the form, and saving the completed form.).
However, Rezaeian does not expressly disclose:
training the machine learning model based, at least in part, on the feedback data.
In a similar field of endeavor, Katariya discloses:
training the machine learning model based, at least in part, on the feedback data (Katariya, [0032], [0036] - As shown by reference number 205, a machine learning model may be trained using a set of observations. The set of observations may be obtained from training data (e.g., historical data), such as data gathered during one or more processes described herein. In some implementations, the machine learning system may receive the set of observations (e.g., as input) from the user device, the interaction party device, the user profile database, and/or the interaction party database, as described elsewhere herein.).
Therefore, in light of the teachings of Katariya, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the method of Rezaeian to incorporate training the machine learning model based, at least in part, on the feedback data, as taught by Katariya. The motivation corresponds to the KSR exemplary rationale in which a known technique is used to improve similar methods and systems in the same way: after training, the machine learning system may store the machine learning model as a trained machine learning model 225 to be used to analyze new observations.
However, Rezaeian in view of Katariya does not expressly disclose:
training the machine learning model based, at least in part, on the feedback data, wherein the training comprises:
in response to the feedback data indicating the current user completed the task, training the machine learning model comprises incrementing a confidence variable associated with the machine learning model.
In a similar field of endeavor, Dickens discloses:
training the machine learning model based, at least in part, on the feedback data, wherein the training comprises:
in response to the feedback data indicating the current user completed the task, training the machine learning model comprises incrementing a confidence variable associated with the machine learning model (Dickens, [0020], [0021] - According to an embodiment, for real-time data being collected and accessed, such as user profile data, user preference data, user biometric data or user feedback data being transmitted to and received by computing devices, a dynamic feedback program may receive consent from the user, via an opt-in feature or an opt-out feature, prior to commencing the collecting of data or the monitoring and analyzing of the collected data. For example, in some embodiments, the dynamic feedback program may notify the user when the collection of data begins via a graphical user interface (GUI) or a screen on a computing device. The user may be provided with a prompt or a notification to acknowledge an opt-in feature or an opt-out feature.).
Therefore, in light of the teachings of Dickens, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify the method of Rezaeian in view of Katariya to incorporate the claimed training, as taught by Dickens. The motivation corresponds to the KSR exemplary rationale in which a known technique is used to improve similar methods and systems in the same way: receiving and storing user feedback for the machine learning model for a current interval may include converting the machine learning model performance to an increase in a performance speed.
Re: Claim 2, Rezaeian in view of Katariya discloses the method of Claim 1,
wherein the feedback data comprises an actual amount of time the current user took to complete the task (Katariya, [0032] - As shown by reference number 130, the recommendation system may transmit, to the user device of the target user, recommended interaction party data indicating one or more of the recommended interaction parties to be presented on a display of the user device. In some implementations, for each recommended interaction party, the recommended interaction party data may include an option for the user to provide feedback regarding the particular recommended interaction party. For example, the feedback may be whether the user is interested or not. Alternatively, the feedback may be able to be provided after the user has performed an interaction with the particular recommended interaction party, and may indicate whether or not the user had a positive experience with the particular recommended interaction party. In scenarios in which a machine learning model was used to determine the recommended interaction parties, the feedback may be used to re-train the machine learning model, as described in more detail below in connection with FIG. 2.). For the rationale supporting motivation, obviousness, and the reason to combine, see claim 1 above.
Re: Claim 3, Rezaeian in view of Katariya discloses the method of Claim 2, wherein:
when the actual amount of time the current user took to complete the task differs from the estimated amount of time obtained from the selected TTC model, the training comprises training the machine learning model to select a different TTC model as the selected TTC model for a subsequent user having one or more features in common with the current user (Katariya, [0040] - The target variable may represent a value that a machine learning model is being trained to predict, and the feature set may represent the variables that are input to a trained machine learning model to predict a value for the target variable. The set of observations may include target variable values so that the machine learning model can be trained to recognize patterns in the feature set that lead to a target variable value. A machine learning model that is trained to predict a target variable value may be referred to as a supervised learning model.). For the rationale supporting motivation, obviousness, and the reason to combine, see claim 1 above.
Re: Claim 6, Rezaeian discloses the method of Claim 1, further comprising:
providing one or more contextual features as an input to the machine learning model (Rezaeian, [0013] - Various implementations may include one or more of the following features. The computer-implemented method further including: receiving input at the interface, the input corresponding to a selection of a presented task of the presented subset of tasks; and in response to receiving the input, accessing the application associated with the selected task (e.g., the application can be accessed using an exposed interface, such as an API), the application associated with the selected task being configured to perform the selected task. The computer-implemented method may also include identifying, using the application, the one or more actions included in the selected task. The computer-implemented method may also include automatically performing the one or more actions by triggering the application to perform the one or more actions in response to the received input. The computer-implemented method further including: in response to receiving the input, generating a positive feedback signal indicating that the selected task was selected. The computer-implemented method may also include updating the contextual model based on the positive feedback signal, where the positive feedback signal causes the contextual model to bias predicted tasks towards the selected task.); and
providing feature data for the current user as an input to the machine learning model (Rezaeian, [0008] - In some implementations, each instance a user accesses the intelligent UI, a first portion of the interface may present one or more links. For example, each link may correspond to a suggested task that is predicted for the user (e.g., based on a result of inputting the user profile of the user or other contextual information of the user into the trained machine-learning model). In some implementations, the link may be selectable, such that when the link is selected, the one or more actions associated with the suggested task are automatically performed by the appropriate application. For example, when the link is selected, an exposed interface (e.g., an Application Programming Interface) can be used to access the application that corresponds to the suggested task in order to complete the associated action(s). Additionally, a signal representing that a link was selected may be transmitted as feedback to the trained machine-learning model, so that the model may be updated (e.g., may bias prediction towards the selected task).).
Re: Claim 7, Rezaeian discloses the method of Claim 6,
wherein the one or more contextual features comprise at least one of a current time, a current day, or a current month (Rezaeian, [0072] - When a user has not logged into the homepage before, the user vector that is generated for that user may not contain much data (e.g., at least a few characteristics, such as the user's position and salary may be contained in the user vector). The contextual model can be used to identify which tasks were previously suggested, for example, for similar users, for users located at the same office, for users in the same department or organization, a default list of tasks, and the like. Advantageously, the embodiments described herein provide a multi-user learning capability, in that the contextual model learns how users perform tasks, and a particular user's user vector can be compared against those users and/or the particular user's previously completed tasks to predict which tasks that the particular user is likely to perform at a given time, day, week, month, quarter, or year.).
Re: Claim 8, Rezaeian discloses the method of Claim 6,
wherein the machine learning model is configured to select the one TTC model as the selected TTC model based on at least one of the feature data for the current user or the one or more contextual features (Rezaeian, [0006] - The machine-learning model may represent a model of some or all of the users of an entity and their interactions with various applications (e.g., which tasks those users have previously completed). When a particular user accesses the intelligent UI, a learner system may generate a user vector for that particular user. In some implementations, the user vector may be a vector representation of various information about the user. For example, the user vector may include the user's access level, current location, whether the user is working remotely, previous interactions with applications, previous tasks completed using the applications, frequency of completing certain tasks, and other suitable information. The user vector may be fed into the machine-learning model to predict which tasks the user will need to complete (e.g., on a given day). The machine-learning model can output a prediction of one or more tasks that the user will likely need to complete. The machine-learning model prediction is based, at least in part, on the user vector, the set of suggestable tasks, and/or the tasks completed by users with similar attributes (e.g., users in the same location).).
Re: Claim 9, Rezaeian discloses the method of Claim 7,
wherein the machine learning model is configured to assign a relevancy score to each of the TTC models based, at least in part, on the one or more contextual features (Rezaeian, [0102] - For example, the protocol may be or may include the execution of a multi-armed bandit algorithm or model that, when executed, automatically selects an incomplete subset of tasks from the set of available tasks. An accuracy or performance value may be determined for each selection model of the plurality of selection models. For example, an accuracy may be determined on whether or not a user selects a task from the incomplete subset of tasks that are presented on the intelligent UI. If a task from the incomplete subset of tasks is selected by a user during a time period (e.g., during a session in which the user is navigating the intelligent UI, over an hour, over a day, over a week, over a month, etc.), then a feedback signal may be transmitted back to the learner system to indicate that the multi-armed bandit model accurately predicted that the user would select that task. If a task from the incomplete subset of tasks is not selected over the time period (the same or a different time period), then the multi-armed bandit model that selected that task did not accurately predict that the user would complete that task. Thus, the model's performance or accuracy may be reduced. The accuracy for each selection model may be cumulatively calculated or updated for each iteration of prediction. For instance, an iteration of prediction may be a single session in which the user browses the intelligent UI. As a non-limiting example, the accuracy may be calculated by calculating a cumulative average score for each iteration of prediction. If any tasks of an incomplete subset of tasks are selected, then, for example, a "1" may be added to the cumulative score for each task selected. A "0" or a negative value (e.g., -1) may be added to the cumulative score for each suggested task that was not selected during the time period.); and
select the one TTC model of the plurality of TTC models as the selected TTC model based, at least in part, on the relevancy score assigned to each of the TTC models, wherein the relevancy score for the one TTC model is higher than the relevancy score for every other TTC model of the plurality of TTC models (Rezaeian, [0102] - The accuracy for each selection model may be cumulatively calculated or updated for each iteration of prediction. For instance, an iteration of prediction may be a single session in which the user browses the intelligent UI. As a non-limiting example, the accuracy may be calculated by calculating a cumulative average score for each iteration of prediction. If any tasks of an incomplete subset of tasks are selected, then, for example, a "1" may be added to the cumulative score for each task selected. A "0" or a negative value (e.g., -1) may be added to the cumulative score for each suggested task that was not selected during the time period. Advantageously, if the accuracy of the selected model decreases, then, when the next most accurate model becomes the most accurate model, the next most accurate model would be selected to be executed during the next iteration of prediction to select an incomplete subset of tasks using the techniques included in that model (e.g., Thompson sampling may be one model and upper confidence bound selection may be the other model). Thus, the most accurate model would always be executed, even as the accuracy of the selected model declines for any reason.).
Re: Claim 10, Rezaeian in view of Katariya discloses the method of Claim 1, wherein the plurality of TTC models comprise:
a first TTC model trained at a first quantile level to estimate an average time for completing the task (Katariya, [0021] - Additionally, the recommendation system may determine the recommended interaction parties based on one or more factors and/or one or more conditions. For example, one factor may include a current time of day, and an associated condition may be that a timestamp associated with a particular historical interaction of the user is within a time threshold (e.g., 30 minutes, 1 hour, or 2 hours) of the current time of day. As an example, if a current time of day is 12:00 PM, then for a time threshold of 1 hour, then the recommendation system may identify interaction parties with which other users, such as users similar to the user (e.g., users in a same user cluster, as described in more detail below), have historical interactions that occurred between 11:00 AM and 1:00 PM. Additionally, or alternatively, the recommendation system may identify interaction parties similar to interaction parties with which the user has historical interactions (e.g., interaction parties in a same interaction party cluster) that occurred between 11:00 AM and 1:00 PM.);
a second TTC model trained at a second quantile level to estimate an above-average time for completing the task (Katariya, [0029], [0030] - As another example of a commonality, a commonality may exist if average interaction amounts associated with the recommended interaction parties and one or more of the historical interaction parties are within a threshold of each other. The threshold may be a percentage threshold (e.g., the average interaction amounts are within a percentage threshold (e.g., 1%, 5%, or 10%) of each other). Alternatively, the threshold may be an amount threshold (e.g., the average interaction amounts are within an amount threshold (e.g., $5, $10, $25, or $50) of each other). The historical interaction data may indicate interaction amounts of the historical interactions and from which the recommendation system may determine the average interaction amounts. Additionally, or alternatively, the average interaction amounts may be obtained from the interaction party database, in which data indicating the average interaction amounts is stored and associated with interaction party identifiers corresponding to the interaction parties (e.g., the recommended interaction parties and/or the historical interaction parties).); and
a third TTC model trained at a third quantile level to estimate a below-average time for completing the task (Katariya, [0024] - Additionally, or alternatively, the recommendation system may determine recommended interaction parties based on information associated with the user (e.g., credit history, socioeconomic status). The recommendation system may obtain the information from the user account stored in the user profile database. Additionally, or alternatively, the recommendation system may obtain the information from one or more third-party databases (e.g., a credit history database, a social security database, and/or other federal databases). For example, if the user is classified with a particular socioeconomic status, then the recommendation system may identify and recommend interaction parties that have interactions that match the particular socioeconomic status. As another example, if the user has a high credit score (e.g., above 700), then the recommendation system may identify and recommend interaction parties that have interactions associated with high interaction amounts (e.g., above $1,000) and/or that accept credit (e.g., credit cards) as a form of completing the interaction. If the user has a low credit score (e.g., below 500), then the recommendation system may identify and recommend interaction parties with low interaction amounts (e.g., less than $50) and/or that only accept cash as a form of completing the interaction.). For the rationale supporting motivation, obviousness, and the reason to combine, see claim 1 above.
Re: Claim 11, Claim 11 is a system claim corresponding to method claim 1. Therefore, claim 11 is analyzed and rejected as previously discussed with respect to claim 1.
Re: Claim 12, Claim 12 is a system claim corresponding to method claim 2. Therefore, claim 12 is analyzed and rejected as previously discussed with respect to claim 2.
Re: Claim 13, Claim 13 is a system claim corresponding to method claim 3. Therefore, claim 13 is analyzed and rejected as previously discussed with respect to claim 3.
Re: Claim 16, Claim 16 is a system claim corresponding to method claim 10. Therefore, claim 16 is analyzed and rejected as previously discussed with respect to claim 10.
Re: Claim 17, Claim 17 is a system claim corresponding to method claim 6. Therefore, claim 17 is analyzed and rejected as previously discussed with respect to claim 6.
Re: Claim 18, Claim 18 is a system claim corresponding to method claim 8. Therefore, claim 18 is analyzed and rejected as previously discussed with respect to claim 8.
Re: Claim 19, Claim 19 is a system claim corresponding to method claim 9. Therefore, claim 19 is analyzed and rejected as previously discussed with respect to claim 9.
Re: Claim 20, Claim 20 is an apparatus claim corresponding to method claims 1 and 10. Therefore, claim 20 is analyzed and rejected as previously discussed with respect to claims 1 and 10.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office Action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN H. HOLLY whose telephone number is (571)270-3461. The examiner can normally be reached on MON. - FRI 10 AM - 8 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MATTHEW S. GART can be reached on 571-272-3955. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/John H. Holly/Primary Examiner, Art Unit 3696