DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This non-final action is responsive to the application filed on 2/28/23.
Claims 1-20 are pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
The claimed invention (claims 1-20) is directed to an abstract idea without significantly more. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
Exemplary Claim 1 is ineligible
Under the broadest reasonable interpretation, the terms of the claim are presumed to have their plain meaning consistent with the specification as it would be interpreted by one of ordinary skill in the art. See MPEP 2111.
Based on the plain meaning of the words in the claim, the broadest reasonable interpretation of claim 1 involves a system using the mental processes of observation, evaluation, choice, and judgment to generate information (an evaluation). See MPEP 2106.03.
Step 2A, Prong One: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04, subsection II, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. As discussed above, the broadest reasonable interpretation of the limitations is that those steps fall within the mental process groupings of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III.
Specifically, “process each of the one or more candidate insights data objects using at least the biasing weights applied to the trained empathy based representation of the user to generate a real-time prediction score for each of the one or more candidate insight data objects” is nothing more than mental operations that examine and evaluate data, using observation, evaluation, judgment, and/or choice to arrive at a prediction score for each of the candidate insight data objects. Further, determination of the “insight data object having a highest score” is a mental process of evaluating and judging the respective “object” relative to a “highest score.”
Step 2A, Prong Two: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception or whether the claim is “directed to” the judicial exception. This evaluation is performed by (1) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (2) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. See MPEP 2106.04(d).
The recited “first set of machine learning models” and “second machine learning model” are nothing more than a generic attempt to apply the abstract idea to a generic computer and/or technological field (machine learning) and amount to no more than a result and/or outcome of the machine learning, with nothing recited in the claim about the actual performance of the models. Further, the recited computer (e.g., the “processor”) provides nothing more than mere instructions to implement an abstract idea on a generic computer.
See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
The recited “generate a tuning matrix…to be applied as biasing weights to the trained empathy based representation of the user” is insignificant extra-solution activity of data gathering such as obtaining input for an equation (In re Grams), obtaining information about network transactions (CyberSource), or updating an activity log (Ultramercial). That is, nothing is specifically recited with respect to the “trained empathy based representation of the user” that is anything other than, e.g., testing about how potential customers respond to offers and using statistics to calculate or update an optimized price model (OIP Technologies).
Further, the recited “maintain…each tracking an empathy-based aspect of a user, a trained empathy based representation of the user…tracking a different empathy-based aspect of the user” is insignificant extra-solution activity of a particular data source or data type to be manipulated, such as limiting a database index (Intellectual Ventures), selecting particular data (Ameranth), or selecting information based on types of information and availability of information (Electric Power Group).
Further, the “maintain…a trained circumstance based representation of the user” is insignificant extra-solution activity of a data source or type of data to be manipulated, such as selecting information based on types and availability of information (Electric Power Group) and, even further, is similar to determining a style for performing an action (In re Brown).
The “receive one or more candidate insight data objects representing potential computer-based interactive notifications” is insignificant extra-solution activity of mere data gathering, such as obtaining information over the Internet (CyberSource).
Further, the recited “transmit the candidate insight data object…to a user interface associated with the user” is insignificant extra-solution activity of data gathering (e.g., presenting offers, OIP Technologies; obtaining networked information, CyberSource).
Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application, and the claim is directed to the judicial exception.
Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. See MPEP 2106.05. As explained with respect to Step 2A, Prong Two, there are additional elements which were shown to be merely an attempt to apply the abstract ideas to a generic computer and/or technological field, insignificant extra-solution activity, or a result/outcome of the respectively recited machine learning models, none of which can provide an inventive concept. See MPEP 2106.05(f).
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and/or insignificant extra-solution activity, which do not provide an inventive concept.
The other independent claims (e.g., 11 and 20) are rejected based on a similar rationale.
The dependent claims are rejected based on a similar rationale, as each of them merely recites a further mental process, insignificant extra-solution activity of data gathering, or an attempt to apply the abstract idea using a generic computer and/or technological field and/or as a result/outcome.
Claim 2 recites insignificant extra-solution activity of data gathering (i.e., “receive a data set…”) and a generic, high-level “re-train” of the respective models, which is insignificant extra-solution activity of mere data gathering, such as modeling an equation based on retrieved data (In re Grams).
See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exceptions require such data gathering and/or output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and/or outputting. See MPEP 2106.05.
Claim 3 recites insignificant extra-solution activity of types of data.
Claim 4 recites “separate model weighting,” which is insignificant extra-solution activity of a data source or type for processing, and further recites “interactive control elements,” which are insignificant extra-solution activity of a data source or type of data to be manipulated, such as requesting data from a user (Ultramercial); further, changing the tuning matrix is insignificant extra-solution activity (data gathering) of formulating an equation (In re Grams).
Claim 5 recites tracking environmental features associated with a particular contextual environment of the user, which is insignificant extra-solution activity of data gathering.
Claim 6 recites insignificant extra-solution activity of data source or type (of the “environmental features”).
Claim 7 recites insignificant extra-solution activity of a data source or type for processing (of the “location”).
Claim 8 recites insignificant extra-solution activity of a data source or type for processing (with respect to the “determination of whether the user is currently in transit includes…”).
Claim 9 recites insignificant extra-solution activity of data source or type (of the “data set”).
Claim 10 recites insignificant extra-solution activity of data source or type (of the “data set”).
Claims 11-20 recite similar limitations as above and are similarly rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 11, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Anamandra et al. (US 20230131099, hereinafter “Anamandra”) in view of Olshansky (US 20200311682) and further in view of Knoppert et al. (US 20210240283, hereinafter “Knoppert”).
Regarding claim 1, Anamandra teaches A system for controlling generation of one or more computer-generated insights using empathy-based machine learning features (psychological modeling of user (abstract) performed on system (figs. 1 and 2)), the system comprising:
a processor, operating in conjunction with computer memory and data storage, the processor configured (processor, system [0002]) to:
maintain, in a first set of machine learning models each tracking an empathy-based aspect of a user (e.g., 135a and 135b (fig. 1)), a trained empathy based representation of the user, each of the machine learning models of the first set of machine learning models tracking a different empathy-based aspect of the user (respective user models, e.g., 135a and 135b (fig. 1), each model trained on, e.g., indicators or notifications of employee burnout [0045]; Examiner’s note: Olshansky more specifically teaches empathy);
maintain, in a second machine learning model, a trained circumstance based representation of the user, receive one or more candidate insight data objects representing potential computer- based interactive notifications (a global ML model 115 (fig. 1); receiving parameter spaces of user such as receiving at least a first parameter space [0006]; model 115 trained using personal data [0046] representing user behaviors based on, e.g., detection and leading indicators [0051]);
generate a tuning matrix from the second machine learning model to be applied as biasing weights to the trained empathy based representation of the user (updated global machine learning model parameter space [0007]; updating the trained empathy based representation of the user (e.g., local machine learning model 135a) [0060]; the updating of the parameter space is based on weight propagation [0049]; that is, the parameter space passed includes weights for the model update [0060]);
process each of the one or more candidate insight data objects using at least the biasing weights applied to the trained empathy based representation of the user to generate a real-time prediction score for each of the one or more candidate insight data objects (at the local machine learning model, based on the updated parameter/weight space [0039] process user activity based on a respective selected machine learning model to calculate a prediction corresponding to a calculated output value of a respective user model, such as involving a confidence score for, e.g., various points in time and further variables [0048]; in particular, root causes corresponding with a determined top k quantity of root causes [0048]; further, the local machine learning model 135 performs inference [0049]); and
transmit the candidate insight data object having a highest score to a user interface associated with the user (the determined top k quantity of root causes, which may be aggregated and shared with an employer (i.e., the employer associated with the user) [0048]).
However, while Anamandra discloses trained machine learning models to predict user behavior based on application to a user corresponding to such psychological data as employee burnout (abstract), Anamandra fails to specifically teach machine learning models each tracking an empathy-based aspect of a user.
Yet, in a related art, Olshansky discloses candidate empathy models based on a user role [0119] such that each empathy model is based on analyzed behavioral data [0004].
It would have been obvious to one of ordinary skill in the art prior to the invention’s effective filing date to combine the empathy model of Olshansky with the personality model of Anamandra to have machine learning models each tracking an empathy-based aspect of a user. The combination would allow for, according to the motivation of Olshansky, automating content interaction [0001] by examining behavioral data of the user, particularly with respect to empathy, to better determine representative content for the user [0002]; e.g., the empathy score model is generated by observing user behavior such that the model can be used to determine a user’s empathy and be applied to candidates in a candidate database [0004] to more effectively determine content for a user [0006].
However, while Anamandra discloses updating the trained empathy based representation of the user (e.g., local machine learning model 135a) [0060], Anamandra in view of Olshansky fails to specifically teach a tuning matrix.
Yet, in a related art, Knoppert discloses a machine learning model modeled using weight matrices [0117] and, in particular, training performed based on adjustments according to the weight matrices [0118] for fine-tuning the weight matrices [0119].
It would have been obvious to one of ordinary skill in the art prior to the invention’s effective filing date to combine the tuning matrix of Knoppert with the empathy modeling and processing of Anamandra in view of Olshansky to have a tuning matrix. The combination would allow for, according to the motivation of Knoppert, adjusting according to the weight matrices in order to more accurately reflect model parameters that describe the individual user in a particular mood state [0118]-[0119].
Regarding claim 5, Anamandra in view of Olshansky in view of Knoppert teaches the limitations of claim 1, as above.
Furthermore, Anamandra teaches The system of claim 1, wherein the second machine learning model is configured to track environmental features associated with a particular contextual environment of the user (for updating the global machine learning model [0039], associated with contextual environment of the user such as causes for an employee decline [0052]).
Regarding claim 11, the claim recites similar limitations as claim 1 – see above.
Regarding claim 15, the claim recites similar limitations as claim 5 – see above.
Regarding claim 20, Anamandra teaches A non-transitory computer readable medium storing machine interpretable instruction sets, which when executed by a processor, cause the processor to perform a method for controlling generation of one or more computer-generated insights using empathy-based machine learning features (machine learning model (abstract) on a system [0002], figs. 1 and 2), the method comprising:
The claim recites similar limitations as claim 1 – see above.
Claims 2, 9, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Anamandra in view of Olshansky in view of Knoppert and further in view of Nagarajan et al. (US 11,593,677, hereinafter “Nagarajan”).
Regarding claim 2, Anamandra in view of Olshansky in view of Knoppert teaches the limitations of claim 1, as above.
However, Anamandra in view of Olshansky in view of Knoppert fails to specifically teach The system of claim 1, wherein the processor is further configured to: receive, from the user interface associated with the user, a data set representative of an outcome associated with presentation of the candidate insight data object to the user; and
re-train the first set of machine learning models and the second machine learning model using the data set representative of the outcome associated with presentation of the candidate insight data object to the user.
Yet, in a related art, Nagarajan discloses performing retraining of the first and second machine learning models based on a user selection (fig. 6).
It would have been obvious to one of ordinary skill in the art prior to the invention’s effective filing date to combine the retraining of the first and second models, and the receiving of data associated with the presentation of the candidate object from the user, of Nagarajan with the empathy based modeling and execution of Anamandra in view of Olshansky in view of Knoppert to have receive, from the user interface associated with the user, a data set representative of an outcome associated with presentation of the candidate insight data object to the user; and re-train the first set of machine learning models and the second machine learning model using the data set representative of the outcome associated with presentation of the candidate insight data object to the user. The combination would allow for, according to the motivation of Nagarajan, retraining the machine learning models based on a user selection, particularly involving a user interaction with a user interface of a device, for optimizing assets or objects that a user can interact with (fig. 6 and corresponding specification discussion), thus improving the generation of content for the user based not only on a better prediction of an aspect of a user using a categorization machine learning model and a user activity profile, but also on learning based on the user activity associated with the user (i.e., retraining) (cols. 1 and 2).
Regarding claim 9, Anamandra in view of Olshansky in view of Knoppert in view of Nagarajan teaches the limitations of claims 1 and 2, as above.
Furthermore, Nagarajan teaches The system of claim 2, wherein the data set representative of the outcome includes interactions on the user interface associated with the notification (model training corresponding with user historical data such as activity profile for prediction (figs. 3 and 4)).
Regarding claim 12, the claim recites similar limitations as claim 2 – see above.
Regarding claim 19, the claim recites similar limitations as claim 9 – see above.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Anamandra in view of Olshansky in view of Knoppert and further in view of Chandrasekara (US 20190266999, hereinafter “Chandrasekara”).
Regarding claim 3, Anamandra in view of Olshansky in view of Knoppert teaches the limitations of claim 1, as above.
However, Anamandra in view of Olshansky in view of Knoppert fails to specifically teach The system of claim 1, wherein the empathy-based aspects include at least one of curiosity, preconceptions, inspirations, direct experience, listening, or imagination.
Yet, in a related art, Chandrasekara discloses user direct experiences [0004] and further a user’s emotions such as somber, happy, good/bad, past interactions, frustrated, relaxed, hurried, polite, matter of fact, heart rate, perspiration, etc. [0034]-[0043], corresponding with data to be used with machine learning algorithms to learn a mapping between a situation and the most appropriate emotional reaction for the user [0005].
It would have been obvious to one of ordinary skill in the art prior to the invention’s effective filing date to combine any of the curiosity, preconceptions, inspirations, direct experience, listening, or imagination emotional characteristics of Chandrasekara with the user emotional experience, particularly with respect to empathy, of Anamandra in view of Olshansky in view of Knoppert to have at least one of curiosity, preconceptions, inspirations, direct experience, listening, or imagination. The combination would allow for, according to the motivation of Chandrasekara, data to train the machine learning algorithm to learn a mapping associating the user with an appropriate emotional response [0005], particularly with respect to the various emotional states of the user [0006], thus better providing a response as a notification to the user, such as by appropriately adapting the response to the user and, in particular, to the determined emotional state of the user by modifying data corresponding with notifications in the response to the user [0008].
Regarding claim 13, the claim recites similar limitations as claim 3 – see above.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Anamandra in view of Olshansky in view of Knoppert in view of Chandrasekara and further in view of Hu et al. (US 20220034668, hereinafter “Hu”).
Regarding claim 4, Anamandra in view of Olshansky in view of Knoppert in view of Chandrasekara teaches the limitations of claims 1 and 3, as above.
However, Anamandra in view of Olshansky in view of Knoppert in view of Chandrasekara fails to specifically teach The system of claim 3, wherein each of the different machine learning models of the first set of machine learning models are associated with a separate model weighting, and wherein the user interface includes interactive control elements which are configured to receive user inputs modifying the model weightings such that the first tuning matrix can be changed based on different weights applied to the different machine learning models of the first set of machine learning models.
Yet, in a related art, Hu discloses adjusting the weights of the first and second machine learning models based on a user selection [0021].
It would have been obvious to one of ordinary skill in the art prior to the invention’s effective filing date to combine the weight adjustment based on user selection of Hu with the machine learning modeling of Anamandra in view of Olshansky in view of Knoppert in view of Chandrasekara to have wherein each of the different machine learning models of the first set of machine learning models are associated with a separate model weighting, and wherein the user interface includes interactive control elements which are configured to receive user inputs modifying the model weightings such that the first tuning matrix can be changed based on different weights applied to the different machine learning models of the first set of machine learning models. The combination would allow for, according to the motivation of Hu, a sort of training that can be performed during user operation [0022], such that the user can control the adjustment of the weights of the machine learning models, particularly since the training differs for different users and contexts [0023].
Regarding claim 14, the claim recites similar limitations as claim 4 – see above.
Claims 6, 7, 8, 16, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Anamandra in view of Olshansky in view of Knoppert and further in view of Subramanian (US 20230356556).
Regarding claim 6, Anamandra in view of Olshansky in view of Knoppert teaches the limitations of claims 1 and 5, as above.
Furthermore, Anamandra teaches The system of claim 5, wherein the environmental features include at least time, weather, and location of the user (e.g., productivity data such as quantity of productive time, delays in meeting deadlines, etc. and even further situational data such as social situation, family situation, sleep patterns, etc. [0042]).
However, Anamandra in view of Olshansky in view of Knoppert fails to specifically teach weather.
Yet, in a related art, Subramanian discloses road conditions [0015].
It would have been obvious to one of ordinary skill in the art prior to the invention’s effective filing date to combine the weather of Subramanian with the conditional modeling of Anamandra in view of Olshansky in view of Knoppert to have weather. The combination would allow for, according to the motivation of Subramanian, analyzing a vehicle’s context such as road conditions and other factors that impact the performance of the vehicle, thus optimizing the driver’s experience for the current driving conditions [0002]; further, for vehicular analysis while the vehicle is driving down the road [0015].
Regarding claim 7, Anamandra in view of Olshansky in view of Knoppert in view of Subramanian teaches the limitations of claims 1, 5 and 6, as above.
Furthermore, Subramanian teaches The system of claim 6, wherein the location of the user further includes a determination of whether the user is currently in transit (determination based on vehicle information such as vehicle velocity, inertial movement, etc. [0004] corresponding with for collecting data while the vehicle is driving down the road [0015]).
Regarding claim 8, Anamandra in view of Olshansky in view of Knoppert in view of Subramanian teaches the limitations of claims 1 and 5-7, as above.
Furthermore, Subramanian teaches The system of claim 7, wherein the determination of whether the user is currently in transit includes obtaining additional features associated with a vehicle in which the user is currently in transit (in addition to velocity, determining inertial movement and road conditions [0004]).
Regarding claim 16, the claim recites similar limitations as claim 6 – see above.
Regarding claim 17, the claim recites similar limitations as claim 7 – see above.
Regarding claim 18, the claim recites similar limitations as claim 8 – see above.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Anamandra in view of Olshansky in view of Knoppert and further in view of Karp et al. (US 20190379615, hereinafter “Karp”).
Regarding claim 10, Anamandra in view of Olshansky in view of Knoppert teaches the limitations of claim 1, as above.
However, Anamandra in view of Olshansky in view of Knoppert fails to specifically teach The system of claim 1, wherein the data set representative of the outcome includes payment interactions associated with the notification.
Yet, in a related art, Karp discloses the prediction system used with notifications sent to the user in anticipation of a user action such as payment [0050]; see also payment [0035].
It would have been obvious to one of ordinary skill in the art prior to the invention’s effective filing date to combine the outcome includes payment interactions (associated with the notification) of Karp with the notification modeling of Anamandra in view of Olshansky in view of Knoppert to have wherein the data set representative of the outcome includes payment interactions associated with the notification. The combination would allow for, according to the motivation of Karp, training a prediction model which may be used to anticipate actions of the user based on known user activity [0035] thus enhancing user interaction by providing customers routine actions in an expedited, extended fashion such as by way of payment interactions [0002].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON EDWARDS whose telephone number is (571) 272-5334. The examiner can normally be reached Mon-Fri, 8am-5pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached on 571-272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/JASON T EDWARDS/ Examiner, Art Unit 2145